
Feedback about Teaching in Higher Ed: Neglected Opportunities to Promote Change

    Published Online: https://doi.org/10.1187/cbe.13-12-0235

    Abstract

    Despite ongoing dissemination of evidence-based teaching strategies, science teaching at the university level remains largely unreformed. Most college biology instructors could benefit from more sustained support in implementing these strategies. One-time workshops raise awareness of evidence-based practices, but faculty members are more likely to make significant changes in their teaching when supported by coaching and feedback. Currently, most instructional feedback comes either from student evaluations, which typically lack specific suggestions for improvement and focus on teacher-centered practices, or from drop-in classroom observations and peer evaluations by other instructors, which raise concerns related to promotion, tenure, and evaluation. The goals of this essay are to summarize best practices for providing instructional feedback, recommend specific feedback strategies, and suggest areas for further research. Missed opportunities for feedback in teaching are highlighted, and the sharing of instructional expertise is encouraged.

    INTRODUCTION

    Despite ongoing dissemination of evidence-based teaching practices and their documented positive effects on student learning (Ebert-May et al., 1997; Crouch and Mazur, 2001; Udovic et al., 2002; Knight and Wood, 2005; Freeman et al., 2007; Derting and Ebert-May, 2010), university science faculty members have been slow to adopt these practices. In a national survey of new physics faculty members, 25% reported having attended teaching workshops (Henderson, 2008); 87% of these attendees reported knowledge of one or more evidence-based strategies, yet only 50% of attendees reported adopting such practices (Henderson and Dancy, 2009), which works out to roughly one in eight of all new faculty surveyed. These faculty members identified several impediments to adoption, including inadequate training, misunderstanding of evidence-based teaching practices, and lack of support for implementation (Dancy and Henderson, 2010). Two separate studies have documented misunderstandings about what evidence-based teaching involves. Ebert-May and colleagues (2011) identified a significant discrepancy between the degree to which faculty members reported using active learning and the levels of active learning observable in video recordings of their classrooms. A multi-institution investigation of introductory biology courses also revealed that self-reported use of active-learning instruction was not associated with student learning gains (Andrews et al., 2011). Collectively, this work suggests that one-time workshops raise awareness of evidence-based teaching strategies but are not sufficient for faculty to adopt and successfully use these strategies (National Research Council [NRC], 2012).

    We propose that learning to teach, like developing other professional skills, requires not only acquiring knowledge about how to perform job-related tasks but also feedback and mentoring to monitor and improve performance (Hattie and Timperley, 2007; Nielsen, 2011; Finkelstein and Fishbach, 2012). However, college teaching is one of the few vocations that requires neither formal training (Golde and Dore, 2001; Tanner and Allen, 2006; Addy and Blanchard, 2010) nor standard processes for evaluation and supervision (Centra, 1993; Weimer and Lenze, 1994; Johnson and Ryan, 2000). We know that effective dissemination of evidence-based teaching practices requires more intensive training than a one-time workshop can offer (Sunal et al., 2001; Dancy and Henderson, 2010; Singer et al., 2012). Further, when faculty members are given feedback that both motivates and enables them to improve, they are more likely to make significant changes in their teaching practices (Sunal et al., 2001; Henderson et al., 2011). We argue that providing faculty with formative teaching feedback may be the single most underappreciated factor in enhancing science education reform efforts.

    In this essay, we argue that models of peer feedback or coaching, rather than peer observation and review, could encourage the adoption and effective use of evidence-based teaching strategies in science (American Association for the Advancement of Science [AAAS], 2011). We begin by considering the purpose of instructional feedback. We then provide a broad review of best practices for giving feedback and describe feedback approaches used by several national faculty development programs that feedback recipients might borrow. Finally, we highlight opportunities for research on feedback and pose questions about how providing feedback can affect teaching in higher education, with the aim of encouraging the development of specific feedback strategies. We write for a diverse audience: individuals who are experienced mentors or consultants involved in faculty development, individuals who consider themselves "change agents" leading faculty toward the Vision and Change goals, and faculty members who seek more, or higher-quality, instructional feedback. For faculty seeking feedback, we offer strategies to help identify and solicit the instructional feedback they need.

    THE NEED FOR FEEDBACK ABOUT TEACHING

    Institutions are beginning to recognize the need to offer more substantive and formative instructional feedback to faculty (Seldin, 1999; Bernstein, 2008; Huston and Weaver, 2008; Ismail et al., 2012), although few agree on how to provide it (Johnson and Ryan, 2000). Safavi and colleagues (2013) report that 96% of faculty surveyed (n = 237) desire more meaningful instructional feedback. Currently, faculty members receive the majority of their teaching feedback through student evaluations (Keig, 2000; Loeher, 2006), with the occasional peer-teaching observation (Seldin, 1999). There are considerable limitations to both feedback mechanisms.

    Student evaluations typically focus on gathering data about student perceptions of teacher-centered behaviors, such as instructor enthusiasm, clarity of the instructor's explanations, rapport, and breadth of coverage, and provide only limited opportunities for students to comment on the use of learner-centered pedagogies (Murray, 1983; Cashin, 1990; Marsh and Roche, 1993). This may partially explain the decline in student evaluation scores often mentioned by faculty members who incorporate active learning into their courses (Walker et al., 2008; Brickman et al., 2009; White et al., 2010). Items on student evaluations typically focus on student satisfaction and didactic teaching rather than measuring learning (d'Apollonia and Abrami, 1997; Aleamoni, 1999; Kolitch and Dean, 1999; Kember et al., 2002). Disciplinary and class-size biases have also been noted as problems in student evaluations: science and mathematics disciplines garner the lowest student evaluation scores (Cashin, 1990; Ramsden, 1991; Aleamoni, 1999); science courses typically have larger enrollments than arts and humanities courses (Cheng, 2011); and student evaluations are lower in larger classes (Aleamoni and Hexner, 1980; McKeachie, 1990; Franklin et al., 1991).

    Faculty members have long expressed reservations about student evaluations, particularly their use in personnel and tenure decisions, and even opposed them outright when they were first introduced (Hills, 1974; Chandler, 1978; Vasta and Sarmiento, 1979; Dowell and Neal, 1982; Menefee, 1983; Zoller, 1992; Goldman, 1993). Because evaluations focus on student satisfaction, faculty members contend that they lower morale and job satisfaction and may even motivate instructors to reduce standards on examinations and assignments in an effort to placate students (Ryan, 1980; Schneider, 2013). Faculty members have also expressed concern over the appropriate role of student evaluations of teaching effectiveness in personnel decisions such as retention, promotion, tenure, and salary increases (Cashin and Downey, 1992).

    Others have repeatedly argued that student evaluations improve teaching effectiveness (Overall and Marsh, 1979; Cohen, 1980; Marsh and Roche, 1993). However, as the sole measure of teaching effectiveness or as an impetus to increase active learning in the college classroom, student evaluations are far from adequate. They provide few concrete ideas for improving instructional effectiveness or learning outcomes (Cohen and McKeachie, 1980; Abrami et al., 1990) or for changing curriculum or course objectives (Neal, 1988; Abrami, 1989). Instructors find it difficult to reconcile contradictory opinions expressed in student evaluations (Ryan, 1980; Callahan, 1992). Consequently, only a small percentage of faculty members report making changes to their courses as a result of student evaluations (Spencer and Flyr, 1992; Kember et al., 2002; Richardson, 2005). And, as we discuss in depth later, faculty may have little incentive to use the data from student evaluations (Kember et al., 2002; Mervis, 2013). Researchers have documented that pairing student evaluations with qualitative student interviews or peer consultations is much more effective at influencing faculty behavior (Cohen, 1980; Wilson, 1986; Tiberius, 1989; Seldin, 1993). However, these practices are not currently implemented at most universities and are difficult to implement at the scale many institutions require.

    Peer-review approaches for evaluating teaching have also been studied and found lacking (Hutchings, 1995; Quinlan and Bernstein, 1996; Huston and Weaver, 2008). One-time classroom observations conducted by peer faculty typically focus on content accuracy while offering little input about curricular alignment or objectives (Malik, 1996), and they often lack collaboration and support from colleagues (Bernstein, 2008). One-time classroom observations also suffer from additional problems, including, but not limited to, faculty members' lack of expertise in providing instructional feedback (Kremer, 1990), observer bias toward similar teaching styles (Centra, 2000), reliability issues and conflicts of interest that make observers reluctant to give a peer negative feedback (Marsh, 1984; Feldman, 1988), and power dynamics that require delicate maneuvering (Keig and Waggoner, 1994). Moreover, one-time observations have been shown to have virtually no impact on faculty teaching, aside from influencing textbook selection (Spencer and Flyr, 1992), and may even lead to erroneous inferences (Weimer, 2002). Faculty members are also resistant to the use of summative peer evaluation, which they feel contributes little to tenure and promotion decisions (Iqbal, 2013).

    Having considered the purpose of instructional feedback and current practices, we next provide a broad review of best practices for giving feedback.

    CHARACTERISTICS OF EFFECTIVE FEEDBACK

    In general, regardless of the task, feedback is meant to provide advice from a mentor or provider that helps a recipient modify and improve future performance. The question is how best to provide feedback so that it results in improved performance on a specific task. A host of factors come into play, from the complexity of the task to the method of imparting feedback to the standard used to judge performance. Although the value of feedback is frequently noted in the literature (Brinko, 1993; Hattie and Timperley, 2007; Ismail et al., 2012), there is little research on what makes feedback given to faculty effective for improving undergraduate teaching (Bernstein, 2008; Stes et al., 2010). For the purposes of this review, we define feedback as "information provided by an agent (e.g., teacher, peer, book, parent, self, experience) regarding aspects of one's performance or understanding" (Hattie and Timperley, 2007, p. 81). Feedback does not have to be provided by another person; individuals are capable of acquiring feedback through self-reflection. For example, one may learn tasks simply by observing others' performance (Bandura, 1977; Green and Osborne, 1985). The observer then modifies his or her own behavior through comparisons with others and subsequent self-reflection (Wong, 1985).

    We draw on the extensive literature from organizational psychology about the characteristics of feedback that are important for improving workplace performance. For example, Kluger and DeNisi (1996) review the effectiveness of vocational interventions designed to inform recipients about ways to improve their performance on tasks as diverse as typing, test performance, and attendance on the job, excluding feedback related to interpersonal issues. They caution that feedback does not always result in improved performance and can in fact be detrimental. They conclude that several factors, including how the task is defined and how the feedback is delivered, affect the outcome of feedback. In work situations, for example, feedback that threatens self-esteem or interferes with the initial stages of learning a new task can have a negative effect on performance (Kluger and DeNisi, 1996). We also draw from literature on the effects of feedback on K–12 student outcomes. Researchers have shown that in testing situations, for example, students do not improve on subsequent tests simply by knowing they missed an item; to improve, they also need to know the correct answer (Bangert-Drowns, 1991). Finally, we include substantial evidence from the K–12 teacher education literature that immediate and specific instructional feedback supports continuing growth (Brinko, 1993; Scheeler et al., 2004). We also reference the few empirical studies analyzing the effectiveness of feedback, mentoring, and coaching given as part of university faculty instructional development (Stes et al., 2010).

    Through review of these and other studies from K–12 teacher education and workplace performance, we identified characteristics of effective feedback, described in detail below (Table 1). Effective feedback: 1) clarifies the task by providing instruction and correction; 2) improves motivation, which can prompt increased effort; and 3) is perceived as valuable by the recipient because it is provided by credible sources. We propose that feedback about undergraduate teaching characterized by these features can lead to tangible benefits, including instructor growth and accolades, increased instructor motivation, and improved student learning.

    Table 1. Providing effective instructional feedback

    Each numbered quality of effective feedback is listed below with its characteristics; each characteristic is followed by suggestions for implementation.

    1. Clarifies the task by providing instruction and correction
    • Provides instruction; defines a clear standard for how the task should be completed. Suggestions: teaching and learning conferences; workshops on innovative teaching practices; online video resources.
    • Concrete and specific; identifies types of errors and provides suggestions for correction. Suggestion: guide feedback with validated classroom observation protocols.
    • Timely (as soon as possible after performance of the task). Suggestion: debrief immediately after the peer observation, rather than months later or at the end of the semester.
    • Occurs over multiple occasions. Suggestion: conduct observations several times during the semester.
    • Consistent; minimizes conflicting messages from students and peers. Suggestions: discuss departmental expectations and methods for dealing with student resistance; use a consistent template for peer-teaching evaluations.
    • Self-referenced (compared with the individual's own ability and expectations rather than with a peer). Suggestions: discuss the individual's concerns and address the specific challenges the instructor wishes to solve; meet before the classroom observation to set expectations and solicit requests for feedback about specific challenges.
    • Does not interfere with the initial stages of learning. Suggestion: choose an observation date after the first instructional opportunity.
    • Does not threaten self-esteem. Suggestion: highlight areas of strength and areas for improvement as a formative evaluation that is not part of promotion and tenure decisions.

    2. Improves motivation that can prompt increased effort
    • Leads to higher goal setting. Suggestion: focus on student outcomes and on changes that result in gains in student achievement.
    • Provides a positive, encouraging message. Suggestion: acknowledge challenges but emphasize solutions.
    • Accounts for confidence and experience level. Suggestion: for novices, emphasize what they are doing well; experts are ready for more corrective feedback.

    3. Perceived as valuable by the recipient because it is provided by a reputable source
    • Encourages seeking feedback voluntarily. Suggestion: unit head implements a peer-coaching model with volunteers.
    • Increases the perceived value of feedback for improving job status. Suggestion: unit head rewards feedback seeking in the same way he or she rewards positive student evaluations when evaluating faculty performance.
    • Protects the ego and others' impressions of the recipient. Suggestion: keep feedback private and developmental rather than public and evaluative; any written materials provided to the department mention that peer evaluation occurred, not the substance of the discussions.
    • Respected status of the feedback provider. Suggestion: a knowledgeable source of higher status who makes clear that the feedback is offered for the well-being and improvement of the recipient and for improved student outcomes.

    Table 2. Resources for providing feedback in higher education

    Conferences and workshops
    • Instructional development workshops (centers for teaching and learning; National Academies Summer Institutes [www.academiessummerinstitute.org])
    • Process Oriented Guided Inquiry Learning (https://pogil.org)
    • Project Kaleidoscope meetings (PKAL, Association of American Colleges and Universities [www.aacu.org/pkal])
    • Center for the Integration of Research, Teaching, and Learning (CIRTL; www.cirtl.net)

    Online videos
    • iBiology education videos from the American Society for Cell Biology (www.ibiology.org/ibioeducation.html)
    • Howard Hughes Medical Institute biological demonstrations (www.researchandteaching.bio.uci.edu/lecture_demo.html#ATP)

    Classroom observation protocols
    • Reformed Teaching Observation Protocol (RTOP; http://physicsed.buffalostate.edu/AZTEC/RTOP/RTOP_full/about_RTOP.html)
    • Classroom Observation Protocol for Undergraduate STEM (COPUS; Smith et al., 2013)
    • Taxonomy of observable practices for scientific teaching (Swarts et al., 2013)
    • Electronic Quality of Inquiry Protocol (EQUIP; Marshall et al., 2010)

    Departmental culture
    • Discuss expectations of the department and methods for dealing with student resistance (Seidel and Tanner, 2013)
    • PULSE Vision & Change Rubrics (Aguirre et al., 2013)

    Peer evaluation
    • Peer evaluation of teaching guide (http://tenntlc.utk.edu/ut-peer-evaluation-of-teaching-guide)
    • Peer Review of Teaching project (www.courseportfolio.org/peer/pages/index.jsp)
    • Peer Review of Teaching: A Sourcebook, 2nd ed. (Chism, 2007)
    • "The role of colleagues in the evaluation of college teaching" (Cohen and McKeachie, 1980)

    Effective Feedback Clarifies the Task in a Specific, Timely Manner, with a Consistent Message That Informs Recipients How to Improve

    At a fundamental level, feedback provides information useful for measuring performance against expectations (a task standard) and suggestions for correcting discrepancies between one's performance and that standard (Hattie and Timperley, 2007). To correct discrepancies, feedback must identify the type and extent of errors and contain suggestions for correcting them (Scheeler et al., 2004). If the task standard the recipient is aiming for is not clear, feedback is less likely to be effective. For example, physicians in training are able to improve performance when the feedback they receive includes critical incidents indicating when their performance deviated from the task standard (Wigton et al., 1986). Recipients of such specific feedback understand their evaluations better (Ilgen et al., 1979). However, if there is no clear task-related standard against which to compare performance on a novel task, it should not be surprising that feedback has little effect. And if the environment contains conflicting sources of feedback (from peers and others), recipients may find it difficult to resolve how to integrate the feedback (Kluger and DeNisi, 1996).

    Feedback must be concrete and specific: not only is concrete, specific feedback preferred by recipients (Liden and Mitchell, 1985), it is also more effective than general feedback. For example, K–12 teachers are more likely to improve their behaviors (e.g., the amount of time spent asking questions of students or other pacing and prompting behaviors) when they are given specific feedback that includes examples of how to improve rather than just general information, for example, telling them the number of questions they asked students (Englert and Sugai, 1983; Hindman and Polsgrove, 1988; Giebelhaus, 1994; O’Reilly and Renzaglia, 1994).

    Feedback is most effective when provided in a timely manner. In the K–12 setting, researchers compared changes in teaching behaviors following feedback delivered immediately versus after a delay; delayed feedback was less effective. Immediate feedback involved supervisors interrupting instruction when the teacher incorrectly performed a target behavior, identifying the error, asking the teacher how he or she could correct it, and often providing a more appropriate procedure or modeling the correct behavior (O'Reilly, 1992; O'Reilly and Renzaglia, 1994; Coulter and Grossen, 1997). Similar studies demonstrated that feedback was more effective at changing teaching behaviors beyond an immediate class session when given over multiple, but not overly frequent, occasions (Rezler and Anderson, 1971; Ilgen et al., 1979; Chhokar and Wallin, 1984; Fedor and Buckley, 1987).

    Effective feedback provides a consistent message that considers both the recipient's knowledge and any conflicting messages the recipient may be receiving. Both peers and students explicitly compare teaching performance with that of other instructors (Cavanagh, 1996). McColskey and Leary (1985) refer to this comparative feedback as "norm-referenced." Norm-referenced feedback that conveyed a message of failure (negative feedback) led to lower self-esteem, expectations, and motivation (McColskey and Leary, 1985). In contrast, "self-referenced" feedback, which compared an individual's performance with other measures of his or her own ability, produced increased feelings of competence, because the feedback attributed the individual's skills to personal effort and contained higher expectations for future performance (McColskey and Leary, 1985).

    One alternative to norm-referenced feedback is Utell's (2013) facilitative feedback model, which seeks to build skills and expose opportunities for growth. The facilitative feedback model shares similarities with the peer-teaching discussion group model proposed by Anderson et al. (2011). Other models also rely on establishing a mentoring relationship between the individuals receiving and providing feedback (Showers, 1984; Centra, 1993; Johnson and Ryan, 2000). In these models, the instructor's strengths and weaknesses are explicitly identified before a task is performed. During and after performance of the task, the instructor receives feedback from the mentor, who suggests ways to improve and highlights areas of strength and future potential. Additionally, meeting before the observation may increase buy-in for the process: it opens the door for two-way conversation, shifting the process from evaluation to coaching, and provides opportunities for the instructor to suggest areas of concern or interest to the mentor (Skinner and Welch, 1996). This type of model accounts for individual differences in experience and presents a consistent message, which could help instructors navigate the conflicting, and frequently negative, feedback given by disparate sources.

    Effective Feedback Encourages the Instructor, Improving Motivation and Stimulating Increased Effort

    Both the tone of feedback and the context in which it is given have been shown to be important in determining its effectiveness. Business management author Michael LeBoeuf's maxim from his 1985 book The Greatest Management Principle in the World (Putnam), "what gets rewarded gets done," reminds us to consider the factors that motivate someone to want to improve at his or her job. Locke and Latham's (2006) goal-setting theory suggests that providing feedback per se does not improve motivation or performance; it does so only if it leads to higher goals being set or greater commitment to existing goals. In a meta-analysis of 33 studies, Locke and Latham (1990) report that setting specific, challenging goals, rather than easy or vague goals like "doing your best," consistently led to better performance.

    Feedback should be positively framed but not generically positive. Instructors prefer hearing positive feedback over negative feedback (Jussim et al., 1995). Feedback is more easily recalled when accompanied by a positive, encouraging message than by a negative one (Podsakoff and Farh, 1989), and positive feedback is considered more accurate (Podsakoff and Farh, 1989; Jussim et al., 1995). In K–12 settings, researchers have demonstrated that adding a positive message to noncorrective feedback (e.g., information on the number of times the teacher exhibited a specific behavior) increases the effectiveness of that feedback compared with noncorrective feedback alone (Cossairt et al., 1973). However, perpetually receiving only positive feedback leads to complacency (Podsakoff and Farh, 1989); perhaps an instructor begins to think, "I am doing so well, I don't need to improve."

    Feedback providers should consider the confidence and experience of the recipient when choosing the appropriate amount of encouragement. Individuals with lower self-confidence tend to view negative feedback as more accurate (Jussim et al., 1995) and to rely on feedback from external sources rather than from themselves (Ilgen et al., 1979). Novices generally have lower self-esteem, and they indicate a preference for positive feedback. For example, novice learners preferred language instructors who emphasized what students were doing well in the classroom rather than correcting mistakes (Finkelstein and Fishbach, 2012). Experts, however, will seek out negative feedback, indicating more interest in learning what they did wrong and how to correct it (Finkelstein and Fishbach, 2012).

    Unfortunately, the common practices for imparting instructional feedback in higher education do not account for differences in instructor self-confidence and experience. Faculty commonly receive negative feedback, or what Utell (2013) calls "failure-based feedback," which focuses on finding fault in task performance. Failure-based feedback can be found in the two most common types of teaching feedback. Students' references to evaluations as an opportunity to "vent" (Marlin, 1987; Lindahl and Unger, 2010) or as a "plot to get back at an instructor" (Jacobs, 1987) are examples of fault-finding feedback. Students can also deliver failure-based feedback by choosing not to enroll in courses, and this feedback can have devastating consequences. For example, one study documented the termination of a faculty member following rising student attrition rates in a course utilizing evidence-based teaching practices (Silverthorn et al., 2006).

    An instructor may therefore be unwilling to take risks and may choose not to adopt evidence-based teaching strategies if they are perceived as likely to result in failure-based feedback from students or peers.

    Feedback Is More Likely to Be Sought If the Potential Benefit Outweighs the Costs

    As we reviewed in the Introduction, the current models for receiving feedback in higher education (end-of-course student ratings and peer reviews) are intended to assess competence using a standardized instrument, are prescribed rather than voluntary, and are not perceived as coming from credible sources. Those interested in improving teaching recommend adopting a more formative, developmental feedback model that endeavors to improve performance on a task (Weimer and Lenze, 1994) and that solicits volunteers, who have been shown to be more receptive to feedback (Blumenthal, 1978; Sweeney and Grasha, 1979). "Feedback seeking" better describes this situation, because individuals are motivated to voluntarily seek feedback for their own improvement (Ashford et al., 2003).

    Organizational psychologists characterize two major competing motives that influence the likelihood that someone will voluntarily seek feedback related to job performance. Ashford and colleagues explain that “individuals are instrumentally motivated to obtain valued information, but are also motivated to protect and/or enhance their ego and to protect others’ impressions of them” (Ashford et al., 2003, p. 774). Perceived benefits and costs are weighed in each decision. For perceived benefits, feedback seekers look for credibility, seeking feedback from individuals who possess relevant and accurate information (Fedor et al., 1992; Finkelstein and Fishbach, 2012). Negative feedback is accepted only if it comes from a high-status source (Ilgen et al., 1979), and status changes both the perception of and the desire to respond to feedback (Ilgen et al., 1979; Greller, 1980). On the other side, costs to one's ego are also considered. For example, researchers find that individuals with longer time on the job seek less feedback, possibly due to reduction in perceived value or increased perception of costs (Ashford, 1986). In addition, feedback is more likely to be sought if the situation is uncertain and the individual perceives the risk to his or her job warrants this sacrifice of his or her ego (Hays and Williams, 2011). Individuals are more likely to seek feedback if the supervisor shows respect and concern (VandeWalle et al., 2000) and if the feedback will be private and developmental rather than public and evaluative (Ashford and Northcraft, 1992).

    The organizational context for university faculty bears some similarity to the corporate and K–12 settings described above. Our tiered system of ranks denotes status, and established individuals with tenure face less uncertainty about their future than junior faculty and instructors. One major difference may be the particularly low value placed on teaching performance and the associated lack of rewards for teaching (Hativa, 1995; Walczyk and Ramsey, 2003; Gibbs and Coffey, 2004; AAAS, 2010; Mervis, 2013). Faculty members attribute greater value to feedback when it comes from knowledgeable sources, and they also consider the perspective and motivation of the source (Wergin et al., 1976). Applying the principles from organizational settings, one would predict that junior university faculty would be more likely to voluntarily seek feedback if it is perceived as providing value, for example, by increasing the likelihood of receiving tenure and promotion. Feedback, even negative feedback, would also be accepted and acted upon if the source holds greater status. For tenured faculty members, feedback adds less value: they are not likely to gain status by improving their teaching, so the cost to their self-image may be too great to warrant voluntarily seeking feedback from peers.

    Vision for Feedback in Higher Education

    We summarize these research findings as specific suggestions for structuring feedback (Table 1), so that feedback best supports a faculty recipient in modifying and improving his or her teaching. If at all possible, feedback should be delivered immediately and on more than one occasion. This could entail going over instructional materials before a class and immediately discussing thoughts for improvement, or meeting right before and after a class session, rather than waiting through the long delay common to end-of-semester student evaluations or peer evaluations. Feedback providers need to be perceived as sympathetic, credible, and unbiased. Selecting coaches from outside the tenure-granting department may minimize conflicts, preserve collegiality, and allow senior faculty access to expert role models (Huston and Weaver, 2008). However, research on peer coaching in the K–12 setting using collaborative teams of teachers of equal status, rather than expert supervisors, also showed demonstrable improvement in teaching behavior and student achievement (Showers, 1984). Stes and colleagues (2010), reviewing the handful of studies empirically examining the effects of instructional mentoring or coaching in higher education, noted improvements in teacher attitudes (Finkelstein, 1995; Gallos et al., 2005; McShannon and Hynes, 2005) and knowledge (Harnish and Wild, 1993) after peer mentoring and coaching. However, none of these studies utilized comparison groups or specifically tested the effect of the mentor's status. Regardless of their status, providers need to account for individual differences in experience and self-confidence when counseling recipients. To be most useful, feedback should be voluntarily sought. Newer faculty members, whether tenure-track or not, are more likely to appreciate the benefit of feedback to their advancement. Senior faculty members, who no longer need to achieve promotion, may respond better to encouragement and to attaining goals such as documenting improved student learning in their classes. Finally, the most effective feedback identifies errors in a positive manner and provides examples of how to improve. This requires an increased openness and visibility, in which it is accepted that faculty regularly observe classroom teaching in the same manner used when mastering a new research technique. It also requires better descriptions (task standards) of what evidence-based practices look like during implementation (e.g., the taxonomy of observable practices for scientific teaching in development by Swarts et al., 2013).

    OVERCOMING EXISTING BARRIERS: STRATEGIES FOR RECIPIENTS OF FEEDBACK

    In this section, we identify barriers to implementing best practices for providing effective feedback on undergraduate teaching. Then we highlight strategies that recipients of feedback may borrow from existing programs facilitating pedagogical change and faculty development.

    Situational barriers to providing effective feedback are apparent early in faculty careers; in fact, they begin in graduate school. During their graduate training, most faculty members had few opportunities for teacher development: only a third of science graduate students report having access to even a single semester of training in pedagogy (Golde and Dore, 2001; Tanner and Allen, 2006). Given this lack of professional development, many instructors are unaware of pedagogical techniques (Crouch and Mazur, 2001; Handelsman et al., 2004; Pukkila, 2004; DeHaan, 2005). It is therefore unsurprising that effective use of challenging pedagogical techniques is rare (Andrews et al., 2011; Henderson et al., 2012). This lack of training ultimately impacts not only the use of good teaching practices but also the ability to provide instructional feedback. Scientists' professional identities may also act as a barrier to widespread reform in science education, an idea proposed by Brownell and Tanner (2012): teaching is sometimes an undervalued part of faculty professional identity. Incorporating long-term, ongoing opportunities for pedagogical development for graduate students can address this barrier by promoting innovative ways to seek and give feedback at the earliest stages of faculty careers (Brownell and Tanner, 2012).

    Alternatively, faculty members may be aware of evidence-based teaching methods but demonstrate a performance gap between what they are doing (or not doing) and what they should be doing (Andrews et al., 2011; Ebert-May et al., 2011). After exposure to these teaching practices at workshops, faculty may need additional support through implementation (Table 2). While discipline-based science education research continues to grow, not every department has in-house experts who can provide feedback, and these individuals may not have sufficient status for their feedback to be valued. Showers's model (1984) supports the hypothesis that peers can be effectively trained as coaches, and Bernstein (2008) mentions several models for engaging centers for teaching and learning and fellow faculty members in the process (Hutchings, 1995; Chism, 2007). Buy-in to evidence-based teaching practices may be another barrier, however. Faculty may resist change for reasons such as commitment to content coverage (Anderson, 2002), lack of confidence in student ability (Brown et al., 2006; Henderson and Dancy, 2007), employment as adjunct faculty with different expectations and campus involvement (Roney and Ulerick, 2013), or concerns over classroom management (Welch et al., 1981). Consequently, instructional feedback may not be framed from a reformed perspective.

    Moreover, the reward structure at research institutions often undervalues teaching (Hativa, 1995; Walczyk and Ramsey, 2003; Gibbs and Coffey, 2004; AAAS, 2010; Mervis, 2013). Often, there are no formal mechanisms in place for offering peer feedback beyond promotion and tenure evaluations, nor are there rewards for participating in a peer-feedback process. Faculty members may lack incentives for improving teaching while facing high expectations for research productivity (Boyer Commission on Educating Undergraduates in the Research University, 1998; NRC, 2003; DeHaan, 2005). Taken together, these barriers compound over time, so that a sense of community around teaching in higher education may not be the norm.

    Given the barriers described above, we recommend that change-makers and faculty development consultants consider the following example. In nursing, researchers have identified a systematic approach to improving productivity and competence (Stolovitch et al., 2000). The stepwise approach involves first analyzing the performance gap to understand the difference between the behavior exhibited and expectations, as well as the significance of that gap. Next, the underlying cause of the gap is identified before an appropriate intervention is selected. Finally, subsequent change is measured (Stolovitch et al., 2000). This approach has relevance for higher education, because there may be multiple underlying reasons that faculty fail to adopt evidence-based teaching practices. Feedback providers should use knowledge of the reason(s) someone is not implementing evidence-based teaching practices to frame and develop appropriate feedback interventions. Change-makers should consider that multiple intertwined causes may prevent effective implementation. This stepwise analysis supports feedback-giving efforts tailored to individuals' needs and challenges, with room for flexibility, variation, and change.

    Both change-makers and feedback recipients might look to strategies that support shifts in professional identity while building a sense of community around teaching (thus changing culture) (Table 2). Establishing faculty learning communities for those willing to participate may be one avenue for offering and receiving regular feedback beyond student evaluations and drop-in peer evaluations. Peer coaching is another strategy that may support this shift. A peer-feedback model, unlike a one-time classroom observation, is all-encompassing, providing feedback on everything from learning objectives to assessment strategies rather than evaluating only in-class performance. In this model, instructors regularly observe one another, providing support, feedback, and assistance to improve one another's instructional practices (Mallette et al., 1999; Weimer, 2002; Huston and Weaver, 2008). Weimer (2002, p. 197) suggests that this is a way to let peers "function as colleagues and work collaboratively on improvement efforts." Weimer offers two useful guiding principles: first, practice the "golden rule" in giving feedback, "give unto each other the kind and quality of feedback you would like to receive"; and second, develop an agenda. With a defined agenda, faculty members may learn and reflect together on specific problems. This shifts the feedback dynamic from a one-way exchange to more productive two-way communication. Both faculty learning communities and peer coaching may support science, technology, engineering, and mathematics (STEM) faculty grappling with student resistance to evidence-based instructional practices.

    Given what we know about best practices for feedback, we recommend that change-makers, feedback providers, and feedback recipients focus on identifying how to make feedback specific, timely, corrective, and positively framed. Both change-makers and feedback recipients might borrow tools from existing faculty development programs to structure higher-quality feedback (Table 2). For example, interested faculty might adopt the feedback practices used by the Faculty Institutes for Reforming Science Teaching (FIRST IV; www.msu.edu/∼first4/Index.html). FIRST IV participants watch videotaped classroom sessions and then respond to questions such as: "What are the students doing? What is the instructor doing? How would you go about changing this classroom so it is more student-centered? What is the instructor doing that students themselves should be doing?" Participants discuss and reflect, and then perform self-evaluations of their own videotaped classroom samples in concert with peer and expert review. Faculty may also use rubrics developed by the Partnership for Undergraduate Life Science Education (PULSE; www.pulsecommunity.org). These rubrics are intended to structure department-level discussion and reflection about how program curricula and teaching practices align with Vision and Change goals, and faculty may use them to spark more nuanced discussions about feedback on teaching practices. Extensive additional resources are available through the Center for the Integration of Research, Teaching, and Learning (CIRTL; www.cirtl.net) and the Measures of Effective Teaching (MET) project (www.metproject.org/faq.php).

    Feedback recipients may be their own best advocates for receiving more useful feedback (Tables 1 and 2). Feedback recipients could propose a preobservation meeting to discuss class goals, challenges faced, and areas in which a peer observer might suggest specific strategies. This preobservation meeting may set up a framework for recipients to receive more thoughtful, focused, practical feedback; such a framework may also increase their perception of the value of feedback and give them a voice in the process. Because barriers to accessing locally based learning communities may exist, programs such as PULSE use technology to share resources across institutional borders. We encourage feedback recipients to think beyond their department walls and seek additional feedback from external mentors. From research about highly effective athletic coaches, we know that individuals with strong social networks who discussed their practices with others and dedicated portions of their off-season to studying their sports had better winning records than coaches who did not (Horton and Young, 2010). Essentially, winning coaches were more successful because they actively sought feedback to improve their performance. Instructors, like coaches, benefit from discussing their practices and sharing feedback to achieve a winning season as measured by student achievement. This mirrors what we know about how people learn: we continually reconstruct our understanding of the world, and this process is social (Bransford et al., 2000). Likewise, we need to actively seek feedback to revise and improve our teaching practices.

    AREAS FOR FURTHER RESEARCH

    What we know about best practices for feedback comes primarily from K–12 teacher education research and organizational psychology research. Research about best practices for instructional feedback in higher education, that is, feedback for college faculty, is largely uncharted territory. Here, we propose several areas of instructional feedback in need of more research, focusing specifically on instructional feedback for college faculty and potential outcomes related to student experiences.

    Many faculty members, including educational researchers, are confused or disagree as to what exactly constitutes active learning (Hativa, 1995; Miller et al., 2000; Winter et al., 2001; Hanson and Moser, 2003; Yarnall et al., 2007; Chi, 2009; Allendoerfer et al., 2012). As a result, faculty members struggle to define the standards by which to frame feedback. Few models exist; consequently, even faculty members who have attended workshops about active learning mischaracterize their own performance (Ebert-May et al., 2011). This disconnect between understanding and implementation suggests that feedback must clarify specific expectations while limiting contradictory information. One resource compilation to help instructors better envision and create engaged classroom environments is in development: the iBiology Project at the American Society for Cell Biology is creating and posting videos through its iBiologyEducation YouTube channel that showcase evidence-based classroom practices (iBioEducation, 2013). Research is needed to address questions such as: How does feedback that includes clarification about effectively implementing evidence-based teaching practices impact faculty teaching? In other words, to what extent does "clarifying the task" aid instructors? Does it increase the likelihood that faculty members can accurately define and effectively implement active-learning strategies?

    We know that simply providing instructors with evidence about their teaching practices is not enough to instigate improved teaching (Andrews and Lemons, personal communication). Tools are needed to provide structured feedback on evidence-based teaching practices that will both support implementation and inform a peer-teaching evaluation system. Classroom observation protocols exist (e.g., the Reformed Teaching Observation Protocol; Sawada et al., 2002), but these are used for evaluative research purposes rather than for formative feedback, and the measurement scales are challenging to interpret (Marshall et al., 2010). Moreover, these protocols do not offer strategic feedback for improvement (Marshall et al., 2011). New classroom observation protocols are in development that may be useful for formative instructional feedback (Eddy et al., 2013; Smith et al., 2013; Swarts et al., 2013), as is a feedback tool to improve evidence-based teaching practices (Gormally et al., unpublished data). More work is needed to understand: What are effective means of providing instructional feedback in higher education? How should this feedback be structured? What types of feedback do instructors report as most effective in encouraging them to try new techniques?

    To understand how to motivate faculty to seek and use feedback, we need to clarify the types of feedback desired by faculty in different job settings. First, we need to know more about how faculty members give and receive feedback. Then we can ask whether informal and formal feedback approaches yield different outcomes in how instructors perceive and respond to the feedback. How does an instructor's perception of a feedback provider's value impact his or her response to feedback? How does the manner in which feedback is conveyed impact instructor morale? How do different types of faculty respond to different ways of conveying instructional feedback? It will also be critical to characterize, measure, and quantify instructional change following feedback. How do faculty behaviors, beliefs, and attitudes change as a result of feedback? How do faculty professional identities shift as a result of feedback? Researchers may explore whether we begin to see a cultural shift and whether "what gets rewarded gets done" will come to encompass both research and teaching.

    Studies show modest but significant improvements in teaching following feedback, as measured by students' perceptions of faculty change reported on student evaluations (Cohen, 1980; Safavi et al., 2013). We still need to understand whether receiving feedback ultimately impacts student outcomes. How do students perceive changes in teaching behaviors following feedback? Further, how might end-of-semester course evaluations be revised to be more learner-centered? How might the type of feedback elicited by a learner-centered course evaluation differ from that elicited by a teacher-centered one? Do faculty members view this feedback as more valuable than traditional teacher-centered course evaluations? Do more faculty members report using this feedback? How might this feedback be used to address or head off student resistance in future courses? How does feedback lead to change that impacts student attitudes about the classroom environment, pedagogy, and learning science? Research addressing these questions could substantially affect both faculty and student resistance to adopting evidence-based practices.

    People are more likely to increase effort when “the goal is clear, when high commitment is secured for it, and when belief in eventual success is high” (Kluger and DeNisi, 1996). The efforts on the part of STEM instructors to reform instruction and shift the status quo closer to evidence-based teaching practices are heroic and ongoing, but we must match these efforts with improved instructional feedback. More research is needed to understand the outcomes and impacts of offering feedback to faculty. Implementing a reformed instructional feedback protocol, in addition to reformed teaching, may seem daunting. However, our current strategies for providing instructional feedback in STEM are inadequate. Therefore, we must challenge one another to move beyond student evaluations and the typically unproductive drop-in observations. Instead, we must advocate for more research in STEM education that focuses on the outcomes of improved instructional feedback, leading to the development and implementation of successful models of instructional feedback.

    ACKNOWLEDGMENTS

    The authors acknowledge continuing support and feedback from the University of Georgia Biology Education Research Group. This work was supported by National Science Foundation grant DUE-0942261 to P.B.

    REFERENCES

  • Abrami PC (1989). How should we use student ratings to evaluate teaching? Res High Educ 30, 221-227.
  • Abrami PC, Cohen PA, d'Apollonia S (1990). Validity of student ratings of instruction—what we know and what we do not. J Educ Psychol 82, 219-231.
  • Addy TM, Blanchard MR (2010). The problem with reform from the bottom up: instructional practises and teacher beliefs of graduate teaching assistants following a reform-minded university teacher certificate programme. Int J Sci Educ 32, 1045-1071.
  • Aguirre KM, Balser TC, Thomas J, Marley KE, Miller KG, Osgood MP, Pape-Lindstrom PA, Romano SL (2013). Letter to the editor: PULSE Vision & Change rubrics. CBE Life Sci Educ 12, 579-581.
  • Aleamoni LM (1999). Student rating myths versus research facts from 1924 to 1998. J Pers Eval Educ 13, 153-166.
  • Aleamoni LM, Hexner PZ (1980). A review of the research on student evaluation and a report on the effect of different sets of instructions on student course and instructor evaluation. Instruct Sci 9, 67-84.
  • Allendoerfer C, Kim MJ, Burpee E, Wilson D, Bates R (2012). Awareness of and receptiveness to active learning strategies among STEM faculty. In: Frontiers in Education Conference Proceedings, Seattle, WA, pp. 1-6.
  • American Association for the Advancement of Science (AAAS) (2010). Vision and Change: A Call to Action, Washington, DC.
  • AAAS (2011). Vision and Change in Undergraduate Biology: A Call to Action, Washington, DC.
  • Anderson R (2002). Reforming science teaching: what research says about inquiry. J Sci Teach Educ 13, 1-12.
  • Anderson WA, et al. (2011). Changing the culture of science education at research universities. Science 331, 152-153.
  • Andrews TM, Leonard MJ, Colgrove CA, Kalinowski ST (2011). Active learning not associated with student learning in a random sample of college biology courses. CBE Life Sci Educ 10, 394-405.
  • Ashford SJ (1986). Feedback-seeking in individual adaptation—a resource perspective. Acad Manage J 29, 465-487.
  • Ashford SJ, Blatt R, VandeWalle D (2003). Reflections on the looking glass: a review of research on feedback-seeking behavior in organizations. J Manage 29, 773-799.
  • Ashford SJ, Northcraft GB (1992). Conveying more (or less) than we realize—the role of impression management in feedback-seeking. Organ Behav Hum Decis Process 53, 310-334.
  • Bandura A (1977). Social Learning Theory, Englewood Cliffs, NJ: Prentice Hall.
  • Bangert-Drowns RL (1991). The instructional effect of feedback in test-like events. Rev Educ Res 61, 213-238.
  • Bernstein DJ (2008). Peer review and evaluation of the intellectual work of teaching. Change 40, 48-51.
  • Blumenthal P (1978). Watching ourselves teaching psychology. Teach Psychol 5, 162-163.
  • Boyer Commission on Educating Undergraduates in the Research University (1998). Reinventing Undergraduate Education: A Blueprint for America's Research Universities, Stony Brook: State University of New York. www.sunysb.edu/boyerreport (accessed 11 June 2013).
  • Bransford JD, Brown AL, Cocking RR (eds.) (2000). How People Learn: Brain, Mind, Experience, and School, expanded ed., Washington, DC: National Academies Press.
  • Brickman P, Gormally C, Armstrong N, Brittan H (2009). Effects of inquiry-based learning on students' science literacy skills and confidence. Int J Scholarsh Teach Learn 3(2), 1-22.
  • Brinko KT (1993). The practice of giving feedback to improve teaching: what is effective? J High Educ 64, 574-593.
  • Brown PL, Abell SK, Demir A, Schmidt FJ (2006). College science teachers' views of classroom inquiry. Sci Educ 90, 784-802.
  • Brownell SE, Tanner KD (2012). Barriers to faculty pedagogical change: lack of training, time, incentives, and … tensions with professional identity? CBE Life Sci Educ 11, 339-346.
  • Callahan JP (1992). Faculty attitude towards student evaluation. Coll Student J 26, 98-102.
  • Cashin WE (1990). Students do rate academic fields differently. In: New Directions for Teaching and Learning, ed. M Theall and J Franklin, San Francisco, CA: Jossey-Bass, 113-121.
  • Cashin WE, Downey RG (1992). Using global student rating items for summative evaluation. J Educ Psychol 84, 563-572. Google Scholar
  • Cavanagh RR (1996). Formative and summative evaluation in the faculty peer review of teaching. Innov High Educ 20, 235-240. Google Scholar
  • Centra JA (1993). Reflective Faculty Evaluation: Enhancing Teaching and Determining Faculty Effectiveness, San Francisco, CA: Jossey-Bass. Google Scholar
  • Centra JA (2000). Evaluating the teaching portfolio: a role for colleagues. New Direct Teach Learn 2000, (83), 87-93. Google Scholar
  • Chandler TA (1978). The questionable status of student evaluations of teaching. Teach Psychol 5, 150-152.
  • Cheng DA (2011). Effects of class size on alternative educational outcomes across disciplines. Econ Educ Rev 30, 980-990.
  • Chhokar JS, Wallin JA (1984). A field study on the effect of feedback frequency on performance. J Appl Psychol 69, 524-530.
  • Chi MTH (2009). Active-constructive-interactive: a conceptual framework for differentiating learning activities. Top Cogn Sci 1, 73-105.
  • Chism NV (2007). Peer Review of Teaching: A Sourcebook, 2nd ed., Bolton, MA: Anker.
  • Cohen PA (1980). Effectiveness of student-rating feedback for improving college instruction: a meta-analysis of findings. Res High Educ 13, 321-341.
  • Cohen PA, McKeachie WJ (1980). The role of colleagues in the evaluation of college teaching. Improving Coll Univ Teach 28, 147-154.
  • Cossairt A, Hall RV, Hopkins BL (1973). Effects of experimenters’ instructions, feedback, and praise on teacher praise and student attending behavior. J Appl Behav Anal 6, 89-100.
  • Coulter GA, Grossen B (1997). The effectiveness of in-class instructive feedback versus after-class instructive feedback for teachers learning direct instruction teaching behaviors. Effect School Pract 16, 21-35.
  • Crouch CH, Mazur E (2001). Peer instruction: ten years of experience and results. Am J Phys 69, 970-977.
  • Dancy M, Henderson C (2010). Pedagogical practices and instructional change of physics faculty. Am J Phys 78, 1056-1063.
  • d’Apollonia S, Abrami PC (1997). Navigating student ratings of instruction. Am Psychol 52, 1198-1208.
  • DeHaan RL (2005). The impending revolution in undergraduate science education. J Sci Educ Technol 14, 253-269.
  • Derting TL, Ebert-May D (2010). Learner-centered inquiry in undergraduate biology: positive relationships with long-term student achievement. CBE Life Sci Educ 9, 462-472.
  • Dowell DA, Neal JA (1982). A selective review of the validity of student ratings of teaching. J High Educ 53, 51-62.
  • Ebert-May D, Brewer C, Allred S (1997). Innovation in large lectures: teaching for active learning. BioScience 47, 601-607.
  • Ebert-May D, Derting TL, Hodder J, Momsen JL, Long TM, Jardeleza SE (2011). What we say is not what we do: effective evaluation of faculty professional development programs. BioScience 61, 550-558.
  • Eddy S, Converse M, Abshire E, Longton C (2013). Development and implementation of an instrument to characterize active learning in large lecture classes. In: Conference Proceedings, ed. AP Wenderoth, Minneapolis, MN: Society for the Advancement of Biology Education Research.
  • Englert CS, Sugai G (1983). Teacher training: improving trainee performance through peer observation and observation system technology. Teach Educ Special Educ 6, 7-17.
  • Fedor DB, Buckley MR (1987). Providing feedback to organizational members. J Bus Psychol 2, 171-181.
  • Fedor DB, Rensvold RB, Adams SM (1992). An investigation of factors expected to affect feedback seeking: a longitudinal field study. Pers Psychol 45, 779-805.
  • Feldman KA (1988). Effective college teaching from the students’ and faculty's view: matched or mismatched priorities? Res High Educ 28, 291-344.
  • Finkelstein M (1995). Assessing the Teaching and Student Learning Outcomes of the Katz/Henry Faculty Development Model, South Orange: New Jersey Institute for Collegiate Teaching and Learning.
  • Finkelstein SR, Fishbach A (2012). Tell me what I did wrong: experts seek and respond to negative feedback. J Consumer Res 39, 22-38.
  • Franklin J, Theall M, Ludlow L (1991). Grade inflation and student ratings: a closer look. Paper presented at the annual meeting of the American Educational Research Association, Chicago.
  • Freeman S, O’Connor E, Parks JW, Cunningham M, Hurley D, Haak D, Dirks C, Wenderoth MP (2007). Prescribed active learning increases performance in introductory biology. CBE Life Sci Educ 6, 132-139.
  • Gallos MR, van den Berg E, Treagust DF (2005). The effect of integrated course and faculty development: experiences of a university chemistry department in the Philippines. Int J Sci Educ 27, 985-1006.
  • Gibbs G, Coffey M (2004). The impact of training of university teachers on their teaching skills, their approach to teaching and the approach to learning of their students. Active Learn Higher Educ 5, 87-100.
  • Giebelhaus CR (1994). The mechanical third ear device: a student teaching supervision alternative. J Teach Educ 45, 365-373.
  • Golde CM, Dore TM (2001). At Cross Purposes: What the Experiences of Today's Doctoral Students Reveal about Doctoral Education, Philadelphia, PA: Pew Charitable Trusts.
  • Goldman L (1993). On the erosion of education and the eroding foundations of teacher education (or why we should not take student evaluation of faculty seriously). Teacher Educ Q 20, 57-64.
  • Green G, Osborne JG (1985). Does vicarious instigation provide support for observational learning theories? A critical review. Psychol Bull 97, 3-16.
  • Greller MM (1980). Evaluation of feedback sources as a function of role and organizational level. J Appl Psychol 65, 24-27.
  • Handelsman J, et al. (2004). Scientific teaching. Science 304, 521-522.
  • Hanson S, Moser S (2003). Reflections on a discipline-wide project: developing active learning modules on the human dimensions of global change. J Geogr Higher Educ 27, 17-38.
  • Harnish D, Wild LA (1993). Peer mentoring in higher education: a professional development strategy for faculty. Commun College J Res Pract 17, 271-282.
  • Hativa N (1995). The department-wide approach to improving faculty instruction in higher education: a qualitative evaluation. Res High Educ 36, 377-413.
  • Hattie J, Timperley H (2007). The power of feedback. Rev Educ Res 77, 81-112.
  • Hays JC, Williams JR (2011). Testing multiple motives in feedback seeking: the interaction of instrumentality and self-protection motives. J Vocat Behav 79, 496-504.
  • Henderson C (2008). Promoting instructional change in new faculty: an evaluation of the physics and astronomy new faculty workshop. Am J Phys 76, 179-187.
  • Henderson C, Beach A, Finkelstein N (2011). Facilitating change in undergraduate STEM instructional practices: an analytic review of the literature. J Res Sci Teach 48, 952-984.
  • Henderson C, Dancy MH (2007). Barriers to the use of research-based instructional strategies: the influence of both individual and situational characteristics. Phys Rev ST Phys Educ Res 3, 020102.
  • Henderson C, Dancy MH (2009). Impact of physics education research on the teaching of introductory quantitative physics in the United States. Phys Rev ST Phys Educ Res 5, 020107.
  • Henderson C, Dancy M, Niewiadomska-Bugaj M (2012). Use of research-based instructional strategies in introductory physics: where do faculty leave the innovation-decision process? Phys Rev ST Phys Educ Res 8, 020104.
  • Hills JR (1974). On the use of student ratings of faculty in determination of pay, promotion, and tenure. Res High Educ 2, 317-324.
  • Hindman SE, Polsgrove L (1988). Differential effects of feedback on preservice teacher behavior. Teach Educ Special Educ 11, 25-29.
  • Horton S, Young B (2010). Pedagogical self-improvement methods: lessons from a master coach extrapolated to developing educators. PHENex Journal/Revue phenEPS 2 (2), 1-12.
  • Huston T, Weaver CL (2008). Peer coaching: professional development for experienced faculty. Innov High Educ 33, 5-20.
  • Hutchings P (1995). From Idea to Prototype: The Peer Review of Teaching, Sterling, VA: Stylus.
  • iBioEducation (2013). iBiology Scientific Teaching Series, YouTube.
  • Ilgen DR, Fisher CD, Taylor MS (1979). Consequences of individual feedback on behavior in organizations. J Appl Psychol 64, 349-371.
  • Iqbal I (2013). Academics’ resistance to summative peer review of teaching: questionable rewards and the importance of student evaluations. Teach High Educ 18, 557-569.
  • Ismail EA, Buskist W, Groccia JE (2012). Peer review of teaching. In: Effective Evaluation of Teaching: A Guide for Faculty and Administrators, ed. ME Kite, Society for the Teaching of Psychology, 95.
  • Jacobs LC (1987). University Faculty and Students’ Opinions of Student Ratings, Indiana Studies in Higher Education no. 55, Bloomington: Bureau of Evaluative Studies and Testing, Indiana University.
  • Johnson TD, Ryan KE (2000). A comprehensive approach to the evaluation of college teaching. New Dir Teach Learn 2000 (83), 109-123.
  • Jussim L, Yen HJ, Aiello JR (1995). Self-consistency, self-enhancement, and accuracy in reactions to feedback. J Exp Soc Psychol 31, 322-356.
  • Keig L (2000). Formative peer review of teaching: attitudes of faculty at liberal arts colleges towards colleague assessment. J Pers Eval Educ 14, 67-87.
  • Keig L, Waggoner MD (1994). Collaborative Peer Review: The Role of Faculty in Improving College Teaching, ASHE-ERIC Higher Education Report no. 2, Washington, DC: ERIC Publications.
  • Kember D, Leung DYP, Kwan KP (2002). Does the use of student feedback questionnaires improve the overall quality of teaching? Assess Eval High Educ 27, 411-425.
  • Kluger AN, DeNisi A (1996). The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol Bull 119, 254-284.
  • Knight JK, Wood WB (2005). Teaching more by lecturing less. Cell Biol Educ 4, 298-310.
  • Kolitch E, Dean AV (1999). Student ratings of instruction in the USA: hidden assumptions and missing conceptions about “good” teaching. Stud High Educ 24, 27-42.
  • Kremer JF (1990). Construct validity of multiple measures in teaching, research, and service and reliability of peer ratings. J Educ Psychol 82, 213-218.
  • Liden RC, Mitchell TR (1985). Reactions to feedback: the role of attributions. Acad Manage J 28, 291-308.
  • Lindahl MW, Unger ML (2010). Cruelty in student teaching evaluations. Coll Teach 58, 71-76.
  • Locke EA, Latham GP (1990). The Theory of Goal Setting and Task Performance, Englewood Cliffs, NJ: Prentice Hall.
  • Locke EA, Latham GP (2006). New directions in goal-setting theory. Curr Dir Psychol Sci 15, 265-268.
  • Loeher L (2006). An Examination of Research University Faculty Evaluation Policies and Practice, Portland, OR: Professional and Organizational Development.
  • Malik DJ (1996). Peer review of teaching: external review of course content. Innov High Educ 20, 277-286.
  • Mallette B, Maheady L, Harper GF (1999). The effects of reciprocal peer coaching on preservice general educators’ instruction of students with special learning needs. Teach Educ Special Educ 22, 201-216.
  • Marlin JW (1987). Student perception of end-of-course evaluations. J High Educ 58, 704-716.
  • Marsh HW (1984). Students’ evaluations of university teaching: dimensionality, reliability, validity, potential biases, and utility. J Educ Psychol 76, 707-754.
  • Marsh HW, Roche L (1993). The use of students’ evaluations and an individually structured intervention to enhance university teaching effectiveness. Am Educ Res J 30, 217-251.
  • Marshall JC, Smart J, Horton RM (2010). The design and validation of EQUIP: an instrument to assess inquiry-based instruction. Int J Sci Math Educ 8, 299-321.
  • Marshall JC, Smart J, Lotter C, Sirbu C (2011). Comparative analysis of two inquiry observational protocols: striving to better understand the quality of teacher-facilitated inquiry-based instruction. Sch Sci Math 111, 306-315.
  • McColskey W, Leary MR (1985). Differential effects of norm-referenced and self-referenced feedback on performance expectancies, attributions, and motivation. Contemp Educ Psychol 10, 275-284.
  • McKeachie WJ (1990). Research on college teaching: the historical background. J Educ Psychol 82, 189-200.
  • McShannon J, Hynes P (2005). Student achievement and retention: can professional development programs help faculty GRASP it? J Faculty Dev 20, 87-94.
  • Menefee R (1983). The evaluation of science teaching. J Coll Sci Teach 13, 138.
  • Mervis J (2013). Transformation is possible if a university really cares. Science 340, 292-296.
  • Miller JW, Martineau LP, Clark RC (2000). Technology infusion and higher education: changing teaching and learning. Innov High Educ 24, 227-241.
  • Murray HG (1983). Low-inference classroom teaching behaviors and student ratings of college teaching effectiveness. J Educ Psychol 75, 138-149.
  • National Research Council (NRC) (2003). Improving Undergraduate Instruction in Science, Technology, Engineering, and Mathematics: Report of a Workshop, Washington, DC: National Academies Press.
  • NRC (2012). Discipline-Based Education Research: Understanding and Improving Learning in Undergraduate Science and Engineering, Washington, DC: National Academies Press.
  • Neal JE (1988). Faculty Evaluation: Its Purposes and Effectiveness, ERIC Digest, Washington, DC: ERIC Clearinghouse on Higher Education.
  • Nielsen N (2011). Promising Practices in Undergraduate Science, Technology, Engineering, and Mathematics Education: Summary of Two Workshops, Washington, DC: National Academies Press.
  • O’Reilly MF (1992). Teaching systematic instruction competencies to special education student teachers: an applied behavioral supervision model. J Assoc Pers Sev Handicaps 17 (2), 104-111.
  • O’Reilly M, Renzaglia A (1994). A systematic approach to curriculum selection and supervision strategies: a preservice practicum supervision model. Teach Educ Special Educ 17, 170-180.
  • Overall JU, Marsh HW (1979). Midterm feedback from students: its relationship to instructional improvement and students’ cognitive and affective outcomes. J Educ Psychol 71, 856-865.
  • Podsakoff PM, Farh JL (1989). Effects of feedback sign and credibility on goal setting and task performance. Organ Behav Hum Dec 44, 45-67.
  • Pukkila PJ (2004). Introducing student inquiry in large introductory genetics classes. Genetics 166, 11-18.
  • Quinlan K, Bernstein DJ (1996). Special issue on peer review of teaching. Innov High Educ 20 (4).
  • Ramsden P (1991). A performance indicator of teaching quality in higher education: the Course Experience Questionnaire. Stud High Educ 16, 129-150.
  • Rezler AG, Anderson AS (1971). Focused and unfocused feedback and self-perception. J Educ Res 65, 61.
  • Richardson JTE (2005). Instruments for obtaining student feedback: a review of the literature. Assess Eval High Educ 30, 387-415.
  • Roney K, Ulerick SL (2013). A roadmap to engaging part-time faculty in high-impact practices. Peer Rev 15 (3), 24.
  • Ryan JJ (1980). Student evaluation: the faculty responds. Res High Educ 12, 317-333.
  • Safavi SA, Bakar KA, Tarmizi RA, Alwi NH (2013). Faculty perception of improvements to instructional practices in response to student ratings. Educ Assess Eval Accountability 25, 143-153.
  • Sawada D, Piburn MD, Judson E, Turley J, Falconer K, Benford R, Bloom I (2002). Measuring reform practices in science and mathematics classrooms: the Reformed Teaching Observation Protocol. Sch Sci Math 102, 245-253.
  • Scheeler MC, Ruhl KL, McAfee JK (2004). Providing performance feedback to teachers: a review. Teach Educ Special Educ 27, 396-407.
  • Schneider G (2013). Student evaluations, grade inflation and pluralistic teaching: moving from customer satisfaction to student learning and critical thinking. Forum Soc Econ 42, 122-135.
  • Seidel SB, Tanner KD (2013). “What if students revolt?”—considering student resistance: origins, options, and opportunities for investigation. CBE Life Sci Educ 12, 586-595.
  • Seldin P (1993). The use and abuse of student ratings of professors. Chronicle of Higher Education 39 (46), A40.
  • Seldin P (1999). Changing Practices in Evaluating Teaching: A Practical Guide to Improved Faculty Performance and Promotion/Tenure Decisions, Boston, MA: Anker.
  • Showers B (1984). Peer Coaching: A Strategy for Facilitating Transfer of Training. A CEPM R&D Report, Eugene: Center for Educational Policy and Management, Oregon University.
  • Silverthorn DU, Thorn PM, Svinicki MD (2006). It's difficult to change the way we teach: lessons from the integrative themes in physiology curriculum module project. Adv Physiol Educ 30, 204-214.
  • Singer SR, Nielsen NR, Schweingruber HA (2012). Discipline-Based Education Research: Understanding and Improving Learning in Undergraduate Science and Engineering, Washington, DC: National Academies Press.
  • Skinner ME, Welch FC (1996). Peer coaching for better teaching. Coll Teach 44, 153-156.
  • Smith MK, Jones FHM, Gilbert SL, Wieman CE (2013). The classroom observation protocol for undergraduate STEM (COPUS): a new instrument to characterize university STEM classroom practices. CBE Life Sci Educ 12, 618-627.
  • Spencer PA, Flyr ML (1992). The Formal Evaluation as an Impetus to Classroom Change: Myth or Reality?, Riverside: University of California Press.
  • Stes A, Min-Leliveld M, Gijbels D, Van Petegem P (2010). The impact of instructional development in higher education: the state-of-the-art of the research. Educ Res Rev 5, 25-49.
  • Stolovitch HD, Keeps EJ, Finnegan G (2000). Book review: Handbook of Human Performance Technology: Improving Individual and Organizational Performance Worldwide (second edition). Perf Improv 39 (5), 38-44.
  • Sunal DW, Hodges J, Sunal CS, Whitaker KW, Freeman LM, Edwards L, Johnston RA, Odell M (2001). Teaching science in higher education: faculty professional development and barriers to change. Sch Sci Math 101, 246-257.
  • Swarts T, Schelpat T, Couch B, Wood B (2013). Defining observable behaviors associated with scientific teaching. In: Conference Proceedings, ed. MP Wenderoth, Minneapolis, MN: Society for the Advancement of Biology Education Research.
  • Sweeney JM, Grasha AF (1979). Improving teaching through faculty development triads. Educ Technol 19 (2), 54-57.
  • Tanner K, Allen D (2006). Approaches to biology teaching and learning: on integrating pedagogical training into the graduate experiences of future science faculty. Cell Biol Educ 5, 1-6.
  • Tiberius RG (1989). The influence of student evaluative feedback on the improvement of clinical teaching. J High Educ 60, 665-681.
  • Udovic D, Morris D, Dickman A, Postlethwait J, Wetherwax P (2002). Workshop biology: demonstrating the effectiveness of active learning in an introductory biology course. BioScience 52, 272-281.
  • Utell J (2013). What the Food Network can teach us about feedback. University of Venus: GenX Women in Higher Ed Writing across the Globe (blog), Inside Higher Ed, January 13, 2013, www.insidehighered.com/blogs/university-venus/what-food-network-can-teach-us-about-feedback (accessed 4 July 2013).
  • VandeWalle D, Ganesan S, Challagalla GN, Brown SP (2000). An integrated model of feedback-seeking behavior: disposition, context, and cognition. J Appl Psychol 85, 996-1003.
  • Vasta R, Sarmiento RF (1979). Liberal grading improves evaluations but not performance. J Educ Psychol 71, 207-211.
  • Walczyk JJ, Ramsey LL (2003). Use of learner-centered instruction in college science and mathematics classrooms. J Res Sci Teach 40, 566-584.
  • Walker JD, Cotner SH, Baepler PM, Decker MD (2008). A delicate balance: integrating active learning into a large lecture course. CBE Life Sci Educ 7, 361-367.
  • Weimer M (2002). Learner-Centered Teaching, San Francisco: Jossey-Bass.
  • Weimer M, Lenze LF (1994). Instructional interventions: a review of the literature on efforts to improve instruction. In: Teaching and Learning in the College Classroom, ed. KFMB Paulsen, Needham Heights, MA: Ginn.
  • Welch WW, Klopfer LE, Aikenhead GS, Robinson JT (1981). The role of inquiry in science education: analysis and recommendations. Sci Educ 65, 33-50.
  • Wergin JF, Mason EJ, Munson PJ (1976). The practice of faculty development: an experience-derived model. J High Educ 47, 289-308.
  • White J, Pinnegar S, Esplin P (2010). When learning and change collide: examining student claims to have “learned nothing.” J Gen Educ 59, 124-140.
  • Wigton RS, Patil KD, Hoellerich VL (1986). The effect of feedback in learning clinical diagnosis. J Med Educ 61, 816-822.
  • Wilson RC (1986). Improving faculty teaching: effective use of student evaluations and consultants. J High Educ 57, 196-211.
  • Winter D, Lemons P, Bookman J, Hoese W (2001). Novice instructors and student-centered instruction: identifying and addressing obstacles to learning in the college science laboratory. J Scholarsh Teach Learn 2 (1), 14-42.
  • Wong BYL (1985). Self-questioning instructional research: a review. Rev Educ Res 55, 227-268.
  • Yarnall L, Toyama Y, Gong B, Ayers C, Ostrander J (2007). Adapting scenario-based curriculum materials to community college technical courses. Community College J 31, 583-601.
  • Zoller U (1992). Faculty teaching performance evaluation in higher science education: issues and implications (a “cross-cultural” case study). Sci Educ 76, 673-684.