
Evidence-Based Teaching Guides

Peer Instruction

    Published Online: https://doi.org/10.1187/cbe.18-02-0025

    Abstract

    Peer instruction, a form of active learning, is generally defined as an opportunity for peers to discuss ideas or to share answers to questions in an in-class environment, where they also have opportunities for further interactions with their instructor. When implementing peer instruction, instructors have many choices to make about group design, assignment format, and grading, among others. Ideally, these choices can be informed by research about the impact of these components of peer instruction on student learning. This essay describes an online, evidence-based teaching guide published by CBE—Life Sciences Education at http://lse.ascb.org/evidence-based-teaching-guides/peer-instruction. The guide provides condensed summaries of key research findings organized by teaching choices, summaries of and links to research articles and other resources, and actionable advice in the form of a checklist for instructors. In addition to describing key features of the guide, this essay also identifies areas in which further empirical studies are warranted.

    INTRODUCTION

    Peer instruction is a well-researched active-learning technique that has been widely adopted in college science classes. In peer instruction, the instructor poses a question with discrete options and gives students the chance to consider and record their answers individually, often by voting using clickers. Students then discuss their answers with neighbors, explaining their reasoning, before being given a chance to vote again. Finally, the instructor discusses the answer to the question, often soliciting input from the class. While instructors vary the exact implementation of this process—sometimes eliminating the individual voting process, sometimes using colored cards or a show of hands instead of clickers—the general process is an adaptation of the think–pair–share technique (Crouch and Mazur, 2001).

    Peer instruction can improve students’ conceptual understanding and problem-solving skills, an effect that has been observed in multiple disciplines, in courses at different levels, and with different instructors (for a review, see Vickrey et al., 2015). Student response to peer instruction is generally positive; students report that the technique helps them learn course material and that the immediate feedback it provides is valuable.

    Peer instruction’s value as a teaching approach is unsurprising, as it incorporates many elements known to promote learning. It is a form of cooperative learning, which has been shown to increase student achievement, persistence, and attitudes toward science (e.g., Johnson and Johnson, 2009). The peer instruction cycle provides opportunities for all the elements that social interdependence theory identifies as necessary for cooperative learning: individual accountability; positive interdependence, wherein individual success is enhanced by the success of other group members; promotive interaction, or actions by individuals to help other group members’ efforts; and group processing (Johnson and Johnson, 2009). It explicitly incorporates opportunities for students to explain their reasoning and engage in argumentation, practices that help students integrate new information with existing knowledge and revise their mental models (e.g., Chi et al., 1994). In addition, as with many types of informal cooperative learning, peer instruction provides opportunities for formative assessment with immediate feedback and thus incorporates opportunities for students to be metacognitive, monitoring their understanding and reflecting on misunderstanding (McDonnell and Mullally, 2016).

    In implementing peer instruction, instructors have many choices to make that can impact students’ experience. In this article, we describe an evidence-based teaching guide that condenses and summarizes research findings (including many articles from CBE—Life Sciences Education) and provides actionable advice based on them. It can be accessed at http://lse.ascb.org/evidence-based-teaching-guides/peer-instruction. The guide has several features intended to help instructors: a landing page that indicates starting points for instructors (Figure 1), syntheses of observations from the literature, summaries of and links to selected papers (Figure 2), and an instructor checklist that details recommendations and points to consider. The guide is meant to aid instructors as they implement peer instruction and may also benefit researchers new to this area. Some of the questions that serve to organize the guide are highlighted below.


    FIGURE 1. Screenshot of the landing page of the guide, which provides readers with an overview of choice points.


    FIGURE 2. Screenshot showing a summary of research findings and representative article summaries for one element of peer instruction.

    WHAT TYPES OF QUESTIONS SHOULD BE USED?

    There are a few clear recommendations about the types of questions that are particularly beneficial in peer instruction. First, questions should be challenging enough to provoke interest and discussion, and the greatest gains are seen with the most difficult questions (Knight et al., 2013; Zingaro and Porter, 2014). Importantly, question difficulty is not necessarily defined by the level of cognitive activity a student engages in to answer the question (e.g., Bloom’s application vs. evaluation levels). Questions that require lower-order cognitive skills can promote peer discussion that is as robust as that prompted by questions requiring higher-order skills, and discussions of both types can lead to conceptual change (Knight et al., 2013; Lemons and Lemons, 2013). Further, questions that uncover misconceptions can have particular benefits (Modell et al., 2005), in that they expose students to a commonly held incorrect idea and then give them the opportunity to discover why that idea is incorrect.

    Several sources report on different types of questions that instructors use in peer instruction but do not explicitly investigate the benefits or limitations of the different types (e.g., Turpen and Finkelstein, 2009, and others within the Question Structure and Purpose section of the guide). For example, conceptual questions can be based on applications, case studies, or procedures; alternatively, questions may be logistical, recall-based, or algorithmic rather than conceptual. Question format can also vary: questions are often one-best-answer multiple choice, but formats such as multiple true–false, free response, and questions that prompt drawing can also provide benefits. Thus, several areas remain ripe for investigation regarding the questions used in peer instruction:

    • Are there question types or formats that are particularly effective at helping students meet particular types of outcomes? For example, do questions that ask students to illustrate their ideas, or constructively build theoretical models, impact student learning?

    • What combinations of question cognitive level (e.g., Bloom’s level) and difficulty help promote self-efficacy, conceptual change, and conceptual understanding? Do different “levels” of questions promote some of these outcomes over others?

    WHAT INSTRUCTIONAL PRACTICES PROMOTE PRODUCTIVE PEER INTERACTIONS?

    Incentives for students to participate in peer instruction increase student engagement. Low-stakes grading incentives, in which correct and incorrect answers receive equal or very similar credit, result in more robust exchanges of reasoning and more equitable contributions from all group members to the discussion, whereas high-stakes grading incentives tend to lead to dominance of the discussion by a single group member (e.g., James, 2006, and others within the Accountability section of the guide). Social incentives can also impact peer discussion. For example, randomly calling on groups to explain their reasoning for an answer, rather than asking for volunteers, increases exchanges of reasoning during peer discussion (Knight et al., 2016).

    Instructor cues that encourage students to explain their reasoning influence both student behavior and the classroom norms that students perceive. Thus, these cues can have a large impact on the nature of peer discussion (Turpen and Finkelstein, 2010, and others in the Instructional Cues section of the guide). Specifically, instructor language that encourages students to explain their reasoning can lead to higher-quality peer discussion and greater use of scientific argumentation moves (Knight et al., 2013). Further, instructor-led discussion of the answer after peer discussion provides clear benefits, particularly for weaker students and on more difficult questions (Smith et al., 2009, 2011; Zingaro and Porter, 2014).

    One common practice may have unintended negative consequences. Traditional implementation of peer instruction involves displaying the histogram of student responses after students answer individually but before peer discussion. Several lines of work suggest that this practice may bias students toward the most common answer and reduce the value of peer discussion (Perez et al., 2010). Thus, instructors may choose to prompt peer discussion focused on reasoning before showing the response histogram, displaying the histogram only as a summary of student choices after students have shared their reasoning.

    More generally, instructors have options in how they interact with small groups during peer discussion and during whole-class discussions (Turpen and Finkelstein, 2009, and others in the Instructional Cues section of the guide). For example, an instructor may sometimes stay within earshot of students but not engage with them during peer discussion to promote autonomy, and at other times may answer student questions or discuss possibilities with small groups. During whole-class discussion, an instructor may explain the solution or, alternatively, may encourage students to jointly explain and evaluate it. By varying behavior during peer instruction, instructors can provide students with opportunities to engage in a broader range of activities. Within this general recommendation, there are several unanswered questions:

    • One of the steps that is most commonly omitted during peer instruction is the individual response (Turpen and Finkelstein, 2009). Students have been reported to prefer the inclusion of individual thinking time, and it appears to increase discussion time (Nicol and Boyle, 2003; Nielsen et al., 2014). What is the role of this step in promoting productive peer discussion? Can objective measures of student learning be applied to determine its efficacy (Vickrey et al., 2015)?

    • Several studies indicate that students prefer to use personal response devices during peer instruction but that their use does not appear to impact students’ learning when compared with other reporting methods (such as a show of hands or colored cards). The role of anonymity and its potential relationship to stereotype threat has not been investigated, however. Can peer instruction induce stereotype threat, and if so, can the effect be mitigated by an anonymous reporting device or by other instructor interventions?

    • Further, stereotype threat is most relevant when people are working at the edge of their ability (O’Brien and Crandall, 2003), and it therefore seems more likely to be a factor for more difficult peer instruction questions. While active-learning approaches have generally been shown to be particularly effective for students from underrepresented groups (e.g., Eddy and Hogan, 2014), investigating the nuanced effects within particular groups of students can help instructors make effective choices (Eddy et al., 2015). Can personal response devices, which afford anonymity, have particular value for more difficult questions?

    WHAT CHALLENGES ARE ASSOCIATED WITH PEER INSTRUCTION?

    Finally, it is important to note that there can be challenges to implementing peer instruction. As noted earlier, instructors implement peer instruction differently, leading to classroom norms that can either enhance or detract from student learning and affect student perceptions. Further, students have many different kinds of discussions during peer instruction, not all of them focused on the topic and not all centered on the concepts instructors intend. By its very nature, peer instruction exposes students to others’ ideas, which can lead to better understanding but also, potentially, to shared misconceptions, an effect that may be amplified among students who feel less confident in the classroom. Thus, the peer discussion portion of each clicker-question cycle is truly the key to successful peer instruction. Perhaps for these reasons, peer instruction does not uniformly improve students’ course grades. However, it clearly improves students’ use of reasoning and argumentation skills (Knight et al., 2013, 2016), which may contribute to student learning in nonobvious ways. Avoiding the pitfalls discussed in this article and maximizing the benefits of peer instruction require that instructors carefully construct challenging questions and intentionally promote classroom norms that value reasoning and argumentation.

    ACKNOWLEDGMENTS

    We acknowledge and thank Adele Wolfson and Kristy Wilson for their thoughtful review. We also thank William Pierce and Thea Clarke for their efforts in producing the Evidence-Based Teaching Guide website.

    REFERENCES

  • Chi, M. T. H., de Leeuw, N., Chiu, M-H., & Lavancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18, 439–477.
  • Crouch, C. H., & Mazur, E. (2001). Peer instruction: Ten years of experience and results. American Journal of Physics, 69, 970. https://doi.org/10.1119/1.1374249
  • Eddy, S. L., Brownell, S. E., Thummaphan, P., Lan, M-C., & Wenderoth, M. P. (2015). Caution, student experience may vary: Social identities impact a student’s experience in peer discussions. CBE—Life Sciences Education, 14, ar45.
  • Eddy, S. L., & Hogan, K. A. (2014). Getting under the hood: How and for whom does increasing course structure work? CBE—Life Sciences Education, 13, 453–468.
  • James, M. C. (2006). The effect of grading incentive on student discourse in peer instruction. American Journal of Physics, 74, 689.
  • Johnson, D. W., & Johnson, R. T. (2009). An educational psychology success story: Social interdependence theory and cooperative learning. Educational Researcher, 38, 365–379.
  • Knight, J. K., Wise, S. B., & Sieke, S. (2016). Group random call can positively affect student in-class clicker discussions. CBE—Life Sciences Education, 15, ar56.
  • Knight, J. K., Wise, S. B., & Southard, K. M. (2013). Understanding clicker discussions: Student reasoning and the impact of instructional cues. CBE—Life Sciences Education, 12, 645–654.
  • Lemons, P. P., & Lemons, J. D. (2013). Questions for assessing higher-order cognitive skills: It’s not just Bloom’s. CBE—Life Sciences Education, 12, 47–58.
  • McDonnell, L., & Mullally, M. (2016). Research and teaching: Teaching students how to check their work while solving problems in genetics. Journal of College Science Teaching, 46, 68–75.
  • Modell, H., Michael, J., & Wenderoth, M. P. (2005). Helping the learner to learn: The role of uncovering misconceptions. The American Biology Teacher, 67, 20.
  • Nicol, D. J., & Boyle, J. T. (2003). Peer instruction versus class-wide discussion in large classes: A comparison of two interaction methods in the wired classroom. Studies in Higher Education, 28, 458–473.
  • Nielsen, K. L., Hansen, G., & Stav, J. B. (2014). How the initial thinking period affects student argumentation during peer instruction: Students’ experiences versus observations. Studies in Higher Education, 3, 1–15.
  • O’Brien, L. T., & Crandall, C. S. (2003). Stereotype threat and arousal: Effects on women’s math performance. Personality and Social Psychology Bulletin, 29, 782–789.
  • Perez, K. E., Strauss, E. A., Downey, N., Galbraith, A., Jeanne, R., & Cooper, S. (2010). Does displaying the class results affect student discussion during peer instruction? CBE—Life Sciences Education, 9, 133–140.
  • Smith, M. K., Wood, W. B., Adams, W. K., Wieman, C., Knight, J. K., Guild, N., & Su, T. T. (2009). Why peer discussion improves student performance on in-class concept questions. Science, 323, 122–124.
  • Smith, M. K., Wood, W. B., Krauter, K., & Knight, J. K. (2011). Combining peer discussion with instructor explanation increases student learning from in-class concept questions. CBE—Life Sciences Education, 10, 55.
  • Turpen, C., & Finkelstein, N. D. (2009). Not all interactive engagement is the same: Variations in physics professors’ implementation of peer instruction. Physical Review Special Topics—Physics Education Research, 5, 020101.
  • Turpen, C., & Finkelstein, N. D. (2010). The construction of different classroom norms during peer instruction: Students perceive differences. Physical Review Special Topics—Physics Education Research, 6, 020123.
  • Vickrey, T., Rosploch, K., Rahmanian, R., Pilarz, M., & Stains, M. (2015). Research-based implementation of peer instruction: A literature review. CBE—Life Sciences Education, 14, es3.
  • Zingaro, D., & Porter, L. (2014). Peer instruction in computing: The value of instructor intervention. Computers & Education, 71, 87–96.