
Clickers in the Large Classroom: Current Research and Best-Practice Tips

    Published Online: https://doi.org/10.1187/cbe.06-12-0205

    Abstract

    Note from the Editor

    Use of the audience response devices known as “clickers” is growing, particularly in large science courses at the university level, as evidence for the pedagogical value of this technology continues to accumulate, and competition between manufacturers drives technical improvements, increasing user-friendliness and decreasing prices. For those who have not yet tried teaching with clickers and may have heard unsettling stories about technical problems with earlier models, the decision to use them and the choice of an appropriate brand may be difficult. Moreover, like any classroom technology, clickers will not automatically improve teaching or enhance student learning. Clickers can be detrimental if poorly used, but highly beneficial if good practices are followed, as documented in a growing body of educational literature.

    In this Special Feature, we present two reviews that should assist instructors and teachers at all levels in taking the step toward clicker use and choosing an appropriate model. In the first, Barber and Njus compare the features, advantages, and disadvantages of the six leading brands of radio-frequency clicker systems. In the second, Caldwell reviews the pedagogical literature on clickers and summarizes some of the best practices for clicker use that have emerged from educational research. In a related article elsewhere in this issue, Preszler et al. present the results of a study showing that clicker use can improve student learning and attitudes in both introductory and more advanced university biology courses.

    Audience response systems (ARS) or clickers, as they are commonly called, offer a management tool for engaging students in the large classroom. Basic elements of the technology are discussed. These systems have been used in a variety of fields and at all levels of education. Typical goals of ARS questions are discussed, as well as methods of compensating for the reduction in lecture time that typically results from their use. Examples of ARS use occur throughout the literature and often detail positive attitudes from both students and instructors, although exceptions do exist. When used in classes, ARS clickers typically have either a benign or positive effect on student performance on exams, depending on the method and extent of their use, and create a more positive and active atmosphere in the large classroom. These systems are especially valuable as a means of introducing and monitoring peer learning methods in the large lecture classroom. So that the reader may use clickers effectively in his or her own classroom, a set of guidelines for writing good questions and a list of best-practice tips have been culled from the literature and experienced users.

    INTRODUCTION

    Many instructors at both large and small educational institutions have begun to use classroom technology that allows students to respond and interact via small, hand-held, remote keypads. This technology, which we will refer to as an audience response system (AR system or ARS), resembles the “Ask the Audience” portion of the game show “Who Wants to Be a Millionaire,” and enables instructors to instantaneously collect student responses to a posted question, generally multiple choice. The answers are immediately tallied and displayed on a classroom projection screen where both students and instructor can see and discuss them.

    Uses of this technology vary widely and include spicing up standard lecture classes with periodic breaks, assessing student opinions or understanding related to lecture, increasing the degree of interactivity in large classrooms, conducting experiments on human responses (e.g., in psychology courses), and managing cooperative learning activities. Students and instructors who have used AR systems are generally positive and often enthusiastic about their effects on the classroom, and many researchers and educators assert their great potential for improving student learning (Beatty et al., 2006).

    The literature on applications and classroom outcomes of ARS use includes not only descriptive articles but also quantitative educational research studies with varying degrees of rigor (for reviews see Roschelle et al., 2004a; McDermott and Redish, 1999; Duncan, 2005; Simpson and Oliver, 2006). This article aims to survey some of that literature and research as it applies to large-enrollment classes, to offer some best-practice tips culled from both the literature and the experience of users at West Virginia University (WVU), and to discuss the successes, outcomes, and challenges resulting from this technology. Some basic motivations for using an ARS and the attitudes of both students and faculty who have used this technology are also summarized.

    OVERVIEW

    What is a Clicker? Description of Hardware and Software

    The handheld devices used in an ARS—commonly called “clickers” or “key-pads” in the United States and “handsets” or “zappers” in the United Kingdom (d'Inverno et al., 2003; Simpson and Oliver, 2006)—are small transmitters about the size of a television remote control. Students use their clickers to transmit their answers by pressing the clicker buttons. Although one early example of a clicker had a single response button (Poulis et al., 1998), modern clickers usually have a 10-digit numeric keypad and often some accessory buttons including a power switch, a send button, or function keys that permit text entry (Barber and Njus, 2007).

    Modern clicker units are “two-way,” meaning that the clicker not only sends a signal but also indicates whether it was received. Although early clickers were often connected to the rest of the system by wiring, modern systems are wireless and use either infrared (IR) or, more recently, radiofrequency (RF) signals. The RF systems are rapidly becoming the current standard, because they send stronger signals, require only a single receiver, do not experience interference from classroom lights or other IR-emitting equipment, and do not require a direct line of sight between the student and the receiver. In all AR systems, each clicker unit has a unique signal so that the answer from each individual student can be identified and recorded. When polling is complete, answers from the entire class are displayed on the projection screen, usually in the form of a histogram, although some systems offer more sophisticated options (Roschelle et al., 2004b). The feature of an ARS that allows this incoming mass of student answers to be rapidly collected, tabulated, and displayed is the coupling of a proprietary receiver unit with an ordinary classroom computer and projection system.
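
    To make this data flow concrete, the following minimal Python sketch shows the kind of bookkeeping such software performs when tallying a single question. It is illustrative only: the function names, clicker IDs, and text-based histogram are assumptions made for this sketch, not any vendor's actual API.

        # Minimal sketch: tally one question's votes by unique clicker ID
        # and display the class-wide distribution as a text histogram.
        from collections import Counter

        def tally(responses):
            """responses maps clicker_id -> chosen answer ('A'-'E').
            A later vote from the same clicker overwrites the earlier one."""
            return Counter(responses.values())

        def show_histogram(counts, choices="ABCDE"):
            total = sum(counts.values()) or 1
            for choice in choices:
                n = counts.get(choice, 0)
                bar = "#" * round(40 * n / total)
                print(f"{choice}: {bar} {100 * n / total:.0f}%")

        # Three students vote; clicker 102 revises its answer, which is
        # safe because two-way clickers confirm that each vote was received.
        votes = {"101": "A", "102": "C", "103": "B"}
        votes["102"] = "B"
        show_histogram(tally(votes))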

    Such systems of clickers, receiver, and software are given various names in the educational and product literature, including classroom response system, personal response system, classroom communication system, group response system, audience response system, electronic voting system, audience paced feedback system, and classroom network (Poulis et al., 1998; Draper et al., 2002; d'Inverno et al., 2003; Roschelle et al., 2004a, 2004b; Simpson and Oliver, 2006).

    Although this conglomeration of technological hardware may sound complex, the instructor typically can ignore all but the software interface during class. This software is used to create and administer questions, a task usually not much more complicated than creating or displaying PowerPoint (Microsoft, Redmond, WA) slides. Most systems are said to be easy to use with only an “intermediate” level of computer skill, thereby freeing the instructor to consider pedagogy rather than technical operations (Cue, 1998; Brewer, 2004; Parsons, 2005). Most ARS software not only controls display settings and data collection during class but also helps the instructor format questions (usually as PowerPoint slides) and grade student responses. Grading tools in the software typically allow the instructor to specify which answer or answers will be treated as correct, and permit different point values to be assigned for correct versus incorrect answers. Typical ARS software can export or even upload student scores to classroom management systems such as Blackboard and WebCT (Washington, DC). Six commercially available RF systems are described, and their advantages and disadvantages discussed, in the accompanying article by Barber and Njus (2007).
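
    As a rough illustration of the grading and export steps described above, the following Python sketch assigns full credit for correct answers and partial credit for any attempt, then writes a score file. The point values, column names, and CSV format are assumptions for illustration; real ARS packages define their own export formats for systems such as Blackboard or WebCT.

        # Illustrative sketch of grading clicker answers and exporting scores.
        import csv

        def grade_question(answer, correct, pts_correct=2, pts_attempt=1):
            """Full credit for a correct answer, partial credit for any attempt."""
            if answer is None:          # no response recorded
                return 0
            return pts_correct if answer == correct else pts_attempt

        def export_scores(scores, path="clicker_scores.csv"):
            """scores maps student_id -> total points earned."""
            with open(path, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(["student_id", "clicker_points"])
                for student, points in sorted(scores.items()):
                    writer.writerow([student, points])

        answers = {"s1": "B", "s2": "D", "s3": None}   # None = no response
        scores = {s: grade_question(a, correct="B") for s, a in answers.items()}
        export_scores(scores)   # the file can then be uploaded to the course system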

    Who Uses Clickers? (Typical Course and Student Characteristics)

    Although this article focuses mainly on the use of AR systems in large lecture courses, instructors have reported using clickers in classes ranging from 15 students (e.g., Draper, 2002) to more than 200 students (e.g., Cue, 1998; Draper and Brown, 2002; Wit, 2003). Although much of the early research and development of clickers was done by physics instructors, a creative or willing instructor can apply the technology to virtually any subject. ARS technology has been incorporated into courses in nursing (Halloran, 1995), communication (Jackson and Trees, 2003), engineering (van Dijk et al., 2001; d'Inverno et al., 2003), computer science (Draper, 2002; Draper and Brown, 2002; d'Inverno et al., 2003; Roschelle et al., 2004a), mathematics (Mays, personal communication; Draper and Brown, 2002; Wit, 2003; Roschelle et al., 2004a; Caldwell et al., 2006), chemistry (Roschelle et al., 2004a; Bunce et al., 2006), philosophy (Draper and Brown, 2002), biology (McGraw, personal communication; Draper, 2002; Draper and Brown, 2002; Brewer, 2004; Roschelle et al., 2004a; Wood, 2004; Hatch and Jensen, 2005; Knight and Wood, 2005), physics (Cue, 1998; Poulis et al., 1998; Dufresne et al., 2000; Burnstein and Lederman, 2001; Lindenfeld, 2001; Hake, 2002; Roschelle et al., 2004a; Pollock, 2005, 2006; Beatty et al., 2006), premedical education (Roschelle et al., 2004a), medical, veterinary, and dental education (Draper, 2002; Draper and Brown, 2002), business (Cue, 1998; Roschelle et al., 2004a; Beekes, 2006), economics (Simpson and Oliver, 2006), and psychology (Draper, 2002; Draper and Brown, 2002).

    ARS technology has been successfully used in varied course formats, ranging from optional tutorials (d'Inverno et al., 2003) to formal standard lectures and cooperative learning through peer instruction (Nichol and Boyle, 2003). With a skilled instructor, an AR system can be a useful instructional tool for students of all ages and levels of preparation, from freshmen in large, introductory courses for nonmajors (Caldwell, unpublished observations), to juniors and seniors in required, high-level majors courses (Halloran, 1995; Knight and Wood, 2005) or even graduate students (Beekes, 2006). AR systems have also been used in elementary (Johnson and McLeod, 2004) and K–12 settings (Roschelle et al., 2004a).

    Typical Characteristics of Questions

    Typically, ARS questions are written before class as a part of preparing lecture notes or lesson plans. Inserting questions is usually no more difficult than creating a new slide in PowerPoint. Instructors can also add questions “on-the-fly” during class, in response to a sudden inspiration, a concern about student understanding, or a student question that could be addressed to the class as a whole.

    Modes of implementation are as varied as the instructors who use them, but typically between two and five questions are given per 50 minutes of class instruction (e.g., Burnstein and Lederman, 2001; Elliot, 2003; Jackson and Trees, 2003; Beatty, 2004; Caldwell et al., 2006).

    There are many types of questions, but some common features have been noted (e.g., Poulis et al., 1998; Draper et al., 2002; Simpson and Oliver, 2006). Among the common uses of clicker questions are the following:

    1. to increase or manage interaction

    2. to assess student preparation and ensure accountability

    3. to find out more about students, by:

      • surveying students' thoughts about the pace, effectiveness, style, or topic of lecture

      • polling student opinions or attitudes

      • probing students' pre-existing level of understanding

      • asking how students feel about clickers and/or active learning

    4. for formative (i.e., diagnostic) assessment, through questions that:

      • assess students' understanding of material in lecture

      • reveal student misunderstandings of lecture (e.g., Wood, 2004)

      • determine future direction of lecture, including the level of detail needed

      • test students' understanding of previous lecture notes

      • assess students' ability to apply lecture material to a new situation

      • determine whether students are ready to continue after working a problem (Poulis et al., 1998)

      • allow students to assess their own level of understanding at the end of a class (Halloran, 1995)

    5. for quizzes or tests (Draper, 2002), although reports of using clickers for summative, high-stakes testing are relatively rare. Quiz questions typically check whether students are:

      • paying attention

      • taking good notes

      • preparing for class or labs

      • keeping up with homework

      • actively thinking

      • able to recall material from previous lectures

    6. to do practice problems, especially in math, chemistry, engineering, or physics courses

    7. to guide thinking, review, or teach

    8. to conduct experiments on or illustrate human responses (Draper et al., 2002; Simpson and Oliver, 2006)

    9. to make lecture fun.

    This list should in no way be considered limiting—ARS technology is a flexible tool limited only by the imagination of the instructor and the question format itself. As an example, some less common but innovative uses include:

    • using an ARS as a “clapometer” to continuously monitor in real time whether students are confused (Cutts et al., 2004)

    • using an ARS for “differentiated instruction” to track the level of understanding and progress in a small class with unevenly distributed abilities (Parsons, 2005)

    • using questions with multiple correct answers or only partially correct answers to prompt discussion (Burnstein and Lederman, 2001).

    Why Bother? Motivations for Clicker Use

    To paraphrase Stephen Draper, technology is only worth using in the classroom when it addresses a specific instructional deficit (Draper, 1998). Many instructors have adopted clicker technology to compensate for the passive, one-way communication inherent in lecturing and the difficulty students experience in maintaining sustained concentration. This is certainly a case where “simple” technology can be enough to “overcome crucial problems in the traditional delivery” (Draper, 1998). Some institutions have adopted clickers solely for this reason, in the hope of addressing high attrition rates in the sciences by making lecture classes less passive and impersonal (Burnstein and Lederman, 2001).

    Many of the courses that use clickers have abandoned lecture altogether or at least reduced it to a smaller component of class time (Draper et al., 2002; Cutts et al., 2004; Knight and Wood, 2005). These “interactive engagement” or “peer instruction” methods are quite powerful, but still fairly new to most instructors. The current discussion will focus primarily on motivations for using clickers within traditional lectures. Peer and interactive instruction methods will be discussed later in this article. Even when simply added to a traditional lecture, the “give-and-take atmosphere encouraged by use of clickers … makes the students more responsive in general, so that questions posed to the class as a whole during lecture are much more likely to elicit responses and discussion.” (Wood, 2004).

    By their nature, clickers increase participation by allowing all students to respond to all questions asked by the instructor. The idea behind clickers is not new—teachers have used interactive, instructive questioning to teach students since at least the time of Socrates. This style of interaction, however, becomes very difficult as class size increases. Students in large classes are often hesitant or unwilling to speak up because of fear of public mistakes or embarrassment, fear of peer disapproval, pre-existing expectations of passive behavior in a lecture course—on the part of both lecturer and students—or even uncertainty about acceptable behavior in a class that may be larger than one's own hometown. Instructors have tried countless creative methods to prompt student participation, from calling on student volunteers, to calling student names from a roll book, to designating a different set of “special volunteers” as participants each day (Wiedemeier, personal communication). These options maintain some participation but by their nature elicit it from only a fraction of the class. Such methods of sampling class opinion, unfortunately, are vulnerable to small-sample-size problems: a small but vocal minority can give the impression that the silent majority of the class understands (or misunderstands) a topic (Simpson and Oliver, 2006).

    Instructors can instead use other equally low-tech methods to ask the entire class a question and collect responses by “show-of-hands” votes, applause or other audible feedback, and prefabricated response cards that indicate a vote with various colors, shapes, or words (Heward et al., 1996). However, these low-tech methods, although less expensive, have several disadvantages. The lack of privacy during voting may prevent completely honest votes, time constraints may preclude accurate estimates, and (aside from the applause method) the overall trend of student responses is only truly apparent to the instructor.

    These shortcomings are directly addressed by ARS technology, which not only allows private votes, but also accurately tallies and displays them very quickly. A further benefit of ARS questioning is the permanent and individualized record of student votes that can be accessed after the class. These records can be used later for attendance records, student tutorials, lesson planning, or educational research.

    Clickers are useful in sustaining attention and breaking up lectures. It has been demonstrated that the most well-recalled portion of a lecture is the first five minutes (Burns, 1985), so using clickers to emphasize an important concept at the beginning of class may make good use of this phenomenon, as well as helping students to focus and settle down at the start of class (Elliot, 2003). Sometimes the latter half of lectures is lost; because the average human attention span is no more than 20 minutes, recall of information drops drastically after 15–20 minutes (Burns, 1985), and students themselves report that the longest time they can comfortably endure uninterrupted lecture is 20–30 minutes (MacManaway, 1970). Periodic breaks (e.g., for clicker questions) may help relieve student fatigue and “restart the attention clock” (Middendorf and Kalish, 1996). Some educators recommend using these breaks for relevant demonstrations or activities, including a “debriefing” at the end to ensure that students get the point (Middendorf and Kalish, 1996; Allen and Tanner, 2005). Clicker questions seem ideal for this debriefing, because they are active and can demonstrate to the lecturer whether that point has gotten across.

    Clickers can help reveal student misunderstandings, as long as questions are carefully designed (discussed below). This is often an exciting and helpful moment for lecturers who had assumed that their students were following along. The following comments from a biology instructor, who discovered that although 90% of his students recalled a rule of genetics only 48% were able to apply it correctly, are illustrative:

    “For me, this was a moment of revelation. … for the first time in over 20 years of lecturing I knew… that over half the class didn't ‘get it’…. Because I had already explained the phenomenon as clearly as I could, I simply asked the students to debate briefly with their neighbors and see who could convince whom about which answer was correct. The class erupted into animated conversation. After a few minutes, I asked for a revote, and now over 90% gave the correct answer…” (Wood, 2004).

    The literature abounds with such inspiring examples (e.g., d'Inverno et al., 2003). Often a few questions will be needed for students to practice and fully master a difficult new idea. Although this takes away from lecture time, it appears that this practice time is well spent. For example, during a set of practice questions a class improved from 16% correct to 100% correct after three questions. Furthermore, when asked a similar question one week later, 80% still answered correctly (Hatch and Jensen, 2005).

    These examples illustrate the powerful potential of clickers not just to reveal but to address student misconceptions as part of formative assessment. This means that rather than simply noting the responses of students, the instructor responds to them and may use them to modify the subsequent direction of the lecture. Not only does this imply that instructors use poor responses as a cue for further explanation, but also that if students demonstrate solid understanding of a topic, it is unnecessary to lecture further on it. This approach does entail some amount of thinking on one's feet and planning lectures for contingencies, but instructors who take this approach regularly offer assurance that it becomes easier with practice (Beatty, 2004).

    Clickers are a boon because they “increase the ease with which teachers can engage all students in frequent formative assessment” (Roschelle et al., 2004a). They can offer rapid feedback to the instructor both about the course and the quality of the teaching (Draper et al., 2002). To use formative assessment successfully as part of classroom teaching, it helps to write good clicker questions, including some that not all students will answer correctly (discussed further below). It is also advisable for the instructor to focus the attention of students on the reasoning involved, rather than the “rightness” of specific answers (Dufresne et al., 2000).

    Clickers tend to change the atmosphere of lectures (Roschelle et al., 2004a, 2004b). Although pressing buttons on a clicker itself does not seem very much like active engagement, instructors frequently report that students who use clickers become more visibly active participants as well, more likely to ask and answer questions (Elliot, 2003; Beekes, 2006). Instructors who use the systems maintain that students who commit to an answer—even if they just guess—are “emotionally” or “psychologically invested” in the question and pay better attention to the discussion that follows (Wit, 2003; Beatty, 2004). Students not only become more aware of the diversity of ideas and understanding within the classroom (Roschelle et al., 2004b), but also realize when they are not alone in their confusion (Knight and Wood, 2005). In general, students think clickers are fun, and their use tends to liven up a classroom. Instructors report less sleeping, more discussion, and improved alertness during class (Jackson and Trees, 2003). Increases in attendance have been repeatedly documented, particularly when performance on ARS questions is linked to grades (Burnstein and Lederman, 2001; Jackson and Trees, 2003; Wit, 2003; Caldwell, unpublished observations).

    Clickers offer an efficient way to hold all students accountable for preclass preparation. Students who were regularly quizzed on readings prepared more for class but did not seem to mind, so long as they earned something toward their final grades. This offers an instructor a way out of two common dilemmas: the need to “cover” the material in lecture leaves little time for more interactive teaching, and many students in a standard lecture course disregard reading assignments because they believe the important material will be covered in class. If clickers are used for brief quizzes on assigned readings or homework to encourage preparation, then class time can be spent in more productive ways than “coverage” (e.g., Knight and Wood, 2005).

    The most important motivation for using clickers, however, is their benefit to learning. Some educators have noted that the instructor feedback provided by clickers may in itself spur changes in teaching approach (d'Inverno et al., 2003). Depending on the method of implementation, typical classroom outcomes include increased student response and interaction—both with peers and with the instructor—improved student understanding and learning (even of complex material), improved achievement on exams, increased attendance, and increased instructor awareness of student problems (Johnson and McLeod, 2004; Roschelle et al., 2004a, 2004b; Knight and Wood, 2005). These outcomes will be discussed in more detail in the review of research that follows.

    LITERATURE REVIEW

    General Summary of the State of the Field

    A wealth of journal articles explores the uses, outcomes, and benefits of clicker use, and some good reviews exist (McDermott and Redish, 1999; Roschelle et al., 2004a; Duncan, 2005; Simpson and Oliver, 2006). Most reviews agree that “ample converging evidence” suggests that clickers generally improve student outcomes—such as exam scores, passing rates, comprehension, and learning—and that students like clickers. The reviews of the literature, however, also agree that much of the research so far is not systematic enough to permit scientific conclusions about what causes the benefits (Roschelle et al., 2004a, 2004b; Simpson and Oliver, 2006). It is possible that the alteration of teaching methods associated with clickers is responsible, rather than the clickers themselves. It is also possible that a “Hawthorne effect” (Mayo, 1977) is responsible: our student “test subjects” are treated differently when we use clickers, and this special treatment, rather than the clickers, causes the improvement. This explanation seems less likely when the systems have been used several times by an instructor and are thus no longer novel (Poulis et al., 1998), but a Hawthorne effect is difficult to rule out.

    A tentative explanation (Poulis et al., 1998) for positive effects of clickers on student achievement suggests several factors:

    • increased active participation of students during lecture

    • removal of the “house of cards effect,” in which students understand new material poorly because it is based on other poorly understood material

    • use of discussions and peer learning in many implementations.

    For clicker research to proceed rapidly in a variety of fields, good standardized tests that assess student understanding of concepts would be helpful to evaluate the effect of various instructional methods (Hake, 2002). Such exams do exist in physics, astronomy, and economics, but are only slowly becoming available in other fields (Anderson et al., 2002; Hake, 2002; Klymkowsky et al., 2003).

    Generally the use of clickers either improves or does not harm exam scores (Knight and Wood, 2005). There are so far no consistent factors in clicker-using courses that correlate with increased exam scores: the style of teaching varies, as does the presence or absence of peer-learning activities (Simpson and Oliver, 2006).

    The use of an AR system does increase the likelihood of active student engagement during class (van Dijk et al., 2001). Students reported that they were twice as likely to work on a problem presented during class if answers were submitted by clicker rather than by show of hands—and even more likely if credit was given for answering (Cutts et al., 2004). For instructors not comfortable with significant amounts of peer learning during class, a worthwhile compromise may be to combine an ARS with traditional lecture. Research has shown that the amount of content coverage and the level of interaction obtained when using an ARS in a lecture are intermediate between traditional lecture (high content, low interaction) and more intensive application of peer learning (reduced content, high interaction; van Dijk et al., 2001).

    Improvements in Attendance, Retention, and Sometimes Grades

    When linked to grades, and particularly if it becomes a daily feature of class, an ARS increases attendance (Cue, 1998; Jackson and Trees, 2003). Physics instructors report that when clicker scores accounted for 15% or more of the course grade, attendance levels rose to 80–90%, preparation for quizzes became more serious, and students were noticeably more alert during class (Burnstein and Lederman, 2001). Figure 1 shows that attendance can be increased if clicker points are worth just 10% of the course grade (Caldwell, unpublished observations). Other instructors, however, report that when clickers contribute 5% or less to the course grade, their effect on attendance remains negligible (Merovich, personal communication; Zelkowski, personal communication). This seems to be common sense: when students are held accountable, they are more likely to meet our expectations. Some instructors suggest that linking interactive instruction to grade incentives causes students to take it more seriously (Hake, 1998; Cutts et al., 2004).

    Figure 1. Increased attendance resulting from clicker use. Data compares different sections of a nonmajors introductory biology course taught by the author at the same time of day, one year apart, at WVU. Blue bars indicate attendance data collected using exam attendance and periodic quizzes on index cards during spring 2004. Yellow bars indicate attendance data collected with clickers one year later, during spring 2005. Without clickers, attendance fluctuated widely (A), with high attendance generally limited to exam days. With clickers, the attendance figures were much more uniform and significantly higher (by 20% or more) on nonexam days (B). Error bars, SD. This was not a precisely controlled study; during 2005 a different textbook was used. Each course enrolled a maximum of 250 students (Caldwell, unpublished observations).

    Clickers appear to reduce student attrition compared with lecture without clickers. Table 1 compares the attendance at the beginning and end of the semester in two courses conducted with and without clickers. With clickers, roughly 4% of students stopped attending by the final exam. This attrition rate was noticeably higher without the clickers, ranging from 8 to nearly 12%. A possible explanation is related to the regular attendance encouraged by daily clicker questions and attendance checks. Students were either better prepared for the exam and chose to attend or were more invested in the course after having spent so much time attending—regardless of preparation. In any case, it is interesting to note that attrition was dramatically reduced during fall semester, when freshmen are typically adjusting to college life.

    Table 1. Effect of clickers on attrition over the semester in freshman nonmajors biology courses at WVU

    Biology 101 (Fall)
    Lecture treatment    % Attendance, first exam    % Attendance, final exam    Percent decline
    Without ARS                    100                         88.1                   11.9
    With ARS                       100                         95.7                    4.3

    Biology 102 (Spring)
    Lecture treatment    % Attendance, first exam    % Attendance, final exam    Percent decline
    Without ARS                    100                         91.9                    8.1
    With ARS                       100                         95.9                    4.1

    Use of clickers decreased attrition to 4–8% by the final exam. Biology 101 probably showed a higher overall rate of attrition because it is offered in fall, when more students withdraw from college; Biology 102 is offered in spring, and serves students who have survived that first round of attrition. All courses enroll a maximum of 250 students (Caldwell, unpublished data).

    Figure 2 indicates similar positive outcomes from clicker use in a mathematics course (Mays, personal communication): Use of clickers increased the number of A's earned by 4.7%, reduced the rate of withdrawal by nearly 3%, and decreased the combined proportion of students earning D's, F's, or withdrawing by 3.8%. These results suggest that active engagement in class boosts achievement for at least some students and prevents others from dropping or failing the course. These findings are consistent with J. Zelkowski's observations in other mathematics courses at WVU that have used clickers: Exam scores increased for students in the top quartile, and attendance increased for midday (but not early morning) classes (Caldwell et al., 2006).

    Figure 2. Effect of clickers on grade distribution for two sections of college trigonometry taught at WVU. Courses were taught by the same instructor, the same semester, using the same course curricula, but in different rooms—one of which lacked an ARS. The total enrollments for the non-ARS and ARS courses were, respectively, 211 and 194 (Mays, personal communication).

    Coping with Decreased Lecture Coverage

    Most studies of clicker use agree that when time is spent on ARS activities there is usually a decrease in content coverage (Burnstein and Lederman, 2001; Simpson and Oliver, 2006; McGraw, personal communication). Generally this decreased coverage is considered “more than compensated” by perceived improvements in student comprehension, instructor awareness of student difficulties, and the ability to assess instantly whether the pace of the course is appropriate (Elliot, 2003; Beatty, 2004).

    One solution to decreased coverage is the use of lecture “scripts” or outlines. An instructive example comes from Belfast (Burns, 1985): Students given transcripts of lectures and asked not to attend class produced better notes and achieved higher test scores than students who did attend the lecture class (but were not given the transcript)—as long as the class was lecture only. This suggests an alternative: We could give students a lecture outline for portions of the lecture we choose to omit in favor of clicker questions, as was done in an engineering course (d'Inverno et al., 2003). Just-in-Time-Teaching (JiTT) offers another alternative: Web-based classroom management systems are used to give students “warm up exercises” outside of class and to hold them responsible for learning material before class; class time is used to refine and apply those understandings (Novak et al., 1999; Marrs and Novak, 2004; Smith et al., 2005). Another successful method is to make students more responsible for reading and homework outside of class, by assessing comprehension using clickers at the beginning of class meetings, as described above (Knight and Wood, 2005).

    If concerns about content coverage are severe, it may be worth evaluating the purpose and goals of lecture within the course. Studies of lecturing indicate that more coverage does not necessarily indicate more learning or more retention by students (Johnstone and Su, 1994). Furthermore, because students remember only 20–25% of the information we present, even in that most fertile, first 15–20 minutes of class (Burns, 1985), it seems that our time might be better spent in activities other than lecturing—such as peer instruction or problem solving. An underlying assumption noted in much of the literature on clicker usage is the conviction that covering content is not the most effective way to teach and that active engagement leads to more effective learning (Draper et al., 2002; Cutts et al., 2004; Knight and Wood, 2005; Simpson and Oliver, 2006).

    Attitudes Toward Clickers

    Student Attitudes.

    A sampling of student attitudes toward clickers is included in Figure 3. About 88% of students either “frequently” or “always” enjoyed using the clickers in class. This reflects the overall trend in the literature: most students like using clickers. When asked if clickers were enjoyable, helpful, or should be used, students typically gave approval ratings around or above 70%, or average Likert-scale ratings above 4 on a scale of 1–5 (McDermott and Redish, 1999; Draper et al., 2002; d'Inverno et al., 2003; Elliot, 2003; Beekes, 2006; Bunce et al., 2006; Simpson and Oliver, 2006). Students' ratings of the system are less consistent when asked if the system helps them learn or concentrate, but are still generally positive (McDermott and Redish, 1999; Elliot, 2003; Hatch and Jensen, 2005; Beekes, 2006). Sometimes students felt that the system was helpful even when there was no evidence of significant improvement in exam scores over non-ARS classes (Bunce et al., 2006).

    Figure 3. Students in an introductory nonmajors freshman biology course at WVU (as in Figure 1) evaluated clickers as part of standardized course evaluations. Students who did not respond to this question totaled 1.6%. The instructor was not present during the evaluation, and students were reminded that their responses would not be given to the instructor until after final course grades were submitted. The 125 students who responded represent 77% of the total enrollment (Caldwell, unpublished observations).

    When clickers were used, students tended to view the instructor as more aware of students' needs and the teaching style as more “immediate (warm, friendly, close)” (Jackson and Trees, 2003; Nichol and Boyle, 2003) or caring (Knight and Wood, 2005).

    Features that students particularly liked about the system were its anonymity (Jackson and Trees, 2003), its potential to reinforce learning (Bunce et al., 2006), and the possibility of comparing one's answers with the rest of the class (Bunce et al., 2006), because “they like the reassurance that they're not alone even when they're wrong” (Beatty, 2004). When allowed to work in groups, students felt that talking with a classmate helped their understanding and that collaborative work was important to learning (e.g., summarized comments from M. Butler's math students in Caldwell et al., 2006).

    Some student comments from a recent course at WVU include (McGraw, personal communication):

    • [clicker quizzes are] “better than written quizzes [because we] got feedback right away.”

    • “I enjoyed using the clickers.”

    • “I like the clickers [because] it helps in the learning experience [because] you can talk out some problems with others.”

    • “I liked the clickers better than paper quizzes.”

    • “I really enjoyed using the clickers. It did help reinforce the material and provided a nice break in lecture and a chance to make sure you understand the material.”

    Not all students like clickers. Past negative reactions have included comments such as “stop messing around with technology and get back to good basic teaching” (d'Inverno et al., 2003). Although negative responses are generally outnumbered by positive ones in any individual course, some general trends in complaints are notable. Students who complain about little else will complain about the cost of a clicker. To address this concern, some institutions (e.g., WVU) currently purchase clickers that are stored in wall-mounted distribution boxes and picked up and returned by students at each class meeting.

    Other predictably negative student reactions to clickers occur in response to lost clickers, technical problems with software or the instructor's lack of experience, consumption of class time, and the idea of “forcing” or monitoring attendance in a college class (Halloran, 1995; Knight and Wood, 2005). Problems also arise when the learning value of the questions is unclear and they seem to be included just for the sake of using the ARS technology, to gather data for future years, or for no reason at all (Simpson and Oliver, 2006). Students are understandably unhappy when the clickers seem to be driving course content and not vice versa (Simpson and Oliver, 2006). Some students who prefer a competitive class atmosphere dislike the use of clickers for cooperative learning activities (e.g., Knight and Wood, 2005).

    Some students report anxiety about using clickers, usually because the scores are part of their course grade, and they are unsure whether answers were recorded properly (Jackson and Trees, 2003; Johnson and McLeod, 2004). Instructors have noted that regular communication about clicker scores may reduce this anxiety (Jackson and Trees, 2003). Others recommend a low-stakes contribution of clickers to grades, so that attention remains focused on reasoning and not scores (Beatty, 2004). Popular ways of keeping the pressure off include: giving partial credit for any answer and full credit for correct answers, using only randomly selected clicker data as part of the grade, and dropping a handful of lowest clicker scores from each student's grade.

    Instructor Attitudes.

    Like students, most instructors rate the ARS experience favorably. In general, they view it as a quick and convenient way to check student understanding. They note that their students are more active, attentive, and pleasant to teach. Typical comments include:

    • “I have never seen a student doze off during a CCS [classroom communication system]-based class.” (Beatty, 2004).

    • “In my experience [with an ARS] there is nothing [else] that engenders discussion in a large class to the same extent. … When [students] see that the choices that they have made are controversial, they are eager to discuss them.” (Lindenfeld, 2001).

    • “[ARS use] has had a very significant effect on students' performance in lectures, stimulating their interest and concentration as well as their enjoyment of lectures…. I felt that students were more willing to ask questions in both lectures and follow-up tutorials [when an ARS was used in lecture]…. (Elliot, 2003).”

    • “I do feel more learning went on in the classroom, and student attention was improved. … I will use them again. I really like the instant feedback.” (McGraw, personal communication).

    • “[Compared with traditional lecture]… teaching with clickers is a lot more fun!” (Wood, 2004).

    • “… if students enjoy the [ARS] session, they appear to be more receptive to technical issues and material that otherwise would have been difficult to teach.” (Beekes, 2006).

    • “… my teaching is being directed more by what the students… say they need, rather than what I think they need.” (Draper, 2002).

    Of course, not all faculty like clickers. Negative reactions understandably occur when the systems experience technical problems or lack technical support from IT staff, but also if they are only used for recording attendance. Faculty concerns about using an ARS include its expense and the time that questions consume during class (Brewer, 2004). This latter concern, mentioned above, is addressed further below.

    BEST PRACTICE TIPS

    Several texts exist to help a new user of clickers get started (Mazur, 1997; Duncan, 2005). Various other articles provide the following list of suggestions for effectively using clickers in class.

    Planning

    • Know why you are using an ARS in class, and keep this in mind while writing questions (Draper, 2002).

    • Plan your grading system in advance. Make sure it aligns with your learning goals (Duncan, 2005).

    • Plan in advance how to deal with students who forget their clickers or whose clickers need batteries or are broken: Use slips of paper, have students trade ID cards for clickers, or keep some “loaner” clickers on hand. Discourage perpetual freeloaders (Duncan, 2005; Hatch and Jensen, 2005).

    • Before teaching your first course, watch another instructor who uses an ARS (Draper, 2002).

    • Be aware that the first year of use requires extra time to prepare good questions (Burnstein and Lederman, 2001).

    Attendance

    • If you want to increase attendance, use clickers daily and link clicker usage to grades (Cue, 1998).

    • Use clickers especially with introductory courses for freshmen to encourage attendance and accountability and to reduce attrition (Caldwell, unpublished observations).

    • If you are requiring attendance, expect an increase in noise and possibly some disengaged students who are attending only for points (Jackson and Trees, 2003).

    Communication with Students

    • Explain to students why you are using the system and what you expect students to gain from the experience in order to get them to support the idea, especially if you are using it for nontraditional activities like active learning (Simpson and Oliver, 2006).

    • Plan discussion time to respond to ARS answers. Be willing to adapt your lesson plan according to the results you collect. Let students “learn from the discussion of right and wrong answers.” This is considered vital by most researchers in the field (Poulis et al., 1998; Draper, 2002; Draper et al., 2002; Nichol and Boyle, 2003; Beatty et al., 2006; Simpson and Oliver, 2006).

    • If incorporating a classwide discussion into your ARS use, be sure to summarize the discussion and explain the correct answer afterward (Nichol and Boyle, 2003).

    • Explain to students the purpose of homework, and use clickers to hold them accountable (Cutts et al., 2004).

    • Discuss cheating with students, and clearly state that use of another student's clicker is unacceptable (Duncan, 2005). In a survey, between 20 and 58% of students reported seeing a classmate cheat by using multiple clickers at some point during the semester (Jackson and Trees, 2003).

    Peer Learning

    • If using peer learning groups, limit group size to no more than four to six members (MacManaway, 1970). Students seem to prefer small-group discussions to classwide discussions led by the instructor (Nichol and Boyle, 2003).

    Grades and Anxiety

    • If clicker scores are part of the course grade, make those scores accessible on a regular basis to reduce student anxiety (Jackson and Trees, 2003). Consider showing students clicker scores from past semesters on the first day of class (Duncan, 2005).

    • Give partial credit for any answer and full credit for correct answers to reduce anxiety and limit cheating. Consider dropping a few of the lowest clicker scores or selecting a portion of data at random (Duncan, 2005); a minimal sketch of such a policy appears below.
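
    A minimal Python sketch of such a low-stakes policy, assuming illustrative values (2 points for a correct answer, 1 point for any attempt, and the three lowest days dropped):

        # Sketch: compute a semester clicker grade under a forgiving policy
        # that drops each student's lowest-scoring class days.
        def semester_clicker_grade(daily_scores, n_drop=3, max_per_day=2):
            """daily_scores lists the points earned on each class day."""
            kept = sorted(daily_scores)[n_drop:]     # drop the n_drop lowest days
            possible = max_per_day * len(kept)
            return 100 * sum(kept) / possible if possible else 0.0

        # A student who missed two days and guessed wrong twice still scores well:
        days = [2, 2, 1, 0, 2, 2, 1, 2, 0, 2]
        print(f"{semester_clicker_grade(days):.0f}%")   # prints 93%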

    Prevent Wasted Time and Frustration

    • Spend some time in the first classes training students to use clickers (Draper, 2002).

    • Set up the system before class, and practice this before the semester begins (Draper, 2002; Duncan, 2005).

    • If your clickers require a registration system, test it in advance (Duncan, 2005).

    • Allow a few days for students to buy and/or register clickers. Be aware that in some cases, 5–10% of students never purchased or registered their clickers (Hatch and Jensen, 2005).

    • Expect that a few students will intentionally press the wrong button, cast misleading votes, or delay voting to consume class time (Simpson and Oliver, 2006; Caldwell, unpublished observations).

    • If possible, find a resource person, train a teaching assistant, or start a faculty support group.

    Other Survival Tips

    • Keep a positive attitude, and be willing to make a few mistakes as you learn. Consider this a chance to model the learning behavior you desire from your students (Draper, 2002; Beatty, 2004; Dufresne et al., 2000).

    • Be willing to throw out or regrade a question that contains an error or is unclear.

    • Encourage students to discuss answers with each other. This increases peer learning and will eliminate one type of “cheating.”

    • Encourage class discussion of incorrect answers to reveal unclear wording; this can be especially important if you notice dramatic improvement in scores after peer discussion (Knight and Wood, 2005).

    • Consider building a library of ARS questions with colleagues, as too few of these exist in most fields.

    WRITING EFFECTIVE QUESTIONS

    Clickers are a flexible tool, but like most technology are not a panacea in and of themselves. This theme repeats frequently in the clicker literature (Draper et al., 2002; Hake, 2002; Jackson and Trees, 2003; Wood, 2004; Parsons, 2005; Beatty et al., 2006; Simpson and Oliver, 2006): ARS questions are “best understood as a tool rather than a teaching approach” (Simpson and Oliver, 2006), and their effectiveness in increasing learning depends heavily on the intent and thought behind their design. One recommendation is that the instructor approach class meetings as learning sessions rather than knowledge-dispensing sessions (Beatty, 2004).

    There is an overall consensus that it takes some time and practice to develop good questions and that they must be carefully designed and “woven” into lecture (Burnstein and Lederman, 2001; Elliot, 2003; Beatty et al., 2006; Simpson and Oliver, 2006). In general, there are few (if any) collections of good clicker questions available for most fields (Jackson and Trees, 2003; Beatty et al., 2006) beyond collections for physics (Mazur, 1997), although some concept tests for specific biological topics have been published in recent years (Anderson et al., 2002; Udovic et al., 2002).

    If properly designed, clicker questions may enable courses to be more attuned to the way human learning and memory work than simple lecture is. Traditional lectures may produce poor results because they fail to account for the “chunking” of information into categories, the linking of new information with familiar concepts or creation of new categories, and the use of examples and practice to learn new concepts (Middendorf and Kalish, 1996). If the way we learn is kept in mind, however, it is possible to design clicker questions that favor learning. By this criterion, good questions include those that present a new concept and ask which ideas (or categories) it is most closely related to, that ask students to identify an example of a new concept, or that apply a mastered concept to a new situation.

    There is general agreement that a good clicker question is different from a good exam question, but exam questions can be modified for this use (Beatty et al., 2006). Some detailed treatments of question design are available in the literature (e.g., Beatty et al., 2006). Generally speaking, qualitative questions (that avoid calculations, memorization, or facts) are favored because they guide the student to focus on the concept without becoming distracted by details (Beatty, 2004; Beatty et al., 2006). Some useful goals for question design can be culled from the literature:

    1. Good clicker questions should address a specific learning goal, content goal, or skill, or should reinforce a specific belief about learning (Beatty et al., 2006).

    2. Questions can (Beatty, 2004):

      • assess students' background, knowledge, or beliefs

      • make students aware of others' views or of their own

      • locate misconceptions and confusion

      • distinguish between related ideas

      • show parallels or connections between ideas

      • explore or apply ideas in a new context.

    Some examples of questions recommended by the literature include (Dufresne et al., 2000; Wit, 2003):

    • given a term or concept, identify the correct definition from a list, and vice versa

    • given a graph, match it with the best description or interpretation, and vice versa

    • match a method of analysis with an appropriate data set, and vice versa

    • questions that link the general to the specific

    • questions that share a familiar situation or example with several other questions

    • questions that students cannot answer, to motivate discussion and curiosity before introducing a new topic

    • questions that require ideas or steps to be sorted into order

    • questions that list steps and ask “which one is wrong?”

    • questions that apply a familiar idea to a new context.

    Several researchers assert that it is useful, and even important, to design questions that produce a wide set of responses or on which some portion of the class makes mistakes (Dufresne et al., 2000; Hake, 2002; Wit, 2003; Beatty, 2004; Brewer, 2004; Johnson and McLeod, 2004; Wilson et al., 2006). Others seem to agree, asserting that exploring those misconceptions can be an important part of steering students toward deeper understanding, not just factual knowledge (Tanner and Allen, 2005). To construct such questions, it is helpful to:

    • identify student misconceptions and include them as answers, plausibly phrased

    • “shut up and listen” to students to find out how they think, and pay particular attention to wrong answers

    • include answers that contain common errors.

    A variety of question types is usually deemed useful. While instructors are learning to write questions, most of their questions often test simple factual recall (Brewer, 2004). One set of researchers reports that asking instructors to identify the type of question they are writing can help increase the diversity of questions (Brewer, 2004).

    Practical suggestions include (Wit, 2003; Beekes, 2006):

    • limit the number of answers to five or fewer, so that the question is easy to read and consider

    • assess knowledge of jargon separately from concepts to ensure that each is addressed clearly and effectively

    • create wrong answers (distractors) that seem logical or plausible to prevent “strategizing” students from easily eliminating wrong answers

    • include “I don't know” as an answer choice to prevent guessing

    • plan to ask some questions twice to allow peer learning and build emotional investment. (Allow students to answer individually, but do not display the correct answer; then direct students to discuss the question with their peers and answer again.) This approach is advocated by many instructors who have used clickers, including Wilson et al. (2006) and Knight and Wood (2005).

    CLICKERS AND PEER LEARNING

    One method of instruction that particularly benefits from clickers is peer learning. Peer learning has attracted a high level of interest—especially in the physics education community—because peer learning and other active learning methods have been demonstrated to result in higher learning gains and/or exam scores than more traditional, content-based approaches to course material such as lecture (MacManaway, 1970; Hake, 1998; Pollock, 2006). Although it exists in many formats, ranging from ConcepTests (tests of conceptual understanding, often alternating with mini-lectures; Mazur, 1997; Anderson et al., 2002; Udovic et al., 2002) to question cycles (Beatty et al., 2006), the overall theme of peer learning is similar: Students spend a significant portion of class time working or discussing problems in small groups.

    For the instructor, clickers offer an efficient means to monitor progress and problems in peer-learning groups and to intervene when either the class is very confused or has understood the concept thoroughly and is ready to move on. In practice, such “interactive engagement” methods have been shown to be twice as effective as traditional lecture (Hake, 1998). It is not necessary, however, to abandon lecture altogether: The setting for this nontraditional approach can still be a traditional lecture hall, and the peer instruction may be inserted into a traditional lecture or interspersed between mini-lectures. The strength of peer instruction is the interaction it fosters between students, who by virtue of their similar ages, language, and common experience, are often “better at clearing up each other's confusions and misconceptions” than their instructor (Wood, 2004).
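
    For readers outside physics education, the “twice as effective” result is quantified with the average normalized gain on pre- and post-instruction concept tests, conventionally written as

        \[
          \langle g \rangle \,=\, \frac{\langle \%\mathrm{post} \rangle - \langle \%\mathrm{pre} \rangle}{100\% - \langle \%\mathrm{pre} \rangle},
        \]

    where ⟨%pre⟩ and ⟨%post⟩ are class-average test scores. In Hake's survey of physics courses, interactive engagement classes achieved roughly double the average normalized gain of traditional lectures (approximately 0.48 versus 0.23; Hake, 1998).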

    There are two fairly distinct approaches to peer instruction that differ in when the group interaction occurs (Nichol and Boyle, 2003). The classwide discussion method (also known in the literature as the “PERG” approach) begins with a question and proceeds immediately to small-group discussion to answer it, followed by full-class discussion. The peer-learning model (also known in the literature as Peer Instruction) requires that students think and answer independently first, see the answers, and then spend time in groups struggling to reach a consensus answer. Some data indicate that the latter method works better in larger classes, because individual answers force stronger engagement, and the class discussion portion of PERG may introduce too much confusion, unless the question asked is very difficult (Nichol and Boyle, 2003). In practice, a careful combination of the two methods by an observant instructor may be best.

    Students themselves feel that discussion with other students is helpful. In surveys about peer learning (Nichol and Boyle, 2003), 92% of students agreed that discussing questions with others aided understanding, 82% agreed that hearing others' explanations helped them learn, and more than 90% reported that the moment they felt most engaged during class was while working in small peer groups. Instructors agree that “when a student must cast such [ill-formed or nebulous] thinking into language… deficiencies become evident [to the student]” (Beatty et al., 2006). This opportunity for cooperative learning with peers has great potential as a means for training students for cooperative interactions in future employment (Knight and Wood, 2005; Smith et al., 2005) and for stemming the “hemorrhage” of students who dislike the traditionally competitive atmosphere of courses in science, technology, engineering, and mathematics (Tobias, 1990).

    Peer learning appears to work: Students who used class time primarily to discuss assigned topics in small groups performed, as a class, at least as well as or better than students who experienced traditional lecture (MacManaway, 1970). Students who participated in peer-learning groups made statements (when interviewed) supporting the idea that more able or knowledgeable students generally do help less advanced students achieve a higher level of understanding (Nichol and Boyle, 2003). Peer-learning approaches in physics tend to emphasize conceptual understanding more heavily than numerical problem solving, yet this emphasis improves both conceptual understanding and problem-solving skills more than courses that focus primarily on solving numeric problems (Hake, 1998). Similar approaches in biology courses have shown significant improvement in measured student learning gains over traditional lecture-only approaches (Knight and Wood, 2005).

    CONCLUSIONS

    Overall, clickers offer a powerful and flexible tool for teaching. They can be used in a variety of subjects with students of almost any level of academic training. Clickers may occupy either a peripheral or central role during class. They can be incorporated into a standard lecture course to increase interaction between students and instructor or used as part of a more radical change in teaching style toward primarily active learning in class (whether it be peer learning, debate, or other activities).

    Clickers can be used with many styles of questions, and new variations on the technology allow formats other than multiple-choice questions (Barber and Njus, 2007). The only “rule” for question design is that each question's structure and content reflect specific learning goals. Questions may have a single correct answer or be designed without any “right” answer in order to encourage debate and discussion.

    Although much research remains to be done to elucidate the reasons why clickers are effective, they do seem to enhance students' active learning, participation, and enjoyment of classes. When used during lectures, clickers have neutral or positive effects on learning outcomes, and a more strongly positive effect when combined with peer or cooperative learning. They increase attendance and retention and can be used to promote student accountability. They simulate a one-to-many dialogue and make it easier for both instructors and students to receive prompt feedback.

    Overall, clickers have the potential to improve classroom learning, especially in large classes. Students and instructors find their use stimulating, revealing, motivating, and—as an added benefit—just plain fun.

    ACKNOWLEDGMENTS

    I gratefully acknowledge the support of the Eberly College of Arts and Sciences at WVU, which purchased clickers and provided financial support during my initial implementation of clickers in General Biology classes for nonmajors. I am also grateful for the financial support of the WVU Office of the Provost, which provided summer salary for the production of this literature review and has now supported the implementation of radiofrequency clickers at WVU. Jeremy Zelkowski, Melanie Butler, and Michael Mays of the WVU Department of Mathematics, as well as Catherine Merovich and James McGraw of the WVU Department of Biology, are all thanked for stimulating discussions and their willingness to share data and observations for this report.

    FOOTNOTES

    1 These were unpublished observations of trigonometry classes (Math 128) at WVU in 2005, by M. Mays.

    2 These were personal communications between the author and J. McGraw, as well as his students' evaluations of an ecology course (Biology 221) at WVU in 2006.

    3 These were unpublished observations of General Biology 101 and 102 courses at WVU in 2005, by the author.

    4 Dr. Wiedemeier randomly chooses a different small group of students each day, designated "Wied's Wonderful," from her large lecture course of more than 300 students. For extra credit, these students answer questions and serve as the "volunteers" (as needed) during that class meeting.

    REFERENCES

  • Allen D., Tanner K. (2005). Infusing active learning into the large-enrollment biology class: seven strategies, from the simple to complex. Cell Biol. Educ. 4, 262-268.
  • Anderson D. L., Fisher K. M., Norman G. J. (2002). Development and evaluation of the conceptual inventory of natural selection. J. Res. Sci. Teach. 39, 952-978.
  • Barber M., Njus D. (2007). Clicker evolution: seeking intelligent design. CBE—Life Sci. Educ. 6, 1-8.
  • Beatty I. (2004). Transforming student learning with classroom communication systems. EDUCAUSE Center Appl. Res. (ECAR) Res. Bull. 2004(3), 1-13.
  • Beatty I. D., Gerace W. J., Leonard W. J., Dufresne R. J. (2006). Designing effective questions for classroom response system teaching. Am. J. Phys. 74(1), 31-39.
  • Beekes W. (2006). The "Millionaire" method for encouraging participation. Active Learn. Higher Educ. 7(1), 25-36.
  • Brewer C. (2004). Near real-time assessment of student learning and understanding in biology courses. BioScience 54(11), 1034-1039.
  • Bunce D. M., Van den Plas J. R., Havanki K. L. (2006). Comparing the effectiveness on student achievement of a student response system versus online WebCT quizzes. J. Chem. Educ. 83(3), 488-493.
  • Burns R. A. (1985). Information Impact and Factors Affecting Recall. Presented at the Annual National Conference on Teaching Excellence and Conference of Administrators, May 22–25, 1985, Austin, TX (ERIC Document No. ED 258 639).
  • Burnstein R. A., Lederman L. M. (2001). Using wireless keypads in lecture classes. Phys. Teach. 39, 8-11.
  • Caldwell J., Zelkowski J., Butler M. (2006). Using Personal Response Systems in the Classroom. Presented at the WVU Technology Symposium, April 11, 2006, Morgantown, WV. www.math.wvu.edu/∼mbutler/CompAndTechSymp.pdf (accessed 1 August 2006).
  • Cue N. (1998). A Universal Learning Tool for Classrooms? Proceedings of the "First Quality in Teaching and Learning Conference," December 10–12, 1998, Hong Kong SAR, China. http://celt.ust.hk/ideas/prs/pdf/Nelsoncue.pdf (accessed 12 July 2006).
  • Cutts Q., Kennedy G., Mitchell C., Draper S. (2004). Maximizing Dialogue in Lectures Using Group Response Systems. Presented at the 7th IASTED International Conference on Computer and Advanced Technology in Education, August 16–18, 2004, Hawaii. www.dcs.gla.ac.uk/∼quintin/papers/cate2004.pdf (accessed 20 July 2006).
  • d'Inverno R., Davis H., White S. (2003). Using a personal response system for promoting student interaction. Teach. Math. Appl. 22(4), 163-169.
  • Draper S. (1998). Niche-based success in CAL. Comput. Educ. 30, 5-8.
  • Draper S. W. (2002). Evaluating effective use of PRS: results of the evaluation of the use of PRS in Glasgow University, October 2001-June 2002. www.psy.gla.ac.uk/∼steve/ilig/papers/eval.pdf (accessed 31 July 2006).
  • Draper S. W., Brown M. I. (2002). Use of the PRS (Personal Response System) handsets at Glasgow University, Interim Evaluation Report: March 2002. www.psy.gla.ac.uk/∼steve/ilig/interim.html (accessed 27 July 2006).
  • Draper S., Cargill J., Cutts Q. (2002). Electronically enhanced classroom interaction. Aust. J. Educ. Technol. 18(1), 13-23.
  • Dufresne R., Gerace W., Mestre J. P., Leonard W., University of Massachusetts Physics Education Research Group (2000). ASK·IT/A2L: Assessing Student Knowledge with Instructional Technology. Tech. Rep. No. 9. umperg.physics.umass.edu/library/UMPERG-2000–09/entirePaper/ (accessed 14 July 2006).
  • Duncan D. (2005). Clickers in the Classroom: How to Enhance Science Teaching Using Classroom Response Systems. New York: Addison Wesley and Benjamin Cummings.
  • Elliot C. (2003). Using a personal response system in economics teaching. Int. Rev. Econ. Educ. 1(1), 80-86.
  • Hake R. R. (1998). Interactive-engagement versus traditional methods: a six-thousand student survey of mechanics test data for introductory physics courses. Am. J. Phys. 66(1), 64-74.
  • Hake R. (2002). Lessons from the physics education reform effort. Conserv. Ecol. 5(2), 28. http://www.consecol.org/vol5/iss2/art28.
  • Halloran L. (1995). A comparison of two methods of teaching: computer managed instruction and keypad questions versus traditional classroom lecture. Comput. Nursing 13(6), 285-288.
  • Hatch J., Jensen M. (2005). Manna from Heaven or "clickers" from Hell. J. Coll. Sci. Teach. 34(7), 36-39.
  • Heward W. L., Gardner R., Cavanaugh R. A., Courson F. H., Grossi T. A., Barbetta P. M. (1996). Everyone participates in this class: using response cards to increase active student response. Teaching Exceptional Children 28(2), 4-10.
  • Jackson M. H., Trees A. R. (2003). Clicker implementation and assessment. comm.colorado.edu/mjackson/clickerreport.htm (accessed 16 July 2006).
  • Johnson D., McLeod S. (2004). Get answers: using student response systems to see students' thinking. Learn. Lead. Technol. 32(4), 18-23.
  • Johnstone A. H., Su W. Y. (1994). Lectures—a learning experience? Educ. Chem. 31(1), 75-79.
  • Klymkowsky M. W., Garvin-Doxas K., Zeilik M. (2003). Bioliteracy and teaching efficacy: what biologists can learn from physicists. Cell Biol. Educ. 2, 155-161.
  • Knight J. K., Wood W. B. (2005). Teaching more by lecturing less. Cell Biol. Educ. 4, 298-310.
  • Lindenfeld P. (2001). We can do better. J. Coll. Sci. Teach. 31(2), 82-84.
  • MacManaway L. A. (1970). Teaching methods in higher education—innovation and research. Universities Quart. 24(3), 321-329.
  • Marrs K. A., Novak G. (2004). Just-in-time-teaching in biology: creating an active learner classroom using the Internet. Cell Biol. Educ. 3, 49-61.
  • Mayo E. (1977). The Human Problems of an Industrial Civilization. New York: Arno Press, 55-98.
  • Mazur E. (1997). Peer Instruction: A User's Manual. Upper Saddle River, NJ: Prentice-Hall.
  • McDermott L. C., Redish E. F. (1999). Resource letter: PER-1: Physics Education Research. Am. J. Phys. 67(9), 755-767.
  • Middendorf J., Kalish A. (1996). The "change-up" in lectures. Natl. Teach. Learn. Forum 5(2), 1-5.
  • Nichol D. J., Boyle J. T. (2003). Peer instruction versus class-wide discussion in large classes: a comparison of two interaction methods in the wired classroom. Stud. Higher Educ. 28(4), 457-473.
  • Novak G., Patterson E. T., Gavrin A. D., Christian W. (1999). Just-In-Time Teaching: Blending Active Learning with Web Technology. Upper Saddle River, NJ: Prentice Hall.
  • Parsons C. V. (2005). Decision making in the process of differentiation. Learn. Lead. Technol. 33(1), 8-10.
  • Pollock S. J. (2005). No single cause: learning gains, student attitudes, and the impacts of multiple effective reforms. AIP Conf. Proc. 790(1), 137-140.
  • Pollock S. J. (2006). Transferring transformations: learning gains, student attitudes, and the impacts of multiple instructors in large lecture courses. AIP Conf. Proc. 818(1), 141-144.
  • Poulis J., Massen C., Robens E., Gilbert M. (1998). Physics lecturing with audience paced feedback. Am. J. Phys. 66(5), 439-441.
  • Roschelle J., Penuel W. R., Abrahamson L. (2004a). Classroom Response and Communication Systems: Research Review and Theory. Presented at the Annual Meeting of the American Educational Research Association, 2004, San Diego, CA. ubiqcomputing.org/CATAALYST_AERA_Proposal.pdf (accessed 16 July 2006).
  • Roschelle J., Penuel W. R., Abrahamson L. (2004b). The networked classroom. Educ. Leadership 61(5), 50-54.
  • Simpson V., Oliver M. (2006). Using electronic voting systems in lectures. www.ucl.ac.uk/learningtechnology/examples/ElectronicVotingSystems.pdf (accessed 12 July 2006).
  • Smith A. C., Stewart R., Shields P., Hayes-Klosteridis J., Robinson P., Yuan R. (2005). Introductory biology courses: a framework to support active learning in large enrollment introductory science courses. Cell Biol. Educ. 4(2), 143-156.
  • Tanner K., Allen D. (2005). Understanding the wrong answers—teaching toward conceptual change. Cell Biol. Educ. 4, 112-117.
  • Tobias S. (1990). They're Not Dumb, They're Different: Stalking the Second Tier. Tucson, AZ: Research Corporation.
  • Udovic D., Morris D., Dickman A., Postlethwait J., Wetherwax P. (2002). Workshop biology: demonstrating the effectiveness of active learning in an introductory biology class. BioScience 52(3), 272-281.
  • van Dijk L. A., van den Berg G. C., van Keulen H. (2001). Interactive lectures in engineering education. Eur. J. Eng. Educ. 26(1), 15-18.
  • Wilson C. D., Anderson C. W., Heidemann M., Merrill J. E., Merritt B. W., Richmond G., Sibley D. F., Parker J. M. (2006). Assessing students' ability to trace matter in dynamic systems in cell biology. Cell Biol. Educ. 5, 323-331.
  • Wit E. (2003). Who wants to be… The use of a personal response system in statistics teaching. MSOR Connections 3(2), 14-20.
  • Wood W. B. (2004). Clickers: a teaching gimmick that works. Dev. Cell 7(6), 796-798.