
Students' Motivations for Data Handling Choices and Behaviors: Their Explanations of Performance

    Published Online: https://doi.org/10.1187/cbe.02-09-0043

    Abstract

    Cries for increased accountability through additional assessment are heard throughout the educational arena. However, as demonstrated in this study, to make a valid assessment of teaching and learning effectiveness, educators must determine not only what students do, but also why they do it, as the latter significantly affects the former. This study describes and analyzes 14- to 16-year-old students' explanations for their choices and performances during science data handling tasks. The study draws heavily on case-study methods for the purpose of seeking an in-depth understanding of classroom processes in an English comprehensive school. During semistructured scheduled and impromptu interviews, students were asked to describe, explain, and justify the work they did with data during their science classes. These student explanations fall within six categories, labeled 1) implementing correct procedures, 2) following instructions, 3) earning marks, 4) doing what is easy, 5) acting automatically, and 6) working within limits. Each category is associated with distinct outcomes for learning and assessment: some motivations inflate performances, while others cause learning to be underrepresented. These findings illuminate the complexity of student academic choices and behaviors as mediated by an array of motivations, casting doubt on the current understanding of student performance.

    INTRODUCTION

    With the national and international cries for accountability in education, student performances at all levels and in all school contexts are coming under ever-closer scrutiny. The outcomes of student performances are subjected to detailed analyses and published in newspapers, becoming a factor in school selection and funding. The consequences for various levels of performance range in effect from student promotion and graduation, to teacher pay scale, to school accreditation. However, in the midst of all the discussion about what students do, there remains little understanding of why they do it. Even less do the impacts of various student motivations on learning and the accuracy of performances in representing learning factor into the political debate. The current study explores these issues in the context of secondary school science.

    In the United States, the United Kingdom, and other educational systems around the world, “practical work” has come to play an increasingly significant role in a variety of subjects, especially in science. Having begun with Science—A Process Approach (American Association for the Advancement of Science [AAAS], 1969) in the United States and carried on in projects such as the Nuffield Science projects (Wellington, 1998; Donnelly and Jenkins, 2001) and the Assessment of Performance Unit (APU) (Driver et al., 1982; Archenhold et al., 1988) in Britain, the procedural science movement continues to influence curriculum and assessment in science education worldwide.

    Science in the National Curriculum (Department for Education, 1995) of England and Wales includes content goals and targets in biology, chemistry, and physics, but its first section at every level is Experimental and Investigative Science. In this primary section lie the procedural goals of handling data, including making decisions about what and how much data to collect, analyzing data through calculations and graphs, interpreting data by seeking patterns, and evaluating data for reliability. This section is assessed through the students' submission of course work and forms 25% of their final marks for their General Certificate of Secondary Education (GCSE). Through the assessment process teachers, Boards of Examiners, and interested others can ascertain how well students performed on data handling tasks in this particular assessment context. However, a detailed exploration of this system reveals that it does not begin to address the issues of determining the full extent of students' knowledge and skills in data handling or their reasons for implementing or failing to implement their complete range of competencies. The research described here, a naturalistic study of 14- to 16-year-old students doing practical work in an English school, analyzes students' explanations for their data handling in an effort to understand their performances on these various tasks.

    THEORETICAL BACKGROUND

    According to Head (1985, p. 31), “Both the ability to perform a task and a willingness to do so are necessary for success, the latter often proves the more important.” Motivation can be conceived of as a “will they or won't they” phenomenon (Cannon and Simpson, 1985) or “his/her willingness to engage in the relevant learning activities” (Hofstein and Kempa, 1985, p. 222); however, it is the development of classifications of types of motivations that illuminates students' complex behaviors and performances. Hodson (1998b, p. 55) makes the point that various motives have different results for student learning and performance:

    We should also bear in mind that when students are presented with a learning task they may perceive it in a way that is in marked contrast to the way in which the teacher saw it during the planning stage. Consequently, their actions may be somewhat different from those anticipated. Rather than attending to the rational appraisal of competing explanations in order to extend their understanding, for example, students may be actively engaged in any number of other pursuits, including: seeking teacher approval for compliant behavior; trying to look busy, thereby avoiding unwelcome teacher attention; ascertaining the 'right answer' (that is, the one that gains marks in tests); trying to maintain feelings of self-worth; attending to their 'classroom image'. These other agendas may lead students to adopt behaviors and make responses that are not helpful in bringing about better scientific understanding.

    As Weiner (1984, p. 18) explains,

    It is evident that many aspects of student activity are quite logical and rational: strategies are consciously employed to deal with threat and anxiety; goal expectations are consciously calculated; logical decisions are reached; information is sought and processed; self-insight may be attained; and so on. Conversely, many aspects of student behavior appear to be quite irrational: self-esteem is defended in unknown ways; expectations are biased; illogical decisions are reached; information is improperly utilized; and there is gross personal delusion.

    Yet the seeming illogic of some student behavior can be laid at the feet of the observer: “If we cannot specify an individual's goals, we cannot judge what behavior will maximise the chances of achieving these goals and minimise the chances of avoiding undesirable outcomes” (Nicholls, 1984, p. 40). Thus, examining why students engage in various classroom behaviors involves the dissecting of multiple overt and covert goals and agendas.

    Maehr (1983) describes four goal types that he believes are associated with school achievement: task involvement, ego involvement, social solidarity, and extrinsic rewards. According to Maehr, when students are pursuing task goals they are absorbed in the activity and seek competence in the task for the sheer pleasure of doing well. Ego goals, however, involve “doing better than some socially defined standard, especially a standard inherent in the performance of others” (Maehr, 1983, p. 192); here doing better or best is the motivator. Maehr describes social solidarity goals as being directed toward pleasing others, while extrinsic reward involves motivation by acquisition of something such as a high mark or extra free time. Nicholls (1983) combines the latter two goals into a single extrinsic involvement motivation. These distinctions among motivations, their implications for interpreting classroom behaviors, and their interactions with concepts such as attribution have been explored and expanded in theory and research during the past two decades. Ames and Archer (1988, p. 260) collapse the various classification systems, claiming that “the conceptual relations among task, learning, and mastery goals and among ego, performance, and ability goals are convergent,” and thus in their work use only “mastery” and “performance goals,” respectively. However, this composite leaves out much of the richness and complexity of students' motivations in classroom settings, especially with reference to performance goals. Deci et al. (2001a) analyze “intrinsic motivations” and “extrinsic rewards” but create many internal classifications depending on the type of reward and context. Maehr's framework of four goal types illuminates the subtle but important differences among motivations that are focused away from the task, yet even this set does not incorporate the goal of limiting effort that appears in research on mental models (Norman, 1983).

    An additional possible explanation for student behavior falls outside the literature of motivation but, nevertheless, is worthy of consideration: rule following. White (1988, p. 38) defines rules as “procedures, algorithms, which are applicable to classes of tasks.” Scardamalia and Bereiter (1983, p. 63) go as far as to say that “the normal processes of acquiring procedural knowledge or 'know-how' include observation, practice, and rule learning.” However, while they acknowledge that rule development and following are normal parts of learning, they discuss the drawbacks to such rules with regard to reading and writing:

    Here, too, there is much redundancy, so that with practice students can develop efficient strategies that allow them to meet the routine demands of school reading and writing tasks with a minimum of effort. The result, however, is comprehension strategies that are insensitive to the distinctiveness and complexity of text information (Scardamalia and Bereiter, in press c), and writing strategies that are insensitive to distinctive requirements of different writing goals (Scardamalia and Bereiter, in press a). Rising above these routine “cognitive coping strategies,” as we call them, requires sustained effort directed towards one's own mental processes. (Scardamalia and Bereiter, 1983, p. 65)

    Thus, when rule following, students are not truly task involved because the rules may not actually be appropriate to the task. Scardamalia and Bereiter (1983, p. 65) warn that there are some “skill areas in which 'practice makes perfect' is an untrustworthy slogan.”

    Davis (1988, p. 97) discusses another concern within the area of rules:

    Rule-following requires the existence of an established use or custom. Understanding a rule is to possess certain abilities. Wittgenstein distinguishes between following a rule, where an agent “knows that there is a rule, understands it, and intentionally moulds his actions to it” (1958, p. 155), and merely acting in accord with a rule, as a monkey might move chess pieces on a chess board in a way which happened to conform with the rules.

    Thus, if this is correct, a crucial difference exists between students who understand rules and those who merely follow them. The danger to educators is that the two performances may appear identical, eliminating the possibility of easily discerning learning from conforming.

    There is overlap among these various theories of motivation and explanations of student behavior. Thus, a compiled literature framework for description of student goal types and behavioral explanations in classroom activity consists of 1) task involvement, 2) ego involvement, 3) social solidarity, 4) extrinsic reward, 5) effort minimization, and 6) rule following. However, some significant areas of contradiction and variation in emphasis exist among these researchers and their findings. For example, current meta-analysis authors disagree about the negative effects of extrinsic rewards on intrinsic motivation (Cameron, 2001; Deci et al., 2001b). Some researchers have found that motivations can coexist, while others claim that the appearance of one replaces another (Seifert, 1996). Additionally, whether rule following enables or inhibits student learning remains a contested issue (Scardamalia and Bereiter, 1983; Davis, 1988). Therefore, a thorough understanding of student performances from within the current environment requires further study and analysis of their motivations.

    METHODS

    This motivation study is part of a larger project exploring the data handling choices and behaviors of 14- to 16-year-old students engaged in science activities in an English comprehensive school (Keiler, 2000). The study relies on qualitative case study methods of observations and interviews, with the motivation findings coming from interviews with student participants. A single school was selected so that an in-depth description of the students' experiences could be developed (Schofield, 1990; Wolcott, 1994), considering that “selection on the basis of typicality provides the potential for a good `fit' with many other situations. Thick description provides the information necessary to make informed judgments about the degree and extent of that fit in particular cases of interest” (Schofield, 1990, p. 211). The study school was selected based on certain criteria that maximized its “typicality.” According to its own literature, it “is a medium sized fully comprehensive school for pupils between the ages of 11 to 16 serving [a town] and the surrounding area. It is the only Secondary School in [the town].” Being the only secondary school in town diminished the effects that school choice and parent selection might have on the student sampling frame. The school had students from a wide range of economic backgrounds, but very little ethnic or language diversity. According to the school's information on the Science department, “At Key Stage 4 all pupils follow the NEAB Modular Science Course. This allows pupils to gain a double award in Science. Pupils study modules which cover aspects of Biology, Chemistry and Physics.” These modules, taught by teachers qualified in the subject area, spread the three subject areas over the 2 years comprising Key Stage 4.

    Key Stage 4 students are 14–16 years old or in Years 10 and 11 of school. Key Stage 4 Science in the National Curriculum (Department for Education, 1995) includes sections on Experimental and Investigative Science, Life Processes and Living Things, Materials and Their Properties, and Physical Processes. These areas can be roughly translated into experimental procedures, Biology, Chemistry, and Physics, respectively, with a little Earth Science mixed in. Key Stage 4 comprises the final 2 years of school when all students are required to take science, which culminates in the General Certificate of Secondary Education (GCSE) exam. The final GCSE mark combines course work, called assessed practicals, with the examination score. By investigating Key Stage 4 students, the study allows for the maximum possible learning while avoiding the self-selection of the next level of education. The classes observed were preparing for the higher tier of their Science GCSE exam. According to School Curriculum and Assessment Authority (SCAA) (1995, p. 74) regulations, the lower tier includes only selected parts of each of the numbered areas of the Key Stage 4 program of study, while the higher tier “must address all aspects of these sections of the programme of study.” Thus, sampling the students preparing for the lower tier would have restricted the range of knowledge and skills the students were expected to display in Science. Limiting the range of tasks in Science lessons might have artificially reduced the use of data handling by the students, of which this study sought the maximum. This sampling of higher tier curriculum demarcated the range of students participating in the study, and claims developed from the work might be confined in application to this population. However, approximately 90% of the students at the school were preparing for the upper tier examination and, thus, included in the sampling frame. 
Students were tracked into classes based on past performance in Science, and classes from high, medium, and low levels were included in the study.

    Twelve units of work, selected by the teachers at the school as involving data handling, were observed, and both impromptu in-class interviews and semistructured out-of-class interviews were conducted (Merriam, 1988; Millar et al., 1994; Kvale, 1996). Impromptu interviews occurred while students were engaged in data handling during lessons, usually following up a comment made by a student to his/her classmates. These interviews consisted of a question or two. According to Kvale (1996, p. 27), “Technically, the qualitative research interview is semistructured: It is neither an open conversation nor a highly structured questionnaire.” This method was chosen because information was being gathered about a specific topic, data handling, but the researcher wanted to remain responsive to relevant issues raised by the interviewees. According to Merriam (1988, p. 74),

    In the semistructured interview, certain information is desired from all the respondents. These interviews are guided by a list of questions or issues to be explored, but neither the exact wording nor the order of the questions is determined ahead of time. This format allows the researcher to respond to the situation at hand, to the emerging worldview of the respondent, and to new idea topics.

    However, while the semistructured interview was used for all events that the researcher labeled “interviews” to the participants, the in-class impromptu interviews and out-of-class discussions with teachers more closely followed Wolcott's (1995, p. 106) “casual or conversational interviewing.” These events were much less directed by the researcher and, in many cases, provided context and topics for the semistructured interviews. By granting so much control to the research subjects, semistructured interviews can access information that the interviewer may not have known was available. However, this structure limits the effectiveness of quantifying responses and making cross-interview comparisons, as the subjects may not choose to address identical topics. This type of qualitative research generates a broad description of phenomena, not necessarily an accurate estimate of frequency.

    Three groups of two to five students who worked together in each class were asked to participate in semistructured interviews outside of class time. The students were interviewed as soon after observed class periods as could be scheduled without interfering with their other responsibilities, usually during their break or lunch the following day. The purpose of the student interviews was to ascertain the students' thinking and decision-making during the data handling portion of the unit of work. The same groups of students were interviewed at two or three points during the unit of work, to check their progress with the work, confirm their explanations, and compare their plans to their accomplishments. The timing and number of interviews were determined by the instructional events in the unit. The students were treated as experts on their own actions and learning and asked to explain their choices and behaviors since “interviews are a useful means of gaining partial access to the child's knowledge and attitudes” (Palincsar and Brown, 1989, p. 23). Interview questions were designed to ascertain the sources of information and skills demonstrated by the students, the students' thought processes as they handled data, and their affective responses to these activities. Shulman (1986, p. 17) suggests, “To understand why learners respond (or fail to respond) as they do, ask not what they were taught, but what sense they rendered of what they were taught.” It was this sense of their own learning experiences about which the students were questioned. Finally, the students were asked to evaluate their performances during the classes and provide suggestions for improving their work; this served to demonstrate some of the differences between what students can and what they do accomplish, as the students pointed out discrepancies between what they knew and what they produced. Samples of the students' work were reviewed during classes and interviews. 
These interviews lasted between 20 and 40 min, with interviews later in the unit lasting longer than those following data collection. Additionally, lessons reviewing for GCSE examinations were observed and students were interviewed immediately following these examinations for approximately 0.5 h. The units of work observed were as follows.

    • 1 Year 11 Biology class—assessed practical: enzyme catalysis (3 lessons, 6 student interviews, 2 teacher interviews)

    • 1 Year 11 Physics class—assessed practical: springs (4 lessons, 10 student interviews, 1 teacher interview)

    • 1 Year 11 Chemistry class—investigation: rates of reaction (2 lessons, 2 student interviews, 1 teacher interview) (This unit was cut short due to the death of the teacher.)

    • 3 Year 11 Chemistry classes—assessed practical: rate of reaction (9 lessons, 11 student interviews, 1 teacher interview)

    • 2 Year 11 Biology classes—inheritance problems (4 lessons, 4 student interviews, 1 teacher interview)

    • 2 Year 10 Physics classes—assessed practical: electrical resistance (10 lessons, 6 student interviews, 2 teacher interviews)

    • 2 Year 10 Biology classes—assessed practical: osmosis (5 lessons, 5 student interviews, 1 teacher interview)

    • 2 Biology, 3 Chemistry, 2 Physics classes—GCSE review sessions (10 classes, 4 student interviews)

    In two cases a class of students appeared in more than one unit. Thus, excluding the review sessions, the study included 10 classes of students, with 60 students being directly involved through interviews and/or work samples.

    Student sampling decisions were made based on detailed information from the study site. For example, the physical arrangement of the room and the number of students in work groups partially determined how many students could usefully be observed in one lesson. As Cooper and McIntyre (1996, pp. 28–29) find, the ideal of interviewing all students about the lessons was impractical and impossible. They explain their alternative:

    In order to minimize the potentially negative effects of failing to interview all pupils a sampling procedure was operated. This involved gathering data from the teachers about their perceptions of individual differences among members of the teaching group, through interviews and brief written comments. On the basis of these data it was possible to ensure that the pupils interviewed were broadly representative in terms of the salient differences among them as perceived by teachers.

    Stake (1994, p. 244) suggests for within-case sampling that the “researcher notes attributes of interest ... discusses these characteristics with informants, gets recommendations.... The choice is made, assuring variety but not necessarily representativeness, without strong arguments for typicality,” thus prioritizing the opportunity to learn. As the study's focus was the students' data handling, the most important student feature for which some sort of representative sample was desirable was data handling performance. During student selection, science teachers were consulted in order to ensure the inclusion of highly skilled, middle-range, and low-performing students in the groups interviewed, with consideration for gender balance influencing the selection.

    The data were analyzed by developing, testing, and modifying assertions about the students' explanations through multiple readings of the student interview transcripts (Tobin and Fraser, 1987; Tobin and Gallagher, 1987; Anderson and Burns, 1989; Maykut and Morehouse, 1994; Millar et al., 1994; Cooper and McIntyre, 1996). The percentage of students who provided explanations in each category was calculated for each class and the entire sample. The conclusion discusses the outcomes for student learning and performance associated with the explanation categories. Marked papers and student examination marks were unavailable due to student and teacher confidentiality issues, the honoring of which was a condition of school access. However, even if the school had provided students' examination scores, no question-by-question analysis is conducted by the Examination Boards. Thus, it would be impossible to ascertain whether a high or low score in Science was due to data handling proficiency or the other 75% of the examination material. In this study judgments of learning were based on students' claims of understanding, demonstrations of understanding in interviews, and samples of work reviewed during interviews and classes. Claims are not made about student scores, but about the quality and quantity of learning that appeared to occur in these circumstances.
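The per-category tallying described above can be sketched in code. This is an illustrative reconstruction only: the category labels come from the paper, but the coded-transcript data structure and the sample values are assumptions made for the example, not the study's actual data.

```python
# Hypothetical sketch of the tallying step: for each coded interview,
# compute the share of students whose explanations fall in each category.
from collections import defaultdict

CATEGORIES = [
    "implementing correct procedures", "following instructions",
    "earning marks", "doing what is easy", "acting automatically",
    "working within limits",
]

def category_percentages(coded_interviews):
    """coded_interviews: {student_id: set of category labels mentioned}.

    A student counts once per category, however many times they
    mentioned it, matching the 'at least one statement' criterion.
    """
    counts = defaultdict(int)
    for categories in coded_interviews.values():
        for c in categories:
            counts[c] += 1
    n = len(coded_interviews)
    return {c: 100 * counts[c] / n for c in CATEGORIES}

# Invented sample of four coded interviews
sample = {
    "s1": {"earning marks", "following instructions"},
    "s2": {"implementing correct procedures"},
    "s3": {"earning marks"},
    "s4": {"implementing correct procedures", "earning marks"},
}
pcts = category_percentages(sample)
print(pcts["earning marks"])  # → 75.0
```

Counting students rather than statements is the design choice that makes the reported figures (e.g., "Seventy-seven percent of the students interviewed made at least one statement...") interpretable as proportions of the sample.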

    FINDINGS

    Both spontaneously and in response to questions students provided explanations for their choices and behaviors with regard to handling data. These student explanations fall within six categories, labeled 1) implementing correct procedures, 2) following instructions, 3) earning marks, 4) doing what is easy, 5) acting automatically, and 6) working within limits. These categories emerged from the data and use student language as closely as possible. The categories and their combination form the bases for the seven assertions about student motivations while handling data in science activities.

    Some Students Claimed to Base Their Data Handling on “Implementing Correct Procedures”

    The “implementing correct procedures” category consists of explanations students gave when they provided what they believed to be accepted criteria for their decisions about data handling. Seventy-seven percent of the students interviewed made at least one statement that fell into this category, including 100% of the students whose work was not currently being assessed. While none of the students explicitly said that they were “implementing correct procedures,” the explanations supporting this assertion demonstrated the students' beliefs that there are right and wrong ways to handle data and they were doing it the right way. In some cases, these explanations were based in accepted scientific practice; however, even when the students' scientific facts were erroneous, the motivation behind the explanation was a desire to follow what they believed were “correct procedures.” These explanations appeared especially frequently in their discussions about how much and what type of data to collect.

    Researcher: Why did you choose six different concentrations (of chemical solutions)?

    Winston: Because we wanted to get enough so that we could see a pattern developing in our results, and we thought that would be the right number.

    Winston knew that patterns would be important for later data interpretation and was seeking the correct number to allow him to proceed accurately. Students also gave “correct procedures” explanations when they described how to analyze and interpret data. For example, in this group, John spoke as his two partners nodded in accord.

    John: We're recording the voltage across the wire and the amps. And we do it five times and we average out the results.

    Researcher: Why do you do it five times?

    John: So we get an average of all the results, because one might be a freak result and where you got everything wrong or something. So you do it to see if you get all the same numbers.

    John and his group wanted reliable data and believed that multiple trials and averaging would allow them to avoid a “freak result” or the effects of their getting “everything wrong.” This represented a widely held belief among these students that averaging was done to reduce the impact of “freak,” “stray,” or “anomalous” results. While this is not the statistical rationale for averaging data in experiments, the students in this study claimed that they were conducting multiple trials and averages for this reason. The fact that their justifications were unscientific does not lessen their legitimacy for the students, who expressed their belief in these procedures with deep conviction.
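The students' stated rationale, averaging five trials to dampen a single "freak result," can be made concrete in a short sketch. Everything here is illustrative: the readings are invented, and the median-based flagging rule with its tolerance threshold is an arbitrary formalization of what the students described, not accepted statistical practice (which, as the text notes, rests on a different rationale).

```python
# Formalizing the students' stated goal (hypothetical data): flag readings
# far from the median as "freak results," then average the rest.
from statistics import mean, median

def average_with_anomaly_check(readings, tolerance=0.2):
    """Return (mean of kept readings, list of flagged readings).

    A reading is flagged when it deviates from the median by more than
    `tolerance` as a fraction of the median -- a threshold chosen purely
    for illustration.
    """
    mid = median(readings)
    anomalies = [r for r in readings if abs(r - mid) > tolerance * abs(mid)]
    kept = [r for r in readings if r not in anomalies]
    return mean(kept), anomalies

# Five repeated voltage readings, one of them a "freak result"
voltages = [2.1, 2.0, 2.2, 2.1, 3.5]
avg, flagged = average_with_anomaly_check(voltages)
print(round(avg, 2), flagged)  # → 2.1 [3.5]
```

The contrast the paper draws is visible in the code: the students' procedure discards the outlier before averaging, whereas the statistical rationale for replication is to estimate measurement uncertainty, not to delete inconvenient points.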

    As part of their data analysis, students had to select the type of graph to include in their write-ups, sometimes attributing the decision to following a “correct procedure”:

    Norman: It's not discrete data that you have. The gas syringe could have any quantity of gas in it, so rather than drawing a bar chart, it doesn't jump from thirty-six to thirty-seven in less than an instant, but it will go through thirty-six point one two three four. Because I think it's a line graph for continuous data.

    Norman, unlike all others interviewed, correctly attributed his decision to the type of data he collected; he knew that line graphs were used for “continuous data” and that is what he believed he had. Therefore Norman followed a “correct procedure” and constructed a line graph. While Norman's reasoning stood alone in its scientific validity, other students did give explanations that demonstrated a concern for good practice, with the most common being the claim that line graphs showed patterns more clearly than other types of graphs.
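Norman's decision rule is simple enough to state as code. The function below is a minimal, invented stand-in for his reasoning, not a general chart-selection procedure: continuous measurements (a gas volume can pass through 36.1234 on its way from 36 to 37) call for a line graph, while discrete categories call for a bar chart.

```python
# A minimal sketch of Norman's rule (illustrative, not a general method):
# continuous data -> line graph; discrete/categorical data -> bar chart.
def choose_graph(data_is_continuous: bool) -> str:
    return "line graph" if data_is_continuous else "bar chart"

print(choose_graph(True))   # gas-syringe volumes → line graph
print(choose_graph(False))  # counts per category → bar chart
```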

    Some Students Claimed to Base Their Data Handling on What They Have Been Told to Do

    The “following instructions” category consists of explanations in which students claimed that the basis of their choice or behavior was doing what they were taught or told to do. Seventy-four percent of the students provided this justification at some point in their interviews. Comments such as “We aren't going to play dot-to-dot because that's what [the teacher] doesn't like” (George) typified this form of motivation. Students' attempts to follow rules passed on by their teachers also fall into this category.

    Ruth: I think that's how it—there is a kind of rule that you have to use. I think that's how it is. I think time goes up the side and the variables come along the bottom, but it might be the other way around. I have to ask about that before I do it. We have been taught that but I've forgotten which way it goes [laughing].

    Thus, the rule takes the form of instructions by the teacher, allowing the student to avoid making the decision for herself by asking the teacher to repeat the rule.

    Some students appeared to be heavily reliant on teacher instructions, even when doing supposedly independent work.

    Researcher: At what point did you decide that you were going to do averages?

    Jane: She [the teacher] kind of told us.

    Cathy: She wasn't supposed to and everyone kept saying “should we do the average” and then I don't know

    Charlotte: I think people just figured it out.

    In her interview the teacher claimed that the students had asked her whether they should average their results and that she asked them the question back, a claim that was supported by tape recordings from the lessons. However, Jane, at least, remembered the interaction as having been told to do the averaging; she considered herself to be doing what she was told. While the students knew that their teachers were restricted by assessment conditions, they still tried to ascertain what the teachers expected: had the teachers been allowed, what instructions would they have provided?

    Sometimes the practices that the students claimed to have been taught were correct scientific procedures, e.g., using lines of best fit; sometimes the practices were erroneous, e.g., always putting time on the Y-axis. The identifying factor for this category is that the students believed that they were doing what the teacher wanted and expected. The critical difference between this and the previous category is the location of the authority for the choice. In the “implementing correct procedures” category, the authority was in the method itself. Students claimed to be doing what was right because it was right; they indicated that they had made the choice themselves because it was, in some absolute way, the right choice to make. In the “following instructions” category the authority was with the teacher. In a sense, the fact that the teacher had told them to engage in a certain behavior absolved them of the responsibility of making the decision themselves.

    Some Students Claimed to Base Data Handling on Earning Marks

    The “earning marks” category consists of explanations in which students' descriptions indicated that their behavior was directed by what they must do to earn good marks on their assessed practicals or exams. Seventy percent of students made claims related to earning marks at least once in their interviews. Some student conversations revealed this as the main reason for all work in the 2 years of preparation for GCSE examinations.

    Veronica: So basically everything you write down is just trying to gain you extra marks. That's the general reason for the investigations.

    Rosie: That's the only reason we're doing it [laughing] is for the mark.

    Researcher: Do you ever do experiments in class that aren't written up as assessed practicals?

    Veronica: We used to in the first and third year, but now it's just either learning things for the final exam or doing assessed practicals. I think the teachers used the experiments when we were younger just to interest us and make us learn, but now they don't have time to do that.

    Rosie: They have to count.

    Veronica: If you're not doing theory, then you're doing an assessed practical.

    Rosie: The last two years are just aimed at GCSEs mainly. Everything goes to a GCSE.

    The marking system seemed so prevalent for these students that they perceived that all their work was directed toward earning marks: “Basically everything you write down is just trying to gain you extra marks.” When they mentioned learning, it was for examination purposes; they saw that the days when their interest in the material mattered were long past: “now they don't have time to do that.” These students' perceptions may have been reinforced by teacher comments similar to one during a Physics lesson, when the teacher admonished, “This is important. This is your GCSE practical.”

    For some students, marks served as a motivation to do high-quality work. One student discussed the extra care she was putting into her write-up for her osmosis practical because it was being marked.

    Valerie: With my write-up, I was trying to get everything in from diagrams and trying to explain what I was doing, because I'm trying to, I can't remember what my other grades were with my other investigations. But I'm trying to get better every time, trying to fit more in, so I can get a good grade, get a better grade for it.

    Valerie was motivated to do good work because of the marks she could earn. She wanted to “get better every time” in order to “get a good grade”; she wanted her performance to improve.

    Although the motivation of earning marks appeared to increase effort by some students, for others the marking scheme acted as an upper limit on performance. The latter group explained that they knew much more than they demonstrated on their assessed practicals because “You don't really need to, to get good marks” (Frank). Other students claimed that the desire to earn marks was so great that they would deliberately do work that they thought was erroneous if it was higher in the marking scheme: “Even if there's something that might be better, you still have to do the stuff on the syllabus, because otherwise you don't get the marks” (Jane). Thus, for some students, the “earning marks” motivation took priority over the “implementing correct procedures” option.

    Some Students Claimed to Base Their Behaviors on “Doing What Is Easy”

    Just over half the students in the study explained their decisions about data handling using the words “easy,” “easier,” and/or “easiest.” For some students, these reasons indicated a desire to minimize effort on their part; i.e., “easy” meant that they had to do less work or work that, for them, was at a lower level.

    Researcher: Why did you choose to do a line graph for this one?

    Joe: It was easiest [laughing].

    Researcher: Okay.

    Joe: A bar graph's really practically unlikely; I like line graphs.

    This student and others like him consistently made choices that allowed them to put the least possible effort into their work. However, when some students used these terms to justify their choices, their explanations revealed a desire for elegance in the process of handling data. Another group made the same decision, claiming that what they did was “easiest,” but their explanation was very different from Joe's.

    Harriet: Because you can see, because the shape of the line can show you easily the patterns.

    Karen: It's the simplest to understand really. You just look at it and you can see exactly what happens.

    Although they used the same terms as Joe, Harriet and Karen communicated a desire to understand their data using their graphs: “You just look at it and you can see exactly what happens.” They did not convey the impression that personal preference was an acceptable justification. They did indicate an awareness of the next step in their investigations, keeping in mind the overall purpose of graphing: “The shape of the line can show you easily the patterns.”

    Similarly, a student claimed to be using lines of best fit and line graphs for the subsequent ease of interpretation.

    Researcher: Why are smooth lines better?

    Frank: It gives a clearer indication. It's easier to draw comparisons between two lines that are smooth than two lines that perhaps intersect and bubble.

    And later,

    Frank: With a line graph it tends to be easier to see, with the steepness; if it's a steep curve then the reaction is happening quickly and if it shallows out then the reaction will begin to slow down. It's harder to see that with a bar chart or whatever. It tends to be an easy type of graph to interpret.

    While using the term “easier,” Frank described a “correct procedures” concern with identifying patterns as he justified his use of line graphs with lines of best fit, suggesting that he planned to “draw comparisons between two lines” and relate the rate of reaction to relative gradients. Karen, Harriet, and Frank used language that allowed their explanations to fall within the “easy” category, yet both groups indicated that, in this case, they were more concerned with gaining a quality product from their work than with reducing their efforts.

    Some Students Claimed to Follow Data Handling Procedures “Automatically”

    According to approximately one-fifth of the students in the study, various data handling procedures had become automatic; when they did an investigation they did not have to think about what to do with their collected data.

    Julie: In Maths, in Science, we've just been encouraged to do graphs for years, and now it just comes naturally whenever you do an investigation.

    Laura: Yeah, you don't think, “Oh we have to do.”

    Geri: You have to do a graph.

    Julie: You just do and so you do stuff like predictions hypothesis naturally, as well. So it's just a way of showing the results that you've got.

    They had been trained to do procedures that they had repeated so often that, these students claimed, the implementation of the procedures had become subconscious. Julie believed of graphing that “now it just comes naturally,” while Laura's “you don't think ... you just do” clearly communicated the removal of the conscious aspects of the process.

    Another group, discussing their resistance investigation, made claims of automatic practice.

    Jane: People just do it naturally now.

    Researcher: What do you mean “do it naturally”?

    Jane: It's just something you have to do [laughing]. That's about it, with a graph or anything.

    Charlotte: Yeah, usually you find the average of any results you have.

    Jane: Make graphs with averages. You can't make a graph with every single result, because you'd have ten million graphs.

    Cathy: And we've been told, when we do an experiment or something, we always have to do it five times, so we always have five results on one thing but less actual at the end results. So you just do it just automatically.

    According to Jane, averaging is automatic for students doing investigations, “People just do it naturally now,” with Cathy supporting this claim, “You just do it just automatically.” While “naturally” and “automatically” may not technically mean the same thing, these students used them interchangeably. Both of these groups claimed not to think consciously about what procedures to include in their investigations. While the initial impetus for averaging and graphing may have been following teacher instructions, these students indicated that teachers no longer had to tell them to do graphs and other parts of their investigations.

    Some Students Claimed That Their Behaviors Were Limited by Contextual Factors

    Almost two-fifths of the students in the study discussed contextual factors, such as time and equipment, in their explanations of their data handling. According to these students, they would have behaved differently if they had not been working under particular conditions, especially time limitations.

    Frank: The time limit that we get, two lessons, and it's not enough to really do four to six experiments. We have fifteen to do, which is ...

    Norman: We're supposed to do it on our own, as well, which is stupid because there's only fifteen gas syringes between a class of thirty.

    Frank: We're meant to share, and it doesn't work.

    Norman: There's always some people larking around [laughing].

    Frank: If we're given the time, then we do it properly but, it's too much pressure. There's not enough equipment. There's always never enough acid or anything.

    In addition to their discussion of “larking around,” some students did indicate that certain time constraints were of the students' own making. The same group who complained that they were not given enough time and equipment to collect their data later explained their poor performance as their own responsibility.

    Frank: I probably could have done a lot better if I had really had the time. Because we tend to do it the night before it is due really [laughing]. They give you six weeks but the longer they give you ...

    Norman: It doesn't matter. We're going to do it the night before anyway.

    Frank: They might as well give it to you tomorrow. It would be a better experiment that way; you'd actually remember it.

    Nevertheless, students used these factors to account partially for their data handling choices and behaviors. Such choices included fabricating data for their practicals, behavior about which they were embarrassed but which they justified by describing what they considered to be unreasonable working conditions.

    Many Students Described Multiple Motivations for Their Data Handling

    While students occasionally gave a single explanation for their behaviors and choices, they frequently provided more than one reason across consecutive interviews, within the same interview, or even within the same response to a single question. Only two students provided a single explanation for all their decisions: earning high marks. In some cases one of the multiple motivations the students discussed seemed to take precedence, while in others no clear supremacy emerged. Some students appeared to include both the “right” reason and the “real” reason, parroting the “correct” explanations they had been given but also mentioning their own underlying motivations. For example, while explaining their decisions about how many data to collect for their resistance investigation, one group included both “correct procedures” and “earning marks” reasons.

    Jane: Accuracy. Make sure you get it right. It can be just really fluky or something like that.

    Charlotte: Just in case.

    Cathy: It could be the wrong power or something, for some stupid reason.

    Jane: And you wouldn't know, unless you...

    Charlotte: And you have to do it three times. You have to repeat it three times.

    Jane: To get the accuracy and points and stuff.

    Charlotte: Averages.

    Researcher: You just talked about a whole bunch of things, so can you tell me more about the points?

    Jane: Oh yeah. It's just like for marking. If you've just done it once then you don't have a very reliable source. So you don't get as many marks as if you did it a whole load of times, got an average, and said why you think they weren't all the same and why they were the same and what you think about it.

    Jane's initial response was that they made their decisions based on what would be the most accurate. Her group's “correct procedures” explanations included incorrect rationales of avoiding “fluky” results and experimenter error, but they were about accuracy nonetheless. Later, Jane elaborated on the marking scheme, but she indicated a belief that the higher marks depended on more accurate procedures. It was unclear whether her primary motivation was being accurate or earning marks by being accurate. These multilayered motives illustrate the complexity of the problem of understanding students' choices and behaviors.

    Table 1 lists the percentages of students who provided at least one motivation in each of the categories by unit of work and overall.

    Table 1. Percentage of students who provided at least one motivation in each category by unit of work and overall

    Unit of work        Implementing correct procedures  Following instructions  Earning marks  Doing what is easy  Acting automatically  Working within limits
    Enzyme catalysis                 67                          67                   83               83                  17                   100
    Springs                          80                         100                   60               40                  80                    60
    Investigation                   100                          75                   75               75                  25                     0
    Rates of reaction                86                          43                   71               36                   0                    57
    Inheritance                     100                         100                    0                0                   0                     0
    Resistance                       67                         100                   67               67                  33                     0
    Osmosis                          67                          50                  100               50                   0                    17
    Exam sessions                     0                           0                  100                0                   0                     0
    Overall                          77                          74                   70               51                  19                    38

    DISCUSSION OF OUTCOMES

    The various explanations that students gave were associated with specific choices and behaviors, resulting in identifiable outcomes for both learning and performance. These outcomes were revealed through student explanations of their performances, including discussions of work samples. These outcomes are summarized below and related to findings in the literature.

    When students claimed to be “implementing correct procedures,” they attempted to produce the highest-quality work they could. They focused on the task rather than on external factors. Thus, their performances could be expected to reflect their knowledge and skills of data handling accurately. When students provided “correct procedures” explanations, teachers could accurately assess what the students understood and could correct their misperceptions. This was one of the two most common categories of student explanations. This “implementing correct procedures” set of explanations most closely matches Maehr's (1983) task orientation motivation and the positive outcomes associated with intrinsic motivation described by Deci et al. (2001a).

    When students claimed to be “following instructions,” they did not make choices for themselves but depended on what they believed a past or present teacher had told them. Thus, they were able to avoid responsibility for their work. Further, they relied upon recollections of teachings rather than true understandings of procedures, making them vulnerable to memory failures. Additionally, these students sometimes were able to produce work that they did not understand, simply by following a set of rules. Student work associated with this explanation provides no real insight into student understanding of science, merely memory of instructions. This category constituted the dominant motivation for a minority of students and appeared sporadically in other interviews. For some students, this category corresponds to aspects of Maehr's (1983) social solidarity motivation, as seeking teacher approval appeared to be part of the explanation. For others, it corresponds more closely to the literature regarding the negative aspects of rule following (Scardamalia and Bereiter, 1983; Larson, 1995), involving limited mental activity and lack of effort.

    When “earning marks” motivated students, they considered learning to be secondary to performance. Several students explained that they saw the marking scheme standards as the upper limit of performance. They were not willing to expend time and energy that were not rewarded by marks, so they did not demonstrate their full range of knowledge and skills. Students also admitted that the emphasis of the school system on earning high marks justified their fabricating experimental data. Thus, when students were motivated by “earning marks,” their performances frequently misrepresented their scientific understandings. This explanation competed with “implementing correct procedures” for being the most prevalent in interviews and the most powerful for the students. Many characteristics of the “earning marks” motivation coincide with Maehr's (1983) description of extrinsic rewards orientations and support Deci and co-workers' (2001b) conclusions about the negative effects of rewards on intrinsic motivation.

    When students claimed to be “doing what is easy,” they acted in one of two ways. For one set of students “easy” meant that they avoided challenges, resulting in poor performances unrelated to actual knowledge and skill levels (Norman, 1983; Loughran and Derry, 1997). Other students used words such as “easy” when they sought elegant and useful procedures, producing work at their highest levels, matching characteristics of task-oriented students (Maehr, 1983). This category demonstrates that not only must educators listen to students' explanations of their work, but they must listen carefully if they want to appreciate true levels of understanding.

    When students claimed to be “acting automatically,” they did not make conscious decisions about their choices and behaviors. Sometimes this led to their efficiently using tacit knowledge to perform data handling tasks. In other cases, students applied “automatic” behaviors inappropriately. In both instances, the choices, behaviors, and their products were unmonitored by the students. These students' explanations of their performances seem to fall outside Maehr's framework, even with the addition of an effort minimization motivation. Rather, they appear to relate to the literature of tacit knowledge (e.g., Polanyi, 1962; Woolnough, 1989; Claxton, 1997).

    When students claimed to be “working within limits,” they did not perform at their full potential. They used contextual limitations as an excuse to avoid accountability for the quality and quantity of their work. Additionally, they created limitations for themselves, which could further protect them from exposing their actual levels of knowledge and skills. These explanations may belong to Maehr's ego orientation, in that blaming contextual factors allowed students to preserve their egos, or to effort minimization, as focusing on external constraints permitted them to reduce their efforts.

    Thus, it seems that only students who were motivated by “implementing correct procedures,” and some of those who were “doing what is easy” and “acting automatically,” produced work for their assessed practicals that accurately reflected their data handling knowledge and skills. In some cases, such as when they were “following instructions,” students' final products exceeded their understandings. More commonly, however, the students' level of performance on their assessed practicals was far inferior to their potential, either because the marking scheme did not include relevant mastered techniques or because the students were able to shield themselves from responsibility for their choices and behaviors. These findings have strong implications for the reliability of conclusions drawn about levels of individual, school, and program performances when students' motivations are not fully understood. Further, this research suggests that, to maximize learning and accurately assess students' understanding, educators must resist the temptation to motivate students through extrinsic rewards, be judicious in their provision of specific instructions and standards for success, and foster a desire in students to perform their tasks completely and accurately.

    FOOTNOTES

    Monitoring Editor: Karen Kalumuck

    REFERENCES

  • Adey, P. (1996). Does motivation style explain CASE differences? A reply to Leo and Galloway. Int. J. Sci. Educ. 18, 51–53.
  • American Association for the Advancement of Science (1969). The Psychological Bases of Science—A Process Approach. Washington, DC: AAAS.
  • Ames, C., and Archer, J. (1988). Achievement goals in the classroom: Students' learning strategies and motivation processes. J. Educ. Psychol. 80, 260–267.
  • Anderson, L., and Burns, R. (1989). Research in Classrooms: The Study of Teachers, Teaching and Instruction. Oxford: Pergamon Press.
  • Archenhold, F. (ed.), Bell, J., Donnelly, J., Johnson, S., and Welford, G. (1988). Assessment of Performance Unit: Science at Age 15: A Review of APU Survey Findings 1980–84. London: Her Majesty's Stationery Office.
  • Cameron, J. (2001). Negative effects of reward on intrinsic motivation—A limited phenomenon: Comment on Deci, Koestner, and Ryan (2001). Rev. Educ. Res. 71, 29–42.
  • Cannon, R., and Simpson, R. (1985). Relationships among attitude, motivation, and achievement of ability grouped, seventh-grade, life science students. Sci. Educ. 69, 121–138.
  • Claxton, G. (1997). Hare Brain, Tortoise Mind: Why Intelligence Increases When You Think Less. London: Fourth Estate.
  • Cooper, P., and McIntyre, D. (1996). Effective Teaching and Learning: Teachers' and Students' Perspectives. Buckingham, UK: Open University Press.
  • Davis, A. (1998). Transfer, abilities and rules. J. Philos. Educ. 32, 75–106.
  • Deci, E.L., Koestner, R., and Ryan, R. (2001a). Extrinsic rewards and intrinsic motivation in education: Reconsidered once again. Rev. Educ. Res. 71, 1–27.
  • Deci, E.L., Ryan, R., and Koestner, R. (2001b). The pervasive negative effects of rewards on intrinsic motivation: Response to Cameron (2001). Rev. Educ. Res. 71, 43–51.
  • Department for Education (1995). Science in the National Curriculum. London: HMSO.
  • Donnelly, J.F., and Jenkins, E.W. (2001). Science Education: Policy, Professionalism and Change. London: Paul Chapman.
  • Driver, R., Gott, R., Johnson, S., Worsley, C., and Wylie, F. (1982). APU: Science in Schools: Age 15: Report No. 1. London: Department of Education and Science.
  • Head, J. (1985). The Personal Response to Science. Cambridge: Cambridge University Press.
  • Hodson, D. (1998a). Is this really what scientists do? Seeking a more authentic science in and beyond the school laboratory. In: Practical Work in School Science: Which Way Now?, ed. J. Wellington. London: Routledge.
  • Hodson, D. (1998b). Teaching and Learning Science: Towards a Personalized Approach. Buckingham, UK: Open University Press.
  • Hofstein, A., and Kempa, R. (1985). Motivating strategies in science education: Attempt at analysis. Eur. J. Sci. Educ. 7, 221–229.
  • Keiler, L.S. (2000). Factors affecting student data handling choices and behaviors in Key Stage 4 science. D.Phil. thesis, University of Oxford, Oxford.
  • Kvale, S. (1996). Interviews: An Introduction to Qualitative Research Interviewing. London: Sage.
  • Larson, J. (1995). Fatima's rules and other elements of an unintended chemistry curriculum. Presented at the Annual Meeting of the American Educational Research Association, San Francisco, Apr. 23.
  • Loughran, J., and Derry, N. (1997). Researching teaching for understanding: The students' perspective. Int. J. Sci. Educ. 19, 925–938.
  • Maehr, M.L. (1983). On doing well in science: Why Johnny no longer excels; why Sarah never did. In: Learning and Motivation in the Classroom, ed. S.G. Paris, G.M. Olson, and H.W. Stevenson. Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Maykut, P., and Morehouse, R. (1994). Beginning Qualitative Research: A Philosophic and Practical Guide. London: Falmer Press.
  • Merriam, S. (1988). Case Study Research in Education: A Qualitative Approach. London: Jossey-Bass.
  • Millar, R., Lubben, F., Gott, R., and Duggan, S. (1994). Investigating in the school science laboratory: Conceptual and procedural knowledge and their influence on performance. Res. Papers Educ. Policy Pract. 9, 207–248.
  • Nicholls, J.G. (1983). Conceptions of ability and achievement motivation. In: Learning and Motivation in the Classroom, ed. S.G. Paris, G.M. Olson, and H.W. Stevenson. Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Nicholls, J. (1984). Concepts of ability and achievement motivation. In: Research on Motivation in Education: Student Motivation, Vol. 1, ed. R. Ames and C. Ames. London: Academic Press.
  • Norman, D.A. (1983). Some observations on mental models. In: Mental Models, ed. D. Gentner and A.L. Stevens. London: Lawrence Erlbaum Associates.
  • Palincsar, A., and Brown, B. (1989). Instruction for self-regulated reading. In: Toward the Thinking Curriculum: Current Cognitive Research, ed. L. Resnick and L. Klopfer. Alexandria, VA: Association for Supervision and Curriculum Development.
  • Polanyi, M. (1962). Personal Knowledge: Towards a Post-critical Philosophy. London: Routledge and Kegan Paul.
  • Scardamalia, M., and Bereiter, C. (1983). Child as coinvestigator: Helping children gain insight into their own mental processes. In: Learning and Motivation in the Classroom, ed. S.G. Paris, G.M. Olson, and H.W. Stevenson. Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Schofield, J. (1990). Increasing the generalizability of qualitative research. In: Qualitative Inquiry in Education: The Continuing Debate, ed. E. Eisner and A. Peshkin. New York: Teachers College Press.
  • School Curriculum and Assessment Authority (1995). GCSE Regulations and Criteria. London: SCAA.
  • Seifert, T. (1996). The stability of goal orientations in grade five students: Comparison of two methodologies. Br. J. Educ. Psychol. 66, 73–82.
  • Shulman, L. (1986). Those who understand: Knowledge growth in teaching. Educ. Res. 15, 4–21.
  • Stake, R. (1994). Case studies. In: Handbook of Qualitative Research, ed. N. Denzin and Y. Lincoln. London: Sage.
  • Tobin, K., and Fraser, B., eds. (1987). Exemplary Practice in Science and Mathematics Education. Perth: Key Centre for School Science and Mathematics, Curtin University of Technology.
  • Tobin, K., and Gallagher, J. (1987). What happens in high school science classrooms. J. Curric. Stud. 19, 549–560.
  • Weiner, B. (1984). Principles for a theory of student motivation and their application within an attributional framework. In: Research on Motivation in Education: Student Motivation, Vol. 1, ed. R. Ames and C. Ames. London: Academic Press.
  • Wellington, J. (1998). Practical Work in School Science: Which Way Now? London: Routledge.
  • White, R.T. (1988). Learning Science. Oxford: Basil Blackwell.
  • Wolcott, H. (1994). Transforming Qualitative Data: Description, Analysis, and Interpretation. London: Sage.
  • Wolcott, H. (1995). The Art of Fieldwork. London: Altamira Press.
  • Woolnough, B. (1989). Towards a holistic view of processes in science education (or the whole is greater than the sum of its parts, and different). In: Skills and Processes in Science Education: A Critical Analysis, ed. J. Wellington. London: Routledge.