
Knowledge of Learning Makes a Difference: A Comparison of Metacognition in Introductory and Senior-Level Biology Students

    Published Online: https://doi.org/10.1187/cbe.18-12-0239

    Abstract

    Metacognitive regulation occurs when learners regulate their thinking in order to learn. We asked how introductory and senior-level biology students compare in their use of the metacognitive regulation skill of evaluation, which is the ability to appraise the effectiveness of an individual learning strategy or an overall study plan. We coded student answers to an exam self-evaluation assignment for evidence of evaluating (n = 315). We found that introductory and senior students demonstrated similar ability to evaluate their individual strategies, but senior students were better at evaluating their overall plans. We examined students’ reasoning and found that senior students use knowledge of how people learn to evaluate effective strategies, whereas introductory students consider how well a strategy aligns with the exam to determine its effectiveness. Senior students consider modifying their use of a strategy to improve its effectiveness, whereas introductory students abandon strategies they evaluate as ineffective. Both groups use performance to evaluate their plans, and some students use their feelings as a proxy for metacognition. These data reveal differences between introductory and senior students, which suggest ways metacognition might develop over time. We contextualize these results using research from cognitive science, and we consider how learning contexts can affect students’ metacognition.

    INTRODUCTION

    Metacognition is a potentially powerful yet largely underutilized mechanism for helping undergraduates succeed in biology courses. Students with strong metacognitive regulation skills know how to select and implement learning strategies as part of their study plans. They can evaluate the effectiveness of their individual strategies as well as their overall plans. Metacognitive students can then use their evaluations to plan for future learning. These abilities can have a positive impact on learning and achievement (Wang et al., 1990), but many students come to college without strong metacognitive skills. To help biology undergraduates develop these skills, we need to understand the ways in which student metacognition can change in college. One way to do this is to compare the use of metacognitive regulation skills in introductory and senior-level biology students. The knowledge gained from these comparisons can then be used to help undergraduates enhance their use of metacognition early in their college careers.

    Metacognition is our awareness and control of thinking (Cross and Paris, 1988). The metacognition framework consists of two parts: metacognitive knowledge and metacognitive regulation (Brown, 1978; Jacobs and Paris, 1987). Metacognitive knowledge is what learners know about their own thinking, including what they know about approaches for learning (Brown, 1978; Jacobs and Paris, 1987; Schraw and Moshman, 1995). Students show metacognitive knowledge when they distinguish between concepts they do and do not know. For example, students who know that they understand the phases of mitosis but realize that they do not understand how mitosis differs from meiosis display metacognitive knowledge. Metacognitive knowledge is important for learning, but it will not result in learning if a student does not act on this information (Veenman, 2005). For example, just because students realize they do not know how mitosis differs from meiosis does not mean they will learn the differences. What students do in order to learn what they do not already know involves metacognitive regulation.

    Metacognitive regulation is how learners regulate their thinking for the purpose of learning, and it includes the actions taken in order to learn (Sandi-Urena et al., 2011). There are three major metacognitive regulation skills: planning, monitoring, and evaluating (Schraw and Moshman, 1995; Ambrose, 2010). Students plan for a learning task by selecting approaches for learning and determining when they will study. For example, a student may decide to highlight the textbook after each lecture, answer practice questions on weekends, and meet with a study group the day before the exam. Students monitor by enacting their approaches and considering how well these approaches are serving their learning in real time. For example, while reading the textbook, students can ask themselves how well highlighting the book is helping them to learn. After completing a learning task, students evaluate by appraising the effectiveness of 1) their individual strategies for learning and 2) their overall study plans (Schraw, 1998). For example, upon reflection, a student can determine that the individual strategies of answering practice questions and meeting with a study group were helpful, while highlighting the textbook was not. While evaluating their overall study plan, students may realize that, as a whole, their plans helped them learn concepts, but their plans did not prepare them to apply concepts. Metacognitive regulation skills can form a cycle that students can continue by using their appraisals to plan for future learning (Jacobs and Paris, 1987). For example, a student may decide to continue answering practice questions, but may decide to modify this approach by answering the questions in writing rather than mentally.

    Planning, monitoring, and evaluating are also important parts of self-regulated learning (Zimmerman, 1986; Schraw et al., 2006; Sebesta and Speth, 2017). Metacognitive regulation skills can contribute to undergraduate success in biology, but many students come to college without having developed these skills, because they did not need metacognition to succeed in high school (McGuire, 2006). Thus, undergraduate science courses can be a catalyst for metacognitive development (Dye and Stanton, 2017), because students use metacognition when they find learning to be important and challenging (Carr and Taasoobshirazi, 2008).

    As an initial step toward understanding how metacognition develops in undergraduates, we studied introductory biology students’ use of metacognitive regulation skills in the context of exam preparation (n = 245; Stanton et al., 2015). We used content analysis to examine students’ open-ended self-evaluation assignments for evidence of their ability to plan and evaluate. Half of the introductory biology students (49.0%) evaluated the effectiveness of their individual strategies for learning for the first exam. Nearly all of the introductory students were willing to select new learning strategies for future exams, but only about half (44.9%) used their evaluations of their overall study plans while planning for the second exam. Because effective metacognition involves action, we explored whether students who evaluated and planned for exam 2 carried out their new plans. Half of these students (49.0%) failed to follow their plans. Interestingly, many students explained that this was because they did not know how to carry out their plans. These data suggest that prompting students to use metacognition is enough for some introductory biology students to take action, but other students need additional help in order to respond optimally. After gaining insights into introductory biology students’ metacognition, we were interested in understanding how undergraduates use metacognitive regulation skills later in their college careers.

    We investigated metacognition in senior-level biology students who had already developed metacognitive regulation skills or were actively developing them (Dye and Stanton, 2017). We focused on the metacognitive regulation skill of evaluation in order to examine it deeply. Our goal was to understand, through analysis of semistructured interview data, when, why, and how undergraduates evaluate (n = 25). Most senior students evaluated their individual strategies for learning when they earned an unsatisfactory grade on an exam, which is an external indicator of effectiveness. Only a few students evaluated using internal indicators; they appraised their strategies when they could not answer questions on practice exams. Senior students evaluated their strategies based on their ability to obtain and recall information or their ability to use information. Importantly, all but one of the senior students in our study could evaluate their individual strategies for learning and their overall study plans when prompted. Having studied the skill of evaluation in introductory biology students through assignments and senior biology students’ metacognition through interviews, our next step was to make more direct comparisons between the two groups.

    We wanted to compare introductory and senior-level biology students’ use of the metacognitive regulation skill of evaluation to identify changes that might occur over time. Using our previous research for preliminary comparisons proved to be problematic, not only because of the different data-collection methods, but also because the two studies described earlier were conducted at two different universities. Like other complex constructs, metacognition can be affected by the context in which learning takes place (McCardle and Hadwin, 2015). For example, different institutions may have different academic settings that can affect student use of metacognitive skills. To address this concern, we investigated student metacognition at the same university using a cross-sectional study design. We addressed three research questions:

    1. How do introductory and senior-level biology students compare in their use of the metacognitive regulation skills of evaluation?

    2. How do biology students evaluate the effectiveness of their individual learning strategies, that is, what is their reasoning when they identify a strategy that worked well or a strategy that did not work well for their learning?

    3. How do biology students evaluate the effectiveness of their overall study plans?

    METHODS

    Participants and Context

    Participants were undergraduate biology students enrolled in an introductory biology course (BIOL 1107) or a senior-level cell biology course (CBIO 3400) at a public southeastern university that has a Carnegie Classification of R1 or “doctoral university with highest research activity.” BIOL 1107 is a beginning biology course for science majors and is focused on concepts in cell biology and genetics. It includes lecture and lab components and is taken primarily by first-year and sophomore students. In lecture, students work in small groups on worksheets that require them to apply course concepts. CBIO 3400 is a capstone course for life science majors. It includes lecture and breakout session components and is taken almost exclusively by seniors. In breakout sessions, students learn experimental methods, analyze data, evaluate evidence, design experiments, and predict experimental outcomes through small-group interactions. Additional information about the two courses is provided in Supplemental Table 1. All participants gave written consent, and the University of Georgia Institutional Review Board approved this study (#STUDY00001123).

    Data Collection

    Data on student metacognition were collected using a published self-evaluation assignment (Dye and Stanton, 2017). The two-page assignment was given after the first exam in each course. The assignment was designed to measure metacognitive regulation, including three related evaluation skills: evaluating effectiveness of individual learning strategies, evaluating ineffectiveness of individual learning strategies, and evaluating overall study plans. The assignment included one open-ended prompt and 12 open-ended questions. For example, one question asked students how they studied for the first exam and encouraged them to list all the learning strategies they used. The next questions asked, “Which study strategies (from your list above) worked well for you?” and “Why did these study strategies work well for you?” to examine students’ ability to evaluate the effectiveness of their individual learning strategies. The assignment was given on paper after students saw their exam grades, and it was turned in to the researchers (rather than the instructors) 5 days later. Students earned points on the assignment as part of their regular homework or participation points, which amounted to 1–1.5% of the total course grade. Of the 229 senior-level CBIO 3400 students invited to participate in the study, 221 completed the assignment (96.5%), and 174 consented to participate (76.0%). Of the 261 introductory-level BIOL 1107 students invited to participate, 231 completed the assignment (88.5%), and 141 consented to participate (54.0%).

    Qualitative Data Analysis

    Written data were first analyzed using a deductive approach called content analysis. For content analysis, we used preexisting codes derived from the metacognition framework and our research on metacognition. We first read all self-evaluation assignments with metacognitive regulation in mind, while also considering other ideas in the data, in a process known as open coding (Saldaña, 2013). Next, we revised a codebook developed through our previous studies (Stanton et al., 2015; Dye and Stanton, 2017) to label data with codes to indicate the level of evidence students provided for three related evaluation skills: 1) evaluating effectiveness of individual learning strategies, 2) evaluating ineffectiveness of individual learning strategies, and 3) evaluating overall study plans (Supplemental Table 2). It is important to note that what a student writes on a self-evaluation assignment may not capture all of the tacit thoughts that constitute his or her metacognition. Therefore, we cannot conclude that a lack of written evidence of metacognition means a student is not metacognitive. Thus, we labeled participants’ answers as providing sufficient evidence, partial evidence, or insufficient evidence of evaluating, and we use this same language in Tables 1–3. These labels constitute a three-level magnitude code, in which codes indicate the level of content found in the data (Saldaña, 2013). We coded in an iterative cycle that included individual coding, group discussion, and revision of the codebook. Once the codebook was revised for the current data set (Supplemental Table 2), we again used this cycle to magnitude code the assignments in approximately 20% increments. We coded for each of the three evaluation skills separately, meaning that we looked for evidence of evaluating effectiveness of individual strategies separately from looking for evidence of the other two evaluation skills. In some cases, students provided evidence of an evaluation skill in response to an unrelated question, so we found we had to examine the entire assignment when coding for each skill, not just the portion designed to assess that form of evaluation.
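    Although the coding itself was done by hand, the structure of the resulting data set is simple to sketch. The Python snippet below is our illustration, not part of the published analysis; the student IDs and assigned values are invented, while the three evidence levels and three skills come from the codebook described above.

```python
# Minimal sketch (hypothetical) of the data structure produced by magnitude
# coding: each consented student receives one of three evidence levels for
# each of the three evaluation skills.
EVIDENCE_LEVELS = ("sufficient", "partial", "insufficient")
SKILLS = (
    "effective_strategies",    # evaluating effectiveness of individual strategies
    "ineffective_strategies",  # evaluating ineffectiveness of individual strategies
    "overall_plan",            # evaluating overall study plans
)

# One record per participant (IDs and values invented for illustration).
coded_assignments = {
    "intro_001": {"effective_strategies": "sufficient",
                  "ineffective_strategies": "partial",
                  "overall_plan": "insufficient"},
    "senior_001": {"effective_strategies": "partial",
                   "ineffective_strategies": "sufficient",
                   "overall_plan": "partial"},
}

def tally(records, skill):
    """Count students at each evidence level for one evaluation skill."""
    counts = dict.fromkeys(EVIDENCE_LEVELS, 0)
    for codes in records.values():
        counts[codes[skill]] += 1
    return counts

print(tally(coded_assignments, "overall_plan"))
```

    Tallies of this form, computed per course, are what feed the contingency tables analyzed under Statistical Analysis below.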

    TABLE 1. Evaluating individual strategies: What worked well^a

    Each entry gives the evaluating code, the percentage (and number) of introductory and senior-level biology students receiving that code, and an example student response with content analysis notes.

    Sufficient evidence: introductory 63.1% (89/141); senior 58.0% (101/174)
        Strategy that worked well: Writing down everything in their own words on blank paper (without resources)
        Why this worked well: “I forced myself to write things down to make sure I knew them. It’s easy to think you know something without actually knowing it, so writing helps.”
        Note: The student identifies a strategy that worked well and explains how the strategy helped with learning by allowing the student to monitor understanding of what he or she did and did not know.

    Partial evidence: introductory 33.3% (47/141); senior 34.5% (60/174)
        Strategy that worked well: Watching videos about course concepts
        Why this worked well: “The videos were very helpful and a great source. I prefer the videos to the other sources. Videos go more slowly and explain in simpler terms. It’s also visually appealing.”
        Note: The student identifies a strategy that works well and writes about a preference for videos because of their pace, accessibility, and visual appeal, but does not elaborate on how videos help with learning.

    Insufficient evidence: introductory 3.5% (5/141); senior 7.5% (13/174)
        Strategy that worked well: (not applicable; the student does not select any strategies that worked well and reports that all strategies worked well)
        Why they worked well: “I got a better understanding of broad information.”
        Note: The student does not identify any specific strategies that worked well and gives a general explanation for why all strategies worked well.

    ^a We asked introductory biology (n = 141) and senior-level biology students (n = 174) “Which study strategies (from your list above) worked well for you?” and “Why did these study strategies work well for you?” Using content analysis, we coded students’ answers as providing sufficient, partial, or insufficient evidence of evaluating (see Methods). The percentage and number of students in each category are shown. We performed a chi-square test of independence to determine whether there were differences in the amount of evidence introductory and senior students provided (p = 0.29, df = 2).

    TABLE 2. Evaluating individual strategies: What did not work well^a

    Each entry gives the evaluating code, the percentage (and number) of introductory and senior-level biology students receiving that code, and an example student response with content analysis notes.

    Sufficient evidence: introductory 48.9% (69/141); senior 49.4% (86/174)
        Strategy that did not work well: Typing class notes
        Why this did not work well: “I don’t really think through the information I type. I don’t think I was really absorbing the information, and I was unable to recall the information on the test.”
        Note: The student identifies a strategy that did not work well and writes about how typing notes is passive and does not require the student to think about the material in a way that aided retention or learning.

    Partial evidence: introductory 36.2% (51/141); senior 34.5% (60/174)
        Strategy that did not work well: Reading through notes from class
        Why this did not work well: “I can’t read my own handwriting and I didn’t take complete notes sometimes.”
        Note: The student identifies a strategy that did not work well and writes about the quality of the notes, but does not elaborate on how this affected learning.

    Insufficient evidence: introductory 14.9% (21/141); senior 16.1% (28/174)
        Strategy that did not work well: (not applicable; the student does not select any strategies that did not work well)
        Why this did not work well: “I should have studied more with others than by myself. I think I could have benefited more from talking through topics with others rather than dwelling on subjects I could not figure out by myself.”
        Note: The student does not identify a strategy that did not work well and writes about what he or she should have done rather than what he or she did. The student is reflecting, but not evaluating the ineffectiveness of the strategy.

    ^a We asked introductory biology (n = 141) and senior-level biology students (n = 174) “Which study strategies (from your list above) did not work well for you?” and “Why didn’t these study strategies work well for you?” We coded students’ answers as providing sufficient, partial, or insufficient evidence of evaluating using content analysis. We performed a chi-square test of independence to determine whether there were differences between the two groups (p = 0.93, df = 2).

    TABLE 3. Evaluating overall study plans^a

    Each entry gives the evaluating code, the percentage (and number) of introductory and senior-level biology students receiving that code, and an example student response with content analysis notes.

    Sufficient evidence: introductory 17.7% (25/141); senior 32.2% (56/174)
        Student’s evaluation: “My plan was somewhat effective.”
        Explanation: “I understood the details of and reasons for proteins in each pathway. However, my plan was not very time efficient and I was unable to see the big picture. It was also difficult to ‘trace’ pathways from beginning to end because of this lack of ‘big picture’ understanding.”
        Note: The student appraises the study plan in three ways (using personal insights), explaining that the plan allowed him or her to learn detailed information, was not efficient and did not include approaches for seeing the major themes, and affected the learning of whole pathways.

    Partial evidence: introductory 52.5% (74/141); senior 50.6% (88/174)
        Student’s evaluation: “My plan was moderately effective.”
        Explanation: “I did alright [on the exam], but definitely could have done better by studying more efficiently.”
        Note: The student appraises the study plan in two ways, writing about exam performance (outside information) and the efficiency of the study plan (personal insight). Yet the student does not explain why the study plan was not efficient or how this affected his or her learning.

    Insufficient evidence: introductory 29.8% (42/141); senior 17.2% (30/174)
        Student’s evaluation: “My plan was relatively effective.”
        Explanation: “I did better on the exam than I expected to compared to the average.”
        Note: The student appraises the study plan solely on performance (outside information) and does not offer personal insights on the plan’s effectiveness.

    ^a To examine evaluation of overall study plans, we asked introductory biology (n = 141) and senior-level biology students (n = 174) “How effective was your study plan for exam one? Please explain your answer.” We coded students’ answers as providing sufficient, partial, or insufficient evidence of evaluating using content analysis (see Methods). The p value from our chi-square test of independence was <0.01 (p = 0.0028, df = 2), indicating there are differences in the amount of evidence introductory and senior students provided.

    Next, we used an inductive approach to code the data for the reasoning students gave for the three types of evaluation: 1) evaluating effectiveness of individual learning strategies, 2) evaluating ineffectiveness of individual learning strategies, and 3) evaluating overall study plans. Because the reasoning undergraduate biology students use to evaluate has not been described before, these codes were derived from the data. We used descriptive coding to identify and label types of student reasoning (Miles and Huberman, 1994; Wolcott, 1994; Saldaña, 2003). In descriptive coding, a word or a few words are used to summarize a topic or idea (Saldaña, 2013). For example, a student might evaluate an individual strategy for learning as effective because the strategy helped his or her monitoring of understanding of concepts. The descriptive code for this type of reasoning was “monitoring understanding.” After labeling the reasoning behind the three types of evaluation skills for all the assignments, we used pattern coding to identify themes that emerged from the data by grouping related codes together (Miles and Huberman, 1994; Saldaña, 2013). For example, the descriptive code “wasted time” was one form of reasoning for evaluating a strategy as ineffective. In pattern coding, “wasted time” was grouped with the descriptive code “not efficient.” Next, we reanalyzed segments of the data that were given a pattern code to confirm those codes. Descriptive and pattern coding allowed us to make comparisons between introductory biology and senior-level biology students and to identify the reasoning for their evaluations.
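    The descriptive-to-pattern step can be pictured as a many-to-one grouping. The sketch below is hypothetical: only the codes “wasted time,” “not efficient,” and “monitoring understanding” are named in the text, and the pattern-code name “inefficiency” and the lookup function are our invention for illustration.

```python
# Hypothetical grouping of descriptive codes (assigned to data segments)
# under pattern codes (themes formed from related descriptive codes).
PATTERN_CODES = {
    "inefficiency": {"wasted time", "not efficient"},          # grouping named in the text
    "monitoring understanding": {"monitoring understanding"},  # descriptive code from the text
}

def pattern_for(descriptive_code):
    """Return the pattern code (theme) that groups a descriptive code, if any."""
    for pattern, members in PATTERN_CODES.items():
        if descriptive_code in members:
            return pattern
    return None  # segment keeps its descriptive code until a theme emerges

assert pattern_for("wasted time") == "inefficiency"
```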

    Throughout our coding processes, we coded to consensus to ensure rigor (Richards and Hemphill, 2018). Consensus coding was important for this study because of the complexity of metacognition as a construct. This process allowed us to uncover nuances in the data that we might have missed if we had focused on interrater reliability (Bogdan and Biklen, 2003; Denzin and Lincoln, 2005; Stanton et al., 2015). For example, consensus coding helped us to identify and discuss potentially overlooked details that provided important evidence of students’ ability to evaluate. At least two authors coded 100% of the data.

    Statistical Analysis

    We used a sample transformation design, in which qualitative data are transformed into ordinal variables (categorical data) for statistical analysis and are then interpreted along with the qualitative data (Warfa, 2016). Our three-level magnitude coding (i.e., codes of sufficient evidence, partial evidence, or insufficient evidence) provided categorical data that we could analyze using chi-square tests of independence. We used 2 × 3 chi-square analyses to determine whether there were differences in the level of evidence provided by introductory and senior-level biology students for their use of evaluation skills. Each 2 × 3 chi-square analysis has 2 degrees of freedom. All statistical analyses were performed using contingency tables in GraphPad Prism v. 7.0e (GraphPad Software, San Diego, CA).
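    Because the counts per evidence level are reported in Tables 1–3, the three tests can be reproduced outside Prism. The Python sketch below is our illustration (the published analysis used GraphPad Prism); the counts are taken directly from the tables.

```python
from scipy.stats import chi2_contingency

# Counts per evidence level (sufficient, partial, insufficient), from
# Tables 1-3; rows are introductory (n = 141) and senior (n = 174) students.
tables = {
    "Table 1 (what worked well)":       [[89, 47, 5],  [101, 60, 13]],
    "Table 2 (what did not work well)": [[69, 51, 21], [86, 60, 28]],
    "Table 3 (overall study plans)":    [[25, 74, 42], [56, 88, 30]],
}

for name, counts in tables.items():
    chi2, p, df, _expected = chi2_contingency(counts)
    print(f"{name}: chi2 = {chi2:.2f}, df = {df}, p = {p:.4f}")

# The p values match those reported in the table footnotes:
# Table 1: p ~ 0.29; Table 2: p ~ 0.93; Table 3: p ~ 0.0028 (df = 2 for each).
```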

    Quotes

    Quotes from written data have been lightly edited for clarity, as previously described (Dye and Stanton, 2017). Quotes from senior-level biology students are noted with “senior student” and quotes from introductory biology students are noted with “introductory student.”

    RESULTS

    How Do Introductory and Senior-Level Biology Students Compare in Their Use of the Metacognitive Regulation Skills of Evaluation?

    Introductory and senior students demonstrated similar ability to evaluate their individual learning strategies, but they differed in their ability to evaluate their overall study plans (Tables 1–3). Introductory and senior biology students in our study did not differ in their evaluation of individual strategies that worked well (p = 0.29, df = 2; Table 1), nor did they differ in their evaluation of individual strategies that did not work well (p = 0.93, df = 2; Table 2). The percentages of students who demonstrated each skill indicate that both groups of students were better at identifying and explaining why some of their approaches to learning were effective as compared with identifying and explaining why some approaches were ineffective (see percentages, Tables 1 and 2). These data suggest that both groups of students find it easier to determine what works, but they may have some difficulty determining what does not work. This difficulty is important to note, as it could lead students to spend their time using ineffective strategies, because they do not realize that those strategies are not helping them learn.

    A larger proportion of senior biology students than introductory students evaluated their overall study plans (p = 0.0028, df = 2; Table 3). Yet this was the metacognitive skill that both groups of students used the least, suggesting that, in general, students struggle to appraise the overall effectiveness of their study plans, as opposed to appraising a single strategy within a plan (see percentages, Tables 1–3). After magnitude coding, we analyzed the data with the goal of comparing the ways in which introductory and senior biology students used metacognitive skills. Specifically, we investigated the basis of students’ evaluations to see whether their reasoning differed.

    How Do Biology Students Evaluate the Effectiveness of Their Individual Learning Strategies?

    To understand the reasoning behind students’ evaluations, we asked students which of the learning strategies they used worked well and why the strategies they identified were effective. Both introductory and senior students wrote about the value of strategies that allowed them to self-test, practice writing, and cover all the material. Both groups also wrote about strategies that worked well because they matched their personal preference for visual, audio, or kinesthetic learning. In a few cases, a student’s explanation for why a strategy was effective was simply that he or she “liked” that strategy. Where senior and introductory students differed was in their positive evaluations of study groups, interest in monitoring understanding, desire for resources that align with exams, and reasons for the effectiveness of memorization techniques.

    Senior Students Value Study Groups Because They Understand the Benefits

    Senior students wrote about the effectiveness of studying with classmates. In particular, senior students described the benefits of explaining concepts to others. They wrote about how sharing their knowledge with peers allowed them to monitor their understanding of the material and identify areas for further study.

    “Trying to teach someone else showed me the things I didn’t know, especially when they asked questions.”—Senior student

    Not only were explanations in study groups an opportunity for senior students to find gaps in their knowledge, they also reported that study groups gave them the chance to obtain that knowledge.

    “Meeting with the study group was amazing, because it gave us each a greater incentive to study and forced us to explain the material to one another. It also allowed us to fill in the gaps in one another’s understanding.”—Senior student

    Senior students also wrote about the opportunity to learn different points of view from other students within study groups, in addition to sharing their own knowledge.

    “Talking out pathways in a small group allowed me to hear other people’s perspectives and forced me to say everything I knew from memory.”—Senior student

    In contrast, the few introductory students who reported that working with peers was an effective learning approach used study groups differently. Often their goal for studying with others was to compare answers on a practice test right before the actual exam. Only a few introductory students wrote about working with a peer on a regular basis. For example, one student wrote,

    “[Working with a group worked well because] it was nice to bounce ideas off each other.”—Introductory student

    This introductory student reported that the study group was effective but was less specific than the senior students in explaining how this approach benefited learning.

    Senior Students Value Strategies for Monitoring Their Understanding

    Senior students evaluated the effectiveness of their learning strategies based on the ability to use the approach to monitor their understanding. As mentioned earlier, study groups were a context for determining concepts that senior students did and did not know. They wrote about how study groups allowed them to identify concepts they needed to review and clarify their understanding of those concepts with peers. In addition to study groups, senior students also explained that the effectiveness of several other strategies was based on the ability to monitor understanding. For example, they wrote about answering questions and drawing pathways.

    “Answering the study questions each week in detail helped me to recognize topics I knew and topics I needed to go back and review.”—Senior student

    “Redrawing figures/pathways on my own (allowed me to) identify holes in my knowledge.”—Senior student

    Some introductory students revealed that strategies they reported to be effective may have allowed them to monitor understanding, but most of them did not write about those strategies in a way that made it clear that they understood this benefit. For example, one student wrote,

    “If I can’t answer a question on the worksheet I try to read the section on the topic in the book.”—Introductory student

    This introductory student describes using another resource to obtain knowledge when faced with a question he or she cannot answer, but this seems to be a report of the student’s procedure for using worksheets rather than an explanation of why the worksheets are effective for monitoring understanding.

    Introductory Students Value Strategies That Align with Summative Assessment

    Introductory students wrote about the effectiveness of study strategies that aligned with the exam. For example, introductory students valued tools such as worksheets and old tests, because these resources were similar in style to the exam. They reported that they appreciated the opportunity to learn more about the exam format.

    “The worksheets and old test worked well because it gave me a feel of how the questions were going to look on the test and how the wording of the questions would be.”—Introductory student

    “[The old test] got me comfortable with the style of the test and what types of questions to expect.”—Introductory student

    Introductory students wanted to understand the style of the exam, perhaps because they were taking their first college biology course. In contrast, very few senior students wrote about strategies being effective because they aligned with exam format. Yet senior students in our study were not familiar with the exam format in the cell biology course in which data collection took place (Dye and Stanton, 2017). CBIO 3400 exams were exclusively constructed-response with ∼40% of the points coming from analyzing data and predicting experimental outcomes.

    Senior and Introductory Students Value Strategies for Memorization, but Their Explanations Differ

    Although both introductory and senior students reported that strategies for memorization worked well, several senior students explained the effectiveness of those strategies based on their knowledge of how people learn. Information regarding how people learn was not covered in the course where data collection took place; senior students gained this knowledge in another way, possibly through a neuroscience or cognitive psychology course. Senior students wrote about the importance of repetition for memorization and referenced neurobiology in their explanations.

    “My approach (of repeatedly writing things down on paper) allowed me to have at least three exposures to the material, which is how we learn. It takes several repeated excitation events to develop memory in the CA3 region of the hippocampus.”—Senior student

    “My strategies allowed me to repeat the information multiple times and because of long-term potentiation, I was more able to remember this information for the exam.”—Senior student

    Introductory students also wrote about the effectiveness of memorization strategies, but they did not give explanations related to how people learn. Introductory students who wrote about effective approaches for memorization often wrote about the value of flash cards.

    “I think flash cards worked the best. It forced me to memorize information. It was like a challenge.”—Introductory student

    Introductory students who found flash cards to be effective did not focus on the benefit of repeated exposure to the material.

    How Do Biology Students Evaluate the Ineffectiveness of Their Individual Learning Strategies?

    Students who can evaluate learning strategies that are not effective may be less likely to continue using ineffective strategies, and they may use their study time more wisely than students who cannot identify strategies that do not help them learn. We asked students which strategies did not work well for them and asked them to explain their answers in order to understand how students evaluate ineffective study strategies. Both introductory and senior students identified and explained what did not work less frequently than they identified and explained what did work (Tables 1 and 2). Both groups of students explained that their strategies were ineffective if the strategy was passive instead of active, or if they did not spend enough time on the strategy. Both groups also appraised strategies by critiquing the tools they used for learning. Senior students evaluated the way they used strategies and the efficiency of their strategies. In contrast, introductory students evaluated the value of a strategy without considering how they used it.

    Senior Students Critique the Way They Use Strategies

    When senior students wrote about learning strategies that did not work well, they described the way they used the approach as the reason for its ineffectiveness. Senior students appraised how they carried out a strategy rather than just appraising the usefulness of the strategy itself. For example, some senior students practiced explaining concepts out loud, but realized that practicing this way did not help them prepare for the exam.

    “I think talking out the content instead of writing [the content] didn’t work. This is a short-answer test, so I had a hard time translating my thoughts to writing.”—Senior student

    Several senior students described engaging in a study strategy with the aid of one or more resources and then explained why this was not effective. For example, many senior students critiqued their use of notes to answer study questions.

    “I think I relied too much on the notes to answer questions related to learning objectives. While it was fine for me to use the notes to help answer some of the [questions], I didn’t always put in the extra effort needed to learn the material I was having difficulty with.”—Senior student

    Other senior students wrote about answering practice questions with a partner or study group. Some realized they would have benefited more from simulating the exam conditions more closely.

    “Doing the practice test with a partner was not as effective as I thought it would be. I think it would be better for me to do the practice test in a more test-like situation, taking the time to fully write out answers to the practice test before discussing them with a partner.”—Senior student

    Senior students like this one considered how they could modify their use of a strategy in the future, rather than discard the strategy completely. Senior students may particularly benefit from their metacognitive evaluations, because they feel capable of adapting “ineffective” strategies to enhance their learning.

    Introductory Students Critique Strategies Irrespective of the Way They Used Those Strategies

    Whereas senior students explained that the way they used a strategy was ineffective, introductory biology students wrote about a strategy being ineffective in general. For example, some introductory biology students explained why working with peers did not work well without commenting on their use of this approach.

    “The study group didn’t work the best for me unfortunately. Working with others became somewhat distracting and confusing, since we all had different interpretations on comprehending the materials and concepts.”—Introductory student

    Rather than considering how to modify this learning approach, this student planned to stop working with a study group altogether. Introductory students identified additional strategies that were not effective and explained why they would not use them again.

    “Studying the old tests [did not work well]. You don’t get answers for the old tests, so they are basically useless.”—Introductory student

    This student did not consider whether old tests still have benefits even in the absence of an instructor-provided answer key. Similarly, another introductory student wrote about the ineffectiveness of using learning objectives to study. The student explained that he or she just “read the learning objectives online.”

    “Reading the learning objectives was not as beneficial…because they are not always specific and do not tell me the actual information.”—Introductory student

    This student does not consider using the learning objectives in a different way, such as answering them as if they were questions or using them to guide use of resources such as the textbook (Osueke et al., 2018). After reading learning objectives with the goal of being told information, this introductory student simply stopped using learning objectives rather than modifying the use of the learning objectives or the goal for their use. In contrast, senior students rarely evaluated strategies as not useful without considering how they used the strategy. Instead, senior students wrote about strategies that were not efficient.

    Senior Students Identify Strategies That Are Not Efficient

    Senior students explained that their learning strategies did not work well because their strategies were a poor use of their study time. For example, some senior students noticed that their strategies were inefficient because they were redundant. Similarly, other senior students noted that they wasted time making resources for themselves that were already available to them. These students were copying an existing resource rather than elaborating on it.

    “Some of my diagrams weren’t too helpful. I feel like it took me too long to make some of my drawings and I should’ve just used the diagrams [from class].”—Senior student

    Senior students also wrote about learning strategies that were inefficient because they worked with all the information from the course rather than selecting some of the information to learn. One student described why copying notes from class was not effective and how this approach could be modified in the future.

    “Rewriting notes didn’t work very well because this took me a lot of time. It was not efficient. I will just write down key words and important reminders next time.”—Senior student

    After describing an inefficient strategy, senior students often explained how they could have used this time on a different strategy that would have contributed more to their learning. One student wrote about replacing flash cards with self-explanation:

    “I found that the Quizlet didn’t work well for me at all. Making Quizlets wasted valuable time that could have been spent on explanations of different topics.”—Senior student

    Only a few introductory biology students who wrote about an inefficient strategy provided another approach that would have been more beneficial. For example, an introductory student wrote that reading the book wasted time and that a focus on applying the material instead would have been better.

    How Do Biology Students Evaluate Their Overall Study Plans?

    To understand how students evaluate their overall plans for studying, as opposed to individual strategies within those plans, we asked students about the effectiveness of their study plans. Senior students evaluated their overall study plans more frequently than introductory students (Table 3), but the basis of their evaluations was sometimes similar. For both groups, two of the most prevalent factors students used for study plan evaluation were 1) their performance and 2) how they felt. Both groups also evaluated their plans based on issues of time, such as the amount of time they spent studying or how early they started studying. Senior students, however, also evaluated their plans based on additional factors, including the opportunity to apply knowledge, obtain an in-depth understanding, and make connections between concepts.

    Both Introductory and Senior Students Evaluate Their Plans Based on Performance

    Both introductory and senior students used their grades on exams as the basis for their evaluation of their overall study plans. Many students used this as the sole factor for appraising the effectiveness of their studying.

    “[My plan] wasn’t as effective as I hoped. I got quite a low grade, for me, on the test.”—Introductory student

    This response fits with data from our previous study of senior biology students, in which most students interviewed only evaluated their learning strategies when they earned an unsatisfactory grade (Dye and Stanton, 2017). In the current study, a low performance helped some senior and introductory students further evaluate the way they studied.

    “My study plan was pretty ineffective. I got a 45.5% on the test, so it really was not effective. I think I studied to a multiple-choice level.”—Senior student

    This student uses the external indicator of an exam grade to initially evaluate the study plan and then reflects on what could have been done differently. Other senior and introductory students indicated that they felt their study plans were effective, but their performances caused them to question their evaluations.

    “Apparently, my study plan was bad. I didn’t do very well, though I thought I knew the material.”—Introductory student

    These students had inaccurate metacognitive knowledge about what they knew and/or how prepared they were, which caused them to be surprised by their performances. Yet in their self-evaluation assignments, they did not move beyond this sense of surprise to consider what they could have done differently.

    Both Introductory and Senior Students Evaluate Their Plans Based on a Feeling (Confidence or Preparedness)

    Introductory and senior students used the way they felt about the material as a measure of the effectiveness of their study plan. For example, some students wrote about whether or not their study plans gave them a feeling of confidence.

    “My study plan was very effective for learning the material. I felt very confident and knew the material well.”—Senior student

    “My plan was not that effective. I felt like I covered everything in my study plan, but not enough to be 100% confident in my knowledge.”—Introductory student

    Other students wrote about a feeling of preparedness that led them to evaluate their study plans as effective. They also wrote about their experiences with the test as further confirmation of the effectiveness of their plans.

    “My study plan was very effective. I felt very prepared and had an answer for every question on the test.”—Senior student

    “From 1–10 [for effectiveness], my plan was about an 8. I felt very prepared for the test. I recognized the material and the format of questions on the test.”—Introductory student

    For both groups of students, their feelings of confidence or preparedness sometimes matched their achievement on an exam, but at other times, positive feelings were inaccurate and coincided with poor performance.

    Senior Students Evaluate Their Plans Based on Factors Related to Their Learning

    While both introductory and senior students used performance and feelings to evaluate their study plans, senior students also reflected on how well their plans helped them learn. For example, senior students considered whether or not their plans prepared them to apply concepts when making an evaluation.

    “My plan was somewhat effective. I had difficulty applying my understanding to concepts and problems on the exam. I need to do more than just know the information. I should be able to apply it to whatever type of question I may be presented with.”—Senior student

    This student critiques the study plan because he or she struggled to answer application questions on the exam. As a result, the student recognizes the need to go beyond just understanding the concepts. Similarly, other senior students mentioned the importance of having a depth of knowledge when they evaluated their study plans.

    “My plan appeared somewhat effective. My downfall was that I did not provide enough information nor did I study as thoroughly as the test required. I need to study the material more in-depth and maybe do more practice questions.”—Senior student

    Senior students like this one evaluated their study plans on the basis of whether they gained a complete understanding of the material, and they considered how they could improve their studying to gain a more complete understanding in the future. Similarly, some senior students considered how well they could connect the concepts they learned.

    “My plan worked well enough…I studied in a way that gave me a brief pass over the major points, but I should have better understood individual concepts and how they relate to each other.”—Senior student

    This student judged the overall study plan as only partly effective, because he or she recognized a failure to make connections between concepts. While senior students focused on how their plans helped them learn (e.g., as evidenced by an ability to make connections between concepts), introductory students considered other goals while evaluating their overall study plans. For example, introductory students evaluated based on whether their plans kept them up to date with the material.

    DISCUSSION

    We compared introductory biology students’ and senior-level biology students’ use of the metacognitive regulation skill of evaluation using an open-ended assignment. In our study, introductory and senior students demonstrated similar ability to evaluate strategies that worked and strategies that did not work. Both groups of students were better at evaluating what worked as opposed to evaluating what did not work. We found that senior students demonstrated a greater ability to evaluate their overall study plans.

    While the statistical data were interesting, we were most intrigued by the differences in the explanations given by introductory and senior students for their evaluations. We gained insights into the basis of introductory and senior students’ metacognition through thematic analysis. In this section, we connect our qualitative results to prior work in other fields such as cognitive psychology. We then synthesize the information to make suggestions for instructors who want to help their students use metacognition. Finally, we discuss why the learning context matters when studying metacognitive regulation skills.

    Knowledge about Learning Affects How Students Evaluate and Use Strategies

    What students know about how people learn can affect their metacognitive regulation. In our study, some senior students evaluated the effectiveness of their study strategies by describing how memories form in the brain. This finding fits with the idea that students need a mental model of how learning occurs in order to be metacognitive (Bjork, 1999; Kornell and Bjork, 2007). We found that senior students used their knowledge of how people learn not only to evaluate their past studying, but also to make future decisions about how to study. In contrast, introductory biology students in our study did not write about how memories form in the brain in their evaluations.

    Students can benefit when they understand how an effective strategy contributes to learning as opposed to using an effective strategy without understanding its benefits (Bjork et al., 2013). Students’ awareness of the specific advantages of a learning approach can affect the way they enact the approach (Carpenter et al., 2008). For example, senior students in our study demonstrated an understanding of the value of interactive learning. Many of the explanations senior students gave for the effectiveness of study groups are well documented in the literature. Senior students explained that studying with their peers allowed them to hear different perspectives and generate new knowledge (Chi and Wylie, 2014). They wrote about the benefits of using study groups to learn by explaining and to demonstrate their understanding (Teichert and Stacy, 2002). They also reported that study groups helped them organize information and identify gaps in their knowledge (Webb, 1989). These data suggest that senior students used study groups in specific ways because they understood the value of those approaches.

    Implication for Instructors: Students May Need Help Modifying Learning Strategies to Improve a Strategy’s Effectiveness

    Our data suggest that introductory students may discard a new learning strategy after one use, whereas senior students may try to modify their use of a new learning strategy. For example, many introductory students reported using study groups for the first time. Several of them found study groups to be “more distracting than helpful” and felt they could be more productive studying on their own. Most introductory students who found study groups to be ineffective reported that they did not plan to use study groups again. In contrast, senior students also reported that some of their learning strategies were ineffective, but many went on to consider how they could alter their use of these strategies to increase the effectiveness of the strategies.

    We recommend that instructors give introductory students specific ways to modify beneficial study strategies along with explanations of the value of enacting a strategy a certain way. For example, instructors can provide students with instruction on how to study productively in groups and what the specific benefits of each step are. Direct instruction can be helpful, because many undergraduate students are still developing the skills required for effective collaboration (Winne et al., 2013). Instructors can provide students with learning objectives and invite them to answer the objectives as if they were questions (Osueke et al., 2018). Instructors can then encourage students to discuss their answers to learning objectives with a study group (Rybczynski and Schussler, 2011). Instructors can also encourage students to take turns testing one another in a study group and providing feedback on one another’s answers (Bjork et al., 2013). Combining these suggestions with the rationale behind each step can help students use strategies more effectively. Other studies have shown that this type of instructor guidance can have an impact on the way students use learning strategies (Sabel et al., 2017).

    Students May Use Their Subjective Feelings as a Proxy for Metacognition

    Both introductory and senior students evaluated the effectiveness of their study plans based on feelings of confidence or preparedness, which suggests that they may use subjective feelings as a proxy for metacognitive evaluations. Feelings of confidence and preparedness are related to fluency, or the sense of ease or difficulty in learning and remembering information (Alter and Oppenheimer, 2009). Fluency can play an important role in metacognition, because it is information that is always available and easily accessed by a learner (Whittlesea and Leboe, 2003; Greifeneder and Bless, 2007). In addition, fluency can give a learner insight into complex information. For example, fluency can give learners a way to appraise what they know and whether their knowledge is correct (Reber and Greifeneder, 2017). Yet fluency can be an inaccurate measure of learning, because this feeling can be affected by how recently a learner encountered information and how intuitive the information is (Bjork et al., 2013). Thus, fluency can lead to “illusions of competence” (Koriat and Bjork, 2005). Instead of using subjective feelings, students should consider other factors when evaluating their study plans (see the following section on Implication for Instructors).

    Implication for Instructors: Students May Need Help Making Evaluations Based on Factors Other Than Performance

    Introductory and senior students demonstrated the ability to evaluate the effectiveness of their overall study plans less than the other evaluation skills we examined (Tables 1–3). In alignment with our previous research, we found that students in this study often used performance as the primary way they evaluated how well their study plans worked (Dye and Stanton, 2017). Although performance is indeed one indication of how effective a study plan is, a grade on an exam comes from an external source of information, rather than an evaluation made by students themselves. Our data suggest that biology students may need help considering additional factors for evaluating their plans besides performance. For examples of other factors, we can look to students in our study who evaluated their study plans based on internal sources of information. These students used metacognition to reflect on how well their plan served the goals they had for studying, such as applying information, gaining an in-depth understanding, and making connections between concepts.

    We recommend that instructors help students evaluate their study plans by asking students to consider specific questions about their plans. Instructors can give an assignment that asks students to respond in writing to “How well did your plan help you understand concepts?” and “How well did your plan help you apply concepts and make connections between concepts?” Instructors can also ask students to answer “How well did your plan help you check whether or not you understood the concepts?” and “How well did your plan help you self-test?” Another important factor instructors should ask students to consider is the efficiency of their plans. Time constraints are known to dictate decisions students make about studying (Kornell and Bjork, 2007; Morehead et al., 2016). Given the limited time undergraduate biology students may have for studying, evaluating the efficiency of a plan could be particularly important for their learning.

    Using Resources While Studying May Affect Students’ Ability to Evaluate Their Learning

    When students make predictions about what they have learned before they have been tested on that information, they use their metacognition to make a judgment of learning (JOL; Arbuckle and Cuddy, 1969). JOLs tend to be inaccurate when students’ learning conditions differ from their testing conditions (Koriat and Bjork, 2005). For example, students who always study with their book and notes open may believe that they have learned concepts that they have not actually learned. These students may be misled by their JOLs, because they have access to resources while studying that will not be available during an exam. In this situation, students’ JOLs are subject to foresight bias, which occurs when students predict how well they have learned something in the presence of that information (Koriat and Bjork, 2005). Foresight bias distorts JOLs, and it can lead to problems with metacognition that affect use of study time (Kornell and Bjork, 2007). For example, if students mistakenly believe they have learned the key concepts of cell cycle control, this belief will affect whether they continue studying that topic. If they do continue studying it, foresight bias may also affect the approaches they use for learning.

    In our study, introductory students sought situations that could lead to foresight bias and cause subsequent issues with metacognition. For example, several introductory students explained that practice exams have no value to them unless an answer key is provided. Yet studying with an answer key can lead to foresight bias: when students have the answers in front of them, it is hard for them to imagine what it would be like to encounter the questions without the answers. In contrast, several senior students in our study realized that using resources while studying distorted their sense of what they did and did not know. These students planned to study without the information in front of them by using strategies for self-testing. Importantly, they made sure they self-tested in ways that were similar to the exam, such as testing themselves by writing and answering short-essay questions. By making the learning conditions more similar to the testing conditions, students can improve the accuracy of their JOLs, which in turn may allow them to make better choices about their future study plans.

    Cross-Sectional Study Design: Understanding the Role of Context

    We investigated the metacognitive regulation skill of evaluation in undergraduate biology students toward the beginning and end of their time in college using a cross-sectional study design. By comparing the ways introductory and senior-level biology students evaluate, we uncovered clear differences between these two groups. Yet care should be taken when using these data to suggest possible ways that metacognitive skills develop in undergraduates. There are at least two reasons for taking a conservative approach to interpreting cross-sectional data. First, a cross-sectional design involves studying different participants who are at different points in their undergraduate careers. It is possible that the differences we observed are due to variation among our participants rather than to changes that tend to occur in undergraduate students over time (Sedgwick, 2014). For example, we estimate that 20–25% of the students who take the introductory biology course go on to take the senior-level biology course at the institution where data collection took place. Thus, the population of students who took the senior-level course may differ from the population of students who took the introductory course. Future work could use a longitudinal study design to follow the metacognitive development of the same undergraduates throughout their time in college. Longitudinal designs can account for individual differences, such as differences in cognitive ability (Schaie, 2005).

    A second concern with using a cross-sectional approach is that students’ use of metacognitive regulation skills can depend on the learning context (McCardle and Hadwin, 2015). For example, it is possible that features of the courses in our study made it either easier or harder for students to show their metacognitive skills. During data analysis, we noted one context-dependent feature that may have affected participants’ ability to demonstrate the metacognitive skill of evaluation. Many introductory biology students explained that reading the assigned free online textbook was not an effective study strategy, because the textbook did not align with the course. After considering the prevalence of this explanation, we contacted the instructors to ask about the textbook. We learned that the online textbook was a relatively new addition to their course, and it did not relate as well to the in-class activities and assessment as the instructors had hoped. We suspect that having an assigned resource that did not align well with the course gave introductory students a strategy that was easy to identify and evaluate as ineffective. It is possible that these introductory students would not be as successful at evaluating the ineffectiveness of their strategies had they focused less on the textbook and more on other strategies. Thus, this unique feature of the introductory biology learning context may have resulted in an overestimation of a metacognitive regulation skill among students in this course. Introductory students may have appeared as skilled as senior students at evaluating strategy ineffectiveness, when in fact they may not be (Table 2).

    Our work underscores the need to consider the learning context when investigating metacognition. For example, we encourage metacognition researchers to be aware of the ways in which course features might affect students’ ability to demonstrate their metacognitive skills. If context-related anomalies are detected during data analysis, they can be further explored by talking with instructors and students. This exploration is a type of “member reflection,” in which qualitative researchers discuss their results with participants and stakeholders to increase the credibility of their findings (Tracy, 2010). A specific form of member reflection called “member checking” involves asking participants whether researchers’ findings seem accurate, given the participants’ experience (Taylor and Lindlof, 2002). Results of member reflection can then be incorporated into the study, as we have done here. Researchers should consider how the alignment of course components (e.g., alignment between the textbook and assessment) might affect students’ ability to demonstrate their metacognitive regulation. Other contextual variables researchers might consider include the format of exams, the nature of in-class activities, and whether the instructors talk about metacognition in class. Metacognition is a complex construct to study, but we can embrace this complexity and gain further knowledge by striving to understand the context in which metacognition takes place (Hammer et al., 2018).

    CONCLUSIONS

    Introductory biology students and senior-level biology students differ not only in their use of the metacognitive regulation skill of evaluation, but also in the reasoning behind their metacognition. Senior students draw on their knowledge of how people learn when evaluating effective strategies, whereas introductory students focus on a strategy’s alignment with exams to determine its effectiveness. Senior students consider how they could modify their use of a strategy to improve its effectiveness, whereas introductory students discard strategies they evaluate as ineffective. Many introductory and senior students rely on performance to appraise their overall study plans, while others use their feelings of confidence or preparedness as a substitute for metacognitive evaluation. Thus, our data reveal differences in metacognition between introductory and senior students that suggest how their use of metacognitive regulation skills might develop over time. We invite researchers to consider how the learning context affects metacognition and other complex constructs. We will use the knowledge gained from this study to enhance undergraduates’ use of metacognition early in their college careers, with the goal of helping them succeed in biology and beyond.

    ACKNOWLEDGMENTS

    We are grateful to Johnathan Mayfield and Enya Granados for their help with preliminary data analysis. We thank Dr. Tessa Andrews and Dr. Jennifer Thompson for their insightful feedback on this research. We appreciate the time that students and instructors took to participate in this study. This research was funded by start-up funds from the University of Georgia (to J.D.S.). This material is based in part on work supported by the National Science Foundation under the Peach State Louis Stokes Alliance for Minority Participation grant number 1619689 (in support of M.J.). Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the National Science Foundation.

    REFERENCES

  • Alter, A. L., & Oppenheimer, D. M. (2009). Uniting the tribes of fluency to form a metacognitive nation. Personality and Social Psychology Review, 13(3), 219–235.
  • Ambrose, S. A. (2010). How learning works: Seven research-based principles for smart teaching (1st ed.). San Francisco, CA: Jossey-Bass.
  • Arbuckle, T. Y., & Cuddy, L. L. (1969). Discrimination of item strength at time of presentation. Journal of Experimental Psychology, 81(1), 126.
  • Bjork, R. A. (1999). Assessing our own competence: Heuristics and illusions. In Gopher, D., & Koriat, A. (Eds.), Attention and performance XVII: Cognitive regulation of performance (pp. 435–459). Cambridge, MA: MIT Press.
  • Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417–444.
  • Bogdan, R., & Biklen, S. K. (2003). Qualitative research for education: An introduction to theories and methods. Boston, MA: Pearson.
  • Brown, A. L. (1978). Knowing when, where, and how to remember: A problem of metacognition. In Glaser, R. (Ed.), Advances in instructional psychology (Vol. 1, pp. 77–165). Hillsdale, NJ: Erlbaum.
  • Carpenter, S. K., Pashler, H., Wixted, J. T., & Vul, E. (2008). The effects of tests on learning and forgetting. Memory & Cognition, 36(2), 438–448.
  • Carr, M., & Taasoobshirazi, G. (2008). Metacognition in the gifted: Connections to expertise. In Shaughnessy, M. F., Veenman, M., & Kleyn-Kennedy, C. (Eds.), Meta-cognition: A recent review of research, theory and perspectives (p. 125). Hauppauge, NY: Nova Science Publishers.
  • Chi, M. T. H., & Wylie, R. (2014). The ICAP framework: Linking cognitive engagement to active learning outcomes. Educational Psychologist, 49(4), 219–243.
  • Cross, D. R., & Paris, S. G. (1988). Developmental and instructional analyses of children’s metacognition and reading comprehension. Journal of Educational Psychology, 80(2), 131.
  • Denzin, N. K., & Lincoln, Y. S. (2005). Introduction: The discipline and practice of qualitative research. In Denzin, N. K., & Lincoln, Y. S. (Eds.), Handbook of qualitative research (3rd ed., pp. 1–32). Thousand Oaks, CA: Sage.
  • Dye, K. M., & Stanton, J. D. (2017). Metacognition in upper-division biology students: Awareness does not always lead to control. CBE—Life Sciences Education, 16(2), ar31.
  • Greifeneder, R., & Bless, H. (2007). Relying on accessible content versus accessibility experiences: The case of processing capacity. Social Cognition, 25(6), 853–881.
  • Hammer, D., Gouvea, J., & Watkins, J. (2018). Idiosyncratic cases and hopes for general validity: What education research might learn from ecology / Casos idiosincrásicos y expectativas de validez general: Lo que la investigación en educación puede aprender de la ecología. Infancia y Aprendizaje, 41(4), 625–673.
  • Jacobs, J. E., & Paris, S. G. (1987). Children’s metacognition about reading: Issues in definition, measurement, and instruction. Educational Psychologist, 22(3–4), 255–278.
  • Koriat, A., & Bjork, R. A. (2005). Illusions of competence in monitoring one’s knowledge during study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31(2), 187.
  • Kornell, N., & Bjork, R. A. (2007). The promise and perils of self-regulated study. Psychonomic Bulletin & Review, 14(2), 219–224.
  • McCardle, L., & Hadwin, A. F. (2015). Using multiple, contextualized data sources to measure learners’ perceptions of their self-regulated learning. Metacognition and Learning, 10(1), 43–75.
  • McGuire, S. Y. (2006). The impact of supplemental instruction on teaching students how to learn. New Directions for Teaching and Learning, 2006(106), 3–10.
  • Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis (2nd ed.). Thousand Oaks, CA: Sage.
  • Morehead, K., Rhodes, M. G., & DeLozier, S. (2016). Instructor and student knowledge of study strategies. Memory, 24(2), 257–271.
  • Osueke, B., Mekonnen, B., & Stanton, J. D. (2018). How undergraduate science students use learning objectives to study. Journal of Microbiology and Biology Education, 19(2), 1–8.
  • Reber, R., & Greifeneder, R. (2017). Processing fluency in education: How metacognitive feelings shape learning, belief formation, and affect. Educational Psychologist, 52(2), 84–103.
  • Richards, K. A. R., & Hemphill, M. A. (2018). A practical guide to collaborative qualitative data analysis. Journal of Teaching in Physical Education, 37(2), 225–231.
  • Rybczynski, S. M., & Schussler, E. E. (2011). Student use of out-of-class study groups in an introductory undergraduate biology course. CBE—Life Sciences Education, 10(1), 74–82.
  • Sabel, J. L., Dauer, J. T., & Forbes, C. T. (2017). Introductory biology students’ use of enhanced answer keys and reflection questions to engage in metacognition and enhance understanding. CBE—Life Sciences Education, 16(3), ar40.
  • Saldaña, J. (2003). Longitudinal qualitative research: Analyzing change through time. Walnut Creek, CA: AltaMira Press.
  • Saldaña, J. (2013). The coding manual for qualitative researchers (2nd ed.). Los Angeles, CA: Sage.
  • Sandi-Urena, S., Cooper, M. M., & Stevens, R. H. (2011). Enhancement of metacognition use and awareness by means of a collaborative intervention. International Journal of Science Education, 33(3), 323–340.
  • Schaie, K. W. (2005). What can we learn from longitudinal studies of adult development? Research in Human Development, 2(3), 133–158.
  • Schraw, G. (1998). Promoting general metacognitive awareness. Instructional Science, 26(1–2), 113–125.
  • Schraw, G., Crippen, K., & Hartley, K. (2006). Promoting self-regulation in science education: Metacognition as part of a broader perspective on learning. Research in Science Education, 36(1–2), 111–139.
  • Schraw, G., & Moshman, D. (1995). Metacognitive theories. Educational Psychology Review, 7(4), 351–371.
  • Sebesta, A. J., & Speth, E. B. (2017). How should I study for the exam? Self-regulated learning strategies and achievement in introductory biology. CBE—Life Sciences Education, 16(2), ar30.
  • Sedgwick, P. (2014). Cross sectional studies: Advantages and disadvantages. BMJ, 348, g2276.
  • Stanton, J. D., Neider, X. N., Gallegos, I. J., & Clark, N. C. (2015). Differences in metacognitive regulation in introductory biology students: When prompts are not enough. CBE—Life Sciences Education, 14(2), ar15.
  • Taylor, B., & Lindlof, T. (2002). Qualitative communication research methods. Thousand Oaks, CA: Sage.
  • Teichert, M. A., & Stacy, A. M. (2002). Promoting understanding of chemical bonding and spontaneity through student explanation and integration of ideas. Journal of Research in Science Teaching, 39(6), 464–496.
  • Tracy, S. J. (2010). Qualitative quality: Eight “big-tent” criteria for excellent qualitative research. Qualitative Inquiry, 16(10), 837–851.
  • Veenman, M. V. J. (2005). The assessment of metacognitive skills: What can be learned from multi-method designs? In Artelt, C., & Moschner, B. (Eds.), Lernstrategien und Metakognition: Implikationen für Forschung und Praxis (pp. 75–97). Berlin: Waxmann.
  • Wang, M. C., Haertel, G. D., & Walberg, H. J. (1990). What influences learning? A content analysis of review literature. Journal of Educational Research, 84(1), 30–43.
  • Warfa, A.-R. M. (2016). Mixed-methods design in biology education research: Approach and uses. CBE—Life Sciences Education, 15(4), rm5.
  • Webb, N. M. (1989). Peer interaction and learning in small groups. International Journal of Educational Research, 13(1), 21–39.
  • Whittlesea, B. W., & Leboe, J. P. (2003). Two fluency heuristics (and how to tell them apart). Journal of Memory and Language, 49(1), 62–79.
  • Winne, P. H., Hadwin, A. F., & Perry, N. E. (2013). Metacognition and computer-supported collaborative learning. In The international handbook of collaborative learning (pp. 462–479). New York: Routledge/Taylor & Francis.
  • Wolcott, H. F. (1994). Transforming qualitative data: Description, analysis, and interpretation. Thousand Oaks, CA: Sage.
  • Zimmerman, B. J. (1986). Becoming a self-regulated learner: Which are the key subprocesses? Contemporary Educational Psychology, 11(4), 307–313.