
General Essays and Articles

Recent Research in Science Teaching and Learning

    Published Online: https://doi.org/10.1187/cbe.21-03-0066

    Abstract

    The Current Insights feature is designed to introduce life science educators and researchers to current articles of interest in other social science and education journals. In this installment, I highlight three articles that explore how different types of stress can produce different educational outcomes, how studying by writing questions can improve performance, and how faculty beliefs about intelligence can influence students’ interest in and evaluation of a course.

    NOT ALL STRESS IS BAD STRESS

    Travis, J., Kaszycki, A., Geden, M., & Bunde, J. (2020). Some stress is good stress: The challenge-hindrance framework, academic self-efficacy, and academic outcomes. Journal of Educational Psychology, 112(8), 1632–1643.

    An increasing number of studies in biology education research are focusing on the negative impacts of stress on students. But is stress always negative? In this study, Travis and colleagues draw on the challenge-hindrance framework to explore the impacts of different types of stress.

    Travis and colleagues review the literature on this framework and apply it to college students. They start by defining stress as psychological arousal that results from the demands on an individual's resources for coping. Stress is elicited by elements in the environment, called stressors. Stressors can take multiple forms, but the challenge-hindrance framework organizes them into two categories based on individuals’ perceived ability to cope with the stressor. Challenge stressors are stressors that are seen as hard but doable. They are thought to bring on a form of stress arousal that increases attentiveness, motivation, persistence at a task, and ultimately performance. Hindrance stressors, on the other hand, impede progress and feel beyond a student's control. They can cause feelings of helplessness, decrease motivation to engage in a task, and, thus, lower performance. Examples of challenge stressors might be a difficult but interesting assignment or the overall number of assignments a student is responsible for in a semester. Examples of hindrance stressors in educational settings might include directions that are difficult to understand, perceptions that teachers are unfair, and the perceived amount of busy work (work in which students see no value). Arousal elicited by both challenge and hindrance stressors is associated with exhaustion, indicating that both create stress. Although this stress framework has been used extensively in management research, it has not been widely applied in education. In the current study, Travis and colleagues test the relationship between the stress generated by challenge and hindrance stressors and semester grade point average (GPA), as well as two retention variables (how many courses a student withdrew from in the focal semester and intent to transfer to a new institution).

    This study was conducted at two universities (one public and one private). Students in all departments and years were invited to participate. The majority of the data were collected midsemester. Students completed a survey with questions on how much stress different hindrance and challenge stressors had elicited from them that semester, their transfer intentions, and their academic self-efficacy. Semester GPA and the number of courses a student withdrew from were collected from the institutions at the end of the semester. Academic self-efficacy was collected as a control variable, because the extent to which a stressor elicits stress could vary among individuals based on their beliefs about their ability to successfully manage it. Appraisals of challenge stressors, for example, are subjective (i.e., what is perceived as hard by one person may not be by another), although hindrance stressors may be less so. A structural equation model was employed to determine the impact of hindrance and challenge stress on the three outcomes while simultaneously controlling for academic self-efficacy. A total of 763 students were included in the analysis. Although the diversity of students represented in the sample was an advantage in terms of the generalizability of the results, it prevented the authors from exploring the heterogeneity in their sample, because the number of students in any one group (gender, year in school, ethnicity, major, etc.) was small.
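
    To make the analytic setup more concrete, the following is a minimal sketch of how a model of this general form could be specified in Python with the semopy package. It is an illustration only, not the authors' model: the data file and variable names (challenge_stress, hindrance_stress, self_efficacy, gpa, withdrawals, transfer_intent) are hypothetical placeholders, and the authors' actual structural equation model was more elaborate.

        # Illustrative sketch (not the authors' code): a simplified structural model
        # regressing the three outcomes on challenge and hindrance stress while
        # controlling for academic self-efficacy. All column names are hypothetical.
        import pandas as pd
        import semopy

        data = pd.read_csv("stress_survey.csv")  # assumed data: one row per student

        model_desc = """
        gpa ~ challenge_stress + hindrance_stress + self_efficacy
        withdrawals ~ challenge_stress + hindrance_stress + self_efficacy
        transfer_intent ~ challenge_stress + hindrance_stress + self_efficacy
        challenge_stress ~~ hindrance_stress
        """

        model = semopy.Model(model_desc)
        model.fit(data)
        print(model.inspect())  # parameter estimates, standard errors, p-values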

    Travis and colleagues tested two models: one that treated stress due to challenge and hindrance stressors as a single construct and one that treated them as two different but related constructs. The model that treated them as different was better supported, reaffirming the idea that these two types of stressors are distinct from one another, although they were strongly positively correlated. Even with that correlation, each accounted for unique variance in GPA, and their effects on GPA ran in opposite directions: stress experienced from challenge stressors was positively associated with semester GPA, whereas stress experienced from hindrance stressors was negatively associated with it. When it came to the broader retention outcomes (number of courses withdrawn from and intent to transfer), neither hindrance nor challenge stress was a significant predictor. However, the relationships were in the expected directions: hindrance stress was associated with more withdrawals and stronger transfer intentions, whereas challenge stress was associated with fewer withdrawals and weaker transfer intentions.

    This study supports the idea that not all stress is bad stress. Some stress, when it comes from something students feel they can surmount, is motivating. If a stressor feels uncontrollable, it likely produces stress that is demotivating. This difference highlights the need to think carefully about the types of stressors in the classroom. It is also critical to recognize that student perceptions matter for the impact of stress: a stressor that is motivating to one student may be demotivating to another, depending on their perceptions of their own ability.

    GENERATING QUESTIONS IS AS GOOD FOR LEARNING AS TESTING

    Ebersbach, M., Feierabend, M., & Nazari, K. B. B. (2020). Comparing the effects of generating questions, testing, and restudying on students’ long-term recall in university learning. Applied Cognitive Psychology, 34(3), 724–736.

    In this study, Ebersbach and colleagues compare the impact of two study strategies, testing versus generating questions, on exam performance. As reviewed in their introduction, the effect of testing (vs. restudying) on learning is one of the most robust findings in cognitive psychology. The testing effect involves learners answering questions while they learn and has been shown across many studies to increase retention of that knowledge. The effect is maximized when learners can access the correct answers to the questions they miss. Thus, instructors looking to harness the testing effect should create not only questions but also feedback on those questions, or assignments in which students research the questions they got wrong. Such questions and assignments can be time-consuming to create. An alternative approach might be asking students to generate their own questions and answers. Whereas the testing effect mainly requires students to remember knowledge, question generation requires them to process the learning content as they decide what would make a good question and engages them in generative work when they answer their own questions. Given the higher-order processes involved in question generation, Ebersbach and colleagues predicted that students who generate and answer their own questions would perform better than students who are tested as they learn.

    Eighty-two students in a college psychology class were randomly assigned to one of three conditions. All students were introduced to an unfamiliar subject in a normal class session. At the end of that class session, 20 minutes were reserved for students to complete their assigned condition. Each condition was given the same materials (10 slides from the lecture students had just heard) but different directions. In the question-generation condition, students were told to write one open-ended exam question per slide, along with its answer. In the testing condition, students answered questions about each slide without help; if they could not answer a question, they were instructed to look up the answer on the slides. In the control condition, students were asked to restudy the slides and memorize the content. One week later, all students were tested on the content of the slides with a set of factual questions (focused on vocabulary) and transfer questions asking them to apply the content.

    One of the challenges of this study was its small sample size. Ebersbach and colleagues tried to account for this through their statistical approach. First, they conducted a power analysis assuming a medium effect size and found they needed ∼88 students, which was close to their sample size. They also ran two different statistical analyses on their data: linear regressions and Bayesian analyses. According to the authors, Bayesian approaches are more powerful than frequentist statistics when sample sizes are small.
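
    For readers unfamiliar with a priori power analysis, the following is a minimal sketch of this kind of calculation in Python using statsmodels. The effect size, alpha, power, and three-group design are illustrative assumptions, not the authors' exact inputs, so the resulting sample size will not match their figure of ∼88.

        # Illustrative a priori power analysis for a three-group comparison.
        # The inputs below are conventional defaults, not the authors' values.
        from statsmodels.stats.power import FTestAnovaPower

        n_total = FTestAnovaPower().solve_power(
            effect_size=0.25,  # Cohen's f for a "medium" effect in ANOVA
            alpha=0.05,        # Type I error rate
            power=0.80,        # desired statistical power
            k_groups=3,        # question generation vs. testing vs. restudying
        )
        print(f"Total sample size needed: {n_total:.0f}")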

    The researchers found that both question generation and testing increased the proportion of questions students answered correctly on the posttest overall (both raising students’ scores by 11%) relative to the restudying condition. They then ran separate regressions examining the impact of study condition on the factual versus transfer questions. They saw no significant impact of question generation on either question type relative to restudying, despite the significant difference in overall exam performance. This suggested a power problem to the authors, so they focused on the Bayesian analysis results. These showed that question generation and testing had similar positive effects on the factual questions relative to the restudying condition, and a similar increase was found on the transfer questions as well. Taken together, these results suggest that question generation and testing are equally powerful study strategies, so why not let students write the questions?

    STUDENT PERCEPTIONS OF FACULTY BELIEFS ABOUT ABILITY MATTER

    LaCosse, J., Murphy, M. C., Garcia, J. A., & Zirkel, S. (2020). The role of STEM professors’ mindset beliefs on students’ anticipated psychological experiences and course interest. Journal of Educational Psychology. Advance online publication. https://doi.org/10.1037/edu0000620

    Everyone holds beliefs about the malleability of intelligence, and those beliefs tend to fall roughly into two categories: some people believe intelligence is a fixed characteristic that cannot be changed (a fixed mindset belief), and others believe it can be increased through practice (a growth mindset belief). Ample evidence, reviewed in the introduction, has demonstrated that students’ beliefs about their own intelligence matter, but fewer studies have explored whether students’ perceptions of how others view intelligence influence them. In this study, LaCosse and colleagues explore how students’ perceptions of their science, technology, engineering, and mathematics (STEM) instructors’ beliefs about intelligence influence their appraisal of a course they are considering enrolling in or are just beginning.

    LaCosse and colleagues conducted three closely related lab experiments. Although the studies were not classroom based, the researchers did a nice job of manipulating realistic sources of information students use to form impressions of faculty. In the first study (n = 157 undergraduates), researchers had students read a profile of a chemistry professor that included three short student reviews of that professor, similar to those they might find on online professor-rating websites. These reviews were manipulated to suggest the professor held either a fixed mindset or a growth mindset. In the second study (n = 260 traditionally college-aged participants) and third study (n = 206 undergraduates), participants watched a recording of the first day of class during which a math professor reviewed his syllabus. Some of what the actor said in these recordings was manipulated to suggest he held either a growth or a fixed mindset.

    Researchers measured a suite of participant impressions after participants read or watched the manipulation. First, they measured participants’ appraisal of the faculty member's mindset to make sure the manipulation worked as planned; in all cases, the manipulations cued the intended mindset. With that check accomplished, they then measured what participants anticipated the course experience would be like, including whether the professor would treat them fairly, whether the professor would consider them a good student, and whether they would feel like they belonged in the class. The researchers also asked participants how interested they were in taking a course taught by this professor and how well they anticipated performing in the course. The researchers ran analyses of covariance with faculty mindset and participant gender (included as a binary) as predictors, along with two control variables: the participant's personal mindset beliefs and personal identification with math and science. Researchers focused on gender because previous research has demonstrated widely held stereotypes that women are not good at math or STEM, which may increase the chances that women participants will feel threatened in STEM courses.
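
    As a rough illustration of an analysis of this form (not the authors' code, and with hypothetical column names such as course_interest, faculty_mindset, personal_mindset, and stem_identification), an analysis of covariance can be run in Python with statsmodels:

        # Illustrative ANCOVA sketch: course interest predicted by the manipulated
        # faculty mindset and participant gender, controlling for participants' own
        # mindset beliefs and identification with math/science.
        # All column names are hypothetical placeholders.
        import pandas as pd
        import statsmodels.formula.api as smf
        from statsmodels.stats.anova import anova_lm

        df = pd.read_csv("mindset_study.csv")  # assumed data: one row per participant

        model = smf.ols(
            "course_interest ~ C(faculty_mindset) * C(gender)"
            " + personal_mindset + stem_identification",
            data=df,
        ).fit()

        print(anova_lm(model, typ=2))  # ANOVA table with covariates included
        print(model.params)            # covariate-adjusted effect estimates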

    The results were remarkably stable across both manipulation types (student reviews vs. first-day video) and all three studies. Participants anticipated more unfair treatment, a lower likelihood that they would be considered a good student, and a lower sense of belonging in the class of the professor with a fixed mindset compared with the class of one with a growth mindset. Participants also expected to perform worse in, and had less interest in taking, courses with the fixed-mindset professor. The difference in responses between the fixed and growth mindset treatments was much larger for women than for men. The researchers then ran moderated mediation analyses in which these anticipated experiences (belonging, fairness, and evaluation) predicted anticipated grade and course interest. They found that the three concerns consistently mediated the relationships between perceived faculty mindset and both anticipated grades and course interest. In addition, the influence of these concerns on the outcome measures was larger for women. Finally, to bring all three studies together, the researchers ran an internal meta-analysis on their results to confirm their consistency.
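
    The mediation logic can also be sketched in simplified form. The following illustrates a simple (not moderated) mediation for a single anticipated experience, using the Mediation class in statsmodels; the variable names are hypothetical, and this stands in for, rather than reproduces, the authors' moderated mediation analysis.

        # Illustrative simple mediation (a simplified stand-in for the authors'
        # moderated mediation): does anticipated belonging mediate the effect of
        # perceived faculty mindset on course interest? Column names are hypothetical;
        # faculty_mindset is assumed to be coded 0 = fixed, 1 = growth.
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.stats.mediation import Mediation

        df = pd.read_csv("mindset_study.csv")  # assumed participant-level data

        outcome_model = sm.OLS.from_formula(
            "course_interest ~ faculty_mindset + belonging", data=df
        )
        mediator_model = sm.OLS.from_formula("belonging ~ faculty_mindset", data=df)

        med = Mediation(outcome_model, mediator_model,
                        exposure="faculty_mindset", mediator="belonging")
        result = med.fit(n_rep=1000)  # simulation-based estimate of the indirect effect
        print(result.summary())       # direct, indirect (mediated), and total effects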

    This study suggests that students can appraise faculty mindsets about intelligence from very little information and that these early appraisals could color student experience in a course. To better support students, especially students who may feel more threatened, faculty may want to plan how they convey a growth mindset in their classes, especially on the first day.