
Recent Research in Science Teaching and Learning

    Published Online: https://doi.org/10.1187/cbe.13-06-0113

    This feature is designed to point CBE—Life Sciences Education readers to current articles of interest in life sciences education as well as more general and noteworthy publications in education research. URLs are provided for the abstracts or full text of articles. For articles listed as “Abstract available,” full text may be accessible at the indicated URL for readers whose institutions subscribe to the corresponding journal.

    1. Bush SD, Pelaez NJ, Rudd JA, Stevens MT, Tanner KD, Williams KS (2013). Widespread distribution and unexpected variation among science faculty with education specialties (SFES) across the United States. Proc Natl Acad Sci USA 110, 7170–7175.

    [Available at: www.pnas.org/content/110/18/7170.full.pdf+html]

    College and university basic science departments are taking an increasingly active role in innovating and improving science education and are hiring science faculty with education specialties (SFES) to reflect this emphasis. This paper describes a nationwide survey of these faculty at private and public degree-granting institutions. The authors assert that this is the first such analysis undertaken, despite the apparent importance of SFES at many, if not most, higher education institutions. It expands on earlier work summarizing survey results from SFES in the California State University system (Bush et al., 2011).

    The methods incorporated a nationwide outreach that invited self-identified SFES to complete an anonymous, online survey. SFES are described as those “specifically hired in science departments to specialize in science education beyond typical faculty teaching duties” or “who have transitioned after their initial hire to a role as a faculty member focused on issues in science education beyond typical faculty teaching duties.” Two hundred eighty-nine individuals representing all major types of institutions of higher education completed the 95-question, face-validated instrument. Slightly more than half were female (52.9%), and 95.5% were white. There is extensive supporting information, including the survey instrument, appended to the article.

    Key findings are multiple. First, and not surprisingly, SFES are a national, widespread, and growing phenomenon. About half were hired since the year 2000 (the survey was completed in 2011). Interestingly, although 72.7% were in tenured or tenure-track positions, most did not have tenure before adopting SFES roles, suggesting that such roles are not, by themselves, an impediment to achieving tenure. A second key finding was that SFES differed significantly more between institutional types than between science disciplines. For example, SFES respondents at PhD-granting institutions were less likely to occupy tenure-track positions than those at MS-granting institutions and primarily undergraduate institutions (PUIs). Also, SFES at PhD institutions reported spending more time on teaching and less on research than their non-SFES peers. This may be influenced, of course, by the probability that fewer faculty at MS-granting institutions and PUIs have research as a core responsibility. The pattern is complex, however, because SFES at all types of institutions listed teaching, service, and research as professional activities. SFES did report that they were much more heavily engaged in service activities than their non-SFES peers across all three types of institutions. A significantly higher proportion of SFES respondents at MS-granting institutions had formal science education training (60.9%), as compared with those at PhD-granting institutions (39.3%) or PUIs (34.8%).

    A third finding dealt with success of SFES in obtaining funding for science education research, with funding success defined as cumulatively obtaining $100,000 or more in their current positions. Interestingly, the factors that most strongly correlated statistically with funding success were 1) occupying a tenure-track position, 2) employment at a PhD-granting institution, and 3) having also obtained funding for basic science research. Not correlated were disciplinary field and, surprisingly, formal science education training.

    Noting that MS-granting institutions show the highest proportions of SFES who are tenured or tenure-track, who are higher ranked, who are trained in science education, and who have professional expectations aligned with those of their non-SFES peers, the authors suggest that these institutions are in the vanguard of developing science education as an independent discipline, similar to ecology or organic chemistry. They also point out that SFES at PhD institutions appear to be a different subset, occupying primarily non–tenure track, teaching positions. To the extent that more science education research funding is being awarded to these latter SFES, who occupy less enfranchised roles within their departments, the authors suggest the possibility that such funding may not substantially improve science education at these institutions. However, the authors make it clear that the implications of their findings merit more careful examination and discussion.

    2. Opfer JE, Nehm RH, Ha M (2012). Cognitive foundations for science assessment design: knowing what students know about evolution. J Res Sci Teach 49, 744–777.

    [Abstract available: http://onlinelibrary.wiley.com/doi/10.1002/tea.21028/abstract]

    The authors previously published an article (Nehm et al., 2012) documenting a new instrument (more specifically, a short-answer diagnostic test), Assessing Contextual Reasoning about Natural Selection (ACORNS). This article describes how cognitive principles were used in designing the theoretical framework of ACORNS. In particular, the authors attempted to follow up on the premise of a National Research Council (2001) report on educational assessment that using research-based cognitive models of student learning could improve the design of items used to measure students’ conceptual understandings.

    In applying this recommendation to the design of the ACORNS, the authors were guided by four principles for assessing the progression from novice to expert in using core concepts of natural selection to explain and discuss the process of evolutionary change. The items in ACORNS are designed to assess whether, in moving toward expertise, individuals 1) use core concepts for facilitation of long-term recall; 2) continue to hold naïve ideas coexistent with more scientifically normative ones; 3) offer explanations centered on mechanistic rather than teleological causes; and 4) can use generalizations (abstract knowledge) to guide reasoning, rather than focusing on specifics or less-relevant surface features. Thus, these items prioritize recall over recognition, detect students’ use of causal features of natural selection, test for coexistence of normative and naïve conceptions, and assess students’ focus on surface features when offering explanations.

    The paper provides an illustrative set of four sample items, each of which describes an evolutionary change scenario with different surface features (familiar vs. unfamiliar taxa; plants vs. animals) and then prompts respondents to write explanations for how the change occurred. To evaluate the ability of the items to detect gradations in expertise, the authors enlisted the participation of 320 students enrolled in an introductory biology sequence. Students’ written explanations for each of the four items were independently coded by two expert scorers for the presence of core concepts and cognitive biases (deviations from scientifically normative ideas and causal reasoning). Indices were calculated to determine the frequency, diversity, and coherence of students’ concept usage. The authors also examined students’ grades in a subsequent evolutionary biology course to determine whether the use of core concepts and cognitive biases in their ACORNS explanations predicted future performance.
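    To make the index calculations concrete, the sketch below shows one way such frequency, diversity, and coherence measures might be computed from hand-coded explanations. The concept labels, coding scheme, and index definitions here are illustrative assumptions on my part, not the scoring rules reported by the authors.

```python
# Illustrative sketch only: the concept labels and index definitions below are
# simplifying assumptions for demonstration, not the scoring rules from the paper.
# Each student response to an ACORNS-style item is assumed to have been hand-coded
# into a set of labels: core concepts plus any naive-conception codes.

from typing import Dict, List, Set

CORE_CONCEPTS = {"variation", "heritability", "differential_survival"}  # assumed label set

def concept_indices(coded_items: List[Set[str]]) -> Dict[str, float]:
    """Compute simple frequency, diversity, and coherence indices from coded explanations."""
    core_hits = [item & CORE_CONCEPTS for item in coded_items]
    frequency = sum(len(hits) for hits in core_hits)               # total core-concept uses
    diversity = len(set().union(*core_hits)) if core_hits else 0   # distinct core concepts used
    # "Coherence" is crudely proxied here as the fraction of items in which at least one
    # core concept appears without any naive code (an assumption, not the authors' metric).
    coherent = sum(1 for item, hits in zip(coded_items, core_hits) if hits and not (item - CORE_CONCEPTS))
    coherence = coherent / len(coded_items) if coded_items else 0.0
    return {"frequency": frequency, "diversity": diversity, "coherence": coherence}

# Example: one student's four coded explanations ("need" marks a teleological, naive code)
student = [
    {"variation", "heritability"},
    {"variation", "need"},
    {"differential_survival"},
    set(),
]
print(concept_indices(student))  # {'frequency': 4, 'diversity': 3, 'coherence': 0.5}
```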

    Evidence from these qualitative and quantitative data analyses argued that the items were consistent with the cognitive model and four guiding principles used in their design, and that the assessment could successfully predict students’ level of academic achievement in subsequent study of evolutionary biology. The authors conclude by offering examples of student explanations to highlight the utility of this cognitive model for designing assessment items that document students’ progress toward expertise.

    3. Sampson V, Enderle P, Grooms J (2013). Development and initial validation of the Beliefs about Reformed Science Teaching and Learning (BARSTL) questionnaire. School Sci Math 113, 3–15.

    [Available: http://onlinelibrary.wiley.com/doi/10.1111/j.1949-8594.2013.00175.x/full]

    The authors report on the development of a Beliefs about Reformed Science Teaching and Learning (BARSTL) instrument (questionnaire), designed to map teachers’ beliefs along a continuum from traditional to reform-minded. The authors define reformed views of science teaching and learning as being those that are consistent with constructivist philosophies. That is, as quoted from Driver et al. (1994, p. 5), views that stem from the basic assumption that “knowledge is not transmitted directly from one knower to another, but is actively built up by the learner” by adjusting current understandings (and associated rules and mental models) to accommodate and make sense of new information and experiences.

    The basic premise posed by the authors for the instrument’s development is that teachers’ beliefs about the nature of science and of the teaching and learning of science serve as a filter for, and thus strongly influence how they enact, reform-based curricula in their classrooms. They cite a study from a high school physics setting (Feldman, 2002) to illustrate the impact that teachers’ differing beliefs can have on the ways in which they incorporate the same reform-based curriculum into their courses. They contend that, because educational reform efforts “privilege” constructivist views of teaching and learning, the BARSTL instrument could inform the design of teacher education and professional development by monitoring the extent to which the experiences these programs offer are effective in shifting teachers’ beliefs toward the more constructivist end of the continuum.

    The BARSTL questionnaire described in the article has four subscales, with eight items per subscale. The four subscales are: a) how people learn about science; b) lesson design and implementation; c) characteristics of teachers and the learning environment; and d) the nature of the science curriculum. In each subscale, four of the items were designed to be aligned with reformed perspectives on science teaching and learning, and four with a traditional perspective. Respondents indicate the extent to which they agree with the item statements on a 4-point Likert scale. In scoring the responses, strong agreement with a reform-based item is assigned a score of 4 and strong disagreement a score of 1; scores for traditional items are assigned on a reverse scale (e.g., 1 for strong agreement). A more extensive characterization of the subscales is provided in the article, along with all of the instrument items (see Appendix).
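    A minimal sketch of this scoring scheme is shown below, assuming responses are recorded as 1–4 on the Likert scale. The data layout and function name are hypothetical; only the 4-point scale and the reverse scoring of traditional items come from the article.

```python
# A minimal sketch of the scoring scheme described above, assuming responses are
# coded 1-4 (strongly disagree .. strongly agree). The data layout and function
# name are hypothetical; only the 4-point scale and the reverse scoring of
# traditional items come from the article.

from typing import List

def score_barstl(responses: List[int], reformed_flags: List[bool]) -> int:
    """Sum item scores, reverse-scoring items written from a traditional perspective."""
    if len(responses) != len(reformed_flags):
        raise ValueError("one reformed/traditional flag is required per item")
    total = 0
    for value, is_reformed in zip(responses, reformed_flags):
        if not 1 <= value <= 4:
            raise ValueError("responses must be on the 4-point scale (1-4)")
        # Reformed items: strong agreement scores 4. Traditional items: reversed (5 - value).
        total += value if is_reformed else 5 - value
    return total

# Example: four items alternating reformed/traditional, all answered "agree" (3)
print(score_barstl([3, 3, 3, 3], [True, False, True, False]))  # 3 + 2 + 3 + 2 = 10
```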

    The article describes the seven-step process and associated analyses used to, in the words of the authors, “assess the degree to which the BARSTL instrument has accurately translated the construct, reformed beliefs about science teaching, into an operationalization.” The steps include: 1) defining the specific constructs (concepts that can be used to explain related phenomena) that the instrument would measure; 2) developing instrument items; 3) evaluating items for clarity and comprehensibility; 4) evaluating construct and content validity of the items and subscales; 5) a first round of evaluation of the instrument; 6) item and instrument revision; and 7) a second evaluation of validity and reliability (the extent to which the instrument yields the same results on repetition). Step 3 was accomplished by science education doctoral students who reviewed the items and provided feedback, and step 4 with assistance from a seven-person panel composed of science education faculty and doctoral students. Administration of the instrument to 104 elementary teacher education majors (ETEs) enrolled in a teaching methods course was used to evaluate the first draft of the instrument and identify items for inclusion in the final instrument. The instrument was administered to a separate population of 146 ETEs in step 7.

    The authors used two estimates of internal consistency, a Spearman-Brown corrected correlation and coefficient alpha, to assess the reliability of the instrument; the resulting values were 0.80 and 0.77, respectively, interpreted as being indicative of satisfactory internal consistency. Content validity, defined by the authors as the degree to which the sample of items measures what the instrument was designed to measure, was assessed by a panel of experts who reviewed the items within each of the four subscales. The experts concluded that items that were designed to be consistent with reformed and traditional perspectives were in fact consistent and were evenly distributed throughout the instrument. To evaluate construct validity (which was defined as the instrument's “theoretical integrity”), the authors performed a correlation analysis on the four subscales to examine the extent to which each could predict the final overall score on the instrument and thus be viewed as a single construct of reformed beliefs. They found that each of the subscales was a good predictor of overall score. Finally, they performed an exploratory factor analysis and additional follow-up analyses to determine whether the four subscales measure four dimensions of reformed beliefs and to ensure that items were appropriately distributed among the subscales. In general, the authors contend that the results of these analyses indicated good content and construct validity.
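    For readers unfamiliar with these reliability estimates, the sketch below computes coefficient alpha and a Spearman-Brown corrected split-half correlation on a hypothetical respondents-by-items score matrix. The odd/even split and the simulated data are my assumptions for illustration, so the values will not reproduce the 0.80 and 0.77 reported by the authors.

```python
# Hedged sketch of the two internal-consistency estimates named above (coefficient
# alpha and a Spearman-Brown corrected split-half correlation), computed on a
# hypothetical respondents-by-items matrix. The odd/even split and the simulated
# data are assumptions, so these values will not reproduce the reported 0.80 and 0.77.

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Coefficient alpha for a respondents-by-items matrix of item scores."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def spearman_brown_split_half(scores: np.ndarray) -> float:
    """Correlate odd-item and even-item half-test totals, then correct to full length."""
    odd_half = scores[:, ::2].sum(axis=1)
    even_half = scores[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd_half, even_half)[0, 1]
    return 2 * r / (1 + r)

# Example with simulated data: 146 respondents answering 32 items scored 1-4
rng = np.random.default_rng(0)
data = rng.integers(1, 5, size=(146, 32)).astype(float)
print(cronbach_alpha(data), spearman_brown_split_half(data))
```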

    The authors conclude by pointing out that BARSTL scores could be used for quantitative comparisons of teachers’ beliefs and stances about reform-minded science teaching and learning and for following changes over time. However, they recommend that BARSTL scores not be used to infer a given level of reform-mindedness and that they are best used in combination with other data-collection techniques, such as observations and interviews.

    4. Meredith DC, Bolker JA (2012). Rounding off the cow: challenges and successes in an interdisciplinary physics course for life sciences students. Am J Phys 80, 913–922.

    [Abstract available at: http://ajp.aapt.org/resource/1/ajpias/v80/i10/p913_s1]

    There is a well-recognized need to rethink and reform the way physics is taught to students in the life sciences, to evaluate those efforts, and to communicate the results to the education community. This paper describes a multiyear effort at the University of New Hampshire by faculty in physics and biological sciences to transform an introductory physics course populated mainly by biology students into an explicitly interdisciplinary course designed to meet those students’ needs.

    The context was that of a large-enrollment (250–320 students), two-semester Introductory Physics for Life Science Students (IPLS) course; students attend one of two lecture sections that meet three times per week and one laboratory session per week. The IPLS course was developed and cotaught by the authors, with a goal of having “students understand how and why physics is important to biology at levels from ecology and evolution through organismal form and function, to instrumentation.” The selection of topics was drastically modified from that of a traditional physics course, with some time-honored topics omitted or de-emphasized (e.g., projectile motion, relativity), and others thought to be more relevant to biology introduced or emphasized (e.g., fluids, dynamics). In addition, several themes not always emphasized in a traditional physics course but important in understanding life processes were woven through the IPLS course: scaling, estimation, and gradient-driven flows.

    It is well recognized that life sciences students need to strengthen their quantitative reasoning skills. To address their students’ needs in this area, the instructors made online tutorials available to students, de-emphasized mathematical proofs that the students are not expected to use, and incorporated Modeling Instruction labs that require students to model their own data with an equation and articulate a verbal link between their equations and the physical world.

    Student learning outcomes were assessed through the use of the Colorado Learning Attitudes about Science Survey (CLASS), which measures students’ personal epistemologies of science by their responses on a Likert-scale survey. These data were supplemented by locally developed, open-ended surveys and Likert-scale surveys to gauge students’ appreciation for the role of physics in biology. Students’ conceptual understanding was evaluated using the Force and Motion Conceptual Evaluation (FMCE) and the Test of Understanding Graphs in Kinematics (TUG-K), as well as locally developed, open-ended physics problems that probed students’ understanding in the context of biology-relevant applications and whether their understanding of physics was evident in their use of mathematics.

    The results broadly supported the efficacy of the authors’ approaches in many respects. More than 80% of the students very strongly or strongly agreed with the statement “I found the biological applications interesting,” and almost 60% of the students very strongly or strongly agreed with the statements “I found the biological applications relevant to my other courses and/or my planned career” and “I found the biological applications helped me understand the physics.” Students were also broadly able to integrate physics into their understanding of living systems; one question, for example, asked students to compare the forces on animals living in water with those on animals living on land, and 91% of the students were able to describe at least one key difference between motion in air and water. Gains in the TUG-K score averaged 33.5% across the 4 yr of the course offering and were consistent across items. However, the positive attitudes about biology applications in physics were not associated with gains in areas of conceptual understanding measured by the FMCE instrument. These gains were more mixed than those from the TUG-K and dependent on the concept being evaluated, with values as low as 15% for some concepts and an average gain on all items of 24%. Overall, the gains on the two instruments designed to measure physics understanding were described by the authors as being “modest at best,” particularly in the case of the FMCE, given that reported national averages for reformed courses for this instrument range from 33 to 93%.
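    The percentage gains quoted here are consistent with the normalized gain commonly reported for concept inventories, although whether the authors used exactly that definition is an assumption on my part; the short calculation below illustrates the usual formula.

```python
# The percentage gains quoted above are consistent with the normalized gain commonly
# reported for concept inventories, but whether the authors used exactly this
# definition is an assumption here; this is an illustrative calculation only.

def normalized_gain(pre_pct: float, post_pct: float) -> float:
    """Percentage of the possible pre-to-post improvement actually achieved."""
    if pre_pct >= 100:
        raise ValueError("a perfect pretest score leaves no room for gain")
    return 100 * (post_pct - pre_pct) / (100 - pre_pct)

# Example: a class averaging 40% on the pretest and 60% on the posttest
print(normalized_gain(40, 60))  # ~33.3, in the "modest" range discussed above
```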

    The authors summarize by identifying considerations they think are essential to the design and implementation of an IPLS-like course: 1) the need to streamline the coverage of course topics to emphasize those that are truly aligned with the needs of life sciences majors; 2) the importance of drawing from the research literature for evidence-based strategies to motivate students and aid in their development of problem-solving skills; 3) taking the time to foster collaborations with biologists who will reinforce the physics principles in their teaching of biology courses; and 4) considering the potential constraints and limitations to teaching across disciplinary boundaries and beginning to strategize ways around them and build models for sustainability. The irony of this last recommendation is that the authors report having suspended the teaching of IPLS at their institution because of resource constraints. They recommend that institutions claiming to value interdisciplinary collaboration find innovative ways to reward and acknowledge such collaborations, because “external calls for change resonate with our own conviction that we can do better than the traditional introductory course to help life science students learn and appreciate physics.”

    I invite readers to suggest current themes or articles of interest in life science education, as well as influential papers published in the more distant past or in the broader field of education research, to be featured in Current Insights. Please send any suggestions to Deborah Allen ().

    REFERENCES

  • Bush SD, Pelaez NJ, Rudd JA, Stevens MT, Tanner KD, Williams KS (2011). Investigation of science faculty with education specialties within the largest university system in the United States. CBE Life Sci Educ 10, 25–42.
  • Driver R, Asoko H, Leach J, Mortimer E, Scott P (1994). Constructing scientific knowledge in the classroom. Educ Res 23, 5–12.
  • Feldman A (2002). Multiple perspectives for the study of teaching: knowledge, understanding, and being. J Res Sci Teach 39, 1032–1055.
  • National Research Council (2001). Knowing What Students Know: The Science and Design of Educational Assessment, Washington, DC: National Academies Press.
  • Nehm RH, Beggrow EP, Opfer JE, Ha M (2012). Reasoning about natural selection: diagnosing contextual competency using the ACORNS instrument. Am Biol Teach 74, 92–98.