
Letters to the Editor

RESPONSE: Re: The Use of a Knowledge Survey as an Indicator of Student Learning in an Introductory Biology Course

    Published Online: https://doi.org/10.1187/cbe.06-07-0174

    What is the correlation between students' confidence in their level of knowledge and comprehension in a course and their actual performance, as judged by their grades? The research by Bowers, Brandon, and Hill (Bowers et al., 2005) investigated this question using knowledge surveys developed by Nuhfer and Knipp (2003) to determine students' perceived self-efficacy (belief in one's capability to carry out an action successfully). A knowledge survey (KS) is a series of content-based questions on topics presented in the course. Students are not asked to answer the questions, but merely to indicate for each question their confidence that they could answer it correctly. Evidence from Bowers et al. did not support the claim by Nuhfer and Knipp that students' learning can be predicted by perceived self-efficacy levels; rather, their data indicated that the correlation between student confidence and final grades is negligible. In response to this finding, Nuhfer and Knipp challenged the authors' interpretation of their results (see above letter). We are writing to comment on this controversy and to address a more fundamental question: How useful are knowledge surveys as assessment tools?

    Designing assessments that provide substantive feedback about student learning in science is a difficult challenge for faculty who teach undergraduates. The process of creating and evaluating assessments should include thinking broadly about validity and reliability. Bowers et al. focused on validity: whether KS scores are valid measures of student understanding. The authors concluded that the correlations they found between KS scores and measures of student understanding were too low to support such a link. We concur with Bowers et al. that the statistical methods used in their study were appropriate and that the evidence supported their conclusions. However, Bowers et al. did not address the reliability of their assessments, that is, the reproducibility of the scores that would be obtained if the survey were administered several times to the same students.
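
    To make the validity question concrete, the following is a minimal sketch (in Python, with invented data; neither study published code, and the variable names, scale, and numbers are purely illustrative) of the kind of analysis Bowers et al. describe: correlating students' KS confidence scores with an independent measure of understanding, such as exam scores.

```python
# Illustrative sketch only: the data and the 0-2 confidence scale are assumed.
import numpy as np
from scipy import stats

# Hypothetical per-student values: mean KS confidence and final exam percentage.
ks_confidence = np.array([1.2, 1.8, 0.9, 1.5, 1.1, 1.9, 1.4, 0.7, 1.6, 1.3])
exam_score = np.array([72, 85, 64, 78, 90, 70, 81, 60, 74, 88])

# Pearson correlation between confidence and performance.
r, p = stats.pearsonr(ks_confidence, exam_score)
print(f"Pearson r = {r:.2f}, p = {p:.3f}, r^2 = {r**2:.2f}")
# A small r^2 (shared variance) is the sense in which a correlation can be
# too weak to support a link between confidence and performance.
```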

    Nuhfer and Knipp claimed high internal reliability for their instrument because scores on related questions within the KS correlated highly with one another; in effect, they took this as evidence of the overall reliability of the instrument. They also suggested that the assessments used by Bowers et al. likely had low reliability, which would depress the observed correlations between those assessments and the KS. On this basis, Nuhfer and Knipp argue in their letter that there may indeed be a valid link between KS scores and the measures of student understanding reported by Bowers et al.
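
    The internal-consistency argument can likewise be made concrete. Cronbach's alpha is one common statistic for the property Nuhfer and Knipp invoke (strong correlations among related items); neither letter reports code or names this statistic, so the sketch below is ours, with invented data and an assumed three-point (0-2) confidence scale.

```python
# Illustrative sketch, not the analysis either set of authors reports.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array of item scores, shape (n_students, n_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated KS responses: a shared "confidence" factor plus item-level noise.
rng = np.random.default_rng(0)
base = rng.normal(size=(30, 1))
items = np.clip(np.rint(1 + base + 0.3 * rng.normal(size=(30, 6))), 0, 2)

print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
# High alpha means the items covary strongly, which speaks to the reliability
# of the instrument, not to whether it validly predicts course grades.
```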

    Reliability and validity are both critical aspects of assessment. Differences in the interpretation of assessment results occur and merit productive debate. As Nuhfer and Knipp point out, assessments that define achievement in terms of lower-level thinking (e.g., knowledge and comprehension) usually have high reliability. If the goal of instruction is for students to demonstrate gains in their knowledge and comprehension of the subject, the KS may be useful. However, science involves more than mastering facts. The KS is not designed to probe students' confidence in their ability to actively engage in the processes of science or in higher-level thinking such as analysis or synthesis. As we attempt to assess students' critical thinking abilities using open-ended problems, determining reliability and validity becomes more difficult (Batzli et al., 2006). Ultimately, we wish to ascertain whether assessments reliably measure what we want students to know and be able to do, based on the goals and objectives of instruction. Are students becoming more sophisticated in their ability to solve complex problems that lack single answers or high degrees of completeness, certainty, or correctness? Is our instruction providing students with guidance and practice in doing so?

    Both papers agree that perceived self-efficacy is a key attribute when students learn difficult subjects and address complex tasks. According to theory, as self-efficacy increases, students become more willing to undertake complex tasks and to engage with complex ideas and problems (Baldwin et al., 1999). Instruction that nurtures critical thinking enables students to gain a deeper understanding of content knowledge. Specific guiding questions can facilitate students' engagement with a problem, helping them draw upon their prior knowledge and identify what they do not know (Ebert-May et al., 2006). Pretests that identify students' comprehension of, or misconceptions about, a topic can be used to guide instruction.

    The peer-reviewed literature about knowledge surveys is sparse, and the relationship of perceived self-efficacy to performance merits further research. However, if critical thinking is the ultimate goal, a KS is unlikely to be a useful assessment. The potential value of knowledge surveys thus revolves around the question of whether “covering” content by the instructor is more important in an undergraduate science course than students “uncovering” content through problem solving and critical thinking. Whether instructors choose to use the KS should depend on their student learning goals. Perhaps instructors need to increase self-confidence in their own ability to promote higher-level thinking by their students.

    REFERENCES

  • Baldwin J., Ebert-May D., Burns D. (1999). The development of a college biology self-efficacy instrument for non-majors. Sci. Educ. 83, 397-408.
  • Batzli J. M., Ebert-May D., Hodder J. (2006). Bridging the pathway from instruction to research. Front. Ecol. Environ. 4, 105-107.
  • Bowers N., Brandon M., Hill C. (2005). The use of a knowledge survey as an indicator of student learning in an introductory biology course. Cell Biol. Educ. 4, 311-322. http://www.lifescied.org/cgi/content/full/4/4/311 (accessed 16 October 2006).
  • Ebert-May D., Batzli J., Weber R. (2006). Designing research to investigate student learning. Front. Ecol. Environ. 4, 218-219.
  • Nuhfer E. B., Knipp D. (2003). The knowledge survey: a tool for all reasons. To Improve the Academy 21, 50-78. http://www.isu.edu/ctl/facultydev/resources1.html (accessed 16 October 2006).