
General Essays and Articles

Predictors of Scientific Civic Engagement (PSCE) Survey: A Multidimensional Instrument to Measure Undergraduates’ Attitudes, Knowledge, and Intention to Engage with the Community Using Their Science Skills

    Published Online: https://doi.org/10.1187/cbe.22-02-0032

    Abstract

    Civic engagement is an individual’s active participation that is intended to improve a community’s socioeconomic status or help shape its future. Undergraduates who engage with a community during formal course work are more likely to participate civically later in life. This outcome is important for science, technology, engineering, and math (STEM) students, since they use STEM knowledge to make informed decisions about public health, national security, and the environment. STEM courses that incorporate this idea actively engage students in helping communities, and yet, assessment of the civic outcomes in these courses, such as measuring important predictors of future civic engagement, has been inconsistent and challenging. To address this need, we designed and assessed a new survey by adapting and testing items from previously existing civic engagement measures. The result was a 14-item survey comprising the following scientific civic constructs that predict future scientific civic engagement: value, self-efficacy, action, and knowledge. This survey has potential to provide insight into the development of scientific civic engagement for STEM disciplines among undergraduate populations and can be used with additional scales of interest, allowing researchers to assess relationships between predictors of scientific civic engagement and other constructs.

    INTRODUCTION

    As of September 2021, COVID-19 had caused more than 650,000 total deaths in the United States (Centers for Disease Control and Prevention, 2021). With the advent of mRNA vaccines, immunization against the coronavirus was considered an inevitable victory for the United States. Yet only 55% of the total population was fully vaccinated. A review of the reasons for poor vaccination rates (Troiano and Nardi, 2021) highlighted many factors, such as sociopolitical identities, religious beliefs, and the lack of trust in scientists and their work. According to the 2021 General Social Survey (Davern et al., 2021), 45% of Americans (18–34 years) had “only some” confidence in the scientific community, indicating a generalized lack of trust in scientists and their work. Much of this lack of trust might be attributed to past events wherein researchers took advantage of specific communities, such as communities of color, in the name of science (Jones, 1993; Skloot, 2017) or were not fully transparent about the impacts of socioscientific decisions (Murakami and Tsubokura, 2021). Building trust between scientists and the public is important, because the latter should be able to rely on scientific claims without fear of being manipulated. To address the damage caused by past atrocities and move toward the goal of building trust and engaging all in socioscientific decisions, recent works have called upon scientists to “bring the community into the discussion with a clear sense of community-aligned science that advocates for science for society” (Murakami and Tsubokura, 2021, p. 969; Gray et al., 2022). These authors recognized the importance of scientists working with the community affected by socioscientific issues, via civic engagement, to achieve productive responses to such challenges.

    What Is Civic Engagement?

    Civic (or community) engagement is defined broadly as an individual’s active participation in ways that are intended to either improve a community’s socioeconomic status or help shape its future in positive ways (Adler and Goggin, 2005). According to a framework proposed by Westheimer and Kahne (2004), an individual can be civically engaged in one or more of the following ways: personal, participatory, and/or justice-oriented. Whereas a personally engaged citizen might participate individually by donating food, for example, a participatory citizen would actively engage in the community through collective efforts such as organizing a food drive. A justice-oriented citizen, on the other hand, takes the work of their participatory counterpart to the next level by critically questioning current established practices to push for systemic change, for example, by exploring the root causes of food insecurity. The work we aim to present here is based on a personal or participatory citizen model. Thus, our working definition of “civic engagement” for this paper aligns with that of Adler and Goggin (2005) and is broadly stated as “individualistic or collective actions that are intended to strengthen or improve the local community and could lead to positive social change.”

    Despite the overarching positive intention for civic engagement, it is important to acknowledge that not all forms of civic engagement will lead to improvement. For example, Guta and colleagues (2013) found that negative/neutral impacts of community-based research projects can result from actions committed by researchers when hiring community members to aid in research. During their investigation of community researchers’ experiences, the authors learned that the members felt exploited as research subjects, excluded from key decision-making events, or kept in the dark about the work done examining their own community by “well-intentioned” researchers. In a review by Banks et al. (2013), the authors explored ethical issues in community-based participatory research using case studies, one of which looked at how young peer researchers are perceived by the governing authority (i.e., scientists). The authors reported that the peer researchers’ solutions for dealing with gang violence were not listened to and that the peer researchers were, instead, quizzed about other esoteric issues. Although the young researchers had hoped to make a positive change, they instead learned how the project was used by the institution to serve institutional agendas and that the leadership did not respect the youth’s contributions. Thus, it would be remiss of us to highlight only the benefits of civically engaging with a community without mentioning the potential for adverse impacts that can do more harm than good, especially if the existing power structures are allowed to disenfranchise or exclude key stakeholders who could contribute to and benefit from the civic actions taken. Furthermore, we acknowledge that scientists’ ideas of what constitutes a “positive” outcome are inclined to vary depending on their moral and political stances, among other things (O’Daniel et al., 2012). This creates complexity regarding whether the outcome of civic engagement is deemed “positive” or not across demographics and sociocultural groups. It is for this reason that we define civic engagement as actions intended to strengthen or improve a community.

    What Is “Scientific” Civic Engagement, and Why Is It Important?

    While recognizing the complexities of civic engagement and the potential of civic action to cause both harm and benefit, we, as scientists and educators, hold certain convictions that shape our views and are relatively common within scientific communities (e.g., Arimoto and Sato, 2012; Murakami and Tsubokura, 2021). We maintain that knowledge gained in science, technology, engineering, and math (STEM) fields can and should be used to make informed decisions in areas such as public health, national security, and the environment. To meet this goal, we advocate for STEM undergraduates who, upon graduating, are prepared to make connections between their formal education and public issues by producing solutions with potential to improve the quality of life in their communities. Unfortunately, we are at a stage where, despite the rapid advances in science and technology, accommodating such innovations (e.g., genetically modified organisms or vaccines) in our social and civic affairs has fallen behind—resulting in science losing its public appeal (Rudolph, 2014). Furthermore, much of the current practice of scientists engaging with the public assumes that the audience is a passive consumer who only requires persuasion rather than clear rationales and justifications for socioscientific changes being implemented in their communities. Such detrimental practices have exacerbated the anti-science sentiment (Garlick and Levine, 2017) and invite the risk of disenfranchising and silencing key stakeholders. A possible solution to this issue is to integrate undergraduate STEM education with a civic-oriented, interdisciplinary approach, that is, via scientific civic (or civic science) engagement. We use the term “scientific civic engagement” to mean collective engagement with the community using science skills with the intention of strengthening or improving the local community and supporting positive social change. For the purposes of this work, and given our backgrounds in the biological sciences, we used the BioSkills Guide (Clemmons et al., 2020) as our framework to help define science skills that students learn in their STEM courses. The core competencies of this framework can easily be extended beyond biology and include students engaging in the scientific process, integrating multidisciplinary perspectives, collaborating, communicating findings to the public, and reflecting on the intersection between science and society—all of which encompass the science in scientific civic engagement. As our nation becomes increasingly multicultural, it is imperative that instructors create spaces for their students to not only appreciate differences but also find commonalities and build lasting relationships with diverse individuals by engaging civically using their science skills.

    What Does Scientific Civic Engagement Look Like within Educational Settings?

    Becoming civically engaged (and by extension scientifically civically engaged) early in life can lead to lasting behavioral patterns. Adolescent students who engage with the community are more likely to participate civically later in life, because they develop skills such as collaboration and persistence (Birdwell et al., 2013). For example, analyses of short reflective essays from sixth- to 12th-grade students revealed plans to apply what they learned from their environmental civic science projects for future collective efforts in their neighborhoods (Gallay et al., 2021). Butin (2010) highlighted four types of community-engaged pedagogy: technical, cultural, political, and anti-foundational. Technical community-engaged pedagogy is focused on students gaining content knowledge through learning or research while helping a community. For example, faculty can create experiential learning courses for Spanish where students not only learn (and are assessed on) the language, but also apply their language learning to perform social services, urban education, or health services within their respective neighborhoods (Moore-Martínez and Pongan, 2018). Cultural community-engaged pedagogy is focused on students working alongside community partners to expand their understanding of self within the local populace, thereby gaining insight into their sense of privilege (e.g., Baugh, 2019). Finally, we have political and anti-foundational community-engaged pedagogies, both of which are focused on the empowerment of underserved groups through activities similar to those mentioned above, but they take the next step by encouraging students to question pre-established norms and behaviors that maintain the status quo (similar to Westheimer and Kahne, 2004). In alignment with our working definition of civic engagement, we choose to focus primarily on the technical mode of community-engaged pedagogy in conjunction with Westheimer and Kahne’s participatory citizen model. Levy et al. (2021) have termed scientific civic engagement a part of civic science education, where civic engagement, science, and education intersect. Like the four modes of community-engaged pedagogy (Butin, 2010), civic science education (Levy et al., 2021) can also be categorized into teaching models that have potential to facilitate students’ development of scientific civic engagement skills and values by having students collect, analyze, and evaluate data or participate in collective action related to public issues. Drawing on these two bodies of work, we discuss community or civically engaged pedagogy as efforts to develop students’ understanding of how their scientific knowledge and skills might be used within a community with the intention of improving that community. We do not focus on assessing students’ sense of empowering underserved groups or advocacy, as this is beyond the scope of this work.

    STEM courses incorporating civically engaged pedagogy often strive to facilitate inclusive dialogue and disseminate scientific findings objectively to communities who stand to benefit from the work (Rudolph and Horibe, 2016). In ideal implementations of such courses, students come to appreciate that scientists are not gatekeepers of their fields, which can help students achieve sustainable solutions collaboratively with diverse stakeholders (Garlick and Levine, 2017). This has the potential to help students develop their confidence and abilities to use their science skills in service of their communities to accomplish positive change (Trott et al., 2020). Thus, there is potential for students in civically engaged STEM courses to bridge the gap between science and sociopolitical structures and develop critical viewpoints of current power dynamics as they progress in academia and beyond (Levy et al., 2021).

    What Has Been Done to Promote Scientific Civic Engagement in STEM Courses?

    Many examples of teaching with a scientific civic engagement approach are in practice today. For example, STEM educators have been incorporating different modes of activities in their courses, such as encouraging students to engage in decision making around socioscientific issues (Dauer et al., 2021), using citizen science to learn about diversity and its impact on ecosystems (Vance-Chalcraft et al., 2021), and/or emphasizing environmental education that helps strengthen school–community relationships by working on issues related to sustainability and climate change (Flowers and Chodkiewicz, 2009). Experiences such as these help students develop their ability to use scientific rationale when making complex decisions about practices that affect their communities (Dauer et al., 2021). In addition, civically engaged STEM course-based undergraduate research experiences (CUREs) have allowed students to apply their knowledge and skills to research public problems. For instance, biology CUREs at two different minority-serving institutions had students engage with underserved communities to help address issues stemming from health inequities (Olimpo et al., 2019; Malotky et al., 2020). As described by Malotky et al. (2020), their community-engaged CURE had students devise research questions based on the needs of the citizens in North Carolina. In addition to developing relevant research skills, the students spent time serving the community through tutoring students, assisting with citizenship tests, or helping individuals improve their reading skills. The results of the postsurvey scores of this study showed that more than half of the students valued community engagement and had an increased positive perception of community-based participatory research in addressing real-world issues. Similarly, Olimpo and colleagues (2019) created a civically engaged CURE that involved students in assisting communities situated at the U.S.–Mexico border. These students worked on research projects based within the instructor’s expertise and applicable to community needs. One example of a project entailed looking at air-quality levels and their effects on the El Paso region while engaging in outreach via hosting campus events for students and El Paso residents. Qualitatively, the authors found that students were able to describe how to pursue their professional aspirations while simultaneously engaging civically with their communities.

    How Have Outcomes from In-Class Scientific Civic Engagement Been Assessed in Past Courses, and How Can Assessment Be Improved?

    As demonstrated earlier, many STEM instructors have made strides to incorporate scientific civic engagement in both large- and small-enrollment undergraduate courses, yet assessment of the outcomes from participating in civically engaged science courses, particularly whether such experiences increase students’ likelihood of future scientific civic engagement, has been inconsistent and challenging. For example, in one large, first-year biology course, students interacted with ecosystem professionals to help remove an invasive species in a forest (Kalas and Raisinghani, 2019). Students were assessed on the extent of civic engagement based on written reflections that, according to the authors, were limited, given the descriptive nature of the assignment and the possibility of a language barrier that could affect the quality of responses. An upper-level ecology course had students do fieldwork to determine water quality at a watershed near campus, and students were assessed through concept maps they created and writing reflections about their course experiences (Pruett and Weigel, 2020). Despite evidence indicating value for civic engagement from these assessments, the authors stated that the learning curve of creating a digital concept map with unfamiliar software may have contributed to differences in response quality, and thus reliability, of the assessment. Olimpo and colleagues (2019) assessed civic engagement for students participating in their CURE through a qualitative analysis of open-ended prompts, while Malotky and colleagues (2020) used a survey to measure gains with additional open-ended questions about the course’s civic-oriented outcomes to assess their CURE. Notably, assessments of these two CUREs differed, but had they used the same instrument, outcomes between the two similar experiences could have been compared. Clearly, many challenges exist when assessing scientific civic engagement and comparing it across contexts, and no agreed-upon, widely used metrics exist. Overall, two limitations of prior scientific civic engagement assessment strategies are that they were qualitative, which is challenging with large sample sizes and limited capacity, and that they made use of general civic engagement surveys, which are unable to capture the likelihood of scientific skill use specifically. This indicated a need to create a form of assessment that can quickly and accurately collect data on predictors of students’ future scientific civic engagement at a large scale.

    To address this need, we designed and tested a new survey that measures four predictors of students’ future civic engagement using their science skills. We called our survey the Predictors of Scientific Civic Engagement (PSCE) survey. This paper describes our instrument-development process and provides multiple forms of validity evidence that led to the creation of the 14-item survey that asks questions about undergraduate STEM students’ sense of value for civic engagement (civic value), their sense of confidence in engaging civically (civic self-efficacy), their intention to interact with a community (civic action), and their sense of knowledge of how to civically engage (civic knowledge), all while using their science skills. Using civic engagement/science education theories as our frameworks (Westheimer and Kahne, 2004; Butin, 2010; Levy et al., 2021), we investigated and adapted items from previously existing civic engagement measures to meet our objectives. In the following section, we describe the multistep instrument-development process that we undertook to create and explore the validity of our survey predicting future scientific civic engagement.

    METHODS AND RESULTS

    The process of creating an instrument to measure predictors of students’ scientific civic engagement involved gathering multiple forms of validity evidence (American Educational Research Association et al., 2014; Reeves and Marbach-Ad, 2016), which we highlight in Figure 1. To briefly summarize our process, I.A. and L.A.C. 1) collected general civic engagement items that existed in the literature and reviewed/adapted those items with experts (K.R. and K.S.) to gather evidence based on test content, 2) collected evidence based on response processes via cognitive interviews to check whether the new survey items made sense to our desired study population, and 3) provided evidence based on the survey’s internal structure via factor analyses and assessed relations to other external variables by collecting convergent/discriminant evidence. We did not, however, collect evidence based on the consequence of testing in this study. The steps taken resulted in a 14-item survey described below.

    FIGURE 1. Stages of collecting validity evidence for the PSCE survey. The lead author (I.A.) conducted the literature review and item generation in step 1, cognitive interviews in step 3, and subsequent data analyses in steps 4–6. The entire author team (I.A., K.R., K.S., and L.A.C.) served as the panel in step 2.

    This study was determined exempt by the University of Colorado Boulder’s Institutional Review Board (IRB no. 19-0156).

    Positionality Statement

    I.A. (South Asian, cis-gender man) is an international doctoral candidate in the biological sciences who attended both R1 and R2 institutions for his postsecondary education in the United States. L.A.C. (white, cis-gender woman) is an assistant professor at an R1 institution who is a discipline-based education researcher with a focus on place-based, community-engaged CUREs. She is interested in how instructors can foster a new generation of resilient, creative, and passionate scientists to tackle ecological and environmental problems. K.R. and K.S. (both white, cis-gender women) are assistant directors for humanities and arts and STEM initiatives, respectively, of a program that caters to underserved and first-generation undergraduates at an R1 institution. Additionally, K.R. is the director of another program that provides dialogue experiences in classes in which students share their own stories and hear those of their peers or other community members around a topic that they are studying. She views community-based experiences as a way to “make a story” out of classroom learning, which can help students recognize the meaning and value of it. K.S. mentors STEM students and is interested in understanding students’ access to the opportunities they wish to have and the barriers they encounter. Furthermore, she brings her expertise in science education and assessment to help in developing this survey that is designed to understand student perspectives.

    Step 1. Literature Review and Item Generation (Evidence Based on Test Content)

    The first stage of survey development consisted of gathering existing items through a review of peer-reviewed civic engagement literature. The lead author (I.A.) gathered papers from online education journals and via two academic search engines (Education Resources Information Center and Google Scholar). He then read papers describing past instruments that measured attitudes, actions, and values associated with civic/community engagement. Instruments containing items that aligned with the research objective were selected by I.A. for closer examination; surveys that did not align well were excluded from further examination. I.A. created an initial item pool based on an investigation of five previously published instruments (listed below). This item pool consisted of 62 items that were judged to align with the research objectives and were extracted from the following questionnaires:

    1. Civic Attitudes and Skills Questionnaire (Moely et al., 2002b) = 16 items

    2. Civic Engagement Scale (Doolittle and Faul, 2013) = 14 items

    3. Civic-minded Graduate scale (Steinberg et al., 2011) = 13 items

    4. Global Citizenship Scale (Morais and Ogden, 2011) = 8 items

    5. Self-Efficacy towards Service Scale (Weber et al., 2004) = 11 items

    This process contributed to the first set of evidence based on test content, as it involved a search of the existing ways to measure predictors of civic engagement generally, becoming familiar with the wording and format of the existing items, and based on the literature, envisioning and defining the appropriate constructs that constitute predictors of civic engagement, which we describe next.

    Notably, during this first review of the literature, I.A. learned that future civic engagement cannot be predicted using a one-dimensional construct. Items that predict future civic engagement go beyond simply an intention to engage, a point also made in a review by Hemer and Kappus (2021), which recommended that civic outcomes be classified into four groups: knowledge, skills, attitudes, and behaviors. Thus, based on the constructs hypothesized in the papers cited earlier, I.A. proposed that predicting scientific civic engagement involves multiple dimensions, including the following four constructs:

    1. Civic Value (CV): One’s sense of responsibility when engaging with a community with the aim of improving well-being (Doolittle and Faul, 2013). Items in this construct will assess the importance and value that students assign to using their science skills to help and support a community and will contain words such as “importance” and “responsibility.” Item descriptions will range from the importance of supporting a community to finding a career that provides an opportunity to do so.

    2. Civic Self-Efficacy (CE): One’s sense of confidence in one’s ability to positively impact a community via engaging with that community (Weber et al., 2004). Items here will contain statements that express students’ sense of “confidence in their ability” to make a “positive impact” or “a difference” within a community using their science skills.

    3. Civic Action (CA): One’s intended actions to engage with the goal of improving well-being for a community (Moely et al., 2002a). Items in this construct will ask students about their intent to apply their science skill set when helping a community, for example, having a “plan” to engage in community service.

    4. Civic Knowledge (CK): One’s sense of how to use knowledge to help a community (Bobek et al., 2009). Items in this construct will measure students’ sense of their capacity to tap into their scientific knowledge to help a community. Specifically, these items will ask students if they can “think of ways” to apply their skills to help a community.

    After these constructs were articulated, I.A. then made sure that the initial item pool was expansive (to avoid potential construct underrepresentation) by specifically searching for other instruments that either contained or mentioned the constructs explicitly. Based on the extracted items and their original construct identities, I.A. and L.A.C. grouped the items into categories (value, self-efficacy, action, and knowledge) based on their wording and created a list of items to be used in the expert panel review with K.R. and K.S. Note that K.R. and K.S. were only involved in providing their expertise during item review and were not involved in creating novel items at the start of the survey design process. This degree of separation ensured that the two experts, K.R. and K.S., saw these items for the first time during the expert panel review. Thus, they were able to provide a more objective perspective of whether items accurately and completely represented the construct as a part of collective evidence based on test content.

    Step 2. Expert Panel Review (Evidence Based on Test Content)

    With a list of items in place, the author team collaborated to review and modify the items to reflect their understanding of the constructs that predict future scientific civic engagement. They used their specific areas of expertise to provide the second set of evidence based on test content. Before the start of this review, I.A. and L.A.C. worked to develop working definitions of “community” and the four tentative civic constructs (presented earlier) to aid with the process. Through a general Web search for definitions that reflected our aims, we defined community as “a group of people who interact and share a common sense of identity, social values, attitudes, interests, or goals (e.g., if you are a resident of New York City, then you might identify with the community of ‘New Yorkers,’ and/or if you identify as Hispanic, then you are a member of the Hispanic community).” Notably, we chose definitions that focused on serving one’s community instead of performing political action. This aligned with the aims of our work and our philosophy surrounding civic engagement, as stated in the Introduction. The full author team (I.A., K.R., K.S., and L.A.C.) then examined and discussed items in two 2-hour, in-person meetings during which items were eliminated or modified to fit the needs of the survey. We also confirmed or adjusted predictions about which construct each item might align with based on our frameworks (Westheimer and Kahne, 2004; Butin, 2010; Levy et al., 2021). Given that the pooled items measured predictors of general civic engagement, the full author team adapted all items to reflect students’ civic engagement using their science subject skills (e.g., civic engagement using their biology skills), drawing on the BioSkills Guide (Clemmons et al., 2020) to inform our understanding of common science skills. Our team focused on community-oriented civic engagement and did not include constructs within the political spectrum. In other words, we included items and constructs with more general language around serving one’s community and eliminated items that targeted political actions, such as voting. Other items were eliminated based on their redundancy with items appearing in different papers. For example, “I plan to become involved in my community” and “I plan to become an active member of my community” were items that addressed the same civic action construct (originating from two different papers), but we chose to include the former item, as we agreed that “active member” carried connotations that could be challenging to interpret. Additionally, another cause for item elimination during this stage was item irrelevance to our proposed constructs. For example, the item “Belief that one can make a difference in the world” was treated as a civic value in one paper, but we determined this did not fit our proposed construct for scientific civic value, which focused on making a difference within a single community and not the broader “world.” After the final modifications/eliminations were made, I.A. sent the newly created 28-item survey to all other authors (K.R., K.S., and L.A.C.) for a final asynchronous review of content and feedback on item wording. Items were at times significantly different from their predecessors, as the team had changed the wording to align with research goals. Thus, the item set was treated as a new set of items altogether. I.A. then followed up after incorporating other authors’ feedback to ensure the survey was free of errors. Once the survey was approved by all authors, he went ahead with the cognitive interviews.

    Step 3. Cognitive Interviews (Evidence Based on Response Processes)

    Collecting evidence based on response processes involved conducting cognitive interviews during which students were asked to go through the survey with I.A. to 1) describe their interpretation of the survey items, 2) raise any points of confusion that needed clarifying, and 3) check whether the item responses were consistent with the authors’ intentions. This provided evidence that students could interpret and respond to the items as intended by the researchers (i.e., evidence based on response process). Through snowball sampling, I.A. conducted cognitive interviews with 11 upper-division students: nine biology (six women, three men), one physics (man), and one statistics student (nonbinary). Of the 11 students interviewed at his institution, four identified as students of color (two Asians, one Hispanic or Latino, and one Black or African American) and/or members of the LGBTQ+ community. Interviews were conducted during the Spring 2019 semester, lasted 1 hour each, and took place in a quiet, reserved room on campus. While no monetary compensation was given, students were offered refreshments after each session.

    Each student was provided with a hard copy of the survey and went through each item with I.A., providing feedback as they went. Before responding to the items, students had to first respond to two open-ended questions that asked them to provide a STEM subject they would use for their responses and a community they identified with. The responses from the subject and community identity questions served as content (were piped into Qualtrics during data collection) for the PSCE scale items (e.g., if they were responding about a “biology” course, they were asked about their skills in “biology,” and if they reported that they identified as part of the “BIPOC community,” they were reminded to respond to the questions with the “BIPOC community” in mind). I.A. and L.A.C. provided the definition of “community” (as stated earlier) to aid in answering the community identity question. After the first six interviews, small adjustments were made to the wording of several items based on suggestions from the students to ensure better clarity in subsequent interview sessions (see Supplemental Material for cognitive interview findings). Consistent with a review of cognitive interview techniques by Beatty and Willis (2007), we deemed our sample size (N = 11) to be sufficient, because by the ninth interview we noticed that additional interviews were yielding no new insights. Yet we proceeded with an additional two interviews to ensure that we accounted for any additional problematic statements that could emerge later. Findings from cognitive interviews indicated that, while our final set of items was comprehensible and clear overall, students found some items to be redundant with one another or perceived minor differences in the strength of the questions (e.g., one student interpreted “intend to” as needing a plan, whereas another indicated this wording was less concrete). We did not remove those items before the analyses, as we felt these minor differences did not alter the overall intent or meaning of the items. Furthermore, we desired to keep as many items as possible, despite some redundancies, with the assumption that items still had the potential to contribute to their respective constructs and reduce the risk of construct underrepresentation. I.A. then proceeded to gather evidence for internal structure using exploratory (EFA) and confirmatory (CFA) factor analyses as described later.

    Data Collection

    Survey Measures.

    The first iteration of the PSCE scale comprised the 28 positively worded items (adapted and created by the author team as described earlier; see Supplemental Material) related to four tentative constructs as follows: value (ten items), self-efficacy (five items), action (eight items), and knowledge (five items). It had a six-point Likert-response scale for all items (1 = “completely disagree” to 6 = “completely agree”) with an additional “I don’t know” option. The “I don’t know” option is included in the final version of the survey, because it allows survey implementers to gauge whether students view an item differently than they would expect based on class experience.

    I.A. and L.A.C. also included a short form of the Marlowe–Crowne Social Desirability Scale (SDS; Strahan and Gerbasi, 1972) in addition to the PSCE items to collect discriminant evidence as well as check for social desirability (SD) bias. This scale comprised 10 items with a dichotomous (T/F) response option. To collect convergent evidence, we used (with modifications) the mathematics self-efficacy scale from the Mathematics Self-Efficacy and Anxiety Questionnaire (MSEAQ; May, 2009), which was designed for a broad sample of undergraduate students. The MSEAQ has demonstrated evidence of internal consistency (α = 0.93) with undergraduates, as was also the case for our study (α = 0.96). The MSEAQ’s mathematics self-efficacy scale had 14 items with six-point Likert responses (1 = “completely disagree” to 6 = “completely agree”) and an additional “I don’t know” option. Items in this scale originally measured math self-efficacy, but we made slight modifications to reflect science, instead of math, as the context for students’ self-efficacy. As a result, we abbreviate this scale as “SSE” to indicate “science self-efficacy” for the rest of this paper.

    To account for this change in wording, I.A. reassessed the scale to provide internal structure evidence with our sample using CFA. The CFA (using the weighted least-square mean and variance-adjusted [WLSMV] estimator due to categorical data) fit statistics for the adapted scale were as follows: χ2 = 530, p < 0.001; root-mean-square error of approximation (RMSEA) = 0.093 (higher due to categorical data, as explained below), 90% confidence interval (CI) [0.086, 0.101]; comparative fit index (CFI) = 0.997, Tucker-Lewis index (TLI) = 0.996, standardized root-mean-square residual (SRMR) = 0.049. I.A. and L.A.C. hypothesized that students’ predictors of future scientific civic engagement should be positively influenced by their science self-efficacy. In other words, if students have high science self-efficacy, then they are expected to be more likely to engage with the community using their science skills. Our prediction is supported by the work of Kao et al. (2020), who measured the effects of self-efficacy, satisfaction, and science trust on science volunteers’ intention to continue volunteering. Based on correlation analyses, they found that volunteers’ science self-efficacy had a positive relationship with the intention to engage in voluntary science activities. Thus, having self-efficacy in the specific STEM discipline applied should be positively related to predictors of scientific civic engagement, and we might expect to see a moderate correlation between the two.
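    To illustrate how such a reanalysis can be specified, the sketch below shows a single-factor CFA for the adapted self-efficacy scale and a simple convergent-evidence correlation in R. It is a minimal sketch under stated assumptions, not the authors’ analysis code: the data frame dat, the item columns SSE_1 through SSE_14, and the object psce_means are hypothetical names.

        # Minimal sketch, assuming the adapted SSE items sit in columns SSE_1 ... SSE_14
        # of the data frame `dat` (hypothetical names) with ordinal Likert responses.
        library(lavaan)

        sse_model <- 'SSE =~ SSE_1 + SSE_2 + SSE_3 + SSE_4 + SSE_5 + SSE_6 + SSE_7 +
                             SSE_8 + SSE_9 + SSE_10 + SSE_11 + SSE_12 + SSE_13 + SSE_14'

        sse_fit <- cfa(sse_model, data = dat,
                       ordered = TRUE,       # treat the Likert items as ordinal
                       estimator = "WLSMV")  # robust estimator for categorical data

        # Fit indices of the kind reported above (chi-square, RMSEA, CFI, TLI, SRMR)
        fitMeasures(sse_fit, c("chisq.scaled", "rmsea.scaled", "cfi.scaled",
                               "tli.scaled", "srmr"))

        # Convergent evidence: correlation between SSE scale means and PSCE scale means
        # (`psce_means` is a hypothetical per-student vector or data frame of scale means)
        cor(rowMeans(dat[, paste0("SSE_", 1:14)], na.rm = TRUE),
            psce_means, use = "pairwise.complete.obs")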

    Finally, we had a set of optional questions to assess the responders’ demographic characteristics and academic backgrounds. These were a mix of open-ended and multiple-choice/selection items about age, race/ethnicity, Hispanic identity, gender identity, school year, intended major, and name of institution. The survey went through multiple rounds of testing to ensure ease of usability (e.g., all response options were visible on screen and phone) within the research team before being distributed.

    Survey Administration.

    Data were collected online at midsemester using Qualtrics. Instructors were contacted via email and were requested to send the IRB-approved survey recruitment message on behalf of the researchers. Either a $100 gift card lottery or a small amount of class extra credit served as an incentive for completion. No manipulations occurred for the study other than asking the students to complete the survey. Students (18 or above; self-identified) taking STEM classes at U.S. colleges were included in the study. The time the students took to complete the survey in one sitting ranged between 15 and 25 minutes. I.A. and L.A.C. aimed to include classes with community-engaged components as well as classes with no community engagement in our sample to best represent the classes we would hope to study using this instrument (Table 1). We recruited instructors and students from a variety of STEM classes and majors. Because most STEM courses we surveyed served primarily STEM majors, they make up the majority of our sample (Table 2).

    TABLE 1. Demographics of the sample used for providing evidence based on the internal structure of the PSCE survey

    Variable            Value                    EFA (Fall 2019 and Spring 2020; N = 259)   CFA (Fall 2020; N = 859)   C/D (Fall 2020; N = 729)a
                                                 Frequency (%)                              Frequency (%)              Frequency (%)
    Gender identity     Woman                    182 (70)                                   581 (68)                   507 (70)
                        Man                      73 (28)                                    262 (31)                   210 (29)
                        Otherb                   3 (1)                                      7 (<1)                     7 (<1)
                        No response              1 (<1)                                     9 (1)                      6 (<1)
    Age                 18–20                    163 (63)                                   725 (84)                   631 (87)
                        21–23                    56 (22)                                    74 (9)                     53 (7)
                        24+                      31 (12)                                    43 (5)                     32 (4)
                        No response              9 (3)                                      17 (2)                     13 (2)
    Class standing      Underclassmen            148 (57)                                   739 (86)                   611 (84)
                        Upperclassmen            111 (43)                                   120 (14)                   118 (16)
    Institution type    Research                 170 (66)                                   766 (89)                   729 (100)
                        Comprehensive            89 (34)                                    93 (11)                    —
    Course type         Non-civically engaged    185 (71)                                   776 (90)                   646 (89)
                        Civically engaged        74 (29)                                    83 (10)                    83 (11)

    aNote that the data used to provide convergent/discriminant (C/D) evidence were a subset from the CFA sample.

    bStudents who identified as either nonbinary or transgender.

    TABLE 2. Ethnic and student major demographics of the sample used for providing evidence based on the internal structure of the PSCE survey

    Variable              Value                            EFA (Fall 2019 and Spring 2020; N = 259)   CFA (Fall 2020; N = 859)
                                                           Frequency (%)                              Frequency (%)
    Race and ethnicity    Asian                            26 (10)                                    70 (8)
                          Black                            20 (8)                                     18 (2)
                          Hispanic/Latin                   56 (22)                                    74 (9)
                          Multi-racial/ethnica             25 (10)                                    173 (20)
                          White                            117 (44)                                   513 (59)
                          Other                            5 (2)                                      6 (<1)
                          No response                      10 (4)                                     5 (<1)
    Student majors        STEM                             170 (66)                                   521 (61)
                          Social sciences                  30 (11)                                    104 (13)
                          Other                            35 (13)                                    14 (2)
                          Arts and humanities              10 (4)                                     24 (3)
                          Business                         12 (5)                                     9 (1)
                          Environment                      2 (<1)                                     37 (4)
                          No response                      —                                          36 (4)
                          Not collectedb                   —                                          93 (12)
    Community typesc      School                           111 (43)                                   275 (32)
                          Racial and ethnic                45 (17)                                    102 (12)
                          Gender and sexual orientation    16 (6)                                     63 (7)
                          STEM                             14 (5)                                     38 (4)
                          Sports                           10 (4)                                     124 (14)
                          Religion                         11 (4)                                     40 (5)
                          Miscellaneous                    52 (21)                                     217 (26)

    aMultiracial/ethnic: students who identified with more than one race/ethnic category.

    bMajor data uncollected by one institution.

    cFor community types, the miscellaneous category represents communities that were grouped together due to fewer numbers compared with the other six types.

    Sampling was purposive, as we collected data from civically engaged and non–civically engaged STEM classes as well as minority-serving institutions (shown in parentheses with their Carnegie classification; Carnegie Classification of Institutions of Higher Education, 2022) to obtain a diverse sample (Tables 1 and 2). EFA data were obtained from one southwestern (R1/Hispanic serving), two southeastern (both R1/predominantly white), and two western (R1/predominantly white and a master’s college and university/Hispanic serving) institutions during Fall 2019 and Spring 2020. Note that, due to the COVID-19 pandemic, additional data collection was halted in Spring 2020 given the abrupt changes in teaching modalities and high teaching and learning burdens for instructors and students. Of the 1073 students who received the EFA survey, 259 students completed it. Despite this small sample size, we were confident in conducting EFA, because our measure had the following attributes that facilitate reliable detection of factors with limited sample size (Guadagnoli and Velicer, 1988; de Winter et al., 2009): large pattern coefficients per item (>0.6; Table 3), few factors (one to four), and a sufficient number of items (five to 10) per factor.

    TABLE 3. Four-factor EFA pattern coefficients (N = 259)a

    Construct and mean factor score (SD)               Itemb   Factor 1   Factor 2   Factor 3   Factor 4   Mean item score (SD)
    Scientific civic value (CV), 4.64 (1.07)           3       0.83       0.14       0.01       −0.04      4.69 (1.23)
                                                       5       0.81       0.08       0.05       −0.10      4.62 (1.46)
                                                       6       0.87       0.03       0.05       −0.02      4.39 (1.42)
                                                       8       0.91       −0.16      −0.01      0.15       4.51 (1.31)
                                                       9       0.88       −0.06      0.03       0.06       4.55 (1.27)
    Scientific civic self-efficacy (CE), 4.83 (0.98)   1       0.01       0.84       0.01       0.09       4.93 (1.03)
                                                       2       −0.07      0.90       0.06       0.09       4.86 (1.03)
                                                       3       0.13       0.86       0.01       −0.08      4.82 (1.10)
                                                       4       0.16       0.88       −0.02      −0.04      4.77 (1.10)
                                                       5       −0.09      0.85       0.03       0.15       4.76 (1.14)
    Scientific civic action (CA), 4.27 (1.16)          2       −0.07      0.08       0.86       0.03       4.06 (1.43)
                                                       4       0.13       −0.10      0.92       −0.07      4.12 (1.43)
                                                       5       0.03       0.02       0.77       0.12       4.35 (1.33)
    Scientific civic knowledge (CK), 4.55 (1.13)       2       0.04       −0.05      0.14       0.81       4.37 (1.35)
                                                       3       0.13       0.09       −0.09      0.84       4.59 (1.23)
                                                       5       0.13       0.09       0.10       0.70       4.54 (1.28)

    aPattern coefficients for items that did not group in another factor are indicated in a gray font.

    bCheck Supplement for eliminated items.

    CFA data were collected from one southwestern (R1/Hispanic serving; same as EFA), two western (R1/predominantly white and master’s college and university/Hispanic serving), and one midwestern (R1/predominantly white) institution during Fall 2020 (Table 1). Of the 1593 students who received the CFA survey, 859 students completed it. SSE and SDS data were collected from all but one institution (master’s college, Hispanic Serving) due to competing survey projects. Although we lacked community college representation, I.A. was able to collect both CFA and EFA data from institutions that were either minority serving (EFA = 46%; CFA = 25%) and/or had a high proportion of transfer students from community colleges (EFA = 34%; CFA = 26%).

    Data Screening and Processing

    Data analyses and checks were done using R/RStudio software (v. 1.4). Factor analysis methods, data checks, and results are reported according to recommendations from Knekta et al. (2019).

    Missing Value and Outlier Check.

    Using the MVN package (Korkmaz et al., 2014), we checked for univariate outliers by examining data distributions and frequency histograms, while multivariate outliers were checked by looking for incidences of string responses as well as via Mahalanobis distance. String responses were also examined individually by observing their overall responses in conjunction with the total SDS score (honest responders would have a lower score). Responses in which 5% or fewer of the survey questions were answered, or which were duplicates/spam, were eliminated from the data set (fewer than 10% of the total data set). Responses in which students selected “I don’t know” were treated as missing data. Little’s missing completely at random (MCAR) test from the naniar package (Tierney et al., 2021) was used to check whether the data supported the null hypothesis (data were missing completely at random). After we eliminated low-quality responses, less than 5% of the responses were missing in all cases for our data; thus, we did not perform any imputations, as these were considered unnecessary (Tabachnick and Fidell, 2019).
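    As a concrete illustration of these screening steps, the following R sketch shows one way they might be carried out. It assumes the PSCE items sit in a data frame dat with columns named CV_*, CE_*, CA_*, and CK_*, and that “I don’t know” responses are coded as 7; these names and codings are assumptions for illustration, not the authors’ actual scripts.

        # Sketch of the screening steps described above (hypothetical column names and codes).
        library(MVN)      # distribution checks
        library(naniar)   # Little's MCAR test

        items <- dat[, grepl("^(CV|CE|CA|CK)_", names(dat))]

        # Treat "I don't know" responses (assumed to be coded 7) as missing data
        items[items == 7] <- NA

        # Univariate distributions via frequency histograms
        mvn(na.omit(items), univariatePlot = "histogram")

        # Multivariate outliers via Mahalanobis distance on complete cases
        complete_items <- na.omit(items)
        md <- mahalanobis(complete_items,
                          center = colMeans(complete_items),
                          cov = cov(complete_items))

        # Little's test of the null hypothesis that data are missing completely at random
        mcar_test(items)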

    Data Distributions and Assumptions Check.

    Using the MVN package, we checked item-level univariate and multivariate normality (Mardia’s test), skewness and kurtosis, and descriptive summary statistics. Incidence of multicollinearity was examined using the variance inflation factor (VIF), and interitem correlations were computed, using the car and psych packages (Fox and Weisberg, 2019; Revelle, 2021), respectively. To check factorability (i.e., that the variance in the items can be attributed to underlying factors), we assessed sampling adequacy using the Kaiser–Meyer–Olkin (KMO) test from the psych package and interitem polychoric correlations using the corrplot package (Wei and Simko, 2021).
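    A sketch of these assumption checks in R is shown below, again using the hypothetical items data frame from the previous sketch; the regression used for the VIF check is one plausible way to operationalize it and is an assumption on our part.

        # Sketch of the distribution and factorability checks (hypothetical `items` data frame).
        library(MVN)
        library(psych)
        library(corrplot)
        library(car)

        # Mardia's multivariate skewness/kurtosis test plus per-item descriptive statistics
        mvn(na.omit(items), mvnTest = "mardia")
        describe(items)   # means, SDs, skewness, kurtosis for each item

        # Multicollinearity: VIF from regressing one item on the others (repeat per item)
        vif(lm(CV_1 ~ ., data = na.omit(items)))   # CV_1 is a hypothetical item name

        # Sampling adequacy (KMO) and inter-item polychoric correlations
        KMO(items)
        poly <- polychoric(items)
        corrplot(poly$rho, method = "color", type = "lower")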

    Step 4. EFA (Evidence Based on Internal Structure)

    Given that the author team created a new set of items that deviated greatly from the original civic engagement surveys, it was imperative to provide evidence based on the internal structure of our new measure, that is, to see how the items grouped within the proposed constructs and to ensure that the structure of our instrument conformed to the proposed constructs predicting scientific civic engagement. EFA was conducted using the nFactors package (Raiche and Magis, 2020). In addition to using theory to guide I.A.’s decisions, the number of factors extracted and kept was determined based on the following measures: Kaiser criterion, scree plot, parallel analysis (PA), and Velicer’s minimum average partial (MAP) criterion (Osborne, 2014). Polychoric correlations and the principal axis factoring method were used for this analysis, because response categories were ordinal with a multivariate nonnormal distribution (Osborne, 2014). Based on theory, I.A. expected factors to correlate with each other; thus, we used an oblique (promax) rotation. I.A. and L.A.C. imposed a cutoff value for pattern coefficients (>0.6) as well as for the magnitude of cross-loadings (>0.3) when deciding which items to keep or eliminate.
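    The sketch below illustrates how the factor-retention checks and the EFA itself can be run with the psych and nFactors packages; it continues with the hypothetical items data frame and outlines the approach described above rather than reproducing the authors’ exact code.

        # Sketch of the EFA workflow (hypothetical `items` data frame).
        library(psych)
        library(nFactors)

        # Factor-retention diagnostics: Kaiser criterion, scree, and parallel analysis
        poly <- polychoric(items)
        nScree(x = eigen(poly$rho)$values)           # Kaiser and scree-based rules
        fa.parallel(items, cor = "poly", fa = "fa")  # parallel analysis on polychoric correlations

        # Velicer's minimum average partial (MAP) criterion is reported by VSS()
        VSS(items, cor = "poly")

        # Four-factor EFA: principal axis factoring, oblique (promax) rotation, polychoric correlations
        efa_4 <- fa(items, nfactors = 4, fm = "pa", rotate = "promax", cor = "poly")
        print(efa_4$loadings, cutoff = 0.3)   # inspect pattern coefficients and cross-loadings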

    EFA Results.

    While outliers were present, I.A. and L.A.C. decided against removing them to avoid reducing an already low EFA sample size. Furthermore, given the ordinal nature of the data, the presence of extreme responses (e.g., selecting “completely agree”) for an item is to be expected and was indicated by the examination of the histogram plots (left skew; Supplemental Figure 1A–D). The MCAR test showed that the data were not missing completely at random (χ2 = 1542, df = 1020, significance = 0.00), but no items were missing more than 5% of their values. Thus, we could proceed with analysis despite the missing values. The inter-item polychoric correlation matrix showed that all correlations were above 0.3 for items expected to be on the same factor (Supplemental Figure 2), and the KMO value for each scale ranged from 0.89 to 0.94, which indicated good factorability. Most items had skewness and kurtosis below |1.0|, except one item (CV_7 = “I believe that it is important to be informed of community issues”). Additionally, examination of frequency histograms showed negative skewness for all the items as well. Mardia’s test showed significant multivariate skewness and kurtosis values, which indicated multivariate nonnormality. As mentioned earlier, we used polychoric correlations and principal axis factoring due to the ordinal nature of the data and this finding of a multivariate nonnormal distribution (Osborne, 2014). Multicollinearity was investigated by examining inter-item polychoric correlations and VIF values; the highest correlation was 0.92, but VIF was less than 10 for all items, indicating that items were not so highly correlated as to be redundant and would yield statistically reliable outputs (Knekta et al., 2019). For later analyses, I.A. and L.A.C. decided to drop CV_7, given its poor interitem correlation and values greater than |1.5| for skewness and kurtosis. While checking for reliability, we saw that dropping this item would not affect the overall alpha for the scale, thus achieving a slightly more parsimonious model.

    The Kaiser criterion showed three factors, scree and PA plots showed two, while MAP showed four (Supplemental Figure 3A and B). From a theoretical perspective, we hypothesized a four-factor model. Based on the outputs of these measures and theory, we examined EFA solutions for one to four factors. While we are aware that this instrument has multidimensional properties, I.A. and L.A.C. also sought to examine more parsimonious explanations of the data to see whether such models could explain the variance in the observed items.

    Our one-factor solution showed that all items had strong pattern coefficients, and it extracted 73% of the variance. The two-factor solution (supported by scree and PA plots) started to show signs of a multidimensional scale; civic value items loaded exclusively on the first factor, while all civic self-efficacy items loaded on the second, and the solution extracted 62% of the variance. However, a considerable number of cross-loadings occurred between factors along with pattern coefficients greater than 1.0 for both civic self-efficacy and value items. In our three-factor solution, the number of cross-loadings decreased, civic action loaded almost exclusively on the third factor, and the solution extracted the same percentage of variance as the two-factor solution. Nonetheless, there were still instances of cross-loading as well as weak pattern coefficients (<0.5) for the civic knowledge items. This made sense, given that the civic knowledge items loaded onto the second factor with the civic self-efficacy items. Finally, our four-factor solution (supported by theory and MAP) was deemed best for the following reasons: it accounted for 82% of the variance in the item responses, all four of our originally predicted constructs loaded onto their own factors, most of the items of interest had pattern coefficients of 0.7 or greater (lowest = 0.70, highest = 0.92), the model fit was “good” (Osborne, 2014; Tables 3 and 4), and the factor correlations were satisfactory (Supplemental Table 1). Thus, we decided that the four-factor solution was the final output for EFA.

    TABLE 4. EFA model-fit indices; BIC, Bayesian information criterion

    Factor model    Fit indices [result for “good” fit]a
                    BIC [lower]    TLI [≥ 0.95]    RMSEA [< 0.06]    SRMR [< 0.08]    χ2
    1               1051.71        0.73            0.17              0.05             549.67
    2               497.01         0.78            0.15              0.04             287.88
    3               203.65         0.82            0.14              0.03             161.67
    4               −59.63         0.96            0.07              0.01             29.61

    aValues in square brackets indicate criteria for the respective fit index.

    I.A. and L.A.C. took a closer look at the pattern matrix of the four-factor model and made item eliminations to achieve parsimony. Some items, despite having high pattern coefficients above our cutoff on one factor, still had cross-loadings above 0.3. For such items, we examined their Hofmann index of complexity and saw values greater than 1.0, indicating that multiple factors were required to explain the item. Thus, we eliminated five civic action, two civic knowledge, and four civic value items (see Supplemental Material) to achieve a set of 16 items under the following civic constructs: efficacy (five items), action (three items), knowledge (three items), and value (five items). After items were chosen for elimination via EFA, we examined each item to ensure that we were not losing important construct information. We determined that eliminated items were largely either redundant with what was kept or did not contribute in a substantive or clear way to the construct measurement (see Supplemental Material for details). Given our item-development approach, we expected redundancy to appear, as items for a construct were obtained from different instruments addressing the same construct (to avoid construct underrepresentation). Our results allowed I.A. and L.A.C. to keep the items that made the most sense to our survey population while eliminating items with the same meaning (i.e., redundant items).
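    The following sketch illustrates this kind of item screening using the EFA output from the earlier sketch: flagging items with cross-loadings above 0.3 or complexity above 1.0, and checking scale reliability for one retained construct. Object and column names (efa_4, items, the CV item numbers) are hypothetical.

        # Sketch of the item-screening step (hypothetical `efa_4` and `items` objects).
        library(psych)

        pattern <- unclass(efa_4$loadings)   # pattern coefficient matrix from the promax solution

        # Flag items with more than one pattern coefficient above 0.3 (cross-loading)
        cross_loaded <- apply(pattern, 1, function(x) sum(abs(x) > 0.3) > 1)

        # Complexity index per item; values above 1.0 treated here as needing review
        high_complexity <- efa_4$complexity > 1.0

        data.frame(item = rownames(pattern),
                   complexity = round(efa_4$complexity, 2))[cross_loaded | high_complexity, ]

        # Cronbach's alpha for one retained scale (repeat for each construct)
        alpha(items[, c("CV_3", "CV_5", "CV_6", "CV_8", "CV_9")])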

    Additionally, all four factors had good reliability, with Cronbach’s alpha ranging from 0.88 (civic action) to 0.95 (civic self-efficacy). For reference, the scales from which these items were obtained had the following ranges for Cronbach’s alpha:

    1. Civic Attitudes and Skills Questionnaire (Moely et al., 2002a) = 0.69 to 0.88

    2. Civic Engagement Scale (Doolittle and Faul, 2013) = 0.85 to 0.91

    3. Civic-minded Graduate Scale (Steinberg et al., 2011) = 0.85 to 0.96

    4. Global Citizenship Scale (Morais and Ogden, 2011) = 0.72 to 0.92

    5. Self-Efficacy towards Service Scale (Weber et al., 2004) = 0.80 to 0.88

    Step 5. Confirmatory Factor Analysis (Additional Evidence Based on Internal Structure)

    The goal of CFA was to confirm the EFA-proposed item–factor relationships and achieve the most parsimonious model for measurement of the proposed constructs. CFA was conducted using the lavaan package (Rosseel, 2012). As in our EFA, having ordinal, multivariate nonnormal data led us to use the WLSMV estimator. Similarly, internal consistency (i.e., Cronbach’s alpha) was checked using the psych package for each scale to help with achieving parsimony without sacrificing model fit. To determine the final model, we used a combination of pattern coefficients; fit indices (the chi-square statistic, i.e., χ2; the comparative and Tucker-Lewis indices, i.e., CFI and TLI; the root-mean-square error of approximation, i.e., RMSEA; and the standardized root-mean-square residual, i.e., SRMR) and modification indices; Cronbach’s alpha; and item–item correlation residuals. We imposed a cutoff value of 0.7 for pattern coefficients when deciding which items to keep or eliminate. However, I.A. lacked confidence in proposing a cutoff for fit indices given what he saw in the literature; authors who examined simulated ordinal data for factor analyses found that standard rules of thumb for fit indices were not applicable when deciding on ideal factor models (Nye and Drasgow, 2010; Xia and Yang, 2019). Instead, those authors pointed to the need to examine other aspects of model fit, which we indicated earlier. Additionally, Zhao (2015) recommended that a reasonably fitting model is indicated by a small χ2 statistic and RMSEA value together with large CFI/TLI values. Therefore, we explain our choices about item removal and model selection based on multiple fit indices in the results below.
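    A sketch of how the four-factor CFA can be specified in lavaan is shown below. Item numbers follow Table 3, but the data frame name (dat2) and exact column names are assumptions, and the code outlines the approach rather than reproducing the authors’ script.

        # Sketch of the 16-item, four-factor CFA (hypothetical data frame `dat2`).
        library(lavaan)
        library(psych)

        psce_model <- '
          CV =~ CV_3 + CV_5 + CV_6 + CV_8 + CV_9
          CE =~ CE_1 + CE_2 + CE_3 + CE_4 + CE_5
          CA =~ CA_2 + CA_4 + CA_5
          CK =~ CK_2 + CK_3 + CK_5
        '

        psce_fit <- cfa(psce_model, data = dat2,
                        ordered = TRUE,        # ordinal indicators
                        estimator = "WLSMV")

        # Fit indices and standardized loadings used for model evaluation
        fitMeasures(psce_fit, c("chisq.scaled", "cfi.scaled", "tli.scaled",
                                "rmsea.scaled", "srmr"))
        standardizedSolution(psce_fit)

        # Internal consistency for one scale (repeat per construct)
        alpha(dat2[, c("CE_1", "CE_2", "CE_3", "CE_4", "CE_5")])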

    CFA Results.

As in the EFA, I.A. and L.A.C. found no justification for removing responses from our CFA data set. The MCAR test showed that the data were not missing completely at random (χ2 = 1717, df = 1474, p < 0.001), but no item was missing more than 5% of its values. All items had skewness and kurtosis below |1.0|, except one (CV_5), which had a skewness of exactly |1.0|. Examination of frequency histograms likewise showed only mild negative skewness, so most items could be treated as approximately univariate normal. Mardia’s test showed significant multivariate skewness and kurtosis values, indicating multivariate nonnormality; thus, we proceeded with the WLSMV estimator. All VIF values were less than 10, and average intrascale correlations ranged from 0.68 to 0.84.
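The screening steps described above can be reproduced with packages already cited in this work (naniar, psych, and MVN); the sketch below is illustrative and assumes a data frame `responses` containing only the survey items.

```r
library(naniar)  # Little's MCAR test
library(psych)   # univariate skew and kurtosis
library(MVN)     # Mardia's multivariate normality test

mcar_test(responses)                          # missing-completely-at-random test
describe(responses)[, c("skew", "kurtosis")]  # univariate skewness and kurtosis per item
mvn(responses, mvnTest = "mardia")$multivariateNormality  # Mardia's skewness and kurtosis
```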

Initial fit statistics for the 16-item, four-factor model were as follows: χ2 = 676, p < 0.001; RMSEA = 0.091, 90% CI [0.085, 0.097]; CFI = 0.985; TLI = 0.982; SRMR = 0.039. Because the fit indices alone could not pinpoint the source of misfit, we turned to modification indices to diagnose possible model misspecification. Modification indices (Supplemental Table 2) suggested that removing two items would improve the overall model fit. Item CV_3 (“I believe I should make a difference in my community using my biology skills”) was correlated with the other three constructs in addition to scientific civic value, while CE_5 (“I have biology skills that would help my community”) was correlated with the civic knowledge and civic action constructs in addition to civic self-efficacy. The scale internal consistency analyses revealed that overall alpha would remain unchanged, and item–total correlations would in fact increase, if CV_3 and CE_5 were dropped. Analysis of correlation residuals indicated that CV_3 had residual correlations at or greater than |0.10| with two civic knowledge items and one civic action item (Supplemental Table 3). Furthermore, the R2 value for CV_3 met our 0.7 cutoff by only a small margin compared with other items on the scale, indicating that its variance was comparatively poorly explained by the current factor structure.

With all of these pieces of information in place, I.A. and L.A.C. decided to drop items CE_5 and CV_3 from the PSCE scale. As with the EFA, after items were chosen for elimination, we examined each item to ensure that we were not losing important construct dimensions (see Supplemental Material for details). The final model-fit statistics for our 14-item measure are as follows: χ2 = 321.7, p < 0.001; RMSEA = 0.070, 90% CI [0.062, 0.078]; CFI = 0.993; TLI = 0.991; SRMR = 0.028. Compared with the 16-item model, the final model’s χ2 value improved by 52%, RMSEA by 23%, CFI/TLI by 1%, and SRMR by 28%. Based on Zhao’s (2015) recommendation, our 14-item model (Figure 2; see Supplemental Figure 4 for factor loadings) has a reasonable fit given our ordinal data. All four factors had good reliability within our context, with Cronbach’s alpha values ranging from a low of 0.88 (civic action) to a high of 0.96 (civic self-efficacy).
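A sketch of the diagnostics and refit described above, using the same hypothetical object names as the earlier CFA sketch, is shown below; it illustrates the workflow rather than reproducing the exact analysis script.

```r
# Local-misfit diagnostics on the 16-item model.
modindices(fit_16, sort. = TRUE)[1:10, ]   # largest modification indices
residuals(fit_16, type = "cor")$cov        # item-item correlation residuals

# Refit without CV_3 and CE_5, then recheck global fit.
psce_model_14 <- '
  value     =~ CV_1 + CV_2 + CV_4 + CV_5
  efficacy  =~ CE_1 + CE_2 + CE_3 + CE_4
  action    =~ CA_1 + CA_2 + CA_3
  knowledge =~ CK_1 + CK_2 + CK_3
'
fit_14 <- cfa(psce_model_14, data = responses,
              ordered = colnames(responses), estimator = "WLSMV")
fitMeasures(fit_14, c("chisq.scaled", "rmsea.scaled", "cfi.scaled",
                      "tli.scaled", "srmr"))
```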

    FIGURE 2.

FIGURE 2. Final 14-item PSCE survey to measure students’ scientific civic engagement. For reference, in the statement “1 = I am confident that I can contribute to improving life in my community using my SUBJECT skills.”, the “1” refers to the original item number assigned within the specified scale (i.e., CE_1), and the statement is the specific item wording. Color has been used to show the grouping of factors and their corresponding items. The response scale is included in the top white box. Implementers of the survey can replace “SUBJECT” with their course subject (e.g., biology or chemistry) or title (e.g., BIO101).

    Step 6. Evidence Based on Relations with Other Variables

To provide evidence that our instrument functioned as intended based on relations with other variables, we used a subset of responses from institutions that allowed us to collect data from the SDS and SSE scales in addition to PSCE responses. This sample included 729 students (Table 1; C/D results). We used these data to determine whether there were statistically significant relationships between PSCE scores and either SSE scores (expected to correlate, i.e., providing convergent evidence) or SDS scores (expected not to correlate, i.e., providing discriminant evidence).

    Convergent Evidence.

I.A. and L.A.C. expected responses to a scale measuring science self-efficacy to be positively correlated with predictors of scientific civic engagement. However, we did not expect science self-efficacy to be the only factor that might positively influence these predictors. While science self-efficacy is a likely precursor to, and positive contributor toward, developing the various predictors of scientific civic engagement, it is entirely possible for someone to be scientifically self-efficacious yet still score low on these predictors. Thus, we expected a significant positive correlation but did not expect this correlation to be high (>0.7). We used Kendall correlation to examine whether average SSE scores were significantly correlated with average PSCE scale scores. If each PSCE scale had a statistically significant relationship with SSE, this would provide convergent evidence for our PSCE instrument.
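A minimal sketch of this convergent check is shown below; the column names in `scores` (one mean score per PSCE construct plus an SSE mean) are hypothetical placeholders.

```r
# Hypothetical: `scores` has one row per student with construct-mean columns.
psce_constructs <- c("civic_value", "civic_efficacy", "civic_action", "civic_knowledge")

sapply(psce_constructs, function(v) {
  ct <- cor.test(scores[[v]], scores$sse_mean, method = "kendall")
  c(tau = unname(ct$estimate), p = ct$p.value)  # Kendall's tau and its p-value
})
```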

Results of convergent evidence.

The SSE scale had less than 5% missing data, which were not missing completely at random. Table 5 shows that SSE scores were positively and statistically significantly correlated with all PSCE scale (construct) scores. Thus, we have supplied convergent evidence for our PSCE survey.

    TABLE 5. Kendall’s tau values from convergent (SSE) and discriminant (SDS) validity tests with each construct (N = 729)

    Construct              Social desirability (SDS)    Science self-efficacy (SSE)
    Civic value            0.08*                        0.31***
    Civic self-efficacy    0.02                         0.36***
    Civic action           0.07                         0.28***
    Civic knowledge        0.06                         0.32***

    *p < 0.05.

    ***p < 0.01.

    Discriminant Evidence.

To determine whether responses were influenced by SD bias and to supply discriminant evidence, I.A. used Kendall correlation to test whether SDS total scores and PSCE scale scores were significantly correlated. We first coded socially desirable responses on the SDS scale as “1” and then summed these codes to obtain a total scale score. A statistically nonsignificant relationship between SDS and PSCE scale scores supports the claim that students’ responses were not overly influenced by the need to respond in a socially desirable way (i.e., not influenced by SD bias) and that the two scales are not related theoretically or empirically. Conversely, a significant relationship would indicate that, to some degree, the motivation to give a socially desirable answer influenced students’ responses.
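A sketch of these scoring and correlation steps, under hypothetical object names (`sds_items` for raw SDS responses and `desirable_key` for the keyed socially desirable response to each item), is shown below.

```r
# Code each SDS response as 1 if it matches the socially desirable key, 0 otherwise,
# then sum across items to get a total SDS score per student.
sds_coded <- mapply(function(item, key) as.integer(item == key),
                    sds_items, desirable_key)
scores$sds_total <- rowSums(sds_coded)

# Nonsignificant Kendall correlations with each PSCE construct would supply
# discriminant evidence (construct column names are hypothetical).
psce_constructs <- c("civic_value", "civic_efficacy", "civic_action", "civic_knowledge")
sapply(psce_constructs, function(v)
  cor.test(scores[[v]], scores$sds_total, method = "kendall")$p.value)
```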

    Results of discriminant evidence.

The SDS scale was reliable (α = 0.74) and had less than 5% missing data, which were not missing completely at random. SDS scores showed statistically nonsignificant correlations with all PSCE scale (i.e., construct) scores except civic value (Table 5). For the civic value scale, Kendall’s tau was relatively small (0.08), indicating that the relationship, while significant, was not strong. Thus, we conclude that our scale is minimally affected by SD bias and that students’ desire to respond in a socially desirable way is unlikely to substantially affect the interpretation of scale results. In doing so, we have also provided discriminant evidence for our instrument in relation to this construct.

    DISCUSSION

As the world continues to experience the effects of the COVID-19 pandemic, scientists and health professionals are being reminded of the need not only to ensure future health protection but also to rebuild trust between the public and scientists. While progress in vaccine design has enabled the development of herd immunity at a faster rate than would have been possible 20 years ago (Vanderslott et al., 2013), the current challenge is rooted in the public’s hesitation to engage with such rapid scientific advancements. Resistance to vaccination, a politicized topic, primarily stems from the public’s distrust of scientists (Garlick and Levine, 2017), sometimes resulting from scientists having taken advantage of communities in the past (Jones, 1993; Skloot, 2017). A potential solution to this conundrum is building future scientists’ skills and confidence in scientific civic engagement. If today’s STEM students develop value, build knowledge, and strengthen self-efficacy in scientific civic engagement, then they are more likely to take action in the future to collaborate with communities addressing socioscientific issues and to earn their trust (Levy et al., 2021). Thus, providing undergraduate students with opportunities to take part in STEM courses that are centered on working with underserved or local communities has the potential to help re-establish science’s credibility in such spaces.

To this end, many STEM educators have recently developed civically engaged STEM courses that address issues of public concern or engage students in research that is relevant and useful for local communities (e.g., Flowers and Chodkiewicz, 2009; Olimpo et al., 2019; Malotky et al., 2020; Dauer et al., 2021). We see scientific civic engagement as a tool not only for educating undergraduates toward mastering science skills, but also for empowering students to revitalize the current and future state of affairs through advocacy for their communities. As noted in the Introduction, these civically engaged courses have the potential to allow students to see themselves as having the ability to create positive change within certain communities. However, assessing the impacts of these courses on students’ development of important predictors of future scientific civic engagement requires an efficient assessment tool. To address this need, we created the PSCE survey, which 1) allows users to collect quantitative data from undergraduates participating in civically engaged STEM courses, 2) has the potential to provide insight into the development of our proposed predictors of scientific civic engagement constructs across different STEM disciplines, and 3) is parsimonious, so that it can be used alongside additional scales of interest. Our final instrument (included in the Supplemental Material and in Figure 2) went through a rigorous process of collecting multiple forms of validity evidence, as described by Reeves and Marbach-Ad (2016) and according to the standards proposed by the American Educational Research Association et al. (2014). We began by collecting, from other measures, general items that predicted future civic engagement; we then reviewed our adapted predictors of scientific civic engagement items with experts to provide evidence based on test content and conducted cognitive interviews with undergraduates to provide evidence based on response processes. With the help of EFA and CFA, we provided evidence based on the internal structure of the survey. Finally, we used two separate scales (SSE and SDS) to supply both convergent and discriminant evidence (in addition to checking for SD bias), which together provided evidence based on relations with external variables. The result was a 14-item, multidimensional survey made up of the following scientific civic scales: value, self-efficacy, action, and knowledge (Figure 2). The validity and reliability evidence we presented supports our assertion that our survey represents each defined construct as intended and that the items within each scale constitute an accurate measure of that construct.

Each scale represents a unique aspect that, based on our literature review, predicts students’ future scientific civic engagement. Items within the civic value scale measure a student’s sense of responsibility toward engaging civically with a community using their science skills. An analysis of National Election Studies panel data examining whether societal interest value (i.e., civic value) could explain one’s engagement with the community showed that the former had a significant relationship with community engagement (Funk, 1998). This finding suggested that those who expressed value in helping the community were more likely to engage in behaviors that demonstrated such commitment. From a STEM perspective, one study found that both graduate and undergraduate students showed an overall increase in civic awareness (synonymous with civic value) after participating in an international service-learning course focused on global biodiversity and conservation (Daniel and Mishra, 2017). One quote from the study highlighted how the experience changed a student’s world perspective and made the student conscious of not squandering resources, while emphasizing the need to educate others about resource management as well. This result, along with the data from Funk (1998), suggests that increasing students’ civic value may be a tractable outcome of civically engaged STEM courses, with implications for students’ scientific civic engagement after the course.

The civic self-efficacy scale contains items centered on students’ sense of confidence in their ability to create change within a community using their science skills. In a quasi-experimental study by Hipolito-Delgado and Zion (2017), the authors found a statistically significant difference in civic self-efficacy scores between high school students who participated in a civic inquiry course and a control group. This was particularly evident among students of color, for whom participating in student-centered, inquiry-based learning activities promoted civic self-efficacy to a greater degree than for white students. Another study at a large midwestern university explored the effects of service-learning projects in various disciplines and found significant impacts on civic self-efficacy (Weber and Weber, 2010). We would predict the same effect in civically engaged STEM courses. This prediction is corroborated by the work of Olimpo et al. (2019), in which a student credited a civically engaged CURE course with enabling him to better engage with the community using his science skills. Again, this suggests that civic self-efficacy is an achievable outcome of civically engaged STEM courses.

The civic action scale encompasses items that ask students about their intention to civically engage with a community using their science skills. Past research has found that students in service-learning courses engaged in civic actions in the future as a result of course participation (Moely et al., 2002b). Results from a survey of 541 students showed that those taking part in service learning had significantly higher civic action scores than the control group. Similarly, analyses of final reflections from students in a service-learning environmental chemistry course indicated their intention to serve the community in the future by being agents of change (McGowin and Teed, 2019). Again, this indicates that intention to act is a tractable outcome of service learning and a good predictor of future action.

Finally, items within the civic knowledge scale ask students to report whether they feel they know how to use their science knowledge to help a community. A study of urban public school students in a civics education program found that civic knowledge was associated with students’ intention to vote, with civic content and knowledge about current events being the strongest predictors (Cohen and Chaffee, 2013). In the realm of science, analysis of essay responses from sixth- to 12th-grade BIPOC students indicated how engaging in STEM service-learning projects in their community made them more knowledgeable about community issues and about potential actions to solve them (Gallay et al., 2021). Students mentioned learning how littering contributes to pollution and indicated that this knowledge informed their intentions and actions around picking up trash in their community. Notably, these students also said that their civic science projects empowered them to create change for the better; they now had the political voice and knowledge to tackle environmental issues within their community.

Taken together, the evidence presented here, along with prior empirical work and theory, suggests that each construct in the final PSCE scale is a potential outcome of civic engagement in courses and is likely to positively influence future scientific civic engagement. In addition, prior work has indicated relationships between the constructs, for example, between high civic self-efficacy and civic action (Metzger et al., 2020). Thus, this survey, which includes the four interrelated civic constructs, might provide further insights, such as how these constructs interact and/or which develops first during undergraduate learning (Zaff et al., 2011; Chan et al., 2014). Future work using the survey could provide broad insight into the mechanisms by which the described predictors of scientific civic engagement develop in undergraduate settings and into what we can leverage to improve and speed up that process; this highlights the value of a succinct instrument that can measure multiple constructs predicting scientific civic engagement.

    IMPLICATIONS

The PSCE survey has the advantage of being potentially useful to educators teaching civically engaged courses at their institutions. Instructors can use the survey to conduct a formative assessment of the aspects predicting scientific civic engagement at a given point in time. The results of this assessment could inform instructional actions targeting specific civic constructs in future course iterations. For example, if students in an undergraduate STEM course scored high on average on the value scale but low on the self-efficacy scale, instructors could specifically adapt their courses to help students build self-efficacy (e.g., through mastery experiences, vicarious experiences, or social persuasion; Usher and Pajares, 2008) while maintaining existing course elements that emphasize the importance of scientific civic engagement. Similarly, if students scored low on the civic knowledge scale, instructors could incorporate readings from a local news source (e.g., Huerta and Jozwiak, 2008), which can increase students’ knowledge of how they might engage with their community.

The PSCE scale could also be used in conjunction with measures of other valued outcomes, such as science learning or persistence in STEM, to determine whether, and to what degree, PSCE influences or is related to these factors, and vice versa. Prior work has described a relationship between students’ view of science as a prosocial endeavor and their likelihood of persisting (Allen et al., 2015; Estrada et al., 2018). If this is indeed the case, then further characterizing the relationships between predictors of scientific civic engagement and persistence in STEM, and parsing which constructs within the PSCE survey might contribute to persistence, could potentially inform efforts to address the underrepresentation of certain groups in STEM fields (Asai, 2020).

Finally, as instructors begin to use the PSCE, we want to emphasize that, because this is a multidimensional instrument and each dimension is distinct from the others, users should view the average score for each construct separately. The items should not be averaged across the entire instrument, nor should the averages of the constructs be combined into one score. Doing so could be misleading, as the average score of a student who is moderately low in all constructs might match that of a student who is high in one construct but extremely low in the others. Ultimately, viewing the scores for each construct separately will lend more insight into each of the predictors of scientific civic engagement and into where instruction might be focused to further improve a specific predictor. We hope that such approaches can gradually improve instruction that targets scientific civic engagement as an outcome.
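To make the recommended scoring concrete, a short sketch is shown below; it assumes a data frame `responses` of final PSCE item responses, with hypothetical item names grouped by construct.

```r
# Score each construct separately; do not collapse the four scales into one total.
psce_scales <- list(
  value     = c("CV_1", "CV_2", "CV_4", "CV_5"),
  efficacy  = c("CE_1", "CE_2", "CE_3", "CE_4"),
  action    = c("CA_1", "CA_2", "CA_3"),
  knowledge = c("CK_1", "CK_2", "CK_3")
)

construct_means <- sapply(psce_scales, function(items)
  rowMeans(responses[, items], na.rm = TRUE))  # one mean per student per construct

colMeans(construct_means, na.rm = TRUE)        # class-level average for each construct
```

Reporting the four class-level averages side by side, rather than a single combined score, preserves the diagnostic information described above.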

    LIMITATIONS

Collecting validity evidence for a new survey requires sampling from a population that represents the individuals whom the survey is designed to assess. We achieved this goal in that our survey sampled STEM college students, with individuals from historically underrepresented groups making up at least 30% of every sample (a result of efforts to recruit from minority-serving institutions). However, our sample and results are nonetheless limited. We fell short of fully representing different gender identities, community college students and students attending primarily undergraduate institutions, and certain ethnic and racial groups (e.g., very few Black, Native American, or Pacific Islander students) in both the cognitive interviews and the factor analyses. Furthermore, because of purposive sampling, biology majors and students enrolled at research-intensive institutions account for a moderate majority of our data set, in both the cognitive interviews and the factor analyses. Thus, we recommend caution when using the survey to measure predictors of scientific civic engagement across all gender identities, for students enrolled in different institution types, or for all demographic groups. We also believe that providing a larger incentive to complete the study could have resulted not only in higher response rates, but also in responses from a broader population (James and Bolstein, 1992). Ideally, data should be collected from a broader sample, and additional analytical steps should be taken to ensure that the survey can support valid inferences for more populations of students, allow comparisons among student populations, and be used longitudinally. For example, explicitly collecting data from 4-year and 2-year colleges and examining differential item functioning could lend insight into whether the instrument can be used to compare these two institutional groups. Similar steps could be taken for different ethnic/racial groups. Also, performing test–retest reliability analyses could lend insight into whether the survey can reliably measure longitudinal trends. These additional steps to collect evidence of validity are beyond the scope of this study but would add value to the survey and further extend its utility, as explained in the Implications section. We view these limitations as an opportunity for researchers to contribute to the scholarship on this topic across new contexts.

Notably, and related to the limitations described above, although our sample drew from multiple disciplines, it was beyond the scope of this study to investigate whether the constructs were maintained, or the evidence of validity was similar, across populations from different disciplines. It is entirely possible that the scales could be interpreted differently across scientific disciplines, but we cannot ascertain whether this is the case from the current work. What we can say is that, because we allowed our scale to be flexible in which discipline it asked students about, we can be more confident that this scale will be useful across STEM and specifically for the disciplines represented in this work (see Table 2). In addition, while our conception of biology skills matches closely with that defined in the BioSkills framework, we did not explicitly define these skills during the cognitive interviews or in the actual survey, so we relied on students’ interpretation of the term. Future work on this front could apply the BioSkills framework at the start of validation by providing a definition of biology skills to students, or could ask students to articulate their understanding of the term “biology skills” during cognitive interviews. Comparable performance across different disciplines could also be investigated (e.g., via differential item functioning between disciplinary groups).

To achieve parsimony without losing critical information from our proposed constructs, we ultimately eliminated 14 items from our original 28-item measure using both quantitative and qualitative methods. We examined each candidate for elimination and discussed whether dropping the item would cause the loss of a piece of the construct in question. Ultimately, we felt confident that eliminating these 14 items would not diminish the survey’s ability to capture information on PSCE from undergraduate students, given that we used redundancy as our fail-safe against construct underrepresentation. Nonetheless, there is always the possibility that some of these items played a key role in defining their respective constructs and that eliminating them removed a central piece of information from the survey. Thus, we advise that, in future PSCE survey refinement studies, investigators consider replacing eliminated items with new ones developed under the same theoretical framework and with expert guidance. The aim would be to determine whether our current work led to an unintended loss of construct representation, which could in turn usher in a new PSCE survey that captures the dimensions predicting scientific civic engagement more accurately than its predecessors.

We also want to remind readers that this survey’s utility has been investigated under the premise of PSCE in the context of one’s own community (i.e., we ask about engagement with one’s community and allow students to define their own communities). This approach makes the survey versatile, in that it can address scientific civic engagement across the diverse communities with which students identify, and it allows our students (i.e., the community that we care about) to exercise their own agency in deciding which communities are personally relevant. Nonetheless, it has some limitations. While we anticipate that many instructors will be interested in assessing students’ PSCE results with regard to students’ own communities, some instructors may be interested in assessing changes in students’ PSCE results with regard to a community that the class works with but that not all students identify with. We predict that the survey will still yield valid responses in this case, given the breadth of communities that students responded about during validation (Table 2). However, the use of the PSCE scale for communities that students may not identify with remains a consideration that we are currently investigating. Furthermore, it would also be valuable for future investigators to examine whether community members would respond to the survey items in the same way as the students. This survey is centered on the students’ perspective of engaging civically using their science skills, so flipping the context would yield new insights for this measure.

Finally, our conceptualization of civic engagement as serving one’s community stands partially in contrast with civic engagement associated with the political realm. This conceptualization reflects the views and priorities of the authors of this work. Including the perspectives of additional experts without an authorship role, or without this specific view, might have changed the survey and potentially broadened this work or mitigated biases introduced by our perspectives during the item-review process. Thus, using this survey to measure changes in predictors of scientific civic engagement in courses in which students participate in political activism or legislation would likely be an invalid use of the measure. We end this paper by acknowledging that, despite our efforts to make the “ideal” PSCE survey, we are aware of its limitations. Gathering validity evidence to support the use of a survey in multiple contexts is a continuous process, and we hope that future work will address the limitations we describe to further improve the PSCE survey.

    ACKNOWLEDGMENTS

    We would like to thank the REACH, Barger and Bowman labs (all at University of Colorado, Boulder), and especially the undergraduate members for feedback and support. We would like to thank Kelly McDonald (California State University, Sacramento), Jeff Olimpo (University of Texas at El Paso), Jamie Sabel (University of Memphis), Jenny Knight, John Basey, and Teresa Bilinski (all at University of Colorado, Boulder) for assistance in recruiting participants. We would also like to acknowledge and thank the National Science Foundation Improving Undergraduate STEM Education Program (award no. 1835610: “Combining Course-based Undergraduate Research Experiences with Place-based Learning to Increase Student Retention, Civic Engagement, and Self-efficacy”) and the Ecology and Evolutionary Biology Department at University of Colorado, Boulder, for graduate student support.

    REFERENCES

  • Adler, R. P., & Goggin, J. (2005). What do we mean by “civic engagement”? Journal of Transformative Education, 3(3), 236–253. https://doi.org/10.1177/1541344605276792
  • Allen, J. M., Muragishi, G. A., Smith, J. L., Thoman, D. B., & Brown, E. R. (2015). To grab and to hold: Cultivating communal goals to overcome cultural and structural barriers in first generation college students’ science interest. Translational Issues in Psychological Science, 1(4), 331–341. https://doi.org/10.1037/tps0000046
  • American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational & psychological testing. Washington, DC: American Educational Research Association.
  • Arimoto, T., & Sato, Y. (2012). Rebuilding public trust in science for policy-making. Science, 337(6099), 1176–1177. https://doi.org/10.1126/science.1224004
  • Asai, D. J. (2020). Race matters. Cell, 181(4), 754–757. https://doi.org/10.1016/j.cell.2020.03.044
  • Banks, S., Armstrong, A., Carter, K., Graham, H., Hayward, P., Henry, A., ... & Strachan, A. (2013). Everyday ethics in community-based participatory research. Contemporary Social Science, 8(3), 263–277. https://doi.org/10.1080/21582041.2013.769618
  • Baugh, A. J. (2019). Confronting racism and white privilege in courses on religion and the environment: An inclusive pedagogical approach. Teaching Theology & Religion, 22(4), 269–279. https://doi.org/10.1111/teth.12503
  • Beatty, P. C., & Willis, G. B. (2007). Research synthesis: The practice of cognitive interviewing. Public Opinion Quarterly, 71(2), 287–311. https://doi.org/10.1093/poq/nfm006
  • Birdwell, J., Scott, R., & Horley, E. (2013). Active citizenship, education and service learning. Education, Citizenship and Social Justice, 8(2), 185–199. https://doi.org/10.1177/1746197913483683
  • Bobek, D., Zaff, J., Li, Y., & Lerner, R. M. (2009). Cognitive, emotional, and behavioral components of civic action: Towards an integrated measure of civic engagement. Journal of Applied Developmental Psychology, 30(5), 615–627. https://doi.org/10.1016/j.appdev.2009.07.005
  • Butin, D. W. (2010). Toward a theory and practice of community engagement. In Service-learning in theory and practice. New York, NY: Palgrave Macmillan. https://doi.org/10.1057/9780230106154_7
  • Carnegie Classification of Institutions of Higher Education. (2022). About Carnegie Classification. Retrieved May 13, 2022, from https://carnegieclassifications.iu.edu
  • Centers for Disease Control and Prevention. (2021). Trends in number of COVID-19 cases and deaths in the US reported to CDC, by state/territory. Retrieved September 1, 2021, from https://covid.cdc.gov/covid-data-tracker/#trends_totaldeaths
  • Chan, W. Y., Ou, S. R., & Reynolds, A. J. (2014). Adolescent civic engagement and adult outcomes: An examination among urban racial minorities. Journal of Youth and Adolescence, 43(11), 1829–1843. https://doi.org/10.1007/s10964-014-0136-5
  • Clemmons, A. W., Timbrook, J., Herron, J. C., & Crowe, A. J. (2020). BioSkills Guide: Development and national validation of a tool for interpreting the Vision and Change core competencies. CBE—Life Sciences Education, 19(4), ar53. https://doi.org/10.1187/cbe.19-11-0259
  • Cohen, A. K., & Chaffee, B. W. (2013). The relationship between adolescents’ civic knowledge, civic attitude, and civic behavior and their self-reported future likelihood of voting. Education, Citizenship and Social Justice, 8(1), 43–57. https://doi.org/10.1177/1746197912456339
  • Daniel, K. L., & Mishra, C. (2017). Student outcomes from participating in an international STEM service-learning course. SAGE Open, 7(1), 215824401769715. https://doi.org/10.1177/2158244017697155
  • Dauer, J. M., Sorensen, A. E., & Wilson, J. (2021). Students’ civic engagement self-efficacy varies across socioscientific issues contexts. Frontiers in Education, 6, 154.
  • Davern, M., Bautista, R., Freese, J., Morgan, S. L., & Smith, T. W. (Eds.) (2021). General Social Survey 2021 Cross-section [Machine-readable data file]. NORC. Berkeley, CA: Computer-assisted Survey Methods Program, University of California/ISA.
  • de Winter, J. C. F., Dodou, D., & Wieringa, P. A. (2009). Exploratory factor analysis with small sample sizes. Multivariate Behavioral Research, 44(2), 147–181. https://doi.org/10.1080/00273170902794206
  • Doolittle, A., & Faul, A. C. (2013). Civic engagement scale: A validation study. SAGE Open, 3(3), 1–7. https://doi.org/10.1177/2158244013495542
  • Estrada, M., Hernandez, P. R., & Schultz, P. W. (2018). A longitudinal study of how quality mentorship and research experience integrate underrepresented minorities into STEM careers. CBE—Life Sciences Education, 17(1), ar9. https://doi.org/10.1187/cbe.17-04-0066
  • Flowers, R., & Chodkiewicz, A. (2009). Local communities and schools tackling sustainability and climate change. Australian Journal of Environmental Education, 25, 71–81.
  • Fox, J., & Weisberg, S. (2018). An R companion to applied regression. Los Angeles, CA: Sage Publications.
  • Funk, C. L. (1998). Practicing what we preach? The influence of a societal interest value on civic engagement. Political Psychology, 19(3), 601–614. https://doi.org/10.1111/0162-895X.00120
  • Gallay, E., Flanagan, C., & Parker, B. (2021). Place-based environmental civic science: Urban students using STEM for public good. Frontiers in Education, 6, 693455. https://doi.org/10.3389/feduc.2021.693455
  • Garlick, J. A., & Levine, P. (2017). Where civics meets science: Building science for the public good through civic science. Oral Diseases, 23(6), 692–696.
  • Gray, D. M., II, Nolan, T. S., Bignall, O. N. R., Gregory, J., & Joseph, J. J. (2022). Reckoning with our trustworthiness, leveraging community engagement. Population Health Management, 25(1), 6–7. https://doi.org/10.1089/pop.2021.0158
  • Guadagnoli, E., & Velicer, W. F. (1988). Relation of sample size to the stability of component patterns. Psychological Bulletin, 103(2), 265.
  • Guta, A., Flicker, S., & Roche, B. (2013). Governing through community allegiance: A qualitative examination of peer research in community-based participatory research. Critical Public Health, 23(4), 432–451. https://doi.org/10.1080/09581596.2012.761675
  • Hemer, K. M., & Kappus, A. (2021). Assessment of civic learning and democratic engagement. New Directions for Higher Education, 2021(195–196), 133–141. https://doi.org/10.1002/he.20417
  • Hipolito-Delgado, C. P., & Zion, S. (2017). Igniting the fire within marginalized youth: The role of critical civic inquiry in fostering ethnic identity and civic self-efficacy. Urban Education, 52(6), 699–717. https://doi.org/10.1177/0042085915574524
  • Huerta, J. C., & Jozwiak, J. (2008). Developing civic engagement in general education political science. Journal of Political Science Education, 4(1), 42–60. https://doi.org/10.1080/15512160701816101
  • James, J. M., & Bolstein, R. (1992). Large monetary incentives and their effect on mail survey response rates. Public Opinion Quarterly, 56(4), 442. https://doi.org/10.1086/269336
  • Jones, J. H. (1993). Bad blood: The Tuskegee syphilis experiment (New and expanded ed.). New York, NY: Free Press.
  • Kalas, P., & Raisinghani, L. (2019). Assessing the impact of community-based experiential learning: The case of Biology 1000 students. International Journal of Teaching and Learning in Higher Education, 31(2), 261–273.
  • Kao, C.-P., Lin, K.-Y., Chien, H.-M., & Chen, Y.-T. (2020). Enhancing volunteers’ intention to engage in citizen science: The roles of self-efficacy, satisfaction and science trust. Journal of Baltic Science Education, 19(2), 234–246. https://doi.org/10.33225/jbse/20.19.234
  • Knekta, E., Runyon, C., & Eddy, S. (2019). One size doesn’t fit all: Using factor analysis to gather validity evidence when using surveys in your research. CBE—Life Sciences Education, 18(1), rm1. https://doi.org/10.1187/cbe.18-04-0064
  • Korkmaz, S., Goksuluk, D., & Zararsiz, G. (2014). MVN: An R package for assessing multivariate normality. R Journal, 6(2), 151–162. https://doi.org/10.32614/rj-2014-031
  • Levy, B. L. M., Oliveira, A. W., & Harris, C. B. (2021). The potential of “civic science education”: Theory, research, practice, and uncertainties. Science Education, 105(6), 1053–1075. https://doi.org/10.1002/sce.21678
  • Malotky, M. K. H., Mayes, K. M., Price, K. M., Smith, G., Mann, S. N., Guinyard, M. W., ... & Bernot, K. M. (2020). Fostering inclusion through an interinstitutional, community-engaged, course-based undergraduate research experience. Journal of Microbiology & Biology Education, 21(1), 21.1.31. https://doi.org/10.1128/jmbe.v21i1.1939
  • May, D. K. (2009). Mathematics self-efficacy and anxiety questionnaire. Athens, GA: University of Georgia.
  • McGowin, A. E., & Teed, R. (2019). Increasing expression of civic-engagement values by students in a service-learning chemistry course. Journal of Chemical Education, 96(10), 2158–2166. https://doi.org/10.1021/acs.jchemed.9b00221
  • Metzger, A., Alvis, L., & Oosterhoff, B. (2020). Adolescent views of civic responsibility and civic efficacy: Differences by rurality and socioeconomic status. Journal of Applied Developmental Psychology, 70, 101183. https://doi.org/10.1016/j.appdev.2020.101183
  • Moely, B. E., McFarland, M., Miron, D., Mercer, S. H., & Ilustre, V. (2002a). Changes in college students’ attitudes and intentions for civic involvement as a function of service-learning experiences. Michigan Journal of Community Service Learning, 9(1), 18–26. Retrieved September 1, 2021, from http://hdl.handle.net/2027/spo.3239521.0009.102
  • Moely, B. E., Mercer, S. H., Ilustre, V., Miron, D., & McFarland, M. (2002b). Psychometric properties and correlates of the Civic Attitudes and Skills Questionnaire (CASQ): A measure of students’ attitudes related to service-learning. Michigan Journal of Community Service Learning, 8(2), 15–26.
  • Moore-Martínez, P., & Pongan, J. M. (2018). A win-win: The intentional cultivation of reciprocal relationships between LSP and community partners. ALDEEU, 33, 219–250.
  • Morais, D. B., & Ogden, A. C. (2011). Initial development and validation of the Global Citizenship Scale. Journal of Studies in International Education, 15(5), 445–466. https://doi.org/10.1177/1028315310375308
  • Murakami, M., & Tsubokura, M. (2021). Deepening community-aligned science in response to wavering trust in science. The Lancet, 397(10278), 969–970. https://doi.org/10.1016/S0140-6736(21)00358-5
  • Nye, C. D., & Drasgow, F. (2010). Assessing goodness of fit: Simple rules of thumb simply do not work. Organizational Research Methods, 14(3), 548–570. https://doi.org/10.1177/1094428110368562
  • O’Daniel, J. M., Rosanbalm, K. D., Boles, L., Tindall, G. M., Livingston, T. M., & Haga, S. B. (2012). Enhancing geneticists’ perspectives of the public through community engagement. Genetics in Medicine, 14(2), 243–249. https://doi.org/10.1038/gim.2011.29
  • Olimpo, J. T., Apodaca, J., Hernandez, A., & Paat, Y. F. (2019). Disease and the environment: A health disparities CURE incorporating civic engagement education. Science Education and Civic Engagement, 11(1), 13–24.
  • Osborne, J. W. (2014). Best practices in exploratory factor analysis. Seattle, WA: CreateSpace Independent Publishing Platform.
  • Pruett, J. L., & Weigel, E. G. (2020). Concept map assessment reveals short-term community-engaged fieldwork enhances sustainability knowledge. CBE—Life Sciences Education, 19(3), ar38. https://doi.org/10.1187/cbe.20-02-0031
  • Raiche, G., & Magis, D. (2020). nFactors: Parallel analysis and other non graphical solutions to the Cattell scree test. Retrieved September 1, 2021, from https://CRAN.R-project.org/package=nFactors
  • Reeves, T. D., & Marbach-Ad, G. (2016). Contemporary test validity in theory and practice: A primer for discipline-based education researchers. CBE—Life Sciences Education, 15(1), 1–9. https://doi.org/10.1187/cbe.15-08-0183
  • Revelle, W. (2021). psych: Procedures for psychological, psychometric, and personality research. Retrieved September 1, 2021, from https://CRAN.R-project.org/package=psych
  • Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(1), 1–36. https://doi.org/10.18637/jss.v048.i02
  • Rudolph, J. L. (2014). Dewey’s “science as method” a century later: Reviving science education for civic ends. American Educational Research Journal, 51(6), 1056–1083. https://doi.org/10.3102/0002831214554277
  • Rudolph, J. L., & Horibe, S. (2016). What do we mean by science education for civic engagement? Journal of Research in Science Teaching, 53(6), 805–820. https://doi.org/10.1002/tea.21303
  • Skloot, R. (2017). The immortal life of Henrietta Lacks. New York, NY: Crown.
  • Steinberg, K., Hatcher, J. A., & Bringle, R. G. (2011). Civic-minded graduate: A north star. Michigan Journal of Community Service Learning, 18(1), 19–33.
  • Strahan, R., & Gerbasi, K. C. (1972). Short, homogeneous versions of the Marlowe-Crowne Social Desirability Scale. Journal of Clinical Psychology, 28(2), 191–193.
  • Tabachnick, B. G., & Fidell, L. S. (2019). Using multivariate statistics. Boston, MA: Pearson.
  • Tierney, N., Cook, D., McBain, M., & Fay, C. (2021). naniar: Data structures, summaries, and visualisations for missing data. Retrieved September 1, 2021, from https://CRAN.R-project.org/package=naniar
  • Troiano, G., & Nardi, A. (2021). Vaccine hesitancy in the era of COVID-19. Public Health, 194, 245–251. https://doi.org/10.1016/j.puhe.2021.02.025
  • Trott, C. D., Sample McMeeking, L. B., & Weinberg, A. E. (2020). Participatory action research experiences for undergraduates: Forging critical connections through community engagement. Studies in Higher Education, 45(11), 2260–2273. https://doi.org/10.1080/03075079.2019.1602759
  • Usher, E. L., & Pajares, F. (2008). Self-efficacy for self-regulated learning: A validation study. Educational and Psychological Measurement, 68(3), 443–463. https://doi.org/10.1177/0013164407308475
  • Vance-Chalcraft, H. D., Gates, T. A., Hogan, K. A., Evans, M., Bunnell, A., & Hurlbert, A. H. (2021). Using citizen science to incorporate research into introductory biology courses at multiple universities. Citizen Science: Theory and Practice, 6(1), 23. http://doi.org/10.5334/cstp.424
  • Vanderslott, S., Dadonaite, B., & Roser, M. (2013). Vaccination. Our World in Data. Retrieved September 1, 2021, from https://ourworldindata.org/vaccination
  • Weber, J. E., & Weber, P. S. (2010). Service learning: An empirical analysis of the impact of service learning on civic mindedness. Journal of Business, Society & Government, 2(2), 79–94.
  • Weber, P. S., Weber, J. E., Sleeper, B. J., & Schneider, K. C. (2004). Self-efficacy toward service, civic participation and the business student: Scale development and validation. Journal of Business Ethics, 49(4), 359–369. https://doi.org/10.1023/B:BUSI.0000020881.58352.ab
  • Wei, T., & Simko, V. (2021). R package “corrplot”: Visualization of a correlation matrix. Retrieved September 1, 2021, from https://github.com/taiyun/corrplot
  • Westheimer, J., & Kahne, J. (2004). What kind of citizen? The politics of educating for democracy. American Educational Research Journal, 41(2), 237–269. https://doi.org/10.3102/00028312041002237
  • Xia, Y., & Yang, Y. (2019). RMSEA, CFI, and TLI in structural equation modeling with ordered categorical data: The story they tell depends on the estimation methods. Behavior Research Methods, 51(1), 409–428. https://doi.org/10.3758/s13428-018-1055-2
  • Zaff, J. F., Kawashima-Ginsberg, K., Lin, E. S., Lamb, M., Balsano, A., & Lerner, R. M. (2011). Developmental trajectories of civic engagement across adolescence: Disaggregation of an integrated construct. Journal of Adolescence, 34(6), 1207–1220. https://doi.org/10.1016/j.adolescence.2011.07.005
  • Zhao, Y. (2015). The performance of model fit measures by robust weighted least squares estimators in confirmatory factor analysis [Doctoral dissertation]. The Pennsylvania State University. Retrieved September 1, 2021, from https://etda.libraries.psu.edu/catalog/24901