
General Essays and Articles

Experience with Scientific Teaching in Face-to-Face Settings Promoted Usage of Evidence-Based Practices during Emergency Remote Teaching

    Published Online: https://doi.org/10.1187/cbe.22-03-0049

    Abstract

    During the Spring of 2020, instructors across the nation scrambled to transition their face-to-face courses to remote/online modalities. Necessarily, teaching practices adapted. This study investigated how the usage of evidence-based practices as defined by scientific teaching (ST) was impacted during this rapid transition. More than 130 science faculty teaching courses in biology, mostly from primarily undergraduate institutions in the U.S. Midwest, completed the Measurement Instrument for Scientific Teaching (MIST) for one course of their choosing (lecture portion only for laboratory-based courses). Participants compared how they taught the course in the face-to-face versus the remote setting. MIST scores declined in every category of ST. An instructor’s face-to-face MIST score was the largest predictor of the remote MIST score. Fourteen representative participants completed a follow-up interview to discuss how and why they made the changes they did within each ST category. Interviews uncovered variation in how individual practices were emphasized, scheduled, and implemented in normal teaching environments, how access to resources changed in the Spring of 2020, and how all of these things impacted the way ST practices were adopted in emergency remote teaching. Recommendations for mitigating declines in the use of evidence-based teaching in response to future unexpected events are discussed.

    INTRODUCTION

    The rapid transition to remote teaching in the Spring of 2020 due to the emergence of the COVID-19 pandemic dramatically impacted higher education. Existing challenges such as student access to efficient and affordable Internet connections and reliable technology devices were magnified (Blaich and Wise, 2020). The prolonged stresses of isolation, family demands, and financial strain negatively impacted student mental health (Chirikov et al., 2020; Kecojevic et al., 2020). Given that the traditional modes for accessing mental health services were no longer available, faculty increasingly served these roles (Anderson, 2020; Blaich and Wise, 2020; Colclasure et al., 2021). Within the span of as little as several days, faculty needed to make significant adjustments to their courses in transitioning them to a remote or online setting (Trust and Whalen, 2020; Colclasure et al., 2021). They found themselves reworking the way they used existing technology and exploring new technology-based tools for quick adoption to fill in the gaps (Johnson et al., 2020; Trust and Whalen, 2020).

    Campus technology infrastructures and services, generally designed for facilitating face-to-face interactions, were suddenly strained as entire institutions moved all instructional delivery and administrative work to a remote environment. Teaching support resources such as centers for teaching excellence, faculty development, and instructional design were easily overwhelmed (Aebersold et al., 2020). The most central aspect of learning and social connection for many, the face-to-face classroom, was no longer available (Mali and Lim, 2021).

    Instructors at primarily undergraduate institutions (PUIs) typically have heavy teaching loads, are expected to sustain substantial and meaningful interactions with their students, and often do not have the help of teaching assistants or learning assistants in the classroom (Bowne et al., 2011; Fernandes et al., 2020; Freeman et al., 2020). Although these factors are not mutually exclusive or specifically unique to a PUI, they may be particularly pervasive among PUI instructors, which suggests that PUIs may be a good sampling source for high concentrations of teaching-focused instructors who were impacted by the emergency remote teaching (ERT) shift. In addition, most full-time faculty lacked online teaching experience before implementing ERT, and only one-third of all faculty in a 2019 survey agreed or strongly agreed that online courses could achieve equivalent student learning outcomes, a percentage that was particularly low (15%) among private, baccalaureate institutions (Lederman, 2019). This also identifies PUI instructors as a particularly interesting sample population given their generally low buy-in to remote learning environments before the COVID-necessitated plunge into ERT.

    Since the Spring of 2020, several studies have been published describing student and faculty experiences (Anderson, 2020; Blaich and Wise, 2020; Johnson et al., 2020; Kecojevic et al., 2020; Mali and Lim, 2021). However, less is known about how teaching practices changed during the shift to ERT (Gonçalves and Capucha, 2020; Gonzalez et al., 2020; Sunasee, 2020; Tartavulea et al., 2020). Scientific teaching (ST) practices are those approaches designed to emulate the practice of science and are supported by empirical evidence of teaching and learning (Handelsman et al., 2004, 2007). They include active learning, use of learning goals, inclusivity, incorporation of data analysis and experimental design, cognitive skill development, responsiveness to students, and use of reflection. ST practice usage is an indicator of the extent to which an instructor has implemented effective teaching practices in a course.

    Specific teaching practices that support the ST framework were identified in a taxonomy of observable ST practices (Couch et al., 2015), and the frequency with which those practices are implemented in an undergraduate science course can be estimated using the validated tool called the Measurement Instrument for Scientific Teaching (MIST; Durham et al., 2017). Factor analyses from MIST responses in the original study grouped the 37 measured ST practices into eight categories: active-learning strategies, learning goal use and feedback, inclusivity, responsiveness to students, experimental design and communication, data analysis and interpretation, cognitive skills, and course and self-reflection. A MIST composite score has a theoretical range of 0–100, but a practical range of 15–85 based on the survey structure. Subcategory scores ranged from 30 to 75 (Durham et al., 2017). While some previous work indicates that instructors may overestimate their use of research-based instructional strategies in self-report instruments when pressured to perform well, as when promotions or professional development evaluations are at stake, this may not be the universal case in low-pressure scenarios and when using quantitative or frequency-based scales rather than agree–disagree scales (Ebert-May et al., 2011; Weiman and Gilbert, 2014). Previous work by the developers of MIST triangulated the perspectives of students, instructors, and observers within the same courses and found a high degree of correlation between instructors and students (r = 0.68) and between instructors and observers (r = 0.59), which supports the use of MIST as an instructor self-report tool (Durham et al., 2018).

    We used the MIST to gather data about the frequency at which ST practices were used in a lecture-based course or within the lecture component of a course that includes a lab before the COVID-related shutdown, and we then compared those frequencies with teaching practice frequencies implemented after the transition to ERT in the Spring of 2020. Undergraduate faculty teaching biology-related courses at PUIs in the Midwest were identified as the target population. Because of the unexpected nature of the pandemic, data collection for the pre-ERT time frame was completed retrospectively and due to practical limitations could not include student input, so we necessarily limited our data collection to instructor surveys. Qualitative data collected from separate faculty interviews provided additional insight into what drove instructor decisions to alter teaching practices. A separate report has been published describing the challenges that teachers surveyed faced while implementing their courses within the context of ERT (Colclasure et al., 2021).

    CONCEPTUAL FRAMEWORK

    ST was born out of initiatives of the last several decades aimed at restructuring life science education around evidence-based practices that promote learning (American Association for the Advancement of Science [AAAS], 2011; Handelsman et al., 2004, 2007). There are three core components of ST, all of which have been heavily vetted in the literature: active learning, assessment, and inclusivity (Couch et al., 2015; Handelsman et al., 2007). Active learning is indicated by an environment where students are asked to engage with the concepts and topics at hand rather than passively consuming them (Crouch and Mazur, 2001; Prince, 2004; Michael, 2006; Wood, 2009; Osborn, 2010). Assessment is interwoven into ST practice to give both students and instructors indicators of learning progress during the course of a unit (formative assessment) and at its completion (summative assessment; Black and Wiliam, 1998; Tanner and Allen, 2004). Inclusive teaching practices are those that contribute to a classroom environment where barriers to student self-efficacy, including various sources of bias, are continually monitored and mitigated, thereby promoting a sense of belonging for every student (Dewsbury and Brame, 2019). These three “hubs” of ST function to create a space where learning can be expected to occur. In this model, the instructor creates a classroom environment that promotes belongingness and self-efficacy, provides relevant opportunities to practice and engage with the course content within that space, and uses multiple ways for both students and the teacher to assess learning and make adjustments throughout (Handelsman et al., 2007).

    RESEARCH PURPOSE AND OBJECTIVES

    In this study, we aimed to investigate, through the lens of ST, how the rapid shift to the ERT environment impacted teaching practices. We sought to survey instructors at PUIs, where we surmised the disruption of teaching practice would be broadly apparent. The strong correlation between student, instructor, and observer MIST composite scores supports the utility of gathering these data from instructors alone, who were also the most readily available source of data.

    The specific objectives that guided this study were:

    1. Determine the change in the extent that ST practices were used after an abrupt transition away from face-to-face instruction.

    2. Determine factors predicting the changes in the use of ST practices that were observed.

    METHODS

    Due to the exploratory nature of this research and the intention to use both quantitative and qualitative methods to explore our objectives, a mixed-methods research approach was deemed most appropriate (Creswell and Plano Clark, 2011). An explanatory sequential mixed-methods research design was selected (Creswell and Plano Clark, 2006). The first phase of the explanatory sequential design is the collection and analysis of quantitative data. The second phase uses qualitative methods and is based upon results from the quantitative phase. The collection and analysis of in-depth, follow-up qualitative data offer an explanation of the results seen in the quantitative phase.

    Identification of Population and Sample

    Undergraduate faculty teaching biology-related courses at PUIs in the Midwest were identified as the target population. However, data were also collected from respondents outside the target population and used for comparison. To fulfill an adequate sample size for our target population, a sampling frame was created through an exhaustive Web search for undergraduate faculty in biology-related disciplines at PUIs in the Midwest. Midwestern states of primary interest were further defined as states in subregion 4 (Slocum and Scholl, 2013) and included Iowa, Kansas, Missouri, Nebraska, North Dakota, and South Dakota. A total of 108 PUIs were identified from an existing list of PUIs in the United States (Slocum and Scholl, 2013). This list was generated using the National Science Foundation (NSF) definition, which includes institutions that grant baccalaureate degrees in NSF-supported fields, enroll more undergraduate students than graduate students, and award fewer than 10 doctoral degrees per year in NSF-supported areas (Slocum and Scholl, 2013). A Web search for biology faculty and corresponding email addresses at PUIs from this list within subregion 4 yielded a sampling frame of 590 faculty.

    Phase 1: Quantitative

    A digital survey was administered through Qualtrics for the collection of quantitative data. The survey included an instrument to identify respondents’ use of ST practices in an identified biology-related focal course before and after the abrupt transition to ERT. Additional data collected were respondents’ comfort with technology, focal course characteristics, institutional characteristics, and instructor characteristics and demographics.

    Scientific Teaching.

    The MIST (Durham et al., 2017) was modified for the purpose of this study to include both the respondents’ ST practices at the time of data collection (during ERT) and a retrospective pretest to identify respondents’ ST practices within the same course before ERT. Lam and Bengo (2003) define this method of reporting current teaching practices and earlier teaching practices as the “post + retrospective pretest method.” The retrospective design encourages greater respondent precision and awareness in measuring change compared with the traditional pretest–posttest design (Cantrell, 2010; Little et al., 2019) and is frequently used to assess change in educational research (Eeds et al., 2014; Ahmad et al., 2018; Young and Kallemeyn, 2019). Post hoc scale reliability analysis for the MIST used in this study yielded a Cronbach’s alpha of 0.878 for instruction before ERT and 0.883 for instruction during ERT, supporting the reliability of both scales (Field, 2013).
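
    The reliability coefficient reported above can be computed directly from item-level responses. The following is a minimal illustrative sketch in Python (the study’s analysis was run in SPSS, not Python); the data frame and column names are hypothetical placeholders.

        import pandas as pd

        def cronbach_alpha(items: pd.DataFrame) -> float:
            """Cronbach's alpha for one scale (rows = respondents, columns = items)."""
            items = items.dropna()
            k = items.shape[1]                              # number of items
            item_variances = items.var(axis=0, ddof=1)      # variance of each item
            total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
            return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

        # Hypothetical usage with separate item blocks for the two administrations:
        # responses = pd.read_csv("mist_survey.csv")
        # alpha_pre_ert    = cronbach_alpha(responses.filter(like="pre_"))     # reported as 0.878
        # alpha_during_ert = cronbach_alpha(responses.filter(like="during_"))  # reported as 0.883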

    Technology Comfort Scale.

    A scale was created to assess respondents’ comfort with technology in teaching. The researcher-developed instrument was reviewed for face validity based on the “Risk-Taking Behaviors and Comfort with Technology” section of the Teacher Technology Integration Survey (Reinhart and Banister, 2009). It included six items measured using a five-point, Likert-type scale (1 = strongly disagree to 5 = strongly agree). The six items were: 1) I am confident in my ability to incorporate technology in my teaching; 2) I typically avoid using technology in the classroom; 3) I believe using technology in the classroom improves teaching and learning; 4) I seek to integrate new technologies when I normally teach; 5) I find it difficult to implement technology in my teaching; and 6) I believe students enjoy learning through the use of technology. Negatively worded statements were reverse coded in the analysis. The technology comfort construct was created by averaging participant responses on the six items; a post hoc scale reliability analysis yielded a Cronbach’s alpha of 0.81, indicating acceptable reliability.
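
    To illustrate the scoring of this construct, the hedged sketch below reverse codes the two negatively worded items (items 2 and 5) and averages the six items for each respondent; the item column names are hypothetical.

        import pandas as pd

        # Hypothetical column names for the six Likert items (scored 1-5).
        POSITIVE_ITEMS = ["tech_q1", "tech_q3", "tech_q4", "tech_q6"]
        NEGATIVE_ITEMS = ["tech_q2", "tech_q5"]  # negatively worded; reverse coded below

        def technology_comfort(responses: pd.DataFrame) -> pd.Series:
            """Return each respondent's technology comfort score (mean of the six items)."""
            items = responses[POSITIVE_ITEMS + NEGATIVE_ITEMS].copy()
            items[NEGATIVE_ITEMS] = 6 - items[NEGATIVE_ITEMS]  # reverse code on a 1-5 scale
            return items.mean(axis=1)

        # Hypothetical usage:
        # responses["tech_comfort"] = technology_comfort(responses)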

    Demographic, Course, and Institutional Variables.

    Demographic variables collected and analyzed were gender, age, and ethnicity. Furthermore, respondents were asked to indicate years teaching undergraduate science, technology, engineering, and mathematics (STEM) courses, academic position/rank, institution type, and whether they had previously taught remotely. Additional questions related to each respondent’s focal course included how many years the instructor had taught the focal course, current student enrollment, degree of enjoyment teaching the course, the course level, and remote instruction modality (synchronous, asynchronous, blended). We defined asynchronous as “students completed all work on their own time, given a specific time frame”; synchronous as “live class sessions occurred at set schedules and time frames”; and blended as “live class sessions were conducted, but not to the same extent as original face-to-face modality, and included additional work students completed on their own in lieu of less online meeting times.”

    Course Evaluation Data.

    As an optional follow-up, survey participants were given the opportunity to provide evaluation data on their focal course. The purpose of gathering the evaluation data was to provide a source of triangulation between responses on the MIST survey and interviews that could provide a student perspective on the changes in ST that were observed. Participants who chose to provide evaluation data were directed to a separate form where they provided informed consent. They provided the title and number of the focal course and uploaded their evaluations in the file format of their choosing. A database containing an entry for each respondent included information on the focal course, the total enrollment of the course, and the number of respondents to the evaluation. Open-ended responses were screened for any mention of the transition to a remote/online setting.

    Survey Distribution.

    Electronic invitations were sent to all members of the sampling frame (n = 590) in late April and early May, toward the end of the typical Spring 2020 semester. Survey distribution followed the tailored design method (Dillman et al., 2014), and personalized email invitations were sent with a link to complete the survey. Follow-up invitations were sent to noncompleters once responses to the initial invitation stopped arriving. During the same period, open invitations to participate in the study were sent to audiences beyond the target population using common postsecondary science teaching Listservs. Community college biology instructors (n = 38) within the target region (Midwest subregion 4) were also sent personalized invitations.

    Quantitative Data Analysis.

    SPSS Statistics v. 25 was used to analyze data. Descriptive statistics and paired-samples t tests were used to address objective 1. An independent-samples t test revealed no difference between respondents in the target population and the other respondents (see Results), so we combined the data for analysis to use the complete data set. A stepwise regression was used to address objective 2. The quantitative results informed the development of phase 2 of the study, the qualitative phase, which was designed to offer an in-depth explanation of observations from the quantitative phase and was used to further address objective 2.
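
    As a concrete illustration of the two t tests described above, the sketch below uses scipy rather than SPSS; the column names and the target-population flag are assumptions for demonstration only.

        import pandas as pd
        from scipy import stats

        def objective_tests(df: pd.DataFrame) -> None:
            # Paired-samples t test (objective 1): did composite MIST scores change
            # within instructors after the transition to ERT?
            t_paired, p_paired = stats.ttest_rel(df["mist_f2f"], df["mist_remote"])
            print(f"paired t = {t_paired:.2f}, p = {p_paired:.3g}")

            # Independent-samples t test: do target-population respondents differ
            # from respondents outside the target population?
            in_target = df[df["in_target_population"] == 1]
            outside = df[df["in_target_population"] == 0]
            t_ind, p_ind = stats.ttest_ind(in_target["mist_remote"], outside["mist_remote"])
            print(f"independent t = {t_ind:.2f}, p = {p_ind:.3g}")

        # Hypothetical usage:
        # objective_tests(pd.read_csv("mist_scores.csv"))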

    Phase 2: Qualitative

    To fulfill the second phase of the explanatory sequential mixed-methods research design (Creswell and Plano Clark, 2006), follow-up, one-on-one semistructured interviews were deemed most appropriate to elicit thick and rich data (Morse, 2015) to explain quantitative results. One-on-one interviews were selected due to the potentially high sensitivity and anticipated variation in experiences of individuals teaching during a pandemic. The semistructured format was selected to allow us to modify questions based upon each participant’s survey response and to provide us with flexibility in probing and validating the meaning of participants’ answers (Barriball, 1994). Due to limitations surrounding physical proximity and COVID-19, and for ease of data collection, interviews were conducted over the telephone, which has been shown to be appropriate for semistructured interviewing methods (Cachia and Millward, 2011).

    Recruitment of Participants.

    In the quantitative phase of the study, respondents were asked if they would be willing to complete an incentivized, follow-up interview regarding their survey responses. Of 133 survey respondents, 59 indicated their willingness to be interviewed. Participation was narrowed to only include the target population (respondents teaching at a PUI in subregion 4 of the Midwest). Finally, in order to recruit an even distribution of respondents’ survey-response characteristics, a cluster analysis was employed. Interviewees were selected using purposeful criterion sampling (Suri, 2011) across each of six clusters, with one to four interviewees representing each cluster (Figure 1).

    FIGURE 1.

    FIGURE 1. Scatter plot of post-MIST vs. pre-MIST scores. Hierarchical clustering analysis was conducted on the full data set. A minimum of five individuals/cluster was set as a cutoff. Of the total 133 individuals, 123 fit into the six clusters indicated. Fourteen candidates were randomly selected. The number of individuals selected from each cluster was proportional to its size.
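
    The selection procedure summarized in Figure 1 can be approximated as follows. This is only an illustrative sketch: the linkage method, distance metric, and column names are not reported in the study and are assumptions here.

        import pandas as pd
        from scipy.cluster.hierarchy import fcluster, linkage

        def select_interview_candidates(df: pd.DataFrame, n_candidates: int = 14,
                                        n_clusters: int = 12, min_size: int = 5,
                                        seed: int = 0) -> pd.DataFrame:
            """Cluster instructors on pre/post MIST scores and sample candidates in
            proportion to cluster size, keeping only clusters with >= min_size members."""
            scores = df[["mist_f2f", "mist_remote"]].to_numpy()
            tree = linkage(scores, method="ward")  # linkage method is an assumption
            df = df.assign(cluster=fcluster(tree, t=n_clusters, criterion="maxclust"))

            sizes = df["cluster"].value_counts()
            eligible = df[df["cluster"].isin(sizes[sizes >= min_size].index)]

            picks = []
            for _, group in eligible.groupby("cluster"):
                k = max(1, round(n_candidates * len(group) / len(eligible)))
                picks.append(group.sample(n=min(k, len(group)), random_state=seed))
            return pd.concat(picks)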

    Interview Guide.

    A semistructured interview guide was created and used as the data-collection instrument for the qualitative phase (Supplemental Material 9). The interview guide consisted of three content areas, the first of which was used for the purpose of this mixed-methods study, and the following two for an independent qualitative study. The first content area consisted of 14 to 20 open-ended questions pertaining to participants’ survey responses. The open-ended questions aligned directly to the MIST instrument, and the wording of each question was modified according to participants’ responses on the survey. For example, “You answered that the average percent of class time during which students were asked to work in groups decreased from 61% to 30% after the course modality was transitioned online. Please describe why you chose that approach.” At the end of the interview, the moderator provided a summary and used member checking to ensure the credibility, accuracy, and completeness of the interview (Lincoln and Guba, 1985).

    Data Collection and Analysis.

    The Bureau of Sociological Research (BOSR) at the University of Nebraska–Lincoln assisted with the interview process. A BOSR staff member with extensive experience conducting interviews contacted, scheduled, and interviewed each participant. Interviews were audio-recorded, and recordings were transcribed for later analysis.

    In the second phase of this explanatory sequential mixed-methods study, the interview data were analyzed using several iterative rounds of qualitative analysis. Because the semistructured interview was set up to ask about each survey question pertaining to a particular ST practice, structural coding was used to organize interview responses by individual question, and then further subgrouped by the pattern of change with respect to each question (e.g., did the interviewee increase, decrease, or stay the same in teaching practice frequency after the ERT transition; Saldaña, 2013). In vivo coding was used to identify specific excerpts that represented the heart of interviewee explanations (Saldaña, 2013). We then performed several rounds of pattern coding to identify meaningful themes across explanations for each pattern of change within each teaching practice (Saldaña, 2013). In each round of coding, two research team members (B.C. and A.M.M.) coded each response independently, and then the whole research team met to discuss codes and themes and to establish consensus for each theme of responses identified by the pattern-coding steps. Themes related to and offering explanations for changes in respondents’ MIST scores were documented.

    IRB Status

    This work was approved via the expedited review process by the Doane University Institutional Review Board (IRB), approval code S20 007 DC IRB HS.

    RESULTS

    Respondent Characteristics

    A total of 72 responses were received from the sampling frame of 590 faculty in the target population, biology faculty members teaching at PUIs within six midwestern states. This indicates a response rate of 12.2%. Of the 72 responses, 57 were completed in full and used during data analysis, indicating a survey completion rate of 79.2%. Seventy-six additional responses were received through open invitations using postsecondary science teaching Listservs and personal invitations to community college biology instructors within the target region. Based on all survey respondents, 66 responses came from Midwest subregion 4, representing 49.6% of the total sample. Sixty-one of all responses (45.9% of the total sample) were from faculty teaching at PUIs. A total of 133 faculty teaching biology-related courses completed the quantitative phase of the study. A majority of faculty were female (n = 84; 63.2%). Nine respondents (6.8%) identified as an underrepresented minority (URM). Forty-three (32.3%) faculty indicated being between 40 and 49 years of age, followed by 23.3% (n = 31) between 50 and 59 years of age, and 21.1% (n = 28) between 30 and 39 years of age. Approximately 60% of respondents indicated having a tenured academic rank, 36.8% (n = 49) as a professor and 24.8% (n = 33) as an associate professor. Sixty-seven percent (n = 90) indicated having previous experience with remote teaching, and more than half (n = 90; 67.7%) had more than 10 years of experience teaching STEM at the undergraduate level. Table 1 illustrates respondent demographics for the quantitative phase of the study.

    TABLE 1. Characteristics of survey respondents (n = 133)

    % (f)
    Gender
     Female                              63.2 (84)
     Male                                34.6 (46)
     Other/non-response                   2.3 (3)
    Ethnicity
     URMa                                 6.8 (9)
     Non-URM                             92.5 (123)
     Other/non-response                   0.8 (1)
    Age
     20–29                                3.0 (4)
     30–39                               21.1 (28)
     40–49                               32.3 (43)
     50–59                               23.3 (31)
     60–69                               14.3 (19)
     70 or over                           0.8 (1)
     Other/non-response                   5.3 (7)
    Position
     Adjunct instructor/lecturer          6.0 (8)
     Instructor/lecturer, contract       11.3 (15)
     Instructor/lecturer, tenure track    2.3 (3)
     Assistant professor                 18.8 (25)
     Associate professor                 24.8 (33)
     Professor                           36.8 (49)
    Previous remote teaching experience
     Yes                                 67.2 (90)
     No                                  29.9 (40)
     Non-response                         3.0 (4)
    Years teaching undergraduate STEM
     <1
     1–2                                  6.0 (8)
     3–5                                  7.5 (10)
     6–10                                18.8 (25)
     11–15                               26.3 (35)
     16–20                               12.8 (17)
     >20                                 28.6 (38)

    aURM, underrepresented minority (African American, Hispanic/Latinx, Filipino, Native American/Alaskan Native).

    A majority (n = 85; 63.9%) of respondents indicated teaching at a Midwest institution. Eighty respondents (60.2%) were teaching at a private institution, and 82% (n = 109) were teaching at a PUI. Fourteen respondents (10.5%) were teaching at a research institution, and 10 (7.5%) were teaching at a community college. Table 2 illustrates the characteristics of respondents’ institutions.

    TABLE 2. Description of respondents’ institutions

    % (f)
    Regiona
     Midwestb                63.9 (85)
     Northeast                8.3 (11)
     South                   11.3 (15)
     West                    13.5 (18)
     Canada                   3.0 (4)
    Institution control
     Private                 60.2 (80)
     Public                  39.8 (53)
    Institution type
     PUI                     82.0 (109)
     Research institution    10.5 (14)
     Community college        7.5 (10)

    aRegions as defined by Slocum and Scholl (2013).

    bSixty-six of 85 Midwest respondents were from subregion 4.

    Respondents were asked to identify a focal course and to use that focal course to complete survey questions. Characteristics of respondents’ focal courses were collected. A plurality of respondents (n = 57; 42.5%) completed the survey based on a 100-level focal course. Thirty-two (23.9%) respondents indicated a 200-level focal course, and 32 (23.9%) indicated a 300-level course. Nearly half (n = 66, 49.3%) of respondents taught remotely through an asynchronous modality, while 22 (16.4%) taught through a synchronous modality, and 46 (34.3%) used a blended approach. Approximately 37% (n = 49) of respondents indicated teaching the identified focal course for more than 10 years. Table 3 illustrates the characteristics of respondents’ focal courses.

    TABLE 3. Description of respondents’ focal courses

    % (f)
    Course level
     100                     42.5 (56)
     200                     23.9 (32)
     300                     23.9 (32)
     400                      9.0 (12)
     Other                    0.1 (1)
    Instructional approach in remote modality
     Asynchronous            49.3 (65)
     Synchronous             16.4 (22)
     Blended                 34.3 (46)
    Years respondent has been teaching focal course
     <1                       5.3 (7)
     1–2                     12.0 (16)
     3–5                     21.1 (28)
     6–10                    24.8 (33)
     11–15                   16.5 (22)
     16–20                    5.3 (7)
     >20                     15.0 (20)

    For the qualitative phase of the study, a total of 14 survey participants were purposefully selected from a pool of volunteers to complete a follow-up interview. All interview participants were teaching at a midwestern PUI. Interview participants were purposely selected to have varying relative MIST scores. Hierarchical clustering analysis was conducted using instructors’ composite MIST scores before and after the modality transition. Twelve clusters were identified, and the six largest clusters were used to select participants to interview during the qualitative phase (see Figure 1). Participant demographics, relative MIST and technology comfort scores, and total length of the interview are illustrated in Table 4.

    TABLE 4. Interview participant characteristics

    Participant no.  Gender  Academic rank        MIST face-to-face score  MIST remote score  Technology comfort score  Interview length (minutes)
    1                Female  Assistant professor  48.14                    45.25              2.83                      48
    2                Male    Assistant professor  49.57                    41.55              2.83                      79
    3                Female  Assistant professor  50.99                    42.48              3.83                      109
    4                Male    Professor            51.01                    45.50              4.5                       80
    5                Male    Professor            34.30                    27.82              3.67                      89
    6                Female  Professor            31.57                    28.08              3.33                      84
    7                Female  Associate professor  42.24                    39.09              3.50                      68
    8                Female  Professor            38.49                    19.34              4.33                      64
    9                Male    Assistant professor  42.87                    24.19              5                         36
    10               Male    Professor            56.88                    38.80              2.83                      57
    11               Male    Associate professor  49.42                    26.90              3.5                       79
    12               Female  Associate professor  50.07                    12.81              2.67                      93
    13               Male    Assistant professor  66.74                    48.99              4.17                      80
    14               Female  Associate professor  65.69                    47.74              3.67                      70

    Overall Change of ST Practices and Contributing Factors

    Independent-samples t tests were conducted to determine whether differences existed between composite MIST scores for participants within and outside the target population. No differences were found between the two groups’ face-to-face MIST scores, t(132) = −1.426, p = 0.156, or remote scores, t(132) = −0.378, p = 0.706. Institution type was not found to be a significant predictor of participants’ face-to-face or remote MIST scores, which is explained further in the regression analysis (Table 5). Because of these observations, the full data set representing 133 participants was used in the analysis of the MIST score data.

    TABLE 5. Stepwise regression summary for variables predicting MIST remote modality scores (n = 133)

    Model  R2     Adjusted R2  Predictor variable         B (coefficient)  SE B   β (standardized coefficient)  t        p value
    1      0.439  0.435                                                                                                  <0.001
                               (Constant)                  5.041           3.319                                  1.519   0.131
                               MIST face-to-face score     0.672           0.066  0.663                          10.164  <0.001
    2      0.475  0.467                                                                                                  <0.001
                               (Constant)                  3.863           3.249                                  1.189   0.237
                               MIST face-to-face score     0.647           0.065  0.638                           9.987  <0.001
                               Remote modality type        4.720           1.586  0.190                           2.976   0.003
    3      0.494  0.482                                                                                                  <0.001
                               (Constant)                 −4.213           4.849                                 −0.869   0.387
                               MIST face-to-face score     0.630           0.064  0.621                           9.799  <0.001
                               Remote modality type        4.524           1.565  0.182                           2.890   0.005
                               Technology comfort score    2.394           1.080  0.140                           2.217   0.028

    Instructors’ Overall Use of ST Practices.

    Significant changes in ST practices were measured by comparing respondents’ composite face-to-face and remote MIST scores. The mean composite MIST score for face-to-face instruction was 48.66 (SD = 12.28), and the mean composite MIST score for remote instruction was 37.75 (SD = 12.46). The 10.91 mean decrease was significant (t = 12.42, p < 0.001), indicating an overall decreased use of ST practices during ERT (Figure 2).

    FIGURE 2.

    FIGURE 2. MIST composite score distributions in face-to-face (F2F) and remote modalities. Central bars represent mean scores, boxes represent inner quartiles, and whiskers represent the 5th and 95th percentile values. n = 133 survey respondents. Open box represents face-to-face modality, and filled box represents remote modality. ***p < 0.001.

    Variables Predicting Instructors’ Overall Use of ST Practice.

    We used stepwise regression to address objective 2 due to the exploratory nature of this study (Field, 2013). The dependent variable in our regression model was the instructor’s composite MIST remote score, a continuous variable. Predictor variables included three continuous variables (the number of students enrolled in the focal course, instructor MIST face-to-face score, and instructor technology comfort score) and seven dummy-coded categorical variables: region of institution (Midwest = 1, non-Midwest = 0), institutional type (PUI = 1, community college or research institution = 0), focal course level (100-level = 1, all other levels = 0), instructor academic rank (tenured = 1, nontenured = 0), instructor gender (female = 1, male = 0), instructor previous remote teaching experience (yes = 1, no = 0), and modality of remote instruction (synchronous or blended = 1, asynchronous = 0).
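
    As an illustration of this coding scheme, a hedged sketch follows; the raw column names and value labels are hypothetical stand-ins for the survey variables.

        import pandas as pd

        def code_predictors(df: pd.DataFrame) -> pd.DataFrame:
            """Assemble the continuous and dummy-coded predictors described above."""
            return pd.DataFrame({
                "enrollment":       df["enrollment"],
                "mist_f2f":         df["mist_f2f"],
                "tech_comfort":     df["tech_comfort"],
                "midwest":          (df["region"] == "Midwest").astype(int),
                "pui":              (df["institution_type"] == "PUI").astype(int),
                "course_100_level": (df["course_level"] == 100).astype(int),
                "tenured":          (df["tenure_status"] == "Tenured").astype(int),
                "female":           (df["gender"] == "Female").astype(int),
                "prior_remote":     (df["prior_remote_teaching"] == "Yes").astype(int),
                "sync_or_blended":  df["remote_modality"].isin(["synchronous", "blended"]).astype(int),
            })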

    Before running the stepwise regression, we tested the assumptions of regression. Skewness and kurtosis of the dependent variable were measured; both fell within ±2, indicating normality. A Shapiro-Wilk test was conducted to further assess normality of the dependent variable and was nonsignificant, supporting normality. Regression analysis assumes no multicollinearity of data. A correlation matrix was computed using Pearson’s bivariate correlations among all predictor variables. The highest correlation coefficient magnitude between two predictor variables was 0.396, below the 0.80 threshold indicating multicollinearity (Field, 2013). Furthermore, variance inflation factors (VIFs) were determined; all VIF values were lower than 10, indicating no threat of multicollinearity. Finally, a scatter plot of residuals versus predicted values was created to check for homoscedasticity. No patterns of distribution were observed, indicating the data were homoscedastic.
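
    The assumption checks reported above can be illustrated with the following sketch (scipy and statsmodels stand in for SPSS; column names are placeholders). A nonsignificant Shapiro-Wilk result, predictor correlations below 0.80, and VIF values below 10 would correspond to the criteria described in the text.

        import pandas as pd
        import statsmodels.api as sm
        from scipy import stats
        from statsmodels.stats.outliers_influence import variance_inflation_factor

        def check_regression_assumptions(df: pd.DataFrame, outcome: str, predictors: list) -> None:
            y = df[outcome]
            print("skewness:", stats.skew(y))                  # want within +/- 2
            print("kurtosis:", stats.kurtosis(y))              # want within +/- 2
            print("Shapiro-Wilk p:", stats.shapiro(y).pvalue)  # p > 0.05 supports normality

            X = df[predictors]
            print(X.corr().round(3))                           # flag any |r| >= 0.80
            X_const = sm.add_constant(X)
            for i, name in enumerate(X_const.columns[1:], start=1):
                print(name, "VIF:", round(variance_inflation_factor(X_const.values, i), 2))  # want < 10
            # Homoscedasticity would be checked by plotting residuals vs. predicted
            # values from the fitted model (not shown here).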

    The stepwise regression allowed significant (p < 0.05) predictor variables to be added to the model and nonsignificant predictor variables to be removed from the model (Table 5). The stepwise regression produced three models. The first model (model 1) included instructors’ MIST face-to-face scores as a significant predictor variable; this single predictor explained 43.5% of the variation in MIST remote scores. Remote modality type was added as a significant predictor variable in model 2, increasing the explained variance to 47.5%. In the final model generated, model 3, instructor technology comfort score was added. With the addition of this predictor variable, the final model (MIST face-to-face score + remote modality type + technology comfort score) explained 49.4% (R2 = 0.494) of the variance in instructors’ MIST remote scores, indicating a large effect size of the independent variables (Cohen, 1988). Student enrollment in the focal course, region of institution, institutional type, focal course level, instructor academic rank, instructor gender, and instructor previous remote teaching experience were not significant predictors and were excluded from the model. The model was significant at p < 0.001. Table 5 provides the stepwise regression summary for variables predicting MIST remote scores.
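
    For readers unfamiliar with the procedure, the following minimal forward-selection sketch approximates the entry step of stepwise regression (a candidate predictor enters the model when its p value is below 0.05). SPSS’s stepwise algorithm also removes previously entered predictors whose p values rise above a removal threshold; that step is omitted here for brevity, and the function name and data names are our own.

        import pandas as pd
        import statsmodels.api as sm

        def forward_stepwise(y: pd.Series, X: pd.DataFrame, alpha: float = 0.05) -> list:
            """Greedily add the predictor with the smallest p value until none beat alpha."""
            selected = []
            while True:
                remaining = [c for c in X.columns if c not in selected]
                pvals = {}
                for c in remaining:
                    fit = sm.OLS(y, sm.add_constant(X[selected + [c]])).fit()
                    pvals[c] = fit.pvalues[c]
                if not pvals:
                    return selected
                best = min(pvals, key=pvals.get)
                if pvals[best] >= alpha:
                    return selected
                selected.append(best)

        # Hypothetical usage with the coded predictors from the earlier sketch:
        # order_entered = forward_stepwise(df["mist_remote"], code_predictors(df))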

    In the final model (MIST face-to-face scores + remote modality type + technology comfort score) the MIST face-to-face score predictor variable had the largest positive impact on instructors’ MIST remote scores. For every 1-point increase in an instructor’s MIST face-to-face score, a 0.630 increase (B = 0.630, β = 0.621, p < 0.001) in MIST remote score was predicted. These findings suggest that instructors who implement higher levels of ST practices during face-to-face instruction will implement higher levels of ST practice during ERT.

    In the final model, instructors who used a synchronous or blended modality were predicted to have a MIST remote score 4.52 points higher (B = 4.524, β = 0.182, p < 0.05), illustrating that instructors who used synchronous or blended instructional techniques during remote instruction were more likely to implement higher levels of ST. Finally, instructors’ comfort with using technology, as measured by our technology comfort scale, was a significant predictor variable. Instructors who held more positive beliefs about their use of technology in the classroom were predicted to have higher MIST remote scores. For every 1-point increase on the five-point Likert scale, instructors’ MIST remote scores were predicted to increase by 2.39 points (B = 2.394, β = 0.140, p < 0.05).

    Changes in ST Practice Subcategories and Qualitative Findings

    Average MIST composite scores and subcategory scores as well as variation in these scores in the face-to-face condition were similar to those reported previously in the literature (Durham et al., 2017). A significant decrease was found for each MIST subcategory score during ERT (Figure 3 and Table 6). Results for changes in MIST subcategory scores are described in the following sections. Qualitative data collected from participant interviews were used to provide an explanation of change and are reported. The full qualitative analysis can be found in Supplemental Materials 1–8.

    FIGURE 3.

    FIGURE 3. Score distributions for the eight MIST subcategories in face-to-face and remote modalities. Central bars represent subcategory mean scores, boxes represent inner quartiles, and whiskers represent the 5th and 95th percentile values. n = 133 survey respondents. Open boxes represent face-to-face modality, and filled boxes represent remote modality. *p < 0.05; ***p < 0.001.

    TABLE 6. Respondents’ MIST subcategory comparisons before and after modality transition (n = 133)

                           Face-to-face       Remote             Paired correlation    Paired t test
                           Mean     SD        Mean     SD        R       p             t        p
    Composite score        48.66    12.28     37.75    12.46     0.66    <0.001        12.42    <0.001
    Subscores
     Active learning       41.83    16.63     20.71    15.00     0.45    <0.001        14.68    <0.001
     Learning goals        63.11    16.65     52.72    17.77     0.57    <0.001         7.53    <0.001
     Inclusivity           65.42    25.35     62.73    25.55     0.94    <0.001         3.65    <0.001
     Responsiveness        75.75    14.72     55.25    19.60     0.46    <0.001        12.95    <0.001
     Experimental design   33.53    19.95     26.00    19.47     0.81    <0.001         7.18    <0.001
     Data analysis         38.98    20.71     30.80    20.44     0.80    <0.001         7.27    <0.001
     Cognitive skills      48.73    18.54     41.29    19.57     0.78    <0.001         6.80    <0.001
     Reflection            32.37    21.67     29.32    20.80     0.70    <0.001         2.13     0.035

    Active Learning.

    ST practices related to active learning decreased the most of all MIST subcategory scores. Active-learning practice scores decreased from a mean of 41.83 during face-to-face instruction to a mean of 20.71 during ERT (Figure 3). The 21.12 decrease in mean score was statistically significant (t = 14.68, p < 0.001). Interview participants provided a variety of explanations for why their percent of active class time or percent of time spent working in groups changed or stayed the same after the switch to the online modality (Supplemental Material 1). Because the interview participants provided extensive commentary on the active-learning practices, we have separated the interview results into three sections: decreased, unchanged, and increased active-learning scores.

    Decreased Active-Learning Scores.

    Several interviewees who decreased their use of active-learning practices cited limitations of the asynchronous online approach; in some cases this meant prerecorded lecture videos with no interactive aspects, dropped team-based learning activities, and in-class discussions replaced with discussion board posts. Interviewee 8 mentioned:

    “I did voice over PowerPoints and as I started developing them, I realized wow, I cannot ask my questions like I did before. I would constantly be asking them questions for review [face-to-face] and I couldn’t do that [remotely] because there’s nobody to respond back to me so I had to adjust my lecturing style.”

    Eight interviewees who had students working in groups during the face-to-face modality did not feel that group work was compatible with asynchronous formats. In some cases, interviewees simply did not know how to implement groups in their learning management systems or videoconferencing programs, and they did not have the time to figure out how to do so given the rapid transition. Other interviewees tried to implement groups, experienced technical difficulties, and then abandoned the practices. Interviewee 4 described this experience from live Zoom class sessions:

    “The first time I tried [breakout rooms] it was a disaster. I lost a lot of students…. At that point in time, I decided, I’d go away from the group activities for the rest of the semester and just focus on what I could achieve with the programs that I had at my disposal.”

    Other interviewees who tried to implement group work found that students were not responsive in breakout rooms or in shared asynchronous documents, so those instructors decreased or dropped the group approaches. Interviewee 6 mentioned:

    “I even said … why don’t you put it in a Google doc or something and work on it together and just turn in one document. But nobody did that and so I kind of took that as them not being interested or it not being worth it to them to bother with whatever logistics they were dealing with for group work and I didn’t pursue it any further.”

    Two interviewees indicated that their sensitivity and desire to maintain fairness regarding students’ home or personal situations led them to drop group work, because they had various students who had unknown Internet accessibility or unknown personal or home situations; some groups may have members from different time zones; and some students had to get a job to make ends meet, so various work schedules among students would make group work prohibitively difficult. Interviewee 9 said: “I’m all about fairness in my class and so I can’t place a demand on one student and assume that they have really good Internet connections or something so I have to hold them to the minimum standard.”

    No Change in Active-Learning Scores.

    Interviewees who maintained the same level of active learning primarily aimed to provide a similar student experience across modalities. Instructors using the synchronous format continued live lecture sessions with occasional check-in questions in the remote format, which was possible because these instructors had reliable student participation for in-class questions in both modalities. Interviewee 4 mentioned the motivation was to ensure consistency and reduce students’ shock with the transition: “When I did the online component, because the transition from face-to-face to online was so sudden, I tried to keep it as similar as possible for students, so they didn’t have any shock.”

    Asynchronous format instructors also wanted to maintain similar course sessions, so they asked students to pause prerecorded lecture videos to reflect on or answer questions that would have been discussed in face-to-face class sessions, although the instructors acknowledged that students were not held accountable for responses, as explained by Interviewee 2:

    “I still asked them to please pause and consider how they would respond to these things to go through notes and whatever else. They weren’t working with another person anymore; it was still individual and there was no way of knowing if they were actually doing that.”

    Increased Active-Learning Scores.

    Interviewees who increased active-learning scores during ERT indicated several different reasons for the increase. One instructor noted that online resources that were temporarily provided for free allowed greater access to higher-quality materials that were used to engage students. In the synchronous format, one instructor cut out the lead-off lecture portion of the course entirely and put students directly into breakout rooms when class sessions began. Individuals using asynchronous formats mentioned that the prerecorded lecture videos they provided to students were shorter than face-to-face class sessions, and some instructors removed lectures entirely and only provided lecture slides, shared Google Docs, or other activities that required students to engage with the material or turn in an assignment for credit. Interviewee 1 described: “They had time to read through my lectures and then post questions to me in the Google doc or email me questions. So, it became a much more student responsibility type format.”

    One interviewee who used an asynchronous course format increased the amount of group work in the remote modality in order to maintain a sense of community among the students. Interviewee 7 explained:

    “I wanted to keep that up since they weren’t seeing each other anymore. I thought that since the community was still important and so, I made groups in Canvas for them … and then pretty much anything that they did for the class I had them work on in those groups … trying to simulate how it would have been in the classroom, and still trying to foster that community.”

    One interviewee also mentioned that their increase in active-learning score was an artifact of the survey design: most of their activity in the face-to-face modality was completed in the lab portion of the course, so those activities were not included in the survey.

    Learning Goals and Feedback.

    The calculated average for the learning goals and feedback subscale was 63.1 during face-to-face instruction and 52.72 during remote instruction (Figure 3). The 10.38 decrease was significant (t = 7.53, p < 0.001). Interviewees explained their approaches to changing or maintaining the specificity of learning goals and frequencies of feedback provided to students with practical, logistical, and personal value reasoning (Supplemental Table 2). Those who decreased the specificity of their learning goals in the remote modality explained that they intended to streamline course material for students by sacrificing depth for breadth, in part because the online or asynchronous delivery mode made it difficult to get depth or because there was not enough time to get in depth on course topics. Interviewee 5 described: “[I know I am] not going to get this covered and [I am] not going to get that covered. How can I streamline the material so that I can hit more topics but maybe less in depth?”

    Several interviewees who maintained the same specificity of learning goals cited practical reasons such as their learning goals were already well established for the course, they did not have enough preparation time to change their existing learning goals, or they wanted to continue to allow students to prepare for exams. Other interviewees indicated that maintaining consistency was of high value to them personally or that they did not think students were concerned with the learning goals.

    Those who increased learning goal specificity after the ERT transition explained that it was to provide more structure and specific points of focus for students and that it was important as a justification for why they were asking students to do the activity or course work.

    Interviewees were also asked to explain why the amount of feedback they provided to students changed or did not change after the transition to ERT. Those who decreased in the amount of feedback listed a variety of reasons for the change.

    Interviewees citing time constraints and stress-level barriers indicated that they did not have enough time to provide valuable feedback and they, as instructors, felt overwhelmed and were operating in “survival mode.” Interviewee 6 explained:

    “The amount of feedback decreased largely because I was just so overwhelmed with the amount of material that was coming in. Instead of an in-class discussion where they might’ve just gotten participation points for being there and speaking up and that kind of thing, all of a sudden they’re having to type something up and send it to me and I’m trying to read all of this and grade all of this and I couldn’t handle it … by no means do I think that was ideal but it was the only way I could survive.”

    Interviewees noted that limitations of both remote modalities reduced their ability to provide feedback. Those using synchronous formats said it was difficult to give feedback in real time during synchronous sessions, and those using an asynchronous format thought it was difficult or not possible to provide useful feedback given the time lag between turning in assignments and sending feedback out.

    Interviewees also mentioned logistics barriers that led to a decrease in the amount of feedback. One interviewee mentioned that expectations of students were established early in the semester. Another interviewee who used synchronous class sessions said that instructors cannot monitor breakout rooms well without awkwardly butting in and disrupting the student workflow. Some interviewees said they did not know how to use the feedback tools in their learning management systems, and others said their grading system was very time-consuming. Interviewee 1 described:

    “Transitioning to remote, the learning management system was very clunky and caused some real delays. You know, it took five clicks to see an answer for a question so by the time grading was done, there wasn’t much time for feedback.”

    Interviewees also explained that the format of questions and assignments changed with ERT from primarily free response and essay to multiple-choice questions so that grading was easier and automated whenever possible. Others went in the opposite direction and made their exams harder to take, which meant they were more time-consuming to grade. Finally, one interviewee mentioned that students were not as engaged in ERT compared with the face-to-face modality, so they did not ask for feedback.

    A few interviewees who kept the amount of feedback the same across modalities reported that this was possible, because all student assignments were submitted on the learning management system regardless of modality, and because the interviewee wanted to encourage students to do a better job in the remote modality.

    Interviewees who increased the amount of feedback provided to students explained personal value and logistical reasons for the increases. Interviewee 7 thought it was important for students to know specifics of grading:

    “When it went online, they never had any [instructor] circulating in the class or I never went over a question with the right and wrong answers in class anymore. All they got really was what they saw and so I was just trying to make it very clear to them why I was taking off points and what they missed or whatever those misconceptions might have been.”

    Other interviewees mentioned that they increased feedback because they had less group work in the remote format and felt they needed to provide more feedback to individuals; because they had no chance to chat with students in person and knew that all students would receive before the exam was feedback comments from the instructor; and because the instructor had more time at home than in the office, as multiple on-campus obligations were minimized or removed when the remote modality began.

    Inclusivity.

    Respondents reported similar use of inclusivity-based ST practices during face-to-face instruction (M = 65.42, SD = 25.35) and ERT (M = 62.73, SD = 25.55; Figure 3). Despite the similarity in mean scores, the 2.69-point mean decrease was significant (t = 3.65, p < 0.001).

    Interviewees who showed a decrease in inclusivity scores after the online switch mentioned that the decrease was an artifact of course timing, in that the inclusivity-relevant topics happened to occur before the transition; that the inclusivity practices were dropped or reduced as a casualty of stripping the course topics down to the basics addressed by the course textbook, which contained little to no diverse representation; or that they felt uncomfortable or awkward discussing controversial topics remotely, because they preferred to address these topics via conversation and open dialogue, which they did not feel was possible in the asynchronous remote format (Supplemental Table 3). Interviewee 9 mentioned: “If I’m going to do recorded lectures then it’s harder to get into controversial topics but when I’m in the class I can bring up and have more of a dialogue with my class.”

    Interviewees who maintained the same levels of inclusivity across modalities mentioned that inclusivity was a major component of their course design or that inclusivity is of high personal value to them. Interviewee 3 expressed:

    “Oh, for me that’s very, very, very important.… I have friends who work in my field who are amazing Black women or Black men and they are not represented fairly in a lot of different ways. So, to me that’s very important. I have students of color. I want them to see themselves in what I do.”

    Responsiveness.

    A significant decrease (t = 12.95, p < 0.001) in use of ST practices related to responsiveness was observed between face-to-face instruction (M = 75.75, SD = 14.72) and ERT (M = 55.25, SD = 19.60; Figure 3).

    Interviewees who decreased their responsiveness practices after the transition to ERT indicated two main reasons for the decrease: logistical concerns and a lack of student cues available (Supplemental Table 4). In terms of logistics, interviewees mentioned that they did not want to overburden students with additional work, that the online modality afforded fewer possible types of follow-up resources in both synchronous and asynchronous formats, and that they did not have enough time to check on all breakout rooms during online class sessions like they would while circulating around a classroom. Interviewees described that jumping into breakout rooms interrupted the flow of student work and that the weekly module structure led to a time lag in grading and feedback, so there was no opportunity to adjust to student understanding until the class had moved on to a different topic.

    In terms of the lack of student-cued responses, interviewees more specifically attributed their decrease in responsiveness to the inability to see students synchronously, because students tended to keep their cameras off during class sessions; inability to interact with students in real time when in the asynchronous format; a general lack of responsiveness among students; a lack of input from students until assignments or assessments were turned in for a grade; and a tendency of students to give false indications of understanding to avoid discomfort. Interviewee 3 described: “Sometimes flat out asking them, they’re not going to tell you … sometimes they don’t even know enough, like they’re so overwhelmed they just say, ‘Yes’ to make the conversation go away, or they don’t respond.”

    Interviewees who maintained the same level of responsiveness in both formats indicated that they found students were more likely to email them with questions when they were in the online modality, that their course activities fostered back-and-forth interactions between the instructor and students, and that being responsive was of high personal value in their teaching. Interviewee 14 describes this importance:

    “Because that’s the most important thing, right? You can’t figure out if students don’t know something and then not work to try to fix it … I guess I was able to maintain it because I think it was the most important part of teaching.”

    Experimental Design.

    Respondents reported low use of ST practices related to experimental design. Before ERT, mean use was 33.53 (SD = 19.95), and during ERT, mean use decreased to 26.0 (SD = 19.47; Figure 3). The difference in respondent means was significant (t = 7.18, p < 0.001).

    After the switch to ERT, interviewees tended to decrease experimental design practices but maintained similar frequencies of activities involving scientific literature or media articles. Interviewees cited practical, logistical, artifactual, student-motivated, and personal value–related reasons for taking these approaches (Supplemental Table 5).

    Interviewees who decreased in experimental design and communication scores indicated that these practices were removed after the transition in an attempt to simplify course materials just to get through the semester in “survival mode.” Interviewee 11 said:

    “It was less of a let’s analyze and think about what’s going on here and come up with hypotheses and more so like, okay, here’s the system that we’re looking at. Here are the basics that I want you to know. Here are some ideas that I want you to think about. Here’s a video to reinforce that and, here’s a time that I’m available for us to discuss that ahead of the test.”

    In some cases, these practices were dropped as a consequence of removing another course aspect such as group work or face-to-face labs. In other cases, interviewees could not ensure that students would have access to articles or other sources when they were off campus with unknown Internet access. One interviewee mentioned that the amount of time working on these activities was the same in both modalities, but it took longer for students to work through each individual activity remotely, so the frequency itself decreased. Additionally, some of the decreases were artifacts of the original course plan, which included experimental design early in the semester but not late in the semester. Interviewee 6 also reported dropping practices in this category, because the students lacked the skills needed to successfully complete these tasks independently:

    “It’s not something that our students seem to be very adept at … I’m teaching most of them as second-semester freshmen and so they really need a lot of prodding and hints and I think that would just be really hard to do remotely.”

    Interviewees who did not change frequency of experimental design and communication practices between modalities attributed their ability to maintain the same levels to factors such as resource availability and course design. Those citing resource availability reported that they provided tutorials, articles, or data sets to students through email or learning management systems, or their institutions provided iPads or loaner laptops to students before they left campus. Interviewees who mentioned course design indicated that experimental design or scientific literature composed a substantial portion of their course structure and goals; that their course material was amenable to experimental design or using literature; and that, in some cases, students were able to continue conducting semester-long projects after the transition. In addition, some interviewees clarified that opportunities to practice these skills were provided for students, but students were not held accountable for completing them. Interviewee 2 articulated:

    “When we cut off, we were just getting into human anatomy and physiology and I do a lot of case studies there … How often students were actually doing that I’m not sure but I was creating the opportunities for them.”

    Interviewees who increased experimental design and communication practices did so because it was their normal approach to teaching or because they placed high personal value on these practices as part of their teaching philosophy. Interviewee 7 indicated:

    “I feel like that’s just sort of a skill you need to have as a biologist and certainly I teach an upper-level biology course and so I just wanted to make sure that … the skills portion of it, they were still getting.”

    Data Analysis.

    The use of ST practices related to data analysis significantly decreased after the transition to ERT (t = 7.27, p < 0.001). During face-to-face instruction, the mean use of data analysis teaching practices was 38.98 (SD = 20.71), which decreased to 30.80 (SD = 20.44) during ERT (Figure 3).

    Interviewees who decreased the frequency of activities in which students were asked to make or interpret graphs reported that the decrease was motivated by not having enough time and needing to cut course components to make the amount of time in class or doing course activities more manageable for students, instructors, or both (Supplemental Table 6). Interviewee 3 noted: “For me it was just making things more manageable … some of their classes kind of tended towards overloading with busy work as opposed to fair assessments.” Interviewee 10 indicated that assignments with graphs were dropped, because they were challenging for students: “For the non-science majors, I mean their science literacy is not always the best and they get frustrated very quickly, so it was a battle I wasn’t interested in fighting.”

    Interviewees who maintained the same frequency of activities involving graphs indicated that they were able to establish graphing and graph interpretation skills early in the semester, so it was relatively easy to keep asking students to use these skills after the transition. Interviewee 13 explained: “Because I had that front-end focus on analysis, practical interpretation, graph building, they were able to actually apply that stuff with me not directly in the room.”

    In some of these cases, students continued working on semester-long projects that had begun during the face-to-face portion of the course. In other cases, students already had the materials (paper lab manuals) or technology (institution-provided iPads with graphing assignments preloaded) physically in hand and available from the start of the semester, so the practices continued seamlessly after the transition to the online modality. Only one interviewee increased the frequency of activities involving graphs after the transition, and that interviewee attributed the increase to an artifact of course timing, because the only course content involving interpretation of graphs happened to fall at the end of the semester.

    Cognitive Skills.

    A significant decrease (t = 6.80, p < 0.001) in use of ST practices related to cognitive skills was observed between face-to-face instruction (M = 48.73, SD = 18.54) and ERT (M = 41.29, SD = 19.57; Figure 3).

    The reasons that interviewees provided for decreases in cognitive skills practices were primarily logistical in nature (Supplemental Table 7). Some thought that cognitive skills activities required more instructor guidance, so they were prohibitively difficult to facilitate asynchronously or live with breakout rooms. Other interviewees mentioned that cognitive skills practices were lost as a casualty of dropping team-based activities. For some interviewees, the decrease in case studies or open-ended activities was simply an artifact of course timing, because these activities were originally planned for the first half of the semester. Other interviewees noted that the raw number of cognitive skills–related activities or problems was the same, but it took students longer to work through them remotely, so there were fewer in each class session. Interviewee 10 described this experience:

    “To cover the territory for those discussion questions that I thought was the depth we needed was a lot more prodding on my part, had a lot more of that lag time, had a lot more awkward silence, things like that. The amount of time was the same, but the number of different things we hit was different. It was a lot less.”

    Interviewees who maintained the same frequency of cognitive skills practices after the transition to ERT stated two main reasons: First, they were able to set the tone for course rigor early in the semester during the face-to-face interactions and maintain that rigor after the transition; and second, their personal values or teaching philosophies include challenging students to practice or build their cognitive skills. Interviewee 6 explained:

    “Well I’m just always trying to do that as much as I can. I don’t see a lot of point in asking students to just memorize stuff. I mean, they obviously have to know facts to be able to use the facts and apply them and analyze them and so on but I would much rather them be able to work at a little bit higher level.”

    Interviewee 7 cited a desire to promote academic integrity as the motivation for an increase in cognitive skills practices after the online transition:

    “I didn’t want [the students] to just be able to Google the answer and find something really straightforward online and so I was really trying to tailor my questions, not so much like rote memorization questions, but really to understand a particular process. The easiest way to do that I thought, and not fight the Internet was to really come up with these more deep-probing sorts of questions.”

    Reflection.

    Respondents reported low but similar use of reflection-based ST practices during face-to-face instruction (M = 32.37, SD = 21.67) and ERT (M = 29.32, SD = 20.80; Figure 3). Despite the similarity, the 3.05-point mean decrease was significant (t = 2.13, p = 0.035).
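
    A small mean decrease can still reach statistical significance in a paired design when the respondent-level differences vary little relative to the sample size. The arithmetic sketch below illustrates this point; the mean decrease is taken from the result above, but the standard deviation of the paired differences and the sample size are invented for illustration and are not values reported in this study.

        # Illustrative arithmetic only: t for a paired comparison is the mean difference
        # divided by the standard error of the differences, t = d_bar / (s_d / sqrt(n)).
        # sd_diff and n below are hypothetical, not values from this study.
        import math

        mean_diff = 3.05   # mean paired decrease, in points on the MIST scale
        sd_diff = 16.0     # hypothetical SD of the paired differences
        n = 125            # hypothetical number of paired respondents

        t = mean_diff / (sd_diff / math.sqrt(n))
        print(f"t = {t:.2f}")  # roughly 2.1 with these hypothetical inputs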

    To gauge the underlying factors contributing to changes in course and self-reflection strategies, interview participants were asked to explain why they changed or did not change the number of times students were either asked to reflect on their study habits or problem-solving strategies (self-reflection) or to provide feedback on course structure (course reflection), and they provided a variety of reasons for their approaches (Supplemental Table 8).

    About half of the interviewees decreased the number of self-reflection practices after the remote transition. Most of those individuals attributed this decrease to an artifact of course timing rather than to the modality transition, because students are encouraged to reflect more frequently at the beginning of the semester as they adjust to the course, and because course expectations were well established at the time of the interruption. Several interviewees also attributed this decrease to an incompatibility with asynchronous formats, because lecture videos were prerecorded, and because instructors only saw the students’ finished products; they could not circulate around the classroom to observe students while they were working through questions or problems. Interviewee 2 described this occurrence:

    “In person, I’m wandering around the room and I’m listening in to conversations. I’m kind of reading over their shoulders, as they’re writing out answers and I’ll jump in a lot and I’ll ask them, ‘Okay, tell me what you’re thinking right here. You know, what, why do you think this is the case? What led you to this conclusion?’ … Once we started meeting online, there wasn’t an opportunity for me to see the students work through the process. All I ever saw was the product.”

    Interviewee 1 increased the number of student self-reflection practices after the transition in an attempt to maintain a connection with the students and to keep them engaged in the course:

    “After transitioning to remote, partly [I] just missed the students. Partly [I] wanted to maintain connection and let them know [I was] still on the other end of the email … [I] tried to just keep reminding the students of that via email so that they didn’t feel like they were adrift at sea and also just to keep [the] course content on their radar.”

    With respect to course reflection practices, one interviewee reported a decrease: the instructor did not solicit feedback directly, to avoid over-emailing students, but remained open to their suggestions. Another individual, Interviewee 13, kept the frequency of course reflection practices consistent in both modalities, because it was an important component of their teaching philosophy:

    “I feel like it’s sort of my duty as an instructor to be able to sort of read the pulse of the classroom, actually listen to feedback and be humble enough to say okay this is working, this is not working, let me find a way for this to work for you and I thought that that was especially important given that none of them signed up for this kind of online transition.”

    Several interviewees who increased the frequencies of course reflection practices when their courses transitioned to ERT indicated reasons related to their own inexperience and uncertainty in the new learning environment, their teaching philosophy, wanting to know what was and was not working for students, and a desire to keep students connected to the course. Interviewee 5 connected these reasons by explaining:

    “Our students are really good at catching you in the hallway, catching you in the cafeteria, things like that. Well now when we are going online where everyone’s scattered it felt like it was now the burden of ‘Are you understanding this?’ shifts more back to me as opposed to students being responsible for their own learning. [Instructors at my institution] really were very much concerned and we were continually reminded of trying to reach out to the students and not letting them hide behind the distance.”

    Indirect Student Course Evaluation Feedback Regarding ST Practices

    Survey respondents were given the option to provide student evaluations for their focal courses. Thirty-six respondents (27%) elected to provide this information. The average response rate by students within each focal course was 58%. Twenty-nine of the participating respondents provided course evaluation data that included open-ended responses to questions asking for feedback on their courses. Open-ended responses that mentioned the transition to the online/remote setting were collated and further analyzed. A total of 109 responses from 15 courses were gathered. Given the low response rate by participants and the unrepresentative nature of these data, course feedback was only used as a source of triangulation between the results of the quantitative study and follow-up interviews.

    The most frequent subjects of student comments related to ST were active learning and responsiveness. Comments mentioning specific active-learning approaches (16% of all comments) were coded by the type of intervention used. The majority of these comments mentioned class discussion (65%) or collaboration tools (24%). Comments related to responsiveness included the themes of communication (36%), flexibility (20%), and organization (13%).

    Students mentioning discussion in course evaluations overwhelmingly described it positively (79%), even when the technology was clunky or awkward. They appreciated the way it provided an opportunity to interact with classmates.

    “I really liked the discussion questions we had once we moved to online learning because it was a really nice way to facilitate discussion with my peers and think about what I do and don’t understand after completing the homework. I felt more connected and engaged with the class which I think was one of my fears when we moved to online classes.”

    Communication is foundational to the ST practice of responsiveness. The theory of transactional distance (Moore and Kearsley, 1996) frames the importance of communication in the context of distance learning and states that separation between students and their teachers can “lead to communication gaps, a psychological space of potential misunderstandings between the behaviors of the instructors and those of the learners” (p. 200). A recommendation Moore has made based on this framework is that communication between teachers and students should include three elements: dialogue, structure, and learner autonomy. Dialogue in this theory is measured by the extent to which it resolves the students’ problems (Moore, 1997). Structure refers to the level of rigidity or flexibility of the course and is measured by the ability of the course to meet the students’ needs. Learner autonomy is a direct outcome of quality dialogue paired with a flexible structure, because these factors in combination enable learners to engage with the course effectively and with agency.

    In this context, the student comments related to communication, flexibility, and organization are in strong alignment with this model and closely associated with the ST practice of responsiveness. A large majority of students who mentioned communication discussed it in a positive light (76%). The general theme was an appreciation of frequent and clear correspondence from their professors. For example, one student reported, “The switch to distance learning was [fl]awless. Thank you for always keeping communication open and answering questions, no matter how frantic and crazy they seemed. I enjoyed this course!”

    A comment about communication was paired with a comment about the online/remote transition 45% of the time. Of these paired comments, a positive statement about communication was cited with a positive statement about the transition to the online/remote setting 85% of the time. For example,

    “I believe that the ability to meet with the professor virtually (if needed) when online learning began (and in-person if needed before online learning began) was great and that the transition was handled perfectly.”

    Positive attitudes toward the communication in the course likely contributed to student feelings of a smooth transition to the remote setting. As supported by the theory of transactional distance, comments related to flexibility and organization assumed strong communication with the instructor.

    “[Instructor] was very kind and understanding of the difficulties that online classes presented and was appropriately accommodating with Test Corrections, reminders of due dates, and an overall smooth transition to online learning with [their] mini-lectures, problem sets, and discussion questions.”

    “You made everything possible to help us understand the content, and you are there to answer our concerns timely even with the social distancing. [Instructor] set us up for success from the beginning. [They] set standards and made them clear and how to meet them. [They were] helpful and made [themselves] available to us as students. [Instructor] made an excellent switch to online courses and I enjoyed and appreciated [their] approach. I felt that [they were] a difficult but excellent professor.”

    Overall, the comments we observed related to the ST practices of active learning and responsiveness supported the impact of these practices on student learning. In the context of a remote environment, the ability of these practices to promote effective communication among peers and with the instructor contributed to positive student attitudes toward the course.

    DISCUSSION

    The COVID-19 pandemic has had a tremendous impact on all levels of our educational system. Early on in the pandemic, during the abrupt termination of face-to-face learning and subsequent transition to ERT, disruption was especially widespread. This emergency online shift with little to no time to develop or adapt remote curricula in the Spring of 2020 inspired a variety of approaches to adjusting course plans and teaching strategies to complete the semester. Instructor strategies to transition their courses to remote modalities fell on a continuum of change, ranging from approaches that retained as much of the original course plan as possible to drastic changes in course plans and requirements, often stripping the course to the minimum components and goals required to meet “adequate” standards. We posited that the quality of instruction declined as a result. We conducted an online survey near the conclusion of the Spring semester of transition to quantify the amount of change in ST practices in biology-related courses before and after ERT. We then conducted follow-up semistructured interviews with a subset of survey respondents to gain insight on how and why they changed their courses in those ways.

    Decrease of All ST Practices during ERT, Especially Active Learning

    Each of the ST subcategories decreased as a result of the rapid transition to ERT, but active learning, responsiveness to students, and learning goals and feedback were the most dramatically impacted (Figure 3 and Table 6). In particular, our data suggest that a main challenge of the abrupt shift to ERT was grappling with how to bring active learning into the remote setting (Supplemental Material 1). Active-learning practices decreased by 21 points on the MIST scale. Averaged across all respondents, instructors self-reported that the percentage of class time in which students were active decreased by 16.8% and the percentage of time spent working in groups decreased by 13% after the transition (unpublished data). This is particularly concerning, because of all the ST practices, those associated with active learning have the most evidence supporting their role in student success (Freeman et al., 2014), including benefits for underrepresented students, such as reducing the achievement gap (Theobald et al., 2020), improving science self-efficacy (Ballen et al., 2017), and increasing retention (Estrada et al., 2016). Thus, this decrease in active learning alone is likely to have been a substantial detriment to student learning after the remote switch. However, some instructors were able to retain high levels of active-learning practices in both synchronous and asynchronous modalities (Supplemental Material 1). Interviewees described using live conversations and breakout-room group work when meeting synchronously, while interactive assignments or shared class documents kept students in asynchronous modalities active on their own time. Students also seemed to appreciate the chance to complete active-learning activities, especially if it meant interacting with their classmates. An overwhelming majority (79%) of student course evaluation responses mentioning the use of discussion and interaction with classmates cited active learning in a positive manner.

    Responsiveness to Students, Learning Goal Use, and Feedback Decreased during ERT

    Teaching practices associated with responsiveness to students and learning goal use and feedback were the second and third most-impacted ST subcategories. These decreases likely occurred for similar reasons due to parallels between instructors being responsive to student needs and providing feedback on progress toward achieving learning goals. Indeed, interviews revealed that instructors had trouble gauging how well students understood material when operating either synchronously or asynchronously, which affected instructors’ ability to provide feedback or to be responsive to misunderstandings. Additionally, when formative assessments were completed, the necessary lag time in providing feedback in asynchronous approaches often meant that students did not receive the feedback until the class had moved on to the next weekly module, which was often a different topic. This is concerning, because the course then loses the iterative nature of learning in which students can identify specific misunderstandings and correct them before taking summative assessments. Student evaluation data pointed to instructor responsiveness as a key component of positive student experiences in the switch to remote modalities. Specifically, students pointed to instructor flexibility and understanding of their situations as well as a sense of strong communication as major contributors to positive course experiences. Students also viewed strong communication between an instructor and students as an indicator of a highly organized instructor. Taken together, the decrease in responsiveness and feedback brought on by the disruption of face-to-face learning environments is likely to have decreased instructor effectiveness and negatively impacted the student experience.

    Retention of ST Practices during ERT

    Of the variables we measured, the face-to-face MIST score was the strongest predictor of remote MIST scores (Table 5). Instructors who used higher levels of ST practices in the face-to-face setting had higher levels of ST teaching practices in the remote setting. In other words, doing more ST in everyday instructional approaches translated to doing more ST under duress, despite large decreases in the frequencies of those practices. Even at a potentially reduced capacity to implement ST, instructors who were invested in using these best practices found a way to weave them into their suddenly changed curriculum. Given the sudden remote switch and the often noted “survival” mentality of ERT, it is not surprising that instructors who had low levels of ST in face-to-face settings did not invest the time and cognitive effort needed to shift their teaching approaches to include more ST practices, despite the benefit it might have had for students. Our interviews nicely exemplified these two ends of the spectrum (Supplemental Materials 1–8). In one case, administrators at the institution of a high-MIST-score instructor urged faculty to operate under low expectations and to conduct courses asynchronously, but after trying and disliking that approach, the instructor instead held synchronous class meetings in which students worked almost exclusively on case studies in breakout room groups. In a different case, a lower-MIST-score instructor held synchronous class sessions consisting almost entirely of lectures. When this instructor tried breakout rooms, inevitable technical difficulties arose, and the instructor thereafter dropped all activity beyond lecturing in the main meeting room to avoid the hassle and the time wasted working out issues with the technology.

    Modality type was also a significant contributor to variation in MIST remote scores (Table 5). Instructors who used synchronous or blended modalities were more likely to have higher MIST remote scores and experience a less drastic reduction in remote MIST scores compared with instructors using asynchronous modes. This suggests that the synchronous formats more closely resembled the in-person learning environment and were therefore likely more beneficial for students, given the higher levels of research-supported practices. Indeed, when comparing in-person and remote versions of the same course in nonemergency instances, research conducted in marketing education indicated that student learning and engagement in online marketing courses most closely resembled the level of the face-to-face classroom when the online course operated in synchronous formats (Francescucci and Rohani, 2019). It should be noted, however, that many of the instructors indicated in interviews or survey comments that, despite the potentially reduced quality of teaching, they chose asynchronous delivery of course materials to promote equity and accommodate student needs such as differing time zones, lack of reliable Internet connection or computer access, the need to acquire and maintain full-time employment to help their families pay rent, and many others. In some cases, the instructors made this decision independently, but in others, they were pressured or required to do so by administrators at their institution (Colclasure et al., 2021). Although asynchronously delivered courses were more likely to diverge from their face-to-face versions in research-based practices, students mentioned benefits of some of the asynchronous tools in their course evaluations. Of the student course evaluation comments analyzed, 25% mentioned prerecorded or minilecture videos, and an impressive 96% of these comments were positive about the use of this tool. This suggests that, even when online courses are delivered synchronously, providing recordings of class sessions or prerecorded minilectures is likely to benefit students and provide a more positive experience in the course.

    The instructors’ comfort with technology also influenced their likelihood of implementing ST in ERT (Table 5). Instructors who were more comfortable with technology implemented higher levels of ST practices. It is perhaps not surprising that those who were comfortable with technology tried out more ST in the remote format. Two instructors who were uncomfortable with technology mentioned in their interviews that they tried using some technology features like breakout rooms or showing videos, but once they experienced difficulty, they abandoned ship and stuck with the basics (Supplemental Material 1). The degree of troubleshooting required for remote course delivery was, by itself, intimidating for these and similar instructors, and this was compounded by the need to help students troubleshoot their own technical difficulties, especially in cases of unstable Internet connections or limited access to technology.
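
    As a rough illustration of how the predictors discussed in this section (face-to-face MIST score, modality, and comfort with technology) could be combined in a single model, the sketch below fits an ordinary least-squares regression in Python with pandas and statsmodels. The data frame, variable names, and codings are invented for illustration; this is not the model or data behind Table 5.

        # Minimal sketch of an OLS model in the spirit of the predictors discussed above.
        # All rows, variable names, and codings are invented for illustration.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.DataFrame({
            "mist_remote": [45, 60, 30, 55, 70, 40, 65, 35],
            "mist_f2f": [60, 75, 50, 65, 85, 55, 80, 48],
            "modality": ["async", "sync", "async", "blended", "sync", "async", "blended", "async"],
            "tech_comfort": [3, 5, 2, 4, 5, 3, 4, 2],  # hypothetical 1-5 comfort-with-technology rating
        })

        model = smf.ols("mist_remote ~ mist_f2f + C(modality) + tech_comfort", data=df).fit()
        print(model.params)  # coefficient estimates for each predictor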

    Time-Intensive and Nonequitable Practices Dropped

    To gain insight into the underlying reasons for instructional decisions regarding ST practices, we asked interviewees to reflect on why they chose to change or retain ST practices after the disruption in their courses. A detailed analysis of the major barriers faced by interviewees and their perceptions of the barriers their students faced has been published previously (Colclasure et al., 2021). Instructor motivations for decreasing ST practices tended to fall into categories of practical or time-based justifications or concerns around equity and fairness to students in different time zones, those with limited Internet connectivity or access to technology, or students with employment or family obligations (Supplemental Material 1). Many interviewees mentioned not having enough time to plan the logistics of how to facilitate ST practices remotely. Several also mentioned that they were planning to spend time in the subsequent summer working on those logistics and developing more effective strategies for upcoming remote semesters as the pandemic and remote instruction continued.

    There were two general recurring themes that emerged throughout the interview process for why instructors were able to maintain practices. First, practices were maintained because the precedent for and expectations regarding those practices had already been established early in the semester while the classes were meeting face-to-face. Instructors were able to continue implementing these practices in a somewhat seamless manner, because the students knew how to complete the tasks, had developed the necessary skills, could efficiently use the affiliated technologies, or were familiar with the expectations for success around these practices or activities before the interruption. In particular, interviewees mentioned this in reference to practices involving graphing skills, data analysis skills, apps or programs used for graphing, finding and/or reading scientific articles, metacognitive reflection practices, and critical-thinking skills. Second, instructors indicated intentionally retaining ST practices that were of high personal value to themselves, many of which were aspects of their teaching philosophies. One interviewee put forth extra effort to continue including a diversity of contributors to science; they valued making sure all of their scientist friends and colleagues were represented to their students (Supplemental Material 3). Another interviewee mentioned retaining activities requiring students to use higher-level cognitive skills, because that was the central motivating factor for the flipped design of the course, and they think that college-level science courses should demand higher-level cognitive engagement from students (Supplemental Material 7).

    Study Limitations

    While our study provides valuable insight regarding instructional approaches in ERT, there are a number of cautions to be considered when interpreting the results of this work. First, we disseminated MIST toward the end of the semester, so instructors provided information about their teaching practices from the face-to-face portion of the course in a retrospective manner rather than at the end of the face-to-face portion itself. This was a necessary approach, given the suddenness of the remote transition. We also note that the short time frame between the end of the face-to-face modality and the survey administration is likely to yield relatively accurate results, especially as instructors were making direct comparisons between the two modalities. Additionally, self-report instruments have been subject to overestimation of practices in some cases, especially when respondents are pressured to perform well, as in promotion or professional development evaluations (Ebert-May et al., 2011; Wieman and Gilbert, 2014). However, our work presented no pressure to perform and focused on the change in practices rather than the raw frequency of practices, thus mitigating typical motivators to overestimate practices. Furthermore, previous work by the developers of MIST triangulated instructor, student, and observer responses and found a strong correlation between the perspectives, which supports the use of MIST as an instructor self-report tool in low-pressure conditions (Durham et al., 2018).

    We also note that many of the courses in our sample, especially those at PUIs, included both a lecture and a lab component. MIST is designed to capture information only from the lecture portions of courses, and respondents are instructed to consider only the lecture portions of their courses in their responses, so our results likely underestimate ST levels for courses in which lab components were tightly integrated with regular course components. Several survey respondents noted this concern in the open comments box.

    Despite the use of strategies to increase survey response rates (Dillman et al., 2014), we acknowledge receiving a relatively low response rate (12.2%) from our target population. The low response rate was likely caused by the fallout from COVID-19 and corresponding challenges with ERT (email overload, increased workload, etc.) that have been well documented during this time (Johnson et al., 2020; Colclasure et al., 2021; Stewart, 2021). The substantial estimated time commitment to complete the survey (20 to 30 minutes) also likely deterred members of the sampling frame from starting the survey (Kaplowitz et al., 2012). In rare cases, faculty may not have had access to the Internet and would therefore have been unable to see or respond to our request. Given these challenges, we believe the response rate is acceptable, but caution should be taken when interpreting the data.

    The study described here is regional and centered on PUIs. About half (49.3%) of all respondents were from our focus area, which includes five midwestern states: Iowa, Kansas, Missouri, Nebraska, and North Dakota. Out of all our respondents within the Midwest, a majority (77.6%) were from these same five states. Eighty-two percent were from a PUI (Table 2). Even so, we found no statistical difference between instructor responses from this focused region of the Midwest and the rest of the sample (see Overall Change of ST Practices and Contributing Factors). We also found that the challenges reported by our focus population mirrored struggles that have been reported more broadly (Johnson et al., 2020; Colclasure et al., 2021; Stewart, 2021).

    We would like to stress that this work can only be interpreted in the context of ERT; our results cannot necessarily be extended to standard remote teaching environments, in which time to consider pedagogical decisions and develop course plans is abundant and students who enroll in the course are likely to have adequate access to technology and Internet connectivity. In other words, it is not the remote teaching environment itself that necessarily decreased ST levels, but rather the emergency, survival-mode operation caused by the rapidity of the transition to the remote modality. Our qualitative data illustrate two major limiting factors on instructors’ abilities to implement ST practices: their lack of time to prepare, compounded by difficulties in balancing home life with work, and their uncertainty concerning student access to class time, technology, or reliable Internet connectivity. Our findings would quite likely be different if the instructors had known months ahead of time that they would be teaching online, and indeed, many of the interviewees mentioned dedicating time in the subsequent summer to plan more rigorously for effective instruction in their presumably remote courses for the upcoming semester.

    Recommendations

    Our work took an important step by documenting changes in research-based instructional strategies that occurred as a result of the COVID-19–necessitated rapid transition to remote teaching modalities. Generally speaking, we found that instructors who were already trained in and implementing higher levels of ST practices in their courses as well as those who were most adequately equipped with technology resources of both hardware and know-how before the disruption of face-to-face learning were most likely to retain higher levels of effective teaching approaches under duress. Educators and administrators in higher education institutions can leverage this insight to better prepare for future events that might require rapid changes in course plans and ensure the quality of the educational experience can be retained in the event of unexpected changes. Building a more robust and reliable infrastructure can lay the groundwork that instructors can easily tap into for support in situations that would require a rapid shift to remote modalities, whether they be broad-range, campus- or department-wide, or even at the level of individual instructors, and whether they be short-term, such as a weather-related campus closure, or longer term, like the pandemic quarantines and campus closures. We offer the following recommendations to help minimize negative impacts on teaching in the event of rapid change to remote modality. These recommendations parallel those that came out of a detailed analysis of the 14 individual interviews that were part of this study (Colclasure et al., 2021).

    Despite the advantages of implementing research-supported instructional strategies (Freeman et al., 2014), recent evidence suggests that undergraduate science courses are still primarily taught using traditional lecture strategies (Stains et al., 2018). Our data indicate that this shortfall was further amplified by the rapid transition to ERT. Thus, providing instructors with support and training for the use of research-based instructional strategies such as those associated with the ST pedagogy can improve both face-to-face instruction and adjustments necessitated by ERT. In particular, focusing training and teaching practice on the implementation of active-learning and communication strategies to promote interaction between students and responsiveness by the instructor would, based on our results, be expected to have a major impact in buffering against learning losses. In addition, focusing on practices that intersect ST and inclusivity will likely yield robust retention of practices across modalities (Harris et al., 2020). Practices in this intersection can help keep students connected and engaged with the course without excluding those who have restricted access to technology and/or limited Internet connectivity. The interviews in this study were conducted with a representative set of individuals who spanned the range of ST implementation observed in our data set. Because of this, the interview data provide a wealth of specific cases of the different ways ST practices were or were not used, which could be drawn on for faculty training purposes.

    Another implication of this study is that regular use of electronic resources and access to technology in face-to-face settings can reduce the impact of the ERT transition by improving instructor self-efficacy in technology usage. Resources should be evaluated for their application in the implementation of ST. It could be beneficial for individuals, departments, and institutions to curate exemplar resources aligned with ST practices. In particular, resources that help facilitate teamwork in an online setting and that promote interaction between instructors and students could be prioritized to buffer against disruptions in an emergency scenario. Practicing the use of these tools in nonemergency situations is important for both instructors and students. Similarly, instructors in our study whose institutions provided technology such as tablets or laptops to all students were able to mitigate some of the limitations of students who would not otherwise have access to technology, thus promoting inclusivity and the continuation of ST practices implemented before ERT. Any strategy promoting faculty self-efficacy in the usage of technology would be predicted to promote ST practice in an ERT event based on our results.

    Our data indicate that synchronous ERT formats more closely resembled face-to-face portions of the disrupted courses than asynchronous ERT approaches; however, synchronous formats are not always possible or equitable for all students. This demands that attention be paid to developing effective methods to implement ST practices in asynchronous formats. For example, one interviewee had students work in groups to complete a shared Google Doc of course notes, which could easily be completed across the time zones and varying schedules of the participating students (Supplemental Material 7). However, as we saw in our study, respondents came into ERT with a wide range of online teaching experience. In an ERT situation, our results suggest that steering faculty with little online teaching experience toward synchronous and blended synchronous/asynchronous online formats may cause less disruption to the quality of teaching. Outside of ERT, online teaching training efforts anchored in ST would better prepare faculty for abrupt transitions. Faculty development efforts starting with foundational training on ST could then expand to the individual needs and interests of the faculty involved to work on adapting and implementing these principles in face-to-face settings, blended face-to-face/online learning formats, synchronous online settings, synchronous/asynchronous blended formats, and asynchronous online modalities. Encouraging faculty to continuously work on their ST implementation within a variety of formats would maximize their ability to transition abruptly into remote learning. Combining this training with curated resources, as recommended earlier, would help the institution act more cohesively in serving students.

    Developing contingency plans for unexpected events in the future can help ensure a more seamless transition in teaching and learning in higher education when circumstances require rapidly changing course modality in an ERT scenario. More importantly, supporting faculty in adopting ST practices and ensuring equitable access to technology will provide immediate benefits to students and added protection against the negative effects of a future crisis.

    ACKNOWLEDGMENTS

    We would like to thank our research participants who volunteered their time to make this research possible. We would also like to thank AnnMarie Marlier for helpful insight and contributions to data analysis and two anonymous reviewers for their thoughtful feedback. This work was funded by the NSF EPSCoR Research Infrastructure Improvement (RII) Track-1 (1557417).

    REFERENCES

  • Aebersold, A., Hooper, A., Berg, J. J., Denaro, K., Mann, D., Ortquist-Ahrens, L., … & Verma, M. (2020). Investigating the transition to remote teaching during COVID-19. Journal on Centers for Teaching and Learning, 12. Retrieved September 2, 2022, from https://openjournal.lib.miamioh.edu/index.php/jctl/article/view/207
  • Ahmad, H., Latada, F., Wahab, M. N., Shah, S. R., & Khan, K. (2018). Shaping professional identity through professional development: A retrospective study of TESOL professionals. International Journal of English Linguistics, 8(6), 37–51. https://doi.org/10.5539/ijel.v8n6p37
  • American Association for the Advancement of Science. (2011). Vision and change in undergraduate biology education: A call to action, final report. Washington, DC.
  • Anderson, G. (2020, September 16). More pandemic consequences for underrepresented students. Inside Higher Ed. Retrieved September 2, 2022, from www.insidehighered.com/news/2020/09/16/low-income-and-students-color-greatest-need-pandemic-relief
  • Ballen, C. J., Wieman, C., Salehi, S., Searle, J. B., & Zamudio, K. R. (2017). Enhancing diversity in undergraduate science: Self-efficacy drives performance gains with active learning. CBE—Life Sciences Education, 16(4), ar56. https://doi.org/10.1187/cbe.16-12-0344
  • Barriball, K. L. (1994). Collecting data using a semi-structured interview: A discussion paper. Journal of Advanced Nursing, 19, 328–335. https://doi.org/10.1111/j.1365-2648.1994.tb01088.x
  • Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7–74. https://doi.org/10.1080/0969595980050102
  • Blaich, C., & Wise, K. (2020, April 8). Initial trends from the HEDS COVID-19 institutional response student survey. Higher Education Data Sharing Consortium (HEDS). Retrieved September 2, 2022, from www.hedsconsortium.org/wp-content/uploads/2020.04.08-HEDS-COVID-Student-Survey-Update-initial-trends.pdf
  • Bowne, D. R., Downing, A. L., Hoopes, M. F., LoGiudice, K., Thomas, C. L., Anderson, L. J., … & Shea, K. L. (2011). Transforming ecological science at primarily undergraduate institutions through collaborative networks. BioScience, 61(5), 386–392. https://doi.org/10.1525/bio.2011.61.5.7
  • Cachia, M., & Millward, L. (2011). The telephone medium and semi-structured interviews: A complementary fit. Qualitative Research in Organizations and Management, 6(3), 265–277. https://doi.org/10.1108/17465641111188420
  • Cantrell, P. (2010). Traditional vs. retrospective pretests for measuring science teaching efficacy beliefs in preservice teachers. School Science & Mathematics, 103(4), 177–185. https://doi.org/10.1111/j.1949-8594.2003.tb18116.x
  • Chirikov, I., Soria, K. M., Horgos, B., & Org, E. (2020). Undergraduate and graduate students’ mental health during the COVID-19 pandemic. SERU Consortium, University of California. Retrieved September 2, 2022, from https://escholarship.org/uc/item/80k5d5hw
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
  • Colclasure, B. C., Marlier, A., Durham, M. F., Brooks, T. D., & Kerr, M. (2021). Identified challenges from faculty teaching at predominantly undergraduate institutions after abrupt transition to emergency remote teaching during the Covid-19 pandemic. Education Sciences, 11(9), 556. https://doi.org/10.3390/educsci11090556
  • Couch, B. A., Brown, T. L., Schelpat, T. J., Graham, M. J., & Knight, J. K. (2015). Scientific teaching: Defining a taxonomy of observable practices. CBE—Life Sciences Education, 14, ar9.
  • Creswell, J. W., & Plano Clark, V. L. (2006). Designing and conducting mixed methods research. Thousand Oaks, CA: Sage.
  • Creswell, J. W., & Plano Clark, V. L. (2011). Designing and conducting mixed methods research (2nd ed.). Thousand Oaks, CA: Sage.
  • Crouch, C. H., & Mazur, E. (2001). Peer Instruction: Ten years of experience and results. American Journal of Physics, 69(9), 970–977. https://doi.org/10.1119/1.1374249
  • Dewsbury, B., & Brame, C. J. (2019). Inclusive teaching. CBE—Life Sciences Education, 18(2), fe2. https://doi.org/10.1187/cbe.19-01-0021
  • Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method. Washington, DC: Wiley.
  • Durham, M. F., Knight, J. K., Bremers, E. K., DeFreece, J. D., Paine, A. R., & Couch, B. A. (2018). Student, instructor, and observer agreement regarding frequencies of scientific teaching practices using the Measurement Instrument for Scientific Teaching–Observable (MISTO). International Journal of STEM Education, 5(1), 1–15. https://doi.org/10.1186/s40594-018-0128-1
  • Durham, M. F., Knight, J. K., & Couch, B. A. (2017). Measurement Instrument for Scientific Teaching (MIST): A tool to measure the frequencies of research-based teaching practices in undergraduate science courses. CBE—Life Sciences Education, 16(4), ar67. https://doi.org/10.1187/cbe.17-02-0033
  • Ebert-May, D., Derting, T. L., Hodder, J., Momsen, J. L., Long, T. M., & Jardeleza, S. E. (2011). What we say is not what we do: Effective evaluation of faculty professional development programs. BioScience, 61(7), 550–558. https://doi.org/10.1525/bio.2011.61.7.9
  • Eeds, A., Vanags, C., Creamer, J., Loveless, M., Dixon, A., Sperling, H., … & Shepherds, V. L. (2014). The school for science and math at Vanderbilt: An innovative research-based program for high school students. CBE—Life Sciences Education, 13(2), 297–310. https://doi.org/10.1187/cbe.13-05-0103
  • Estrada, M., Burnett, M., Campbell, A. G., Campbell, P. B., Denetclaw, W. F., Gutiérrez, C. G., … & Zavala, M. (2016). Improving underrepresented minority student persistence in STEM. CBE—Life Sciences Education, 15(3), es5. https://doi.org/10.1187/cbe.16-01-0038
  • Fernandes, J. D., Sarabipour, S., Smith, C. T., Niemi, N. M., Jadavji, N. M., Kozik, A. J., … & Haage, A. (2020). Research culture: A survey-based analysis of the academic job market. eLife, 9, e54097. https://doi.org/10.7554/eLife.54097
  • Field, A. (2013). Discovering statistics using IBM SPSS (4th ed.). Thousand Oaks, CA: Sage.
  • Francescucci, A., & Rohani, L. (2019). Exclusively synchronous online (VIRI) learning: The impact on student performance and engagement outcomes. Journal of Marketing Education, 41(1), 60–69. https://doi.org/10.1177/0273475318818864
  • Freeman, E. A., Theodosiou, N. A., & Anderson, W. J. (2020). From bench to board-side: Academic teaching careers. Developmental Biology, 459(1), 43–38. https://doi.org/10.1016/j.ydbio.2019.10.032
  • Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences USA, 111(23), 8410–8415. https://doi.org/10.1073/pnas.1319030111
  • Gonçalves, E., & Capucha, L. (2020). Student-centered and ICT-enabled learning models in veterinarian programs: What changed with COVID-19? Education Sciences, 10(11), 343. https://doi.org/10.3390/educsci10110343
  • Gonzalez, T., De La Rubia, M. A., Hincz, K. P., Comas-Lopez, M., Subirats, L., Fort, S., & Sacha, G. M. (2020). Influence of COVID-19 confinement on students’ performance in higher education. PLoS ONE, 15(10), e0239490. https://doi.org/10.1371/journal.pone.0239490
  • Handelsman, J., Ebert-May, D., Beichner, R., Bruns, P., Chang, A., DeHaan, R., … & Wood, W. B. (2004). Scientific teaching. Science, 304, 521–522. https://doi.org/10.1126/science.1096022
  • Handelsman, J., Miller, S., & Pfund, C. (2007). Scientific teaching. New York, NY: Freeman.
  • Harris, B. N., McCarthy, P. C., Wright, A. M., Schutz, H., Boersma, K. S., Shepherd, S. L., … & Ellington, R. M. (2020). From panic to pedagogy: Using online active learning to promote inclusive instruction in ecology and evolutionary biology courses and beyond. Ecology and Evolution, 10(22), 12581–12612. https://doi.org/10.1002/ece3.6915
  • Johnson, N., Veletsianos, G., & Seaman, J. (2020). US faculty and administrators’ experiences and approaches in the early weeks of the COVID-19 pandemic. Online Learning, 24(2), 6–21. https://doi.org/10.24059/olj.v24i2.2285
  • Kaplowitz, M. D., Lupi, F., Couper, M. P., & Thorp, L. (2012). The effect of invitation design on Web survey response rates. Social Science Computer Review, 30(3), 339–349. https://doi.org/10.1177/0894439311419084
  • Kecojevic, A., Basch, C. H., Sullivan, M., & Davi, N. K. (2020). The impact of the COVID-19 epidemic on mental health of undergraduate students in New Jersey, cross-sectional study. PLoS ONE, 15(9), e0239696. https://doi.org/10.1371/journal.pone.0239696
  • Lam, T. C. M., & Bengo, P. (2003). A comparison of three retrospective self-reporting methods of measuring change in instructional practice. American Journal of Evaluation, 24(1), 65–80. https://doi.org/10.1177/109821400302400106
  • Lederman, D. (2019, October 30). Professors’ slow, steady acceptance of online learning: A survey. Inside Higher Ed. Retrieved September 2, 2022, from www.insidehighered.com/news/survey/professors-slow-steady-acceptance-online-learning-survey
  • Lincoln, Y., & Guba, E. G. (1985). Naturalistic inquiry. Thousand Oaks, CA: Sage.
  • Little, T. D., Chang, R., Gorrall, B. K., Waggenspack, L., Fukuda, E., Allen, P. J., & Noam, G. G. (2019). The retrospective pretest-posttest design redux: On its validity as an alternative to traditional pretest-posttest measurements. International Journal of Behavioral Development, 44(2), 175–183. https://doi.org/10.1177/0165025419877973
  • Mali, D., & Lim, H. (2021). How do students perceive face-to-face/blended learning as a result of the Covid-19 pandemic? International Journal of Management Education, 19(3), 100552. https://doi.org/10.1016/j.ijme.2021.100552
  • Michael, J. (2006). Where’s the evidence that active learning works? Advances in Physiology Education, 30(4), 159–167. https://doi.org/10.1152/advan.00053.2006
  • Moore, M. (1997). Theory of transactional distance. In Keegan, D. (Ed.), Theoretical principles of distance education (pp. 22–38). Oxfordshire, UK: Routledge.
  • Moore, M., & Kearsley, G. (1996). Distance education: A systems view. Belmont, CA: Wadsworth Publishing.
  • Morse, J. M. (2015). Critical analysis of strategies for determining rigor in qualitative inquiry. Qualitative Health Research, 25(9), 1212–1222. https://doi.org/10.1177/1049732315588501
  • Osborn, D. S. (2010). Using video lectures to teach a graduate career development course. Retrieved September 2, 2022, from http://counselingoutfitters.com/vistas/vistas10/Article_35.pdf
  • Prince, M. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93(3), 223–231. https://doi.org/10.1002/j.2168-9830.2004.tb00809.x
  • Reinhart, R. A., & Banister, S. I. (2009). Validating a measure of teacher technology integration. In Maddux, C. D. (Ed.), Research highlights in Technology & Teacher Education. Society for Information Technology & Teacher Education. Retrieved October 3, 2022, from http://works.bepress.com/savilla_banister/5/
  • Saldaña, J. (2013). The coding manual for qualitative researchers (2nd ed.). Thousand Oaks, CA: Sage.
  • Slocum, R. D., & Scholl, J. D. (2013). NSF support of research at primarily undergraduate institutions (PUIs). Council of Undergraduate Research, 34(1). Retrieved September 2, 2022, from www.cur.org/assets/1/23/Fall2013_v34.1_slocum.scholl.pdf
  • Stains, M., Harshman, J., Barker, M. K., Chasteen, S. V., Cole, R., DeChenne-Peters, S. E., … & Young, A. M. (2018). Anatomy of STEM teaching in North American universities. Science, 359(6383), 1468–1470. https://doi.org/10.1126/science.aap8892
  • Stewart, W. H. (2021). A global crash-course in teaching and learning online: A thematic review of empirical emergency remote teaching (ERT) studies in higher education during year 1 of COVID-19. Open Praxis, 13(1), 89–102. Retrieved September 2, 2022, from https://search.informit.org/doi/10.3316/informit.758902304536019
  • Sunasee, R. (2020). Challenges of teaching organic chemistry during COVID-19 pandemic at a primarily undergraduate institution. Journal of Chemical Education, 97(9), 3176–3181. https://doi.org/10.1021/acs.jchemed.0c00542
  • Suri, H. (2011). Purposeful sampling in qualitative research synthesis. Qualitative Research Journal, 11(2), 63–75. https://doi.org/10.3316/QRJ1102063
  • Tanner, K., & Allen, D. (2004). Approaches to biology teaching and learning: From assays to assessments—on collecting evidence in science teaching. Cell Biology Education, 3(2), 69–74. https://doi.org/10.1187/cbe.04-03-0037
  • Tartavulea, C. V., Albu, C. N., Albu, N., Dieaconescu, R. I., & Petre, S. (2020). Online teaching practices and the effectiveness of the educational process in the wake of the COVID-19 pandemic. Amfiteatru Economic, 22(55), 920–936. https://doi.org/10.24818/EA/2020/55/920
  • Theobald, E. J., Hill, M. J., Tran, E., Agrawal, S., Arroyo, E. N., Behling, S., … & Freeman, S. (2020). Active learning narrows achievement gaps for underrepresented students in undergraduate science, technology, engineering, and math. Proceedings of the National Academy of Sciences USA, 117(12), 6476–6483. https://doi.org/10.1073/pnas.1916903117
  • Trust, T., & Whalen, J. (2020). Should teachers be trained in emergency remote teaching? Lessons learned from the COVID-19 pandemic. Journal of Technology and Teacher Education, 28(2), 189–199. Retrieved March 16, 2022, from www.learntechlib.org/primary/p/215995
  • Wieman, C., & Gilbert, S. (2014). The Teaching Practices Inventory: A new tool for characterizing college and university teaching in mathematics and science. CBE—Life Sciences Education, 13(3), 552–569. https://doi.org/10.1187/cbe.14-02-0023
  • Wood, W. B. (2009). Innovations in teaching undergraduate biology and why we need them. Annual Review of Cell and Developmental Biology, 25, 93–112. https://doi.org/10.1146/annurev.cellbio.24.110707.175306
  • Young, J., & Kallemeyn, L. (2019). Testing the retrospective pretest with high school youth in out-of-school time programs. Journal of Youth Development, 14(1), 216–229. https://doi.org/10.5195/jyd.2019.635