
Students in Fully Online Programs Report More Positive Attitudes toward Science Than Students in Traditional, In-Person Programs

    Published Online: https://doi.org/10.1187/cbe.16-11-0316

    Abstract

    Following the growth of online, higher-education courses, academic institutions are now offering fully online degree programs. Yet it is not clear how students who enroll in fully online degree programs compare with those who enroll in in-person (“traditional”) degree programs. Because previous work has shown that students’ attitudes toward science can affect their performance in a course, it is valuable to ask how attitudes toward science differ between these two populations. We studied students who completed a fully online astrobiology course. In an analysis of 451 student responses to the Classroom Undergraduate Research Experience survey, we found that online program students began the course with a higher scientific sophistication and a higher sense of the personal value of science than those in traditional programs. Precourse attitudes also showed some predictive power for course grades among online students, but not for traditional students. Given established relationships between feelings of personal value, intrinsic motivation, and, in turn, traits such as persistence, our results suggest that open-ended or exploration-based learning may be more engaging to online program students because of their pre-existing attitudes. The converse may also be true: certain pre-existing attitudes may be more detrimental for online program students than for traditional program students.

    INTRODUCTION

    Online courses have proliferated and are being offered by many colleges and universities as part of their broader distance education options. In the United States, 70.7% of all institutions of higher education and 95% of institutions with enrollment greater than 5000 students offer distance education (Allen and Seaman, 2015). In addition, 28% of students in higher education use distance learning for some of their course work (Allen et al., 2016). While many students take only a few courses online, an increasing number are enrolling in fully online degree programs (National Center for Education Statistics, 2014). A number of factors, such as the wider availability of technology, difficult economic conditions, and changing perceptions of the quality of online education, have contributed to the increasing popularity of online platforms, making learning more affordable, accessible, and personalized (Means et al., 2014).

    Research has examined why students do or do not choose to take online courses instead of traditional courses (Jaggars, 2014). There are clear advantages to online courses, such as a more flexible schedule for those balancing family, work, and school. There are also potential disadvantages: for example, some students are unwilling to enroll in fully online degree programs out of a desire to maintain a connection to the campus (both the location and the people) and to have a better connection with the instructor (Jaggars, 2014). However, online students may alleviate this disadvantage by traveling to the campus; Clinefelter and Aslanian (2017) found that 59% of respondents to their nationwide survey traveled to the campus at which they were enrolled (between one and five times per year). The process of weighing these advantages and disadvantages implies that there will be systematic differences in preferences, financial circumstances, and family considerations between students who ultimately choose to enroll in fully online degree programs and those who do not. Whether these differences extend to attitudes toward academics in general, attitudes toward specific subjects, and the efficacy of specific strategies is unclear. Therefore, we focused on students’ attitudes toward science and how those attitudes may affect their performance in science courses.

    Attitude is an expansive topic in psychology (cf. Bohner and Dickel, 2011); nevertheless, for this work, we focus on explicit attitudes and use the definition for “attitude” from Eagly (1992): a tendency or state internal to a person that biases or predisposes that person toward evaluative responses that are to some degree favorable or unfavorable. Attitudes, in the context of learning, fall within the broader characterization of affect. However, in the human brain, cognition (which includes attention, language, memory, planning, and problem solving) and affect (which includes attitudes, emotions, interests, and values) do not operate entirely separately from one another (e.g., Pessoa, 2008). If they were independent, it would be unnecessary to consider affect in the context of education. However, because cognition and affect are integrated and influence each other (e.g., Shiv and Fedorikhin, 1999; Dolan, 2002), it is vital to study not only how they interact with one another but also how they relate to behavior. All three—affect, cognition, and behavior—are important to learning.

    Elements of student affect, such as interest in the subject and perceived value of the skills and content taught, have previously been argued to be important to learning (e.g., Koballa and Glynn, 2007; van der Hoeven Kraft et al., 2011; McConnell and van der Hoeven Kraft, 2011; Fortus, 2014; Lin-Siegler et al., 2016). However, it remains unclear how exactly student affect is linked to cognition and behavior. Previous works have, for example, considered how attitudes may be the cause of certain behaviors. Yet there are alternative views, such as that attitudes follow behaviors or that attitudes and behaviors are reciprocal (Shrigley, 1990). There are a number of examples wherein attitudes do not seem to correlate with behavior (e.g., LaPiere, 1934; Kutner et al., 1952). For instance, in a study by Corey (1937) of 67 university students who were taking an introductory educational psychology course, students who stated that cheating was wrong still changed their own exam responses when given the opportunity to grade their own work. Additionally, attitudes toward biology have been shown to be largely independent of whether or not a student majored in biology (Rogers and Ford, 1997), indicating that the decision to major does not imply a particular attitude toward the subject. These studies may imply that human behavior is primarily unconscious and, in turn, that attitudes have little, if any, control over behavior. However, studies have shown that imagining oneself successfully completing a task can improve performance (e.g., Sanders et al., 2004; McGlone and Aronson, 2007) and that even false memories can measurably change behavior (Geraerts et al., 2008). Thus, a recent review concluded that human behavior is the result of both conscious and unconscious processes (Baumeister et al., 2011).

    Because the link between affect and behavior is complex, past research on student affect and behavior has yielded mixed and often contradictory results. For example, though Hough and Piper (1982) and Steiner and Sullivan (1984) found that students’ attitudes toward the subject positively correlated with their performance, Rogers and Ford (1997) found a negative correlation between final course grade and attitude gain. Studying students’ attitudes is further complicated by the fact that they are functions of the classroom environment (e.g., McMillan and May, 1979), discipline or topic (e.g., Ramsden, 1998), student’s culture (e.g., Krogh and Thomsen, 2005; Ainley and Ainley, 2011), family background (e.g., Turner et al., 2004), student’s age or grade level (e.g., Prokop et al., 2007), and student’s gender (e.g., Simpson and Oliver, 1985; Weinburgh, 1995; Jones et al., 2000; Miller et al., 2006; Liu et al., 2010). These complexities indicate that further research is necessary to better understand connections among student affect, cognition, and behavior.

    The study described here compares the attitudes toward science of two groups of students: 1) those enrolled in fully online degree programs and 2) those enrolled in traditional, in-person degree programs. Students in both groups were enrolled in an identical online, introductory astrobiology course. The intent was to contribute to the larger body of research on connections among affect, cognition, and behavior and to the still-limited body of research into online science learning. From this work, we make recommendations to improve future online courses and programs for both types of students.

    METHODS

    The Course and the Population Studied

    Habitable Worlds is a 7.5-week, fully online course intended for non–science majors that satisfies a general-education laboratory science credit requirement for graduation at Arizona State University (ASU). The course and its design are described in more detail by Horodyskyj et al. (2017). The course is open to enrollment by students in traditional degree programs (in which it is identified as an “i-course”) and to students in fully online programs (in which it is identified as an “o-course”).

    Our study was conducted according to a research protocol approved by the Institutional Review Board at ASU (Study #00003679). A total of 774 out of 941 students who took the course in the Fall 2014, Spring 2015, and Fall 2015 semesters consented to having their survey responses and course data used for this study. We further limited our analysis to students who completed the course and responded to all of the survey items ultimately included in our analysis (see Results: Factor Analysis). This left 451 students as the main population for this study.

    The sample population had a nearly equal number of i-course students (ni = 232) and o-course students (no = 219). Student ages ranged from 18 to 58 years. Overall, the course had a higher percentage of self-identified white students than both the university average and the nationwide average of 63% self-identified white students reported by Clinefelter and Aslanian (2017). While the o-course group had a higher proportion of females than the university average, the difference was not as great as in Clinefelter and Aslanian (2017), who found that 75% of online students identified as female. The demographic data (all self-identified) of the students are listed and compared with those of ASU as a whole in Table 1.

    TABLE 1. Demographic data (as percentages) for the Habitable Worlds i-course and o-course students compared with our university as a whole (average data are for the Fall 2014 semester of all ASU undergraduates)

    Demographic data (self-identified) | i-course students (ni = 232) | o-course students (no = 219) | Whole university average (undergraduates)
    Gender: Male | 50.9% | 45.2% | 50.8%
    Gender: Female | 49.1% | 54.8% | 49.2%
    Ethnicity: American Indian/Alaska Native | 1.7% | 0% | 1.6%
    Ethnicity: Asian | 3.4% | 1.8% | 5.8%
    Ethnicity: Black/African American | 3.4% | 4.1% | 5.0%
    Ethnicity: Hispanic/Latino | 17.2% | 16.4% | 20.2%
    Ethnicity: International | 2.2% | 1.4% | 7.1%
    Ethnicity: Native Hawaiian/Pacific Islander | 0% | 0% | 0.3%
    Ethnicity: White | 69.0% | 72.2% | 55.4%
    Ethnicity: Two or more races | 3.0% | 3.6% | 3.9%
    Ethnicity: Unknown | 0% | 0.5% | 0.9%
    Mean age (years) | 23 | 31 | 22

    The Survey

    The learning design of Habitable Worlds emphasizes scientific practices and often asks students to figure out how a phenomenon works through observation and experiment. The use of this pedagogy guided our choice of survey instrument for this study. The instrument we selected was the widely used Classroom Undergraduate Research Experience (CURE) survey, which was originally developed to measure the effectiveness of small, in-person, course-based undergraduate research experiences in improving students’ attitudes toward science (Lopatto, 2009). CURE items relate to student experience, career intentions, attitudes about science, and learning style (Denofrio et al., 2007; Shaffer et al., 2010). The survey is typically used to assess students’ attitudes after the completion of a course with a research component (Lopatto et al., 2008; Jordan et al., 2014). Because of the shared emphasis on scientific practices between Habitable Worlds and course-based undergraduate research experiences, we expected that many of the CURE survey items would be relevant to our study population and that the large existing CURE data set would offer meaningful comparisons with our results.

    Other surveys of students’ attitudes toward science exist, ranging from topic- or subject-specific to science in general. We considered topic-specific surveys (e.g., Thompson and Mintzes, 2002) and subject-specific surveys such as the Biology Attitude Scale (Russell and Hollander, 1975) and the Colorado Learning Attitudes about Science Survey for Biology (Semsar et al., 2011) to be less applicable because of the interdisciplinary nature of Habitable Worlds. There are several surveys on attitudes toward science in general that could have been used, such as the Views about Sciences Survey (Halloun and Hestenes, 1996) and the Views on Science and Education Questionnaire (Chen, 2006). We did not use these surveys due to their limited representation in the literature to date. The CURE survey has thus far been administered to more than 10,000 students at 122 different institutions nationwide; as noted earlier, this wide use affords us a strong comparison with other courses and programs.

    The CURE survey was originally designed for in-person students with a strong interest in science. In using the survey for an online course and with non–science majors, we could not assume its validity. For this reason, we included a factor analysis step to help guide our interpretation of the survey responses.

    For this work, we focused on two sections of the CURE survey. The Science Attitudes section focuses on students’ attitudes toward science. The Benefits section focuses on students’ perceived learning and development gains as a result of taking the course. The items used for this work are listed in Tables 2 and 3. The 22 Science Attitudes items are a subset of the 35 items of Wenk (2000), with the item in her work regarding intuition changed to an item pertaining to creativity in the CURE survey. These items are both positively and negatively worded (e.g., “I like studying science” vs. “I don’t like studying science”) according to Wenk (2000) to demonstrate complex thinking on the part of the students. The 21 Benefits items are derived from earlier survey work by Lopatto (2003).

    TABLE 2. CURE Science Attitudes items and their factor alignments

    aBeginning in the Fall 2015 semester, nine of the 21 Science Attitudes items were reworded to clarify their meaning. The revised versions are italicized.

    bItems in this factor were reverse scored.

    TABLE 3. CURE Benefits items and their factor alignment

    For each of the 22 Science Attitudes items and the 21 Benefits items, students responded using a five-point Likert-response format with an additional option to not respond. For the Science Attitudes items, the options presented were “strongly disagree,” “disagree,” “neutral,” “agree,” and “strongly agree.” For the Benefits items, the options presented were “no gain,” “small gain,” “moderate gain,” “large gain,” and “very large gain.” These responses were numerically coded as 1, 2, 3, 4, and 5, respectively, for analysis, with nonresponses coded as “not applicable.” Students were not required to respond to every item. The Science Attitudes items were presented to the students both at the beginning and at the end of the course, while the Benefits items were presented only at the end of the course, which follows the typical administration of the CURE survey. The pre- and postcourse surveys each took approximately 10 minutes to complete.
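
    As a minimal illustration of this coding step (the response labels match those above, but the function and example values are hypothetical rather than taken from the study’s data-processing scripts), the recoding in R might look like:

        # Hypothetical sketch: convert Likert labels to 1-5, nonresponses to NA.
        likert_levels <- c("strongly disagree", "disagree", "neutral", "agree", "strongly agree")

        recode_likert <- function(responses) {
          # match() returns each response's position in likert_levels (1-5);
          # unmatched values (blank or "not applicable") become NA.
          match(tolower(responses), likert_levels)
        }

        recode_likert(c("Agree", "strongly disagree", "not applicable", "neutral"))
        #> [1]  4  1 NA  3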

    Certain items from the original CURE survey were excluded from the final analysis because they were not relevant to the learning objectives of Habitable Worlds. For instance, the Science Attitudes item relating to writing was removed, because there are no writing assignments in Habitable Worlds. Additionally, 10 items were removed from the Benefits items because they pertained to learning outcomes that were not emphasized in the course. These changes were supported by our factor analyses (see Results). Moreover, the removed items had substantially more “not applicable” responses, which further supported our decision to exclude them.

    Factor Analysis

    As noted earlier, we excluded one item from the Science Attitudes section and 10 items from the Benefits section because they were not applicable to our course (see Tables 2 and 3). This left 32 items for further analysis.

    Gardner (1995) recommended that a scale should be internally consistent (using Cronbach alpha values) and unidimensional (i.e., measures one attitude construct). A high Cronbach alpha value can indicate high internal consistency, but it can also mean that there are items that cluster together into multiple constructs. Therefore, in addition to calculating Cronbach alpha values, we used factor analyses to ensure that the scales were unidimensional.
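
    As a brief illustration, the internal-consistency check could be run in R with the psych package cited below; the data frame survey_df and its item-column names are hypothetical placeholders, not the study’s actual scripts.

        library(psych)

        # Hypothetical data frame: one column per Science Attitudes item, one row
        # per student, responses coded 1-5 with NA for nonresponse.
        survey_items <- survey_df[, grep("^att_", names(survey_df))]

        # Cronbach's alpha for the candidate scale; check.keys = TRUE flags items
        # that correlate negatively with the scale total (e.g., negatively worded
        # items that may need reverse scoring).
        alpha(survey_items, check.keys = TRUE)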

    We conducted a maximum-likelihood, oblique-rotation exploratory factor analysis to identify latent factors in our data following recommendations of Preacher and MacCallum (2003). The 21 Science Attitudes items were analyzed separately from the 11 Benefits items because of their different response options (i.e., agreement or disagreement for the Science Attitudes items vs. no to high gain for the Benefits items). Because we were interested in the effect the course had on students’ attitudes toward science, we chose to use the change in response scores from pre- to postcourse rather than pre- or postcourse values for our factor analysis of the Science Attitudes items. The Benefits items were only administered postcourse; therefore, we used those for our factor analysis. We also performed a confirmatory factor analysis for which we used the robust diagonally weighted least-squares estimator, which is recommended for use with Likert response–format data (Flora and Curran, 2004; Brown, 2014; Li, 2016). We calculated the exploratory factor analysis in R using the package psych (Revelle, 2016), while the confirmatory factor analysis was done with the lavaan package (Rosseel, 2012). We identified the number of factors to extract through a combination of scree-plot analysis and parallel analysis (Horn, 1965; Cattell, 1966). We used an oblique promax rotation to allow interfactor correlation, if present.
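
    A condensed sketch of this workflow in R, using the psych and lavaan packages named above, is shown below; the data frames (att_change, spring2016_df), item names, and the exact two-factor model specification are illustrative placeholders rather than the study’s actual analysis scripts.

        library(psych)
        library(lavaan)

        # 'att_change' holds pre- to postcourse change scores, one column per
        # Science Attitudes item, one row per student (hypothetical data frame).

        # Scree plot plus parallel analysis to suggest how many factors to extract.
        fa.parallel(att_change, fm = "ml", fa = "fa")

        # Maximum-likelihood exploratory factor analysis with an oblique promax
        # rotation, extracting two factors.
        efa_fit <- fa(att_change, nfactors = 2, fm = "ml", rotate = "promax")
        print(efa_fit$loadings, cutoff = 0.5)   # retain items loading above 0.5

        # Confirmatory factor analysis on the independent Spring 2016 sample,
        # treating items as ordered and using the robust DWLS estimator (WLSMV).
        cfa_model <- '
          SS =~ ss_1 + ss_2 + ss_3 + ss_4 + ss_5 + ss_6 + ss_7 + ss_8
          PV =~ pv_1 + pv_2 + pv_3 + pv_4
        '
        cfa_fit <- cfa(cfa_model, data = spring2016_df,
                       estimator = "WLSMV",
                       ordered = c(paste0("ss_", 1:8), paste0("pv_", 1:4)))
        fitMeasures(cfa_fit, c("cfi", "tli", "rmsea"))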

    RESULTS

    Factor Analysis

    Our results showed that a two-factor solution was the most appropriate for the Science Attitudes items, while the Benefits items could be described by a single factor. We retained only items with loadings greater than 0.5 for the Science Attitudes items. All Benefits items loaded very strongly onto a single factor. (The rotated solutions are shown in Supplemental Tables S1 and S2.) We named the two Science Attitudes factors “Scientific Sophistication” (SS) and “Personal Value” (PV), while the Benefits factor was simply called “Benefits.” Table 2 shows the items that grouped into each of the Science Attitudes factors, and Table 3 shows the items that grouped into the Benefits factor. Because the items that grouped into the SS factor were negatively worded, we reverse scored them so that a positive combined SS factor score would mean an increase in scientific sophistication. The internal consistencies were α = 0.85 for the SS factor, α = 0.69 for the PV factor, and α = 0.97 for the Benefits factor.

    We considered a three-factor solution for the Science Attitudes items; however, the third factor was considerably weaker and was roughly the result of splitting up the SS factor into two factors. Additionally, a scree-plot analysis did not support a three-factor solution. Therefore, we decided against a three-factor solution for the Science Attitudes items.

    To further support this specific factor solution, we performed confirmatory factor analysis using data from the Spring 2016 offering of Habitable Worlds at ASU. This independent sample included 203 complete and consented responses. The results indicate an acceptable fit (comparative fit index = 0.95, Tucker-Lewis index = 0.94, root mean square error of approximation = 0.03). Therefore, we are confident that the specified factors are robust.

    Factor Correlations

    To calculate correlations between the factors listed above and the final course grade, we grouped the items within a single factor into one composite scale for each student. This was done by taking the mean of all items within a factor for each student. See Table 4 for Pearson correlation coefficients for the course grade in comparison with the three composite scales (two for the Science Attitudes items and one for the Benefits items).

    TABLE 4. Correlation of the two Science Attitudes factors (difference of pre- and postcourse responses), the Benefits factor (postcourse responses), and the final course grade (numerically coded: “A” = 4, “B” = 3, etc.) for the whole cohort (correlations with a p value ≤ 0.05 are shown in bold)

    Factor | Course grade | Scientific Sophistication (SS) | Personal Value (PV)
    Scientific Sophistication (SS) | 0.27 |  | 
    Personal Value (PV) | 0.17 | 0.06 | 
    Benefits | 0.16 | 0.07 | 0.52
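
    The composite scales and the correlations in Table 4 could be assembled in R along the following lines; this is a sketch in which the per-student data frame scores and its column names are hypothetical.

        # Hypothetical per-student data frame 'scores': change scores for the SS
        # and PV items, postcourse Benefits items, and final grade coded 0-4.
        ss_items  <- grep("^ss_",  names(scores), value = TRUE)
        pv_items  <- grep("^pv_",  names(scores), value = TRUE)
        ben_items <- grep("^ben_", names(scores), value = TRUE)

        # Composite scale = mean of all items within a factor for each student.
        scores$SS       <- rowMeans(scores[, ss_items],  na.rm = TRUE)
        scores$PV       <- rowMeans(scores[, pv_items],  na.rm = TRUE)
        scores$Benefits <- rowMeans(scores[, ben_items], na.rm = TRUE)

        # Pearson correlations among the composites and course grade (cf. Table 4).
        cor(scores[, c("grade", "SS", "PV", "Benefits")],
            use = "pairwise.complete.obs")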

    The nonsignificant correlation between the SS factor and the PV factor illustrates that these two factors are independent measures of latent variables in the data. To illustrate this further, we made two-dimensional histograms with the mean changes in PV versus SS factor scores serving as axes, and the number of students who had that particular change represented by lighter or darker shades of color (Figure 1). The figure shows the results for the whole cohort (Figure 1a), the i-course students (Figure 1b), and the o-course students (Figure 1c). The desirable quadrant of the two-dimensional histograms is the top right quadrant, because we consider both a positive change in PV and a positive change in SS a desired outcome of the course.


    FIGURE 1. Two-dimensional histogram of the number of students as a function of mean changes in their responses to the SS factor items (horizontal axes) and their responses to the PV factor items (vertical axes). Both the whole cohort (a) and the program populations—i-course (b) and o-course students (c)—are shown.

    Factor Scores and Factor Score Changes

    To determine whether the differences in Science Attitudes factor scores between i-course and o-course students were statistically significant, we conducted several simultaneous linear regressions (see Table 5, with additional models shown in Supplemental Table S3). For the SS factor, precourse scores predicted 24.5% of the postcourse score variance for the whole cohort (model SS1). When controlling for program type (i.e., i-course or o-course), the model predicted 26.4% of variance of the postcourse scores (model SS2). Thus, there is a statistically significant effect of program type on postcourse SS factor scores, with o-course students having higher scores. For the PV factor, a model with precourse scores as the independent variable and the postcourse scores as the dependent variable was heteroscedastic (thus should not be considered), though the precourse scores were statistically significant, with the model predicting 23.5% of the postcourse score variance (model PV1). However, the model including program type was not heteroscedastic (model PV2). That model predicted 25.3% of the postcourse score variance. Here again, o-course students had higher postcourse scores. Unlike for the SS factor (model SS3), for the PV factor, gender was also a statistically significant variable (model PV3). In Figure 2 we show precourse and postcourse factor scores for both SS and PV factors, along with their corresponding simultaneous linear-regression models.

    TABLE 5. Simultaneous linear-regression models for predicting the postcourse Scientific Sophistication (SS) factor scores (on the left) and Personal Value (PV) factor scores (on the right) of the whole cohorta

    Model SS2
    Variable | Coefficient | p value
    (Intercept) | −0.14577 | 0.0113
    SS factor (precourse) | 0.45689 | <0.001
    Program type | 0.30019 | <0.001
    Adjusted R2 = 0.2645; F-statistic = 81.91 on 2 and 448 df with p < 0.001; BP = 1.2279, df = 2, p value = 0.541

    Model PV2
    Variable | Coefficient | p value
    (Intercept) | −0.13607 | 0.0171
    PV factor (precourse) | 0.47356 | <0.001
    Program type | 0.28022 | <0.001
    Adjusted R2 = 0.2533; F-statistic = 77.31 on 2 and 448 df with p < 0.001; BP = 5.7897, df = 2, p value = 0.0553

    aThe reference groups for the categorical variables gender (female or male) and program type (i-course or o-course) were female and i-course. Listed are standardized coefficients (i.e., continuous variables were scaled and centered before the regression). The Studentized Breusch-Pagan (BP) test was used to test for heteroscedasticity; df = degrees of freedom.
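
    A minimal R sketch of one of these models (model SS2) appears below; the data frame dat and its variable names are placeholders, continuous variables are scaled to match the standardized coefficients reported in Table 5, and the lmtest package supplies the Studentized Breusch-Pagan test.

        library(lmtest)

        # Hypothetical data frame 'dat': pre- and postcourse SS factor scores plus
        # program type (factor with reference level "i-course").
        dat$ss_pre_z  <- as.numeric(scale(dat$ss_pre))
        dat$ss_post_z <- as.numeric(scale(dat$ss_post))

        # Model SS2: postcourse SS predicted by precourse SS and program type.
        m_ss2 <- lm(ss_post_z ~ ss_pre_z + program, data = dat)
        summary(m_ss2)   # coefficients, adjusted R^2, F-statistic

        # Studentized Breusch-Pagan test for heteroscedasticity
        # (studentize = TRUE is the default).
        bptest(m_ss2)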


    FIGURE 2. Pre- and postcourse Science Attitudes factor scores by program type: (A) SS factor scores and (B) PV factor scores. Lines are simultaneous linear-regression fits (see Table 5, models SS2 and PV2). All factor scores have been converted to z-scores.

    Additionally, to test the significance of differences in precourse factor scores between i-course and o-course students, we used the nonparametric Wilcoxon test. The test was chosen because quantile–quantile (Q-Q) plots showed the factor scores were not normally distributed. For both SS and PV factors, the precourse score differences between i-course and o-course students were statistically significant (p values < 0.001 and < 0.02 respectively). For our linear models, we also considered an interaction term between precourse scores and program type (models SS4 and PV4), but those terms were not statistically significant for either SS or PV factors (i.e., linear-regression slopes are not significantly different). This further demonstrates that the difference between i-course and o-course students is largely due to their precourse factor scores.
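
    These comparisons could be run with base R along the following lines (again a sketch; the data frame dat and its variable names are hypothetical):

        # Q-Q plots that motivated the choice of a nonparametric test.
        qqnorm(dat$ss_pre); qqline(dat$ss_pre)
        qqnorm(dat$pv_pre); qqline(dat$pv_pre)

        # Wilcoxon rank-sum tests of precourse factor scores by program type.
        wilcox.test(ss_pre ~ program, data = dat)
        wilcox.test(pv_pre ~ program, data = dat)

        # Interaction models (cf. models SS4 and PV4): does the pre-to-post slope
        # differ by program type? (The interaction terms were not significant.)
        summary(lm(scale(ss_post) ~ scale(ss_pre) * program, data = dat))
        summary(lm(scale(pv_post) ~ scale(pv_pre) * program, data = dat))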

    Relationships between Factor Scores and Final Course Grade

    The SS (r = 0.27), PV (r = 0.17), and Benefits (r = 0.16) factors all displayed some significant positive correlations with final course grades for the whole cohort. However, the precourse Science Attitudes factor score correlations differed between the o-course and i-course students. Among the o-course students, precourse factor scores showed significant correlations with final course grade for both the SS factor (r = 0.19, p = 0.004) and the PV factor (r = 0.16, p = 0.02), which was not the case with the i-course students. Benefits factor scores were positively correlated with course grade for students in both groups (r = 0.15, p = 0.03 for the o-course group; r = 0.17, p = 0.01 for the i-course group).

    We conducted regressions to further explore the relationships with course grade. We used logistic regressions rather than linear regressions because the course grades were not normally distributed. We first modeled the odds of receiving an “A” grade in the course (see Supplemental Table S4). Consistent with the correlations shown previously, we found that higher precourse SS factor scores predicted greater odds of earning an “A” grade, but only for o-course students (models GA1 and GA2). When those models were expanded to include university cumulative grade point average (GPA) and students’ gender, the predictive power of SS factor scores was dramatically lower (models GA7–GA9), though the precourse SS factor remained significant among o-course students (model GA10). Next, we modeled the odds of a student failing the course (see Supplemental Table S5). Here, we found only a weak predictive relationship between precourse SS factor scores and the odds of failure (see models GF1 and GF2). GPA remained a strong predictor of failure, as it was of “A” grades.

    Along with GPA and gender, our logistic regressions showed the SS factor, which quantified the pre- to postcourse change, to be a statistically significant positive predictor of “A” grades. The SS factor had the reverse relationship with course failure, acting as a negative predictor. However, the PV factor was not statistically significant in predicting either “A” grades or course failure. Program type was not a significant predictor in these models, in contrast to our findings for the precourse factor scores.
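
    A hedged sketch of such logistic models is given below; the outcome coding (including the failing-grade letter), the data frame, and the predictor names are assumptions for illustration, not the study’s actual scripts.

        # Binary outcomes: earning an "A" and failing the course. The letter used
        # to denote a failing grade here ("E") is an assumption.
        dat$got_A  <- as.integer(dat$grade_letter == "A")
        dat$failed <- as.integer(dat$grade_letter == "E")

        # Odds of an "A" from precourse SS and program type, controlling for GPA
        # and gender (cf. models GA7-GA10).
        m_A <- glm(got_A ~ scale(ss_pre) + program + gpa + gender,
                   family = binomial, data = dat)
        summary(m_A)
        exp(coef(m_A))   # odds ratios

        # Odds of failing the course (cf. models GF1 and GF2).
        m_F <- glm(failed ~ scale(ss_pre) + gpa + gender,
                   family = binomial, data = dat)
        summary(m_F)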

    DISCUSSION

    What Do the Factors Represent?

    Science Attitudes Factors.

    Each of the two Science Attitudes factors represents a facet of students’ attitudes toward science and learning science. The Science Attitudes section of the CURE survey is intended to measure, through both positive and negative attitudes, how a student views the institution of science, how accurately he or she conceives of the process of science, and how much value he or she sees in learning science. The high internal consistency from the CURE benchmark data set (shown in Supplemental Tables S6 and S7) indicates that respondents generally respond similarly to these items. However, the correlational differences between the two factors identified in this study show that there is valuable information hidden within the omnibus Science Attitudes section.

    Interpreting and naming the two factors was not straightforward. The items are intuitively opposite, yet the results of our factor analysis indicate largely uncorrelated factors (see Table 4). The factor we labeled PV includes four items. Items in the PV factor assess whether students value learning science. The factor we labeled SS includes eight items. A high SS factor score would indicate that the student likely has a more advanced understanding of how science works and what it means to do science, while a low SS factor score would indicate a more rudimentary understanding. Overall, we interpret a high SS and PV factor score to indicate the strongest “pro-science” attitude, with the reverse indicating a combination of low value of learning science, dislike of science, and/or disinterest in science. Changes in these scores pre- to postcourse reflect shifts in attitudes as a result of the course experience or outside factors.

    Though our factors are not exactly the same as factors identified in previous work, there are comparisons that can be made. Previous work, such as Germann (1988), has argued for dividing attitudes pertaining to science into two factors such as “attitudes toward science” and “beliefs about science,” as in the case of Walker et al. (2013). Such a division is consistent with our Science Attitudes factors, with our PV factor being similar to “attitudes toward science” and our SS factor being similar to “beliefs about science.” Additionally, though their surveys were specific to either physics or biology, the “Personal Interest” factors of Adams et al. (2006) and Semsar et al. (2011) contain items that are similar to the items in our PV factor. This is perhaps an indication that personal value/interest is in fact a stable latent variable. In considering the SS factor, we note, as did Deng et al. (2011), that because items in the SS factor are declarative statements, a high SS factor score does not necessarily mean that a student has a procedural knowledge of the practice of science. Furthermore, as mentioned by Allchin (2011), agreement or disagreement with declarative statements does not demonstrate a “functional understanding of scientific practice and its relevance to decision making.” Thus, we do not suggest that a high SS factor score implies a strong understanding of the nature of science (NOS); however, it may be the case that a low SS factor score implies a weaker understanding of the NOS. Deng et al. (2011) listed 10 key aspects of the NOS. Of that list, two aspects are not contained in our SS factor: “nature of and distinction between observation and inference” and “nature of and relationships between theories and laws.” Overall, both our Science Attitudes factors have similarities with previous work, which bolsters their validity as measures.

    Benefits Factor.

    The 11 items that formed the Benefits factor represent skills and learning outcomes that are consistent with the learning objectives of the course. This factor includes items that ask students if they gained skills in interpreting results and working independently as well as improving their understanding of the role of evidence in science. We expect high scores on this factor to indicate that course learning objectives were met. It is interesting to note that, in Table 4, there is a strong, positive correlation between the Benefits factor and the PV factor (r = 0.52). This indicates that there is a strong positive correlation between students’ change in perceived value of learning science and their perceived benefits of taking the course.

    Comparisons between o-course and i-course Students

    Factor Scores.

    Our linear-regression models show that the primary difference between the Science Attitudes of i-course and o-course students is their precourse factor scores. This is important, because our regression models also show that a large part of the postcourse factor score variance is predicted by precourse factor scores. We find that o-course students appear to be aided in achieving a better course grade by their positive precourse Science Attitudes. We also found that precourse SS factor scores were a predictor of course grades for o-course students, which was not the case for their i-course counterparts.

    The differences in Science Attitudes factor scores between o-course and i-course students indicate that the o-course students increased their perception of personal value in learning science, while the i-course students did not change their views on personal value in learning science. In that sense, i-course students are similar to the CURE benchmark (shown in Supplemental Table S8), which also shows little change in the PV factor from pre- to postcourse. For the precourse PV factor scores, both the o-course and i-course students have a lower perception of personal value than the CURE benchmark. As mentioned earlier, the CURE survey is nominally administered to science majors; thus, it is expected that the precourse perception of personal value in science should be higher for science majors. Postcourse PV factor scores for the o-course students are higher than the CURE benchmark, while those for the i-course students are lower. For the SS factor, the i-course students lowered their factor scores from pre- to postcourse (i.e., they decreased in their understanding of how science works and what it means to do science), while o-course students showed no change. In that sense, o-course students’ factor scores are similar to the CURE benchmark, which also shows little change in the SS factor from pre- to postcourse. The precourse SS factor scores for i-course students are similar to the CURE benchmark, while those of the o-course students show a higher scientific sophistication than the CURE benchmark. It is surprising that the precourse CURE benchmark for the SS factor was not higher than the factor scores for both the o-course and i-course students—something that was true for the PV factor. We would have expected science majors to show a higher scientific sophistication than non–science majors regardless of their degree program type (i.e., o-course or i-course). Overall, the o-course students entered the course with both a higher scientific sophistication and a higher personal value in learning science compared with their i-course counterparts. The o-course students’ data even compare favorably with the CURE benchmark data, which primarily measure science majors’ attitudes.

    There are also significant differences between i-course and o-course students in their responses to the Benefits items. On average, the o-course students had higher self-reported gains on the items surveyed than their i-course counterparts, though scores for i-course students were closer to the CURE benchmark. Course grades, as well as GPA, were not significantly different between the two groups (see Table 6), so this finding reflects a difference in perceived learning rather than an actual gap in learning outcomes between the two groups. Though these changes in perceptions did not seem to affect their course grades overall, the differences in perception may affect their future science course performance as well as their likelihood of continuing to be engaged with science in the future.

    TABLE 6. Academic performance for Habitable Worlds students

    Academic performance^a | i-course students (ni = 232) | o-course students (no = 219)
    Habitable Worlds grade, Mean | 3.34 | 3.39
    Habitable Worlds grade, Median | 4.00 | 4.00
    College GPA, Mean | 3.31 | 3.32
    College GPA, Median | 3.30 | 3.46

    aGrades and GPA are on a four-point scale (“A” = 4, “B” = 3, etc.).

    Why do o-course and i-course students differ in these ways? One possibility is the demographic differences between these two groups, which may lead to certain perceptions about science. The o-course students are on average older than the i-course students, which is typical of students in nontraditional degree programs (Clinefelter and Aslanian, 2016). One prior study presented evidence that online learners’ preferences with regard to simulations or games-based learning varied by age (Hampton et al., 2016). However, none of our linear models found age to be a statistically significant variable for predicting postcourse Science Attitudes factor scores when controlling for GPA, gender, and program type. On the basis of this, we suggest that the distinctive characteristics and life circumstances that lead students to enroll in fully online degree programs also make them more likely to have more positive attitudes toward science and to perceive greater benefits from the learning experience compared with i-course students.

    A second possibility is that the differences between i-course and o-course students’ attitudes reflect a difference in why students from each group decided to enroll in the course. As part of the precourse survey, we asked students to rate the importance of 10 items pertaining to the reasons why they chose to take this course. For these items, the options were “not applicable,” “not important,” “moderately important,” and “very important.” For the “interested in the subject matter” reason, the median response of the i-course group was “moderately important,” while the median response of the o-course group was “very important.” This difference was statistically significant (p = 0.001 using the unpaired Wilcoxon test). A regression analysis indicated that this incoming difference in their interest in the subject was a significant predictor of postcourse attitudes toward science. However, program type remained significant, even accounting for the interest item. Thus, although initial interest in the subject is more relevant than age in explaining our attitudes results, o-course students are still more likely to have more positive attitudes toward science than their i-course counterparts.

    This relationship should be examined more directly in follow-up studies. A student’s initial disposition toward a course clearly colors his or her experience in that course, but the ways in which that disposition varies systematically could be useful for instructors or institutions. If, as we find here, o-course students are more likely to enroll based on interest than are i-course students, that tendency could be used to tailor instruction or course offerings. Certainly, other similar relationships exist and would provide their own distinctive benefits toward the goal of delivering more useful and productive learning experiences.

    Relationships with Course Grade.

    The differences in factor score–course grade correlations between the o-course and i-course students suggest an interesting motivational difference between the two groups. The precourse factor scores are significant predictors of success or failure in the course for the o-course students, but show no predictive value for the i-course students. We have already observed that the o-course students were more likely to report enrolling because of strong interest in the subject. Though predictive of positive science attitudes, this pre-existing interest has only a small, nonsignificant correlation with course grade. If this pattern holds in other online courses, it would suggest that success and failure among o-course students are driven more strongly by students’ predisposition for the subject than is the case among i-course students. It would also make the CURE survey, or a similar survey, a valuable diagnostic tool for identifying students in danger of failure.

    Unlike the precourse measure, the postcourse measures show the same correlations with course grade among the o-course and i-course students. The postcourse and the pre- to postcourse changes in both PV and SS are positively correlated with grade. Our logistic regressions showed that, while a student’s university GPA and gender were important in predicting course grade, the SS factor score was also statistically significant in predicting course grade. Similarly, although the o-course students report significantly higher learning gains on the Benefits items, the correlations between total Benefits score and course grade are very similar among o-course and i-course students. This indicates that, even though the o-course group exhibited more positive attitude shifts than the i-course group, the better-performing students in both groups had similar relative differences in attitude change compared with the lower-performing students.

    The modest positive correlations of all three factors (i.e., PV, SS, and Benefits) with the final course grade are consistent with previous work by Hough and Piper (1982), Steiner and Sullivan (1984), Germann (1988), and Singh et al. (2002), who found positive correlations between students’ attitudes and course grades. However, the findings of this work are not consistent with Rogers and Ford (1997), who found that positive attitudinal changes correlated negatively with course grade. Partin and Haney (2012) found attitudes toward biology (particularly personal relevance) to be correlated with course performance, but the direct correlation was weak, which led them to drop the term from their model. These results illustrate the complexity of linking students’ affect to course performance.

    What Are the Implications of This Work?

    Students’ Attitudes toward Science.

    Our two populations of students (o-course and i-course) began the course with different attitudes toward science, and they changed their views differently after taking the course. In spite of these differences in attitudes, there was no significant difference in final course grades between the two groups. This may be explained by the short duration of the intervention (the course) in our study—only 7.5 weeks—which is very brief in comparison with a student’s entire academic program. Over this period of time, differences in attitudes may not affect course performance or the effect may be too small to be detected. By comparison, in a longitudinal study over a 4-year period of students’ attitudes toward science, Hansen and Birol (2014) observed a positive relationship between the development of expert attitudes toward science and academic performance. Therefore, although the attitudinal differences that we observed in our work did not have a corresponding difference in final course grades for o-course and i-course students, those attitudinal differences may predict students’ future performance in science courses or their future engagement with science.

    Given the importance of students’ attitudes toward science, it might be suggested that online courses (and perhaps traditional, in-person courses) should try to positively change students’ attitudes toward science during the course. However, changing students’ attitudes toward science is both complex and difficult. Part of the complexity is illustrated by the example of positive self-statements being helpful to some people but damaging to others (Wood et al., 2009). The difficulty has been noted by a number of previous works that found no changes in students’ attitudes toward science. For example, Gabel (1981) found no change in attitudes toward science from pre- to postcourse during an in-person, introductory geology course for nonmajors. Additionally, Cook and Mulvihill (2008), using a general attitudes survey in an in-person course for nonmajors called Food, Values, Politics and Society, found no change in students’ confidence in doing science or their interest in science from pre- to postcourse. However, for the same course, data from the Biology Attitude Scale showed improved attitudes toward biology from pre- to postcourse. There are many complications that hinder an instructor’s attempts to improve students’ attitudes toward science; for example, it becomes more difficult to change attitudes as students age (e.g., Savelsbergh et al., 2016). In spite of these difficulties, some studies have identified means through which students’ attitudes toward science can be improved. For example, Wheland et al. (2013) showed that attitudes of non–STEM (science, technology, engineering, and mathematics) majors toward science improved by having them engage in authentic scientific activities during a four-course block of English composition, oral communication, freshman seminar, and a special-topics course (led by a biologist). But not all interventions are effective. A meta-analytic study by Savelsbergh et al. (2016) found that certain teaching approaches improved students’ attitudes, while others improved achievement, but they did not find a correlation between an intervention’s success in improving attitudes and success in increasing achievement. Thus, past studies have demonstrated that changing students’ attitudes toward science is not straightforward.

    Previous works contend that students’ attitudes toward science can be improved by implementing certain types of instructional design, such as active-learning lectures (Armbruster et al., 2009) and building models during the learning process (Brewe et al., 2009). Though educators should look for opportunities to improve learning outcomes, we should avoid suggesting simplistic universal solutions. It has been shown that both positive and negative affect can be beneficial, depending on the individual and the circumstance. For example, George and Zhou (2002) found that when people were both aware of their moods and rewarded for creativity, negative moods correlated with increased creativity, while positive moods correlated with decreased creativity. Additionally, Martin et al. (1993) found that those with positive moods stopped a task faster than those with negative moods when they were directed to achieve a certain goal. However, they found that those with positive moods continued with the task longer than those with negative moods when they were directed to continue as long as they enjoyed the task. Thus, the present work and the work that follows should focus on helping students with diverse affect to learn the course material and meet course objectives rather than on trying to change particular aspects of their affect.

    Overall, the implication of prior findings and our own work is that improving students’ attitudes with the aim of improving learning outcomes is unreliable. Instead, we recommend using precourse attitude surveys to guide pedagogical decision making. For example, our results suggest that o-course students, who express valuing science more, will be more willing to engage with learning activities that allow them to explore and discover scientific concepts without regard for their utility in other contexts. In the same way, i-course students may be more interested in learning activities that emphasize the implications of science in everyday life or the student’s own non–science interests. Those suggestions are supported by the work of Berg (2005), who studied attitudes of first-year university chemistry students. The author conducted follow-up interviews of students who had the largest changes (positive and negative) in their attitudes as measured by a pre- and postcourse attitude questionnaire and found that students who had large positive changes in attitudes were more motivated and more persistent. Students who had large positive changes were also more willing to do open-ended or exploratory exercises than those who had large negative changes in attitudes. Berg (2005, p. 13) states that “for tasks requiring more self-regulated learning, such as planning open experiments and tutorials, students with positive attitude shifts reveal greater acceptance, while students with negative attitude shifts are more reluctant to express positive views, even if they expressed an understanding of the relevance of such tasks.” Future work should explore how students with different precourse attitudes toward science respond to different pedagogies.

    Student Motivation.

    Motivation is sometimes conceived as a mediator between students’ attitudes toward science (part of their affect) and their behaviors. Motivation and its role in learning have been conceptualized differently by various authors (cf. Eccles and Wigfield, 2002). However, motivation is commonly divided into intrinsic motivation (i.e., driven by interest or desire to perform a task) and extrinsic motivation (i.e., driven by rewards or external forces to perform a task). Our PV factor may be a proxy for students’ intrinsic motivation, because the factor identifies a certain personal value in learning science. This would be consistent with the findings of Glynn et al. (2007), who found that non–science majors’ perceived relevance of science to their future careers was correlated with their motivation (with the correlation being stronger among female students). They also concluded that motivation was correlated with student achievement (i.e., GPA). Similarly, Partin and Haney (2012) argued that personal relevance contributes to intrinsic goal orientation (i.e., intrinsic motivation). Furthermore, the “intrinsic motivation” factor of Glynn et al. (2011) is similar to our PV factor when the constituent items are compared, and though they used a semantic differential scale, the “interest and utility” factor of Bauer (2008) may also be similar to our PV factor. If the PV factor is in fact a proxy for intrinsic motivation, then the higher precourse PV factor scores of o-course students and their further increase in postcourse PV factor scores could mean that o-course students are more intrinsically motivated than their i-course counterparts. This has implications for their education, because those who are intrinsically motivated are more likely to persist when they face obstacles (e.g., Simons et al., 2004; Grant, 2008), which might be particularly important to the long-term success of o-course students. Unlike i-course students, o-course students may have limited access to on-campus support (such as tutoring centers and instructors’ office hours). Future work can further test whether the PV factor is in fact a proxy for intrinsic motivation by considering whether there are correlations between positive PV factor scores and persistence (e.g., by considering the number of reattempts for questions that a student initially answered incorrectly).

    Connection to the Affect–Cognition–Behavior Framework.

    Affect, cognition, and behavior interact with each other in complex ways, but all three are important to learning. Our results demonstrate that there is a connection between students’ affect (specifically attitudes in this work) and both cognition and behavior (as implied by course grade). Though it is reasonable to assume that a student’s course grade would measure his or her cognition and behavior during the course, future work should address this directly. Additional specific measures of cognition (e.g., individual lesson and question scores) and behavior (e.g., time spent on individual lessons or discussion board participation) would allow us to tie fine-grained behaviors or learning to students with different science attitudes. The significant relationships shown between SS and PV factor scores and course grade, particularly among o-course students, show the potential of this line of research, yet the far greater predictive power of student GPA shows the limits of the current aggregate-level analyses. Overall, this work further demonstrates the fruitfulness of considering student affect in education.

    Limitations

    Changes to the CURE Survey and Some Individual Items.

    As noted earlier, we excluded 11 CURE survey items (one Science Attitudes item and 10 Benefits items) from our factor analysis. The decision to exclude those items does, in some ways, limit how this work may be compared with previous CURE research. At the same time, it reflects a need that will be common to other large-enrollment courses that do not include substantial writing or research activities. Our CURE subset will likely be more appropriate for those classes to use than the full CURE survey.

    Starting in the Fall 2015 semester, nine Science Attitudes items were reworded, two of which were ultimately included in our identified factors (see Table 2). This revision was informed by results from an expert review, wherein a group of faculty, research scientists, postdoctoral researchers, and graduate students within our department at ASU answered the 22 Science Attitudes items and commented on their interpretations of each item. On the basis of these results, we changed the wording of nine items to clarify them without changing their initial meanings. To test whether these revisions changed the relationships between the Science Attitudes items and our proposed factors, we divided our response data into two groups: original wording (Fall 2014 and Spring 2015) and revised wording (Fall 2015 and Spring 2016). We then conducted a factor analysis for each group using only the 12 Science Attitudes items our initial analysis found to load onto the SS and PV factors. Although the exact factor loadings (i.e., eigenvalues) differed between the two response groups, the same groupings shown in Table 2 held. Thus, we argue that our item modifications do not materially alter our findings with respect to the two Science Attitudes factors.

    Student Interpretation of Survey Items.

    It is possible that not every student in the cohort interpreted the items of the survey in exactly the same manner. Given individual experiences and viewpoints, students in our cohort may have taken various survey items to mean different things. For example, the item regarding creativity (“Creativity does not play a role in science”) might evoke different meanings for different students depending on how they interpret the word “creativity.” Some students might associate creativity with the arts (painting, music, dance, acting, etc.), while others might take it to mean thinking in a creative manner (which is closer to the intended interpretation of the item). To understand and, where possible, account for this variance, we are in the process of conducting think-aloud interviews with students in the current Habitable Worlds offering.

    General Limitations.

    The population of students included in our analysis accounts for only about half the total number of students who completed the course during the period of our analysis. There are two major reasons for this difference: 1) nonconsent for research participation and 2) noncompletion of the postcourse survey (typically a consequence of course attrition). Such selection biases are common in survey research, but they raise concerns that the retained students do not represent the overall population. Because substantially more students completed the precourse survey than the postcourse survey, we compared the average precourse response for each factor between the included population and those with partial responses. In spite of the large number of students who were excluded due to incomplete postcourse responses, there is no evidence that this exclusion has affected our results. Precourse Science Attitudes factor scores and the Benefits scores for the students who were excluded are statistically indistinguishable from the student scores used in this study. Thus, although one could speculate that students who failed to complete a course may have different attitudes than those who did, our data show no cause for concern.

    The second potential selection effect arises from participant nonconsent. We cannot present survey responses from the nonconsenting students, but we can infer their demographics by starting from the class-wide averages and removing the known makeup of the consenting students. From this we see that the nonconsenting students are much more likely to be i-course students and more likely to be male; there are no significant differences in overall GPA or course grade. Given this information, and working under the assumption that the decision to consent to research participation reflects a positive disposition toward the course, we conclude that our study population likely holds more positive attitudes toward science than the course population as a whole. If anything, this strengthens our claim that the o-course students differ from i-course students, because the consenting i-course students should be a more positively disposed subset of their group, yet the o-course students still reported more positive attitudes.
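    The back-calculation itself is a simple weighted-average identity; the sketch below expresses it in R with entirely hypothetical counts and proportions, since the actual figures are not reported here.

        # Infer a class-wide characteristic of the nonconsenting students from the
        # class total and the consenting subgroup. All inputs are placeholders.
        nonconsent_mean <- function(x_class, x_consent, n_class, n_consent) {
          (n_class * x_class - n_consent * x_consent) / (n_class - n_consent)
        }

        # e.g., proportion of i-course students among nonconsenters, assuming a
        # hypothetical class-wide proportion of 0.55 and 0.45 among consenters:
        nonconsent_mean(x_class = 0.55, x_consent = 0.45,
                        n_class = 1000, n_consent = 500)
        #> 0.65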

    Items using a Likert-response format have a relatively constrained range; thus, our analysis of pre- to postcourse changes could be subject to ceiling (or floor) effects. To account for this, we recalculated the item change scores to record only the direction of change, regardless of magnitude. Scores that began and ended at the highest value were coded as an increase; scores that began and ended at the lowest value were coded as a decrease. The resultant factors were very similar to the ones shown in Supplemental Tables S1 and S2. Thus, we do not consider ceiling or floor effects to be significant.
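    A sketch of this direction-only recoding, assuming a 1-5 response scale and using a hypothetical function name, is shown below.

        # Recode pre-to-post change on a Likert item to direction only, treating
        # ceiling "stayers" as increases and floor "stayers" as decreases.
        recode_change <- function(pre, post, lo = 1, hi = 5) {
          ifelse(post > pre,  1,
          ifelse(post < pre, -1,
          ifelse(pre == hi,   1,        # began and ended at the ceiling
          ifelse(pre == lo,  -1, 0))))  # began and ended at the floor
        }

        recode_change(pre = c(5, 1, 3, 2), post = c(5, 1, 4, 2))
        #>  1 -1  1  0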

    Finally, some skepticism is always warranted when working with self-reported data. Even though self-assessments are important for learning (Guest et al., 2001) and people believe that they can accurately assess themselves (Pronin et al., 2002), self-assessments are flawed in some regards. Dunning et al. (2004) listed two major reasons for this: 1) there are only small correlations between people’s perception of how skilled they are at a particular activity and their objective performance, and 2) people are generally too optimistic about their skills and their mastery of those skills. However, given the emphasis in the items used here on attitudes and opinions over skills and proficiency, we argue that self-reported data are meaningful for this work.

    CONCLUSIONS

    We administered the CURE survey across three semesters of the online, introductory astrobiology course Habitable Worlds and additionally used data from the Spring 2016 offering for a confirmatory factor analysis. We find that the items relevant to the experience of non–science majors in an online, general education undergraduate course can be described by three factors. The Scientific Sophistication (SS) factor characterizes a general understanding of science and the process of doing science; the Personal Value (PV) factor characterizes a general perception of personal value in learning science; and the Benefits factor characterizes the skills and knowledge students perceive they gained by taking this course.

    Our results indicate that there are significant differences between students enrolled in traditional, in-person degree programs (i-course) and students enrolled in fully online degree programs (o-course). Overall, students in the fully online program have more positive views about science coming into the course, and they shift further toward more favorable views of science after taking the course. They also report greater benefits from taking the course than their i-course counterparts.

    We find that the precourse CURE survey can be used as a predictor of success for o-course students. Interestingly, this predictive power does not hold for i-course students. We expect students in fully online degree programs to have different life experiences, priorities, and outlooks than traditional, in-person degree students. In particular, we have evidence that fully online degree program students consider interest in the subject to be a more important factor in choosing to enroll in Habitable Worlds than do traditional, in-person degree program students. Our finding that science attitudes predict outcomes only for o-course students should not be taken to mean that attitudes are useful predictors only among nontraditional students; instead, it argues for a richer consideration of the circumstances that may change the salience of specific attitudes for the success of specific students.

    Online education has proliferated, but research into its effectiveness is still maturing. Given the steady rise in the number of fully online degree seekers, there are clear benefits to further research in this area. Previous work has demonstrated that students’ attitudes toward a subject are important to their learning. In this study, we have shown differences in attitudes toward science between traditional, in-person degree-seeking students and fully online degree-seeking students. These findings can now be used to better serve these different student populations and to compare our results with those from other online, introductory science courses.

    ACKNOWLEDGMENTS

    We thank the two anonymous reviewers for their thoughtful suggestions. We also thank the NASA Astrobiology Institute, the National Science Foundation (TUES grant number 1225741), and ASU Online for funding the development of Habitable Worlds. This work is contribution #2 from ASU’s Center for Education Through Exploration.

    REFERENCES

  • Adams W. K., Perkins K. K., Podolefsky N. S., Dubson M., Finkelstein N. D., Wieman C. E. (2006). New instrument for measuring student beliefs about physics and learning physics: The Colorado Learning Attitudes about Science Survey. Physical Review Special Topics—Physics Education Research 2(1), 010101.
  • Ainley M., Ainley J. (2011). A cultural perspective on the structure of student interest in science. International Journal of Science Education 33(1), 51-71.
  • Allchin D. (2011). Evaluating knowledge of the nature of (whole) science. Science Education 95(3), 518-542.
  • Allen I., Seaman J., Poulin R., Straut T. (2016). Online report card: Tracking online education in the United States. Babson Park, MA: Babson Survey Research Group and Quahog Research Group.
  • Allen I. E., Seaman J. (2015). Grade level: Tracking online education in the United States. Babson Park, MA: Babson Survey Research Group.
  • Armbruster P., Patel M., Johnson E., Weiss M. (2009). Active learning and student-centered pedagogy improve student attitudes and performance in introductory biology. CBE—Life Sciences Education 8(3), 203-213.
  • Bauer C. F. (2008). Attitude toward chemistry: A semantic differential instrument for assessing curriculum impacts. Journal of Chemical Education 85(10), 1440.
  • Baumeister R. F., Masicampo E. J., Vohs K. D. (2011). Do conscious thoughts cause behavior? Annual Review of Psychology 62(1), 331-361.
  • Berg C. A. R. (2005). Factors related to observed attitude change toward learning chemistry among university students. Chemistry Education Research and Practice 6(1), 1-18.
  • Bohner G., Dickel N. (2011). Attitudes and attitude change. Annual Review of Psychology 62, 391-417.
  • Brewe E., Kramer L., O’Brien G. (2009). Modeling instruction: Positive attitudinal shifts in introductory physics measured with CLASS. Physical Review Special Topics—Physics Education Research 5(1), 013102.
  • Brown T. A. (2014). Confirmatory factor analysis for applied research (2nd ed.). New York: Guilford.
  • Cattell R. B. (1966). The scree test for the number of factors. Multivariate Behavioral Research 1(2), 245-276.
  • Chen S. (2006). Development of an instrument to assess views on nature of science and attitudes toward teaching science. Science Education 90(5), 803-819.
  • Clinefelter D. L., Aslanian C. B. (2016). Online college students 2014: Comprehensive data on demands and preferences. Louisville, KY: Learning House.
  • Clinefelter D. L., Aslanian C. B. (2017). Online college students 2017: Comprehensive data on demands and preferences. Louisville, KY: Learning House.
  • Cook M., Mulvihill T. M. (2008). Examining US college students’ attitudes towards science: Learning from non-science majors. Educational Research and Reviews 3(1), 38.
  • Corey S. M. (1937). Professed attitudes and actual behavior. Journal of Educational Psychology 28(4), 271-280.
  • Deng F., Chen D. T., Tsai C. C., Chai C. S. (2011). Students’ views of the nature of science: A critical review of research. Science Education 95(6), 961-999.
  • Denofrio L. A., Russell B., Lopatto D., Lu Y. (2007). Linking student interests to science curricula. Science 318(5858), 1872-1873.
  • Dolan R. J. (2002). Emotion, cognition, and behavior. Science 298(5596), 1191-1194.
  • Dunning D., Heath C., Suls J. M. (2004). Flawed self-assessment: Implications for health, education, and the workplace. Psychological Science in the Public Interest 5(3), 69-106.
  • Eagly A. H. (1992). Uneven progress: Social psychology and the study of attitudes. Journal of Personality and Social Psychology 63(5), 693.
  • Eccles J. S., Wigfield A. (2002). Motivational beliefs, values, and goals. Annual Review of Psychology 53, 109-132.
  • Flora D. B., Curran P. J. (2004). An empirical evaluation of alternative methods of estimation for confirmatory factor analysis with ordinal data. Psychological Methods 9(4), 466.
  • Fortus D. (2014). Attending to affect. Journal of Research in Science Teaching 51(7), 821-835.
  • Gabel D. (1981). Attitudes toward science and science teaching of undergraduates according to major and number of science courses taken and the effect of two courses. School Science and Mathematics 81(1), 70-76.
  • Gardner P. L. (1995). Measuring attitudes to science: Unidimensionality and internal consistency revisited. Research in Science Education 25(3), 283-289.
  • George J. M., Zhou J. (2002). Understanding when bad moods foster creativity and good ones don’t: The role of context and clarity of feelings. Journal of Applied Psychology 87(4), 687.
  • Geraerts E., Bernstein D. M., Merckelbach H., Linders C., Raymaekers L., Loftus E. F. (2008). Lasting false beliefs and their behavioral consequences. Psychological Science 19(8), 749-753.
  • Germann P. J. (1988). Development of the attitude toward science in school assessment and its use to investigate the relationship between science achievement and attitude toward science in school. Journal of Research in Science Teaching 25(8), 689-703.
  • Glynn S. M., Brickman P., Armstrong N., Taasoobshirazi G. (2011). Science motivation questionnaire II: Validation with science majors and nonscience majors. Journal of Research in Science Teaching 48(10), 1159-1176.
  • Glynn S. M., Taasoobshirazi G., Brickman P. (2007). Nonscience majors learning science: A theoretical model of motivation. Journal of Research in Science Teaching 44(8), 1088-1107.
  • Grant A. M. (2008). Does intrinsic motivation fuel the prosocial fire? Motivational synergy in predicting persistence, performance, and productivity. Journal of Applied Psychology 93(1), 48.
  • Guest C. B., Regehr G., Tiberius R. G. (2001). The life long challenge of expertise. Medical Education 35(1), 78-81.
  • Halloun I., Hestenes D. (1996). Views About Sciences Survey (VASS). www.pearweb.org/atis/tools/14.
  • Hampton D., Pearce P. F., Moser D. K. (2016). Preferred methods of learning for nursing students in an on-line degree program. Journal of Professional Nursing 33(1), 27-37.
  • Hansen M. J., Birol G. (2014). Longitudinal study of student attitudes in a biology program. CBE—Life Sciences Education 13(2), 331-337.
  • Horn J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika 30(2), 179-185.
  • Horodyskyj L. B., Mead C., Belinson Z., Buxner S., Semken S., Anbar A. D. (2017). Habitable Worlds: Delivering on the promises of online education. Astrobiology (in press).
  • Hough L. W., Piper M. K. (1982). The relationship between attitudes toward science and science achievement. Journal of Research in Science Teaching 19(1), 33-38.
  • Jaggars S. S. (2014). Choosing between online and face-to-face courses: Community college student voices. American Journal of Distance Education 28(1), 27-38.
  • Jones M. G., Howe A., Rua M. J. (2000). Gender differences in students’ experiences, interests, and attitudes toward science and scientists. Science Education 84(2), 180-192.
  • Jordan T. C., Burnett S. H., Carson S., Caruso S. M., Clase K., DeJong R. J., Hatfull G. F. (2014). A broadly implementable research course in phage discovery and genomics for first-year undergraduate students. MBio 5(1), e01051-13.
  • Koballa T. R., Glynn S. M. (2007). Attitudinal and motivational constructs in science learning. In Abell S. K., Lederman N. G. (Eds.), Handbook of research on science education (pp. 75-102). Mahwah, NJ: Erlbaum.
  • Krogh L. B., Thomsen P. V. (2005). Studying students’ attitudes towards science from a cultural perspective but with a quantitative methodology: Border crossing into the physics classroom. International Journal of Science Education 27(3), 281-302.
  • Kutner B., Wilkins C., Yarrow P. R. (1952). Verbal attitudes and overt behavior involving racial prejudice. Journal of Abnormal and Social Psychology 47(3), 649-652.
  • LaPiere R. T. (1934). Attitudes vs. actions. Social Forces 13(2), 230-237.
  • Li C.-H. (2016). Confirmatory factor analysis with ordinal data: Comparing robust maximum likelihood and diagonally weighted least squares. Behavior Research Methods 48(3), 936-949.
  • Lin-Siegler X., Ahn J. N., Chen J., Fang F.-F. A., Luna-Lucero M. (2016). Even Einstein struggled: Effects of learning about great scientists’ struggles on high school students’ motivation to learn science. Journal of Educational Psychology 108(3), 314-328.
  • Liu M., Hu W., Jiannong S., Adey P. (2010). Gender stereotyping and affective attitudes towards science in Chinese secondary school students. International Journal of Science Education 32(3), 379-395.
  • Lopatto D. (2003). The essential features of undergraduate research. Council on Undergraduate Research Quarterly 24, 139-142.
  • Lopatto D. (2009). Science in solution: The impact of undergraduate research on student learning. Tucson, AZ: Research Corporation for Science Advancement.
  • Lopatto D., Alvarez C., Barnard D., Chandrasekaran C., Chung H.-M., Du C., Elgin S. C. R. (2008). Genomics Education Partnership. Science 322(5902), 684-685.
  • Martin L. L., Ward D. W., Achee J. W., Wyer R. S. (1993). Mood as input: People have to interpret the motivational implications of their moods. Journal of Personality and Social Psychology 64(3), 317.
  • McConnell D. A., van der Hoeven Kraft K. J. (2011). Affective domain and student learning in the geosciences. Journal of Geoscience Education 59(3), 106-110.
  • McGlone M. S., Aronson J. (2007). Forewarning and forearming stereotype-threatened students. Communication Education 56(2), 119-133.
  • McMillan J. H., May M. J. (1979). A study of factors influencing attitudes toward science of junior high school students. Journal of Research in Science Teaching 16(3), 217-222.
  • Means B., Bakia M., Murphy R. (2014). Learning online: What research tells us about whether, when and how. New York: Routledge.
  • Miller P. H., Slawinski Blessing J., Schwartz S. (2006). Gender differences in high-school students’ views about science. International Journal of Science Education 28(4), 363-381.
  • National Center for Education Statistics (2014). NCES Digest of Educational Statistics. Retrieved November 9, 2017, from https://nces.ed.gov/programs/digest/d14/tables/dt14_311.22.asp.
  • Partin M. L., Haney J. J. (2012). The CLEM model: Path analysis of the mediating effects of attitudes and motivational beliefs on the relationship between perceived learning environment and course performance in an undergraduate non-major biology course. Learning Environments Research 15(1), 103-123.
  • Pessoa L. (2008). On the relationship between emotion and cognition. Nature Reviews Neuroscience 9(2), 148-158.
  • Preacher K. J., MacCallum R. C. (2003). Repairing Tom Swift’s electric factor analysis machine. Understanding Statistics 2(1), 13-43.
  • Prokop P., Prokop M., Tunnicliffe S. D. (2007). Is biology boring? Student attitudes toward biology. Journal of Biological Education 42(1), 36-39.
  • Pronin E., Lin D. Y., Ross L. (2002). The bias blind spot: Perceptions of bias in self versus others. Personality and Social Psychology Bulletin 28(3), 369-381.
  • Ramsden J. M. (1998). Mission impossible? Can anything be done about attitudes to science? International Journal of Science Education 20(2), 125-137.
  • Revelle W. (2016). psych: Procedures for personality and psychological research (Version 1.6.12). Evanston, IL: Northwestern University. Retrieved November 9, 2017, from https://CRAN.R-project.org/package=psych.
  • Rogers W. D., Ford R. (1997). Factors that affect student attitude toward biology. Bioscene 23(2), 3-5.
  • Rosseel Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software 48, 1-36.
  • Russell J., Hollander S. (1975). A biology attitude scale. American Biology Teacher 37(5), 270-273.
  • Sanders C. W., Sadoski M., Bramson R., Wiprud R., Walsum K. V. (2004). Comparing the effects of physical practice and mental imagery rehearsal on learning basic surgical skills by medical students. American Journal of Obstetrics and Gynecology 191(5), 1811-1814.
  • Savelsbergh E. R., Prins G. T., Rietbergen C., Fechner S., Vaessen B. E., Draijer J. M., Bakker A. (2016). Effects of innovative science and mathematics teaching on student attitudes and achievement: A meta-analytic study. Educational Research Review 19, 158-172.
  • Semsar K., Knight J. K., Birol G., Smith M. K. (2011). The Colorado Learning Attitudes about Science Survey (CLASS) for use in biology. CBE—Life Sciences Education 10(3), 268-278.
  • Shaffer C. D., Alvarez C., Bailey C., Barnard D., Bhalla S., Chandrasekaran C., Elgin S. C. R. (2010). The Genomics Education Partnership: Successful integration of research into laboratory classes at a diverse group of undergraduate institutions. CBE—Life Sciences Education 9(1), 55-69.
  • Shiv B., Fedorikhin A. (1999). Heart and mind in conflict: The interplay of affect and cognition in consumer decision making. Journal of Consumer Research 26(3), 278.
  • Shrigley R. L. (1990). Attitude and behavior are correlates. Journal of Research in Science Teaching 27(2), 97-113.
  • Simons J., Dewitte S., Lens W. (2004). The role of different types of instrumentality in motivation, study strategies, and performance: Know why you learn, so you’ll know what you learn! British Journal of Educational Psychology 74(3), 343-360.
  • Simpson R. D., Oliver J. S. (1985). Attitude toward science and achievement motivation profiles of male and female science students in grades six through ten. Science Education 69(4), 511-525.
  • Singh K., Granville M., Dika S. (2002). Mathematics and science achievement: Effects of motivation, interest, and academic engagement. Journal of Educational Research 95(6), 323-332.
  • Steiner R., Sullivan J. (1984). Variables correlating with student success in organic chemistry. Journal of Chemical Education 61(12), 1072.
  • Thompson T. L., Mintzes J. J. (2002). Cognitive structure and the affective domain: On knowing and feeling in biology. International Journal of Science Education 24(6), 645-660.
  • Turner S. L., Steward J. C., Lapan R. T. (2004). Family factors associated with sixth-grade adolescents’ math and science career interests. Career Development Quarterly 53(1), 41-52.
  • van der Hoeven Kraft K. J., Srogi L., Husman J., Semken S., Fuhrman M. (2011). Engaging students to learn through the affective domain: A new framework for teaching in the geosciences. Journal of Geoscience Education 59(2), 71-84.
  • Walker D. A., Smith M. C., Hamidova N. I. (2013). A structural analysis of the Attitudes Toward Science Scale: Students’ attitudes and beliefs about science as a multi-dimensional composition. Multiple Linear Regression Viewpoints 39(2), 38-48.
  • Weinburgh M. (1995). Gender differences in student attitudes toward science: A meta-analysis of the literature from 1970 to 1991. Journal of Research in Science Teaching 32(4), 387-398.
  • Wenk L. (2000). Improving science learning: Inquiry-based and traditional first-year college science curricula (Doctoral dissertation). Retrieved from ProQuest (AAI9988852).
  • Wheland E. R., Donovan W. J., Dukes J. T., Qammar H. K., Smith G. A., Williams B. L. (2013). Green action through education: A model for fostering positive attitudes about STEM. Journal of College Science Teaching 42(3), 46-51.
  • Wood J. V., Perunovic W. E., Lee J. W. (2009). Positive self-statements. Psychological Science 20(7), 860-866.