
Student Satisfaction and Learning Outcomes in Asynchronous Online Lecture Videos

    Published Online:https://doi.org/10.1187/cbe.18-08-0171

    Abstract

    Our study identified online lecture video styles that improved student engagement and satisfaction, while maintaining high learning outcomes in online education. We presented different lecture video styles with standardized material to students and then measured learning outcomes and satisfaction with a survey and summative assessment. We created an iterative qualitative coding scheme, “coding online asynchronous lectures” (COAL), to analyze open-ended student survey responses. Our results reveal that multimedia learning can be satisfying and effective. Students have strong preferences for certain video styles despite their equal learning outcomes, with the Learning Glass style receiving the highest satisfaction ratings. Video styles that were described as impersonal and unfamiliar were rated poorly, while those that were described as personal and engaging and evoked positive affective responses were rated highly. The students in our study rated lecture video styles that aligned with Mayer’s multimedia learning principles as highly satisfying, indicating that student feedback can be a valuable resource for course designers to consider as they design their own online courses. Finally, we provide guidelines for creating engaging, effective, and satisfying asynchronous lecture videos to support establishment of best practices in online instruction.

    INTRODUCTION

    Enrollment in online courses is growing rapidly, and public institutions are the largest educators of distance education students (68%; Allen and Seaman, 2017). One reason for increasing online course offerings is to accommodate more students without incurring the significant costs of building new infrastructure (Seaman et al., 2018). Currently, distance learning remains highly concentrated: almost half of all distance education students are enrolled at just 5% of institutions. Importantly, increased online course options have the potential to reach students with limited access to higher education due to socioeconomic, geographic, financial, educational, and personal barriers (Davis, 2000; Hara, 2000; Haugen et al., 2001; Liaw and Huang, 2002; Chen et al., 2010; Flowers et al., 2012; Hansen and Reich, 2015; Willging and Johnson, 2009).

    Despite these advantages, some public undergraduate institutions still have not embraced hybrid and fully online course models due to negative misconceptions (Smart and Cappel, 2006; Allen and Seaman, 2017). In particular, some educators believe that online education diminishes the student experience, impairs the ability of students to connect with faculty, and decreases instructional quality (Brown, 1996; Hara, 2000). Other institutional factors, such as lack of resources, training, and incentives, also impede the growth of online instruction in public universities (Brownell and Tanner, 2012; Gormally et al., 2014; Harvey et al., 2016). Meanwhile, continuing education, advanced training, and certificate programs across the public, private, for-profit, and nonprofit sectors are increasingly deploying e-learning (Allen and Seaman, 2017). Thus, familiarity with online learning is becoming progressively more important for professional growth and career advancement, particularly in a globalized economy (Davis, 2000). To produce competitive graduates who can meet the changing demands of the modern workforce, public institutions will need to increase online course offerings. One of the challenges with online education is to create appealing video lessons while maintaining high educational value. Some studies indicate that online education is not as effective as face-to-face traditional instruction (Krause and Coates, 2008; Pickering and Swinnerton, 2019). Poor course design, poor oversight, and poor pedagogy in online instruction are possible factors that lead to poor learning outcomes and low enthusiasm for this format (Woodworth et al., 2015).

    It is becoming evident that effective pedagogy in online courses is different from that of face-to-face courses. Studies have shown that teaching online is fundamentally distinct from teaching face-to-face and requires instructors to develop new lesson planning skills (Johnston et al., 2005; Mayer, 2014b). Cognitive psychologists have identified effective practices in multimedia learning from carefully controlled laboratory experiments (Quitadamo and Brown, 2001; Mayer, 2014b,c; Mayer and Fiorella, 2014; Mayer and Pilegard, 2014). The studies from Mayer and colleagues have revealed multimedia learning principles that serve as guidelines for lesson planning in the multimedia setting. Mayer’s principles guide instructors to acknowledge and work within a learner’s cognitive capacity. Exceeding cognitive capacity decreases learning outcomes, in a process known as essential overload. Managing essential overload, reducing extraneous processing, and employing social cues can improve learning outcomes from lecture videos (Mayer, 2014b; Mayer and Fiorella, 2014; Mayer and Pilegard, 2014; Paas and Sweller, 2014). By creating videos with learner-paced segments (segmenting principle), using familiar names and terms (pretraining principle), and speaking instead of using on-screen text (modality principle), course designers decrease the risk of exceeding the student’s cognitive capacity while watching a video (Mayer and Pilegard, 2014). Eliminating extraneous information (coherence principle), pairing narration with graphics rather than duplicating the narration as on-screen text (redundancy principle), using cues to highlight essential information (signaling principle), and organizing words and pictures to be proximal both in space (spatial contiguity principle) and in time (temporal contiguity principle) are all practices that reduce distractions that may contribute to essential overload (Mayer and Fiorella, 2014). Finally, engaging students with a human voice (voice principle) and a conversational speaking style (personalization principle) is optimal for learning outcomes (Mayer, 2014c).

    Online courses also differ from face-to-face courses in their methods of implementation. Creating an environment for success in online courses requires a different approach, because technology use, content design, learning assessment, student motivation, student diversity, and best practices are different in online settings (Davis, 2000; Quitadamo and Brown, 2001; Boettcher, 2011; Clark, 2014; Fayer, 2017). These requirements necessitate training, time, and resources for instructors to develop quality online courses. Subsequent studies investigated learning outcomes and student satisfaction in online versus face-to-face courses but yielded inconclusive results (Johnson et al., 2000, Swan, 2001; Baylor and Ritchie, 2002; Picciano, 2002; Koohang and Durante, 2003; Wang, 2003; O’Neill et al., 2004; Johnston et al., 2005; Eom et al., 2006; Eom and Ashill, 2016; Smart and Cappel, 2006; Kirkwood and Price, 2014; Mayer, 2014a; Biel and Brame, 2016; Brame, 2016; Pickering et al., 2017). Some studies showed that online learning was highly satisfying and achieved better learning outcomes than traditional face-to-face learning (Morton et al., 2016; Dooley et al., 2018; Green et al., 2018; Riddle and Gier, 2019), while other studies showed no differences (Pickering and Swinnerton, 2019), and yet other studies showed poor engagement with online education (Krause and Coates, 2008). Because effective online course design is an expensive endeavor, more studies are needed to identify best practices in online course design to improve quality and reduce costs moving forward (Rubenstein, 2003).

    In the current study, we set out to expand on these studies by incorporating Mayer’s multimedia learning principles into several common asynchronous video lecture styles, determining which formats appealed to students, and investigating whether there were differences in learning outcomes between the formats. We hypothesize that, although different online lecture videos may be met with variable student satisfaction, students will be able to learn effectively from any of the video styles, provided the videos incorporated Mayer’s multimedia learning principles. We conclude with best practices for creating engaging, student-centered online lecture videos.

    METHODS

    Study Site Description

    The study authors and participants are from the University of California, Los Angeles (UCLA). In 2015, the University of California’s (UC) nine undergraduate campuses did not offer many online course options. However, the UC has focused on expanding its online course offerings, partly in response to dramatic increases in undergraduate enrollment (Supplemental Figure S1). UCLA is only one of the campuses experiencing enrollment challenges and growing pains. With the realization that conventional teaching methods will be insufficient to accommodate the impending growth, the UC Office of the President instituted policies to greatly expand online course offerings to ameliorate the lengthened time to degree (McDonald, 1999; Benbunan-Fich and Hiltz, 2003; Burke and Moore, 2003; Smart and Cappel, 2006; Bullen, 2007; Lee and Choi, 2011; Allen and Seaman, 2017). This study was administered to inform future online course design across the UC system.

    Experimental Design

    Students evaluated eight different video styles designed to deliver standardized content in the life sciences. Our goal was to determine 1) students’ perceptions of effective and ineffective video styles; 2) the specific factors that influenced the students’ evaluations; and 3) learning outcome differences between the highest-ranked video styles. Each of the eight videos was designed with Mayer’s multimedia learning principles, which were identified in controlled, laboratory studies (Clark, 2014). Analysis of the responses revealed strengths and weaknesses of each video format.

    Identification of Common Video Styles

    We identified common online video styles that we refer to as the Classic Classroom, Weatherman, Demonstration (Demo), Learning Glass, Pen Tablet, Interview, Talking Head, and Slides On/Off (Figure 1, A–H). Two general types of video styles were identified: didactic and nondidactic. Six of the video styles were didactic (Classic Classroom, Weatherman, Learning Glass, Pen Tablet, Talking Head, Slides On/Off) and two were nondidactic (Demo and Interview). The primary goal of didactic videos was to teach the bulk of lecture materials. The primary goal of the nondidactic video styles was to supplement the instruction of the didactic video styles.

    FIGURE 1.

    FIGURE 1. Common formats for presenting content in educational videos. Eight different prototypical video styles were created to present a standardized set of course materials. Images shown are screenshots from each of the video styles: (A) Classic Classroom, (B) Weatherman, (C) Demo, (D) Learning Glass, (E) Pen Tablet, (F) Interview, (G) Talking Head, and (H) Slides On/Off. Videos of each format can be accessed in the Supplemental Material.

    Didactic Video Style Specifications

    The Classic Classroom captured the instructor standing near a monitor displaying PowerPoint lecture slides. The instructor could walk between the monitor and a nearby chalkboard that could be used for additional illustrations (Figure 1A). The instructor could point directly to material on the monitor. However, the Weatherman was filmed with the instructor positioned in front of a large green screen, which limited the instructor to pointing only to general areas, as the lecture slides were overlaid during postproduction editing (Figure 1B). Off to the side of the stage, a monitor provided a preview of the superimposed green screen to guide the instructor’s movements on camera. The instructor was unable to draw or write in real time.

    In contrast, the Pen Tablet style was characterized by the instructor’s use of an interactive pen tablet (Figure 1E). The instructor directed the presentation progress with the pen tablet, pointing with the stylus and drawing directly on the lecture slides, which were overlaid on the green screen behind the instructor in postproduction. The Talking Head style video captured the instructor using the pen tablet, which enabled the instructor to annotate the lecture slides using the stylus (Figure 1G). In postproduction editing, the instructor’s camera feed was inserted in the lower corner, while the lecture slide presentation and animations were displayed in full-screen format. The Slides On/Off style was filmed identically to the Talking Head style, except that either the lecture slide or the instructor was displayed in alternating full-screen mode (Figure 1H). In other words, there was no picture-in-picture component as in the Talking Head style.

    The Learning Glass (Figure 1D), designed by M. Anderson and J. Watson (Frazee and Anderson, 2014), featured an LED-illuminated low-iron glass panel that functioned as a whiteboard. Although this didactic video style did not use PowerPoint lecture slides, the instructor reproduced the same lecture material by hand directly onto the Learning Glass.

    Nondidactic Video Style Specifications

    The Demo (Figure 1C) and the Interview (Figure 1F) were filmed without the use of prepared lecture slides. The Demo style captured the instructor using orchestrated experiments to illustrate scientific concepts. The Interview style captured the instructor, sitting in front of a digital screen, in conversation with an off-screen interviewer (Figure 1F). The interviewer asked a question that the instructor addressed while displaying relevant lecture slides on the digital screen. In postproduction editing, each question was displayed as a full-screen intertitle, with no additional visuals, before the corresponding response.

    Video Style Controls

    The aim was to isolate video style as the key factor affecting student satisfaction and learning outcomes, not the difficulty of the lecture material, the speaking ability of the instructor, or the production quality. To control for lecture material difficulty, we standardized lecture material in the didactic video styles. Four PowerPoint lecture slides (Supplemental Figure S2) representing common slide styles, including blocks of text, animations, images of micrographs, and schematic diagrams, were used to create a short lecture recorded in all didactic video styles except the Learning Glass. Instead, the same material was reproduced by hand onto the Learning Glass apparatus. To control for instructor effects, one instructor (R.H.C.) recorded all video styles in the same professional recording studio. To control for production quality, a professional studio director worked with the instructor and led all video-production efforts.

    Recording Equipment and Production

    Lecture videos were recorded with a Canon XF305 at a bit rate of 35 megabits per second, 30 frames per second, and 1080p resolution. Several tungsten-balanced lights were used to illuminate the studio to record a short video lecture in each of the eight styles described previously. A green screen was used to film individual elements, which were composited in AVID Media Composer software. For the Learning Glass style, the Learning Glass display was created with a 91.44 cm (height) × 152.4 cm (width) × 1.27 cm (depth) panel of photographic glass. The perimeter of the glass was lit by a string of LED lights. During filming, the instructor stood on one side of the glass and the camera was positioned on the other side. The instructor, lit by studio lights, used neon dry erase markers to create the lesson on the photographic glass. For the Pen Tablet, the instructor was seated behind a Wacom Cintiq 21UX pen tablet, which she used to project slides and to draw directly on the slides with the electronic pen. The instructor’s drawings were captured with INK2Go screen annotation and capture software and were composited in AVID Media Composer. In postproduction, videos were composited from the TriCaster setup to include the speaker, lecture materials, pen tablet display capture, and green screen. Video was color corrected, and audio was mixed and equalized. For all video formats, longer takes without interruption were used to minimize camera movement and camera cuts. AVID Media Composer software was used to edit all videos.

    Recording Studio Design

    The recording studio was located on campus (Center for Health Sciences 62-073) and was operated by the UCLA Office of Instructional Development to produce lecture videos. The large open recording studio was soundproofed to eliminate echo and outside noise. Additionally, the studio was equipped with a raised floor to minimize vibrations that may cause audio interference; overhead ceiling-mounted lights provided additional options for lighting. A NewTek TriCaster setup permitted multiple cameras to record two different angles along with screen capture, which enabled live casting and live composites. Just below the front-facing camera, a live composite displayed on a small screen allowed the instructor to monitor her progress during the recording while maintaining eye contact with the camera. Two drop microphones were used to capture audio.

    Practice Sessions

    Before recording, the instructor had a practice recording session with the director to acclimate to the studio. During the practice sessions, the instructor was recorded while giving an abbreviated lecture (3–5 minutes), and the slides were tested on the recording equipment to ensure high image quality. Lighting, sound, lecture materials, equipment settings, and studio temperature were adjusted as necessary. The instructor then reviewed the recording with the director to discuss the onscreen lecture performance and to receive coaching on screen presence, engagement, eye contact, lecture pace, and body movement. The instructor returned to the studio at a later date for the final recording.

    Study Design

    Survey data were collected in Fall 2015 from undergraduate students majoring in physiological science who were also enrolled in an upper-division core physiology course, Physiological Science 111A, focused on cardiovascular physiology, respiration, and endocrinology. Student participants had completed lower-division courses in biology, chemistry, and physics as prerequisites for the physiology course. Students in this course are typically in their third or fourth year of college. A total of 183 students voluntarily participated in this study.

    Afterward, we investigated the learning outcomes of the six highest-ranked styles to substantiate the results of the student satisfaction survey. Because the students in the original survey population had since completed the course and graduated from the program, we recruited a new sample population with similar experiences, courses of study, and expectations. These students were enrolled in Physiological Science 121, the online course that the authors developed based on these findings. The 71 undergraduate students enrolled in Physiological Science 121 in Summer 2018 and the 76 undergraduate students enrolled in Fall 2018 were randomized into one of six groups. Each group watched a video in one of the six styles and completed a summative assessment online. To control for outside factors, we asked the students to complete the assessment before the class started.

    Student Satisfaction Survey Design

    The survey included both open- and closed-ended questions, which provided a more complete picture of the research problem (Creswell, 2013). Participants were asked to rate each of the eight videos on a Likert-type scale from 1 to 5, with 1 being “not at all” and 5 being “yes, very much.” Each poll question was prompted in the same phrasing: “Do you think that this video is effective for learning?” A blank space next to each rating allowed students to leave their own comments or suggestions in an open-ended response. The last question on the survey was an open-ended request for overall comments and/or suggestions about the video styles. Analyses of these data have been approved by UCLA’s Institutional Review Board (IRB #16-001542). All videos can be viewed online (Supplemental Figure S3).

    Summative Assessment Study Design

    To substantiate the student satisfaction survey results, the authors additionally investigated the learning outcomes of the six highest-rated didactic video styles: Classic Classroom, Weatherman, Learning Glass, Pen Tablet, Talking Head, and Slides On/Off. The nondidactic video styles, Demo and Interview, were excluded because they did not present the same amount of lecture materials as the didactic styles. Additionally, low student satisfaction ratings in the survey contributed to the authors’ decision to exclude the Interview style from the assessment.

    Undergraduate students enrolled in Physiological Science 121, an upper-division physiology course focused on disease mechanisms and therapies, during Summer 2018 (n = 65) and Fall 2018 (n = 103) voluntarily participated in a summative assessment. This student population closely matches the student population from the video style satisfaction survey, with identical prerequisite course work. Students were randomized into one of six groups. Each group watched one of the six didactic video styles and completed a short, open-book online quiz with a 45-minute time limit that was administered through the Canvas learning management system. The questions were designed to test all levels of the cognitive domain as described by Bloom’s taxonomy (Bloom et al., 1965). Five questions were multiple choice, and two questions were free response. The quizzes were graded following a predetermined rubric that rewarded both correctness and complete responses. Students were given one attempt and the ability to refer to the lecture video if necessary. The graded scores were used to assess learning outcomes.

    Survey Administration

    Students were assembled into a large lecture hall on the UCLA campus and informed of their voluntary, anonymous participation and the overall aims of this survey. The videos were projected onto a large screen in the lecture hall in the following order: Classic Classroom, Weatherman, Demo, Learning Glass, Pen Tablet, Interview, Talking Head, and Slides On/Off. Students received hard copies of the survey and were asked to rate and comment on each video style within 1 minute after the screening of each individual video. After screening the videos, students were given an additional 5 minutes to provide overall comments and suggestions and to revisit their previous responses.

    Data Handling

    No rules were established for stopping data collection, because the survey was both voluntary and anonymous. Participants could exclude themselves or quit at any point during the survey. All data were collected and included in the analysis. The only retrospective exclusion criteria were skipped questions and nonresponses. Outliers were not excluded. The survey was administered once. There was no randomization of results, because all students participated in the same survey. All responses were anonymous, so blinding was not necessary.

    Coding Online Asynchronous Lectures (COAL): Qualitative Analysis of Open-Ended Responses

    In addition to ranking each video style, undergraduate students participating in the survey also had the opportunity to provide comments in an open-ended response box. To identify features that students evaluated as strengths and weaknesses of each video style, we created a coding protocol specifically for education videos that we named “coding online asynchronous lectures,” or COAL. The coding was iterative, using a multistep process of reviewing student open-ended responses and organizing the material into meaningful segments followed by themes, which led to the identification of 3112 coded comments (Creswell, 2013; Shapiro et al., 2013).

    The COAL system was applied to the open-ended survey responses. The responses were divided into individual units that were then tagged with the following descriptors: participant, style, positive or negative, code, score, rank, content, developmental stage, and roles (Table 1). Participant number was an identifier; style number indicated video style; positive or negative indicated favorable or unfavorable comments; codes identified the 69 most common themes from student comments using a bijective numbering scheme (Supplemental Figure S4); scores were the student responses to the Likert-type closed-ended items; rank was the rank-transformed score; content described the subject matter of the text (pedagogy, production, screen, speaker, or user experience); developmental stages described the different steps of the video production process (preproduction, lesson design, filming, and postproduction); roles identified who was primarily responsible for the subject matter of the comment (instructor, director, or both the instructor/director). Additionally, for descriptive statistics, multimedia principles overlap identified comments that paralleled at least one of Mayer’s multimedia principles (Supplemental Figure S4; Mayer, 2014b,c; Mayer and Fiorella, 2014; Mayer and Pilegard, 2014). Eighteen comments did not have scores and were excluded from the rank analysis.
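    To make the tagging scheme concrete, the sketch below shows how a single coded comment could be represented as a record in R, the statistics software used for the later analysis. All field names and values are illustrative placeholders rather than entries from the actual COAL data set (the full list of codes is in Supplemental Figure S4).

        # Illustrative sketch only: one COAL-coded comment represented as a data frame row.
        # Field names and values are hypothetical, not taken from the authors' coding file.
        coal_record <- data.frame(
          participant   = 42,                     # anonymous participant identifier
          style         = "Learning Glass",       # one of the eight video styles
          direction     = "positive",             # favorable or unfavorable comment
          code          = "AN",                   # one of the 69 thematic codes (e.g., "engaging")
          score         = 5,                      # Likert-type rating from the closed-ended item (1-5)
          rank          = 2,                      # rank-transformed score (ties averaged)
          content       = "user experience",      # pedagogy, production, screen, speaker, or user experience
          stage         = "filming",              # preproduction, lesson design, filming, or postproduction
          role          = "instructor/director",  # instructor, director, or both
          mayer_overlap = TRUE                    # parallels at least one of Mayer's multimedia principles
        )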

    TABLE 1. Coding categories and subcategories within the coding online asynchronous lectures (COAL) qualitative analysis and mean rank differencesa

    Category                Subcategory           Description                                     Mean rank difference for positive and negative scores    p value
    Stages of development   Preproduction         Deciding video style                            388.9                                                    <0.0001
                            Lesson design         Creating lectures                               994.5                                                    <0.0001
                            Filming               Lecture recording                               812.0                                                    <0.0001
                            Postproduction        Video editing and audio mixing after filming    758.3                                                    <0.0001
    Roles                   Instructor            Instructor                                      793.2                                                    <0.0001
                            Director              Director                                        681.0                                                    <0.0001
                            Instructor/director   Decisions by instructor and director            925.6                                                    <0.0001
    Content                 Pedagogy              Learning value                                  686.3                                                    <0.0001
                            Production            Studio setup                                    1013.0                                                   0.0044
                            Screen                Visual effects                                  684.7                                                    <0.0001
                            Speaker               Lecturer’s performance                          557.8                                                    <0.0001
                            User experience       Student experience                              1099.0                                                   <0.0001

    aA list of the major categories and subcategories that were used to investigate students’ qualitative comments on educational videos is provided. The table provides positive and negative impacts of the subcategories. The mean rank differences between the positive and negative codes of each subcategory were determined, using Dunn’s multiple comparisons test, and are reported in the right two columns with their significance levels. Statistical analysis was performed in Prism software.

    First, many of the COAL codes bidirectionally addressed a common theme. Based on the tone of a student’s open-ended response on the survey, a positive or negative direction was assigned to the comments. For instance, code AN was applied to any comment that mentioned that the video style was engaging and code BV was applied to any comment that mentioned that the video style was not engaging. Subsequently, code AN was assigned a positive direction, while code BV was assigned a negative direction.

    Second, codes were assigned to one of four subcategories within the stages of development category, according to the chronological stages of video lesson development: preproduction, lesson design, filming, and postproduction (Table 1 and Supplemental Figure S4). The preproduction stage referred to the structural decisions that must be addressed before video production begins, including studio design, video styles, software, and audiovisual technology. The lesson design stage focused on the design and preparation of lecture slides and materials. The filming stage involved recording of the instructor’s lesson on camera. The final postproduction stage involved film editing, audio mixing, and positioning of the instructor and the slides to generate the final video lesson.

    Third, codes were assigned to one of three subcategories within the roles category, according to the party responsible for the student’s response: instructor, director, or both the instructor and the director (Table 1 and Supplemental Figure S4). The instructor role referred to the teaching aspects of the video that the instructor was solely responsible for, such as lecture pacing and lesson planning. The director role addressed technical aspects of the video that the director was solely responsible for, such as video quality and camera operation. The final “both” role referred to broader aspects of the video that required collaboration between both the instructor and director, such as lecture style and studio design.

    Finally, codes were assigned to one of five subcategories within the content category, according to the subject matter of the student responses: pedagogy, production, screen, speaker, and user experience (Table 1 and Supplemental Figure S4). The pedagogy subcategory referred to the learning value of the video style. The production subcategory referred to the look and feel of the final video style. The screen subcategory referred to the camera direction and the visual presentation of lesson material. The speaker subcategory referred to the instructor’s performance such as gesticulation and stage movements. The final user experience subcategory referred to student affective responses.

    Statistical Analysis of Survey Results

    Before quantitative analysis of the Likert-type scale rating responses, each participant’s eight scores were rank transformed, with ties being averaged. This standardized the ratings across individuals and improved the distribution of the data, making the scores easier to interpret. All subsequent mentions of scores in this study refer to these rank-transformed values. After the written comments from student surveys were coded using COAL (Supplemental Figure S4), statistical analysis was conducted using R (v. 3.33) and Prism 7 (v. 7.01). Plots were created using the vioplots, ggplot2, cowplot, gridExtra, and colorspace R packages. Student’s t tests, analysis of variance (non-Gaussian), Mann-Whitney tests, and Dunn’s multiple comparisons tests were processed in Prism 7.
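    As a concrete illustration of this preprocessing step, the following R sketch rank-transforms each participant’s eight ratings with ties averaged, so that a participant’s most highly rated style receives rank 1. The data frame, column names, and scores are hypothetical and invented for illustration only.

        # Sketch: per-participant rank transformation of Likert-type ratings (ties averaged).
        # `ratings` is a hypothetical long-format data frame; the values are invented.
        ratings <- data.frame(
          participant = rep(1:2, each = 8),
          style = rep(c("Classic Classroom", "Weatherman", "Demo", "Learning Glass",
                        "Pen Tablet", "Interview", "Talking Head", "Slides On/Off"), times = 2),
          score = c(4, 4, 5, 5, 3, 2, 4, 3,
                    5, 4, 4, 5, 4, 1, 3, 3)
        )

        # Rank the negated score within each participant so that the highest-rated style
        # gets rank 1; ties.method = "average" assigns tied styles the mean of their ranks.
        ratings$rank <- ave(-ratings$score, ratings$participant,
                            FUN = function(x) rank(x, ties.method = "average"))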

    Initial comparisons were made using the Student’s t test and the Kruskal-Wallis test. Post hoc Monte Carlo resampling methods and Dunn’s multiple comparisons test were used to measure contrast between groups. From the Dunn’s test, we used mean rank differences, a pairwise statistic based on rank sums, to identify groups that were different. Rank-transformed scores were used as the dependent variable, and the categories, subcategories, and positive–negative codes were used as independent variables.
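    The authors ran these tests in Prism; the sketch below shows an approximately equivalent workflow in R, using the dunn.test package as one possible implementation of Dunn’s post hoc comparisons. The toy data frame and its values are hypothetical.

        # Sketch of the omnibus Kruskal-Wallis test followed by Dunn's post hoc comparisons.
        # Requires the dunn.test package (one of several R implementations of Dunn's test).
        library(dunn.test)

        # Toy data: rank-transformed scores tagged with positive or negative codes
        # within one COAL subcategory; values are invented for illustration.
        coded <- data.frame(
          rank  = c(1, 2, 1.5, 3, 2, 6, 7, 5.5, 8, 6),
          group = rep(c("positive", "negative"), each = 5)
        )

        # Omnibus test: do the rank distributions differ between the groups?
        kruskal.test(rank ~ group, data = coded)

        # Post hoc pairwise comparisons; the contrast between groups is based on
        # differences in mean rank sums, analogous to the mean rank differences in Table 1.
        dunn.test(x = coded$rank, g = coded$group, method = "bonferroni")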

    Statistical Analysis of Learning Assessment Results

    To test our hypothesis that there are no differences in learning outcomes between different video styles, we analyzed the results of the summative assessments that the students completed. The quizzes were graded following a predetermined rubric that rewarded both correctness and completeness out of a maximum score of 5.0. Mean scores from each video style were tested for differences with the Kruskal-Wallis test.

    RESULTS

    The Learning Glass and the Demo Rank Highest for Perceived Effectiveness

    First, we analyzed students’ ratings of the video styles on the survey. The survey scores were rank transformed, and mean ranks and 95% confidence intervals were measured: Classic Classroom (mean rank = 4.37; 95% CI, 4.11–4.67), Weatherman (4.55; 4.25–4.87), Demo (3.56; 3.27–3.92), Learning Glass (2.76; 2.36–2.91), Pen Tablet (4.14; 3.85–4.42), Interview (6.44; 6.22–6.70), Talking Head (4.59; 4.39–4.94), and Slides On/Off (5.30; 5.02–5.56) (Figure 2). The Learning Glass was the highest-ranked style, while the Interview style was the lowest-ranked style (Figure 2). The Demo was the second highest-ranked style. The Classic Classroom, Weatherman, Pen Tablet, Talking Head, and Slides On/Off styles did not have significantly different ranks (Figure 2) according to Kruskal-Wallis tests.

    FIGURE 2.

    FIGURE 2. Student satisfaction scores for each video style. UCLA undergraduates scored short lecture videos that were filmed in eight different styles using a Likert-type scale from 1 to 5, 1 being the lowest and 5 being the highest. The scores were rank transformed and plotted as violin plot distributions, overlaid with a box plot showing minimum, interquartile range, and maximum values. The means (white circles) and outliers (black dots) are shown. The Learning Glass had the highest rank and the Interview had the lowest rank. Mean ranks and 95% confidence intervals for each style are as follows: Classic Classroom (mean rank = 4.37; 95% CI, 4.11–4.67), Weatherman (4.55; 4.25–4.87), Demo (3.56; 3.27–3.92), Learning Glass (2.76; 2.36–2.91), Pen Tablet (4.14; 3.85–4.42), Interview (6.44; 6.22–6.70), Talking Head (4.59; 4.39–4.94), and Slides On/Off (5.30; 5.02–5.56). Rank analysis revealed that the Learning Glass format is favored by students. Rank 1 is highest; rank 8 is lowest.

    COAL for Video Styles

    Next, we analyzed the open-ended responses on the survey. Our COAL qualitative coding protocol allowed us to examine student comments and identify video features that students frequently mentioned in their responses. Our analysis of the Classic Classroom video style revealed that students valued the collaborative role of the director and instructor (n = 7) as well as the user experience (n = 72), while they scored the production value (n = 19) of the videos negatively (Figure 3). Students scored the Classic Classroom style highly for the comfortable and familiar setting, which enhanced their user experience. The low scores in production reflected the importance of having both the director and instructor working together to make this style effective. The Classic Classroom required a separate cameraperson and considerable postproduction editing to refine the finished video product.

    FIGURE 3.

    FIGURE 3. Rank analysis of videos reveals strengths and weaknesses within each category. Overall video style and subcategory kernel density distributions are shown to visualize COAL coding data from student surveys. The x-axis provides ranks from a range of 1–8, with ties being averaged. The y-axis represents the probability density functions with kernel smoothing. Graphical plots were created with R software.

    Similarly, students commented that production value (n = 45) weakened the Weatherman style (Figure 3), while noting the value of the instructor’s role (n = 50), the collaboration between the director and instructor (n = 33), as well as effective pedagogy (n = 16). The Weatherman style required significant support from a director during filming along with significant postproduction editing for effective use of this format. Students appreciated the increased visibility of the lecture material and the additional focus on the instructor.

    Of the eight video styles that we investigated, the Talking Head was the most similar to the Khan Academy videos (Khan Academy, 2016), with a small headshot featured at the bottom of the video to provide instructor presence within the full-screen lecture slides. Students rated the preproduction value (n = 64) positively for the Talking Head, mostly due to the increased visibility of the lecture slides (Figure 3). However, lesson design (n = 36), production (n = 19), and instructor role (n = 13) were rated negatively due to the lack of engagement and the occasional times when the headshot blocked material on the lecture slide.

    Production (n = 19) was again weak in the Pen Tablet when the instructor’s presence interfered with the visibility of the lecture material. Students also ranked user experience (n = 13) negatively for the Pen Tablet, because it was “boring” and “not engaging” (Figure 3). Within the didactic lecture styles, the Slides On/Off had the lowest average rank (Figure 3). The students reported that the frequent camera movements were distracting and interrupted viewing of the lecture slide contents.

    The overwhelmingly positive reception of the Learning Glass style was largely due to the positive engagement and connection with the instructor (n = 74) that was achieved through the collaborative effort between the director and instructor (n = 21). The Demo was scored highly for the instructor’s role (n = 5) and user experience (n = 88) based on the students’ interaction and connection with the instructor, in addition to the increased engagement with the material relative to traditional lecture slides. Students also commented on the educational value of the Demo format. However, they noted that complex lecture material could not be delivered in the Demo style, which led to their negative ratings for filming (n = 25) and production (n = 25).

    The Interview style received the lowest average scores out of all eight styles. Students generally felt that the style was uncomfortable and awkward because of the studio setup. However, students did see potential learning value from this style, and suggested that it be included as a supplemental video for office hours or a frequently asked questions segment. Many students commented that this style would not be appropriate for an entire course because of the limitations of presenting material solely in the interview format.

    Determining Directional Impact of Student Comments

    The positive and negative descriptors allowed identification and measurement of the effect size of certain comments (Figure 4). The rank differences between matching positive and negative codes, found with the post hoc Dunn’s multiple comparisons test, identified codes that affected student satisfaction more than others. To investigate the effects of each category on overall ranking, we compared the positive and negative comments within each subcategory and style in the COAL. The Kruskal-Wallis test revealed significant differences between positive and negative comments across all the subcategories and styles. To distinguish relationships between negative and positive comments within the subcategories, we used the nonparametric, post hoc Dunn’s multiple comparisons test. A higher mean rank difference indicated that the subcategory’s negative and positive scores strongly skewed to their respective extremes. A lower mean rank difference indicated that the subcategory was not as impactful to the overall ranking of a video style. User experience (1099.0), production (1013.0), lesson design (994.5), and the collaborative role of the instructor and director (925.6) had the highest differences, which reflected the high impact of these factors in determining students’ evaluations (Table 1). Most of the students’ comments focused on four subcategories: screen (40.1%), pedagogy (19.2%), user experience (17.6%), and production (6.9%). These subcategories showed larger mean rank differences than the others (Figure 4). Student’s t tests, Mann-Whitney tests, and Kolmogorov-Smirnov tests all indicated that positive and negative codes for each of the subcategories were significantly different (Table 1).

    FIGURE 4.

    FIGURE 4. Positive and negative rank analysis reveals the importance of the user experience. Positive and negative scores reveal the importance of user experience in students’ video ratings for learning effectiveness. For each of the 12 subcategories, the positive (blue) and negative (red) kernel density plots were overlaid to reveal differences in specific subcategories. Subcategories are aligned with each major category (content, roles, and stage). The x-axis provides ranks from a range of 1–8, with ties being averaged. The y-axis represents the probability density functions with kernel smoothing. Graphical plots were created with R software.

    Identification of Best Practices for Video Lessons

    By parsing the codes within the categories and subcategories, we identified strengths and weaknesses of each style (Supplemental Table S4). Additionally, 1907 of the 3112 total COAL codes (61.3%) mirrored at least one of Mayer’s multimedia principles to improve learning outcomes (Mayer, 2014b,c; Mayer and Fiorella, 2014; Mayer and Pilegard, 2014).

    Engaging Lesson Design and Visuals Should Be Emphasized during Video Production

    The Talking Head (n = 64) and Slides On/Off (n = 32) videos were weakly ranked in the preproduction subcategory due to staging errors that led to poor visibility of lecture material in the final recorded product. The Pen Tablet style (n = 54) excelled in lesson design, as students reacted positively to lecture slides with illustrations and animations that visualized content, compared with lecture slides with no illustrations and large blocks of text. Visuals improved the perceived learning value and created opportunities to engage students.

    The Learning Glass (n = 123) and Pen Tablet (n = 79) scored positively in the filming subcategory, while Interview (n = 64) and Slides On/Off (n = 99) scored negatively. Analysis of the specific comments revealed that students’ most frequent negative comments were focused on awkward user experiences and lack of engagement. The Pen Tablet minimized the instructor’s movements and emphasized the materials being presented. Furthermore, students frequently commented on lecture pacing, and they reported that the use of eyeglasses as a pointer was distracting.

    Collaboration between the Director and Instructor Is Critical

    Our analysis revealed that the collaboration of both the director and the instructor led to highly rated video lessons. Within the roles category, the combined instructor/director subcategory had the highest mean rank difference (Table 1), highlighting this subcategory’s strong influence on survey scores. However, the two roles do not necessarily have to be filled by different people. If the instructor has the technical skills to use a fully equipped recording studio, to record high-quality audio and video, and to edit video in postproduction, there does not need to be a separate director for the project. Students ranked the Learning Glass (n = 21) highly, while they ranked the Interview (n = 16) and the Slides On/Off (n = 20) poorly in the instructor/director subcategory. Additionally, students preferred certain styles strongly (n = 109) and positively ranked familiarity (n = 39).

    Effective Pedagogy and Positive User Experience Predominate in Successful Videos

    The user experience subcategory reflects the students’ personal experiences and affective responses to the videos. For instance, students reported that some of the video styles were “fun” and “cool” (n = 88) and “engaging” (n = 75), while others reported that the video styles were “awkward” and “impersonal” (n = 149) or “boring” (n = 46). These comments reflected the students’ affective responses, which may not be typically considered when lesson planning in the face-to-face classroom. The Demo (n = 88) and Learning Glass (n = 74) were strong, while the Interview (n = 125) was weak, in the user experience subcategory. The Demo received many positive comments such as “entertaining” and “engaging” for the inclusion of an analogy that simplified difficult concepts without the use of overly complicated technical jargon. Students suggested that demonstrations be included as a supplement to other lecture videos, as recommended in the summary of best practices (Table 2).

    TABLE 2. Summary of best practices for creating engagement in educational videosa

    Stages of development
     Lesson planning: Create engagement with real-world challenges, case studies, and illustrations; consider using the Learning Glass to take students through the process of problem solving; consider using the Demo on select topics.
      Mayer’s multimedia principles: redundancy, segmenting, and modality principles.
     Lecture slides: Use large, clear fonts such as Arial; avoid wordy slides and long lists of text; use diagrams and illustrations to help with concepts; images should be high quality to prevent distortion.
      Mayer’s multimedia principles: coherence, signaling, and spatial/temporal contiguity principles.
     Preproduction: The instructor should practice the lecture in the studio, focusing on pace and minimizing distracting gestures; gather student and faculty feedback on sample videos to meet student demands; design the studio to improve video and audio quality.
      Mayer’s multimedia principles: personalization and voice principles.
     Filming: The frame should include both instructor and content; the instructor should make eye contact with the camera; if using the Learning Glass, the instructor should wear black clothing; if using a green screen, the instructor should avoid green or patterned clothing and plan to wear solid-colored clothing, such as blue, black, or gray; some fabrics and textile weaving patterns can appear distorted on camera, so screen testing the instructor’s wardrobe is crucial; the instructor should have a live composite to monitor progress in real time; drop microphones are optimal for recording the speaker; the speaker should be familiar with the setting so that longer takes can be recorded to minimize camera cuts; cooler temperatures improve speaker comfort; make-up and refreshments should be available to maintain energy levels; schedule shorter recording sessions with only a few lectures during a given session.
      Mayer’s multimedia principles: personalization and modality principles.
    Roles
     Instructor: Create new lecture materials that are appropriate for asynchronous videos; include more illustrations, animations, and schematics than you normally would in a face-to-face classroom; practice delivery.
     Director: Effectively manage screen space; use high-definition filming equipment and a studio; edit videos to maintain pace and flow; enable production of video styles that are not feasible for an instructor alone; try several styles and run focus groups to address weaknesses; review practice sessions with the speaker to improve delivery; manage multiple camera feeds to minimize camera cuts and movement.
     Both: Minimize distraction; develop a video style and workflow that works for both members. This allows the instructor and director to focus on their own responsibilities, enabling maximal video quality and speaker performance.
    Content
     Pedagogy: Focus on important points, because students can rewatch videos if they are confused; shorter videos are received more positively; ensure good pacing; include illustrations and diagrams to limit text on slides and maximize visual learning. These factors all improve the perceived learning value.
     Screen: Make sure the speaker is visible; ensure visibility of lecture materials; prevent the speaker from blocking the material, or vice versa; create high-definition videos; use sharp, contrasting colors for digital pointers; use clear, sans serif typefaces; use high-quality images.
     Instructor: Minimize distracting gestures; choose clothing appropriate for the screen; practice delivery; prepare material that will be engaging and interactive; establish eye contact; solicit feedback on on-screen performance; maintain energy; have make-up and refreshments available as needed during recording sessions.
     User experience: Reduce the awkwardness (or negative affective responses) that some viewers may experience, based on the student population and its needs; design engaging lectures; talk directly into the camera and establish a connection with the viewer; minimize disjointed scene cuts; use accessible language; create an entertaining product.
     Production: Minimize errors in the video; manage the screen’s and the speaker’s placement; prioritize lecture material visibility; use high-quality recording and editing tools; minimize screen cuts.

    aThe table provides a summary of best practices that were identified from student surveys. These recommendations should support a positive user experience.

    The Classic Classroom (n = 198), Interview (n = 60), and Slides On/Off (n = 105) were weak, while Learning Glass (n = 177) was strong in the screen subcategory, which included all student comments that addressed the visual presentation of the videos, including the visibility of the text, slides, and instructor, as well as the camera direction. Analysis of the students’ specific comments revealed that many students focused on screen management issues. Students preferred high-quality videos, and they emphasized large text size, slide visibility, and high image quality.

    The Learning Glass (n = 101) and Pen Tablet (n = 36) were weak, while Interview (n = 137) and Slides On/Off (n = 27) were strong, in the pedagogy subcategory, which included comments focused on the learning value of the videos. Students reported that the Slides On/Off and Interview had high learning value, and they felt that the interactive quality of these styles improved the instructional value. Although the Learning Glass scored highly overall, it received weak ratings in the pedagogy category. Most criticism mentioned the shortcomings of the Learning Glass as an unsuitable lecture style for some of the more complex course materials that may require detailed images and animations that could not be accurately reproduced by drawing.

    Learning Outcomes of Didactic Video Styles Are Equal

    We found no statistically significant differences in learning outcomes between the video styles (Kruskal-Wallis test, p = 0.3501; Figure 5). Mean scores out of a maximum of 5.0 and 95% confidence intervals are reported: Classic Classroom (mean = 3.47; 95% CI, 3.02–3.91), Weatherman (3.35; 2.92–3.77), Learning Glass (3.35; 3.03–3.67), Pen Tablet (3.67; 3.26–4.09), Talking Head (3.51; 3.10–3.92), and Slides On/Off (3.86; 3.49–4.23).

    FIGURE 5.

    FIGURE 5. Learning outcomes on the pilot quiz across the didactic video styles. A seven-question online quiz with a 45-minute time limit was administered after students were randomized into groups that watched one of the six didactic video styles. The questions were designed to test all levels of the cognitive domain as described by Bloom’s taxonomy. Five questions were multiple choice and two questions were free response. The quizzes were graded following a predetermined rubric that rewarded both correctness and completeness. Students were given one attempt and the ability to refer to the lecture video if necessary. Mean scores and 95% confidence intervals are shown; group sizes were Classic Classroom (n = 31), Weatherman (n = 29), Learning Glass (n = 28), Pen Tablet (n = 31), Talking Head (n = 30), and Slides On/Off (n = 25).

    DISCUSSION

    Learning Glass and Demo Video Styles Rated Highest

    Students rated the Learning Glass and Demo video styles higher than the others on the satisfaction survey. Though the learning outcomes with the Learning Glass style were not statistically different from the outcomes with the other didactic styles, the student perceptions varied significantly in favor of the Learning Glass. Additionally, while the Demo video style was nondidactic and presented far less course material, the students also rated the Demo video style highly in the survey. This suggests that certain video styles are particularly well received in the online environment, regardless of the complexity and depth of the material presented. These results are consistent with previous findings that online materials can be designed more or less appropriately and that some video styles are more suitable than others (Kirkwood and Price, 2014; Morton et al., 2016).

    Learning Outcomes across Didactic Video Styles Are Equal

    This result is consistent with Mayer’s multimedia principles as applied in our asynchronous lecture videos, and we conclude that a wide range of video styles can potentially teach lecture material equally well. This is a valuable finding that highlights a unique advantage of asynchronous video lectures: advanced students can learn material from different video styles equally well as long as Mayer’s multimedia principles are followed. The major differences between these video styles, then, are the levels of engagement and satisfaction that students gain from the videos. Improving student satisfaction can have several benefits, including improving engagement (Barthelemy et al., 2015), meeting student expectations (Burgess et al., 2018), and increasing retention (Styron, 2010). It is also important to consider the role of universities as service providers and to make sure that their customers (students) are satisfied with the products they receive.

    COAL Reveals Strengths and Weaknesses of Each Video Style

    Analysis of the closed-ended Likert-type scale scores from the survey alone was limited, due to insignificant differences in mean scores and significant skewing of scores for five of the eight video styles (Figure 2). Only the Learning Glass, Demo, and Interview styles had statistically significant differences in ranking. Consequently, we investigated the open-ended survey responses for further analysis. The coded COAL comments were analyzed in conjunction with the associated survey scores to determine patterns between the open-ended student responses and the closed-ended survey scores. Analysis across each style, category, and subcategory helped to identify factors that strongly influenced the students’ ranking of the video lessons. The rank transformation reduced the skew of the score distributions, improving our ability to both interpret and visualize the data. This approach was used to identify specific characteristics of each video lecture style that affected the students’ overall ranking, with positive deviations indicating strengths and negative deviations indicating weaknesses. We found that certain categories affected some styles more than others when comparisons were made between the rank distributions for each style (Figure 3). Both the overall style rank distributions and the overall category rank distributions (Supplemental Figures S6 and S7) revealed those categories with a strong influence on the style scores.

    Confirmation of Mayer’s Multimedia Principles

    COAL analysis revealed specific best practices for creating video lessons that students found satisfying, engaging, and effective for learning, which both confirmed and aligned with Mayer’s multimedia principles. Applying COAL to the survey results from our student population was ideal for several reasons: 1) this group of students would be enrolling in upcoming online courses; 2) upper-division students can form more thoughtful opinions about teaching styles based on their prior collegiate experiences in science courses; and 3) students in a pre–health major are motivated to pursue postgraduate careers in healthcare and science, and consequently are heavily invested in their education. The link between student satisfaction and learning outcomes is still uncertain. There are conflicting studies showing that student satisfaction is not a strong predictor of learning outcomes (Johnson et al., 2000; Swan, 2001; Baylor and Ritchie, 2002; Picciano, 2002; Koohang and Durante, 2003; Wang, 2003; O’Neill et al., 2004; Johnston et al., 2005; Eom et al., 2006; Eom and Ashill, 2016; Smart and Cappel, 2006; Mayer, 2014a; Kirkwood and Price, 2014; Biel and Brame, 2016; Brame, 2016; Pickering et al., 2017). However, our results show that the advanced upper-division students in our study consistently reported high satisfaction with video features that incorporated Mayer’s multimedia principles, which are known to improve learning outcomes (Mayer, 2014b,c; Mayer and Fiorella, 2014; Mayer and Pilegard, 2014). This aligns with other studies indicating that student perceptions can predict learning outcomes (Lizzio et al., 2002). The experience of completing lower-division courses likely contributed to the students’ awareness of video style features that improved learning outcomes. Thus, while student satisfaction may not be predictive of subsequent learning outcomes in some scenarios, student satisfaction among older students may hold more predictive power in online education. This experience distinction among students warrants further study.

    Video Style Features That Strongly Influence Student Ratings

    Students expressed strong preferences for certain video styles and certain video style features. The Learning Glass was not only the highest-ranked style but also received the most written comments (483 comments or 17% of total comments). However, despite its high ratings, there were some flaws. Students suggested that this style be used to supplement lecture videos (in the form of practice problems and follow-up videos), because it would not be appropriate for lectures that require media presentations. This finding parallels Mayer’s multimedia principle emphasizing illustrations and animations instead of text.

    Similarly, the lower-rated video styles had some features that students commended. For instance, students suggested that the Interview style, the lowest-rated video style, be included as a supplemental video. With effective editing and course design, the Interview style can be a suitable format for hosting asynchronous online office hours, addressing common misconceptions, and answering frequently asked questions.

    Another feature that students applauded was the uninterrupted display of lecture slides and content. The Learning Glass and Pen Tablet styles both excelled here. Meanwhile, the Interview and Slides On/Off styles had distracting camera movements, poor screen management, and prolonged focus on the instructor that obstructed viewing of the material. Frequent scene changes in these styles also interrupted visibility of the content, which led to low ratings. Constant visibility of the slides and/or content is included in our recommended best practices and should be considered at each stage of video lesson design, particularly during postproduction editing (Table 2). Audio and visual mixing in postproduction should focus on reducing distracting background noise and minimizing scene changes to improve visibility of both the lecture materials and the instructor. These recommendations align with Mayer’s multimedia principles to manage essential overload (Mayer, 2014c; Mayer and Fiorella, 2014). Along the same lines, and in accordance with the signaling principle, a pointer with sharply contrasting colors should be used to direct the viewer’s attention on screen, as demonstrated by the strengths of the Learning Glass and Pen Tablet.

    One of the strongest recommendations for best practices is to create visual materials that are clear, legible, and unobstructed while the instructor remains visible (Table 2). We have created examples of lecture videos that fit these guidelines and designed the recording studio to facilitate this style (Supplemental Movie S8). A sample video created in accordance with our recommended best practices can be viewed in Supplemental Movie S9.

    Collaboration between the Director and Instructor Is Critical

    We found that the production and lesson design subcategories were critical factors influencing students’ rankings. Lesson design grounded in evidence-based teaching practices is important for effective instruction in any setting, face-to-face or online. An online course, however, additionally requires well-produced videos, as indicated by the high mean rank differences for the production subcategory and the instructor/director subcategory. The strong influence of these two subcategories supports the conclusion that video production is important, particularly the director’s collaboration with the instructor to produce a high-quality video.

    Given these strong preferences, choosing an appropriate video style is a top priority when designing an effective online course. Ideally, the instructor and the director would take time to consider the course learning goals and the resources available to find a style that would be effective and familiar for their students, although the authors acknowledge that institutional resources vary, particularly the availability of an audiovisual technician with cinematography expertise. If possible, however, we recommend dedicating resources to employ an audiovisual production expert or to train the instructor in the audiovisual skills needed for the desired lecture style, such as audio balancing, video editing, equipment setup, and studio recording. For example, recording in a professional studio enables superior audio quality, as reflected by the complete absence of negative student comments about sound quality in our survey. These skills are important, because even the video styles that seemed less technically demanding still required extensive directing and producing.

    The inclusion of a director also helped to minimize distractions while recording and improved the instructor’s on-screen presence and appearance. A director could focus on maximizing visibility of the on-screen materials and managing the final video product, while the instructor could focus on designing lesson plans appropriate for the online venue and presenting the materials effectively on recording day. The Demo (n = 65) and Learning Glass (n = 140) were rated strongly in the director subcategory, while the Interview (n = 80) and Slides On/Off (n = 112) were rated weakly. These four styles were particularly influenced by the quality of the final video product, requiring camera direction and video-editing expertise from a director (Table 2).

    The director, by assuming control of the technical aspects of the recording, allowed the instructor to focus primarily on instruction and delivery. While face-to-face teaching experience can help with preparation, the online setting and teaching to a camera will likely be unfamiliar. The instructor should develop his or her on-screen performance by creating visuals and practicing the lecture delivery in the recording studio (Table 2), paying attention to gesticulation, pacing, and speech delivery. This helps to reinforce Mayer’s personalization and voice principles (Mayer, 2014b). The director should also provide feedback to the instructor during practice sessions and during the preproduction and filming stages, which is expected to improve both lesson design and instructor performance. We recommend recording the practice session so that the instructor and director can review the lesson together and discuss areas for improvement. Practice sessions are also essential for identifying recording studio issues, such as optimal lighting for the instructor’s clothing and the range of on-screen movement.

    Students Value User Experience in Online Lecture Videos

    The COAL coding analysis also revealed areas for improvement for each of the video styles, including production value, screen management, camera direction, user experience, and greater use of illustrations. We also identified the importance of attending closely to design and to the student’s user experience. COAL revealed that the user experience subcategory had the highest mean rank difference, which suggests a large effect size (Table 1). This finding both confirms and expands on Mayer’s personalization principle, indicating that students strongly prefer engagement and connection, even in their asynchronous courses. The findings from the current study should help instructors who are creating or refining online courses to provide an effective and meaningful learning experience by implementing Mayer’s multimedia principles in lecture formats that facilitate students’ connection with the instructor. The impact and frequency of comments about user experience indicate that the online lecture videos were critiqued from a student’s perspective as well as from an end-user’s perspective, which supports previous findings that an emotional connection or affective response is important in the online classroom (Burke and Moore, 2003; Smith, 2003). This both expands the scope of online course design and confirms Mayer’s personalization principle by highlighting the importance of the student’s connection and interaction with the instructor (Mayer, 2014c).
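    As a concrete illustration of the mean rank difference used here as an effect-size proxy, the following is a minimal sketch under assumed data; the comment flags and scores are hypothetical, and the exact calculation used in the study may differ.

```python
import numpy as np
from scipy.stats import rankdata

# Hypothetical Likert scores, each paired with a flag for whether the
# associated coded comment falls in the "user experience" subcategory.
scores = np.array([5, 4, 5, 3, 2, 4, 1, 5, 3, 4])
in_user_experience = np.array([1, 1, 1, 0, 0, 1, 0, 1, 0, 0], dtype=bool)

# Rank-transform the skewed scores, then compare the mean rank of comments
# inside the subcategory with the mean rank of all remaining comments.
ranks = rankdata(scores)
mean_rank_diff = ranks[in_user_experience].mean() - ranks[~in_user_experience].mean()
print(f"Mean rank difference for user experience: {mean_rank_diff:.2f}")
```

    A larger positive difference indicates that comments tagged with that subcategory accompanied higher-ranked scores, which is how a subcategory’s influence on satisfaction can be read off the data.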

    The eight lecture video formats described in this study serve as possible video styles for instructors to consider when designing their own online course videos. The COAL results can inform current online instructors about possible strengths and weaknesses in their own courses based on student feedback. Furthermore, studies such as this one should encourage teachers and institutions to continue exploring the ability of asynchronous e-learning to address challenges in undergraduate enrollment. As Mayer concluded, research in this arena can help institutions focus on increasing fair access for diverse learners to high-quality, evidence-based instruction that uses institutional resources efficiently (Clark, 2014). Online education, when designed thoughtfully in this manner, has the potential to drastically improve both accessibility and outcomes for many students. Our findings can serve as an example of how to implement evidence-based multimedia principles in a fully online course for other public institutions interested in increasing their online course offerings.

    Limitations and Future Directions

    Though they are unlikely to significantly affect our conclusions, some limitations of the study should be considered. Because the students watched the videos in the same order in a single session, the sequence of the videos and survey fatigue may have affected the student responses. It is possible that some comments are specific to the viewing conditions rather than to the video styles in general. Additionally, the participants in our study were UCLA students in the same life sciences major, so the views captured from this sample may not represent the views of all students. However, our findings parallel those of other studies that identified characteristics of satisfying online courses (Volery and Lord, 2000; Swan, 2001; Smith, 2003; Watkins, 2005). Also, participation in the survey was voluntary, and students could remove themselves from the study at any time. Finally, the standardized lecture material across all video styles helped to reduce sequence effects. For these reasons, we do not believe these limitations invalidate the survey results.

    We also note that the conclusions of this study should be applied conservatively, because they may not generalize to other video styles. Future application of COAL to other online courses could refine our broad findings and potentially identify specific codes or themes that generalize across video styles. Additionally, randomizing the sequence of videos and reducing the length of the viewing session may improve future studies. Because our study found that advanced students learned equally well across different video styles, future studies could identify the specific skills, experiences, or institutional factors that enabled them to succeed across a variety of instructional styles.

    CONCLUSIONS

    As online instruction grows in the form of flipped classrooms, hybrid classrooms, fully online courses, and massive open online courses, identifying effective practices will be important for guiding future course design. Instructors will need to adapt the pedagogies they use in face-to-face courses when teaching online, but the establishment of best practices could ease the transition to online instruction. The body of literature on learning outcomes, student satisfaction, and student engagement in online courses is growing, but some gaps still need to be filled. In addition to learning outcomes, the affective responses of students to online education can inform the creation of online learning experiences that are both effective and engaging.

    In addition, these findings can be particularly encouraging for instructors and course designers who have limited resources and institutional support. Though the higher-budget productions, the Learning Glass and Demonstration video styles, were rated highest for student satisfaction, our results show that any of the six didactic styles can produce equal learning outcomes. The Pen Tablet and the Slides On/Off are less costly styles that can be recreated with modest resources, and economical, accessible alternatives to the interactive pen tablet, camera, and video-editing software can be used instead. Furthermore, course designers can implement our COAL protocol and student survey at their home institutions to serve the needs of their own student populations.

    ACKNOWLEDGMENTS

    We thank Jeff Roth (Office of Academic Planning and Budget, UCLA) and Adam Sugano (Enrollment Planning and Academic Analysis, Institutional Research and Decision Support, UCLA) for providing campus enrollment data. We thank Janice Reiff, PhD (Department of History, UCLA), for support and discussion. We thank John Toledo, Tabares Lucia, and Joanne Valli-Meredith, PhD (Office of Instructional Development, UCLA), for the initial coding of student comments. We thank Andrew D. Watson, MD, PhD (David Geffen School of Medicine, UCLA), for assisting with survey administration. We thank Kevin P. Campbell, PhD (University of Iowa Carver College of Medicine, Iowa City, IA), for the example of the balloon demonstration used in the Demo video. Funding was provided by grants from the Office of the President Innovative Learning Technology Initiative, UC; Office of Instructional Development, UCLA; the Parent Project Muscular Dystrophy, Limb Girdle Muscular Dystrophy 2i Fund, Coalition Duchenne; and NIH NIAMS U54 AR052646 (to R.H.C.).

    REFERENCES

  • Allen, I. E., & Seaman, J. (2017). Distance education enrollment report 2017. Higher Education Reports. Retrieved from https://onlinelearningsurvey.com/reports/digtiallearningcompassenrollment2017.pdf
  • Barthelemy, R. S., Hedberg, G., Greenberg, A., & McKay, T. (2015). The climate experiences of students in introductory biology. Journal of Microbiology & Biology Education, 16(2), 138.
  • Baylor, A. L., & Ritchie, D. (2002). What factors facilitate teacher skill, teacher morale, and perceived student learning in technology-using classrooms? Computers & Education, 39(4), 395–414. doi: 10.1016/S0360-1315(02)00075-1
  • Benbunan-Fich, R., & Hiltz, S. R. (2003). Mediators of the effectiveness of online courses. IEEE Transactions on Professional Communication, 46(4), 298–312.
  • Biel, R., & Brame, C. J. (2016). Traditional versus online biology courses: Connecting course design and student learning in an online setting. Journal of Microbiology & Biology Education, 17(3), 417.
  • Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1965). Taxonomy of educational objectives: The classification of educational goals (Bloom, B. S., Ed.). Ann Arbor, MI: David McKay Company, Inc.
  • Boettcher, J. (2011). Ten best practices for teaching online: Quick guide for new online faculty. Designing for Learning. Retrieved from http://designingforlearning.info/writing/ten-best-practices-for-teaching-online/
  • Brame, C. J. (2016). Effective educational videos: Principles and guidelines for maximizing student learning from video content. CBE—Life Sciences Education, 15(4), es6.
  • Brown, K. M. (1996). The role of internal and external factors in the discontinuation of off-campus students. Distance Education, 17(1), 44–71.
  • Brownell, S. E., & Tanner, K. D. (2012). Barriers to faculty pedagogical change: Lack of training, time, incentives, and … tensions with professional identity? CBE—Life Sciences Education, 11(4), 339–346.
  • Bullen, M. (2007). Participation and critical thinking in online university distance education. International Journal of E-Learning & Distance Education, 13(2), 1–32.
  • Burgess, A., Senior, C., & Moores, E. (2018). A 10-year case study on the changing determinants of university student satisfaction in the UK. PLoS ONE, 13(2), e0192976.
  • Burke, L. A., & Moore, J. E. (2003). A perennial dilemma in OB education: Engaging the traditional student. Academy of Management Learning & Education, 2(1), 37–52.
  • Chen, P.-S. D., Lambert, A. D., & Guidry, K. R. (2010). Engaging online learners: The impact of Web-based learning technology on college student engagement. Computers & Education, 54(4), 1222–1232.
  • Clark, R. C. (2014). Multimedia learning in e-courses. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (2nd ed., pp. 842–881). Cambridge, UK: Cambridge University Press.
  • Creswell, J. W. (2013). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Thousand Oaks, CA: Sage.
  • Davis, J. H. (2000). Traditional vs. on-line learning: It’s not an either/or proposition. Employment Relations Today, 27(1), 47.
  • Dooley, L. M., Frankland, S., Boller, E., & Tudor, E. (2018). Implementing the flipped classroom in a veterinary pre-clinical science course: Student engagement, performance, and satisfaction. Journal of Veterinary Medical Education, 45(2), 195–203.
  • Eom, S. B., & Ashill, N. (2016). The determinants of students’ perceived learning outcomes and satisfaction in university online education: An update. Decision Sciences Journal of Innovative Education, 14(2), 185–215.
  • Eom, S. B., Wen, H. J., & Ashill, N. (2006). The determinants of students’ perceived learning outcomes and satisfaction in university online education: An empirical investigation. Decision Sciences Journal of Innovative Education, 4(2), 215–235.
  • Fayer, L. (2017). A multi-case study of student perceptions of instructor-created videos in online courses. International Journal for Scholarship of Technology Enhanced Learning, 1(2), 67–90.
  • Flowers, L. O., White, E. N., Raynor, J. E. Jr., & Bhattacharya, S. (2012). African American students’ participation in online distance education in STEM disciplines. SAGE Open, 2(2), 2158244012443544. doi: 10.1177/2158244012443544
  • Frazee, J., & Anderson, M. (2014). Learning Glass specifications and assembly manual. Instructional Technology Services, San Diego State University, San Diego. Retrieved September 26, 2017, from https://its.sdsu.edu/docs/Learning_Glass_Specifications.pdf
  • Gormally, C., Evans, M., & Brickman, P. (2014). Feedback about teaching in higher ed: Neglected opportunities to promote change. CBE—Life Sciences Education, 13(2), 187–199.
  • Green, R. A., Whitburn, L. Y., Zacharias, A., Byrne, G., & Hughes, D. L. (2018). The relationship between student engagement with online content and achievement in a blended learning anatomy course. Anatomical Sciences Education, 11(5), 471–477.
  • Hansen, J. D., & Reich, J. (2015). Democratizing education? Examining access and usage patterns in massive open online courses. Science, 350(6265), 1245–1248.
  • Hara, N. (2000). Student distress in a Web-based distance education course. Information, Communication & Society, 3(4), 557–579.
  • Harvey, C., Eshleman, K., Koo, K., Smith, K. G., Paradise, C. J., & Campbell, A. M. (2016). Encouragement for faculty to implement vision and change. CBE—Life Sciences Education, 15(4), es7.
  • Haugen, S., LaBarre, J., & Melrose, J. (2001). Online course delivery: Issues and challenges. Issues in Information Systems, 2, 127–131.
  • Johnson, S. D., Aragon, S. R., & Shaik, N. (2000). Comparative analysis of learner satisfaction and learning outcomes in online and face-to-face learning environments. Journal of Interactive Learning Research, 11(1), 29–49.
  • Johnston, J., Killion, J., & Oomen, J. (2005). Student satisfaction in the virtual classroom. Internet Journal of Allied Health Sciences and Practice, 3(2), 6.
  • Khan Academy. (2016). Biology overview. Retrieved September 16, 2017, from https://youtu.be/dQCsA2cCdvA
  • Kirkwood, A., & Price, L. (2014). Technology-enhanced learning and teaching in higher education: What is “enhanced” and how do we know? A critical literature review. Learning, Media and Technology, 39(1), 6–36.
  • Koohang, A., & Durante, A. (2003). Learners’ perceptions toward the Web-based distance learning activities/assignments portion of an undergraduate hybrid instructional model. Journal of Information Technology Education: Research, 2(1), 105–113.
  • Krause, K. L., & Coates, H. (2008). Students’ engagement in first-year university. Assessment & Evaluation in Higher Education, 33(5), 493–505.
  • Lee, Y., & Choi, J. (2011). A review of online course dropout research: Implications for practice and future research. Educational Technology Research and Development, 59(5), 593–618. doi: 10.1007/s11423-010-9177-y
  • Liaw, S.-S., & Huang, H.-M. (2002). How Web technology can facilitate learning. Information Systems Management, 19(1), 56–61.
  • Lizzio, A., Wilson, K., & Simons, R. (2002). University students’ perceptions of the learning environment and academic outcomes: Implications for theory and practice. Studies in Higher Education, 27(1), 27–52.
  • Mayer, R. E. (2014a). The Cambridge handbook of multimedia learning (2nd ed.). Cambridge, UK: Cambridge University Press.
  • Mayer, R. E. (2014b). Cognitive theory of multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (2nd ed., pp. 43–71). Cambridge, UK: Cambridge University Press.
  • Mayer, R. E. (2014c). Principles based on social cues in multimedia learning: Personalization, voice, image, and embodiment principles. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (2nd ed., pp. 345–368). Cambridge, UK: Cambridge University Press.
  • Mayer, R. E., & Fiorella, L. (2014). Principles for reducing extraneous processing in multimedia learning: Coherence, signaling, redundancy, spatial contiguity, and temporal contiguity principles. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (2nd ed., pp. 279–315). Cambridge, UK: Cambridge University Press.
  • Mayer, R. E., & Pilegard, C. (2014). Principles for managing essential processing in multimedia learning: Segmenting, pretraining, and modality principles. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (2nd ed., pp. 316–344). Cambridge, UK: Cambridge University Press.
  • McDonald, D. S. (1999). Improved training methods through the use of multimedia technology. Journal of Computer Information Systems, 40(2), 14–22.
  • Morton, C. E., Saleh, S. N., Smith, S. F., Hemani, A., Ameen, A., Bennie, T. D., & Toro-Troconis, M. (2016). Blended learning: How can we optimise undergraduate student engagement? BMC Medical Education, 16(1), 195.
  • O’Neill, K., Singh, G., O’Donoghue, J., & Cope, C. (2004). Implementing elearning programmes for higher education: A review of the literature. Journal of Information Technology Education, 3(1), 313–323.
  • Paas, F., & Sweller, J. (2014). Implications of cognitive load theory for multimedia learning. In Mayer, R. E. (Ed.), The Cambridge handbook of multimedia learning (2nd ed., pp. 27–42). Cambridge, UK: Cambridge University Press.
  • Picciano, A. G. (2002). Beyond student perceptions: Issues of interaction, presence, and performance in an online course. Journal of Asynchronous Learning Networks, 6(1), 21–40.
  • Pickering, J. D., Henningsohn, L., DeRuiter, M. C., de Jong, P. G., & Reinders, M. E. (2017). Twelve tips for developing and delivering a massive open online course in medical education. Medical Teacher, 37(7), 1–6.
  • Pickering, J. D., & Swinnerton, B. J. (2019). Exploring the dimensions of medical student engagement with technology-enhanced learning resources and assessing the impact on assessment outcomes. Anatomical Sciences Education, 12(2), 117–128.
  • Quitadamo, I. J., & Brown, A. (2001). Effective teaching styles and instructional design for online learning environments. Paper presented at the National Educational Computing Conference (Chicago, IL).
  • Riddle, E., & Gier, E. (2019). Flipped classroom improves student engagement, student performance, and sense of community in a nutritional sciences course (P07-007-19). Current Developments in Nutrition, 3(1), 657–659.
  • Rubenstein, H. (2003). Recognizing e-learning’s potential & pitfalls. Learning & Training Innovations, 4(4), 38.
  • Seaman, J. E., Allen, I. E., & Seaman, J. (2018). Grade increase: Tracking distance education in the United States. Oakland, CA: Babson Survey Research Group.
  • Shapiro, C., Ayon, C., Moberg-Parker, J., Levis-Fitzgerald, M., & Sanders, E. R. (2013). Strategies for using peer-assisted learning effectively in an undergraduate bioinformatics course. Biochemistry and Molecular Biology Education, 41(1), 24–33. doi: 10.1002/bmb.20665
  • Smart, K., & Cappel, J. (2006). Students’ perceptions of online learning: A comparative study. Journal of Information Technology Education: Research, 5(1), 201–219.
  • Smith, N. N. (2003). Characteristics of successful adult distance instructors for adult learners. Inquiry, 8(1), n1.
  • Styron, R., Jr. (2010). Student satisfaction and persistence: Factors vital to student retention. Research in Higher Education Journal, 6, 1.
  • Swan, K. (2001). Virtual interaction: Design factors affecting student satisfaction and perceived learning in asynchronous online courses. Distance Education, 22(2), 306–331.
  • Volery, T., & Lord, D. (2000). Critical success factors in online education. International Journal of Educational Management, 14(5), 216–223.
  • Wang, Y.-S. (2003). Assessment of learner satisfaction with asynchronous electronic learning systems. Information & Management, 41(1), 75–86. doi: 10.1016/s0378-7206(03)00028-4
  • Watkins, R. (2005). Developing interactive e-learning activities. Performance Improvement, 44(5), 5–7.
  • Willging, P. A., & Johnson, S. D. (2009). Factors that influence students’ decision to dropout of online courses. Journal of Asynchronous Learning Networks, 13(3), 115–127.
  • Woodworth, J. L., Raymond, M. E., Chirbas, K., Gonzalez, M., Negassi, Y., Snow, W., & Van Donge, C. (2015). Online charter school study 2015. Stanford, CA: Center for Research on Educational Outcomes. Retrieved from https://charterschoolcenter.ed.gov/sites/default/files/files/field_publication_attachment/Online%20Charter%20Study%20Final.pdf