
Online Plagiarism Training Falls Short in Biology Classrooms

    Published Online: https://doi.org/10.1187/cbe.13-08-0146

    Abstract

    Online plagiarism tutorials are increasingly popular in higher education, as faculty and staff try to curb the plagiarism epidemic. Yet no research has validated the efficacy of such tools in minimizing plagiarism in the sciences. Our study compared three plagiarism-avoidance training regimens (i.e., no training, online tutorial, or homework assignment) and their impacts on students’ ability to accurately discriminate plagiarism from text that is properly quoted, paraphrased, and attributed. Using pre- and postsurveys of 173 undergraduate students in three general ecology courses, we found that students given the homework assignment had far greater success in identifying plagiarism or the lack thereof compared with students given no training. In general, students trained with the homework assignment more successfully identified plagiarism than did students trained with the online tutorial. We also found that the summative assessment associated with the plagiarism-avoidance training formats (i.e., homework grade and online tutorial assessment score) did not correlate with student improvement on surveys through time.

    INTRODUCTION

    The incidence and proposed causes of plagiarism in higher education are well documented (Scanlon and Neumann, 2002; Park, 2003; McCabe, 2005; Dee and Jacob, 2012). In response, faculty and researchers have argued that students should be taught how to recognize and avoid plagiarism, as relying exclusively on punishment to discourage plagiarism is ineffective (Roig, 1997; Moniz et al., 2008; Mages and Garson, 2010; Risquez et al., 2013). A key question, however, is how to effectively teach students to avoid plagiarism and properly incorporate others’ ideas into their work.

    Literature from teaching faculty and librarians asserts that online tutorials are an ideal method for addressing plagiarism (Nichols et al., 2003; Jackson, 2006; Snow, 2006; Silver and Nickel, 2007; Mages and Garson, 2010; Germek, 2012). Online tutorials can benefit students and instructors alike, offering flexibility in the timing and place of instruction, ease of creation with readily available authoring software, data output for assessment purposes, and, potentially, a reduced incidence of plagiarism (Mages and Garson, 2010; Dee and Jacob, 2012; Germek, 2012). However, despite the prevalence of online plagiarism tutorials in academia, assessment of their efficacy is sparse.

    Assessment oftentimes is not included in the development or implementation of online plagiarism tutorials, and faculty and librarians must rely on observations or anecdotal evidence to judge whether a tutorial meets its objectives (Thornes, 2012). Some rely on students’ self-reported perceptions of a tutorial's effectiveness, which may not parallel actual student performance or knowledge (Mages and Garson, 2010; Oldham, 2011; Risquez et al., 2013). Moreover, Germek (2012) found that, of 156 institutions surveyed that offered plagiarism-prevention instruction, 80% failed to incorporate pre- and posttests. One study that did include such assessments compared only students’ understanding of plagiarism and citing sources, not their ability to properly paraphrase material or to recognize plagiarism (Jackson, 2006). The few studies that have empirically tested online plagiarism tutorials have shown that online instruction yields information-literacy learning gains equivalent to those of in-class library instruction (Germain et al., 2000; Holman, 2000; Nichols et al., 2003; Silver and Nickel, 2007). Yet none of these studies focused on education within the sciences, nor did they compare online plagiarism tutorials with no instruction at all.

    Many higher-level biology courses demand that students integrate scientific literature into their work, yet they do not provide adequate instruction in how to do so (Nuss, 1984; Power, 2009). The current study helps address whether online plagiarism-avoidance instruction can fulfill this need. The primary goal of our study was to compare types of plagiarism training to optimize student success in recognizing plagiarism or the lack thereof. From previous work (Holt, 2012), we know that students trained with a homework assignment discriminate plagiarized text from properly quoted, paraphrased, and attributed text better than students who receive no training. However, Stanger-Hall (2012) suggests that students perform better on critical-thinking tasks when assessed with constructed-response instruments, akin to our homework assignment, than with multiple choice–only instruments, reflective of our online training. Therefore, our first hypothesis proposed that students trained with an online plagiarism tutorial would perform intermediate to the no-training and assignment-training groups. Second, we evaluated how different types of plagiarism training differentially aided students in identifying various severities of plagiarism (i.e., blatant and no plagiarism, representing the two ends of the spectrum). Holt (2012) demonstrated that untrained students are adept at identifying the ends of this gradient and more commonly fail to identify its middle (i.e., proper paraphrases or quotations lacking citations, or patchwork paraphrases with proper attribution; Howard, 1995). Finally, we hypothesized that the assessment linked to each training (i.e., homework grade or online tutorial assessment score) would be a good measure of students’ ability to identify plagiarism or the lack thereof and would positively correlate with performance on the surveys.

    MATERIALS AND METHODS

    Survey Instrument

    We surveyed students from three sections of a general ecology course at a public postsecondary institution in the western United States. Each section represented one treatment group that participated over a single semester. Surveys were administered during the first and last weeks of the Spring 2010, Fall 2010, and Spring 2012 semesters. Surveys were administered online, and responses were collected using SurveyMonkey (www.surveymonkey.com).

    The survey consisted of two parts. The first part gathered unique identifiers, information on each student's background related to plagiarism, and demographics. The second part was a plagiarism knowledge survey (PKS) adapted from Roig (1997) to assess student success in recognizing plagiarism or the lack thereof. Our PKS contained a total of 12 questions, in keeping with other PKSs of similar length that have successfully demonstrated short-term learning gains (Roig, 1997, 2001; Landau et al., 2002). The 12 questions were derived from two excerpts, each rewritten in six versions. The two short excerpts from the secondary literature (Pace et al., 1999; Licht et al., 2010) were chosen because they shared the common theme of trophic cascades and were readable by an average second-year undergraduate student. The two excerpts represented two different reading levels, evaluated using the Flesch Reading Ease Score (Flesch, 1948; see the sketch following Table 1). Because Holt (2012) showed that the readability of these two passages did not affect students’ ability to discriminate plagiarism, the two excerpts are treated as duplicate trials in the present study. Based on these original passages, students were asked to judge whether six versions rewritten by Holt (2012) were plagiarized or not. The six versions represented different levels of plagiarism severity (Table 1).

    Table 1. Explanation of plagiarism severity of the six new versions provided to students in the PKS

    Plagiarism severity(a) | Quotation marks(b) | Maximum length of word strings identical to original(c) | Proper citation | Correct response
    -----------------------|--------------------|---------------------------------------------------------|-----------------|-----------------
    Good1                  | NA                 | 2, 1                                                    | Present         | Not plagiarized
    Good2                  | Present            | 46, 57                                                  | Present         | Not plagiarized
    Fair1                  | NA                 | 9, 15                                                   | Present         | Plagiarized
    Fair2                  | NA                 | 2, 4                                                    | Absent          | Plagiarized
    Poor1                  | NA                 | 15, 8                                                   | Absent          | Plagiarized
    Poor2                  | Absent             | 46, 57                                                  | Absent          | Plagiarized

    (a) Each severity rating (e.g., Good1) was represented twice in the survey; one question reflected an excerpt from Pace et al. (1999), and the other reflected an excerpt from Licht et al. (2010).

    (b) NA = the version included a paraphrase, not an attempted quotation; thus quotation marks were not appropriate.

    (c) Proper names were not included in the word-string count unless the entire version was copied. The first number is the maximum word-string length in the version based on Pace et al. (1999); the second is the maximum in the version based on Licht et al. (2010).
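
    For reference, the Flesch Reading Ease Score used to characterize the two excerpts combines average sentence length and average syllables per word; higher scores indicate easier text. The following is a minimal Python sketch of the computation; the vowel-group syllable counter is a rough heuristic of our own, not the procedure used in the study.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch (1948) Reading Ease: higher scores indicate easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def count_syllables(word: str) -> int:
        # Naive heuristic: count contiguous vowel groups, minimum of one.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Example: a short passage in the spirit of the survey excerpts.
print(round(flesch_reading_ease(
    "Trophic cascades occur when predators suppress their prey. "
    "These effects can ripple through entire food webs."), 1))
```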

    Independent and Response Factors

    The primary explanatory factor of interest in our study was plagiarism training, with three levels: no training, training via an online plagiarism module, and training via a plagiarism homework assignment. All three class sections required students to complete three to four writing assignments, with the learning objective that students would develop critical-thinking and synthesis skills while avoiding plagiarism (Holt, 2012). In the Spring 2010 class, plagiarism was addressed only in the syllabus, which supplied the university's definition of plagiarism, and in a 10-min demonstration by the instructor on improper paraphrasing (i.e., cutting and pasting Internet text) midway through the course (Holt, 2012). The students in this class (n = 43) represent the no-training group.

    Students in the Fall 2010 class (n = 81) were required to complete a plagiarism homework assignment (Holt, 2012). This assignment, adapted from a similar one by Paul C. Smith at Alverno College, provides extensive definitions of plagiarism; guidelines; and examples of proper quoting, citing (according to Council of Science Editors [CSE] style), and paraphrasing (see Holt, 2012). The learning objectives of this summative assignment were that students would be able to identify scientific literature, discriminate among different types of literature, generate citations in CSE style, and construct quotations and paraphrases free of plagiarism on subsequent university assignments. Because it required students to apply information and synthesize novel sentences, the homework assignment assessed higher levels of critical-thinking skills. The assignment was completed entirely outside of class. Informally interviewed students reported that the assignment took an average of 120 min to complete, and each student's assignment took E.A.H. roughly 45 min to grade. The assignment was graded and worth 9% of the final grade. The students in this class represent the assignment-training group.

    Students in the Spring 2012 class (n = 49) were required to complete an online plagiarism module. The online module (https://learn-usu.uen.org/courses/31801) contained both video and written content and five short quizzes (one to six questions each). The module included definitions of plagiarism and guidelines and examples of proper quoting, paraphrasing, and citing (according to American Psychological Association, Modern Language Association, and CSE styles). The learning objectives for this training differed from those of the assignment discussed above; students were expected to be able to identify definitions of plagiarism, recognize plagiarism or the lack thereof, and compare citation styles. Student learning from this training was assessed at lower levels of cognitive thinking using multiple-choice and true–false questions. Completing the online tutorial took the average student approximately 40 min, and the quizzes were scored by the learning management system. Quiz scores did not affect students’ grades; rather, completing the module and quizzes earned each student 4% of the final grade. This class is considered the online-training group.

    While the latter two classes both provided training to avoid plagiarism, the students were trained and assessed in different ways. The content of instruction strongly overlapped, but the delivery differed (i.e., online vs. face-to-face). Other studies, however, have shown that student achievement is comparable between online and in-class instruction (Nichols et al., 2003; Bernard et al., 2004). Our comparison of the homework and online-training formats therefore focused on the differences in what was expected of students and in the types of assessment. Of the 25 online plagiarism tutorials accessible through the American Library Association's Peer-Reviewed Instructional Materials Online Database and simple Google searches, 96% contained strictly multiple-choice, drag-and-drop, or true–false questions that evaluated only lower-order cognitive skills. Based on this sampling, our online tutorial adequately represents the lower cognitive demand commonly expected of online plagiarism tutorials. In contrast, our homework assignment addressed higher-order cognitive skills and presumably required students to expend more time and effort to meet its benchmarks.

    A second factor shown to be important when assessing unintentional plagiarism is the severity of the plagiarism (Holt, 2012). At one end of a severity gradient, good plagiarism-avoidance behaviors include using quotation marks around direct quotes, avoiding paraphrases that copy word strings of five or more words from the original source (Howard, 1995; Roig, 2001), and including a proper citation. At the opposite end of the gradient, work may lack all of these behaviors, constituting blatant plagiarism. Implementing only one or two of these behaviors falls in the middle of this severity scale. Previous research has demonstrated that students commonly fail to identify plagiarism in the middle of this gradient (Wilhoit, 1994; Roig, 1997; Soto et al., 2004; Yeo, 2007; Holt, 2012). We simulated the two ends and the middle of this gradient by applying or neglecting the above plagiarism-avoidance behaviors in six rewritten versions of each passage in the PKS (Table 1; the word-string criterion is illustrated in the sketch below).
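
    To make the word-string criterion concrete, the following Python sketch computes the longest run of consecutive words that a rewritten passage shares with its source, the quantity reported in Table 1. The function and example sentences are ours, for illustration only.

```python
import re

def longest_shared_word_string(original: str, rewrite: str) -> int:
    """Length (in words) of the longest word sequence the rewrite copies
    verbatim from the original (a longest-common-substring dynamic
    program over word tokens)."""
    a = re.findall(r"[a-z']+", original.lower())
    b = re.findall(r"[a-z']+", rewrite.lower())
    best = 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

# Runs of five or more words are a common flag for patchwork paraphrase
# (Howard, 1995; Roig, 2001).
src = "small populations of wolves can be used for ecosystem restoration"
para = "managers can use small populations of wolves for restoring ecosystems"
print(longest_shared_word_string(src, para))  # -> 4
```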

    Our third factor was a repeated measure in time. Owing to inherent variability in students’ past experiences with plagiarism, we collected responses to the same survey during the first and last week of each semester. As part of the survey, each student selected a unique yet confidential tracking number that allowed for longitudinal comparisons of his or her survey responses.

    Student success, our response factor, was defined as the number of excerpts correctly identified as “plagiarized” or “not plagiarized,” tallied over the two reading levels for each of the six severity-level versions, such that each response was coded as zero, one, or two successes out of two excerpts. We used a generalized linear mixed model with a binomial distribution and a logit link for the three-way factorial in a split-split plot design to test the effects of training, severity, and time on students’ ability to correctly identify plagiarism or the lack thereof. Students served as whole-plot units, with plagiarism training as the whole-plot factor (assignment training, online training, or no training), severity (see Table 1) as the subplot factor associated with the multiple excerpts presented to each student, and time (the first or last week of the semester) as the sub-subplot factor. The analysis was performed using the GLIMMIX procedure with Laplace estimation in SAS/STAT for Windows Version 9.3 (TS1M2; SAS Institute, Cary, NC). No adjustment was needed for overdispersion.
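
    The analysis was run in SAS, but the core model can be sketched in Python. The snippet below fits a logit-link binomial mixed model with random student intercepts using statsmodels; it is a simplified analogue for illustration only (the file and column names are hypothetical, and it does not reproduce the full split-split-plot error structure of the GLIMMIX analysis).

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical long-format data: one row per student x excerpt x survey,
# with `correct` coded 0/1. File and column names are ours, not the authors'.
df = pd.read_csv("pks_responses.csv")  # student, training, severity, time, correct

# Binomial (logit-link) mixed model: training x severity x time fixed
# effects, with random student intercepts standing in for the whole-plot
# error term of the split-split-plot design.
model = BinomialBayesMixedGLM.from_formula(
    "correct ~ C(training) * C(severity) * C(time)",
    vc_formulas={"student": "0 + C(student)"},
    data=df,
)
result = model.fit_map()  # Laplace (posterior-mode) approximation
print(result.summary())
```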

    RESULTS

    Over three semesters, 215 students participated in this study. Eighty percent of the participants (173 students) completed both pre- and postsurveys. For the students who reported demographic information, breakdowns were as follows: female, 35%; male, 65%; freshman, 8%; sophomore, 36%; junior, 41%; senior, 14%; and postbaccalaureate, 1%. Ninety-five percent of students identified their majors within the Colleges of Science or Natural Sciences. The self-reported average grade point average was 3.3 on a 0.0–4.0 scale. On the presurveys, administered during the first week of the semester, more than half of the class reported that their understanding of what constitutes plagiarism was good (54%), almost a third indicated that their understanding was very good (32%), 13% reported a fair understanding, and 1% reported a poor understanding.

    Before any training, no significant differences in student success existed among the classes on the presurveys (F2, 170 = 0.13, P = 0.879), regardless of question severity rating (training × severity F10, 850 = 1.65, P = 0.089). On average, students failed to identify plagiarism, or misidentified properly quoted, paraphrased, or attributed material, roughly a fifth of the time (assignment group: mean student success [M] = 81.3%, SE = 3.0%; online-training group: M = 79.4%, SE = 3.8%; no-training group: M = 79.3%, SE = 4.2%; see Figure 1). Considering that only eight students (4.6%) achieved perfect success on the presurvey and six students (3.5%) scored 33% or lower, representing the two ends of the spectrum, this overall pattern reflects most students missing at least one question. Most errors occurred when students judged plagiarized versions, particularly excerpts in the middle of the severity range, to be acceptable; students were very adept at correctly identifying properly quoted, paraphrased, and attributed material as not plagiarized (Good1: M = 97.1%, SE = 1.0%; Good2: M = 96.9%, SE = 1.0%), leaving little room for improvement on postsurveys.

    Figure 1. Comparison of student success rates (the proportion of responses correctly identified as plagiarized or not plagiarized) between the two survey periods. The homework assignment–training group (depicted by black triangles) exhibited a large significant learning gain over time relative to the no-training control (shown by gray squares). Additionally, the online-training group (depicted by open circles) showed a slight, yet significant, increase in postsurvey scores as compared with presurvey scores when compared with the no-training control. The data are inverse-link estimates of the means; error bars represent ± 1 SE.

    Training and Plagiarism Severity Effects

    Averaging over severity levels, there was a significant training × time interaction (F2, 1020 = 16.60, P < 0.001; Figure 1). While student success rates did not differ significantly between pre- and postsurveys for the no-training control (F1, 1020 = 0.06, P = 0.807), both training groups exhibited significantly higher student success on postsurveys compared with presurveys. Students trained by the online module demonstrated a 7% increase in student success (F1, 1020 = 6.60, P = 0.010) as compared with a 15% gain achieved by students who completed the homework assignment (F1, 1020 = 84.05, P < 0.001).

    However, student success varied with plagiarism severity (F5, 1020 = 62.31, P < 0.001), and more notably, improvement due to training was affected by plagiarism severity (training × severity × time interaction, F10, 1020 = 3.61, P < 0.001; Figure 2). There was no evidence of varying time effects across severity levels for either the online-training group (F5, 1020 = 0.53, P = 0.753) or the no-training group (F5, 1020 = 1.76, P = 0.119). Although online training improved success when averaged over severity levels, there was no evidence for a difference at any severity level individually, indicating uncertainty in the impact of online training in this study. For students who completed the plagiarism homework, gains in recognizing plagiarized text varied with severity level (F5, 1020 = 11.65, P < 0.001). Success increased for the Poor1 severity rating (t1020 = −4.16, P < 0.001, adjusted for family-wise type I error using the Holm simulated method; see the sketch below), Poor2 (t1020 = −5.55, adjusted P < 0.001), Fair1 (t1020 = −6.14, adjusted P < 0.001), and Fair2 (t1020 = −8.38, adjusted P < 0.001), yet showed no change for unplagiarized text in Good1 (t1020 = −1.26, adjusted P = 0.207) and Good2 (t1020 = 1.66, adjusted P = 0.187).
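
    For readers unfamiliar with the adjustment, the step-down Holm procedure (the non-simulated analogue of the correction used above) is available in statsmodels; the raw p-values below are placeholders, not values from this study.

```python
from statsmodels.stats.multitest import multipletests

# Placeholder raw p-values for the six severity-level contrasts.
raw_p = [0.00005, 0.00002, 0.00008, 0.00001, 0.12, 0.09]
labels = ["Poor1", "Poor2", "Fair1", "Fair2", "Good1", "Good2"]

# Holm step-down adjustment controls the family-wise type I error rate.
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
for label, p, r in zip(labels, adj_p, reject):
    print(f"{label}: adjusted P = {p:.4f}, reject H0 = {r}")
```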

    Figure 2. Interaction of time and plagiarism severity on student success shown for (a) the class that received no plagiarism training, (b) the class that received training via an online module, and (c) the class that received training as a homework assignment. Open symbols represent presurvey scores and closed symbols represent postsurvey scores. The distance between points within a given severity rating equals the learning gain in one semester. The data are inverse-link estimates of the means; the error bars represent ± 1 SE.

    Assessment Score versus Improvement

    Both plagiarism-training formats had associated assessments during or following completion of the training. The homework assignment contained five multiple-part short-answer questions, and the online training included five quizzes containing a total of 13 true–false or multiple-choice questions. The final percentage each student earned on the assessment was binned into six categories (5 = 90–100%, 4 = 80–89%, 3 = 70–79%, 2 = 60–69%, 1 = 1–59%, and 0 = 0% or assessment not completed). We found no correlation between assessment score and improvement (the number of the 12 survey questions answered correctly on the postsurvey minus the number answered correctly on the presurvey) for either the homework assignment–training group (Spearman's r = 0.063, P = 0.579) or the online-training group (Spearman's r = 0.126, P = 0.386).
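
    The binning and correlation steps are straightforward to reproduce; below is a Python sketch using SciPy. The scores are fabricated placeholders, and the helper function is ours, not part of the study's materials.

```python
import numpy as np
from scipy.stats import spearmanr

def bin_score(pct: float) -> int:
    """Bin an assessment percentage into the study's six categories."""
    if pct >= 90: return 5
    if pct >= 80: return 4
    if pct >= 70: return 3
    if pct >= 60: return 2
    if pct >= 1:  return 1
    return 0  # 0% or assessment not completed

# Placeholder data: assessment percentages and pre/post correct counts
# (each out of 12 survey questions).
assessment = np.array([95, 84, 72, 100, 61, 88])
pre = np.array([9, 8, 10, 7, 9, 8])
post = np.array([11, 10, 10, 9, 10, 11])

binned = [bin_score(p) for p in assessment]
improvement = post - pre
rho, p = spearmanr(binned, improvement)
print(f"Spearman's r = {rho:.3f}, P = {p:.3f}")
```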

    DISCUSSION

    Training Improves Student Success

    Providing students with formal education on plagiarism clearly improves their ability to discriminate plagiarism or the lack thereof. However, not all education is equal. Students trained with a homework assignment focused on plagiarism avoidance discriminated plagiarized text from properly quoted, paraphrased, and attributed text better than untrained students or students trained by the online module. By contrast, there was no evidence for an online-training effect at any given severity level, though, averaged over all severity levels, there was evidence for a modest online-training effect.

    This finding is relevant given the ubiquity of online plagiarism modules in academia today. In our study, about half of the students in the no-training (53%) and online-training (53%) groups showed no improvement, or declined, over time. This pattern stands in striking contrast to the assignment-training group, in which 77% of students improved over time, and more students achieved larger gains than in the other groups. In our study, the value of the online plagiarism training, which used only lower-level assessments of learning, was minimal, while the homework assignment, which assessed student learning at higher cognitive levels, had clear positive outcomes.

    Plagiarism Assessment Not Correlated with Improvement

    We anticipated a positive monotonic correlation between scores on the homework assignment or online quizzes and improvement on postsurveys. Contrary to our expectation, however, scores from the assessments linked to the assignment and online training were not related to improvement in student success over time. A possible explanation is that some students lacked a complete understanding of plagiarism and performed poorly on both the presurveys and the assignment or quizzes associated with their training. Feedback from this poor performance may have motivated these students toward greater achievement on subsequent assignments (Landau et al., 2002; Dickhäuser et al., 2011), which could have manifested as higher-than-expected improvement among students with low assessment scores. Furthermore, the learning objectives and tasks required by the homework assignment (i.e., defining plagiarism and generating novel quotations, paraphrases, and citations free of plagiarism) did not align with the objective of the survey (i.e., applying knowledge of plagiarism to identify plagiarism in new text). Misalignment of instruction and assessment can result in lower achievement (Cohen, 1987), which may explain the lower-than-expected improvement on surveys for students who scored 70–90% on the homework assignment. A similar maintenance or deterioration of student success among students in the online-training group who scored well on the quizzes may be due to poor retention of material from the online training. Perhaps this training provided sufficient education for some students to perform adequately on the quizzes at the moment of instruction, yet was inadequate for long-term (i.e., semester-long) retention of the information. Low retention, coupled with overconfidence, may have led some students to score high on the online quizzes yet fail to improve, or even decline, in their ability to discriminate plagiarism or the lack thereof.

    Different Responses to Plagiarism Severity

    Students’ ability to discriminate plagiarized from unplagiarized text depended on the severity of the plagiarism. We further support previous work (Wilhoit, 1994; Roig, 1997; Soto et al., 2004; Yeo, 2007; Holt, 2012) suggesting that students perform worst on, and benefit most from instruction on, more ambiguous examples that are either properly paraphrased but not attributed or properly attributed but obvious patchwork paraphrases. However, our work indicates that these positive learning outcomes resulted only from the homework assignment training; the online training did not clearly prepare students to identify plagiarism at any severity level better than no training at all.

    While we documented a reliable pattern of students entering general ecology courses performing worst on the Fair severity ratings, moderately on the Poor severity ratings, and best on the Good ratings (presurvey data only, F5, 850 = 29.15, P < 0.001), variability in students’ inherent facility with plagiarism inevitably exists. For example, during the first week of class, the no-training group had slightly lower success on both Poor ratings and the Fair1 version, while it outperformed the other groups on both Good ratings.

    CONCLUSIONS

    Plagiarism training using a homework assignment had a significant, tangible effect on student success; we therefore hope that, by extension, the incidence of inadvertent plagiarism by these students will decline. Our study highlights the advantages of this more extensive assignment over an online tutorial for reaching this goal. While we found a slight positive learning gain in the online-training group relative to the no-training group, this difference disappeared when each severity rating was considered individually. Only the assignment training provided sufficient education to produce significant improvement in students’ identification of plagiarized text, while success after online training was comparable to that of students who received no training.

    Our findings represent the first empirical analysis of the efficacy of an online plagiarism tutorial in the sciences. Counter to the findings of other researchers in the humanities (Germain et al., 2000; Holman, 2000; Nichols et al., 2003; Silver and Nickel, 2007), we found that our online tutorial did not provide adequate plagiarism-avoidance instruction. While the tutorial required far less time investment by students and instructors, our study shows that the resulting learning gain was equivalent to providing no education at all. Given the lack of replication in our study, however, additional work with multiple classes, taught by multiple instructors, and implementing multiple online tutorials is needed to validate our findings. Moreover, future studies should control for any disparity in student effort by making the summative assessments of the homework assignment and the online tutorial worth equal portions of the final grade, or by using an online tutorial that assesses higher-order cognitive skills. Finally, research that better aligns the training goals with the survey assessment, and that assesses longer-term retention of these skills, is needed.

    ACKNOWLEDGMENTS

    We thank all the students who participated in this study and Michelle Baker for allowing us to survey students in her courses. We are very grateful to Paul Smith for allowing E.A.H. to adapt the plagiarism assignment for this study. This study was approved by the Utah State University Institutional Review Board (protocol no. 2711).

    REFERENCES

  • Bernard RM, Abrami PC, Lou Y, Borokhovski E, Wade A, Wozney L, Wallet PA, Fiset M, Huang B (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Rev Educ Res 74, 379-439.
  • Cohen SA (1987). Instructional alignment: searching for a magic bullet. Educ Res 16, 16-20.
  • Dee TS, Jacob BA (2012). Rational ignorance in education: a field experiment in student plagiarism. J Hum Resour 47, 397-434.
  • Dickhäuser C, Buch SR, Dickhäuser O (2011). Achievement after failure: the role of achievement goals and negative self-related thoughts. Learn Instr 21, 152-161.
  • Flesch R (1948). A new readability yardstick. J Appl Psychol 32, 221-233.
  • Germain CA, Jacobson TE, Kaczor SA (2000). A comparison of the effectiveness of presentation formats for instruction: teaching first-year students. Coll Res Libr 61, 65-72.
  • Germek G (2012). The lack of assessment in the academic library plagiarism prevention tutorial. Coll Undergrad Libr 19, 1-17.
  • Holman L (2000). A comparison of computer-assisted instruction and classroom bibliographic instruction. Ref User Serv Q 40, 53-60.
  • Holt EA (2012). Education improves plagiarism detection by biology undergraduates. BioScience 62, 585-592.
  • Howard RM (1995). Plagiarisms, authorships, and the academic death penalty. Coll Engl 57, 788-806.
  • Jackson PA (2006). Plagiarism instruction online: assessing undergraduate students’ ability to avoid plagiarism. Coll Res Libr 67, 418-428.
  • Landau JD, Druen PB, Arcuri JA (2002). Methods for helping students avoid plagiarism. Teach Psychol 29, 112-115.
  • Licht DS, Millspaugh JJ, Kunkel KE, Kochanny CO, Peterson RO (2010). Using small populations of wolves for ecosystem restoration and stewardship. BioScience 60, 147-153.
  • Mages WK, Garson DS (2010). Get the cite right: design and evaluation of a high-quality online citation tutorial. Libr Inform Sci Res 32, 138-146.
  • McCabe DL (2005). Cheating among college and university students: a North American perspective. Int J Educ Integr 1, www.ojs.unisa.edu.au/index.php/IJEI/article/viewFile/14/9 (accessed 2 May 2013).
  • Moniz R, Fine J, Bliss L (2008). The effectiveness of direct-instruction and student-centered teaching methods on students’ functional understanding of plagiarism. Coll Undergrad Libr 15, 255-279.
  • Nichols J, Schaffer B, Shockey K (2003). Changing the face of instruction: is online or in-class more effective? Coll Res Libr 64, 378-388.
  • Nuss EM (1984). Academic integrity: comparing faculty and student attitudes. Improving Coll Univ Teach 32, 140-144.
  • Oldham BW (2011). Impact of an online library tutorial on student understanding of academic integrity. Catholic Libr World 82, 27-31.
  • Pace ML, Cole JJ, Carpenter SR, Kitchell JF (1999). Trophic cascades revealed in diverse ecosystems. Trends Ecol Evol 14, 483-488.
  • Park C (2003). In other (people's) words: plagiarism by university students—literature and lessons. Assess Eval High Educ 28, 471-488.
  • Power LG (2009). University students’ perceptions of plagiarism. J High Educ 80, 643-662.
  • Risquez A, O’Dwyer M, Ledwith A (2013). “Thou shalt not plagiarise”: from self-reported views to recognition and avoidance of plagiarism. Assess Eval High Educ 38, 34-43.
  • Roig M (1997). Can undergraduate students determine whether text has been plagiarized? Psychol Rec 47, 113-122.
  • Roig M (2001). Plagiarism and paraphrasing criteria of college and university professors. Ethics Behav 11, 307-323.
  • Scanlon PM, Neumann DR (2002). Internet plagiarism among college students. J Coll Stud Dev 43, 374-385.
  • Silver SL, Nickel LT (2007). Are online tutorials effective? A comparison of online and classroom library instruction methods. Res Strategies 20, 389-396.
  • Snow E (2006). Teaching students about plagiarism: an Internet solution to an Internet problem. Innovate: J Online Educ 2.
  • Soto JG, Anand S, McGee E (2004). Plagiarism avoidance: an empirical study examining teaching strategies. J Coll Sci Teach 33, 42-48.
  • Stanger-Hall KF (2012). Multiple-choice exams: an obstacle for higher-level thinking in introductory science classes. CBE Life Sci Educ 11, 294-306.
  • Thornes SL (2012). Creating an online tutorial to support information literacy and academic skills development. J Inform Liter 6, 81-95.
  • Wilhoit S (1994). Helping students avoid plagiarism. Coll Teach 42, 161-164.
  • Yeo S (2007). First-year university science and engineering students’ understanding of plagiarism. High Educ Res Dev 26, 199-216.