Improvement in Generic Problem-Solving Abilities of Students by Use of Tutor-less Problem-Based Learning in a Large Classroom Setting

    Published Online: https://doi.org/10.1187/cbe.12-06-0081

    Abstract

    Problem-based learning (PBL) was originally introduced in medical education programs as a form of small-group learning, but its use has now spread to large undergraduate classrooms in various other disciplines. Introduction of new teaching techniques, including PBL-based methods, needs to be justified by demonstrating the benefits of such techniques over classical teaching styles. Previously, we demonstrated that introduction of tutor-less PBL in a large third-year biochemistry undergraduate class increased student satisfaction and attendance. The current study assessed the generic problem-solving abilities of students from the same class at the beginning and end of the term, and compared student scores with similar data obtained in three classes not using PBL. Two generic problem-solving tests of equal difficulty were administered such that students took different tests at the beginning and the end of the term. Blinded marking showed a statistically significant 13% increase in the test scores of the biochemistry students exposed to PBL, while no trend toward significant change in scores was observed in any of the control groups not using PBL. Our study is among the first to demonstrate that use of tutor-less PBL in a large classroom leads to statistically significant improvement in generic problem-solving skills of students.

    INTRODUCTION

    Problem-based learning (PBL) has its roots in medical education programs but is now being used in a wide variety of disciplines due to the array of benefits observed with its implementation. Institutions of higher education are encouraging active learning and the development of skills relevant to the 21st century, including problem solving (Michael, 2006; Trilling and Fadel, 2009; National Research Council, 2012). Although many variations on the PBL technique are being used in classrooms of higher education, a feature common to all forms of PBL is the use of contextualized problems, which enable students to develop problem-solving skills in addition to acquiring subject-specific knowledge (Jonassen, 2011). The majority of research on the effectiveness of PBL is focused on its traditional use in small, tutor-led group settings of medical and dental schools, where PBL has been shown to improve student satisfaction (Berkson, 1993; Finucane et al., 1995), information retention (Berry, 2008), knowledge acquisition (Prosser, 2004; Gijbels et al., 2005), and problem-solving skills (Schwartz et al., 1997; Dufferin, 2003; Kivela and Kivela, 2005).

    One of the main advantages of PBL over classical teaching techniques is the development of students’ problem-solving skills, as evidenced by several studies on the use of PBL in small tutor-led groups (Gallagher and Rosenthal, 1992; Berkson, 1993; Strobel and van Barneveld, 2009). Norman and Schmidt (1992) agree that problem-solving skills are advanced by the use of PBL, but believe that the advancement occurs only within the context of the subject being studied and that the skills are not transferable. On the other hand, Moore et al. (1994) found no difference between control groups and groups using PBL in the improvement of students' problem-solving abilities. Barrows (1996) attributed the inconsistency of reported results to the different forms of PBL being implemented, and perhaps to the measurement tools used, a point later reiterated by Walker and Leary (2009); he maintained that improvement is evident when PBL is implemented in its traditional form. Moreover, it has been shown that student exposure to the ill-structured problems typical of PBL leads to enhanced subject-specific and general problem-solving ability, but that student exposure to well-structured problems with known solutions does not improve the ability to solve ill-structured problems (Dunkle et al., 1995; Cho and Jonassen, 2002; Hong et al., 2003).

    We believe that current teaching styles and techniques should focus on the development of students’ problem-solving and critical-thinking skills. This approach would ensure that, in addition to gaining subject-specific knowledge, students are also able to apply the obtained knowledge to solve problems. Previous studies have shown that students perceive improvement in their problem-solving abilities during courses involving PBL methodology (Woods, 1996; Morales-Mann and Kaitell, 2001; Uhlin et al., 2007). Woods (1996) used Heppner's Problem-Solving Inventory (Heppner and Petersen, 1982) to assess confidence in problem solving, willingness to engage in solving challenging problems, and sense of control over situations requiring problem solving, and found that students who had been exposed to PBL produced scores significantly higher than those of a control group. Schmidt (1983) found that students exposed to PBL were better able to apply information to solve problems; his study used ill-structured problems in a PBL group setting to assess improvement in problem-solving ability that transferred to well-structured problems with known solutions.

    Overall, the aforementioned studies support the idea that changes in student problem-solving skills should be assessed when new forms of PBL are implemented. Most of the PBL-related research thus far has focused on small tutor-led groups, and student performance has been measured by using traditional examination techniques. Most often these examinations are incongruent with the PBL teaching methodology; that is, they do not replicate the problem-solving nature of the PBL experience.

    Several studies have also recorded beneficial effects of tutor-less PBL on student learning in large classrooms. Woods (1996) found that student grades improved with the introduction of problem-solving techniques and, by using inventories, recorded positive trends related to students’ perception of learning in tutor-less groups. Pastirik (2006) studied tutor-less groups but did not use grades from examinations to measure student learning. Instead, she established that students perceived that their learning was enhanced by the PBL process, especially with regard to the variety and depth of knowledge gained. Kaliyadan et al. (2012) compared tutor-less PBL with tutor-led PBL and found no significant difference in effects on student learning, which indicates that tutor-less PBL could be implemented where funding and time constraints had previously prohibited the use of small PBL groups requiring additional tutors. Given the positive effects of tutor-less PBL observed in previous studies, we set out to confirm and extend these observations by using an alternative technique; we used generic problem-solving tests to measure improvement in the problem-solving abilities of students exposed to tutor-less PBL.

    We have developed and implemented a tutor-less PBL technique for use in a large classroom setting (Klegeris and Hurren, 2011). In our previous study, we demonstrated that such PBL, which was used in the form of stand-alone activities within an existing traditional lecture course, significantly increased student satisfaction and attendance. In addition, we obtained preliminary data showing that students’ ability to solve subject-specific problems in a pre/posttest scenario was improved; however, due to the subject-specific problem administered, this study lacked a control group (Klegeris and Hurren, 2011). The purpose of the current study was to establish the effect of tutor-less PBL on problem-solving skills of undergraduate students by comparing the changes in student ability to solve generic problems between a group of students exposed to this type of PBL and three other groups of students who were not exposed to PBL. To achieve this, we administered tests containing problems unrelated to the course subject or discipline at the beginning and end of the term. Use of such generic problems allowed for the assessment of students from different disciplines and courses that used mainly traditional lecturing (control groups). We demonstrate that the generic problem-solving skills of students exposed to tutor-less PBL in a large classroom setting are significantly improved; such enhancement was not observed in other classes that did not use PBL.

    RESEARCH METHODOLOGY

    The study was conducted within four different third-year courses—human kinetics (HMKN), chemistry (CHEM), sociology (SOCI), and biochemistry (BIOC)—offered at the University of British Columbia (UBC) Okanagan campus in Canada during the Fall 2011 term. The courses were taught by different instructors using traditional teaching methods to deliver the majority of the course content. Table 1 shows that, unlike the other third-year courses involved in the study, the BIOC course included a PBL component in addition to standard lectures. According to Barrows’ taxonomy of PBL methods (Barrows, 1986), the “modified case-based” method was used, in which students are presented with some information and are asked to decide on the forms of action and decisions they may make. After conducting some research and forming hypotheses, they are provided with more information about the case. Jonassen (2011) would classify the problem type as “decision-making.” For a detailed description of the PBL methodology used in the BIOC course, please refer to Klegeris and Hurren (2011).

    Table 1. Third-year undergraduate courses studied

    Course | Content | Methodologies used in the classroom (% time) | Out-of-class work | Students enrolled/study participants (%)
    Biochemistry (BIOC) | Introduction to pharmacology | Traditional lectures (65%); PBL (25%); poster presentations (8%); equation solving (2%) | Independent study of research papers; researching learning issues for PBL cases; poster group work | 55/47 (86%)
    Chemistry (CHEM) | Chemical processes in the environment | Traditional lectures (75%); demonstration of equations (25%) | Independent quizzes involving equations | 32/19 (59%)
    Sociology (SOCI) | Perspectives on social control | Lectures (60%); class discussions (30%); group projects (10%) | Assigned readings; group projects; final paper | 40/21 (53%)
    Human kinetics (HMKN) | Exercise psychology | Traditional lectures (100%) | Two different written assignments | 77/56 (73%)

    The three control groups (CHEM, HMKN, and SOCI) were chosen randomly from a list of third-year courses to include courses from three different disciplines. The majority of students in these classes were at the same instructional level of third-year standing, having just completed the prerequisite first- and second-year courses. It is important to note that, even though the control courses were selected randomly and there were no apparent differences between the groups of students taking them, we cannot rule out pre-existing differences unrelated to PBL between the students enrolled in these courses, such as age, general background knowledge, or academic ability. The instruction used in the control groups did not include PBL; it was a mixture of traditional lectures and other classroom activities (see Table 1). Note also that, in addition to the traditional lectures and PBL cases, the BIOC course incorporated other alternative instruction techniques, including group work on posters, independent study of research articles, and solving of pharmacological equations.

    This study was made possible by the voluntary participation of the students enrolled in the aforementioned courses. Study protocols were approved by the UBC Human Research Ethics Board, and students willing to participate in the study voluntarily signed an appropriate consent form. Third-party support was used to ensure that each student's consent and research data were hidden from the course instructors and also from the principal investigator (A.K.), who had access only to data summaries or anonymized sets of data.

    Problem-Solving Tests

    Questions from the Program for International Student Assessment (PISA), developed by the Organisation for Economic Co-operation and Development (OECD, 2004), were used to design two problem-solving tests (1 and 2) comprising four problems (six questions) each. PISA is an international study, begun in the year 2000, that aims to evaluate education systems worldwide by testing the skills and knowledge of 15-year-old students in participating countries/economies. PISA assesses students’ knowledge, skills, and, since 2003, problem-solving abilities. PISA defines problem solving as follows: “Problem solving is an individual's capacity to use cognitive processes to confront and resolve real, cross-disciplinary situations in which the solution path is not immediately obvious and the literacy domains or curricular areas that might be applicable are not within a single domain of mathematics, science or reading” (OECD, 2004). As such, PISA measures general problem-solving ability, which was also the purpose of this study. The PISA problems used in our problem-solving tests measured several aspects of problem solving, including design and troubleshooting, decision making, and system analysis.

    The PISA tool had already established the difficulty level of each question included in its assessment of student problem-solving abilities, which allowed us to design two problem-solving tests of similar difficulty. To confirm that the tests were in fact of equal difficulty, we conducted a pilot study in which 10 UBC Okanagan students completed both tests 1 and 2. The results indicated that the two tests were of equal or similar difficulty and reasonable in nature for the purposes of our study, thereby allowing us to measure any improvement in the problem-solving abilities of students enrolled in each of the four courses over the semester. Because the questions were designed for younger students, we introduced a 15-min time limit for completion of the tests by the university students to reduce the class time spent on this activity. The time limit was established through a pilot study that assessed student performance with 10-min, 15-min, or no time limits. Without a time limit, the third-year students obtained very high scores; 15 min was enough for most students to complete the tests comfortably, making it unlikely that differences in reading comprehension affected student scores. A rubric for assessment of student answers provided by PISA was used by all markers.

    In the present study, the problem-solving tests were used to evaluate the generic problem-solving abilities of students, because the questions were not related to the course content covered in any of the courses. An example of the types of questions included in the problem-solving tests can be found in Table 2. To our knowledge, this is the first adaptation of PISA questions to measure an educational intervention at the university undergraduate level. Clearly, more studies will be needed to validate these tests in an undergraduate setting. Examples of the PBL exercises used in the BIOC course as well as the problem-solving tests and answer keys used in this study can be obtained by contacting the corresponding author.

    Table 2. Example of a question used in the problem-solving test

    Jane bought a new cabinet-type freezer. The manual gave the following instructions:
     • Connect the appliance to the power and switch the appliance on.
     • You will hear the motor running now.
     • A red warning light (LED) on the display will light up.
     • Turn the temperature control to the desired position. Position 2 is normal.
     • The red warning light will stay on until the freezer temperature is low enough. This will take 1–3 h, depending on the temperature you set.
     • Load the freezer with food after 4 h.
    Position | Temperature (°C)
    1 | −15
    2 | −18
    3 | −21
    4 | −25
    5 | −32
    Jane followed these instructions, but she set the temperature control to position 4. After 4 h, she loaded the freezer with food. After 8 h, the red warning light was still on, although the motor was running and it felt cold in the freezer.
    Question 1: Jane read the manual again to see if she had done something wrong. She found the following six warnings:
     1. Do not connect the appliance to an unearthed power point.
     2. Do not set the freezer temperature lower than necessary (−18°C is normal).
     3. The ventilation grills should not be obstructed. This could decrease the freezing capability of the appliance.
     4. Do not freeze lettuce, radishes, grapes, whole apples and pears, or fatty meat.
     5. Do not salt or season fresh food before freezing.
     6. Do not open the freezer door too often.
    Ignoring which of these six warnings could have caused the delay in the warning light going out?
    Answer “Yes” or “No” for each of the six warnings.

    Problem-solving tests 1 and 2 were randomly assigned to students at the beginning of the term (September) and were completed within a 15-min period. At the end of the term (December), individual students completed the other of the two tests. This way, students who completed problem-solving test 1 at the beginning of the term were administered problem-solving test 2 at the end of the term and vice versa. Therefore, by the end of the term, each student participating in this study had taken two tests that were different yet comparable in difficulty. First, all tests from all four classes were graded by a teaching assistant according to a predetermined answer key with a maximum score of 13 points. To confirm that marker bias was not responsible for the differences or similarities in the student grades, all tests from all four courses were re-marked in a blinded manner by an additional marker. Tests from the BIOC course, in which statistically significant improvements were recorded, were re-marked by an additional four markers, as described in the Results section. This blind assessment was achieved by coding the exams; removing the student names, course numbers, and completion dates; and randomly shuffling all tests into a single pile before marking.
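
    For clarity, the assignment and blinding logic just described can be summarized in a short sketch. The following Python snippet is illustrative only; the helper names (assign_tests, blind_code) are ours, not part of the original study materials. It mirrors the procedure above: each student receives one randomly chosen test form in September and the other form in December, and completed tests are coded, anonymized, and shuffled into a single pile before blinded marking.

```python
import random

def assign_tests(student_ids, seed=1):
    """Counterbalanced crossover: each student takes one randomly chosen
    test form (1 or 2) at the beginning of term and the other form at the
    end. Hypothetical helper, not part of the original study materials."""
    rng = random.Random(seed)
    assignments = {}
    for sid in student_ids:
        september = rng.choice([1, 2])
        assignments[sid] = {"September": september, "December": 3 - september}
    return assignments

def blind_code(completed_tests, seed=2):
    """Prepare tests for blinded marking: replace identifying fields
    (name, course, completion date) with a numeric code and shuffle all
    tests into a single pile."""
    rng = random.Random(seed)
    coded = [{"code": i, "answers": t["answers"]}
             for i, t in enumerate(completed_tests)]
    rng.shuffle(coded)
    return coded

# Example: assign forms to three (hypothetical) students.
print(assign_tests(["s01", "s02", "s03"]))
```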

    Data Analysis

    Two-sample Student's t test comparisons were used to assess differences in the marks obtained by the students on problem-solving tests 1 and 2, as well as improvement in student marks over the course of the semester in each of the four courses. Data obtained from the five independent markers grading the BIOC course in a blinded manner were analyzed using the Generalized Linear Model Randomized Block design in the SPSS statistical software. Data are presented as means ± SE of the mean (SEM). P values of <0.05 were considered statistically significant.
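
    As an illustration, the sketch below reproduces this style of analysis in Python rather than SPSS, using simulated scores (the real data are summarized in Figure 1 and Tables 3 and 4). scipy's two-sample t test stands in for the course-level comparisons, and a statsmodels mixed linear model serves as a rough analogue of the randomized block analysis, with term as the fixed factor and students as random blocks; the crossed random marker factor used in the actual analysis is omitted here for brevity.

```python
# Minimal sketch of the analyses described above, on simulated data;
# the authors used SPSS, so scipy/statsmodels are stand-ins rather than
# the original workflow.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 47  # participants in the BIOC course

# Illustrative marks only (means loosely based on Table 3).
begin = rng.normal(8.5, 2.0, n)  # beginning-of-term marks
end = rng.normal(9.6, 2.0, n)    # end-of-term marks

# Two-sample Student's t test, as used for each course-level comparison.
t, p = stats.ttest_ind(begin, end)
print(f"t = {t:.2f}, P = {p:.3f}")

# Rough analogue of the randomized block analysis: term (September vs.
# December) as the fixed factor, students as random blocks.
long = pd.DataFrame({
    "score": np.concatenate([begin, end]),
    "term": ["Sep"] * n + ["Dec"] * n,
    "student": list(range(n)) * 2,
})
result = smf.mixedlm("score ~ term", long, groups=long["student"]).fit()
print(result.summary())
```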

    RESULTS

    Enrollment in the courses studied was as follows: HMKN, n = 77; CHEM, n = 32; SOCI, n = 40; and BIOC, n = 55. Seventy-three percent of students in the HMKN course (n = 56), 59% of students in the CHEM course (n = 19), 53% of students in the SOCI course (n = 21), and 86% of students in the BIOC course (n = 47) volunteered to participate in the study (see Table 1). Approximately equal numbers of students in each course completed problem-solving test 1 or test 2 at the beginning of the term. After the problem-solving test was administered for the first time (i.e., at the beginning of the term), the marks obtained by students who attempted problem-solving test 1 were compared with the marks obtained by those who attempted problem-solving test 2 across all four courses studied. The two tests were again found to be equally challenging for this larger student population (n = 143), because the difference in average test scores (7.8 ± 0.4 [n = 73] versus 7.2 ± 0.4 [n = 70]) was not statistically significant (P = 0.23). On the strength of this observation, the study was continued by distributing the same two tests at the end of the term to individual students such that no student took the same test twice.

    Figure 1 compares the marks obtained by students at the beginning and at the end of the term in each of the four courses. For confidentiality purposes, the three control courses are denoted Control 1, 2, and 3. Only students in the BIOC course, the only course with a PBL component, showed a statistically significant, ∼15% improvement in problem-solving abilities. The data presented in Figure 1 were collected after all tests from all four courses were marked by a single teaching assistant in an unblinded manner as they became available; therefore, marker bias may have contributed to the recorded differences.

    To confirm the statistically significant change in test scores in the experimental group using tutor-less PBL (BIOC course), five independent markers regraded all the problem-solving tests from this course in a blinded manner after all the test papers were coded and shuffled into a single pile. The average test scores awarded by individual markers are presented in Table 3. This data set, which was obtained in a blinded manner, was subjected to generalized linear model randomized block analysis, with beginning- and end-of-term (September and December) marks assigned as the fixed factor, and the five different markers, as well as the different students, assigned as random factors. This analysis confirmed a statistically significant (P = 0.04) difference between the average scores obtained at the beginning and end of the term. It also confirmed that there was no statistically significant difference between the scores assigned by the five markers (P = 0.59). Interestingly, the overall improvement determined by the blinded markers was somewhat smaller (∼13%) than the value recorded by the original marker (∼15%; see Figure 1).

    Figure 1. Problem-solving abilities of the students enrolled in the BIOC course improved over the semester, as shown by the statistically significant increase in the average mark obtained by the students at the end of the term compared with the beginning of the term. Students from the other third-year courses did not show improvement in problem-solving abilities assessed by the generic problem-solving tests. All tests were graded by the same teaching assistant, and the data from each course were analyzed using two-sample Student's t test (*, P < 0.05).

    Table 3. Average mark assigned by five independent markers grading the problem-solving tests of students from the BIOC course in a blinded manner

    Average mark (out of 13)
    Marker # | Beginning of term | End of term
    1 | 8.5 ± 0.5 | 9.6 ± 0.5
    2 | 8.5 ± 0.4 | 9.7 ± 0.5
    3 | 8.3 ± 0.5 | 9.6 ± 0.5
    4 | 8.5 ± 0.4 | 9.6 ± 0.5
    5 | 8.6 ± 0.5 | 9.7 ± 0.5
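
    As a quick back-of-the-envelope check, the ∼13% improvement reported in the text can be approximated from the marker means in Table 3 (the published value is derived from the full per-student data set):

```python
# Beginning- and end-of-term averages assigned by markers 1-5 (Table 3).
begin = [8.5, 8.5, 8.3, 8.5, 8.6]
end = [9.6, 9.7, 9.6, 9.6, 9.7]

mean_begin = sum(begin) / len(begin)  # 8.48
mean_end = sum(end) / len(end)        # 9.64
gain = 100 * (mean_end - mean_begin) / mean_begin
print(f"improvement = {gain:.1f}%")   # 13.7%, in line with the reported ~13%
```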

    The possibility that other courses may also have shown an improvement in problem-solving abilities not detected by the original marker was ruled out by having a single independent marker grade the problem-solving tests from the three control courses in a blinded manner. The data presented in Table 4 show that, even after blinded marking, no trends toward significant differences were observed (lowest P value = 0.69); therefore, these tests were not re-marked by additional blinded markers.

    Table 4. Average mark assigned by an independent marker grading the problem-solving tests of students from the three different control courses in a blinded manner

    Average mark (out of 13)
    Course name | Beginning of term | End of term | P value
    Control 1 | 6.0 ± 0.4 | 6.2 ± 0.4 | 0.69
    Control 2 | 9.0 ± 0.6 | 9.1 ± 0.6 | 0.92
    Control 3 | 8.0 ± 0.7 | 8.1 ± 0.7 | 0.95

    DISCUSSION

    This study demonstrates that students obtained higher scores on generic problem-solving tests after attending a large biochemistry class that involved PBL without the assistance of tutors, while the scores of students enrolled in three other courses not involving PBL showed no improvement. We believe that the two problem-solving tests can reliably be used for further studies assessing students’ general problem-solving abilities. They may be used to assess changes in student problem-solving skills within courses, to compare the problem-solving abilities of students from different disciplines, and to track changes in student problem-solving abilities over the course of their postsecondary education. Sequential administration of two comparable tests, similar to the method used in this study (parallel assessments), has been used previously in psychology-based studies (Nunnally and Bernstein, 1994; Lee and Coladarci, 2002; Brummel et al., 2009). However, because these tests have not previously been administered to undergraduate students, their further validation as assessment tools of student problem-solving abilities in different university courses is needed. It should be noted that OECD is currently developing tools for assessing the problem-solving abilities of university-level students, namely AHELO (Assessment of Higher Education Learning Outcomes); these tools may become available in 2013 (OECD, 2012).

    Observations made during this study revealed the following: 1) there were differences in test scores among the four classes, with student scores in one of the control classes significantly lower than in all other classes at the beginning of the term; 2) in three out of four classes, there was no statistically significant change in the test scores of students between beginning- and end-of-term testing; and 3) the scores of students in the BIOC course employing tutor-less PBL improved significantly (P < 0.05). The initial data obtained in an unblinded manner were confirmed by several independent markers evaluating student answers in a blinded manner. Interestingly, even though both unblinded and blinded marking confirmed a statistically significant improvement in the end-of-term test scores of the experimental group using PBL (BIOC students), there was a small difference in the magnitude of improvement recorded by the blinded markers (∼13%) compared with the original unblinded marker (∼15%). This may reflect the bias of unblinded marking that has been described in many previous studies (Dinnet et al., 2005; Halpern, 2007). Nevertheless, the statistical significance of the improvement in student scores was confirmed by five blinded markers. Furthermore, statistical analysis showed no significant differences between the scores assigned by these markers, which may mean that fewer blinded markers could be used to evaluate test results in future studies. An improvement of 13% in student achievement is noteworthy. We failed to identify other studies quantifying changes in student problem-solving abilities, but previous studies have measured improvement in knowledge of course content. Thus, in a study of pharmaceutical courses over 9 yr, Romero et al. (2010) found that the midterm examination marks of students exposed to PBL were 3% higher than those of students from classes using standard lectures.

    It is not known why the absolute scores obtained by students in one of the control groups were lower than those obtained by the three other groups of students. It is possible that this difference is due to student selection processes or the degree of emphasis placed on problem solving in that particular discipline, but further studies are needed to draw any conclusions. Nevertheless, our observations emphasize the importance of comparing classes from the same year, since, with the exception of this one significant difference, students from the three other courses obtained very similar results on the problem-solving tests.

    The most significant observation was the improvement in the problem-solving skills of the students exposed to tutor-less PBL. It is important to note that all courses were taught by experienced instructors and that the mean absolute scores of BIOC students were comparable with those obtained by students in the control courses. In fact, the problem-solving abilities of students exposed to PBL (as assessed by their absolute scores on the problem-solving tests described above) were second highest at the beginning of the term and improved to the highest standing at the end of the term. This suggests that the observed improvement should not be attributed to the possibility that the BIOC students were lagging behind their peers at the beginning of the term.

    One interpretation of the data is that the selective improvement in the problem-solving test scores of BIOC students was due to their exposure to PBL exercises, which were not administered to the other students. However, other possibilities cannot be ruled out. For example, instructor bias, teaching-style differences, and/or subject matter engagement may have affected the results. It is also possible that other instructional activities specific to the BIOC course (posters, independent studies, or calculations) could have caused the improvement in problem-solving abilities, or this change could be a result of better peer interactions, student engagement, and active learning due to diverse instructional techniques, rather than the specific methodology of PBL. The contributions of these alternative factors could be determined using a BIOC course control group, for which the same topics would be covered by the same professor but no PBL would be used. However, due to high student satisfaction with the PBL component (see Klegeris and Hurren, 2011) and our preliminary data showing improved student problem-solving skills after PBL exposure, performing such a study would disadvantage some students. A similar conclusion was reached by Romero et al. (2010) after their study was complete. We are therefore searching for an alternative way to establish conclusively that it is specifically the PBL component of the BIOC course that leads to the improvement in student problem-solving skills. Alternatively, tutor-less PBL could be introduced as an instructional technique in the control classes used in this study, or in other classes that currently do not use PBL, to see whether such an intervention improves student problem solving. We are currently exploring this option with other course instructors at our university.

    In conclusion, our data demonstrate that the use of PBL in a large classroom without tutors (thereby avoiding additional expenses) leads to statistically significant improvement in the generic problem-solving skills of students. Although such improvement has been well-documented for small tutor-led groups, very few studies have documented similar benefits in a large classroom setting (Woods, 1996; Pastirik, 2006). Our study is among the first to clearly demonstrate the benefits of tutor-less PBL and may encourage interested colleagues to implement this type of PBL in large classrooms and to conduct further research on its benefits. Our previous studies (Klegeris and Hurren, 2011) combined with our current observations indicate that the particular PBL teaching method we have developed not only may lead to increased student attendance, engagement, and satisfaction, but could also enhance students’ discipline-specific and generic problem-solving abilities. This is a very desirable outcome that is valued not only by educational institutions (Jefferson, 2001; Pierrakos et al., 2010; Taylor et al., 2010), but also by students themselves (Kaufman and Mann, 1996; Lieux, 1996) and by their future employers (Boyer Commission on Educating Undergraduates in the Research University, 2002; National Academy of Engineering, 2005; Anderson et al., 2008).

    ACKNOWLEDGMENTS

    We thank the students from all four courses for their participation in the study and the course instructors for their cooperation. We also thank Drs. J. Pither and R. Lekivetz for advice and help with the study design and data analysis and Nichole Gill for editing the manuscript. Research funding by UBC Okanagan internal grants that made this study possible is acknowledged.

    REFERENCES

  • Anderson W, Mitchell S, Osgood M (2008). Gauging the gaps in student problem-solving skills: assessment of individual and group use of problem-solving strategies using online discussions. CBE Life Sci Educ 7, 254-262.
  • Barrows H (1986). A taxonomy of problem-based learning methods. Med Educ 20, 481-486.
  • Barrows H (1996). Problem-based learning and problem solving. Probe: Newsletter of the Australian Problem-based Learning Network 26, 8-9.
  • Berkson L (1993). Problem-based learning: have the expectations been met? Acad Med 68, S79-S88.
  • Berry W (2008). Surviving lecture: a pedagogical alternative. Coll Teach 56, 102-106.
  • Boyer Commission on Educating Undergraduates in the Research University (2002). Reinventing Undergraduate Education: Three Years after the Boyer Report, Stony Brook, NY: University of Stony Brook.
  • Brummel B, Rupp D, Spain S (2009). Constructing parallel simulation exercises for assessment centres and other forms of behavioural assessment. Pers Psychol 62, 137-170.
  • Cho K, Jonassen D (2002). The effects of argumentation scaffolds on argumentation and problem solving. Educ Technol Res Dev 50, 5-22.
  • Dinnet E, Mungall M, Kent J, Ronald E, Anderson E, Gaw A (2005). Unblinding of trial participants to their treatment allocation: lessons from the Prospective Study of Pravastatin in the Elderly at Risk (PROSPER). Clin Trials 2, 254-259.
  • Dufferin M (2003). Integrating problem-based learning in an introductory college food science course. J Food Sci Educ 2, 2-6.
  • Dunkle M, Schraw G, Bendixen K (1995). Cognitive processes in well-defined and ill-defined problem solving. Paper presented at the Annual Meeting of the American Educational Research Association, December, San Francisco, CA.
  • Finucane P, Alley L, Hayes T (1995). Comparison of teachers at a “traditional” and “innovative” medical school. Med Educ 29, 104-109.
  • Gallagher S, Rosenthal S (1992). The effects of problem-based learning on problem solving. Gifted Child Q 36, 195-200.
  • Gijbels D, Dochy F, Van den Bossche P, Segers M (2005). Effects of problem-based learning: a meta-analysis from the angle of assessment. Rev Educ Res 75, 27-61.
  • Halpern S (2007). Evaluating preference effects in partially unblinded, randomized clinical trials. Int J Epidemiol 36, 654-663.
  • Heppner P, Petersen C (1982). The development and implications of a personal problem solving inventory. J Couns Psychol 29, 66-75.
  • Hong N, Jonassen D, McGee S (2003). Predictors of well-structured and ill-structured problem solving in an astronomy simulation. J Res Sci Teach 40, 6-33.
  • Jefferson JR (2001). Problem-based learning and the promotion of problem solving: choices for physical therapy curricula. J Phys Ther Educ 15, 26-31.
  • Jonassen D (2011). Supporting problem solving in PBL. Interdisc J Problem-based Learn 5, 95-112.
  • Kaliyadan F, Amri M, Dhufiri M, Khan M (2012). Effectiveness of a modified tutorless problem-based learning method in dermatology—a pilot study. J Eur Acad Dermatol 26, 111-113.
  • Kaufman D, Mann K (1996). Student perceptions about their courses in problem-based learning and conventional curricula. Acad Med 71, S52-S54.
  • Kivela J, Kivela RJ (2005). Student perceptions of an embedded problem-based learning instructional approach in a hospitality undergraduate programme. Int J Hosp Manag 24, 437-463.
  • Klegeris A, Hurren H (2011). Impact of problem-based learning in a large classroom setting: student perception and problem-solving skills. Adv Physiol Educ 35, 408-415.
  • Lee J, Coladarci T (2002). Using Multiple Measures to Evaluate the Performance of Students and Schools: Learning from the Cases of Kentucky and Maine, Arlington, VA: National Science Foundation. www.eric.ed.gov//ED468074.pdf (accessed 15 June 2012).
  • Lieux EM (1996). A Comparative Study of Learning in Lecture vs. Problem-Based Format, Newark: University of Delaware. www.udel.edu/pbl/cte/spr96-nutr.html (accessed 14 June 2012).
  • Michael J (2006). Where's the evidence that active learning works? Adv Physiol Educ 30, 159-167.
  • Moore G, Block S, Style C (1994). The influence of the new pathway curriculum on Harvard medical students. Acad Med 69, 983-989.
  • Morales-Mann E, Kaitell C (2001). Problem-based learning in a new Canadian curriculum. J Adv Nurs 33, 13-19.
  • National Academy of Engineering (2005). Educating the Engineer of 2020: Adapting Engineering Education to the New Century, Washington, DC: National Academies Press.
  • National Research Council (2012). Discipline-Based Education Research: Understanding and Improving Learning in Undergraduate Science and Engineering, Washington, DC: National Academies Press.
  • Norman G, Schmidt H (1992). The psychological basis of problem-based learning: a review of the evidence. Acad Med 67, 557-565.
  • Nunnally J, Bernstein I (1994). Psychometric Theory, 3rd ed., New York: McGraw-Hill.
  • Organisation for Economic Co-operation and Development (OECD) (2004). Problem solving for tomorrow's world: first measures of cross-curricular competencies from PISA 2003. www.pisa.oecd.org/dataoecd/25/12/34009000.pdf (accessed 15 June 2012).
  • OECD (2012). AHELO Feasibility Study Interim Report. http://search.oecd.org/officialdocuments/displaydocumentpdf/?cote=edu/imhe/ahelo/gne(2012)5&doclanguage=en (accessed 5 September 2012).
  • Pastirik P (2006). Using problem-based learning in a large classroom. Nurse Educ Pract 6, 261-267.
  • Pierrakos O, Zilberberg A, Anderson R (2010). Understanding undergraduate research experiences through the lens of PBL: implications for curriculum translation. Interdisc J Problem-based Learn 4, 35-62. http://docs.lib.purdue.edu/ijpbl/vol4/iss2/4 (accessed 14 June 2012).
  • Prosser M (2004). A student learning perspective on teaching and learning, with implications for problem-based learning. Eur J Dent Educ 8, 51-58.
  • Romero R, Eriksen S, Haworth I (2010). Quantitative assessment of assisted problem-based learning in a pharmaceutics course. Am J Pharm Educ 74, 66.
  • Schmidt H (1983). Problem-based learning: rationale and description. Med Educ 17, 11-16.
  • Schwartz R, Burgett J, Blue A (1997). PBL and performance-based testing: effective alternatives for undergraduate surgical education and assessment of student performance. Med Teach 19, 19-23.
  • Strobel J, van Barneveld A (2009). When is PBL more effective? A meta-synthesis of meta-analyses comparing PBL to conventional classrooms. Interdisc J Problem-based Learn 3, 44-58.
  • Taylor J, Smith K, van Stolk A, Spiegelman G (2010). Using invention to change how students tackle problems. CBE Life Sci Educ 9, 504-512.
  • Trilling B, Fadel C (2009). 21st Century Skills: Learning for Life in Our Times, San Francisco, CA: Jossey-Bass.
  • Uhlin L, Johannesson E, Silen C (2007). To Challenge Students' Beliefs to Support Transition into Higher Education, Stockholm, Sweden: Council for the Renewal of Higher Education in Sweden. http://gupea.ub.gu.se/bitstream/2077/18111/1/gupea_2077_18111_1.pdf (accessed 15 June 2012).
  • Walker A, Leary H (2009). A problem based learning meta analysis: differences across problem types, implementation types, disciplines, and assessment levels. Interdisc J Problem-based Learn 3, 12-43.
  • Woods DR (1996). Problem-based learning for large classes in chemical engineering. New Dir Teach Learn 68, 91-99.