
Research Methods

Benefit–Cost Analysis of Undergraduate Education Programs: An Example Analysis of the Freshman Research Initiative

    Published Online: https://doi.org/10.1187/cbe.17-06-0114

    Abstract

    Institutions and administrators regularly have to make difficult choices about how best to invest resources to serve students. Yet economic evaluation, or the systematic analysis of the relationship between costs and outcomes of a program or policy, is relatively uncommon in higher education. This type of evaluation can be an important tool for decision makers considering questions of resource allocation. Our purpose with this essay is to describe methods for conducting one type of economic evaluation, a benefit–cost analysis (BCA), using an example of an existing undergraduate education program, the Freshman Research Initiative (FRI) at the University of Texas at Austin. Our aim is twofold: to demonstrate how to apply BCA methodologies to evaluate an education program and to conduct an economic evaluation of FRI in particular. We explain the steps of BCA, including assessment of costs and benefits, estimation of the benefit–cost ratio, and analysis of uncertainty. We conclude that the university’s investment in FRI generates a positive return for students in the form of increased future earning potential.

    INTRODUCTION

    According to the National Center for Education Statistics, postsecondary institutions in the United States spend ∼$150 billion annually on instruction (National Center for Education Statistics, 2016). Grants from federal, state, and philanthropic agencies provide additional funds for the development, testing, and evaluation of innovative undergraduate education programs, which, if demonstrated to be effective, often are expected to be sustained from other sources when grant funding ends. Yet the changing landscape in postsecondary education, including increasing enrollment, expanding access, and decreasing state-level investment, is putting added pressure on postsecondary education budgets (Pew Charitable Trusts, 2015). How can administrators make informed decisions about how to invest limited funds? How can the directors of undergraduate education programs determine whether their initiatives yield sufficient benefits to be worth the cost, and how can they provide such evidence to administrators? Of course, many factors must be considered when making decisions about how to invest funds, including alignment of particular initiatives with institutional missions, priorities, and strategic plans. This Research Methods essay aims to add an additional tool to the decision-making toolbox: benefit–cost analysis.

    Benefit–cost analysis (BCA) is one method of economic evaluation, or the systematic analysis of the relationship between costs and outcomes for a given program or policy. The purpose of economic evaluation is to provide stakeholders with information for making decisions about how to allocate resources, such as whether the benefits of the program outweigh its costs and whether returns on investments are sufficient to justify continued or even expanded funding for a program. The National Academies of Sciences, Engineering, and Medicine recently released a consensus study designed to improve the use of economic evidence to inform investments in children, youth, and families (Steuerle et al., 2016). Here, we offer a general guide for researchers and practitioners looking to conduct BCA to yield such evidence.

    Two main types of economic evaluation are cost-effectiveness analysis (CEA) and BCA. CEA compares the costs of a program with its impacts measured in natural units, or units that occur in real life, such as college attrition rates. For example, CEA might yield information about the percentage by which college attrition rates are reduced per dollar spent on a program aimed at retaining students in college (i.e., $X spent results in Y% reduction in attrition). BCA compares program costs with program outcomes or impacts that have been monetized, or expressed in dollars. For instance, BCA could provide information about the extent to which a program increases a college student’s future earning potential for every dollar spent on the program (i.e., $X spent results in $Y increase in future earning potential). BCA produces several summary measures, including the ratio of benefits to costs (i.e., benefit–cost ratio [BCR]) and benefits minus costs, or net benefits, both of which are presented in net present dollars; that is, dollars expressed at a current value as opposed to a past or future value. Return on investment (ROI) analysis is a subset of BCA, in which results are presented as the percentage of the program cost that is returned as a program net benefit. For example, ROI might provide information about the benefit of an educational institution’s recruitment campaign in terms of increased tuition income for the institution ($X spent on recruitment yields $3X in tuition income, or an ROI of 200%). Table 1 presents the key features of CEA, BCA, and ROI for a hypothetical disease vaccination program. The hypothetical program costs $5000 and averts 50 cases of the disease. The cost of disease treatment is assumed to be $250. Therefore, the benefits of the program are $12,500 (50 cases averted * $250 per case) and the net benefits are $7500 ($12,500 − $5000).

    TABLE 1. Types of economic evaluation

    Type of evaluation                | Unit for benefits/effects | Formula                           | Summary measure
    Cost-effectiveness analysis (CEA) | Natural units             | Costs ÷ Effects                   | Cost-effectiveness ratio
      Interpretation: The cost-effectiveness ratio is $100:1 ($5000/50 cases), meaning the program costs $100 for each disease case avoided.
    Benefit–cost analysis (BCA)       | Dollars                   | Benefits ÷ Costs                  | Benefit–cost ratio (BCR)
      Interpretation: The BCR is 2.5:1 ($12,500/$5000), meaning the program generates $2.50 in savings for every $1 spent.
    Return on investment (ROI)        | Dollars                   | (Benefits − Costs) ÷ Costs × 100% | Percent return
      Interpretation: The ROI is 150% ($7500/$5000), meaning the program generates $1.50 in net savings (i.e., profit) for every $1 spent.
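
    To make the arithmetic in Table 1 concrete, the following short Python sketch computes all three summary measures for the hypothetical vaccination program; every figure is taken directly from the text above.

```python
# Summary measures for the hypothetical vaccination program in Table 1.
program_cost = 5_000      # total program cost ($)
cases_averted = 50        # effect in natural units
cost_per_case = 250       # treatment cost saved per averted case ($)

benefits = cases_averted * cost_per_case   # $12,500
net_benefits = benefits - program_cost     # $7,500

cea_ratio = program_cost / cases_averted   # cost per case averted
bcr = benefits / program_cost              # benefit-cost ratio
roi = net_benefits / program_cost * 100    # percent return

print(f"CEA: ${cea_ratio:,.0f} per case averted")  # $100 per case averted
print(f"BCR: {bcr:.1f}:1")                         # 2.5:1
print(f"ROI: {roi:.0f}%")                          # 150%
```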

    Although economic evaluation is commonly used to evaluate healthcare and public health interventions (Haddix et al., 2003; Drummond et al., 2015; Neumann et al., 2016), its regular application in the field of education has focused primarily on early childhood programs (Barnett, 1985; Lee et al., 2012; Karoly, 2016). The application of economic evaluation to postsecondary education programs and policies is still nascent, as evaluations in these contexts face greater study design challenges and a lack of standardized outcome measures (Hummel-Rossi and Ashdown, 2002). Our purpose with this essay is to describe methods for conducting a BCA as one approach to economic evaluation, using an example of an existing undergraduate research experience (URE) program, the Freshman Research Initiative (FRI) at the University of Texas at Austin (UT Austin). We hope this example will be useful both for demonstrating how to apply economic methodologies to evaluate an undergraduate education program and for evaluating the costs and benefits of FRI in particular.

    FRI was developed at UT Austin to engage students in multiple semesters of course-based undergraduate research experience (CURE) early in their college careers, with the goal of increasing students’ persistence in scientific degree programs and careers. FRI makes use of an expanded apprenticeship model, integrating large numbers of undergraduate students into research groups, called “research streams,” as an alternative to entry-level laboratory courses. The program comprises a three-course sequence taken within the first 2 years of undergraduate study. In each stream, groups of 35–40 undergraduate students work on a common research problem with mentorship and guidance from a PhD-trained research educator (RE) and a tenure-track/tenured principal investigator. The RE role is unique and essential to FRI, because each RE mentors 35+ students in his or her stream, which would not be practical in a more traditional research group structure. REs are immersed in the cognitive apprenticeship model of interaction with the students (Ritchie and Rigano, 1996), creating and implementing a research program designed to support students in learning core science concepts and research skills while making meaningful scientific contributions (e.g., authorships on peer-reviewed publications). FRI capitalizes on the power of research experiences as a science, technology, engineering, and mathematics (STEM) recruitment and retention tool, integrating a combination of experiences that contribute to student success: mentoring (Coppola, 2001), tutoring (Topping, 1996), research experiences (Lopatto, 2007; Russell et al., 2007; Hurtado et al., 2009), and learning communities (Springer et al., 1999).

    A recent consensus study from the National Academy of Sciences highlights the need for more research evaluating the benefits and costs of UREs, particularly for students majoring in STEM fields (National Academies of Sciences, Engineering, and Medicine, 2017). Several existing studies discuss the benefits and costs of UREs but do not attach a monetary value to program benefits and costs, which precludes economic evaluation (Pennebaker, 1991; Hoffman, 2009; Lei and Chuang, 2009). These studies are undoubtedly helpful in providing information about the supports and constraints of implementing a program. Yet there is a clear gap in knowledge regarding the potential returns of allocating resources to UREs.

    Here we focus on BCA, the first step of which is to clearly define the program of interest and to identify the program’s alternative, or the “status quo” experience (Karoly, 2016; Steuerle et al., 2016). The status quo experience refers to the program or intervention that the study population would receive if they were not participating in the program being evaluated. In many cases, the status quo refers to a “do-nothing” approach. This alternative experience serves as the baseline for comparison to most accurately capture the costs and benefits attributable to the program. For our example, FRI is the program of interest and STEM majors who do not participate in FRI take comparable non-CURE science courses (i.e., the status quo experience), referred to hereafter as the comparison program.

    After the program and the alternative have been clearly defined, it is necessary to have demonstrated evidence of program effectiveness, or the impact of the program compared with the alternative. An important point to remember is that the usefulness of an economic evaluation rests on the robustness of the underlying effectiveness study. The effectiveness study should be conducted with a comparable population, should incorporate valid methodology, and should produce outcomes amenable to economic analysis. FRI is a good candidate for BCA, because it is an example of a large-scale URE program for which effectiveness has been demonstrated (Rodenbusch et al., 2016). Participation in FRI has been shown to increase overall graduation rates from 66% for comparison students to 83% for FRI students, after carefully controlling for other factors that affect graduation rates using propensity score matching. FRI has also been shown to increase the percentage of students graduating with a STEM major (instead of transferring to a non-STEM major) from 71% for comparison students to 94% for FRI students. See work from Beckham, Rodenbusch, and colleagues for more details about FRI and its effects (Beckham et al., 2015; Rodenbusch et al., 2016). These results, along with data on the costs of FRI and the comparison experience, allow for a BCA of the program to be conducted, which we describe in the following section.

    PROCEDURES

    To conduct a BCA of FRI, we estimated all costs incurred by UT Austin for both FRI and the comparison program. We conceptualized program benefits as the estimated future potential earnings of FRI students relative to comparison program students. We used a BCR as the summary measure, estimating the ratio of benefits to costs for FRI in relation to the comparison program. We illustrate this process in Figure 1 and describe each step in the following sections.

    FIGURE 1. Flowchart outlining the steps necessary to conduct and interpret a BCA of a program.

    Estimating Costs

    We started by conducting a programmatic cost analysis, which is the standard first step in economic evaluation including BCA and refers to the collection, valuation, and analysis of all resources required to implement a program or policy. We first determined the perspective of the analysis (i.e., who bears the costs), as the perspective will drive which cost data we choose to collect. For example, if we analyze the program from an organizational perspective—in the case of FRI this would be UT Austin—we would only collect data on costs paid by the organization. Alternatively, we could frame the analysis from a societal perspective and include costs to participants as well as the organization, such as time and travel costs. Best practices for economic evaluation indicate using the societal perspective in order to provide the most comprehensive picture of benefits and costs (Steuerle et al., 2016). However, recommendations for education studies emphasize the importance of matching the study perspective to the goals of the evaluation, especially in cases when a societal perspective would unnecessarily complicate the interpretation of the study findings (Barnett, 1993; Hummel-Rossi and Ashdown, 2002). Here, we analyzed costs from the university’s perspective and did not consider the costs to students to participate in FRI, because we did not anticipate that costs to FRI versus comparison students would differ (i.e., students do not pay extra costs to participate in the program). We also did not include an estimate of any potential cost differences in terms of the time or effort students spent in FRI versus the comparison experience. We could not reliably estimate these values and did not expect them to differ appreciably between the two experiences, although this should be tested more directly in future studies.

    Once an analytic perspective is selected, costs are collected prospectively or retrospectively, using a microcosting or gross-estimation approach. Prospective cost collection refers to the ongoing recording of program costs as they accrue, such as through activity logs, project invoices, and travel logs. Retrospective cost collection involves estimating expenditures after program implementation. Microcosting involves collecting costs by identifying individual resources, while gross estimation uses total program expenditures as costs (e.g., from budgets). Prospective microcosting for both preimplementation (i.e., planning and development) and implementation phases is preferred, because it provides the most detail about the resources required to implement a program and therefore is the most useful for program implementers (Steuerle et al., 2016). However, this method requires the most evaluator effort and is not always feasible.

    For our BCA of FRI, we used a retrospective microcosting approach from the university perspective. We obtained itemized expenditure data from the FRI program, including the individual personnel costs (salary plus fringe for instructors/REs and graduate/undergraduate assistants) and materials costs for each FRI course and for each comparison course. Indirect (or overhead) costs can be allocated in several different ways depending on the information available, but it is generally recommended that allocation be tied to resource use, such as total direct costs or total personnel costs (Drummond et al., 2015; Steuerle et al., 2016). We estimated building resource costs as a proportion of total personnel costs based on the allocation found in the University of Texas System’s annual financial report, setting building resource costs at 2.8% of total personnel costs for comparison courses 1 and 2 (University of Texas System, 2016). FRI was found to use more building resources than the comparison experience because students spend more time in campus labs for assignments: FRI courses 1 and 2 used ∼75% more building resources than their comparators, FRI course 3 was set equal to FRI course 2, and comparison course 3, an independent study, used no building resources. There was no marginal difference in administrative and institutional support costs between the FRI program and the comparison experience, as both course sequences require similar levels of administration and coordination; therefore, these costs were not included.

    We focused our cost collection on implementation costs, because planning and development costs were not available and may also differ significantly between institutions planning FRI-like programs. We provide a summary of the costs of FRI and the comparison experience in Table 2. Because different courses had different levels of enrollment, we present costs at the per student level. On average, FRI costs $2875 per student, while the comparison program costs $1820 per student.

    TABLE 2. Costs per student for FRI and comparison program^a

                       | Instructor | Teaching assistant | Materials | Building resources | Total costs
    FRI group
     Course 1          | $258       | $141               | $20       | $10                | $429
     Course 2          | $908       | $275               | $148      | $30                | $1361
     Course 3          | $908       | $0                 | $148      | $30                | $1086
     Total             | $2074      | $417               | $315      | $70                | $2875
    Comparison group
     Course 1          | $208       | $0                 | $0        | $6                 | $213
     Course 2          | $575       | $32                | $20       | $17                | $644
     Course 3          | $863       | $0                 | $100      | $0                 | $963
     Total             | $1646      | $32                | $120      | $23                | $1820

    ^a Owing to rounding, there may be slight discrepancies in sums.
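
    For readers replicating this kind of cost aggregation, a minimal sketch of the per-student cost roll-up is shown below, using the itemized figures from Table 2. Summing the itemized values gives totals within $1 of the published $2875 and $1820; the footnote’s rounding caveat accounts for the difference.

```python
# Per-student course costs from Table 2, itemized by resource category.
fri = {
    "Course 1": {"instructor": 258, "ta": 141, "materials": 20, "building": 10},
    "Course 2": {"instructor": 908, "ta": 275, "materials": 148, "building": 30},
    "Course 3": {"instructor": 908, "ta": 0, "materials": 148, "building": 30},
}
comparison = {
    "Course 1": {"instructor": 208, "ta": 0, "materials": 0, "building": 6},
    "Course 2": {"instructor": 575, "ta": 32, "materials": 20, "building": 17},
    "Course 3": {"instructor": 863, "ta": 0, "materials": 100, "building": 0},
}

def per_student_total(program):
    """Sum all itemized costs across the three-course sequence."""
    return sum(sum(course.values()) for course in program.values())

print(per_student_total(fri))         # 2876 (published: $2,875)
print(per_student_total(comparison))  # 1821 (published: $1,820)
```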

    Estimating Benefits

    Before a BCA can be conducted, the benefits of the education program of interest must be identified and monetized. Benefits can be multiple and far-reaching, accruing to students in the form of higher grade point averages and to institutions in the form of enhanced reputation. Rodenbusch and colleagues (2016) identified benefits of FRI by comparing outcomes of students who participated in FRI versus a propensity score–matched group of students who participated in the comparison experience. They found that FRI participation led to significant increases in the likelihood of graduating from college (from 66 to 83%) and significant improvements in rates of STEM retention (from 71 to 94%) (Rodenbusch et al., 2016). STEM retention here refers to students who entered college with a declared STEM major graduating with a STEM major. Because FRI had demonstrated effectiveness for these two outcomes, we used college graduation and STEM major to define our study benefits. Attrition from college (i.e., not graduating) represents a third, complementary outcome for our study population.

    The outcome must be monetized to turn a study outcome into a benefit for BCA. Future earning potential is a common outcome measure for BCAs in education (Stern et al., 1989; Hummel-Rossi and Ashdown, 2002; Karoly, 2016), and we chose future earning potential to monetize each of the three outcomes. Data provided by the Hamilton Project at the Brookings Institution show career earnings by educational attainment and by college major, both as median annual earnings over a career and as median lifetime earnings (Hershbein and Kearney, 2014). The earning potentials generated by the Hamilton Project are comparable to other estimates when adjusted for discounting and inflation (Carnevale et al., 2013). By categorizing the majors into two groups, STEM and non-STEM, we were able to estimate a median potential earnings value for each group. We defined our study benefits in terms of median initial annual earning potential and median lifetime earning potential for each of our three study outcomes: college attrition, graduation with a STEM degree, or graduation with a non-STEM degree. College graduates have a much higher earning potential than those who leave college without graduating, and STEM graduates have a higher earning potential than graduates with majors in non-STEM fields. We summarize the benefits for this analysis in Table 3.

    TABLE 3. Benefits in potential earnings per student by outcome in 2014 U.S. dollars

                            | Leave college | STEM graduate | Non-STEM graduate
    Initial annual earnings | $12,200       | $31,300       | $23,400
    Lifetime earnings^a     | $720,000      | $1,425,000    | $1,010,000

    ^a Lifetime earnings are discounted annually at a rate of 3%.

    It should be noted that, instead of directly comparing institutional costs to institutional benefits, we chose to measure benefits exclusively from the student perspective. Many additional benefits of FRI could also be monetized and analyzed, such as the tuition dollars gained and recruitment costs avoided through increased student retention, as well as benefits to the reputation of UT Austin (Heldman, 2008; Raisman, 2013). Increased retention is also likely to save state and federal governments money in the form of publicly funded scholarships and grants provided to students who would otherwise drop out (Schneider, 2010). Finally, from a societal perspective, increased graduation rates, and increased STEM graduation rates in particular, are likely to produce societal benefits in the form of technological progress and increased economic productivity (Krueger and Lindahl, 2001). Including these additional benefits might have made for a more comprehensive BCA, but the challenges inherent in monetizing such abstract and distal benefits, for example, quantifying recruitment costs avoided through FRI or valuing increased university reputation, would likely have weakened the overall usefulness of the study. We opted not to monetize these additional benefits in order to maintain a simplified focus on the organizational and student perspectives and to prioritize the explanation of the BCA process for this essay.

    To compare costs and benefits fairly, it is important to ensure that they are adjusted for inflation and for time preference (i.e., discounting), especially when benefits occur in the future. The value of a dollar 5 years ago does not equal the value of a dollar today, and adjusting for inflation mitigates this difference in purchasing power. For this study, we collected FRI costs for 1 year of operation in 2015 and obtained benefits data presented in 2014 U.S. dollars. To adjust for inflation, we used the All Items Index of the Consumer Price Index to adjust all costs to the base year of 2014, so that all dollar values of costs and benefits possess equal purchasing power (Bureau of Labor Statistics, 2014). Discounting is separate from inflation and can be defined as the reduced valuation of costs and benefits that occur in the future due to the concept of time preference. Time preference refers to the advantage of obtaining a benefit now instead of in the future, a preference that would hold even if inflation did not exist. The costs of FRI do not need to be discounted, as they occur in a single year; however, the benefits of FRI accumulate over the course of the student’s career, and this differential timing of costs and benefits necessitates discounting. Therefore, the future earning benefits obtained from the Hamilton Project were discounted at a 3% annual rate (a common discount rate for social programs) and are reported as present values so that they can be fairly compared with the program costs (Hershbein and Kearney, 2014). Additional adjustments to monetized benefits may be necessary to ensure that benefits transferred from one source accurately apply to the population under consideration. For example, a geographic cost-of-living adjustment may be required if benefit estimates derived from a Los Angeles population are applied to participants of a program implemented in the Midwest. Because the future earning potential estimates used in this study were derived from a nationally representative sample and we have no reason to believe that UT graduates significantly over- or underearn compared with the national average, no further adjustments were necessary.
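
    The mechanics of discounting can be illustrated with a short sketch. This is not the Hamilton Project’s actual calculation; the constant annual earnings stream and 40-year career length below are assumptions chosen purely to demonstrate the effect of a 3% annual discount rate.

```python
# Present value of a future earnings stream discounted at 3% per year.
def present_value(annual_amount, years, rate=0.03):
    """Discount a constant annual amount received for `years` years."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

# A dollar earned 20 years from now is worth far less today:
print(f"${1 / 1.03 ** 20:.2f}")  # ~$0.55

# Undiscounted vs. discounted career earnings of $50,000/year for 40 years
# (both figures are illustrative assumptions, not FRI data):
print(f"${50_000 * 40:,.0f}")               # $2,000,000
print(f"${present_value(50_000, 40):,.0f}") # ~$1,155,740
```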

    Modeling Costs and Benefits

    The decision tree is among the most common methods for modeling economic evaluations (Drummond et al., 2015). A decision tree (Figure 2) functions like a flowchart, with a hypothetical population beginning at a decision node (rectangle) and then proceeding through the tree sequentially until arriving at a final outcome represented by a terminal node (triangle). Along the route are chance nodes (circles) at each bifurcation in the tree, which represent the probability of a given event occurring along the pathway. At the end of the tree, each terminal node represents a final outcome associated with the pathway of events. The probability of each final outcome occurring can be calculated by multiplying the probabilities at each chance node along a particular path through the tree. The average projected cost and benefit of each path can then be calculated and compared.

    FIGURE 2. Decision tree model for the costs and benefits of FRI vs. the comparison program. The potential population of FRI is STEM majors at a decision node (rectangle on left), who either become part of the FRI or comparison group. Chance nodes (circles) are points where the population has different likelihoods of pursuing different paths on the way to realizing different outcomes (triangles). The percentages of the population that proceed on each path are noted next to the path. The probability of each outcome occurring is calculated by multiplying the probabilities at each chance node (i.e., the percentages) associated with that path. The average cost and benefit of each path can then be calculated and compared (on right).

    Figure 2 depicts a decision tree for estimating the projected costs and benefits of FRI versus the comparison program. The hypothetical population consists of STEM majors who either do or do not participate in FRI (the decision node). For simplicity’s sake, we assume a population of 200 STEM majors, with 100 in the FRI path and 100 in the comparison path. Following the two populations sequentially through the tree, the evidence from Rodenbusch et al. (2016) suggests that one would expect 17% of FRI students and 34% of comparison students to leave college. Of the 83 FRI students who graduate, 94% (i.e., 78 students) do so with a STEM degree, while the remaining five graduates do not. Similarly, 71% of the 66 comparison students who graduate (i.e., 47 students) do so with a STEM degree, while the remaining 19 comparison students graduate in a non-STEM field.

    The right side of the decision tree (Figure 2) shows the expected probability for each group (FRI vs. comparison) of achieving each of the three outcomes and the associated potential earnings. Probabilities are rounded to the nearest hundredth in this example. Because average costs do not vary among outcomes within each study group, they are shown only at the decision node. Average potential earnings for each study group are estimated with the expected outcome probabilities as weights. On the basis of this model, we estimate that the expected average median initial annual earning potential for FRI students is $27,658, while the expected average median initial annual earning potential for comparison program students is $23,305. The expected average median lifetime earning potential for FRI students is $1,284,400, while the average median lifetime earning potential for comparison students is $1,106,450.
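
    The expected values above can be reproduced in a few lines of code. The path probabilities below are the rounded values from Figure 2, and the earnings come from Table 3.

```python
# Expected earning potential for each study group, computed from the
# decision tree in Figure 2. Path probabilities are rounded to the
# nearest hundredth, as in the text.
earnings_initial = {"leave": 12_200, "stem": 31_300, "non_stem": 23_400}
earnings_lifetime = {"leave": 720_000, "stem": 1_425_000, "non_stem": 1_010_000}

# P(outcome) = product of the chance-node probabilities along each path,
# e.g., P(STEM graduate | FRI) = 0.83 graduate * 0.94 STEM ≈ 0.78.
prob_fri = {"leave": 0.17, "stem": 0.78, "non_stem": 0.05}
prob_comparison = {"leave": 0.34, "stem": 0.47, "non_stem": 0.19}

def expected_value(earnings, probabilities):
    """Probability-weighted average earnings across the three outcomes."""
    return sum(earnings[o] * p for o, p in probabilities.items())

print(f"${expected_value(earnings_initial, prob_fri):,.0f}")          # $27,658
print(f"${expected_value(earnings_initial, prob_comparison):,.0f}")   # $23,305
print(f"${expected_value(earnings_lifetime, prob_fri):,.0f}")         # $1,284,400
print(f"${expected_value(earnings_lifetime, prob_comparison):,.0f}")  # $1,106,450
```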

    Benefit–Cost Analysis

    The next step in a BCA is to directly compare the program’s costs with its benefits. It is common for BCAs to compare a program with a scenario in which there is no program in place, referred to as a “do-nothing” scenario. A do-nothing scenario has a cost of $0, making the calculation of the BCR relatively straightforward: the benefits of the new program are compared with the costs of the new program. For example, BCAs of early childhood education programs often compare a preschool population with a population that did not attend preschool. The benefits are estimated from the improved performance of the preschool graduates compared with children who did not attend preschool, and the BCR reflects these benefits compared with total program costs. However, when the comparison experience is not a do-nothing scenario but instead refers to a basic program or the status quo, which the program of interest is enhancing, it is more appropriate to compare the incremental costs and benefits in a BCR (Karoly, 2016).

    In this study, we compare FRI with a traditional college course sequence, the status quo in this case, instead of a do-nothing scenario; therefore, we estimated the incremental, or additional, costs and benefits of FRI relative to the costs and benefits of the status quo. The microcosting data indicate that FRI costs an average of $1055 more per student than the comparison program. The projected benefits indicate that participation in FRI produces an average increase of $4353 per student in potential initial annual earnings and $177,950 per student in potential lifetime earnings. Thus, FRI participants are estimated to earn almost 19% more in initial annual earnings upon graduation and 16% more in lifetime earnings when compared with the comparison group. Calculating an incremental ratio of benefits to costs reveals a 4.13:1 ratio for initial annual earning potential and a 169:1 ratio for lifetime earning potential. Any ratio greater than 1:1 indicates a positive return on the university’s investment. Thus, we estimate that FRI generates a return of more than $4 in students’ initial earning potential and a return of $169 in students’ lifetime earning potential for every $1 that the university invests when compared with the earning potential of students in the comparison program.
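
    As a check on the arithmetic, the incremental ratios reported above follow directly from the expected earnings and per-student costs:

```python
# Incremental benefit-cost ratios for FRI vs. the comparison program,
# using the expected earnings from the decision tree and the costs
# from Table 2.
incremental_cost = 2_875 - 1_820                  # $1,055 per student
incremental_initial = 27_658 - 23_305             # $4,353 per student
incremental_lifetime = 1_284_400 - 1_106_450      # $177,950 per student

print(f"{incremental_initial / incremental_cost:.2f}:1")   # 4.13:1
print(f"{incremental_lifetime / incremental_cost:.0f}:1")  # 169:1
```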

    The final step of an economic evaluation, including BCA, is to conduct a sensitivity analysis (Drummond et al., 2015; Steuerle et al., 2016). Costs and benefits of a program may vary among participants, and a sensitivity analysis is conducted to reflect this uncertainty. For this study, we conducted a two-way sensitivity analysis, in which we varied two key parameters both individually and simultaneously in order to present potential scenarios producing lower- and upper-bound estimates to supplement our baseline estimate of the BCR.

    The first key parameter we varied for the sensitivity analysis was the average cost per student of the FRI course sequence, as this cost depends on the resource intensity of the research undertaken in each course. For example, computer science courses required far fewer resources than wet-lab science courses. Our data showed that total FRI program costs ranged from $2137 to $3785 per student for the three courses; therefore, we assumed FRI costs of $2137 per student in a low-cost scenario and a cost of $3785 per student in a high-cost scenario. A more robust sensitivity analysis would also include the associated differences in effectiveness by course type, but these data were not available in the effectiveness study.

    The second key parameter we varied was the graduation rate of students in the hypothetical cohort. Our baseline scenario assumed that students who left UT Austin did not finish college elsewhere and accrued the future earning potential of students who never finish college. For the sensitivity analysis, we added a scenario in which 10% of students who leave UT Austin enroll in and graduate from a different college (Schneider, 2010) and thus accrue the future earning potential of a non-STEM graduate.

    Table 4 presents the results of a two-way sensitivity analysis. The estimates of FRI costs are given in the first column, followed by the BCRs for the baseline graduation assumptions and then the ratios for the increased graduation rates. The lowest BCR scenario (the costliest FRI course sequence and the increased graduation rate) reduced the incremental BCR to 2.12:1 for initial annual earnings and 88:1 for lifetime earnings, while the highest BCR scenario (the least costly FRI courses and the baseline graduation rate) increased the BCR to 13.7:1 for initial annual earnings and 561:1 for lifetime earnings. Therefore, every additional dollar that UT Austin invests in FRI when compared with a traditional program of study generates $2 to $14 in returns for students in increased potential initial annual earnings and $88 to $561 in returns for students in increased potential lifetime earnings. In all scenarios, even the most costly FRI courses generate a positive return for students.

    TABLE 4. Sensitivity analysis of the incremental BCR of FRI

                      | BCR                    | Increased graduation rate^a
    FRI costs         | Initial $ | Lifetime $ | Initial $ | Lifetime $
    Baseline ($2875)  | 4.13      | 168.70     | 3.95      | 164.03
    Low ($2137)       | 13.73     | 561.40     | 13.13     | 545.85
    High ($3785)      | 2.22      | 90.58      | 2.12      | 88.07

    ^a Increased graduation rate assumes that 10% of those who leave college go on to graduate from a different institution and thus gain the earning potential of a college graduate.
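
    The full two-way analysis in Table 4 can be reproduced programmatically. The sketch below recomputes each scenario from the underlying effectiveness rates rather than from the rounded path probabilities, so the resulting ratios match Table 4 only up to small rounding differences in the published figures.

```python
# Two-way sensitivity analysis over FRI cost per student and the
# graduation assumption, following Table 4. In the increased-graduation
# scenario, 10% of students who leave UT Austin are assumed to graduate
# elsewhere with a non-STEM degree.
earnings = {
    "initial": {"leave": 12_200, "stem": 31_300, "non_stem": 23_400},
    "lifetime": {"leave": 720_000, "stem": 1_425_000, "non_stem": 1_010_000},
}
COMPARISON_COST = 1_820  # per-student cost of the comparison program ($)

def outcome_probs(grad_rate, stem_rate, reenroll=0.0):
    """Outcome probabilities; `reenroll` is the share of leavers who graduate elsewhere."""
    return {
        "leave": (1 - grad_rate) * (1 - reenroll),
        "stem": grad_rate * stem_rate,
        "non_stem": grad_rate * (1 - stem_rate) + (1 - grad_rate) * reenroll,
    }

def expected_value(horizon, probs):
    return sum(earnings[horizon][o] * p for o, p in probs.items())

for label, fri_cost in [("Baseline", 2_875), ("Low", 2_137), ("High", 3_785)]:
    for reenroll in (0.0, 0.10):  # baseline vs. increased graduation rate
        p_fri = outcome_probs(0.83, 0.94, reenroll)
        p_comp = outcome_probs(0.66, 0.71, reenroll)
        extra_cost = fri_cost - COMPARISON_COST
        bcr_init = (expected_value("initial", p_fri)
                    - expected_value("initial", p_comp)) / extra_cost
        bcr_life = (expected_value("lifetime", p_fri)
                    - expected_value("lifetime", p_comp)) / extra_cost
        print(f"{label}, reenroll {reenroll:.0%}: "
              f"{bcr_init:.2f}:1 initial, {bcr_life:.1f}:1 lifetime")
```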

    LIMITATIONS

    There are several limitations to our FRI analysis that should be considered in any economic evaluation. First, best practices for economic evaluation recommend a societal rather than organizational perspective in order to provide the most comprehensive economic estimates (Steuerle et al., 2016). Such a perspective would account for any marginal differences in student costs between the two programs, including marginal differences in time spent on course work. However, a comprehensive societal perspective requires more extensive data collection and does not always have a straightforward and applicable interpretation. In our example analysis of FRI, we made use of multiple perspectives in an effort to most clearly and succinctly illustrate how the university’s investment can benefit FRI students. Specifically, we estimated costs from an organizational perspective, as UT Austin (the organization) funds the extra programmatic costs of FRI (students do not pay extra fees to participate in the program). We estimated benefits as future earning potential from a student perspective, because improving student retention in college and in STEM were primary objectives of the program. Multiple perspectives are not uncommon in BCA: evaluations of government programs often estimate costs from the government’s perspective, while benefits accrue to vulnerable populations who may not contribute to the tax base for the program. Public education funding serves as a useful example, because property taxes fund a large proportion of public education, but not all who pay property taxes have children using public education and not all who benefit from public education pay property taxes. Such a multiperspective approach does, however, preclude a traditional ROI analysis, as the student benefits do not necessarily return to the investor (the university). An analysis that estimated benefits in terms of increased tuition dollars or attributable alumni donations would allow for an estimation of ROI.

    Second, there are limitations in our assumptions about costs and benefits. We did not include preimplementation costs, such as planning costs, in our estimate of FRI costs. Including these costs would decrease the BCR, although over time this impact would lessen as these costs were spread over more FRI participants. Further, we obtained effectiveness data at the aggregate level only and therefore were unable to analyze costs and outcomes more precisely at the major level, which would have enabled BCR estimates by major. In our estimation of benefits, we did not have data on the actual earnings of FRI graduates and instead used existing national estimates to project the earning potential of STEM versus non-STEM graduates. We used median estimates instead of average earnings to mitigate the skewness of the data, but variances in earnings were unequal between STEM and non-STEM majors, with non-STEM majors showing higher variance. Additionally, median earnings data were reported at the major level, and in order to report group-level earning potential for STEM and non-STEM majors, we used the median earnings of the median major in each group. These are not the true median earnings of STEM and non-STEM majors, as those data were unavailable.

    Finally, we presented a simple two-way sensitivity analysis to introduce the concept of uncertainty and to encourage readers to consider how variability in assumptions may affect the evaluation’s conclusions. However, per guidance on how to conduct sensitivity analyses from the National Academies and others, a more robust sensitivity analysis would vary all parameters in the model, both individually and simultaneously (Drummond et al., 2015; Steuerle et al., 2016). Our sensitivity analysis includes no analysis of uncertainty around the benefit estimates of future earning potential or around the effectiveness of FRI, and a robust multiway analysis incorporating these parameters would be appropriate. The use of more sophisticated modeling techniques, such as probabilistic sensitivity analysis, would also strengthen the study.
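
    As an illustration of the probabilistic approach suggested above, the following sketch draws program costs and effectiveness rates from assumed distributions and summarizes the resulting distribution of lifetime BCRs. The distributions are illustrative assumptions chosen for demonstration, not estimates from the FRI data.

```python
# A minimal probabilistic sensitivity analysis (PSA) sketch. The
# distributions below are assumptions: costs are drawn uniformly over
# the observed per-student cost range, and graduation/STEM-retention
# rates are drawn from beta distributions centered near the published
# point estimates.
import random

earnings = {"leave": 720_000, "stem": 1_425_000, "non_stem": 1_010_000}

def expected_lifetime(grad_rate, stem_rate):
    """Probability-weighted lifetime earning potential."""
    return (earnings["leave"] * (1 - grad_rate)
            + earnings["stem"] * grad_rate * stem_rate
            + earnings["non_stem"] * grad_rate * (1 - stem_rate))

random.seed(1)
bcrs = []
for _ in range(10_000):
    fri_cost = random.uniform(2_137, 3_785)   # observed per-student cost range
    grad_fri = random.betavariate(83, 17)     # centered near 0.83
    grad_comp = random.betavariate(66, 34)    # centered near 0.66
    stem_fri = random.betavariate(94, 6)      # centered near 0.94
    stem_comp = random.betavariate(71, 29)    # centered near 0.71
    benefit = (expected_lifetime(grad_fri, stem_fri)
               - expected_lifetime(grad_comp, stem_comp))
    bcrs.append(benefit / (fri_cost - 1_820))

bcrs.sort()
print(f"median BCR {bcrs[5_000]:.0f}:1, "
      f"90% interval {bcrs[500]:.0f}:1 to {bcrs[9_500]:.0f}:1")
```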

    SUMMARY

    A recent consensus study calls for research that evaluates the benefits and costs of UREs, particularly for students in STEM majors (National Academies of Sciences, Engineering, and Medicine, 2017). Here, we aimed to provide more general guidance on how to conduct economic evaluations of undergraduate education programs by explaining the basic methods used to assess a program’s benefits and costs in a BCA. To illustrate how BCA can be used in practice, we conducted a BCA using a large-scale URE program, the FRI at UT Austin, the results of which can be used to inform decisions about the program. We conclude that the university’s investment in FRI is likely to generate a positive return for students in the form of increased future earning potential.

    ACKNOWLEDGMENTS

    We thank Lauren Crowe, Cassandra Delgado-Reyes, and Marty Mass for providing cost information. Support for FRI was provided by a grant from the Howard Hughes Medical Institute (#52006958). The contents of this paper are solely the responsibility of the authors and do not necessarily represent the official views of HHMI.

    REFERENCES

  • Barnett, W. S. (1985). Benefit-cost analysis of the Perry Preschool Program and its policy implications. Educational Evaluation and Policy Analysis, 7(4), 333–342.
  • Barnett, W. S. (1993). Economic evaluation of home visiting programs. Future of Children, 3(3), 93–112. doi:10.2307/1602544
  • Beckham, J. T., Simmons, S., Stovall, G. M., & Farre, J. (2015). The Freshman Research Initiative as a model for addressing shortages and disparities in STEM engagement. In Peterson, M. A., & Rubinstein, Y. A. (Eds.), Directions for mathematics research experience for undergraduates (pp. 181–212). Singapore: World Scientific.
  • Bureau of Labor Statistics. (2014). Consumer Price Index, All Items Index. Bureau of Labor Statistics Consumer Price Index Archived News Releases. Retrieved March 2, 2017, from www.bls.gov/bls/news-release/cpi.htm
  • Carnevale, A. P., Rose, S. J., & Cheah, B. (2013). The college payoff: Education, occupations, lifetime earnings. Georgetown University Center on Education and the Workforce. Retrieved February 27, 2017, from https://cew-7632.kxcdn.com/wp-content/uploads/2014/11/collegepayoff-complete.pdf
  • Coppola, B. P. (2001). Full human presence: A guidepost to mentoring undergraduate science students. New Directions for Teaching and Learning, 2001(85), 57–73.
  • Drummond, M. F., Sculpher, M. J., Claxton, K., Stoddart, G. L., & Torrance, G. W. (2015). Methods for the economic evaluation of health care programmes. Oxford, UK: Oxford University Press.
  • Haddix, A. C., Teutsch, S. M., & Corso, P. S. (2003). Prevention effectiveness: A guide to decision analysis and economic evaluation. Oxford, UK: Oxford University Press.
  • Heldman, C. (2008). Looking at the costs of student acquisition and attrition. Recruitment & Retention in Higher Education, 22(5), 6–7.
  • Hershbein, B., & Kearney, M. (2014). Major decisions: What graduates earn over their lifetimes. Washington, DC: Hamilton Project, Brookings Institution.
  • Hoffman, J. R. (2009). Applying a cost-benefit analysis to undergraduate research at a small comprehensive university. Council on Undergraduate Research Quarterly, 30(1), 20–24.
  • Hummel-Rossi, B., & Ashdown, J. (2002). The state of cost-benefit and cost-effectiveness analyses in education. Review of Educational Research, 72(1), 1–30. doi:10.3102/00346543072001001
  • Hurtado, S., Cabrera, N. L., Lin, M. H., Arellano, L., & Espinosa, L. L. (2009). Diversifying science: Underrepresented student experiences in structured research programs. Research in Higher Education, 50(2), 189–214.
  • Karoly, L. A. (2016). The economic returns to early childhood education. Future of Children, 26(2), 37–55.
  • Krueger, A. B., & Lindahl, M. (2001). Education for growth: Why and for whom? Journal of Economic Literature, 39, 1101–1136.
  • Lee, S., Drake, E., Pennucci, A., Bjornstad, G., & Edovald, T. (2012). Economic evaluation of early childhood education in a policy context. Journal of Children’s Services, 7(1), 53–63. doi:10.1108/17466661211213670
  • Lei, S. A., & Chuang, N.-K. (2009). Undergraduate research assistantship: A comparison of benefits and costs from faculty and students’ perspectives. Education, 130(2), 232–240.
  • Lopatto, D. (2007). Undergraduate research experiences support science career decisions and active learning. CBE—Life Sciences Education, 6(4), 297–306.
  • National Academies of Sciences, Engineering, and Medicine. (2017). Undergraduate research experiences for STEM students: Successes, challenges, and opportunities. Washington, DC: National Academies Press.
  • National Center for Education Statistics. (2016). Postsecondary institution expenses: How much do colleges and universities spend on students? Retrieved June 27, 2017, from https://nces.ed.gov/fastfacts/display.asp?id=75
  • Neumann, P. J., Sanders, G. D., Russell, L. B., Siegel, J. E., & Ganiats, T. G. (2016). Cost-effectiveness in health and medicine. Oxford, UK: Oxford University Press.
  • Pennebaker, D. F. (1991). Teaching nursing research through collaboration: Costs and benefits. Journal of Nursing Education, 30(3), 102–108.
  • Pew Charitable Trusts. (2015). Federal and state funding of higher education: A changing landscape. Retrieved June 27, 2017, from www.pewtrusts.org
  • Raisman, N. A. (2013). The cost of college attrition at four-year colleges & universities: An analysis of 1669 US institutions (Policy Perspectives). Virginia Beach, VA: Educational Policy Institute.
  • Ritchie, S. M., & Rigano, D. L. (1996). Laboratory apprenticeship through a student research project. Journal of Research in Science Teaching, 33(7), 799–815.
  • Rodenbusch, S. E., Hernandez, P. R., Simmons, S. L., & Dolan, E. L. (2016). Early engagement in course-based research increases graduation rates and completion of science, engineering, and mathematics degrees. CBE—Life Sciences Education, 15(2), ar20.
  • Russell, S. H., Hancock, M. P., & McCullough, J. (2007). Benefits of undergraduate research experiences. Science, 316(5824), 548–549.
  • Schneider, M. (2010). Finishing the first lap: The cost of first year student attrition in America’s four year colleges and universities. Washington, DC: American Institutes for Research.
  • Springer, L., Stanne, M. E., & Donovan, S. S. (1999). Effects of small-group learning on undergraduates in science, mathematics, engineering, and technology: A meta-analysis. Review of Educational Research, 69(1), 21–51.
  • Stern, D., Dayton, C., Paik, I.-W., & Weisberg, A. (1989). Benefits and costs of dropout prevention in a high school program combining academic and vocational education: Third-year results from replications of the California Peninsula Academies. Educational Evaluation and Policy Analysis, 11(4), 405–416.
  • Steuerle, E., Price, O. A., Basurto-Davila, R., Brooks, J., Brooks-Gunn, J., Chow, B., … Karoly, L. (2016). Advancing the power of economic evidence to inform investments in children, youth, and families. Washington, DC: National Academies Press.
  • Topping, K. J. (1996). The effectiveness of peer tutoring in further and higher education: A typology and review of the literature. Higher Education, 32(3), 321–345.
  • University of Texas System. (2016). Consolidated annual financial report: FY 2016. Retrieved October 23, 2017, from www.utsystem.edu