Abstract

The study examines the effectiveness of adaptive learning technology as a supplemental component in online precalculus courses using data from vendor software and a public university system in the southeastern United States. Outcomes examined include final exam score and course completion with a passing grade. The results highlight that not all students utilize the technology and that many do not take the exams. Among students who participated in the course as intended, adaptive learning activity shows a modest, positive association with final exam scores and successful course completion. Results further indicate the importance of aligning the vendor and course curricula and the need for further research that controls for variables related to motivation and participation.

The Effectiveness of Adaptive Learning Software on Exam and Course Outcomes in Online Precalculus Courses

Higher education faces simultaneous pressures from stakeholders to demonstrate both effectiveness and fiscal responsibility. Using resources efficiently to maximize institutional outcomes often necessitates relying on external vendor tools and products. Vendor products come with associated costs, however, and institutions must determine whether the expected benefits outweigh those costs. Institutions should consider providing evidence that a product is both effective and the most financially efficient method of achieving the desired results. The present study examines the effectiveness of one vendor product as it relates to outcomes in an online precalculus course. Additionally, the study addresses research questions about the efficacy of an adaptive learning tool to provide evidence and decision support to administrators considering similar software in comparable contexts.

Outsourcing in Higher Education

Institutions of higher education commonly outsource a variety of ancillary functions, including dining operations, grounds maintenance, and custodial services (Adams, Guarino, Robichaux, & Edwards, 2004) and may do so as a way to focus more directly on the central mission of teaching (Gupta, Herath, & Mikouiza, 2005). According to survey research by Gupta et al. (2005), other motivations include cost savings, staffing resources, directives from governance structures, and a lack of internal capability. However, little research examines the relationships between institutions of higher education and the private organizations to which they outsource. Furthermore, existing empirical studies of effectiveness are limited, inconclusive, and dependent upon the outsourced activity (Wekullo, 2017). The current study adds to the literature on outsourcing by examining outcomes associated with a form of outsourced instruction.

While institutions of higher education possess differentiated missions, the majority include elements that combine teaching, research, and service. Public, non-profit institutions may sit farthest from vendor missions that often focus more on profitability. However, outsourcing functions that are more central to institutional core missions, such as instruction, may be gaining momentum even among public institutions, particularly those facing declines in financial resources. Engaging in partnerships with private companies to deliver instruction may be beneficial in moderation when properly implemented (Russell, 2010). Still, the practice of privatizing instruction is controversial. Critics identify challenges of incompatible missions, threats to faculty autonomy, and quality concerns (Russell, 2010). Similarly, Schibik and Harrington (2004) relate instructional outsourcing to the use of part-time faculty and conclude that a successful outsourcing arrangement relies on a corporate approach that minimizes costs. Researchers and institutions should empirically evaluate an outsourced arrangement's effectiveness to determine whether the benefits of outsourcing instructional components outweigh the concerns and challenges.

Adaptive Learning Products

One method of outsourcing instruction involves adaptive learning products. According to an adaptive learning implementation guide published by the Association of Public and Land-grant Universities, adaptive learning solutions “support a range of institutional goals” and include technology-based methods that deliver a customized learning experience for each user (Vignare et al., 2018, p. iii). The products react dynamically to learner behaviors by offering customized content based on those behaviors. Institutions can incorporate this type of software to create, replace, or supplement course content and implement these solutions as delivered, or customize them to integrate with the already-developed curriculum (Vignare et al., 2018).

To support the teaching function of institutions implementing this technology, the tools must align with desired program- or course-level outcomes and the current learning objectives. Careless implementation jeopardizes student progress and motivation and may lead to inferior outcomes (Gregg, Wilson, & Parrish, 2018). Problematically, products purport to deliver better student outcomes but may not have data to measure institutional, program, course-specific, or student-level outcomes (Oxman & Wong, 2014).

Without access to course-specific learning measures or outcomes, adaptive learning products utilize internal calculations, or algorithms, to determine the student’s path. Because these algorithms are proprietary and internal to the product, assessments of students’ competencies are essentially made inside a “black box” (Blumenstyk, 2016). Thus, the process lacks transparency, and an unknown pedagogy underlies the process of student evaluation and progress. Gregg, Wilson, and Parrish (2018) further explain the “black box” phenomenon with the following:

Adaptive learning systems, for example, often make qualitative designations about learner competence, such as whether a learner is a novice, proficient, or expert, based on assumptions about question difficulty and test properties. These decisions require human judgment; for instance, how many ‘difficult’ questions a student has to answer correctly, and in what time frame, to be designated an ‘expert.’ The numbers ‘speak’ through hidden algorithms and program-designed features. (p. 3)

A troubling implication of black box algorithms is the potential to reproduce bias (Raymond, Young, & Shackelford, 2018). If the algorithms are biased, then the results will also be biased. Without access to the proprietary algorithms of adaptive learning technology, institutions must rely on their own evaluations to determine whether the solution is effective and unbiased.

As is true for other outsourcing methods, research on the effectiveness of adaptive learning is limited but tends to define effectiveness in terms of student retention and improved learning (Johnson, 2016; Means, Peters, & Zheng, 2014). In a meta-analysis performed on projects funded by the Gates Foundation, researchers note that the effects on course outcomes and learning outcomes were only slightly above zero, but the studies failed to account for confounding variables (Means, Peters, & Zheng, 2014). For example, Arizona State University saw a 20% increase in College Algebra course success after implementing adaptive learning, but the researchers failed to account for two other simultaneous changes in course delivery (Vignare et al., 2018). Similar studies noting increased outcomes also fail to provide adequate empirical evidence that supports the effectiveness of adaptive learning (Howlin, 2014; Prusty & Russell, 2014).

The current study addresses the effectiveness of adaptive learning as a supplemental component in online precalculus courses. If the software effectively increases concept mastery at the student level, it may also relate to improved performance on final exams and, subsequently, course success. To this end, the study addresses the following research questions:

  1. Is the utilization of the adaptive learning software (ALS) associated with an increase in final exam performance?
  2. Is the utilization of the adaptive learning software associated with successful course completion?

Methods

Sample

The current study examines the effects of adaptive learning software on outcomes in online precalculus courses. The data derive from students enrolled at public, four-year institutions in the southeastern United States in the spring of 2019. The term included thirteen sections of the course, with enrollments ranging from 22 to 38 students each (mean = 32.23, SD = 4.34). In this term, 421 students registered for the course, and of those, 78 withdrew before completing it. Two cases (less than one percent of the sample) were excluded for missing demographic data. The final sample consisted of 419 students, 76 of whom withdrew prior to course completion.

Measures

Dependent variables. Two dependent variables measure the effects of adaptive learning on student performance and course outcomes. First, student performance on the final exam provides a cumulative assessment of course concepts. The exams utilized in each course section were identical, and the measure does not include any follow-up grade adjustments made by individual instructors. This measure excludes zeroes for non-attempts, and actual scores ranged from 13.81 to 100 (mean = 70.89, SD = 21.72). Around 50% of the scores fell between 40 and 80, and less than 10% of scores fell below 35. Only 282 (67.3%) of the students in the sample completed the final exam.

Second, successful course completion provides the measure of student course outcome. Successful course completion was dummy-coded as “1” if the student earned an “A,” “B,” or “C” as the final grade in the course, and “0” if the student earned a “D,” “F,” or “W.”

Independent variables. The primary independent variables measure activity in the adaptive learning component of the course. At the end of the term, the adaptive learning vendor provided a cumulative data file containing information extracted from the platform. From these data, I calculated the total number of modules each student completed within the platform and the total number of assessment questions taken to complete those modules. Due to the software’s dynamic nature, students received a higher number of assessment items if they answered any questions in a module incorrectly. Students who answered incorrectly always received an additional assessment item but could choose to review additional instructional items before attempting it. Therefore, the number of self-selected instructional items was also included in the model. If the software adequately fills knowledge gaps, the number of instructional items should be positively associated with course outcomes regardless of the number of assessment items. Unfortunately, the vendor software did not provide a reliable measure of time, so there is no measure of activity duration.
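
To make the construction of these measures concrete, the sketch below shows one way per-student totals could be derived from a vendor extract. The file name, column names, and completion flag are assumptions introduced for illustration only, not the vendor's actual schema.

    import pandas as pd

    # Hypothetical vendor extract with one row per student-module record;
    # all file and column names here are illustrative assumptions.
    als = pd.read_csv("vendor_extract.csv")

    # Keep records for modules the student actually completed.
    completed = als[als["module_completed"] == 1]

    # Per-student totals: modules completed, assessment items taken,
    # and self-selected instructional items reviewed.
    als_totals = completed.groupby("student_id").agg(
        modules_completed=("module_id", "nunique"),
        assessment_items=("assessment_items", "sum"),
        instructional_items=("instructional_items", "sum"),
    ).reset_index()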

Control variables. To control for the effect of previous academic ability, scores on the proctored midterm exam were included in the analyses. While all midterm exams were administered by an independent and impartial proctor, only some of the final exams were proctored. Therefore, the models included a control variable for whether the final exam was proctored. To further control for variations in course delivery, a dummy variable captured whether the course spanned 8 or 16 weeks.

The available data identified each student as “Asian,” “Black or African American,” “Hispanic or Latino,” “Native Hawaiian or Pacific Islander,” “Race and Ethnicity Unknown,” “Two or More Races,” “Unknown,” or “White.” These categories were collapsed into the following series of dummy-coded variables: Asian, Black, Hispanic/Latino, White, and Other.

Additional control variables captured whether the student was jointly enrolled in college while still enrolled in high school (1 = yes), gender (1 = female), age at the end of the term, whether the student had freshman class standing (1 = yes), and major of study. The major variable allows comparison of math and science majors to all other majors, where STEM = 1 indicates a math or science major. Table 1 describes each variable in detail, and Appendix A lists each major and its classification.
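
As an illustration of the coding scheme described above, the following sketch shows one way the outcome and control dummies could be constructed. The input file, column names, raw category values, and the short STEM major list are hypothetical placeholders rather than the study's actual coding.

    import pandas as pd

    students = pd.read_csv("student_records.csv")  # hypothetical student-level file

    # Successful course completion: 1 for a final grade of A, B, or C; 0 for D, F, or W.
    students["success"] = students["final_grade"].isin(["A", "B", "C"]).astype(int)

    # Collapse the reported race/ethnicity categories into the five dummy groups.
    race_map = {
        "Asian": "Asian",
        "Black or African American": "Black",
        "Hispanic or Latino": "Hispanic/Latino",
        "White": "White",
    }
    students["race_group"] = students["race_ethnicity"].map(race_map).fillna("Other")

    # Remaining controls: joint enrollment, gender, freshman standing, and STEM major.
    students["joint_enrolled"] = (students["dual_enrollment"] == "Yes").astype(int)
    students["female"] = (students["gender"] == "F").astype(int)
    students["freshman"] = (students["class_standing"] == "Freshman").astype(int)
    stem_majors = ["Mathematics", "Biology", "Chemistry"]  # illustrative subset of Appendix A
    students["stem"] = students["major"].isin(stem_majors).astype(int)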

Analyses

First, I examined the sample to determine how many students utilized the course’s adaptive learning component. Given the research questions, I excluded 37 students from the primary analysis because they never accessed the course’s adaptive learning component. For reference, I used t-tests to compare the characteristics of students who did and did not access the adaptive learning component. Similarly, because not all students took both the midterm and final exams, I examined differences between students who did and did not take both exams.
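
A comparison of this kind could be run as follows; the data frame, column names, and the choice of Welch's (unequal-variance) t-test are assumptions of this sketch rather than details reported in the study.

    import pandas as pd
    from scipy import stats

    df = pd.read_csv("analytic_file.csv")  # hypothetical analytic file

    # Example: compare mean age between students who did and did not access the ALS component.
    accessed = df.loc[df["accessed_als"] == 1, "age"]
    not_accessed = df.loc[df["accessed_als"] == 0, "age"]
    t_stat, p_value = stats.ttest_ind(accessed, not_accessed, equal_var=False)  # Welch's t-test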

Next, I ran regression models for each of the research questions. To address the first question about the association between adaptive learning and final exam scores, I ran two ordinary least squares (OLS) regression models. The first model included the adaptive learning variables, and the next model contained both the adaptive learning variables and the control variables. For the second research question, I ran a logit regression model with successful course completion as the outcome variable.
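
A minimal sketch of the three models follows, using the hypothetical variable names from the earlier sketches and the statsmodels formula interface; it is not the study's actual estimation code.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("analytic_file.csv")  # hypothetical analytic file

    als_terms = "modules_completed + assessment_items + instructional_items"
    controls = ("midterm_exam + proctored_final + full_session + C(race_group)"
                " + joint_enrolled + female + age + freshman + stem")

    # Model 1: adaptive learning variables only (OLS on final exam score).
    m1 = smf.ols(f"final_exam ~ {als_terms}", data=df).fit()

    # Model 2: adaptive learning variables plus student and course controls.
    m2 = smf.ols(f"final_exam ~ {als_terms} + {controls}", data=df).fit()

    # Logit model of successful course completion with the same predictors.
    logit = smf.logit(f"success ~ {als_terms} + {controls}", data=df).fit()
    margins = logit.get_margeff(at="overall")  # average marginal effects

    print(m1.summary(), m2.summary(), margins.summary())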

Limitations

The present study addresses the effectiveness of adaptive learning software on the course outcomes of final exam scores and successful course completion. Unfortunately, not all students in the sample both participated in the adaptive learning component and took both exams. Because these variables are integral to the research questions, cases with missing data were removed from portions of the analysis. These exclusions limit the generalizability of the sample but are appropriate given the conceptual models.

Another limitation exists in the use of proctored midterm scores as a proxy for preexisting competency. Although it is appropriate to control for past performance when looking at performance as an outcome, an ideal measure would capture skill at the onset of the course in the form of a pre-test. Because a pre-test measure was not available, proctored midterm scores were chosen as the best alternative.

Results

Descriptive Statistics

Table 2 displays the descriptive statistics for students who either accessed or did not access the adaptive learning software. The differences in course completion and successful course completion between the two groups are significant, with higher rates associated with accessing the ALS homework component. Among students who accessed the ALS component, 86.4% completed the course and 63.1% did so with a grade of “C” or above. Conversely, only 35% of students who did not access the ALS completed the course, and none did so successfully. All students who failed to access the ALS also failed to take the exams, so a comparison of exam performance across these groups is not available. Since the conceptual models exclude students who did not access the software, the limited sample may bias the results.



Table 3 contains descriptive statistics for students who took both major exams and for students who did not. The analysis revealed significant differences between these two groups in age, freshman status, joint enrollment status, course completion, and successful course completion. Jointly enrolled students made up 17.9% of the students who took both exams but only 2.2% of the students who did not (p < 0.001). Further, the average age of students who completed both exams was 21.4 years, whereas the average age of those who did not was 23.96 years (p < 0.001). Only 45.32% of students who failed to take both the midterm and final exams completed the course, and none of these did so successfully (p < 0.001). Of students who took both exams, 100% completed the course, and 85.7% did so successfully (p < 0.001).


Table 4 shows the descriptive statistics for the midterm and final exams. Three hundred eighteen students completed the midterm exam, 282 completed the final exam, and 280 completed both exams. Final exam scores were slightly higher (mean = 70.89, SD = 21.72) than midterm exam scores (mean = 66.21, SD = 23.17). This difference may suggest that the adaptive learning material is bridging knowledge gaps as intended, or it may reflect the fact that fewer final exams were completed in a proctored environment.


OLS Regressions

Table 5 details the results of the first two regression models using final exam scores as the dependent variable. The first model includes only the adaptive learning variables; the second adds the student and control variables. The first model accounts for 11.9% of the variance in final exam scores (adj. R-squared = 0.119). The number of completed modules is significant: on average, one additional completed ALS module is associated with a 1.014-point increase on the final exam (p < 0.001). In contrast, each additional assessment item completed by a student is associated with a decrease of 0.03 points on the final exam (p < 0.01). The second model adds the control variables and substantially increases the explanatory power of the model (adj. R-squared = 0.498). In this model, both completed modules and assessment questions remain significant, and additional associations emerge for course delivery, midterm exam scores, and proctoring decisions. Holding all else constant, one additional completed adaptive learning module is associated with an average increase of 1.075 points on the final exam. Likewise, a one-point increase on the midterm exam predicts an average increase of 0.229 points on the final exam, holding all else constant. Compared to students in short, 8-week sessions, students in full-session (16-week) courses score an average of 6 points lower on the final exam. Furthermore, students who took proctored final exams score 21.35 fewer points on average, holding all else constant.
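
In schematic form, the second (full) model corresponds to a specification like the one below; the symbols are shorthand introduced here, not the authors' notation.

    \mathit{FinalExam}_i = \beta_0 + \beta_1 \mathit{Modules}_i + \beta_2 \mathit{AssessItems}_i + \beta_3 \mathit{InstrItems}_i + \beta_4 \mathit{Midterm}_i + \boldsymbol{\gamma}'\mathbf{X}_i + \varepsilon_i

where X_i collects the course-delivery and demographic controls, and the reported estimates correspond to beta_1 of roughly 1.075 and beta_4 of roughly 0.229.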


Reference groups are White, non-freshman, short session, non-proctored final exam, and non-STEM major.

Logit Model

The final model examines the effects of adaptive learning on successful course completion. Significant variables include the number of completed modules, age, race/ethnicity, midterm exam score, and proctoring decisions. The marginal effects indicate that, net of all other variables, each additional completed module raises the probability of completing the course successfully by about 1.8 percentage points (p < 0.001). Compared to White students, Asian students are about 15.8% less likely to complete the course successfully (p < 0.05). Midterm exam scores are also significant, though the associated marginal effect is small and positive (p < 0.001). Finally, compared to students with non-proctored final exams, students taking proctored final exams are about 10.4% less likely to complete the course successfully (p < 0.05).
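
For context, marginal effects of this kind are conventionally obtained from a logit model as the derivative of the predicted probability with respect to each predictor, averaged over the sample; in shorthand notation (not the authors'):

    \frac{\partial \Pr(\mathit{Success}_i = 1)}{\partial x_{ik}} = \beta_k \, \Lambda(\mathbf{x}_i'\boldsymbol{\beta}) \left[ 1 - \Lambda(\mathbf{x}_i'\boldsymbol{\beta}) \right], \qquad \Lambda(z) = \frac{1}{1 + e^{-z}}

so the roughly 1.8 percentage-point figure per additional module is the average of this quantity across students.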


Discussion and Implications

If the adaptive learning software addresses gaps in student learning, we would expect to see predictable increases in exam scores that cover comparable material. Furthermore, if adaptive learning works as purported and content is appropriately aligned with the course curriculum, we would also expect to see an increase in students who complete the course with a passing grade. Unfortunately, not all students participate fully in a course. Particular to this study, some students failed to access the adaptive learning modules, and others failed to take the midterm and final exams. Although not the focus of the present study, the disconnect of students from integral course components is concerning and requires further investigation.

When students participated in the adaptive platform, the number of modules that a student completed was positively and significantly associated with final exam scores. Course delivery variables, such as whether the course spanned eight or sixteen weeks and whether the final exam was proctored, also showed relationships to final exam scores. Beyond the regression analysis, the descriptive statistics of the adaptive learning variables are informative. The maximum number of assessment items completed by a single student was 1,972. This high number suggests that the software may become a black hole in which some students expend considerable effort to find their way to the end of a module. By design, the instructional items are available for students to review if they find themselves struggling with a concept. If students select a high number of additional instructional items within the homework component, this may indicate gaps or misalignment between the platform and the primary course curriculum. On average, students chose around 75 instructional items within the adaptive learning platform.

Exam scores may be another indicator of misalignment. Although average midterm scores fell below passing and final exam scores hovered around 70, rates of course completion and successful course completion were much higher. While students earned an average final exam score of 70.89, 85.7% of students completed the course with a grade of “C” or higher. Near-failing scores on a summative exam imply that the adaptive learning component may be leaving some gaps unbridged.

Overall, the adaptive learning activities in the study show some relationship to outcomes, but more work is needed to fully justify outsourcing instruction to a for-profit vendor. Future analyses should consider a more robust measure of prior performance, such as a course pre-test. Other student characteristics, such as motivation, may add to the analyses by uncovering why particular students fail to access the adaptive component or fail to take the required exams.

Conclusion

Examining the effectiveness of outsourced activities is an essential consideration for institutions of higher education that aim to demonstrate fiscal responsibility to their stakeholders. For institutions limited by time, money, and other resources, outsourcing adaptive learning technology may be a promising way to address gaps in student knowledge and level the playing field within individual courses. The present study provides a foundation from which to consider future analyses of adaptive technology. Expanding on these initial findings, in which completing activities in an adaptive learning platform was associated with slightly higher exam scores and rates of course completion and success, future work in this area should account for prior academic performance and the alignment between the adaptive learning and course curricula. Future findings can then be weighed against the relative costs to determine the net benefit of outsourcing instruction to meet the needs of each student.

References

Adams, O. L., III, Guarino, A. J., Robichaux, & Edwards. (2004). A comparison of outsourcing in higher education, 1998-99 and 2003-04. Journal of Educational Research & Policy Studies, 4(2), 90-110.

Blumenstyk, G. (2016). As big-data companies come to teaching, a pioneer issues a warning. The Chronicle of Higher Education. Retrieved from https://www.chronicle.com/article/As-Big-Data-Companies-Come-to/235400?cid=cp21

Gregg, A., Wilson, B., & Parrish, P. (2018). Do no harm: A balanced approach to vendor relationships, learning analytics, and higher education. IDEA Paper #72.

Gupta, A., Herath, S. K., & Mikouiza, N. C. (2005). Outsourcing in higher education: An empirical examination. International Journal of Educational Management, 19(4/5), 396-412.

Howlin, C. (2014). Realizeit at the University of Central Florida. Realizeit.

Johnson, D. (2016). The potential transformation of higher education through computer-based adaptive learning systems. Global Education Journal, 2016(1), 1-17.

Means, B., Peters, V., & Zheng, Y. (2014). Lessons from five years of funding digital courseware: Postsecondary success portfolio review. Menlo Park, CA: SRI Education.

Oxman, S., & Wong, W. (2014). White paper: Adaptive learning systems. DVx Innovations, DeVry Education Group. Retrieved from https://kenanaonline.com/files/0100/100321/DVx_Adaptive_Learning_White_Paper.pdf

Prusty, B., & Russell, C. (2014). Engaging students in learning threshold concepts in engineering mechanics: Adaptive eLearning tutorials. Retrieved from https://www.researchgate.net/publication/228448413_Engaging_students_in_learning_threshold_concepts_in_engineering_mechanics_adaptive_eLearning_tutorials

Russell, A. (2010). Outsourcing instruction: Issues for public colleges and universities. American Association of State Colleges and Universities.

Schibik, T. J., & Harrington, C. F. (2004). The outsourcing of classroom instruction in higher education. Journal of Higher Education Policy and Management, 26(3), 393-400.

Vignare, K., Lammers Cole, E., Greenwood, J., Buchan, T., Tesene, M., DeGruyter, J., . . . Kruse, S. (2018). A guide for implementing adaptive courseware: From planning through scaling. Joint publication of Association of Public and Land-grant Universities and Every Learner Everywhere.

Wekullo, C. S. (2017). Outsourcing in higher education: The known and unknown. Journal of Higher Education Policy and Management, 39(4), 453-468.