Abstract

This quantitative study was designed to determine the impact of online proctoring software in graduate courses. The researchers compared the final grades of two groups of online graduate students taught by professors who administered online exams before and after the university implemented proctoring software. Essentially the only difference between the groups was the use or nonuse of proctoring software. The overall sample comprised 426 students in six different online graduate nursing courses at a medium-sized, public university in the United States. The findings showed that the implementation of online proctoring software had a statistically significant impact on students’ course grades when all the graduate courses were regressed together. The authors also regressed the data using independent human capital variables (i.e., male/female, full/part-time status, and cumulative GPA). Student cumulative GPA also proved statistically significant for the group of graduate nursing courses.

Keywords: Online proctoring, proctoring software, distance learning, graduate education, online exam, exam misconduct, online courses, academic dishonesty, academic misconduct

The Impact of Remote Online Proctoring versus No Proctoring: A Study of Graduate Courses

Online courses have increased dramatically in the last few years, especially since the COVID-19 pandemic forced universities to move to an online environment. “E-learning has become mainstream in the education[al] sector and has been massively adopted in higher education” (Al-Fraihat et al., 2020, p. 2). Thus, college instructors have been forced to adjust to the new reality of high demand for online courses. Over the last couple of decades, the testing environment has evolved from primarily in-person, traditional classrooms with on-site exam proctoring to a substantial number of online courses with online exams (Clesham, 2010). With the uptick in online courses, instructors are pressed to find ways to promote academic integrity in their assessments.

The risk of cheating on online exams is a significant concern for instructors and educational institutions (Alessio et al., 2017; Alessio et al., 2018; Bedford, Gregg, & Clinton, 2011; Cluskey et al., 2011; Hobbs, 2021; Hylton et al., 2016; Oeding, 2022; Oeding et al., 2024). Academic misconduct in higher education is nothing new. McCabe and Trevino (1993) studied over 6,000 students from 31 colleges and universities, and 64 percent of the students surveyed admitted to engaging in exam misconduct. Exam-related misconduct occurs when a student uses forbidden sources during an exam, when someone impersonates a student, and/or when a student provides or receives information from others (Faucher & Caves, 2009; Hughes & McCabe, 2006; Hylton et al., 2016). Remote, online proctoring is one way that instructors choose to combat cheating on online assessments (Nigam et al., 2021; Oeding, 2022). Online exam proctoring uses the Internet and technology to observe examinees remotely, rather than at a traditional in-person testing site (Foster, 2013).

Literature Review

Cheating in a “New” Era: Online Assessments

When completing assignments, modern students operate differently from students of prior generations, who did not have the same technology available to them (Lieneck & Esparza, 2018). The advent of the Internet and related technology opens the door to a broader scope of misconduct. Students commonly share assignments, notes, grades, and answers with their friends and classmates. Students use artificial intelligence (“AI”) to generate answers (Abd-Elaal et al., 2019), exchange assignments with family and friends, copy assignments and assessments from public or private websites (Awdry, 2020), and have people impersonate them for class and assignments (Hylton et al., 2016). In addition, test banks and answer guides are often published on the Internet for any student to see. Other technology-assisted misconduct includes the use of smart watches and cell phones during exams, as well as ultraviolet pens to read invisible ink on “scrap paper” (Kellaghan & Greaney, 2019).

Numerous studies show that students perceive a greater opportunity to cheat, and express a greater willingness to cheat, in an online environment as compared to a traditional, seated environment (King et al., 2009; Moten et al., 2013; Watson & Sottile, 2010). Oeding (2022) contended that students taking online exams continue to cheat even though they are being monitored with remote proctoring software. This researcher studied the exam proctoring videos of business students over five consecutive semesters and found that at least 26 out of 164 students committed academic dishonesty. The researcher designed a list of online exam proctoring rules for students, suggested that instructors watch exam proctoring videos, and offered advice for detecting online exam dishonesty.

Online Exam Proctoring as a Deterrence to Academic Dishonesty

Modern technology permits instructors to use a webcam and microphone to record the examinee’s movements and sounds (Alessio et al., 2017; Oeding, 2022; Weiner & Hurtz, 2017). The exam proctoring software uses artificial intelligence to permit instructors to record the student during the exam, block the examinee from using a browser, and verify the identity of the examinee (Nigam et al., 2021; Slusky, 2020). The recording may then be reviewed later by the instructor (Oeding, 2022).

Some studies validate online proctoring software by comparing online and offline proctoring environments. In a robust study, Weiner and Hurtz (2017) compared onsite, traditional proctoring to online remote proctoring in high-stakes, professional licensing exams (N = 14,623) administered at the same time but at different testing locations. Some examinees were proctored onsite in testing centers, and other examinees were proctored in computer kiosks using online, remote proctoring software. The researchers concluded that examinees who were proctored in traditional onsite testing centers scored comparably to online, remotely proctored examinees.

Lee (2020) compared online and offline proctored environments with graduate students. The study analyzed data from 1,762 graduate students who took nine different courses over an eight-year period. All exams, whether proctored offline or online, were taken at pre-approved testing sites with Internet access: network test sites, on-campus sites, or independent test sites. Lee found no statistically significant difference in the average exam score, suggesting that the exam proctoring environment does not cause a change in student performance.

A third, smaller study compared on-site and online proctoring environments and found a difference in scores. Wuthisatian (2020) compared the final exam scores of 65 graduate students in an online graduate-level economics course. Students had the choice to take the final exam using on-site proctoring services or a remote online proctoring service. The researcher found that students who used on-site proctoring scored 6-7% higher on average than students who took the final exam through the remote, online proctoring service. This study raises concerns as to why students would score lower in a remote, online environment.

Comparison of Performance of Proctored and Unproctored Exams

Several studies compared the performance of proctored and unproctored online assessments with the use and nonuse of proctoring software (Alessio et al., 2017; Alessio et al., 2018; Carstairs & Myors, 2009; Karim et al., 2014; Oeding et al., 2024; Reisenwitz, 2020). Oeding et al. (2024) tag the results of these studies as a “mixed bag”: some studies find that online courses using proctoring software tend to have lower exam scores (Alessio et al., 2017; Alessio et al., 2018; Carstairs & Myors, 2009; Reisenwitz, 2020), while other studies find no significant difference in scores (Beck, 2014; Ladyshewsky, 2015). The following studies are separated into undergraduate and graduate levels for further differentiation.

Undergraduate Studies Comparing Online Exams With and Without Proctoring

Numerous researchers have studied the impact of online proctoring software on student performance in undergraduate courses. Several undergraduate studies found lower online exam scores on proctored exams as compared to unproctored exams (Alessio et al., 2017; Alessio et al., 2018; Carstairs & Myors, 2009; Daffin & Jones, 2018; Goedl & Malla, 2020; Reisenwitz, 2020). For example, comparing 1,700 students who took online psychology classes at Washington State University, Daffin and Jones (2018) found that non-proctored students took twice as long to complete exams and scored 10-20% higher than proctored students.

Reisenwitz (2020) compared proctored and unproctored exam scores from two online sections of the same introductory marketing course. The first online section, with 40 students, completed unproctored exams, and the second online section, with 33 students, took proctored exams. Both sections of the course had the same teacher, course content, and exams. Reisenwitz (2020) found the average scores on the proctored exams to be significantly lower than those on the unproctored exams. Goedl and Malla (2020) found similar results (i.e., lower grades on proctored online exams) when comparing online sections of introductory accounting courses.

Alessio et al. (2017, 2018) performed two undergraduate studies comparing proctored and unproctored online examinations and quizzes in Medical Terminology at a public university in Ohio. Similar to numerous other undergraduate studies, Alessio et al. (2017, 2018) found that remotely proctored students scored significantly lower and spent less time taking the exam or quiz than unproctored students. Based on these results, Alessio et al. (2017) stated, “Proctoring with video monitoring . . . is important to assure academic integrity” in online exams. Nevertheless, while these results may arouse suspicion of academic dishonesty, they do not provide irrefutable evidence of cheating.

In a study performed at a private university in Jamaica, Hylton et al. (2016) compared web-based proctored and unproctored online exams of undergraduate students who had the same instructor, who used the same pool of exam questions. Although the proctored students scored slightly lower than the unproctored students, the researchers did not find a statistically significant difference between the scores of students with and without the web-based proctor. However, the proctored students took significantly longer to complete the exam, similar to the studies by Alessio et al. (2017, 2018). These researchers believed the lack of monitoring during online exams increased the opportunity to engage in academic dishonesty.

Undergraduate Comparison Studies Using Human Capital Variables

Harmon and Lambrinos (2008) conducted the first study known to the authors that examined academic misconduct with online exams by regressing human capital variables (i.e., GPA, age, major, college grade level) to predict test scores. The researchers compared the examination scores from two online Principles of Economics courses. The final exam was proctored in one course (N = 38) but not in the other (N = 24); in both courses the first three exams were unproctored. The authors concluded that cheating occurred on the unproctored final exam because the human capital variables could not explain the variation in its exam scores.

Using a methodology similar to Harmon and Lambrinos (2008), Beck (2014) found only a slight difference between proctored and unproctored students. Beck compared midterm and final exam scores of three different sections (i.e., online, hybrid, and face-to-face) of an undergraduate course taught by the same instructor. The exams were unmonitored in the online section (N = 19) and monitored in the hybrid (N = 21) and face-to-face (N = 60) sections. Beck used human capital variables (i.e., GPA, class standing, and age) to regress the data. The title of Beck’s paper aptly summarizes his findings regarding academic dishonesty with online exams: “Much ado about nothing.”

Using Beck’s (2014) statistical model, Dendir and Maxwell (2020) studied 648 students in two undergraduate classes (i.e., economics and geography). They found a significant decrease in the scores of proctored, high-stakes exams in these courses. Using students’ GPAs, the researchers found a stronger relationship between student ability and exam performance when proctoring was in place. They regressed exam scores with human capital measures (i.e., ability, male/female, age, class rank), finding that “academic dishonesty is a serious issue in online courses” (p. 8). Interestingly, the researchers also found that older students tend to perform better in an unproctored environment.

Using a similar methodology involving human capital variables, Oeding et al. (2024) compared the proctored and unproctored exams of 252 undergraduate students in six separate courses across different disciplines, with a variety of results. When all the courses were regressed together, proctoring did not result in significantly lower course grades. However, when the researchers regressed the data of individual courses with independent human capital variables, including grade point average, they made the following three findings: (1) a business law course had significantly lower course grades with proctoring software; (2) females using proctoring software had lower grades than males in a 100-level math course; and (3) full-time students using proctoring software had lower course grades than part-time students, suggesting, correctly or not, that full-time students were more likely to engage in academic misconduct.

Online Proctoring versus No Proctoring at the Graduate Level

The literature is sparse regarding the impact of online proctoring software versus no proctoring software on student performance in online graduate courses. This study intends to fill some of the gaps in that literature. One of the early studies comparing proctored versus unproctored online exams was conducted by Prince et al. (2009). They compared the average scores of 76 business students who took a graduate-level, online management course with the same instructor, the same textbook, similar Blackboard curriculum, and similar exams drawn from a pool of test questions. The researchers compared students who took unproctored exams with students whose exams were proctored by a live proctor or proctoring software. They found that students who took proctored exams scored lower than students who took non-proctored exams.

However, Ladyshewsky (2015) came to the opposite conclusion when studying 250 graduate students in a management and leadership course. He found no difference between the mean scores for in-person, proctored exams and unproctored online exams. Thus, Ladyshewsky concluded, “Concerns about increased cheating in unsupervised online test are not supported” (p. 1).

Another noteworthy graduate study was performed by Paredes et al. (2021), who analyzed student perceptions of how remotely proctored exams impacted online graduate students and their academic integrity. The researchers surveyed 100 students enrolled in a Master of Education program at a private university in Mexico. The students felt that remotely proctored exams served to deter cheating rather than to improve the teaching and learning process. The students reported that the use of remote proctoring software minimized the possibility of dishonesty because they felt they were being watched. Students also raised concerns about the lack of privacy and the anxiety associated with remotely proctored exams, issues that could also negatively impact student grades.

Method

Before commencing the study, the university provided faculty members with the choice to incorporate Proctorio, an online exam proctoring software, into their courses. This software operates by recording the examinee's screen and utilizes a webcam to capture the examinee's sounds and actions during the exam. A researcher involved in this study observed a decline in course grades after the implementation of the online exam proctoring software. Therefore, the authors developed the following research question:

Original Research Question: Does online, remote proctoring software impact the overall letter grades for online graduate nursing courses?

The researchers obtained permission to complete this quantitative study from the Institutional Review Board (IRB) of the university, whose administrators provided the data for the study. The relevant university is a medium-sized, public university in the United States. The authors first surveyed undergraduate and graduate professors who chose to use exam proctoring software in their courses during the first academic year in which the university started offering exam proctoring software. After determining which professors did not change their courses in any substantial way other than adding Proctorio to exams and/or quizzes, the authors included in the study only courses taught by professors who had offered online courses both before and after the implementation of proctoring software. The researchers also obtained permission from the professors to use their course data.

Initially, this study began as a regression analysis of graduate and undergraduate online courses, but the researchers decided to separate the undergraduate data from the graduate data because the combined group did not show statistical significance. All the relevant graduate courses in this study were nursing courses. The overall sample included 426 graduate students in six different online nursing courses (i.e., Clinical Pharmacology for Advanced Practice Nurses, Primary Care Nursing of Families, Psychiatric Mental Healthcare of Families Across the Lifespan I, Psychiatric Mental Healthcare of Families Across the Lifespan II, Primary Care of Elders and Adults I, and Primary Care of Elders and Adults II) that were administered during at least one of the four semesters before the university’s proctoring software implementation and at least one of the first two semesters after implementation. Thus, the data spanned six semesters.

Once the researchers determined the pertinent courses to study based on the survey results, personnel from the information technology and data analytics departments aided the researchers in gathering the de-identified demographic and course assessment data used in the project. The researchers conducted a comparative analysis of the final grades achieved by two groups of online students enrolled in the same course, taught by the same professor, and utilizing consistent content delivery (i.e., lectures, exams, quizzes, written assignments, etc.), similar to the methodologies deployed by Alessio et al. (2017), Alessio et al. (2018), Hylton et al. (2016), and Reisenwitz (2020). The primary distinction between the two groups was the use or nonuse of proctoring software. The “pre-Proctorio” group took the course without proctoring software, whereas the “post-Proctorio” group was subject to online exam proctoring software during assessments.

The “post-Proctorio” participants were informed about the requirement to use proctoring software for course assessments. To enable the software, students were required to download it before taking the initial assessment in the course. The course instructors activated the exam proctoring feature for the “post-Proctorio” group, integrating it as an automatic function of the online exam. Proctorio provides various functionalities that instructors can leverage to monitor students during the examination. Once students began their exams, Proctorio initiated recording and saved the recording for later review by the faculty members.

In this quantitative study, the authors used regression analysis to further the research regarding course performance with proctored and unproctored assessments in online graduate-level courses. The course data was analyzed first for the entire group of graduate courses and subsequently for each individual course.

Like the studies performed by Harmon and Lambrinos (2008), Beck (2014), Dendir and Maxwell (2020), and Oeding et al. (2024), the authors regressed human capital variables as well as assessment data. The assessment data provided by the university included individual exam scores, quiz scores, and course grades. The non-identifiable demographic data included male/female, age (i.e., 24 and under or 25 and over), race/ethnicity, first-generation college student, in-state/out-of-state, living on/off-campus, cumulative GPA, high school GPA, college classification, student classification (i.e., freshman, sophomore, junior, senior), major, full/part-time status, and SAT/ACT scores. The authors did not use all the requested demographic and assessment data in the project, but it remains available for further study. The variables the authors used were male/female, full/part-time status, and cumulative GPA. This is the only graduate study known to the authors that used human capital variables to compare the use and nonuse of proctoring at the graduate level.
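To make the analysis concrete, the sketch below fits such a model with ordinary least squares. It is a minimal illustration only: the paper does not name its statistical software, and the file name and column names are hypothetical stand-ins for the de-identified data.

    # Minimal OLS sketch of the regression described above (illustrative only;
    # the file and column names are hypothetical, not the authors' actual data).
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("deidentified_course_data.csv")  # hypothetical file

    # Binary indicators: post_proctorio = 1 for post-implementation semesters,
    # female = 1 for female students, full_time = 1 for full-time status.
    X = sm.add_constant(df[["post_proctorio", "female", "full_time", "cumulative_gpa"]])
    y = df["course_grade"]  # dependent variable: final course grade (numeric)

    model = sm.OLS(y, X).fit()
    print(model.summary())  # reports R-squared, the model F-test, and per-variable p-values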

Findings and Discussion

The study’s dependent variable is the course grade, with the expectation that the implementation of online proctoring software would decrease it. The main independent variable is the timeframe, pre-Proctorio or post-Proctorio (Pre/Post), which was measured against the course grade. The authors used regression analysis to determine whether the variables were statistically significant. The findings showed that the implementation of online proctoring software did have a statistically significant impact on students’ course grades when all the graduate courses were regressed together. However, the R-squared was extremely low, indicating that the model needed more variables.
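In regression form, this baseline model can be sketched as follows (the paper gives no explicit equation, so this simply restates the design in standard notation):

    Grade_i = β₀ + β₁·PrePost_i + ε_i

where PrePost_i equals 1 if student i took the course after the Proctorio implementation and 0 otherwise. A significant β₁ combined with a very low R-squared indicates that the proctoring indicator is associated with course grades but explains little of their variance by itself.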

The authors added other independent human capital variables to the regression along with Pre/Post and analyzed the model after each addition, similar to studies by Harmon and Lambrinos (2008), Beck (2014), Dendir and Maxwell (2020), and Oeding et al. (2024). The second independent variable indicates whether a student is female or male. This variable did not prove statistically significant in the regression of the all-graduate-courses group, but the Pre/Post variable retained its significance. Again, the R-squared value was low and did not support the model. The third independent variable indicates full-time or part-time status, which in the graduate courses is a low-count variable, with only 8.2% of the subjects having full-time status. The addition of this variable did not prove statistically significant when regressed together with the preceding variables in the all-graduate-courses group, but again, the Pre/Post variable retained its significance. In some of the graduate courses this variable had to be eliminated because there were no full-time students, which caused an error in the regression.

The fourth independent variable, student cumulative GPA, proved to be statistically significant in the all-graduate-courses group. The R-squared also rose to 13.25%, much higher than in any of the models without GPA. This variable also overpowered the Pre/Post variable, which had an insignificant p-value when all the variables were regressed together. Since student cumulative GPA was found to be an important measurement, the regression model was re-specified to make this variable the primary independent variable. When the model was re-analyzed, student cumulative GPA was statistically significant for the all-graduate-courses group and remained so with the addition of each predefined variable. This was expected and confirmed that students with higher GPAs also earned higher course grades in their graduate courses, which helped validate the data. Please see Table 1.
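The full specification, corresponding to the four-variable column of Table 1, can be sketched as:

    Grade_i = β₀ + β₁·PrePost_i + β₂·Female_i + β₃·FullTime_i + β₄·GPA_i + ε_i

In this model, the cumulative GPA term (β₄) carries most of the explanatory power, and the Pre/Post coefficient loses its significance (p = 0.1396 in Table 1) once GPA is included.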

Table 1: Regression Results of All-Graduate-Courses Group
Variable | 1 var | 2 var | 3 var | 4 var
R Square | 0.011 | 0.011 | 0.0145 | 0.1325
Significance F | *0.0307 | 0.0965 | 0.103 | **0.000
X1 = Pre/Post | *0.0307 | *0.0312 | *0.0271 | 0.1396
X2 = Fem/Mal | xx | 0.9028 | 0.8052 | 0.9091
X3 = FT/PT | xx | xx | 0.2195 | 0.8132
X4 = Overall GPA | xx | xx | xx | **0.000
Statistically significant: * < .05 and ** < .001

Two graduate courses, Clinical Pharmacology for Advanced Practice Nurses and Primary Care of Elders and Adults II, appeared to drive the results for the all-graduate-courses group, even though they accounted for only 39 percent of the subjects. Similar to the methodologies employed by Alessio et al. (2017), Alessio et al. (2018), Hylton et al. (2016), and Reisenwitz (2020), when the individual graduate courses were regressed, the Pre/Post variable was statistically significant in two of the nursing courses, supporting the conclusion that grades were affected by the addition of online proctoring software. Clinical Pharmacology for Advanced Practice Nurses had 147 students and was statistically significant with the addition of each variable, with an R-squared of 14% until the student cumulative GPA variable (also statistically significant) was added, at which point the R-squared jumped to 35%. Primary Care of Elders and Adults II had only 20 students and was statistically significant with the addition of each variable, with the highest R-squared range of 34-36%; when the student cumulative GPA variable was added, the R-squared jumped to 46.8%. Because this course had no full-time students, the full-time variable was removed and the regression was run without it. The larger R-squared values help the model explain the results well.
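The per-course analysis can be illustrated by continuing the earlier sketch: the same hypothetical model is fit course by course, dropping the full-time indicator when a course has no full-time students (a constant column would break the regression, as noted above).

    # Continuation of the earlier sketch; df and its column names are hypothetical.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("deidentified_course_data.csv")

    for course, grp in df.groupby("course_name"):
        cols = ["post_proctorio", "female", "full_time", "cumulative_gpa"]
        if grp["full_time"].nunique() < 2:  # e.g., no full-time students enrolled
            cols.remove("full_time")        # avoid a constant regressor
        fit = sm.OLS(grp["course_grade"], sm.add_constant(grp[cols])).fit()
        print(course, round(fit.rsquared, 4), fit.pvalues.round(4).to_dict())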

In two other graduate courses, the regression proved statistically significant, but only with the addition of the cumulative GPA variable: Primary Care Nursing of Families had 182 students with an R-squared of 10%, while Psychiatric Mental Healthcare of Families Across the Lifespan I had only 31 students but was statistically significant with an R-squared of 38.3%. These latter findings were similar to Oeding et al.’s (2024) research involving undergraduate courses. Please see Table 2.

Table 2: Regression Results of Individual Graduate Courses
Course | Observations | R Square | Significance F | X1 = Pre/Post | X2 = Fem/Mal | X3 = FT/PT | X4 = Overall GPA
Clinical Pharmacology for Advanced Practice Nurses | 147 | 0.3499 | **0.000 | **0.000 | 0.859 | 0.4371 | **0.000
Primary Care Nursing of Families | 182 | 0.1021 | **0.0007 | 0.441 | 0.8562 | 0.1453 | **0.000
Psychiatric Mental Healthcare of Families Across the Lifespan I | 31 | 0.3831 | *0.0112 | 0.9886 | 0.5849 | 0.8211 | **0.000
Primary Care of Elders and Adults II | 20 | 0.4679 | *0.0156 | *0.0059 | 0.641 | xxx | 0.0739
Statistically significant: * < .05 and ** < .001

Conclusions, Implications, and Further Research

The findings of this quantitative study showed that the implementation of online proctoring software had a statistically significant impact on students’ course grades when all the graduate courses were regressed together, similar to Alessio et al. (2017), Alessio et al. (2018), Daffin and Jones (2018), Goedl and Malla (2020), Hylton et al. (2016), and Reisenwitz (2020). All the courses in the study happened to be nursing courses. When the investigators regressed the individual courses, the Pre/Post variable was statistically significant in two of the nursing courses, supporting the conclusion that grades were affected by the addition of online proctoring software. These two courses carried the results for all the graduate courses, even though they accounted for only 39 percent of the subjects. Two other graduate courses produced statistically significant regressions, but only with the addition of the cumulative GPA variable.

This is the only graduate study known to the authors that used human capital variables to compare the use and nonuse of online exam proctoring at the graduate level. The variables used were male/female, full/part-time status, and cumulative GPA. Student cumulative GPA proved to be statistically significant for the all-graduate-courses group of students.

This quantitative study is intended to help instructors and educational institutions make informed decisions about the usage of online proctoring software in graduate courses. Based on prior studies finding that proctoring software deters academic misconduct (Alessio et al., 2017; Alessio et al., 2018; Beck, 2014; Dendir & Maxwell, 2020; Harmon & Lambrinos, 2008; Hylton et al., 2016; Oeding et al., 2024), the authors agree that online, remote proctoring appears to deter academic misconduct in the online graduate nursing courses studied.

Further avenues for related research could involve investigating what characteristics make individual courses, such as the two that carried the results in this study, more prone to significant course grade impacts from proctoring software. Future researchers could also investigate why some students in certain universities and/or countries are more likely to take advantage of an unproctored environment. Researchers could also regress the data using human capital variables other than those used in this study.

Acknowledgements

We would like to thank Dr. Andrew Dill of the University of Southern Indiana and Dr. Jamie Seitz of Indiana University for helping us conceptualize this research project. We would also like to thank the instructors who permitted us to use their course data for this project.

References

Abd-Elaal, E. S., Gamage, S. H., & Mills, J. E. (2019, January). Artificial intelligence is a tool for cheating academic integrity. In 30th Annual Conference for the Australasian Association for Engineering Education (AAEE 2019): Educators becoming agents of change: Innovate, integrate, motivate (pp. 397-403).

Al-Fraihat, D., Joy, M., Masa'deh, R., & Sinclair, J. (2020). Evaluating e-learning systems success: An empirical study. Computers in Human Behavior, 102, 67–86. https://doi.org/10.1016/j.chb.2019.08.004

Alessio, H. M., Malay, N. J., Maurer, K., Bailer, A. J., & Rubin, B. (2017). Examining the effect of proctoring on online test scores. Online Learning, 21(1). https://doi.org/10.24059/olj.v21i1.885

Alessio, H. M., Malay, N. J., Maurer, K., Bailer, A. J., & Rubin, B. (2018). Interaction of proctoring and student major on online test performance. The International Review of Research in Open and Distributed Learning, 19(5). https://doi.org/10.19173/irrodl.v19i5.3698

Awdry, R. (2020). Assignment outsourcing: Moving beyond contract cheating. Assessment & Evaluation in Higher Education, 46, 220–235. https://doi.org/10.1080/026029...

Beck, V. (2014). Testing a model to predict online cheating—Much ado about nothing. Active Learning in Higher Education, 15(1), 65-75. https://doi.org/10.1177/1469787413514646

Bedford, D. W., Gregg, J. R., & Clinton, M. S. (2011). Preventing online cheating with technology: A pilot study of remote proctor and an update of its use. Journal of Higher Education Theory and Practice, 11(2).

Carstairs, J., & Myors, B. (2009). Internet testing: A natural experiment reveals test score inflation on a high-stakes, unproctored cognitive test. Computers in Human Behavior, 25(3), 738–742. https://doi.org/10.1016/j.chb.2009.01.011

Clesham, R. (2010). Changing assessment practices resulting from the shift towards on-screen assessment in schools [Doctoral dissertation, University of Hertfordshire]. University of Hertfordshire Research Archive.

Cluskey, G. R., Ehlen, C. R., & Raiborn, M. H. (2011). Thwarting online exam cheating without proctor supervision. Journal of Academic and Business Ethics, 4(1), 1–7.

Daffin, L. W. J., & Jones, A. A. (2018). Comparing student performance on proctored and non-proctored exams in online psychology courses. Online Learning Journal, 22(1). https://doi.org/10.24059/olj.v22i1.1079

Dendir, S., & Maxwell, R. S. (2020). Cheating in online courses: Evidence from online proctoring. Computers in Human Behavior Reports, 2, 100033. https://doi.org/10.1016/j.chbr.2020.100033

Faucher, D., & Caves, S. (2009). Academic dishonesty: Innovative cheating techniques and the detection and prevention of them. Teaching and Learning in Nursing, 4(2), 37-41. https://doi.org/10.1016/j.teln.2008.09.003

Foster, D. (2013). Security issues in technology-based testing. In J. A. Wollack & J. J. Fremer (Eds.), Handbook of test security (1st ed., Ch. 3). Routledge. https://doi.org/10.4324/9780203664803

Goedl, P. A., & Malla, G. B. (2020). A study of grade equivalency between proctored and unproctored exams in distance education. American Journal of Distance Education, 34, 280–289. https://doi.org/10.1080/08923647.2020.1796376

Harmon, O. R., & Lambrinos, J. (2008). Are online exams an invitation to cheat? Journal of Economic Education, 39(2), 116–125. https://doi.org/10.3200/JECE.39.2.116-125

Hobbs, T. (2021, May). Cheating at school is easier than ever – and it’s rampant. Wall Street Journal. https://www.wsj.com/articles/cheating-at-school-is-easier-than-everand-its-rampant-11620828004

Hughes, J., & McCabe, D. (2006). Understanding academic misconduct. Canadian Journal of Higher Education, 36(1). https://doi.org/10.47678/cjhe.v36i1.183525

Hylton, K., Levy, Y., & Dringus, L. P. (2016). Utilizing webcam-based proctoring to deter misconduct in online exams. Computers & Education, 92-93, 53–63. https://doi.org/10.1016/j.compedu.2015.10.002

Karim, M. N., Kaminsky, S. E., & Behrend, T. S. (2014). Cheating, reactions, and performance in remotely proctored testing: An exploratory experimental study. Journal of Business and Psychology, 29(4), 555–572. https://doi.org/10.1007/s10869-014-9343-z

Kellaghan, T. & Greaney, V. (2019). Public Examinations Examined. Washington, DC: World Bank. https://doi.org/10.1596/978-1-4648-1418-1

King, C. G., Guyette, R. W., & Piotrowski, C. (2009). Online exams and cheating: An empirical analysis of business students’ views. The Journal of Educators Online, 6(1). https://doi.org/10.9743/JEO.2009.1.5

Ladyshewsky, R. K. (2015). Post-graduate student performance in ‘supervised in-class’ vs. ‘unsupervised online’ multiple choice tests: Implications for cheating and test security. Assessment & Evaluation in Higher Education, 40(7), 883–897. https://doi.org/10.1080/02602938.2014.956683

Lee, J. W. (2020). Impact of proctoring environments on student performance: online vs. offline proctored exams. Journal of Asian Finance Economics and Business, 7(8), 653-660. https://doi.org/10.13106/jafeb.2020.vol7.no8.653

Lieneck, C., & Esparza, S. (2018). Collaboration or collusion? The new era of commercial online resources for students in the digital age: An opinion piece. Internet Journal of Allied Health Sciences and Practice, 16(3), 7.

McCabe, D. L., & Trevino, L. K. (1993). Academic dishonesty: Honor codes and other contextual influences. Journal of Higher Education, 64(5). https://doi.org/10.2307/2959991

Moten, J., Fitterer, A., Brazier, E., Leonard, J., & Brown, A. (2013). Examining online college cyber cheating methods and prevention measures. Electronic Journal of e-Learning. 11. 139-146.

Nigam, A., Pasricha, R., Singh, T., & Churi, P. (2021). A systematic review on AI-based proctoring systems: Past, present and future. Education and Information Technologies, 26(5), 6421-6445. https://doi.org/10.1007/s10639...

Oeding, J. (2022). The prevention and detection of academic dishonesty: Watch the online exam proctoring videos. Quarterly Review of Distance Education, 23(1), 79-89.

Oeding, J., Gunn, T., & Seitz, J. (2024). The mixed-bag impact of online proctoring software in undergraduate courses. Open Praxis, 16(1), 82–93. https://doi.org/10.55982/openpraxis.16.1.585

Paredes, S. G., Peña, F. J. J., & Alcazar, J. M. F. (2021). Remote proctored exams: Integrity assurance in online education? Distance Education, 42(2), 200-218. https://doi.org/10.1080/01587919.2021.1910495

Prince, D. J., Fulton, R. A., & Garsombke, T. W. (2009). Comparisons of proctored versus non- proctored testing strategies in graduate distance education curriculum. Journal of College Teaching & Learning, 6(7). https://doi.org/10.19030/tlc.v6i7.1125

Reisenwitz, T. H. (2020). Examining the necessity of proctoring online exams. Journal of Higher Education Theory and Practice, 20(1), 118–124. https://doi.org/10.33423/jhetp.v20i1.2782

Slusky, L. (2020). Cybersecurity of online proctoring systems. Journal of International Technology and Information Management, 29(1), 56-83. https://doi.org/10.58729/1941-6679.1445

Watson, G., & Sottile, J. (2010). Cheating in the digital age: Do students cheat more in online courses? Online Journal of Distance Learning Administration, 13(1), 1–12.

Weiner, J. A., & Hurtz, G. M. (2017). A comparative study of online remote proctored versus onsite proctored high-stakes exams. Journal of Applied Testing Technology, 18(1).

Wuthisatian, R. (2020). Student exam performance in different proctored environments: Evidence from an online economics course. International Review of Economics Education, 35, 100196. https://doi.org/10.1016/j.iree.2020.100196