Discouraging Student Cheating Online Without Surveillance

COVID-19 has transformed the way we work — and the way we learn. In response to the pandemic, many higher education institutions have sought to transition to remote learning. As a result, companies offering technological solutions to problems posed by the pandemic have experienced a windfall; notably, virtual proctoring services have grown in popularity. The CEO of Proctorio, one such service, told the New York Times that the number of tests proctored through the service had increased by 900% between 2019 and 2020. Meanwhile, other services in the industry have doubled their staff and seen subscriptions increase by as much as 60%.

Universities are using virtual proctoring in place of traditional approaches to prevent cheating. Human proctors require training and can be expensive to employ. Besides that, human proctoring is often flawed: common issues include bias, inattention, a lack of motivation, and difficulty invigilating a large room. Online examinations only exacerbate these flaws. According to a recent survey by Wiley Education Services, 93% of instructors believe that students are more likely to cheat in an online environment. Instructors and administrators worry that allowing cheating to go unchecked rewards dishonesty; worse still, they fear, this might diminish the value of the degrees bestowed, and produce graduates without the requisite knowledge to practice successfully (or, in some cases, safely) in their fields.

Virtual proctoring is marketed as a solution that is cheaper, requires minimal work on the part of faculty or administrators, and, in the words of Proctorio, “combin[es] machine learning and advanced facial detection technologies to remove human error and bias.” Students take tests with their cameras and microphones on; the software monitors keystrokes, movement, noises, and students’ line of sight. The software’s algorithm then flags “suspicious” behavior for instructors, who can later review the footage to determine whether cheating has occurred.

Online proctoring is no silver bullet, however. In a recent New York Times article, student interviewees described how invasive and unjust this level of surveillance can feel, and how much can go wrong. For one, such software programs struggle to accommodate students with disabilities. For another, facial recognition algorithms often fail to accurately identify non-white faces, as has repeatedly been demonstrated. Beyond that, online proctoring services assume that students possess a functional camera and microphone, hardware capable of running the software, and a quiet, well-lit work environment. They also assume a stable Internet connection, which can be hard to come by for students living in impoverished areas with poor broadband infrastructure. These failures are the product of expectations about what constitutes a “normal” student and “normal” behavior, expectations that are baked into the code and ultimately result in discrimination. All these shortcomings come on top of more general concerns about student privacy and the security of the data produced.

There are other potential approaches to minimizing student cheating that are not as invasive or potentially biased. Researchers Joanna Golden and Mark Kohlbeck recently examined the literature on cheating in online courses and studied how instructors might use low-technology strategies of their own to combat common forms of cheating. In their experiment, they administered tests, both online and in person, to multiple sections of accounting students across two public universities in the American South. Their main hypothesis was that paraphrasing test bank questions would thwart attempts to find answers on the Internet, and thereby deter cheating. As part of their analysis, they compared the performance of students enrolled in online sections when given paraphrased questions to the performance of those given questions pulled directly from test banks provided by textbook publishers. According to their findings, students taking the tests online performed worse overall on the paraphrased questions than on the verbatim test bank ones. For students taking in-person tests with a human proctor, by contrast, no such performance gap was observed, and their scores were comparable to those of online students who received paraphrased questions. The observed gap held when accounting for factors such as GPA, age, full-time status, and gender. Most critically, this reduction in scores remained even when paraphrased questions were used in conjunction with virtual proctoring software or an honor code, suggesting that students had been circumventing the virtual proctor.
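To make that comparison concrete, here is a minimal sketch, in Python with pandas and statsmodels, of the general kind of analysis described above: regressing test scores on question format, delivery mode, and their interaction, while controlling for GPA, age, full-time status, and gender. The file name, column names, and model specification are hypothetical illustrations of the approach, not Golden and Kohlbeck's actual data or model.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per student, with the exam score, whether the
# questions were paraphrased (1) or verbatim test bank items (0), whether the
# section was online (1) or in person (0), and the controls named above.
df = pd.read_csv("exam_scores.csv")

# OLS with an interaction term: the coefficient on paraphrased:online estimates
# how much of the verbatim-question advantage is specific to online sections.
model = smf.ols(
    "score ~ paraphrased * online + gpa + age + full_time + female",
    data=df,
).fit()
print(model.summary())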

Some instructors are using the COVID-19 pandemic as an opportunity to reexamine the structure of higher education, going so far as to suggest that universities reconsider grading entirely. Doing away with grades or worrying less about cheating is easier in some disciplines and institutional contexts than in others, however, and the need for innovative approaches to prevent students from acting unethically will remain. In the words of David Rettinger of the University of Mary Washington, a psychologist who has studied academic integrity, the solution is simple: “Teach better.” This is easier said than done, especially when alternative approaches to deterring cheating require more effort from already overburdened instructors. But this research suggests that the benefits of virtual proctoring might not outweigh the very real costs, or at least not to the degree that university administrators might expect.


Golden, Joanna, and Mark Kohlbeck. 2020. “Addressing Cheating When Using Test Bank Questions in Online Classes.” Journal of Accounting Education 52: 100671. https://doi.org/10.1016/j.jaccedu.2020.100671.
