Surveillance in Proctored Exams: How Much is Too Much?

Reading Time: 4 minutes

The transition to fully online education has highlighted both the usefulness of current assessment methodologies and the risk of over-surveilling students.

Image: iStock/Michail_Petrov-96.

Online surveillance protocols start with the idea that students are not trustworthy. At the same time, the evaluation platforms retain the rights to their confidential information. Are they keeping it confidential?

The current lockdown and the resulting boom in online education have driven structural changes across education, including curriculum design, teacher-student communication, adaptation to fully online schedules, virtual classroom safety, and evaluation resources.

Faced with the need to administer tests remotely during this health emergency, teachers and academic staff have had to resort to standardized online monitoring and testing systems. This resource has undoubtedly made it easier to continue evaluating and grading students. However, the new modality brings particular problems that affect students’ academic futures daily.

The problem of data privacy

Educational authorities describe online exams as a necessary evil. Faced with empty classrooms and test centers, educators have had to turn to services such as Respondus or ProctorU to maintain an evaluation methodology. However, these resources are not harmless. Parents, students, and educational staff have expressed serious concerns about the measures these services use to ensure that students take their exams honestly. These steps can be interpreted as an excessive invasion of privacy that jeopardizes students’ personal information and data.

Most of these platforms do not allow students to access the exams until they have activated their webcams, taken a photo of their faces, presented a valid government ID or official identification from the university they attend, and panned their camera 360 degrees around the room where they will take the exam (to verify there is nothing present that would facilitate academic dishonesty).

The level of scrutiny these evaluation services apply would be seen as disruptive if it happened inside a classroom. If a teacher made students stand up, searched under their chairs, and asked them to open their backpacks and show their belongings to ensure they carried no cheating aids, we would undoubtedly question the teacher’s competency. However, evaluation software remains the first choice of schools in the United States. “In the last 30 days, we have performed 2.5 million supervised tests. In the same period the previous year, we did 235,000,” declared Mike Olsen, CEO of Proctorio, one of the fastest-growing online exam firms today.

Another unforeseen consequence of supervised tests is data theft. Hackers can exploit these platforms’ systems to obtain confidential student information, such as personal identification data or details of students’ rooms. The exam monitoring companies retain the rights to much of this data and can share it, creating valid and grave concerns among both teachers and parents.

Assessment data vs. surveillance data

The need for such services (and the methodologies they apply) starts with a basic principle: Students cannot be trusted; they need to be monitored to ensure that they do not cheat on the tests.

It is not disputed that measures to avoid academic dishonesty are essential to maintain a level of ethics and continuity of quality control for exams to function as learning validation tools. However, it is crucial to start a conversation about which aspects of student behavior and data we need to review to ensure there won’t be any academic dishonesty during a proctored exam.


Systems like ProctorU request access to students’ webcams, microphones, and browser sessions. They monitor facial features with biometric controls, recording and counting how many times the students blink, and they register how long students spend not looking directly at the monitor. If students look away past a certain limit, usually a matter of seconds, they are warned and, in some cases, penalized.

Monitoring this deep and invasive creates system errors and misunderstandings that can put students’ academic futures at risk just because they rested their eyes for more than four seconds or repeated a question aloud to understand it. This happened to a student who received a failing mark on an exam because she was filmed trying to re-read a question she had not understood. The student was on a scholarship, and the class teacher sent an academic infraction directly to the scholarship committee before the investigation the student had requested was completed. Had it not been for the rapid action of both the student and the dean, whose help she sought, she would certainly have lost the scholarship over a system misinterpretation that read a completely innocuous act as academic dishonesty.

In these cases, it is vital to ask which measures actually make a test more cheat-proof and which actions unnecessarily harm students. Zoe Fisher, an instructional designer at Pierce College in Lakewood, drew a line between assessment data and surveillance data long before the pandemic obliged everyone to rely entirely on online assessment services. According to Fisher, the assessment data are the exam results: the students’ answers are the best way to know whether they took the test effectively and without cheating.

A digital recording of their whole room, their retinal movements, or how many times they clicked the mouse during the exam is not assessment data; it is surveillance data. This monitoring is not necessarily useful for ensuring academic honesty or learning. It is a resource to systematize and facilitate online assessments at large volume, but at what cost?

The educational authorities themselves are aware that this level of scrutiny crosses lines that perhaps should not be crossed. Chris Dayley, the Academic Director of Evaluative Services at the University of Utah, told the Washington Post, “It is like a spyware that we just legitimized.”

Have you used online evaluation systems? Do you think the benefits outweigh the disadvantages? What has been your experience with either administering or presenting exams in this way? Tell us in the comments.

Translation by Daniel Wetta.

Sofía García-Bullé

This article from Observatory of the Institute for the Future of Education may be shared under the terms of the license CC BY-NC-SA 4.0