By Ian Linkletter, Emerging Technology and Open Education Librarian, British Columbia Institute of Technology
On October 17, 2025, as part of the BCcampus EdTech Sandbox Series, I presented Remote Proctoring Through an Ethical Lens: The Case Against Surveillance. The session brought together dozens of faculty, learning designers, educational technologists, librarians, and administrators to examine the harms and risks of remote proctoring (or e-proctoring) and how to address them.
What is Remote Proctoring?
Remote proctoring refers to the surveillance of students during exams using software instead of in-person invigilation. These systems exist along a spectrum of invasiveness. At the low end are lockdown browsers, which restrict actions such as switching tabs or accessing other applications. Some lockdown browser products require students to grant administrative access to their operating system, a level of control that carries significant security risk. At the high end are live and automated systems that subject students to continuous monitoring by remote workers or AI. These systems may record webcams, screens, audio, keystrokes, and network telemetry to flag suspicious behaviour. The most egregious invasion of privacy is the room scan feature, which can subject students to an unwarranted search of their personal living environment. In 2022, a federal judge in the United States deemed room scans unconstitutional.
Some remote proctoring systems qualify as automated decision systems and are subject to additional regulatory and ethical scrutiny. To tell whether a platform makes automated decisions, ask whether it takes consequential actions without human judgment at the moment of action. Every institution must make its own ethical determination about whether flagging suspicious activity constitutes automated decision-making. Some systems go further: they deny access to exams based on faulty facial detection, interrupt tests because reflections or photos are mistaken for additional faces, calculate suspicion scores, or terminate exams automatically without any live human oversight. Automating such consequential decisions risks profound harm and poses accessibility barriers.
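To make that distinction concrete, here is a minimal, hypothetical sketch. Nothing below reflects any vendor’s actual code; the function names and the threshold are invented for illustration. The first function only surfaces an event for human review; the second acts on a suspicion score at the moment of action, which is the pattern that makes a system an automated decision system:

```python
# Hypothetical sketch: flagging for human review versus automated
# decision-making. All names and thresholds are invented for this
# illustration; no vendor's actual logic is shown.

SUSPICION_THRESHOLD = 0.8  # arbitrary illustrative cut-off


def flag_for_review(session_id: str, suspicion_score: float) -> None:
    """Human-in-the-loop: the system only queues the event.

    An instructor reviews the recording and exercises judgment
    before any consequence reaches the student.
    """
    if suspicion_score >= SUSPICION_THRESHOLD:
        print(f"Session {session_id}: queued for instructor review")


def auto_terminate(session_id: str, suspicion_score: float) -> None:
    """No human in the loop: the consequence lands on the student
    at the moment of action. This is the pattern that constitutes
    an automated decision.
    """
    if suspicion_score >= SUSPICION_THRESHOLD:
        print(f"Session {session_id}: exam terminated automatically")


flag_for_review("abc123", 0.91)  # a person decides what happens next
auto_terminate("abc123", 0.91)   # the software decides on its own
```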
The harms of remote proctoring are well documented. Facial detection systems disproportionately fail to recognize students of colour, sometimes preventing them from accessing exams. Flagging so-called abnormalities embeds ableist assumptions about what bodies and behaviours are normal. Forced surveillance constitutes an invasion of privacy that may subject students to extreme stress.
Ed Tech or Academic Surveillance Software?
A simple way to evaluate whether remote proctoring qualifies as educational technology is to ask whether it serves a pedagogical purpose. It is difficult to imagine a course where it supports learning outcomes, though one could imagine a psychology class examining it as a case study of unethical research practices. These systems would never pass a human research ethics review, so how can they ethically be forced on students?
Remote proctoring is not ed tech. It’s academic surveillance software designed to monitor and control student behaviour during exams. Framing it as ed tech legitimizes practices that would otherwise be unacceptable in the classroom. Just because these systems integrate with learning management systems doesn’t mean they should be funded from ed tech budgets, supported by ed tech workers, or made the default in assessment design. I don’t even call remote proctoring systems “tools”, because tools work. Researchers from the University of Twente published “On the Efficacy of Online Proctoring using Proctorio” and concluded that “the use of online proctoring is therefore best compared to taking a placebo”. Why are schools paying tens or hundreds of thousands of dollars for risky, faulty, and unreliable technology? It doesn’t make any sense.
Ethical Frameworks for Evaluating Remote Proctoring Systems
In June 2025, the Government of British Columbia published the B.C. Post-Secondary Ethical Educational Technology Toolkit, developed by a working group of educational leaders. It contains a section about understanding and addressing bias. One strategy it suggests for avoiding biased technologies is to look at research conducted by other institutions. The Canadian Privacy Library, which I founded in 2024, is useful for this purpose: it contains over 500 Privacy Impact Assessments from B.C. public post-secondary institutions. In 2026, it will expand to include institutional evaluations of risk and bias, such as Algorithmic Impact Assessments.
Brock University’s Ethical Framework for Educational Technologies, approved by its senate in May 2025, is another framework containing useful considerations applicable to remote proctoring and other surveillance technologies. In addition to access, equity, accessibility, privacy, care, and wellbeing, it highlights the importance of identifying and considering algorithmic bias. Brock calls for “awareness of the protected grounds of the Ontario Human Rights Code” when ensuring that technology is not racist, ableist, or discriminatory. The framework specifically references remote proctoring as an avoidable risk, warning educational leaders not to “procur[e] a remote exam proctoring tool that disproportionally flags students with darker skin tones as cheating”.
As Dr. Tiera Tanksley of UCLA wrote in an essential paper about AI-mediated racism in schools:
Such was the case with Proctorio, an anticheating software that uses facial detection and machine learning technology to identify “behavioral anomalies” of live test takers (Proctorio, 2024). However, audits of the program’s technological infrastructures reveal an inability to “see” Black faces as human – a discriminatory design feature that not only disproportionately flags Black students as cheaters and “unethical” users of technology but also simultaneously increases their exposure to school discipline and carceral contact (Clark, 2021; Feathers, 2021).
The British Columbia Institute of Technology (BCIT), where I work and serve as vice-chair of the Educational Technology and Learning Design Committee, has also taken a proactive approach to potentially biased automated decision systems. We implemented the Government of Canada’s Algorithmic Impact Assessment as a requirement before proctoring systems can be approved by the committee. Algorithmic Impact Assessments, or AIAs, require scrutiny of bias, algorithms, and automated decision-making, as well as a mitigation strategy to address risks and harms.
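For readers who have not completed one, an AIA works like a structured questionnaire: answers about the system produce an impact score, answers about safeguards produce a mitigation score, and the combination maps to an impact level from I to IV. The sketch below is a simplification for illustration only. The actual questions, weights, and thresholds are defined by the Government of Canada’s Directive on Automated Decision-Making; the mitigation reduction and level boundaries shown here are my approximation of the published methodology, not an official implementation:

```python
# Illustrative sketch of how an Algorithmic Impact Assessment (AIA)
# arrives at an impact level. The scoring mechanics below are a
# simplification; the authoritative thresholds live in the Directive
# on Automated Decision-Making, not in this sketch.

def impact_level(raw_score: float, max_score: float,
                 mitigation_score: float, max_mitigation: float) -> int:
    """Map questionnaire scores to an impact level from I (1) to IV (4)."""
    # Strong mitigation measures reduce the effective impact score
    # (approximation of the published scoring approach).
    if max_mitigation and mitigation_score / max_mitigation >= 0.80:
        raw_score *= 0.85  # illustrative 15% reduction

    pct = raw_score / max_score
    if pct <= 0.25:
        return 1  # Level I: little to no impact
    if pct <= 0.50:
        return 2  # Level II: moderate impact
    if pct <= 0.75:
        return 3  # Level III: high impact
    return 4      # Level IV: very high impact


# A proctoring system that auto-terminates exams would score high on
# questions about impact on rights and automation without human
# oversight, pushing it toward the upper levels.
print(impact_level(raw_score=60, max_score=100,
                   mitigation_score=10, max_mitigation=40))  # -> 3
```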
Remote proctoring is not a neutral tool. Its use prioritizes behavioural control over learning, and signals to students that privacy, equity, and accessibility are not as important as surveillance. Institutions already possess ethical frameworks and impact assessments to guide procurement and implementation decisions. The best time to implement these tools was yesterday. Don’t delay – protect students today!
Personal Note
In the session, I mentioned being sued five years ago by a remote proctoring company, Proctorio, for sharing links to their YouTube videos to support my criticism of their product. I am pleased to announce that the lawsuit was settled in November 2025. You can read my announcement online. Thank you all for your support over the years. Be fearless. Criticize academic surveillance software, and remember: it’s not ed tech.
Webinar Resources and Transcript
If you missed the webinar, or want a quick refresher, you can access the webinar recordings and transcript here:
EdTech Sandbox Series: Remote Proctoring Through an Ethical Lens – the Case Against Surveillance