18 November 2020


'Good Proctor or "Big Brother"? AI Ethics and Online Exam Supervision Technologies' by Simon Coghlan, Tim Miller and Jeannie Paterson

This article philosophically analyzes online exam supervision technologies, which have been thrust into the public spotlight by campus lockdowns during the COVID-19 pandemic and the growing demand for online courses. Online exam proctoring technologies purport to provide effective oversight of students sitting online exams, using artificial intelligence (AI) systems supplemented and reviewed by human invigilators. Such technologies have alarmed some students, who see them as 'Big Brother-like', yet some universities defend their judicious use. Critical ethical appraisal of online proctoring technologies is overdue. The article examines these technologies through the ethical concepts of academic integrity, fairness, non-maleficence, transparency, privacy, respect for autonomy, liberty, and trust. Most of these concepts are prominent in the new field of AI ethics, and all are relevant to the education context. The essay sets out ethical considerations that educational institutions will need to review carefully before electing to deploy and govern specific online proctoring technologies.

The authors state:

Recently, online exam supervision technologies have been thrust into the public spotlight due to the growing demand for online courses [Ginder et al., 2019] and lockdowns during the COVID-19 pandemic [Flaherty, 2020]. While educational institutions can supervise remote exam-takers simply by watching live online video (e.g. via Zoom), an evolving range of online proctoring (OP) software programs offers more sophisticated, scalable, and extensive monitoring functions, including both human-led and automated remote exam supervision. Such technologies have generated confusion and controversy, including vigorous student protests [White, 2020]. Some universities have dug in against criticism, while others have outright rejected the technologies or have retreated from their initial intentions to use them [White, 2020]. At the root of the disagreement and debate between concerned students and universities are questions about the ethics of OP technologies. This essay explores these ethical questions. By doing so, it should assist students and educators in making informed judgements about the appropriateness of OP systems, as well as shine a light on an increasingly popular digital technology application. 

OP software platforms, which first emerged in 2008 [ProctorU, 2020b], are now booming. A 2020 poll found that 54% of educational institutions now use them [Grajek, 2020]. Increasingly, OP software contains artificial intelligence (AI) and machine learning (ML) components that analyse exam recordings to identify suspicious examinee behaviours or suspicious items in their immediate environment. OP companies, which can make good profits from their products [Chin, 2020], claim that automating proctoring increases the scalability, efficiency, and accuracy of exam supervision and the detection of cheating. These features have an obvious attraction for universities, some of which believe the benefits of OP technologies outweigh any drawbacks. However, the complexity and opacity of OP technologies, especially their automated AI functions [Hagendorff, 2020], can be confusing. Furthermore, some (though not all) students complain of a "creepy" Big Brother sense of being invaded and surveilled [Hubler, 2020]. Predictably, some bloggers are instructing students on how to bluff proctoring platforms [Binstein, 2015]. Scholars have just begun exploring remote and automated proctoring from a range of perspectives, including pedagogical, behavioral, psychological, and technical ones [Asep and Bandung, 2019, Cramp et al., 2019, González-González et al., 2020]. Nonetheless, and despite vigorous ethical discussion in regular media [Zhou, 2020], blog posts [Torino, 2020], and on social media, the ethics of emerging OP technologies has received limited scholarly analysis (cf. Swauger [2020]). Although moral assessments can be informed by empirical data about online and in-person proctoring — such as data about test-taker behavior [Rios and Liu, 2017] and grade comparisons [Goedl and Malla, 2020] — such assessments depend crucially on philosophical analysis. 
In the following ethical analysis, we identify and critically explore the key moral values of academic integrity, fairness, non-maleficence, transparency, privacy, autonomy, liberty, and trust as they apply to OP technologies. 

Some of these concepts are prominent in the new field of AI ethics [Jobin et al., 2019], which is burgeoning as AI moves increasingly into many facets of our lives, including education. In this paper, we suggest that OP platforms are neither a silver bullet for remote invigilation nor, as some would have it, a completely "evil" technology [Grajek, 2020]. This ethical analysis will help to inform concerned individuals while setting out important ethical considerations for educational institutions that are considering OP platforms, including how they might devise appropriate governance frameworks for their use and remain accountable for their decisions. It will also provide a context for various future empirical investigations of OP technologies. 

The essay is structured as follows. The Philosophical Approach section briefly explains the relevance of the central moral values to the OP debate. The Background section provides relevant context concerning exam invigilation and outlines central technological capabilities of popular OP programs. The Discussion section examines important ethical issues raised by the emergence of OP software. Finally, the Conclusion summarizes the ethical lessons for educational institutions and others.