'How Common is Cheating in Online Exams and did it Increase During the COVID-19 Pandemic? A Systematic Review' by Philip M. Newton and Keioni Essex, published in the Journal of Academic Ethics (2023), comments:
Academic misconduct is a threat to the validity and reliability of online examinations, and media reports suggest that misconduct spiked dramatically in higher education during the emergency shift to online exams caused by the COVID-19 pandemic. This study reviewed survey research to determine how common it is for university students to admit cheating in online exams, and how and why they do it. We also assessed whether these self-reports of cheating increased during the COVID-19 pandemic, along with an evaluation of the quality of the research evidence which addressed these questions. 25 samples were identified from 19 studies, including 4672 participants, going back to 2012. Online exam cheating was self-reported by a substantial minority (44.7%) of students in total. Pre-COVID this was 29.9%, but during COVID cheating jumped to 54.7%, although these samples were more heterogeneous. Individual cheating was more common than group cheating, and the most common reason students reported for cheating was simply that there was an opportunity to do so. Remote proctoring appeared to reduce the occurrence of cheating, although data were limited. However, there were a number of methodological features which reduce confidence in the accuracy of all these findings. Most samples were collected using designs which make it likely that online exam cheating is under-reported, for example convenience sampling, modest sample sizes, and insufficient information to calculate response rates. No studies considered whether samples were representative of their population. Future approaches to online exams should consider how the basic validity of examinations can be maintained, given the substantial numbers of students who appear willing to admit engaging in misconduct. Future research on academic misconduct would benefit from using large representative samples and guaranteeing participant anonymity.
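The headline figures in the abstract (44.7% overall, 29.9% pre-COVID, 54.7% during COVID) are pooled proportions across the 25 samples. As a minimal sketch of how such pooling can work, the Python snippet below computes a simple participant-weighted pooled prevalence from per-sample counts. The sample data are invented placeholders, and the straight participant-weighting is an assumption for illustration; it is not the review's actual dataset or its exact synthesis method.

```python
# Minimal sketch: participant-weighted pooled prevalence across survey samples.
# The numbers below are illustrative placeholders, NOT data from Newton & Essex (2023).

samples = [
    # (participants_surveyed, participants_admitting_cheating, period)
    (120, 30, "pre-COVID"),
    (200, 70, "pre-COVID"),
    (150, 85, "during-COVID"),
    (300, 160, "during-COVID"),
]

def pooled_prevalence(rows):
    """Total admitted cheaters divided by total participants across samples."""
    total_n = sum(n for n, _, _ in rows)
    total_k = sum(k for _, k, _ in rows)
    return total_k / total_n

overall = pooled_prevalence(samples)
pre = pooled_prevalence([r for r in samples if r[2] == "pre-COVID"])
during = pooled_prevalence([r for r in samples if r[2] == "during-COVID"])

print(f"Overall: {overall:.1%}, pre-COVID: {pre:.1%}, during COVID: {during:.1%}")
```

A real prevalence synthesis would go further than this, for example weighting samples more carefully and reporting heterogeneity, which the abstract notes was greater among the during-COVID samples.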
The authors state:
Distance learning came to the fore during the global COVID-19 pandemic. Distance learning, also referred to as e-learning, blended learning or mobile learning (Zarzycka et al., 2021), is defined as learning with the use of technology where there is a physical separation of students from teachers during the active learning process, instruction and examination (Armstrong-Mensah et al., 2020). This physical separation was central to the sector-wide effort to reduce the spread of coronavirus.
COVID prompted a sudden, rapid and near-total adjustment to distance learning (Brown et al., 2022; Pokhrel & Chhetri, 2021). We all, staff and students, had to learn a lot, very quickly, about distance learning. Pandemic-induced ‘lockdown learning’ continued, in some form, for almost 2 years in many countries, prompting predictions that higher education would be permanently changed by the pandemic, with online/distance learning becoming much more common, even the norm (Barber et al., 2021; Dumulescu & Muțiu, 2021). One obvious potential change would be the widespread adoption of online assessment methods. Online exams offer students increased flexibility, for example the opportunity to sit an exam in their own homes. They may also reduce some of the anxiety experienced when attending in-person exams in an exam hall, and potentially reduce the administrative cost to universities.
However, assessment poses many challenges for distance learning. Summative assessments, including exams, are the basis for decisions about the grading and progress of individual students, while aggregated results can inform educational policy such as curriculum or funding decisions (Shute & Kim, 2014). Thus, it is essential that online summative assessments can be conducted in a way that allows their basic reliability and validity to be maintained. During the pandemic, universities shifted in-person exams to an online format very rapidly, with limited time to ensure that these methods were secure. There were subsequent media reports that academic misconduct was now ‘endemic’, with universities supposedly ‘turning a blind eye’ towards cheating (e.g. Henry, 2022; Knox, 2021). However, it is unclear whether this media anxiety is reflected in the real-world experience of universities.
Dawson defines e-cheating as ‘cheating that uses or is enabled by technology’ (Dawson, 2020, p. 4). Cheating itself is then defined as the gaining of an unfair advantage (Case & King, 2007, cited in Dawson, 2020, p. 4). Cheating poses an obvious threat to the validity of online examinations, a format which relies heavily on technology. Noorbehbahani and colleagues recently reviewed the research literature on a specific form of e-cheating: online exam cheating in higher education. They found that students use a variety of methods to gain an unfair advantage, including accessing unauthorized materials such as notes and textbooks, using an additional device to go online, collaborating with others, and even outsourcing the exam to be taken by someone else. These findings map onto the work of Dawson (2020), who described a similar taxonomy when considering ‘e-cheating’ more generally. These behaviours can be driven by a variety of motivations, including a fear of failure, peer pressure, a perception that others are cheating, and the ease with which students can cheat (Noorbehbahani et al., 2022). However, it remains unclear how many students actually engage in these cheating behaviours. Understanding the scale of cheating is an important pragmatic consideration when determining how, or even if, it could or should be addressed. There is an extensive literature on the incidence of other types of misconduct, but cheating in online exams has received less attention than other forms of misconduct such as plagiarism (Garg & Goel, 2022).
One seemingly obvious response to concerns about cheating in online exams is to use remote proctoring systems, wherein students are monitored through webcams and use locked-down browsers. However, the efficacy of these systems is not yet clear, and their use has been controversial, with students feeling that they are ‘under surveillance’ and anxious about being unfairly accused of cheating, or about technological problems (Marano et al., 2023). A recent court ruling in the USA found that the use of a remote proctoring system to scan a student’s private residence prior to taking an online exam was unconstitutional (Bowman, 2022), although, at the time of writing, this case is ongoing (Witley, 2023). There is already a long history of legal battles between the proctoring companies and their critics (Corbyn, 2022), and it is still unclear whether these systems actually reduce misconduct. Alternatives have been offered in the literature, including guidance on how to prepare online exams in a way that reduces the opportunity for misconduct (Whisenhunt et al., 2022), although it is unclear whether this guidance is effective either.
There is a large body of research literature which examines the prevalence of different types of academic dishonesty and misconduct. Much of this research is in the form of survey-based self-report studies. There are some obvious problems with using self-report as a measure of misconduct: it is a ‘deviant’ or ‘undesirable’ behaviour, and so those invited to participate in survey-based research have a disincentive to respond truthfully, if at all, especially if there is no guarantee of anonymity. There is also some evidence that certain demographic characteristics associated with an increased likelihood of engaging in academic misconduct are also predictive of a decreased likelihood of responding voluntarily to surveys, meaning that misconduct is likely under-reported when a non-representative sampling method, such as convenience sampling, is used (Newton, 2018).
Some of these issues with quantifying academic misconduct can be partially addressed by the use of rigorous research methodology, for example using representative samples with a high response rate, and clear, unambiguous survey items (Bennett et al., 2011; Halbesleben & Whitman, 2013). Guarantees of anonymity are also essential for respondents to feel confident about answering honestly, especially when the research is being undertaken by the very universities where participants are studying. A previous systematic review of academic misconduct found that self-report studies are often undertaken with small convenience samples with low response rates (Newton, 2018). Similar findings were reported when reviewing the reliability of research into the prevalence of belief in the Learning Styles neuromyth, suggesting that this is a wider concern within survey-based education research (Newton & Salvi, 2020).
However, self-report remains one of the most common ways that academic misconduct is estimated, perhaps in part because there are few other ways to measure it meaningfully. There is also a basic, intuitive validity to the method: asking students whether they have cheated is a simple and direct approach, compared with indirect approaches to quantifying misconduct based on, for example, learner analytics, originality scores or grade discrepancies. There is some evidence that self-report correlates positively with actual behaviour (Gardner et al., 1988), and that data accuracy can be improved by using methods which incentivize truth-telling (Curtis et al., 2022).