Showing posts with label Plagiarism and Cheating.

03 December 2024

Cheating

'Responsible but powerless: staff qualitative perspectives on cheating in higher education' by Rowena Harper and Felicity Prentice in (2024) 20 International Journal for Educational Integrity comments 

Since its identification, contract cheating has evolved into a significant interdisciplinary field in higher education, encompassing both research and practice. This field informs institutional strategies, practices to mitigate contract cheating, professional development, and student education (Morris 2020). With many governments enacting legislation to combat commercial cheating industries, and quality assurance agencies establishing legislative standards for higher education providers, contract cheating has become a focal concern in the educational landscape. 

In Australia, the location for this study, a series of media scandals in 2015 sparked federal government concerns that students were increasingly using commercial contract cheating services to complete their assignments, and that universities were failing to detect it. Implications in some of the reporting that international students were amongst the users contributed to those concerns, as higher education was Australia’s third largest export industry at the time (behind iron and coal), with international students comprising over 25% of the higher education population. The prospect of reputational or economic damage to universities, or the Australian higher education sector more broadly, by a narrative that suggested compromised integrity led to widespread investment in understanding and addressing the issue of contract cheating at national and local levels. Demands on academics have expanded in parallel, with their roles given new administrative, research and pedagogical dimensions requiring new and evolving skills and resources. Their work requires a growing knowledge base that includes contemporary student behaviours that can undermine educational integrity, the individual, attitudinal and contextual factors that can motivate these behaviours, and security threats and cheating opportunities that may exist in the teaching and learning environment. This knowledge must then be applied in designing an engaging and supportive learning environment that develops students’ academic integrity and academic practice (Gottardello and Karabag 2022), acknowledges and scaffolds students’ diverse academic and linguistic abilities (Bretag et al. 2019; Slade et al. 2019), and utilises assessment practices that are authentic and meaningful, and as secure as practicable (Ellis et al. 2018; Dawson 2021). For the most part these teaching and learning activities align with teachers’ conceptions of their professional identity (Lynch et al. 2021). 
Less well understood is how teaching staff perceive their role in detecting and managing contract cheating and other forms of academic misconduct, particularly in an environment where academic misconduct responsibilities are increasingly distributed across different institutional roles (Ahuna, Frankovitch and Murphy 2023; Vogt and Eaton 2022). These roles may include faculty-based and/or centralised teams of academic integrity specialists who provide policy leadership, staff training, student education, or have responsibility for aspects of academic misconduct investigation and management. Roles may also include more senior academics to whom teachers are required to delegate certain forms of academic misconduct. 

Research into the institutional management of academic misconduct has focussed on the development of policies and procedures to prevent, detect and respond to incidents (Birks et al. 2020; Bretag and Mahmud 2014; Stoesz et al. 2019). These policies and procedures typically position teaching staff as having a policing role that feels inconsistent with and even anathema to their conceptualisations of their role and identity as facilitators of learning. For instance, in a comparative study across six countries, Gottardello and Karabag (2022) found that academics are often required to adopt the role of ‘intimidator’ to ensure students understand the consequences of academic misconduct. With the rise of contract cheating, the act of evaluating assessment tasks has increasingly become infused with a level of suspicion, as evidence suggests that the detection rate of contract cheating improves when academic staff maintain awareness of its potential occurrence (Dawson and Sutherland-Smith 2018; 2019). The gathering of evidence to identify and substantiate a case can require quasi-forensic processes such as linguistic and stylometric analyses (Ison 2020; Mellar et al. 2018), nuanced interpretation of text-matching software reports (Bretag and Mahmud 2009; Lancaster and Clarke 2014), scrutiny of document metadata (Johnson and Davies 2020), and surveillance of Learning Management System traffic to leverage information on user IP addresses (Dawson 2021). All this occurs against a backdrop of challenging organisational conditions that include dwindling resources, increasing workloads and increasing casualisation (Amigud and Pell 2021; Birks et al. 2020; Harper et al. 2019; De Maio et al. 2020). 

In addition to their roles in teaching, learning and detection, teaching staff have been described by some as ‘morally responsible’ (Sattler et al. 2017, 1128) for the ongoing problem of student cheating, with others suggesting that a failure to prevent and detect academic misconduct actively is indicative of ‘staff laziness’ and ‘lack of creativity’ (Walker and White 2014, 679). Some of the language used in the literature frames the problem as a combative one and positions teaching staff as the ‘guardians of integrity’ (Amigud and Pell 2022, 312) who are on the front line (Burrus et al. 2015, 100; Singh and Bennington 2012, 115), ‘in the trenches’ (Atkinson et al. 2016, 197), in an ‘arms race’ (Birks et al. 2020, 13) and ‘waging a losing battle’ (Asefa and Coalter 2007, 43) against academic misconduct. The combatants portrayed in this war seem to be the teaching staff and students, staring at each other across a moral divide. Given the critical task of teaching staff to address contract cheating, the ways in which they make sense of and navigate their competing roles and responsibilities need to be better understood. The project reported in this paper was part of a nationally funded research project entitled Contract Cheating and Assessment Design: Exploring the Connection, which conducted parallel staff and student surveys at 12 Australian higher education institutions, including 8 universities, between October and December 2016. The surveys addressed four research questions:

1. How prevalent is contract cheating in Australian universities?
2. Is there a relationship between cheating behaviours and sharing behaviours?
3. What are university staff experiences with and attitudes towards contract cheating and other forms of outsourcing?
4. What are the individual, contextual and institutional factors that are correlated with contract cheating and other forms of outsourcing?

This paper reports only on the data gathered from the 8 universities. Notably, the data were collected at a time before the COVID-19 pandemic prompted an emergency pivot in teaching and assessment, and most significantly prior to the emergence of Large Language Model Generative Artificial Intelligence (GenAI). However, we assert that the fundamental challenges of ‘cheating’ remain the same, and that the organisational conditions and staff experiences illustrated here are only likely to have intensified as a result of the disruptions experienced since 2016.

10 October 2024

No, Just No

In Dayal [2024] FedCFamC2F 1166 Humphreys J considered a solicitor who used AI to generate a list and summary of authorities that (oops, but unsurprising given AI hallucinations) were later acknowledged by the solicitor not to exist. The solicitor had not verified the accuracy of the document. 

 The judgment states 

This matter relates to my decision to refer the conduct of a solicitor to the Office of the Victorian Legal Services Board and Commissioner. The solicitor in question tendered to the court a list and summary of legal authorities that do not exist. The solicitor has informed the court the list and summary were prepared using an artificial intelligence (“AI”) tool incorporated in the legal practice management software he subscribes to. The solicitor acknowledges he did not verify the accuracy of the information generated by the research tool before submitting it to the court. 

The solicitor concerned is Mr Dayal, a Victorian solicitor and principal of the firm C Law Firm. I will refer to him as “the solicitor” and his name and the name of his firm will be anonymised when my reasons are published, noting the purpose of my decision is not punitive. 

For the background to the matter, I refer to my earlier ex tempore reasons delivered on 19 July 2024 in the enforcement proceeding in which the solicitor appeared as agent for another firm of solicitors.  Those reasons explain the circumstances in which the list and summary of authorities was tendered by the solicitor, how the content of the list and summary of authorities was identified to be inaccurate and the solicitor’s acknowledgement that the list and summary of authorities was prepared with the assistance of AI.... 

In his written submissions the solicitor acknowledged: (a) Handing up to the court on 19 July 2024 a document that purported to contain summaries of relevant authorities and included what looked like medium neutral citations identifying those decisions; (b) Using legal software, and in particular an AI driven research tool module, to generate the list of authorities and summaries; (c) Neither he nor another legal practitioner had reviewed the output generated by the research tool to ensure the accuracy of the list of authorities and case summaries; and (d) The authorities identified in the list and summary tendered to the court do not exist. 

The solicitor has offered an unconditional apology to the court for tendering the inaccurate list and summary of authorities. He has provided an assurance that he will “take the lessons learned to heart and will not commit any such further breach of professional standards in the future.” He asks that I not make a referral to the Victorian Legal Services Board. 

The submissions made by the solicitor include that he did not intentionally mislead the court. In support of that submission, the solicitor provided information as to the circumstances which led to him relying on the AI tool within the practice management software he uses and how he generated the list of authorities and case summaries. He explained that he did not fully understand how the research tool worked. He acknowledged the need to verify AI assisted research, or indeed any source of legal research relied upon, for accuracy and integrity. 

The solicitor outlined the steps he has taken to address and mitigate the impact of his conduct, including voluntarily making a payment to the solicitors for the other party in the enforcement proceeding, in settlement of costs thrown away for the hearing on 19 July 2024. He says he has informed the Legal Practitioners Liability Committee (“LPLC”) of what occurred and that the LPLC is providing him with ongoing professional support. The solicitor has also provided submissions in relation to his personal and professional circumstances and the stress and cost caused to him as a result of his conduct on 19 July 2024. He offered to provide an affidavit to verify the information provided in his submissions. 

Use of AI in litigation 

The use of technology is an integral part of efficient modern legal practice. At the frontier of technological advances in legal practice and the conduct of litigation is the use of AI. Whilst the use of AI tools offers opportunities for legal practitioners, it also comes with significant risks. 

Relevantly to this case, the USA District Court case of Mata v Avianca Inc drew worldwide attention to the risk of relying on generative AI for research purposes in litigation without independent verification. In that case, attorneys of a firm who relied on generative AI to prepare legal submissions which were filed referring to non-existent cases, and initially stood by the submissions when called into question by the court, were found to have abandoned their professional responsibilities and sanctioned. The USA District Court outlined the potential harms flowing from the filing of bogus submissions in its judgment as follows:  

Many harms flow from the submission of fake opinions. The opposing party wastes time and money in exposing the deception. The Court’s time is taken from other important endeavors. The client may be deprived of arguments based on authentic judicial precedents. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession and the American judicial system. And a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.

The potential harms identified by the USA District Court apply to the reliance on non-existent authorities in this court. 

Whilst this court has not yet done so, a number of courts in Australia and overseas have formulated guidelines for the responsible use of generative AI by litigants and lawyers, to assist those conducting litigation before them. 

Guidelines issued by each of the Supreme Court of Victoria and County Court of Victoria, for example, emphasise: (a) Parties and practitioners who are using AI tools in the course of litigation should ensure they have an understanding of the manner in which those tools work, as well as their limitations; (b) The use of AI programs must not indirectly mislead another participant in the litigation process (including the court) as to the nature of any work undertaken or the content produced by that program. Ordinarily parties and their practitioners should disclose to each other the assistance provided by AI programs to the legal task undertaken; and (c) The use of AI to assist in the completion of legal tasks must be subject to the obligations of legal practitioners in the conduct of litigation, including the obligation of candour to the court. 

Importantly in the context of this matter, the guidelines issued by the Supreme Court and County Court of Victoria explain that generative AI and large language models create output that is not the product of reasoning and nor are they a legal research tool. Generative AI does not relieve the responsible legal practitioner of the need to exercise judgment and professional skill in reviewing the final product to be provided to the court. 

Duties of legal practitioners 

Whilst not issued by this court or applying directly to practitioners conducting litigation in this court, I mention these particular guidelines because they reflect the responsible use of AI by practitioners in litigation by reference to the duties of legal practitioners generally, including the duty not to mislead the court or another participant in the litigation process and the duty of candour to the court. In that sense, the guidance provided by these particular guidelines is applicable to practitioners conducting litigation in this court. 

Relevantly to the conduct of the solicitor before me, the duties of Victorian solicitors include: (a) The paramount duty to the court and to the administration of justice,  which includes a specific duty not to deceive or knowingly or recklessly mislead the court; (b) Other fundamental ethical duties, including to deliver legal services competently and diligently;  and (c) To not engage in conduct which is likely to diminish public confidence in the administration of justice or bring the legal profession into disrepute. 

The solicitor has acknowledged a breach of the professional standards expected of a solicitor in this court, by his conduct in tendering a list and summary of authorities that do not exist, generated without disclosing the source of the information presented to the court and without verifying its accuracy.

Humphreys J stated that 'it is in the public interest for the Victorian Legal Services Board and Commissioner to be aware of the professional conduct issues arising in this matter, given the increasing use of AI tools by legal practitioners in litigation more generally'.

In the earlier judgment - Handa & Mallick [2024] FedCFamC2F 957 - Humphreys J stated 

 The matter was stood down this morning for the purposes of the parties’ legal representatives discussing the issues identified earlier this morning in relation to the enforcement application and to see if there was any prospect of a negotiated resolution. I asked if either party’s representative was in a position to provide me with any authorities that they sought to rely upon, for me to read while the matter was stood down. 

Mr B tendered a single-page list of authorities. Upon returning to chambers neither I nor my associates were able to locate the cases identified in that list. The case citations provided for each of the four listed cases correspond with cases reported by different names. My associates asked Mr B to provide copies of the authorities referred to in the list, and he did not do so. 

When the matter returned to court, I asked Mr B if the list of authorities had been provided using artificial intelligence. He informed me the list had been prepared from LEAP, being a legal software package, as I understand it, used for legal practice management and other purposes. I asked if LEAP relies on artificial intelligence. He indicated that it does, answering “there is an artificial intelligence for LEAP.” I foreshadowed making procedural orders later in the day, requiring Mr B to provide an explanation as to what had occurred. Mr B clarified this afternoon that he prepared the list of authorities and not Ms Aus Lawyers. 

I informed the parties and their legal representatives this morning that as a concern had arisen in relation to the veracity of information provided in the list of authorities, a concern had in turn been raised in relation to the competency and ethics of Mr B. In light of what transpired, I asked that the husband be assisted to seek advice from a duty lawyer in relation to Mr B continuing to assist him today. The husband has been present in court throughout these discussions. He informed the court via Mr B and also directly in court after seeing the duty lawyer that he is comfortable for Mr B to continue assisting him today. 

Unfortunately, the parties have been unable to reach an agreement in relation to the enforcement application. That may be because other matters have arisen during the course of the day taking their attention and time away from their negotiations. I encourage them to continue those negotiations over the coming days, pending the adjourned hearing next Wednesday. 

I have foreshadowed with Mr B and counsel for the wife, making an order providing Mr B an opportunity to respond to the court's proposal to refer his conduct in tendering the apparently inaccurate list of authorities today, to the Legal Services Board and Commissioner for investigation. Beyond that, I will not be making an assessment or a determination in relation to that conduct. That will be a matter for the legal professional body if a referral is made. The purpose of the order I make is for Mr B to be afforded procedural fairness in relation to my proposal to make that referral. I will provide him with one month to do that. Mr B has been informed of the orders I intend to make this afternoon and has not wished to make submissions against that course. 

Counsel for the wife has foreshadowed making an application for costs in relation to the adjournment of today's hearing. He anticipates doing so at the conclusion of the enforcement hearing rather than separately today. I have indicated to the parties and to their legal representatives today, that if any application is made for costs to be paid personally by Mr B (as agent appearing today for the husband), he is to be put on notice of that application and have an opportunity to respond by way of procedural fairness. I will ensure that any further orders made in relation to the foreshadowed cost application provide for that to occur.

08 September 2024

Cheating

The Financial Times comments that Deloitte in the UK has reinstated in-person interviews for its graduate scheme, following pressure from the Financial Reporting Council to reduce the potential for cheating in virtual assessments. 

Deloitte stated it will return to in-person interviews from this month for those applying for its graduate and apprenticeship programmes, switching back from the fully online recruitment process adopted during COVID.

The FRC's lapidary annual review of audit quality at Deloitte, published in July, characterised Deloitte’s fully online recruitment arrangements as posing potential “risks”. In contrast, the other Big Four firms were commended for taking steps to reduce the risk of cheating in online recruitment tests, for example by conducting in-person interviews and assessments. 

 The FT states

 The FRC has in recent years raised concerns that some recruitment tests were susceptible to cheating, saying that this type of misconduct affected the integrity of the profession. It said in July that it had continued to find examples of cheating over the past year, adding that this was “unacceptable”. It declined to say at which firms it found examples of cheating. 

Deloitte said ... “In-person interviews provide candidates with an opportunity to see first-hand what it’s like to work at Deloitte and meet the people they will be working with. The initial stages of the application process will remain virtual.” 

Deloitte declined to say whether the FRC found any examples of cheating by candidates in online recruitment tests at the firm. One person familiar with the details said the firm “investigates matters” if there are concerns that its people are not demonstrating the “highest professional standards”.

02 August 2024

Cheating

'Chegg’s Growth, Response Rate, and Prevalence as a Cheating Tool: Insights From an Audit within an Australian Engineering School' by Edmund Pickering and Clancy Schuller in (2024) Journal of Academic Ethics comments 

Online tools are increasingly being used by students to cheat. File-sharing and homework-helper websites offer to aid students in their studies, but are vulnerable to misuse, and are increasingly reported as a major source of academic misconduct. Chegg.com is the largest such website. Despite this, there is little public information about the use of Chegg as a cheating tool. This is a critical omission, as for institutions to effectively tackle this threat, they must have a sophisticated understanding of their use. To address this gap, this work reports on a comprehensive audit of Chegg usage conducted within an Australian university engineering school. We provide a detailed analysis of the growth of Chegg, its use within an Australian university engineering school, and the wait time to receive solutions.

The authors state that over half of the audited units had assessment content found on Chegg. 

1180 solutions were found on Chegg which directly matched assessment content, with the largest unit having 394 matches identified. We reiterate that the uploading of these 1180 assessment items to Chegg constituted academic misconduct as these were efforts to subvert assessment.

They note that

Chegg is broadly used to cheat and 50% of questions asked on Chegg are answered within 1.5 h. This makes Chegg an appealing tool for academic misconduct in both assignment tasks and online exams. We further investigate the growth of Chegg and show its use is above pre-pandemic levels. This work provides valuable insights to educators and institutions looking to improve the integrity of their courses through assessment and policy development. Finally, to better understand and tackle this form of misconduct, we call on education institutions to be more transparent in reporting misconduct data and for homework-helper websites to improve defences against misuse.

17 June 2024

Cheating

'Understanding how and why students use academic file-sharing and homework-help websites: implications for academic integrity' by Christine Slade, Guy J Curtis and Sheona Thomson in (2024) Higher Education Research and Development comments

 In the past decade, extra-institutional file-sharing and homework-help websites have gone from being small-scale operations to large corporate businesses. File-sharing and homework-help websites threaten academic integrity when students use assessment work sourced from these sites as if it were their own. However, little is known about how students use these websites, what motivates students’ use, and whether students are aware of the risks of using these sites. In an international survey of 1000 students, nearly half had heard of, or used, file-sharing and homework-help websites, and 377 completed a longer follow-up survey. We also undertook qualitative analysis of social media posts related to file-sharing and homework-help websites. Students indicate that they used the websites to obtain material to study for and/or complete assessments, and they exchanged assessment and study materials for altruistic reasons as well as for personal benefit. Students were mostly aware of academic integrity risks in using the websites but were typically unaware of their own institutions’ position or policies regarding the use of these sites. It is recommended that higher education institutions develop policies and educate students regarding unaffiliated file-sharing and homework-help websites to promote academic integrity. 
 
Contract cheating is the outsourcing of students’ educational assessment work, which they should personally complete, to third parties, often for payment (Bretag et al., 2019; Curtis et al., 2022a). Research into contract cheating has examined students’ use of ghostwriters, who complete bespoke assignments on demand (e.g., Clarke & Lancaster, 2006; Eaton et al., 2022; Newton, 2018). Yet Curtis et al. (2022c) found that it is more common for students to download, lightly edit, and then submit assessments written by other students that they obtain from file-sharing websites. Other recent research showed a nearly 200% increase in internet traffic to a homework-help website within the first year of the COVID-19 pandemic (Lancaster & Cotarlan, 2021). Yet, with rare exceptions (e.g., Rogerson, 2022, 2023), little attention has been paid to students’ use of file-sharing and homework-help websites, and the potential impact on academic integrity. 
 
This paper reports on a research project that investigated how students use file-sharing and homework-help online services, and their motivation in undertaking these transactional practices. On these platforms, we know that students share examples of examination preparation notes and even completed assessment tasks, and use the ‘tutoring’ and ‘homework help’ functions to rapidly source solutions to assessments in progress, e.g., unproctored online quizzes (Rogerson, 2023; Rogerson & Basanta, 2016). Nonetheless, as the following review of the current literature reveals, there are numerous gaps in our knowledge of how students interact with and think about these services. 
 
The ‘Buy, sell, trade’ business model 
 
In general, academic file-sharing websites operate by allowing students to upload materials such as their notes and assessment items (essays, reports, tests), and, in return, they receive ‘unlocks’ that allow them to download a lesser number of files than they have uploaded (Eaton, 2021; Rogerson & Basanta, 2016). Alternatively, students can pay a subscription fee to access (or unlock) a certain number of items in the period of their subscription. Homework-help websites allow students to post questions and receive answers, and they may also provide access to answers that have already been provided to other students (Lancaster & Cotarlan, 2021). Some websites provide both file-sharing and homework-help functions, while others just provide one. Thus, depending on the website, students may pay and/or upload materials to download files and/or get answers to questions. 
 
The business model of file-sharing and homework-help websites is such that they can be used both legitimately as a study aid by students or potentially to cheat on higher education assessments (Eaton, 2021; Rogerson, 2022). Recent evidence suggests that over 1 in 10 Australian students submit work that was principally written by other students, which they obtained from file-sharing websites (Curtis et al., 2022c). In addition, the ability to rapidly get ‘expert’ answers to questions allows students to obtain third party written solutions to assessment items and submit these under their own name. Concerns about such use and promotion of homework-help websites were recently articulated by the Tertiary Education Quality and Standards Agency in Australia because such actions would breach new laws against providing cheating services (Ross, 2023).

21 May 2024

Identity

Mellor J in COPA v Wright [2024] EWHC 1198 (Ch) comments 

1. Dr Craig Steven Wright (‘Dr Wright’) claims to be Satoshi Nakamoto i.e. he claims to be the person who adopted that pseudonym, who wrote and published the first version of the Bitcoin White Paper on 31 October 2008, who wrote and released the first version of the Bitcoin Source Code and who created the Bitcoin system. Dr Wright also claims to be a person with a unique intellect, with numerous degrees and PhDs in a wide range of subjects, the unique combination of which led him (so it is said) to devise the Bitcoin system. 

2. Thus, Dr Wright presents himself as an extremely clever person. However, in my judgment, he is not nearly as clever as he thinks he is. In both his written evidence and in days of oral evidence under cross-examination, I am entirely satisfied that Dr Wright lied to the Court extensively and repeatedly. Most of his lies related to the documents he had forged which purported to support his claim. All his lies and forged documents were in support of his biggest lie: his claim to be Satoshi Nakamoto. 

3. Many of Dr Wright’s lies contained a grain of truth (which is sometimes said to be the mark of an accomplished liar), but there were many which did not and were outright lies. As soon as one lie was exposed, Dr Wright resorted to further lies and evasions. The final destination frequently turned out to be either Dr Wright blaming some other (often unidentified) person for his predicament or what can only be described as technobabble delivered by him in the witness box. Although as a person with expertise in IT security, Dr Wright must have thought his forgeries would provide convincing evidence to support his claim to be Satoshi or some other point of detail and would go undetected, the evidence shows, as I explain below and in the Appendix, that most of his forgeries turned out to be clumsy. Indeed, certain of Dr Wright’s responses in cross-examination effectively acknowledged that point: from my recollection at least twice he indicated if he had wanted to forge a document, he would have done a much better job. 

4. If Dr Wright’s evidence was true, he would be a uniquely unfortunate individual, the victim of a very large number of unfortunate coincidences, all of which went against him, and/or the victim of a number of conspiracies against him. 

5. The true position is far simpler. It is, however, far from simple because Dr Wright has lied so much over so many years that, on certain points, it can be difficult to pinpoint what actually happened. Those difficulties do not detract from the fact that there is a very considerable body of evidence against Dr Wright being Satoshi. To the extent that it is said there is evidence supporting his claim, it is at best questionable or of very dubious relevance or entirely circumstantial and at worst, it is fabricated and/or based on documents I am satisfied have been forged on a grand scale by Dr Wright. These fabrications and forgeries were exposed in the evidence which I received during the Trial. For that reason, this Judgment contains considerable technical and other detail which is required to expose the true scale of his mendacious campaign to prove he was/is Satoshi Nakamoto. This detail was set out in the extensive Written Closing Submissions prepared by COPA and the Developers and further points drawn out in their oral closing arguments.

08 September 2023

AI and Integrity

'How Generative AI Turns Copyright Law on its Head' by Mark A Lemley comments 

While courts are litigating many copyright issues involving generative AI, from who owns AI-generated works to the fair use of training to infringement by AI outputs, the most fundamental changes generative AI will bring to copyright law don't fit in any of those categories. The new model of creativity generative AI brings puts considerable strain on copyright’s two most fundamental legal doctrines: the idea-expression dichotomy and the substantial similarity test for infringement. Increasingly creativity will be lodged in asking the right questions, not in creating the answers. Asking questions may sometimes be creative, but the AI does the bulk of the work that copyright traditionally exists to reward, and that work will not be protected. That inverts what copyright law now prizes. And because asking the questions will be the basis for copyrightability, similarity of expression in the answers will no longer be of much use in proving the fact of copying of the questions. That means we may need to throw out our test for infringement, or at least apply it in fundamentally different ways. 

'AI Providers as Criminal Essay Mills? Large Language Models meet Contract Cheating Law' (UCL Faculty of Laws, 2023) by Noëlle Gaumann & Michael Veale comments

Academic integrity has been a constant issue for higher education, already heightened by the easy availability of essay mill and contract cheating services over the Internet. Jurisdictions across the world have passed a range of laws making it an offence to offer or advertise such services. Because of the nature of these services, which may make students agree to not submit work they create or support, some of these offences have been drafted extremely broadly, without intent or knowledge requirements. The consequence of this is that there sit on statute books a range of very wide offences covering the support of, partial or complete authoring of assignments or work. 

At the same time, AI systems have become part of public consciousness, particularly since the launch of ChatGPT from OpenAI. These large language models have quickly become part of workflows in many areas, and are widely used by students. They have concerned higher education institutions because they closely resemble essay mills in their functioning and results. 

This paper attempts to unravel the intersection between essay mills, general purpose AI services, and emerging academic cheating law. We:

  • Analyse, in context, academic cheating legislation from jurisdictions including England and Wales, Ireland, Australia, New Zealand, US States, and Austria, in light of how it applies to essay mills, AI-enhanced essay mills, and general purpose AI providers. (Chapter 2) 

  • Examine and document currently available services by new AI-enhanced essay mills, characterising them and examining the way they present themselves both on their own websites and apps, and in advertising on major social media platforms including Instagram and TikTok. These include systems which write entire essays as well as those designed to reference AI-created work, provide outlines, and deliberately ‘humanise’ text so as to avoid nascent AI detectors. (Chapter 3) 

  • Outline the tensions between academic cheating legal regimes and both AI-enhanced essay mills and general purpose AI systems, which can allow students to cheat in much the same way. (Chapter 4) 

  • Provide recommendations to legislators and regulators about how to design regimes which effectively limit AI-powered contract cheating without, as in some current jurisdictions, accidentally bringing bona fide general purpose AI systems into scope unnecessarily. (Chapter 5)

We make some important findings. Firstly, there is already a significant market of AI-enhanced essay mills, many of which are developing features directly designed to frustrate education providers’ current attempts to detect and mitigate the academic integrity implications of AI generated work. 

Secondly, some jurisdictions have scoped their laws so widely that it is hard to see how ‘general purpose’ large language models such as OpenAI’s GPT-4 or Google’s Bard would not fall within their provisions, and thus be committing a criminal offence. This is particularly the case in England and Wales and in Australia. 

Thirdly, the boundaries between assistance and cheating are being directly blurred by essay mills utilising AI tools. Given the nature of the academic cheating regimes, we suspect most enforcement will result from private action rather than prosecutions. These regimes interact in important and until now unexplored ways with other legal regimes, such as the EU’s Digital Services Act, the UK’s proposed Online Safety Bill, and contractual governance mechanisms such as the terms of service of AI API providers and the licensing terms of open source models. 

We conclude with recommendations for policymakers and HE providers. These include that:

  • Jurisdictions should explore creating obligations for AI-as-a-service providers to enforce their own terms and conditions, similar to obligations placed on intermediaries under the Digital Services Act and the Online Safety Bill. This would create an avenue to cut off professionalised essay mills using these services when notified or investigated. 

  • Jurisdictions should name a regulator and provide them with investigation and enforcement powers. If they are unwilling to do this, giving formal ability to higher education institutions to refer matters to prosecuting authorities would be a start. 

  • Regulators should issue guidelines on the boundaries of essay mills in the context of AI, considering general purpose systems and systems that allow co-writing, outlining or research. 

  • Regulators, when established, should have a formal, international forum to create shared guidance, which they should have regard to when enforcing. Legislation should be amended to give formal powers of joint investigation and cooperation through this forum. 

  • Legislation should be amended to give general-purpose AI systems a safe harbour from criminal consideration as an essay mill, insofar as they meet a series of criteria designed to lower their risk in this regard. We propose watermarking, regulatory co-operation, and time-limited data retention and querying capacity based on queries provided by educational institutions, as mechanisms to consider. 

  • Higher education institutions share funding to organise individuals to monitor advertising archives and other services for essay mills, and report these to prosecutors in relevant jurisdictions as well as take down adverts for these services rapidly. Reporting should be wide, including to payment service providers, who may be able to stop profit from these regimes, and to AI service providers.

26 July 2023

Cheating

'Widespread use of Chegg for academic misconduct: Perspective from an Australian engineering school' by Edmund Pickering and Clancy Schuller comments 

Online tools are increasingly being used by students to cheat. File-sharing and homework-helper websites offer to aid students in their studies, but are vulnerable to misuse, and are increasingly reported as a major source of academic misconduct. Chegg.com is the largest such website. Despite this, there is little public information about the use of Chegg as a cheating tool. This is a critical omission, as for institutions to effectively tackle this threat, they must have a sophisticated understanding of their use. To address this gap, this work reports on a comprehensive audit of Chegg usage conducted within an Australian university engineering school. We provide a detailed analysis of the growth of Chegg, its use within an Australian university engineering school, and the wait time to receive solutions. Alarmingly, we find Chegg is broadly used to cheat and 50% of questions asked on Chegg are answered within 1.5 hours. This makes Chegg an appealing tool for academic misconduct in both assignment tasks and online exams. This work provides valuable insights to educators and institutions looking to improve the integrity of their courses through assessment and policy development. Finally, to better understand and tackle this form of misconduct, we call on education institutions to be more transparent in reporting misconduct data and for homework-helper websites to improve defences against misuse. 

Academic integrity is of critical importance to the modern tertiary education sector and underpins pedagogical approaches to teaching and learning. To ensure the integrity of their courses, academics and institutions must be aware of the latest methods students use to cheat. Universities must be agile in their response to new trends in academic integrity and student misconduct, especially given the new challenges and opportunities offered by the digital era. While early academic integrity research placed emphasis on plagiarism (Walker, 1998), new forms of misconduct are growing, including homework-helper, contract cheating and file-sharing websites (Curtis et al., 2022; Lancaster & Cotarlan, 2021a), automatic text-spinning and paraphrasing tools (Prentice & Kinden, 2018; Rogerson & McCarthy, 2017) and AI tools (Finnie-Ansley et al., 2022). Of interest in this paper is the growing prevalence of homework-helper websites, in particular Chegg.com (henceforth referred to as Chegg), the largest such website (Chegg Inc, 2022). 

Homework-helper websites offer to aid students in their learning; however, they also represent a growing threat to academic integrity. In this article we draw a distinction between contract-cheating websites (e.g. essay mills), which exist exclusively for the purpose of cheating, and homework-helper websites, which present as a legitimate service but whose business model is extremely vulnerable to misuse. We also make a distinction between homework-helper and file-sharing websites. The latter term is commonly used in the literature, but is not preferred by the authors of this article, as file sharing is only one of the services offered by homework-helper websites (Lancaster & Cotarlan, 2021a). 

Homework-helper websites offer to aid students in their studies through a range of services, for example question & answer (Q&A) services, file-sharing (e.g. study notes and assessment) and citation assistance. Of concern in this paper are Q&A services, which are identified as a concerning source of cheating (Broemer & Recktenwald, 2021; Lancaster & Cotarlan, 2021a). Through these Q&A services, students can look up solutions to questions on a database or submit their own questions to be solved by the website’s ‘tutors’ (these are then added to the database). By far the largest homework-helper website is Chegg, with a market capitalisation of 3.7 billion USD and 7.3 million subscribers (Chegg Inc, 2022; Nasdaq Inc, 2022). Other large homework-helper sites include CourseHero, Studocu and Bartleby. These websites generally operate under a subscription model by which students pay a monthly fee (14.95 USD for Chegg as of writing) to access the solution database and to ask their own questions. While these sites purport to be for legitimate study, they are highly vulnerable to misuse, and there are limited mechanisms to prevent students using these Q&A services to cheat. Broemer and Recktenwald (2021) proposed that Chegg’s Q&A service is primarily used for cheating; this is further supported by Lancaster & Cotarlan (2021b). Figure 1 provides context on the use of these sites by showing (a) a unique assessment question, (b) Google search results identifying solutions to the question on (c-d) Chegg and (e) CourseHero. ... 

Considering a student’s motivations to cheat, homework-helper websites are highly appealing. A small proportion of students routinely cheat, while a much larger group, approximately 44%, falls within the cheat-curious category – these are students who may cheat under certain circumstances (Bretag et al., 2018, 2019; Rigby et al., 2015). Drivers of cheating behaviour include a perception of low risk (Diekhoff et al., 1999), a perception that there are many opportunities to cheat (Bretag et al., 2019) and perceptions of norms (Curtis et al., 2018) (i.e., the perception that cheating is the norm, or that others are cheating). 

Homework-helper websites embody these motivators. Homework-helper websites appear high in search engine results pages (often as the first result), and thus may appear when students engage in normal and healthy study behaviour (e.g. researching an assignment problem). It takes just one student to upload an item to a homework-helper website for it to appear in common search engines like Google. The appearance of these links provides a low barrier to entry, provides ample opportunity and feeds into a low perception of risk. Further, the fact that a peer has uploaded the assessment to a homework-helper site feeds into the perception that others are engaging in this behaviour. Combined, cheat curious students are vulnerable to these motivators. Finally, due to the large subscriber base and ease of uploading questions, these websites can rapidly include unique assignment questions (Christodoulou, 2022). 

The global COVID-19 pandemic has increased rates of academic misconduct and usage of homework-helper websites (Comas-Forgas et al., 2021; Erguvan, 2021; Lancaster & Cotarlan, 2021a). Subsequently, the pandemic has seen an increased focus from tertiary education institutions on modern trends in academic misconduct (Erguvan, 2021; Reedy et al., 2021; Turner et al., 2022). A recent study by Lancaster and Cotarlan (2021a) examined the impact of COVID-19 on Chegg usage, finding a 196% increase following the transition to online learning. This trend is particularly concerning for engineering and other STEM disciplines. STEM is known to be overrepresented in academic misconduct (Bretag et al., 2019; Lancaster & Cotarlan, 2021a). For example, a recent Australian large-scale survey of students found engineering students were 1.8x more likely to engage in cheating behaviour (Bretag et al., 2019). Adding to this concern, Chegg’s own data shows that a majority (59%) of its userbase are STEM students (Chegg Inc, 2022). 

While there is growing concern about homework-helper websites in the tertiary education sector, there remains little information on the use of these websites. To demonstrate, as of the date of writing, a Scopus search for ‘Chegg,’ the largest homework-helper website, returns nine journal articles, with only four related to academic misconduct; the others relate to Chegg’s textbook hiring service or geology (Busch, 2017; Emery-Wetherell & Wang, 2023; Lancaster & Cotarlan, 2021a; Ruggieri, 2020). Lancaster and Cotarlan (2021a) studied the impact of COVID-19 on Chegg usage; Ruggieri (2020) found 38–71% of physics students reported Chegg usage, with increased usage in more advanced units; Busch (2017) explored methods to reduce usage of sites like Chegg; and Emery-Wetherell & Wang (2023) explored cheating via Chegg in an introductory statistics course, explored ways to discourage cheating, and provided code to help identify cheating students. 

Broader searches uncover additional literature. In a valuable conference paper, Broemer and Recktenwald (2021) presented a detailed investigation of Chegg usage in a 2-hour online mechanical engineering exam, identifying 129 unique posts to Chegg, with 71% of posts answered during the exam (50% answered within 1 hour). Interestingly, many answered posts were not viewed during the exam, even by the uploader. Manoharan & Speidel (2020) uploaded assignment questions to Chegg to investigate factors like solution quality and the time taken for a solution to be provided, finding easy questions were answered quickly and correctly, while complex questions were not answered or received substandard answers. Interestingly, Manoharan & Speidel (2020) ensured the questions they uploaded were clearly identifiable as formative assessment, which violates Chegg’s policy and which Chegg tutors are required to report. None of these questions were identified as attempts to cheat. Finally, Somers et al (2023) developed a tool to automatically detect the upload of questions to file-sharing and homework-helper sites like Chegg. 

As Chegg is the largest homework-helper website, this lack of investigation represents a major gap in our understanding of academic misconduct and cheating. With such sparse information, developing effective strategies and policies is near impossible. To address this gap, this article aims to present a detailed understanding of Chegg usage within an Australian technology university engineering school. This article provides insights into the growth of Chegg’s homework-helper service, prevalence of Chegg usage, and time required to receive a solution on Chegg. The findings provide a valuable resource for institutions and academics in understanding the challenge presented by homework-helper websites.

11 January 2023

GPT

'GPT Takes the Bar Exam' by Michael James Bommarito and Daniel Martin Katz comments

 Nearly all jurisdictions in the United States require a professional license exam, commonly referred to as “the Bar Exam,” as a precondition for law practice. To even sit for the exam, most jurisdictions require that an applicant completes at least seven years of post-secondary education, including three years at an accredited law school. In addition, most test-takers also undergo weeks to months of further, exam-specific preparation. Despite this significant investment of time and capital, approximately one in five test-takers still score under the rate required to pass the exam on their first try. In the face of a complex task that requires such depth of knowledge, what, then, should we expect of the state of the art in “AI?” 

In this research, we document our experimental evaluation of the performance of OpenAI’s text-davinci-003 model, often referred to as GPT-3.5, on the multistate multiple choice (MBE) section of the exam. While we find no benefit in fine-tuning over GPT-3.5’s zero-shot performance at the scale of our training data, we do find that hyperparameter optimization and prompt engineering positively impacted GPT-3.5’s zero-shot performance. For best prompt and parameters, GPT-3.5 achieves a headline correct rate of 50.3% on a complete NCBE MBE practice exam, significantly in excess of the 25% baseline guessing rate, and performs at a passing rate for both Evidence and Torts. GPT-3.5’s ranking of responses is also highly correlated with correctness; its top two and top three choices are correct 71% and 88% of the time, respectively, indicating very strong non-entailment performance. 
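The top-two and top-three figures reported above are top-k accuracies. As a purely illustrative sketch (this is not the authors' code, and the rankings and answers below are invented toy data), top-k accuracy over ranked multiple-choice responses can be computed as:

```python
# Hypothetical sketch: top-k accuracy for a multiple-choice exam,
# given each question's answer options ranked by model preference.

def top_k_accuracy(ranked_choices, correct_answers, k):
    """Fraction of questions whose correct answer appears in the
    model's top-k ranked choices."""
    hits = sum(
        1 for ranking, answer in zip(ranked_choices, correct_answers)
        if answer in ranking[:k]
    )
    return hits / len(correct_answers)

# Toy data: each ranking orders the four MBE-style options (A-D).
rankings = [["B", "A", "D", "C"], ["C", "B", "A", "D"], ["A", "D", "B", "C"]]
answers = ["B", "A", "D"]

print(top_k_accuracy(rankings, answers, 1))  # 1 of 3 correct at top-1
print(top_k_accuracy(rankings, answers, 2))  # 2 of 3 correct within top-2
```

With four answer options, random guessing gives 25% at top-1, which is the baseline the 50.3% headline rate is measured against.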

While our ability to interpret these results is limited by nascent scientific understanding of LLMs and the proprietary nature of GPT, we believe that these results strongly suggest that an LLM will pass the MBE component of the Bar Exam in the near future. 

07 September 2022

Cheating

‘On the Efficacy of Online Proctoring using Proctorio’ by Laura Bergmans, Nacir Bouali, Marloes Luttikhuis and Arend Rensink in Proceedings of the 13th International Conference on Computer Supported Education 1 (CSEDU 2021) 279-290 comments 

In this paper we report on the outcome of a controlled experiment using one of the widely available and used online proctoring systems, Proctorio. The system uses an AI-based algorithm to automatically flag suspicious behaviour, which can then be checked by a human agent. The experiment involved 30 students, six of whom were asked to cheat in various ways, while five others were asked to behave nervously but take the test honestly. This took place in the context of a Computer Science programme, so the technical competence of the students in using and abusing the system can be considered far above average. 

The most important findings were that none of the cheating students were flagged by Proctorio, whereas only one (out of 6) was caught out by an independent check by a human agent. The sensitivity of Proctorio, based on this experience, should therefore be put at very close to zero. On the positive side, the students found (on the whole) the system easy to set up and work with, and believed (in the majority) that the use of online proctoring per se would act as a deterrent to cheating. 

The use of online proctoring is therefore best compared to taking a placebo: it has some positive influence, not because it works but because people believe that it works, or that it might work. In practice, however, before adopting this solution, policy makers would do well to balance the cost of deploying it (which can be considerable) against the marginal benefits of this placebo effect. 

The authors state

 All over the world, schools and universities have had to adapt their study programmes to be conducted purely online, because of the conditions imposed by the COVID-19 pandemic. The University of Twente is no exception: from mid-March to the end of August, no teaching-related activities (involving groups) were allowed on-campus. 

Where online teaching has worked at least reasonably well, in that we have by and by found effective ways to organise instruction, tutorials, labs and projects using online means, the same cannot be said for the testing part of the programme. Traditionally, we test our students using a mix of group project work and individual written tests. The latter range from closed-book multiple choice tests to open-book tests with quite wide-ranging, open questions. Such tests are (traditionally) always taken in a controlled setting, where the students are collected in a room for a fixed period, at the start of which they are given their question sheet and at the end of which they hand in their answers. During that period, a certain number of invigilators (in other institutions called proctors) are present to observe the students’ behaviour so as to deter them from cheating — defined as any attempt to answer the questions through other means than those intended and prescribed by the teacher. This system for testing is, we believe, widespread (if not ubiquitous) in education. 

Changing from such a controlled setting to online testing obviously opens up many more opportunities for cheating. It is hard to exaggerate the long-term threat that this poses to our educational system: without reliable testing, the level of our students cannot be assessed and a university (or any other) diploma essentially becomes worthless. We have to do more than just have students write the test online and hope for the best. 

Solutions may be sought in many different directions, ranging from changing the nature of the test altogether (from a written test to some other form, such as a take-home or oral test), to offering multiple or randomised versions to different students, or applying plagiarism checks to the answers, or calling upon the morality of the students and having them sign a pledge of good faith; or any combination of the above. All of these have their pros and cons. In this paper, rather than comparing or combining these measures, we concentrate on one particular solution that has found widespread adoption: that of online proctoring. In particular, we describe an experiment in using one of the three systems for online proctoring that have been recommended in the quickscan (Quickscan SURF, 2020) by SURF, a “collaborative organisation for ICT in Dutch education and research” of which all public Dutch institutes of higher education are members. 

Approach. 

Online proctoring refers to the principle of remotely monitoring the actions of a student while she is taking a test, with the idea of detecting behaviour that suggests fraud. The monitoring consists of using camera, microphone and typically some degree of control over the computer of the student. The detection can be done by a human being (the proctor, also called invigilator in other parts of the Anglo-Saxon world), or it can be done through some AI-based algorithm — or a combination of both. 

The question we set out to answer in this paper is: how well does it work? In other words, is online proctoring a good way to detect actual cheating, without accusing honest students — in more formal terms: is it both sensitive and specific? How do students experience the use of proctoring? 

In answering this question, we have limited ourselves to a single proctoring system, Proctorio, which is one of the three SURF-approved systems of (Quickscan SURF, 2020). The main reason for selecting Proctorio is the usability of the system; it is possible to use it on the majority of operating systems by installing a Google Chrome extension and it can be used for large groups of students. It features automatic detection of behaviour deemed suspicious in a number of categories, ranging from hand and eye movement to computer usage or sound. The teacher can select the categories she wants to take into account, as well as the sensitivity level at which the behaviour is flagged as suspicious, at any point during the proceedings (before, during or after the test). Proctorio outputs an annotated real-time recording for each student, which can be separately checked by the teacher so that the system’s suspicions can be confirmed or negated. The system is described in some detail in Section 2. 

Using Proctorio, we have conducted a controlled randomized trial involving 30 students taking a test specifically set for this experiment. The students were volunteers and were hired for their efforts; their results on the test did not matter to the experiment in any way. The subject of the test was a first-year course that they had taken in the past, meaning that the nature of the questions and the expected kind of answers were familiar. Six out of the 30 students were asked to cheat during the test, in ways to be devised by themselves, so as to fool the online proctor; the rest behaved honestly. Moreover, out of the 24 honest students, five were asked to act nervously; in this way we wanted to try and elicit false positives from the system. 

Besides Proctorio’s capabilities for automatic analysis, we also conducted a human scan of the (annotated) videos, by staff unaware of the role of the students (but aware of the initial findings of Proctorio). We expected that humans would be better than the AI-based algorithm in detecting certain behaviours as cheating, but worse in maintaining a sufficient and even level of attention during the tedious task of monitoring. 

Findings. 

Summarising, our main findings were: The automatic analysis of Proctorio detected none of the cheating students; the human reviewers detected 1 (out of 6). Thus, the percentage of false negatives was very large, pointing to a very low sensitivity of online proctoring. 

None of the honest students were flagged as suspicious by Proctorio, whereas one was suspected by the human reviewer. Thus, the percentage of false positives was zero for the automatic detection, and 4% for the human analysis, pointing to a relatively high specificity achievable by online proctoring (which, however, is quite useless in the light of the disastrous sensitivity). Furthermore, we gained valuable insights into the conditions necessary to make online proctoring an acceptable measure in the opinion of the participating students. 
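The sensitivity and specificity figures follow directly from the trial's confusion counts: 6 cheaters and 24 honest students, with the human reviewer catching one cheater and suspecting one honest student. A minimal sketch (our own illustration, not code from the paper):

```python
# Illustrative sketch: sensitivity and specificity from confusion counts.

def sensitivity(tp, fn):
    # True positive rate: fraction of actual cheaters correctly flagged.
    return tp / (tp + fn)

def specificity(tn, fp):
    # True negative rate: fraction of honest students not flagged.
    return tn / (tn + fp)

# Counts reported for the 30-student trial (6 cheaters, 24 honest).
auto = {"tp": 0, "fn": 6, "fp": 0, "tn": 24}   # Proctorio's automatic flags
human = {"tp": 1, "fn": 5, "fp": 1, "tn": 23}  # human review of the videos

print(sensitivity(auto["tp"], auto["fn"]))               # 0.0
print(specificity(auto["tn"], auto["fp"]))               # 1.0
print(round(sensitivity(human["tp"], human["fn"]), 2))   # 0.17
print(round(specificity(human["tn"], human["fp"]), 2))   # 0.96
```

The zero sensitivity of the automatic detection is the paper's central negative finding; the perfect automatic specificity is of little comfort, since a system that flags nobody is trivially specific.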

The outcome of the experiment is presented in more detail in Section 3, and discussed in Section 4 (including threats to validity). After discussing related work (Section 5), in Section 6 we draw some conclusions.

'Cheating in online courses: Evidence from online proctoring' by Seife Dendir and Stockton Maxwell in (2020) 2 Computers in Human Behavior Reports 100033 comments 

This study revives the unsettled debate on the extent of academic dishonesty in online courses. It takes advantage of a quasi-experiment in which online proctoring using webcam recording software was introduced for high-stakes exams in two online courses. Each course remained the same in its structure, content and assessments before and after the introduction of online proctoring. Analysis of exam scores shows that online proctoring was associated with a decrease in average performance in both courses. Furthermore, the decrease in scores persists when accounting for potential confounding factors in a regression framework. Finally, in separate regressions of exam performance on student characteristics, the regression explanatory power was higher for scores under proctoring. We interpret these results as evidence that cheating took place in the online courses prior to proctoring. The results also imply that online proctoring is an effective tool to mitigate academic dishonesty in online courses. 

 The authors state 

In the past two decades, higher education institutions have experienced unprecedented growth in online learning. In the U.S., where this study took place, enrollment in distance higher education grew steadily between 2002 and 2016. Since 2012, whereas overall enrollment in higher education has been declining, growth in distance education has in fact been rising. As of 2016, the latest year for which data are published, close to a third of all college students were taking at least one distance course (Seaman et al., 2018). 

More than half of these distance learners were students that were combining non-distance (face-to-face, F2F) learning with distance learning. Accordingly, today many “traditional” institutions offer a menu of online courses as well as fully online programs. This is prompted by sustained demand for such courses and programs – in 2016, for example, about 30 percent of students in public and private non-profit institutions in the U.S. enrolled in at least one distance learning course (source: own calculation using data in Seaman et al., 2018). It also appears that educators in all types of institutions have recognized that a structural shift has occurred, and that online delivery and learning will be a mainstay of higher education in the future. 

Therefore, the dialogue surrounding online education has turned to how best to deliver online courses. Various aspects of online courses, such as modality (fully online versus hybrid; synchronous versus asynchronous), technology platform, assessment and accessibility are considered and debated. The goal of such dialogue, ultimately, is to design and deliver online courses in which student learning and experience are at least on par with traditional (F2F) courses. Given this goal, the question of how much learning takes place in online courses (relative to the traditional/F2F mode) has become a critical point of contention (see, among others, Cavanaugh & Jacquemin, 2015; Alpert et al., 2016; Dendir, 2019; Paul & Jefferson, 2019). 

A particularly pertinent issue in this regard is academic dishonesty (McCabe et al., 2012). Some argue that even the measures that are used to gauge learning in online courses, such as scores on formative or summative assessments, do not truly reflect learning because they are possibly tainted by cheating that occurs during these assessments (Harmon et al., 2010; Arnold, 2016). If, for example, exam score distributions turn out to be comparable in an online course and its F2F counterpart, it does not follow that comparable learning takes place in the two modes, because the online scores are likely inflated by cheating. Such arguments are predicated on the assumption that academic dishonesty is more prevalent in online courses than F2F ones (Kennedy et al., 2000; Young, 2012). 

Various arguments are provided as to why online courses could be more amenable to academic dishonesty. One is that because assessments often happen in unsupervised or unproctored settings, it is difficult to confirm the identity of the test taker (Kraglund-Gauthier & Young, 2012). Similarly, online test takers can use unauthorized resources (e.g. cheat sheets, books or online materials) during assessments. Also, the online environment – by the mere absence of a close relationship and interaction with an instructor – can encourage collaborative (group) work with other students (Sendag et al., 2012; McGee, 2013; Hearn Moore et al., 2017). 

While there is also growing empirical evidence showing that academic dishonesty is relatively more common in online learning, the debate is not yet fully settled (Harton et al., 2019; Peled et al., 2019). It is in this context that the current study presents evidence from a quasi/natural experiment that occurred in two online courses at a midsize comprehensive university in the U.S. The experiment involved the introduction of online proctoring, using webcam recording software, for high-stakes exams. The structure, content and assessments (exams) in each course remained the same before and after the introduction of online proctoring. A change in student performance, if any, can therefore be attributed to the mitigation of cheating after online proctoring came into place, and provides direct evidence on the scale of academic dishonesty in online courses. 

Relative to much of the existing literature, the treatment here is unique because proctoring did not entail a change in modality. Many studies that investigate academic dishonesty typically compare student performance in unproctored online assessments and proctored F2F ones. But a comparison of student performance in proctored F2F exams and unproctored online exams may not be entirely valid because some of the performance differences could be due to the testing environment per se, apart from the effect of supervision (Fask et al., 2014; Butler-Henderson & Crawford, 2020). By comparing performance in the same learning mode (online) but before and after the advent of supervision, this study avoids any such complications. Furthermore, from a practical point of view, in many scenarios in-person proctoring of tests may not be feasible for fully online courses. Therefore, the results of the current study also provide evidence on the efficacy of easily adoptable, relatively low-cost online proctoring in online courses. 

The findings of the study suggest that cheating was taking place in the unsupervised exams. First, simple bivariate analyses show that there was a significant drop in average exam scores in both courses after online proctoring was introduced, in many cases by more than a letter grade. This is despite the fact that student characteristics remained largely similar before and after proctoring (implying sample selection was unlikely to be a factor). Second, explicitly accounting for student characteristics in a multiple regression framework could not explain away the decrease in performance. Finally, a comparison of regressions of scores on student ability and maturity indicators showed that their explanatory power was higher for the proctored exams. From these results one can also infer that online proctoring of assessments is a viable strategy to mitigate cheating in online courses. 

The balance of the paper is organized as follows. The next section reviews the related literature on academic dishonesty. Section 3 describes the setup of the study and data. Section 4 presents bivariate analysis, the regression methodology and results. Section 5 points out some caveats and limitations of the study. The last section concludes and draws a few implications on the basis of the results of the study.

12 October 2021

Cheating Sites

Whack a mole? Tertiary Education Quality and Standards Agency v Telstra Corporation Ltd [2021] FCA 1202 comments

1 The applicant, the Tertiary Education Quality and Standards Agency (TEQSA), seeks orders pursuant to s 127A of the Tertiary Education Quality and Standards Agency Act 2011 (Cth) (TEQSA Act) requiring the respondent carriage service providers (CSPs) to take steps to disable access to the online locations at the following domain names: ‘assignmenthelp4you.com’ and ‘assignmenthelp2u.com’, and the associated Uniform Resource Locators (URLs) and Internet Protocol (IP) addresses identified in the amended originating application and Schedule A to the amended concise statement (together, the Online Locations). 

2 The Online Locations (which appear to be operated by a person or persons located in India) are websites which are, or were, accessible to internet users (including in Australia) at the URLs, domain names, and the IP addresses specified in Schedule A to the amended concise statement. The Online Locations advertise or publish an advertisement to students undertaking courses of study offered by various Australian higher education providers, and offer services which include assignment, dissertation and essay writing for a fee. Those higher education providers include the University of Sydney, the University of New South Wales, La Trobe University, the University of Technology, the University of Melbourne, the University of Western Australia, the University of Queensland, the University of South Australia and the Australian Institute of Business and Management Pty Ltd (trading as King’s Own Institute). 

3 Although the site-blocking orders are sought in respect of two domain names, URLs and IP addresses, it is contended by TEQSA that they emanate from the same source and advertise the same service in substantially the same terms. This is in a context where the online location at the domain name ‘assignmenthelp4you.com’, became inaccessible (at least temporarily) very shortly after TEQSA notified the operator of this application. Around the same time, the online location became accessible at the domain name ‘assignmenthelp2u.com’, revealing strong similarities to ‘assignmenthelp4you.com’. 

4 The basis of the application for the orders sought is that the Online Locations advertise, publish or broadcast advertisements for an academic cheating service to students undertaking Australian courses of study with higher educator providers thereby, (at least) facilitating a contravention of s 114B(2) of the TEQSA Act. 

5 Each of the respondents is a CSP within the meaning of s 5 of the TEQSA Act and s 87 of the Telecommunications Act 1997 (Cth), and provides access to the internet for users of its internet service in Australia. Each of the respondent CSPs were notified of this application in accordance with s 127A(6)(b) of the TEQSA Act and have filed submitting appearances. 

6 No entity has opposed the orders that TEQSA seeks, nor contested the facts and contentions set out in TEQSA’s amended concise statement. ... 

10 As noted above, this application is based on a contravention of s 114B(2) which is relevantly in the following terms:

114B Prohibition on advertising academic cheating services 
(1) A person commits an offence if: 
(a) the person advertises, or publishes or broadcasts an advertisement for, an academic cheating service to students undertaking, with a higher education provider: 
(i) an Australian course of study; or 
(ii) an overseas course of study provided at Australian premises; and 
(b) either: 
(i) the person does so for a commercial purpose; or 
(ii) the academic cheating service has a commercial purpose. 
Penalty: 2 years imprisonment or 500 penalty units, or both. 
(2) A person contravenes this subsection if the person advertises, or publishes or broadcasts an advertisement for, an academic cheating service to students undertaking, with a higher education provider: 
(a) an Australian course of study; or 
(b) an overseas course of study provided at Australian premises. 
Civil penalty: 500 penalty units.

11 These provisions were introduced into the TEQSA Act by amendments in 2019. The Explanatory Memorandum to the Tertiary Education Quality and Standards Agency Amendment (Prohibiting Academic Cheating Services) Bill 2019 (Cth) (the Bill) introducing ss 114B and 127A, stated: Section 127A will be particularly important to reduce the visibility of, and ease of access to, overseas websites that provide or advertise cheating services. While prosecution of overseas website operators and content authors may be difficult, blocking of these sites by internet service providers...is an action that can be taken from within Australia and will go some way to reducing their availability and impact. 

12 This is the first application under s 127A. That said, the Copyright Act 1968 (Cth) contains s 115A, which is a similar provision to s 127A. Section 115A provides, in that context, for site-blocking injunctions against foreign online locations that have the primary purpose or effect of infringing copyright or facilitating copyright infringement. ... 

24 I am satisfied that, as TEQSA submitted, the persons operating the Online Locations are advertising an academic cheating service to students undertaking an Australian course of study with a higher education provider, in contravention of s 114B(2). It follows that the Online Locations are facilitating the operators’ contravention of (at least) s 114B(2) for the purpose of s 127A(1) of the TEQSA Act. 

25 The below statements made on the website of ‘assignmenthelp4you.com’ (and exhibited in screenshots taken by Ms Pritchard in April and May 2021) reveal, inter alia, that it variously advertised and published the following services: ‘Find Best Academic Writers for Hire! Get Classroom Assignment Writing Service Acquire world’s best online assignment writing service at cheaper rate! ... No plagiarism policy – Enhance your academic grade with scoring high in each assignment! ... Order An Assignment!!’. ‘Buy Premium Writing Services’, ‘we deliver assignments of high quality that always meets your requirement and urgent deadlines’, ‘...we deliver ....tailored academic papers...’, ‘expertise in composing dissertations for students...’, ‘Buy custom essay from professional writers’: ‘Buy Custom Essay - Acquire Top Quality Essay Writing Service!’, ‘Completely original and authentic papers ... Timely assignment delivery’, ‘If a student... cannot give much to complete their assignments, therefore, students hire a professional essay writer to ease their academic stress’, ‘The affordable essay help is served by our tutors as the own high experience and professional skills thus we have hired best writers who know how to tackle the difficult academic tasks of students’. ‘If you have a tight deadline and your assignment date is near, our experts can prepare your assignment on an urgent basis also’, ‘We also ensure you about the quality of assignment done by our academic writers. Assignment writers of Assignmenthelp4you have the potential to write lengthy or complicated assignments of any subject in the last minutes of submissions’. 

26 These statements show that the website ‘assignmenthelp4you.com’ advertised an academic cheating service as defined in s 5 of the TEQSA Act. 

27 This service was expressly advertised as being delivered to students in Australia, amongst others. So much is evident from the following examples (also taken by Ms Pritchard in April and May 2021), as set out by TEQSA in its submissions: (a) The Online Location available at assignmenthelp4you.com included a page titled ‘Acquire La Trobe University Australia Assignment Help By Hiring Academic Experts At Best Prices!’ It invites students to ‘come to us anytime and avail the benefits of our trustworthy La Trobe University Australian assignment help service’, and sets out a ‘List of Topics Covered by Courses of La Trobe University Australia’, including FIN1FOF Fundamentals of Finance and ACC1AMD Accounting for Management Decisions. (b) The Online Location available at assignmenthelp4you.com also included a similar page in relation to ‘Kings Own Institute’, which makes reference to courses including ACC301 Tax Law and FIN200 Corporate Financial Management. (c) Further, the Online Location available at assignmenthelp4you.com made reference to a large number of other Australian tertiary education institutions, including RMIT University, Charles Sturt University, University of Melbourne, University of Queensland, University of Technology Sydney, University of New South Wales, University of Western Australia, etc. 

28 Ms Pritchard’s evidence establishes that the subjects referred to in subparagraphs (a) and (b), recited above, are subjects offered by Australian higher education providers, and each of the institutions referred to in subparagraph (c) are on the National Register of Higher Education Providers, which TEQSA is required by ss 198(1) and 198(5) of the TEQSA Act to establish, maintain and make available for inspection on the internet. Indeed, of the 70 institutions listed on ‘assignmenthelp4you.com’, 54 were higher education providers for the purposes of the TEQSA Act.

21 November 2020

Plagiarism

In Medical Board of Australia v Soh (Review and Regulation) [2019] VCAT 1549 the Tribunal considered claims of forgery and plagiarism by a clinician. 

The Tribunal states 

The respondent in this matter, Dr Soh, is a medical practitioner. This matter has come before us as a disciplinary matter in relation to Dr Soh and there has been an Agreed Statement of Facts and Agreed Determination. At the outset of the hearing this morning we indicated that the panel did not need to hear submissions from either party as we have read the whole file, being the Tribunal Book, and we have read the agreed statement of facts and we have also read the determination. On that basis we are of the opinion that we are able to give an oral decision in this matter and we will do so in due course. 

The following facts are set out in the agreed statement of facts: 

Facts 

Dr Bryan Min Han Soh claimed authorship of an article which was published in Annals of Medicine and Surgery on 22 March 2017, titled ‘The use of super-selective mesenteric embolization as a first-line management of acute lower-gastrointestinal bleeding’. 

That article was plagiarised from the original article titled ‘Super-Selective Mesenteric Embolization Provides Effective Control of Lower GI Bleeding’ published in the Journal of Radiology, Research and Practice on 22 January 2017, authored by Toan Pham, Ian Faragher and others and undertaken by the Colorectal Unit, Department of Surgery, Western Health. 

Dr Pham, the primary author of the original article, sent a draft of the article to Dr Soh on 5 August 2014 for him to consider whether he could make a contribution. Dr Soh undertook proof reading, basic editing, and minor additions, but Dr Pham did not consider that Dr Soh had made a sufficient contribution to be listed as a contributing author for publication of the article. 

Dr Soh used the plagiarised article to gain entry to the Surgical Education and Training (SET) in General Surgery program of the Royal Australasian College of Surgeons (RACS). 

On 31 March 2017, Dr Soh submitted an application for SET to RACS supported by a curriculum vitae which contained the plagiarised article. 

On 16 March 2017, Dr Soh was advised by letter that his result in the RACS Surgical Science Generic Examination (Examination) was a ‘FAIL’. 

Dr Soh forged his RACS Surgical Science Generic Examination result letter dated 16 March 2017 by altering the ‘FAIL’ result to a ‘PASS’ result. Dr Soh altered the Examination result letter in order to gain entry to the SET in General Surgery program of the RACS. 

On 31 March 2017, Dr Soh submitted an application for SET to RACS containing the fraudulent Examination letter. Western Health undertook an investigation into the plagiarism by Dr Soh in 2017. 

On or about 28 June 2017, Dr Soh provided copies of emails between himself and the Journal of Surgery Case Reports which were not true copies of the originals and had been altered. The alterations made it appear that Mr Pham had been copied in to the email, and inserted text into the body of the email. 

On 3 November 2017, Western Health issued Dr Soh a formal warning to be placed on his file for 12 months, and did not renew his contract of employment which was to end in January 2018. 

On 30 January 2018, the offer by RACS to Dr Soh to undertake SET was withdrawn on the basis that the information in the application was not true.

The outcome was 

 Dr Soh admits and the Tribunal finds that he engaged in the conduct particularised in the Agreed Facts and the Allegations. 
 
Dr Soh admits and the Tribunal finds that his conduct breached principles of the Good Medical Practice: A Code of Conduct for Doctors in Australia (March 2014). 
 
The Tribunal finds that the conduct of Dr Soh as particularised in the Agreed Facts and Allegations 1 and 2 constitutes ‘professional misconduct’ within the meaning of paragraphs (a) and/or (b) of the definition of professional misconduct in the Health Practitioner Regulation National Law (Victoria) Act 2009. 
 
The respondent is reprimanded pursuant to s. 196(2)(a) of the Health Practitioner Regulation National Law (Victoria) Act 2009 (National Law). 
 
The respondent is fined $10,000 (to be paid on or before 19 November 2019) pursuant to s. 196(2)(c) of the National Law. 
 
The respondent’s registration is subject to conditions requiring him to undertake education on ethics, specifically academic integrity and research ethics in the terms outlined in Annexure “A” pursuant to s. 196(2)(b)(i) of the National Law.