'How Generative AI Turns Copyright Law on Its Head' by Mark A. Lemley
While courts are litigating many copyright issues involving generative AI, from who owns AI-generated works to the fair use of training to infringement by AI outputs, the most fundamental changes generative AI will bring to copyright law don't fit in any of those categories. The new model of creativity generative AI brings puts considerable strain on copyright’s two most fundamental legal doctrines: the idea-expression dichotomy and the substantial similarity test for infringement. Increasingly, creativity will be lodged in asking the right questions, not in creating the answers. Asking questions may sometimes be creative, but the AI does the bulk of the work that copyright traditionally exists to reward, and that work will not be protected. That inverts what copyright law now prizes. And because asking the questions will be the basis for copyrightability, similarity of expression in the answers will no longer be of much use in proving the fact of copying of the questions. That means we may need to throw out our test for infringement, or at least apply it in fundamentally different ways.
'AI Providers as Criminal Essay Mills? Large Language Models meet Contract Cheating Law' (UCL Faculty of Laws, 2023) by Noëlle Gaumann & Michael Veale
Academic integrity has been a constant issue for higher education, already heightened by the easy availability of essay mill and contract cheating services over the Internet. Jurisdictions across the world have passed a range of laws making it an offence to offer or advertise such services. Because of the nature of these services, which may require students to agree not to submit the work the services create or support, some of these offences have been drafted extremely broadly, without intent or knowledge requirements. The consequence is that a range of very wide offences now sit on statute books, covering the support of, or the partial or complete authoring of, assignments or work.
At the same time, AI systems have become part of public consciousness, particularly since the launch of ChatGPT by OpenAI. These large language models have quickly become part of workflows in many areas and are widely used by students. This has concerned higher education institutions, as such systems closely resemble essay mills in both their functioning and their results.
This paper attempts to unravel the intersection between essay mills, general purpose AI services, and emerging academic cheating law. We:
- Analyse, in context, academic cheating legislation from jurisdictions including England and Wales, Ireland, Australia, New Zealand, US states, and Austria, in light of how it applies to essay mills, AI-enhanced essay mills, and general purpose AI providers. (Chapter 2)
- Examine and document currently available services offered by new AI-enhanced essay mills, characterising them and examining how they present themselves both on their own websites and apps and in advertising on major social media platforms, including Instagram and TikTok. These include systems which write entire essays as well as those designed to reference AI-created work, provide outlines, and deliberately ‘humanise’ text so as to avoid nascent AI detectors. (Chapter 3)
- Outline the tensions between academic cheating legal regimes and both AI-enhanced essay mills and general purpose AI systems, which can allow students to cheat in much the same way. (Chapter 4)
- Provide recommendations to legislators and regulators about how to design regimes which effectively limit AI-powered contract cheating without, as in some current jurisdictions, unnecessarily bringing bona fide general purpose AI systems into scope. (Chapter 5)
We make some important findings. Firstly, there is already a significant market of AI-enhanced essay mills, many of which are developing features directly designed to frustrate education providers’ current attempts to detect and mitigate the academic integrity implications of AI-generated work.
Secondly, some jurisdictions have scoped their laws so widely that it is hard to see how ‘general purpose’ large language models such as OpenAI’s GPT-4 or Google’s Bard would not fall within their provisions, and thus be committing a criminal offence. This is particularly the case in England and Wales and in Australia.
Thirdly, the boundaries between assistance and cheating are being directly blurred by essay mills utilising AI tools. Given the nature of the academic cheating regimes, we suspect most enforcement will be private rather than through prosecutions. These regimes interact in important and hitherto unexplored ways with other legal regimes, such as the EU’s Digital Services Act, the UK’s proposed Online Safety Bill, and contractual governance mechanisms such as the terms of service of AI API providers and the licensing terms of open source models.
We conclude with recommendations for policymakers and HE providers. These include that:
- Jurisdictions should explore creating obligations for AI-as-a-service providers to enforce their own terms and conditions, similar to obligations placed on intermediaries under the Digital Services Act and the Online Safety Bill. This would create an avenue to cut off professionalised essay mills using these services when notified or investigated.
- Jurisdictions should name a regulator and provide them with investigation and enforcement powers. If they are unwilling to do this, giving formal ability to higher education institutions to refer matters to prosecuting authorities would be a start.
- Regulators should issue guidelines on the boundaries of essay mills in the context of AI, considering general purpose systems and systems that allow co-writing, outlining or research.
- Regulators, when established, should have a formal, international forum to create shared guidance, which they should have regard to when enforcing. Legislation should be amended to give formal powers of joint investigation and cooperation through this forum.
- Legislation should be amended to give general-purpose AI systems a safe harbour from criminal consideration as an essay mill, insofar as they meet a series of criteria designed to lower their risk in this regard. We propose watermarking, regulatory co-operation, and time-limited data retention and querying capacity based on queries provided by educational institutions, as mechanisms to consider.
- Higher education institutions should share funding to organise individuals to monitor advertising archives and other services for essay mills, report these to prosecutors in relevant jurisdictions, and seek rapid takedown of adverts for these services. Reporting should be wide, including to payment service providers, who may be able to stop these services profiting, and to AI service providers.