
19 December 2025

Productivity

The Productivity Commission report Harnessing data and digital technology released today states 

Data and digital technologies are the modern engines of economic growth. Australia needs to harness the consumer and productivity benefits of data and digital technology while managing and mitigating any downside risks. There is a role for government in setting the rules of the game to foster innovation and ensure that Australians reap the benefits of the data and digital opportunity. 

Emerging technologies like artificial intelligence (AI) could transform the global economy and speed up productivity growth. The Productivity Commission considers that multifactor productivity gains above 2.3%, and labour productivity growth of about 4.3%, are likely over the next decade, although there is considerable uncertainty. But poorly designed regulation could stifle the adoption and development of AI. Australian governments should take an outcomes-based approach to AI regulation – using our existing laws and regulatory structures to minimise harms (which the Australian Government has committed to do in its National AI Plan) and introducing technology-specific regulations only as a last resort.

Developing and training AI models is a global opportunity worth many billions of dollars. Currently, gaps in licensing markets – particularly for open web material – make AI training in Australia more difficult than in overseas jurisdictions. However, licensing markets are developing, and if courts overseas interpret copyright exceptions narrowly, Australia could become relatively more attractive for AI development. As such, the PC considers it premature to make changes to Australia’s copyright laws.

Data access and use fuels productivity growth: giving people and businesses better access to data that relates to them can stimulate competition and allow businesses to develop innovative products and services. A mature data sharing regime could add up to $10 billion to Australia’s GDP. The Australian Government should rightsize the Consumer Data Right (CDR) with the immediate goal of making it work better for businesses and consumers in the sectors where it already exists. In the longer term, making the accreditation model, technical standards and designation process less onerous will help make the CDR a more effective data access and sharing platform that supports a broader range of use cases. 

The benefits of data access and use can only be realised if Australians trust that data is handled safely and securely to protect their privacy. Some requirements in the Privacy Act constrain innovation without providing meaningful protection to individuals. And complying with the controls and processes baked into the Act can make consent and notification a ‘tick-box’ exercise where businesses comply with the letter of the law but not its spirit. The Australian Government should amend the Privacy Act to introduce an overarching outcomes-based privacy duty for regulated entities to deal with personal information in a manner that is fair and reasonable in the circumstances.

Financial reports provide essential information about a company’s financial performance, ensuring transparency and accountability while informing the decisions of investors, businesses and regulators. The Australian Government can further spark productivity by making digital financial reporting the default for publicly listed companies and other public interest entities while also removing the outdated requirement that reports be submitted in hard copy or PDF form. This would improve the efficiency of analysing reports, enhance integrity and risk detection, and could boost international capital market visibility for Australian companies.  

The Commission's recommendations are:

Artificial intelligence 

Recommendation 1.1 Productivity growth from AI should be enabled within existing legal foundations. 

Gap analyses of current rules need to be expanded and completed. Any regulatory responses to potential harms from using AI should be proportionate, risk-based, outcomes-based and technology-neutral where possible.

The Australian Government should complete, publish and act on ongoing reviews into the potential gaps in the legal framework posed by AI as soon as possible. 

Where relevant gap analyses have not begun, they should begin immediately. 

All reviews of the legal gaps posed by AI should consider:
• the uses of AI
• the additional risk of harm posed by AI (compared to the status quo) in a specific use case
• whether existing regulatory frameworks cover these risks, potentially with improved guidance and enforcement; and if not, how to modify existing regulatory frameworks to mitigate the additional risks.

Recommendation 1.2 AI-specific regulation should be a last resort

AI-specific regulations should only be considered as a last resort, and only for use cases of AI where:
• existing regulatory frameworks cannot be sufficiently adapted to handle AI-related harms
• technology-neutral regulations are not feasible or cannot adequately mitigate the risk of harm.
This includes whole-of-economy regulation such as the EU AI Act and the Australian Government’s previous proposal to mandate guardrails for AI in high-risk settings.

Copyright and AI 

Recommendation 2.1 A review of Australian copyright settings and the impact of AI 

The Australian Government should monitor the development of AI and its interaction with copyright holders over the next three years. It should monitor the following areas:
• licensing markets for open web materials
• the effect of AI on creative incomes generated by copyright royalties
• how overseas courts set limits to AI-related copyright exceptions, especially fair use.
If after three years the monitoring program shows that these issues have not resolved, the government could establish an Independent Review of Australian copyright settings and the impact of AI. The Review’s scope could include, but not be limited to, consideration of whether:
• copyright settings continue to be a barrier to the use of open material in AI training, and if so whether changes to copyright law could reduce these barriers
• copyright continues to be the appropriate vehicle to incentivise creation of new works and, if not, what alternatives could be pursued.

Data access 

Recommendation 3.1 Rightsize the Consumer Data Right 

The Australian Government should commit to reforms that will enable the Consumer Data Right (CDR) to better support data access for high-value uses while minimising compliance costs. 

In the short term, the government should continue to simplify the scheme by removing excessive restrictions and rules that are limiting its uptake and practical applications in the banking and energy sectors. To do this the government should:
• within the next two years, enable consumers to share data with third parties and simplify the onboarding process for businesses
• commit to more substantive changes to the scheme (in parallel with related legislative reforms), including aligning the CDR’s privacy safeguards with the Privacy Act and enabling access to selected government-held datasets through the scheme.

In addition to the above, the CDR framework should be significantly amended so that it has the flexibility to support a broader range of use cases beyond banking and energy, by making the accreditation model, technical standards and designation process less onerous. 

Privacy regulation 

Recommendation 4.1 An outcomes-based privacy duty embedded in the Privacy Act 

The Australian Government should amend the Privacy Act 1988 (Cth) to embed an outcomes-based approach that enables regulated entities to fulfil their privacy obligations by meeting criteria that are targeted at outcomes, rather than controls-based rules. 

This should be achieved by introducing an overarching privacy duty for regulated entities to deal with personal information in a manner that is fair and reasonable in the circumstances. 

The Privacy Act should be further amended to outline several non-exhaustive factors for consideration to guide decision makers in determining what is fair and reasonable – including proportionality, necessity and transparency. The existing Australian Privacy Principles should ultimately be phased out. 

Implementation of the duty should be supported through non-legislative means, including documentation such as regulatory guidance, sector-specific codes, templates and guidelines. 

The Office of the Australian Information Commissioner should be appropriately resourced to support the transition to an outcomes-based privacy duty.

Digital financial reporting 

Recommendation 5.1 Make digital financial reporting the default 

The Australian Government should make the necessary amendments to the Corporations Act 2001 (Cth) and the Corporations Regulations 2001 (Cth) to make digital annual and half-yearly financial reporting mandatory for disclosing entities. The requirement for financial reports to be submitted in hard copy or PDF form should be removed for these entities. The implementation of mandatory digital financial reporting should be phased, with the Treasury determining the appropriate timelines for this approach. 

Setting requirements for report preparation 

The existing International Financial Reporting Standards (Australia) (IFRS AU) taxonomy should be used for digital financial reporting. The Australian Securities and Investments Commission (ASIC) should continue to update the taxonomy annually. ASIC should be empowered to specify, from time to time, the format in which the reports must be prepared. At present, ASIC should specify inline eXtensible Business Reporting Language (iXBRL) as the required format. 
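For readers unfamiliar with the format, the sketch below illustrates (as an editorial aside, not material from the PC report) what an inline XBRL tag looks like and how software can extract the tagged figure. The concept name "Revenue" and the contextRef/unitRef identifiers are placeholders, not real IFRS AU taxonomy elements.

```python
# Illustrative sketch only: a minimal iXBRL fragment and one way software
# might extract the tagged figure. "Revenue", "FY2025" and "AUD" are
# hypothetical placeholders, not real IFRS AU taxonomy references.
import xml.etree.ElementTree as ET

IX = "http://www.xbrl.org/2013/inlineXBRL"  # iXBRL 1.1 namespace

# In iXBRL the human-readable report (XHTML) carries machine-readable
# tags inline, so the same document serves both people and software.
fragment = """
<body xmlns:ix="http://www.xbrl.org/2013/inlineXBRL">
  <p>Revenue for the year was
    <ix:nonFraction name="Revenue" contextRef="FY2025"
                    unitRef="AUD" decimals="0">1,200,000</ix:nonFraction>.
  </p>
</body>
"""

root = ET.fromstring(fragment)
fact = root.find(f".//{{{IX}}}nonFraction")          # locate the tagged fact
value = int(fact.text.replace(",", ""))              # strip presentation commas
print(fact.get("name"), value)                       # Revenue 1200000
```

Because every figure is tagged against a taxonomy concept in this way, reports can feed automated validation and analysis pipelines directly, which PDF filings cannot.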

Establishing infrastructure and procedures for report submission 

ASIC, together with market operators such as the Australian Securities Exchange, should determine where and how digital financial reports are submitted. The arrangements should aim to minimise preparers’ reporting burden while keeping reports accessible to report users. 

Supporting the provision of high-quality, accessible digital financial data 

ASIC should implement the measures necessary to ensure that digital financial reports contain high-quality data. ASIC could (among other actions):
• establish a data quality committee that would develop guidance and rules to improve data quality
• integrate automated validation checks into the submission process
• set guidelines around the use of taxonomy extensions and report format
• maintain feedback loops with stakeholders.
To enable report users to harness the benefits of digital financial data, digital financial reports should be publicly and freely available, and easily downloadable.

06 August 2025

(Un)Harnessing AI

The interim report by the Productivity Commission on Harnessing Data and Digital Technology - consistent with the national government's enthusiasm for AI - can be read as proposing a looser regulatory framework. 

The report states 

Data and digital technologies are the modern engines of economic growth. Emerging technologies like artificial intelligence (AI), which can extract useful insights from massive datasets in a fraction of a second, could transform the global economy and speed up productivity growth. 
 
Australia needs to harness the consumer and productivity benefits of data and digital technology while managing and mitigating the downside risks. There is a role for government in setting the rules of the game to foster innovation and ensure that Australians reap the benefits of the data and digital opportunity. 
 
The economic potential of AI is clear, and we are still in the early stages of its development and adoption. Early studies provide a broad range of estimates for the impact of AI on productivity. The Productivity Commission considers that multifactor productivity gains above 2.3% are likely over the next decade, though there is considerable uncertainty. This would translate into about 4.3% labour productivity growth over the same period. But poorly designed regulation could stifle the adoption and development of AI and limit its benefits. Australian governments should take an outcomes-based approach to AI regulation – one that uses our existing laws and regulatory structures to minimise harms and introduces technology-specific regulations as a last resort. 
 
Data access and use can fuel productivity growth: insights from data can help reduce costs, increase the quality of products and services and lead to the creation of entirely new products. But some requirements in the Privacy Act, the main piece of legislation for protecting privacy, are constraining innovation without providing meaningful protection to individuals. For example, complying with the controls and processes baked into the Act can make consent and notification a ‘tick-box’ exercise – where businesses comply with the letter of the law but not the spirit of it. The Australian Government should amend the Privacy Act to introduce an alternative compliance pathway that enables firms to fulfil their privacy obligations by meeting outcomes-based criteria. 
 
Data about individuals and businesses underpins growth and value in the digital economy. But often those same individuals and businesses cannot easily access and use this data themselves. Under the right conditions, giving people and businesses better access to data that relates to them can stimulate competition and allow businesses to develop innovative products and services. A mature data sharing regime could add up to $10 billion to Australia’s annual economic output. 
 
Experience shows that we need a flexible approach to facilitating data access across the economy, where obligations placed on data holders and the level of government involvement can match the needs and digital maturity of different sectors. New lower-cost and flexible regulatory pathways would help to guide expanded data access throughout the digital economy, focusing first on sectors where the gains can be significant and relatively easy to achieve. 
 
Financial reports provide essential information about a company’s financial performance, ensuring transparency and accountability while informing the decisions of investors, businesses and regulators. Government can further spark productivity by making digital financial reporting the default – that is, mandatory lodgement of financial reports in machine-readable form. At the same time, the Australian Government should remove the outdated requirement that financial reports be submitted in hard copy or PDF format. This change would increase the efficiency and accuracy with which information is extracted and analysed.

The draft recommendations are:

 Artificial intelligence 

Draft recommendation 1.1 Productivity growth from AI will be built on existing legal foundations. 

Gap analyses of current rules need to be expanded and completed. Australian governments play a key role in promoting investment in digital technology, including AI, by providing a stable regulatory environment. Any regulatory responses to potential harms from using AI must be proportionate, risk-based, outcomes-based and technology-neutral where possible. 

The Australian Government should continue, complete, publish and act on ongoing reviews into the potential gaps in the regulatory framework posed by AI as soon as possible. 

Where relevant gap analyses have not begun, they should begin immediately. 

All reviews of the regulatory gaps posed by AI should consider:
• the uses of AI
• the additional risk of harm posed by AI (compared to the status quo) in a specific use case
• whether existing regulatory frameworks cover these risks, potentially with improved guidance and enforcement; and if not, how to modify existing regulatory frameworks to mitigate the additional risks.

Draft recommendation 1.2 AI-specific regulation should be a last resort 

AI-specific regulations should only be considered as a last resort for the use cases of AI that meet two criteria:
• existing regulatory frameworks cannot be sufficiently adapted to handle the issue
• technology-neutral regulations are not feasible.

Draft recommendation 1.3 Pause steps to implement mandatory guardrails for high-risk AI 

The Australian Government should only apply the proposed ‘mandatory guardrails for high-risk AI’ in circumstances involving harms that cannot be mitigated by existing regulatory frameworks and where new technology-neutral regulation is not possible. Until the reviews of the gaps posed by AI to existing regulatory structures are completed, steps to mandate the guardrails should be paused. 

Data access 

Draft recommendation 2.1 Establish lower-cost and more flexible regulatory pathways to expand basic data access for individuals and businesses 

The Australian Government should support new pathways to allow individuals and businesses to access and share data that relates to them. These regulatory pathways will differ by sector, recognising that the benefits (and the implementation costs) of data access and sharing vary across sectors. This could include approaches such as:
• industry-led data access codes that support basic use cases by enabling consumers to export relatively non-sensitive data on a periodic (snapshot) basis
• standardised data transfers, with government helping to formalise minimum technical standards to support use cases requiring high-frequency data transfers and interoperability.

These pathways should be developed alongside efforts that are already underway to improve the Consumer Data Right (which will continue to provide for use cases that warrant its additional safeguards and technical infrastructure) and the My Health Record system. 

The new pathways should begin in sectors where better data access could generate large benefits for relatively low cost and there is clear value to consumers. Potential examples include:
• enabling farmers to combine real-time data feeds from their machinery and equipment to optimise their operations and easily switch between different manufacturers
• giving tenants on-demand access to their rental ledgers, which they can share to prove on-time payments to new landlords or lenders
• allowing retail loyalty card holders to export an itemised copy of their purchase history to budgeting and price comparison tools that can analyse spending and suggest cheaper alternatives.
The scope of the data access pathways should expand over time, based on industry and consumer consultation, where new technology, overseas experience or domestic developments show that there are clear net benefits to Australia.

Privacy regulation 

Draft recommendation 3.1 An alternative compliance pathway for privacy 

The Australian Government should amend the Privacy Act 1988 (Cth) to provide an alternative compliance pathway that enables regulated entities to fulfil their privacy obligations by meeting criteria that are targeted at outcomes, rather than controls-based rules. 

Draft recommendation 3.2 Do not implement a right to erasure 

The Australian Government should not amend the Privacy Act 1988 (Cth) to introduce a ‘right to erasure’, as this would impose a high compliance burden on regulated entities, with uncertain privacy benefits for individuals. 

Digital financial reporting 

Draft recommendation 4.1 Make digital financial reporting the default 

The Australian Government should make the necessary amendments to the Corporations Act 2001 (Cth) and the Corporations Regulations 2001 (Cth) to make digital financial reporting mandatory for disclosing entities. The requirement for financial reports to be submitted in hard copy or PDF format should also be removed for those entities.

It goes on

AI-specific regulation should be a last resort 

AI-specific regulations should only be considered as a last resort for the use cases of AI that meet two criteria:
• existing regulatory frameworks cannot be sufficiently adapted to handle the issue
• technology-neutral regulations are not feasible.

Economy-wide efforts to regulate AI should be paused until all gap analyses are complete and implemented 

In August 2024 the Australian Government Department of Industry, Science and Resources released a set of 10 voluntary AI safety standards, or guardrails, based on risk-management standards such as ISO/IEC 42001:2023 (Information technology – Artificial intelligence – Management system) and the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework (AI RMF 1.0) (DISR 2024b, p. 5). The guardrails cover aspects of AI development and application and require several risk-management processes, including testing of models, developing a risk plan and providing transparency to users of AI tools and to owners of copyrighted materials used in training models. The guardrails outline reasonable risk-management practices for many organisations. In this way they have been an important and useful step in AI governance in Australia, equipping businesses with voluntary, structured and internationally recognised standards to support and guide their adoption of AI. 

The guidelines are particularly useful for smaller businesses without comprehensive risk-management procedures in place. Indeed, submissions from participants to this inquiry (and submissions to the consultation process on the mandatory guardrails, discussed below) showed that many larger organisations have implemented risk-management protocols that are similar in spirit to these guardrails. 

Mandating the guardrails is not necessary 

In September 2024 the Australian Government released a proposals paper for a set of mandatory guardrails for AI in high-risk settings (DISR 2024a). The proposal is to turn the voluntary guidelines into mandatory regulations for AI development and application. 

The PC is concerned about two aspects of the guardrails being made mandatory. First, the proposals paper argued that the mandatory guardrails would apply to all high-risk uses of AI – regardless of whether risks can be better mitigated through outcomes-based regulations. Second, the proposals paper argued that General Purpose AIs – which would include many generative AI tools – above a certain threshold of capability be classified as high risk by default. The proposals paper did not argue for any particular measure or threshold of technical capability, though it could include aspects like FLOPS (DISR 2024a, p. 18). It was argued that these models can perform so many functions that their risks cannot be adequately foreseen. This could result in the guardrails being applied to common generative AI tools such as ChatGPT, Claude and Grok, depending on what is chosen as the threshold and measure of technical capability. 

In general, high risk uses of AI can be split into three broad types. 

1. High-risk uses that can be adequately controlled by existing regulatory frameworks (potentially with some modification) – this could include issues with privacy law (which the PC thinks can be resolved within existing frameworks, with modification to make the regulations more outcomes-focused; chapter 2). 

2. High-risk uses that can be adequately controlled with new technology-neutral regulations – this could include (non-consensual) sexually explicit deepfake images, which the Australian Government has recently banned (through the Criminal Code Amendment (Deepfake Sexual Material) Act 2024). 

3. High-risk use cases that require technology-specific regulations – these would be use cases identified in the various gap analyses as having no technology-neutral solution. 

The PC’s concern with the guardrails is that they would not distinguish between these categories. This, in our view, raises significant issues, as the first two cases can already, by definition, be dealt with adequately by other regulatory mechanisms. It might also result in most commercial chatbots being classified as high risk regardless of the efficacy of existing regulations. The result of this approach is that many AI models would need to comply with two different sets of regulation to achieve the same outcome. 

For example, the TGA’s review noted that, with respect to medical devices, all ten proposed guardrails had close parallels in existing regulations (2025, pp. 27–30). That is, it is likely that firms providing AI-based medical devices in Australia would already be fulfilling the objectives of the guardrails if they are operating legally under the TGA’s existing regulations. But if the guardrails are mandated, then the provider of the medical device would need to demonstrate compliance with both the TGA regulations and the guardrails, raising the regulatory burden with no change in outcomes. 

The mandating of the guardrails is only appropriate in circumstances where existing regulatory frameworks or new technology-neutral regulations are not able to adequately mitigate the risk of harm. Once the Australian Government has completed and acted on all gap analyses of its existing policy framework, it will know which regulatory holes cannot be plugged by existing frameworks or new technology-neutral legislation. Consideration of economy-wide efforts to mandate the guardrails should be paused until these gap analyses are complete. 

Pause steps to implement mandatory guardrails for high-risk AI 

The Australian Government should only apply the proposed ‘mandatory guardrails for high-risk AI’ in circumstances involving harms that cannot be mitigated by existing regulatory frameworks and where new technology-neutral regulation is not possible. Until the reviews of the gaps posed by AI to existing regulatory structures are completed, steps to mandate the guardrails should be paused.

In dealing with copyright, the PC states 

Copyright violation is an example of a harm that AI could exacerbate by changing economic incentives. Previous waves of innovation in information and communication technology have made the sharing of copyrighted materials much cheaper and easier, creating challenges for copyright. In most instances, copyright law could be adapted (or better enforced) to mitigate the harm. This made it unnecessary to directly regulate technology by, for example, regulating computer software or hardware to prevent copyright breach. It is the PC’s view that the copyright issues posed by AI can similarly be resolved through adapting existing copyright law frameworks rather than introducing AI-specific regulation. 

What is copyright? 

Copyright law prohibits a person from using original works without the permission of the copyright holder – usually the author (AGD 2022a). The types of works that are protected include text, artistic works, music, computer code, sound recordings and films (ACC 2024a). It does not protect the underlying ideas or information (AGD 2022a). In some cases, data and datasets may be protected, ‘largely depend[ing] on how the data has been arranged, structured or presented’ (Allens 2020, p. 3).  

The rise of AI technology has led to new challenges for copyright law. 

The emergence of AI also raises some additional, principle based questions about how the copyright framework (as part of Australia’s broader intellectual property regime) works to benefit society by encouraging creation and innovation, rewarding intellectual effort and achievement, and supporting the dissemination of knowledge and ideas. (AGD 2023c, p. 12) 

In 2023, the Attorney-General established the Copyright and Artificial Intelligence Reference Group, which acts as ‘a standing mechanism to engage with stakeholders across a wide range of sectors on issues at the intersection of AI and copyright’ (AGD 2023a). Since then, the group has met on several occasions to discuss issues relating to AI technology and copyright law (AGD 2023a). 

This section explores one issue particularly relevant to productivity: whether current Australian copyright law is a barrier to building and training AI models. There are other legal issues relating to the outputs of AI models that are less relevant to productivity – such as whether those outputs attract copyright protection and what happens when AI outputs infringe a third party’s copyright (Evans et al. 2024). 

Training AI models 

Building and refining AI models requires the use of large amounts of data. 

The term ‘AI model training’ refers to this process: feeding the algorithm data, examining the results, and tweaking the model output to increase accuracy and efficacy. To do this, algorithms need massive amounts of data that capture the full range of incoming data. (Chen 2023) 

The datasets used to train AI models often contain digital copies of media such as web pages, books, videos, images and music. These media are often the subject of copyright protection, which means that their use to train AI models requires permission from the copyright holder. 

Permission is required because AI models must ‘copy’ the protected material at least temporarily to undertake the training process. The use of copyrighted materials to train an AI model is a separate issue to the copyright status of anything the model produces. As discussed above, AI outputs may have their own copyright challenges. 

A survey of the Copyright and Artificial Intelligence Reference Group indicated that, in practice, a range of copyrighted materials are used to train AI models – including literary and artistic works, sound recordings, films and musical works (AGD 2024, p. 12). 

There is evidence to suggest that large AI models are already being trained on copyrighted materials without consent or compensation (APA and ASA, qr. 39, pp. 3–4; APDG, qr. 6, p. 4; APRA AMCOS, qr. 58, p. 4; ARIA and PPCA, qr. 65, p. 5; Creative Australia, qr. 62, p. 3). It should be noted that Australian copyright law only applies to copying that occurs within Australia’s boundaries – in other words, the training of AI models overseas is subject to the relevant laws of the jurisdiction in which it occurs. Lawsuits have been brought against technology companies – including Meta, Microsoft and OpenAI – in some overseas jurisdictions over the unlicensed use of copyrighted works to train AI models (Ryan 2023). 

There are concerns that the Australian copyright regime is not keeping pace with the rise of AI technology – whether because it does not adequately facilitate the use of copyrighted works or because AI developers can too easily sidestep existing licensing and enforcement mechanisms. There are several policy options, including:
• no policy change – that is, copyright owners would continue to enforce their rights under the existing copyright framework, including through the court system
• policy measures to better facilitate the licensing of copyrighted materials, such as through collecting societies
• amending the Copyright Act to include a fair dealing exception that would cover text and data mining.

The PC is seeking feedback on what reforms are needed to bring the copyright regime up to date. 

Is there a need to bolster the licensing or enforcement regime? 

Several participants expressed concern about the unauthorised use of copyrighted materials to train AI models. For example, Creative Australia said: 

Much of the data has been used reportedly without consent from the original creator, and without acknowledgement or remuneration. The global nature of the technology industry has made it difficult for the owners of creative work to enforce their intellectual property rights and be remunerated for the use of their work. (qr. 62, p. 3) 

There are two points at which concerns of this type could be addressed. First, they could be addressed before the fact, through copyright licensing. Licensing is the key mechanism through which a copyright holder grants permission for others to use their work and often involves some form of payment. In Australia, licensing is often done through collecting societies, which are organisations that represent copyright holders. This can streamline the licensing process, because the collecting society can negotiate licences on behalf of multiple copyright holders at once. As the Copyright Agency said: 

We can help these sectors use third party content for AI related activities. Our annual licence for businesses now allows staff to use news media content in prompts for AI tools (e.g. for summarisation or analysis). We are extending this to other third party content later in the year. We are also in discussions with our members and licensees about other collective licensing solutions, including the use of datasets for AI related activities. (qr. 7, pp. 2–3) 

The issue of unauthorised use of copyrighted materials could also be addressed after the fact, through enforcement. This encompasses a range of possible measures, including take-down notices, alternative dispute resolution and court action. In 2022–23, the Attorney-General’s Department undertook a Copyright Enforcement Review to assess ‘whether existing copyright enforcement mechanisms remain effective and proportionate’ (AGD 2022b). That review found that additional regulatory measures are needed to achieve an effective copyright enforcement regime, and work is currently underway to identify options for:

  • reducing barriers for Australians to use the legal system to enforce copyright, including examining simple options to resolve ‘small value’ copyright infringements

  • improving understanding and awareness about copyright. (AGD 2023b)

In light of this ongoing work, the issue of copyright enforcement is not in scope for this inquiry. 

Is there a case for a text and data mining exception? 

Another option is to expand the existing ‘fair dealing’ regime, which provides certain exceptions to the requirement to obtain permission from the copyright holder (box 1.6). Currently, there is no exception that covers AI model training per se (The University of Notre Dame Australia 2024). However, depending on the case, a different exception could apply. For example, AI models built as part of research could fall within the scope of the ‘research or study’ exception. 

Box 1.6 – What are fair dealing exceptions? 

Fair dealing exceptions allow for the use of copyright material without permission from the copyright owner, so long as it is used for one of several specified purposes and is considered fair. What are the specified purposes? The Copyright Act specifies several purposes where the exception may apply. These include: research or study, criticism or review, parody or satire, reporting news, and enabling a person with a disability to access the material (Copyright Act 1968 (Cth), Part III, Div 3; Part VIA, Div 2). 

What counts as ‘fair’? 

Fairness is determined with regard to all the relevant circumstances – that is, it depends on the facts. Some purposes have specified criteria that must be taken into account. For example, where the use is for research or study, the following considerations apply:

  • the purpose and character of the dealing

  • the nature of the work

  • whether the work can be obtained within a reasonable time at an ordinary commercial price

  • the effect of the dealing upon the potential market for, or value of, the work

  • the amount and substantiality of the work that was copied (Copyright Act 1968 (Cth), s 40(2)).

The ‘fair use’ doctrine – an alternative approach 

Some overseas jurisdictions (notably the United States) take a ‘fair use’ approach to copyright exceptions. Under this doctrine, any type of use can be considered non-infringing, provided that it is considered ‘fair’ – in other words, the use need not fall within one of several defined categories. Several reviews have recommended the adoption of the fair use doctrine in Australia (including by the Australian Law Reform Commission and the Productivity Commission), but this has not occurred. Source: ACC (2024b); ALRC (2013); Copyright Act 1968 (Cth); PC (2021, p. 187).

In its report on Copyright and the Digital Economy, the Australian Law Reform Commission recommended amendments to enable text and data mining by adopting a fair use approach to copyright exceptions (box 1.6) – or, failing that, through a new fair dealing exception. It explained: There has been growing recognition that data and text mining should not be infringement because it is a ‘non-expressive’ use. Non-expressive use leans on the fundamental principle that copyright law protects the expression of ideas and information and not the information or data itself (2013, p. 261).

The Australian Government has since indicated that it is not inclined to introduce a fair use regime (Australian Government 2017, p. 7). Therefore, the PC is considering whether there is a case for a new fair dealing exception that explicitly covers text and data mining (a ‘TDM exception’). TDM exceptions exist in several comparable overseas jurisdictions (box 1.7). 

Such an exception would cover not just AI model training, but all forms of analytical techniques that use machine read material to identify patterns, trends and other useful information. For example, the use of text and data mining techniques is common in research sectors to produce large datasets that can be interrogated through statistical analysis. 

Box 1.7 – Text and data mining around the world 

European Union: There are two text and data mining (TDM) exceptions embedded in the Digital Single Market Directive (EU 2019/790) – one for scientific research (article 3) and another for general use (article 4). The Artificial Intelligence Act (Regulation (EU) 2024/1689) specifically characterises the training of AI models as involving ‘text and data mining techniques’ (recital 105) and refers to the TDM exception (article 53). The recent case of Kneschke v. LAION [2024] endorsed the view that the TDM exception extends to cover AI training (Goldstein et al. 2024a, 2024b). 

United States: It has been argued that training AI models falls within the scope of the fair use doctrine (Khan 2024; Klosek and Blumenthal 2024). However, the case Thomson Reuters v. Ross [2023] 694 F.Supp.3d 467 highlights that whether AI training is covered by the doctrine depends on whether the fair use factors are met in the circumstances (ReedSmith 2025). 

United Kingdom: There is a TDM exception that applies to non-commercial research (UK Intellectual Property Office 2014). There have been proposals to expand the exception to cover all uses, though these are still under consideration (Pinsent Masons 2023; UK Government 2024).

Japan: The Japanese Copyright Act includes broad statutory exemptions for TDM (article 30-4(ii)), provided the work is used for ‘non-enjoyment’ purposes (Senftleben 2022, p. 1494). In essence, the requirement for ‘non-enjoyment’ distinguishes between whether the work is being consumed as a work or as data, and is broadly equivalent to the distinction between expressive and non-expressive uses.

Singapore: The Singaporean Copyright Act includes a specific TDM exception, as well as a broader fair use exception (Ng-Loy 2024). 

To assist its consideration of this option, the PC is seeking feedback about the likely effects of a TDM exception on the AI market, the creative sector and productivity in general – particularly in light of the following considerations.

  • At present, large AI models (including generative AI and large language models) are generally available to be used in Australia. The introduction (or not) of a TDM exception is unlikely to affect whether AI models continue to be available and used in Australia (PC 2024c, p. 13).

  • At present, large AI models are trained overseas, not in Australia. It is unclear whether the introduction of a TDM exception would change this trend.

  • As discussed above, large AI models are already being trained on unlicensed copyrighted materials.

  • A TDM exception could make a difference to whether smaller, low-compute models (such as task-specific models) can be built and trained in Australia, such as by Australian research institutions, medical technology firms, and research service providers.

It should also be noted that a TDM exception would not be a ‘blank cheque’ for all copyrighted materials to be used as inputs into all AI models. As discussed in box 1.6, the use must also be considered ‘fair’ in the circumstances – this requirement would act as a check on copyrighted works being used unfairly, preserving the integrity of the copyright holder’s legal and commercial interests in the work. There may be a need for legislative criteria or regulatory guidance about what types of uses are likely to be considered fair.

Information request 1.1 

The PC is seeking feedback on the issue of copyrighted materials being used to train AI models.

  • Are reforms to the copyright regime (including licensing arrangements) required? If so, what are they and why?

The PC is also seeking feedback on the proposal to amend the Copyright Act 1968 (Cth) to include a fair dealing exception for text and data mining.

  • How would an exception covering text and data mining affect the development and use of AI in Australia? What are the costs, benefits and risks of a text and data mining exception likely to be?

  • How should the exception be implemented in the Copyright Act – for example, should it be through a broad text and data mining exception or one that covers non-commercial uses only?

  • Is there a need for legislative criteria or regulatory guidance to help provide clarity about what types of uses are fair?

14 May 2025

Authors

'Authoring While Dead' (Stanford Public Law Working Paper) by Mark A Lemley and Oliver Wendell Holmes, Jr comments 

Bob Marley died in 1981. But he wrote a song in 2017 with The Killers. At least, that’s what the song credits say. Why? Because The Killers’ song included the two words “redemption songs,” the title of a classic Bob Marley hit. Rather than fight, The Killers agreed to add Marley as a co-author. 

There is an increasing trend in the music industry toward resolving disputes over music copyright by granting co-authorship (or “interpolation”) credit to the claimant, no matter how weak the claim (as in Marley’s case), and even if they are dead (as in Marley’s case). Bob Marley and the Killers are not alone. Olivia Rodrigo agreed to add Paramore as a co-author despite the absence of any plausible copyright claim. Sam Smith did the same with Tom Petty. So did Beyonce. They are all identified as co-authors of the songs they (generally falsely) alleged were infringing. 

But they aren’t and can’t be authors under copyright law. Even if the copyright cases have merit – and they generally don’t – that would make the defendant an infringer, but it wouldn’t make the plaintiff a joint author. Instead, the deal for co-authorship credit appears to be a form of trolling. Under most music contracts it gives the complaining party an undeserved share of the royalties. If that was all it did, we might put up with it. After all, the parties agreed to it for whatever reason. But permitting retroactive co-authorship claims does harm to others and to the system as a whole. It creates problems for later understanding of authorship, for termination rights, and is a form of rights accretion that Jim Gibson warns us about. There is reason to worry that it will lead to a statutory interpolation right – a right to be credited for, get money for, and eventually to control songs that don’t infringe in the first place.

The subsequent version in Georgia Law Review gives lead authorship to the late lamented Holmes. 

14 July 2024

Social Media ToS overreach

'Social networking sites' licensing terms: A cause of worry for users?' by Phalguni Mahapatra and Anindya Sircar in (2024) The Journal of World Intellectual Property comments 

Terms of service (ToS) for social networking sites (SNS) like Instagram, Meta, X, and so on, is a clickwrap agreement that establishes a legal relationship between platform owners and users, yet it is probably the most overlooked legal agreement. The users of these sites often overlook the ToS while registering themselves on these sites, and even if users (especially those with no legal background) attempt to read them, it is difficult for them to understand because of the legal jargon. As a result, they end up signing away legal rights about which they are unaware. According to these sites' ToS, though the ownership of the user-generated content is bestowed upon the user, the users grant to these sites “a non-exclusive, royalty-free, transferrable, sub-licensable, worldwide license” and this license can be used “to host, use, distribute, modify, run, copy, publicly perform or display, translate and create derivative works of user's content.” These sites even bestow on themselves the right to modify the content, which poses challenges to the right-holders' moral rights. The fact that these platforms can sublicense the user's work creates complexities when a user intends to grant an exclusive license of his work. There is no clarity in the language of the terms about the manner of exploiting the user's content, or about what happens if the sublicensing is for a wrongful purpose. The problem magnifies as there is neither explicit indication about the duration of the license nor about the territorial extent. This would suggest that these sites can get a perpetual license on the content of the users. These SNS have consumers spread worldwide, but in their ToS they have forum selection clauses that list out the courts and districts in California. This means users will be discouraged from bringing a copyright suit due to the lack of an option to file a claim in their home country. The US case Agence France Presse (AFP) v. Morel helps us conclude twofold: mainly, there is hope that SNS will not use ToS to shield themselves from further use of the user's work, and it strengthens the idea that these platforms may choose to license to their partners. Further, in 2018, the Paris Tribunal declared most clauses of Twitter “null and void” due to the nature of the license and also because it was not in compliance with the French Intellectual Property Code. This gives a faint hope for a positive shift in the legal treatment of user-generated content. Though these sites claim to retain the sublicensing right to run their sites smoothly, the licensing is very broad and carries the possibility of many usages of the content, that too without paying compensation to the user. Therefore, this paper aims to highlight and give insight into the unfair licensing terms of the most often used social networking sites and their implications.

15 December 2023

Games, copyright and metaverses

The 2023 CREATe working paper 'Gaming without Frontiers: Copyright and Competition in the Changing Video Game Sector' by Aysel Gizem Yaşar, Amy Thomas, Kenny Barr and Magali Eben states

This working paper examines aspects of the contemporary video games sector at a time when incumbent and new-entrant market participants vie for primacy in the games industry. In this setting, ownership configurations and business models of key actors are in a state of flux. As consumers increasingly access culture ‘on-demand’ by way of cloud technologies, myriad opportunities and challenges emerge, not only for the video games sector, but for the wider cultural industries and society as a whole. It is in this very dynamic industrial landscape that the working paper is located. 

The paper marks a starting point for collaborative research on the games industry, drawing on the range of expertise within CREATe to provide a more holistic view of innovation, creativity, and power dynamics in games. The authors draw on different research specialisms and interests including: digitalisation of the cultural industries; copyright and notions of user creativity; digital services and product market definition; and competition law, innovation and the role of technology. The paper draws on each of these specialisms in turn. It starts by providing the industrial context of the discussion and analysis. This feeds into three analytical sections examining: user creativity and intellectual property in video games; the implications of industry concentration for different articulations of creativity; and finally, an exploration of the potential ramifications of developments in the games sector for innovation at the dawn of the metaverse era. 

In doing so, this work sets the scene for future research, which brings together competition law, IP law, and cultural policy perspectives. With questions formulated throughout the paper, the authors embark on a project to review the changing landscape of gaming and its implications for creativity, innovation, access and integration. ... Transformations do not occur merely within the more traditional confines of ‘games’. As the gaming industry goes through a cloud transformation, it is also providing the basis for the development of something bigger: the metaverse. While virtual environments known as metaverse are still in their infancy, their connection to the gaming sector is clear. Popular games and gaming platforms like Minecraft, Fortnite and Roblox have been labelled ‘proto metaverses’. The immersive experience of metaverse lends itself well to gaming. At least some of the M&A trend in the gaming sector seems motivated by metaverse development. Established players in the gaming industry, like Microsoft and Epic Games, are taking shots at different aspects of metaverse. As such, metaverse development is an integral part of this project. 

Despite this close connection however, the metaverse goes beyond gaming, and metaverse projects encompass many aspects of human lives, from socialising to work, fitness, and even psychotherapy. Metaverse players are emerging outside of the gaming sector. It also has the potential to foster user creativity far beyond what video games have allowed so far and open up different business models. The authors of this paper are interested in the historical and contemporary connections between gaming and the metaverse. Some of the concentration trends and user creativity in the metaverse run parallel to the research focus in the gaming sector, setting the scene for an investigation into corresponding regulatory regimes. 

This paper is not intended to provide clear answers on what the changes in the games industry mean for IP or competition law. Rather, it aims to bring together a range of perspectives, identifying central research questions which can best be answered through a multi-perspective lens. The authors of this paper draw on different research specialisms and interests including: digitalisation of the cultural industries, copyright and notions of user creativity, digital services and product market definition, and competition law, innovation and the role of technology. The paper draws on each of these specialisms in turn. It starts by providing the industrial context of the discussion and analysis. This feeds into the three analytical sections examining: user creativity and intellectual property in video games; the implications of industry concentration for different articulations of creativity; and finally, an exploration of the implications of developments in the games sector for innovation at the dawn of the vaunted metaverse era. The concluding section synthesises each of these component parts in the closing discussion. It identifies the questions which will underpin the future research of the CREATe games project.

20 September 2023

Generative AI

'Talkin’ ‘Bout AI Generation: Copyright and the Generative AI Supply Chain' by Katherine Lee, A. Feder Cooper and James Grimmelmann comments 

"Does generative AI infringe copyright?" is an urgent question. It is also a difficult question, for two reasons. First, “generative AI” is not just one product from one company. It is a catch-all name for a massive ecosystem of loosely related technologies, including conversational text chatbots like ChatGPT, image generators like Midjourney and DALL·E, coding assistants like GitHub Copilot, and systems that compose music and create videos. Generative-AI models have different technical architectures and are trained on different kinds and sources of data using different algorithms. Some take months and cost millions of dollars to train; others can be spun up in a weekend. These models are made accessible to users in very different ways. Some are offered through paid online services; others are distributed on an open-source model that lets anyone download and modify them. These systems behave differently and raise different legal issues. 

The second problem is that copyright law is notoriously complicated, and generative-AI systems manage to touch on a great many corners of it. They raise issues of authorship, similarity, direct and indirect liability, fair use, and licensing, among much else. These issues cannot be analyzed in isolation, because there are connections everywhere. Whether the output of a generative-AI system is fair use can depend on how its training datasets were assembled. Whether the creator of a generative-AI system is secondarily liable can depend on the prompts that its users supply. 

In this Article, we aim to bring order to the chaos. To do so, we introduce the "generative-AI supply chain": an interconnected set of stages that transform training data (millions of pictures of cats) into generations (a new, potentially never-seen-before picture of a cat that has never existed). Breaking down generative AI into these constituent stages reveals all of the places at which companies and users make choices that have copyright consequences. It enables us to trace the effects of upstream technical designs on downstream uses, and to assess who in these complicated sociotechnical systems bears responsibility for infringement when it happens. Because we engage so closely with the technology of generative AI, we are able to shed more light on the copyright questions. We do not give definitive answers as to who should and should not be held liable. Instead, we identify the key decisions that courts will need to make as they grapple with these issues, and point out the consequences that would likely flow from different liability regimes.

16 September 2023

Performers Rights

'AI and Performers’ Rights in Historical Perspective' (CREATe Working Paper 2023/9) by Elena Cooper comments 

This article uses legal history as a vantage point for reflecting on the current moment in the debate about AI and performers’ rights. Current debates often refer to ‘creators’ and/or ‘copyright’ as generic categories denoting both performers and authors. Legal history, I argue, sharpens the critical lens on current debate by drawing our attention to what today remains different about the legal rules protecting performers. That difference, at present, leaves performers less well placed to deal with the challenge of AI than authors and also goes to the heart of Equity’s current reform proposals. That difference should now be debated. 

Last year, I published an Opinion in the E.I.P.R. - Copyright History as a Critical Lens – a follow- up to Interrogating Copyright History (an E.I.P.R. Opinion that I co-authored with Ronan Deazley in 2016). I argued that the study of law in past times can be a powerful critical lens on how we see the legal present. Whether the past reveals a story of continuity or change, there is, I argued: value in looking backwards, before we look forwards: an historical perspective helps us to recover the contingency of the present, to imagine things differently and to look to the future with a more critical eye. 

My comments should be read in the light of a wider ‘historical turn’ in critical thinking about intellectual property law in the last two decades, following the publication in 1999 of The Making of Modern Intellectual Property Law by Brad Sherman and Lionel Bently. Sherman and Bently’s historical work showed that the categories of intellectual property that we know today, are not timeless, natural or inevitable; ‘theory... played at best an ex post facto role’ in later legitimating the legal categories that emerged. In so doing, Sherman and Bently opened the way for legal historians to probe critical questions about the legal present and future of intellectual property law. 

In this article, I provide an example of looking backwards before we look forwards. I demonstrate the critical value of an historical long-lens on a discrete strand of current legal debates raised by cutting-edge technology today: a facet of the impact of Artificial Intelligence technology (or ‘AI’) on performers, particularly actors, and the search today for an appropriate legal response. Equity, the UK actors’ trade union launched a campaign last year, Stop AI Stealing the Show, seeking the legislative reform of statutory performers’ rights, specifically the increase in the scope of legal protection. However, the UK Government has, so far, resisted reform. The UK Government’s position on performers’ rights is indicated in the closing paragraphs of its response to the UK Intellectual Property Office’s consultation AI and IP: Copyright and Patents. Referring to proposals for ‘an expansion of the scope of performers’ rights in the Copyright, Designs and Patents Act 1988’, the UK Government comments as follows: at this stage, the impacts of AI technologies on performers remain unclear. It is also unclear whether and how existing law (both in the IP framework and beyond it) is insufficient to address any issues. If intervention is necessary, the IP system may not be the best vehicle for this. We will keep these issues under review from an IP perspective. 

How might an historical perspective enable us critically to reflect on the present moment in the debate of the future of performers’ rights? 

Technology in the 21st century: Aspects of the challenge of AI today 

Before I look to the past, I start with today. AI is technology that uses machine-based learning to perform functions that were previously the province of usually slower, more costly and/or more labour-intensive processes undertaken by human beings. AI has a wide field of application from medical diagnosis, robotics, to the management of insurance-risk. However, one set of questions for intellectual property law today relates to AI’s impact on authors and performers working in a variety of sectors. 

The creative potential of AI technology has been embraced by some visual artists and AI has been hailed elsewhere as democratising creativity, by providing everyone with the tools to create cultural works.  AI technology also offers new possibilities for enhancing human performances, for instance, in the creation of video games, in turn opening opportunities for actors. Yet, there are also reports that AI is, in certain contexts, increasingly replacing human authors and performers, and putting them out of work. A good example of this is the audio performance sector, where AI generated voices can now be used to undertake audio work (e.g. audio books) or provide voice-overs at negligible cost, in circumstances where a professional actor would previously have been employed. Redundancy for actors caused by the introduction of AI – the replacement of humans by machines – is reported to be now increasingly commonplace. 

While AI can replace human authors and performers, it also frequently utilises their pre-existing work. AI learns by tracking patterns in an existing body of material: a ‘data-set’. In the case of visual images produced by AI, the data-set may comprise large quantities of copyright- protected visual images scraped from the internet without copyright clearance.  For AI produced voices, the data-set may be a collection of recordings of human voices, which may have been recorded by actors for another unrelated purpose (e.g. a casting or audition) and included in the data-set without consent. Alternatively, the data-set may comprise recordings that were licensed to a third party for broad purposes, e.g. ‘for research’, yet particularly where the licence pre-dates AI technology, AI uses were not specifically contemplated by the parties at the time the contract was concluded. 

In addition to the AI learning process, the material generated by AI – for instance images generated in response to a text prompt – may involve ‘copying’ through a new means: computer synthetisation.  In relation to performance, prior to AI technology, the circumstances in which a performance could be copied, without direct taking from a recording itself, were more limited and confined to human imitation such as a sound-a-like imitating an actor’s voice, as in the passing off case of Sim v Heinz (discussed further below).  By contrast, AI technology today opens a future in which a performance, or aspects of a performance, can be recreated through technology, without direct copying from the recording. Equity, adopting the arguments of Mathilde Pavis in Artificial Intelligence and Performers’ Rights, refers to these new modes of copying performances, via ‘digital sound and look-alike’, as ‘performance synthetisation’. 

Whether or not performers are sufficiently protected is, of course, tied to the distinct circumstances raised by the new and unprecedented technologies of today: the challenge for legislators and the courts is to strike the right balance of interests in a specific technological context and then (as in the case of the Economics of Music Streaming Enquiry in recent times) to continue to track how well that ‘balance’ operates in practice. How, then, can the past help us to reflect on debates about AI and performers today?

08 September 2023

AI and Integrity

'How Generative AI Turns Copyright Law on its Head' by Mark A Lemley comments 

While courts are litigating many copyright issues involving generative AI, from who owns AI-generated works to the fair use of training to infringement by AI outputs, the most fundamental changes generative AI will bring to copyright law don't fit in any of those categories. The new model of creativity generative AI brings puts considerable strain on copyright’s two most fundamental legal doctrines: the idea-expression dichotomy and the substantial similarity test for infringement. Increasingly creativity will be lodged in asking the right questions, not in creating the answers. Asking questions may sometimes be creative, but the AI does the bulk of the work that copyright traditionally exists to reward, and that work will not be protected. That inverts what copyright law now prizes. And because asking the questions will be the basis for copyrightability, similarity of expression in the answers will no longer be of much use in proving the fact of copying of the questions. That means we may need to throw out our test for infringement, or at least apply it in fundamentally different ways. 

'AI Providers as Criminal Essay Mills? Large Language Models meet Contract Cheating Law' (UCL Faculty of Laws, 2023) by Noëlle Gaumann & Michael Veale comments

Academic integrity has been a constant issue for higher education, already heightened by the easy availability of essay mill and contract cheating services over the Internet. Jurisdictions across the world have passed a range of laws making it an offence to offer or advertise such services. Because of the nature of these services, which may make students agree to not submit work they create or support, some of these offences have been drafted extremely broadly, without intent or knowledge requirements. The consequence of this is that there sit on statute books a range of very wide offences covering the support of, partial or complete authoring of assignments or work. 

At the same time, AI systems have become part of public consciousness, particularly since the launch of ChatGPT from OpenAI. These large language models have quickly become part of workflows in many areas, and are widely used by students. They have concerned higher education institutions as they highly resemble essay mills in their functioning and result. 

This paper attempts to unravel the intersection between essay mills, general purpose AI services, and emerging academic cheating law. We:

  • Analyse, in context, academic cheating legislation from jurisdictions including England and Wales, Ireland, Australia, New Zealand, US States, and Austria in light of how it applies to both essay mills, AI-enhanced essay mills, and general purpose AI providers. (Chapter 2) 

  • Examine and document currently available services by new AI-enhanced essay mills, characterising them and examining the way they present themselves both on their own websites and apps, and in advertising on major social media platforms including Instagram and TikTok. These include systems which both write entire essays as well as those designed to reference AI-created work, provide outlines, and to deliberately ‘humanise’ text as to avoid nascent AI detectors. (Chapter 3) 

  • Outline the tensions between academic cheating legal regimes and both AI-enhanced essay mills and general purpose AI systems, which can allow students to cheat in much the same way. (Chapter 4) 

  • Provide recommendations to legislators and regulators about how to design regimes which effectively limit AI-powered contract cheating without, as in some current jurisdictions, accidentally bringing bona fide general purpose AI systems into scope. (Chapter 5)

We make some important findings. Firstly, there is already a significant market of AI-enhanced essay mills, many of which are developing features directly designed to frustrate education providers’ current attempts to detect and mitigate the academic integrity implications of AI-generated work. 

Secondly, some jurisdictions have scoped their laws so widely that it is hard to see how ‘general purpose’ large language models such as OpenAI’s GPT-4 or Google’s Bard would not fall within their provisions, and thus be committing a criminal offence. This is particularly the case in England and Wales and in Australia. 

Thirdly, the boundaries between assistance and cheating are being directly blurred by essay mills utilising AI tools. Given the nature of the academic cheating regimes, we suspect most enforcement will result from private enforcement rather than prosecutions. These regimes interact in important and until now unexplored ways with other legal regimes, such as the EU’s Digital Services Act, the UK’s proposed Online Safety Bill, and contractual governance mechanisms such as the terms of service of AI API providers and the licensing terms of open source models. 

We conclude with recommendations for policymakers and HE providers. These include that:

  • Jurisdictions should explore creating obligations for AI-as-a-service providers to enforce their own terms and conditions, similar to obligations placed on intermediaries under the Digital Services Act and the Online Safety Bill. This would create an avenue to cut off professionalised essay mills using these services when notified or investigated. 

  • Jurisdictions should name a regulator and provide them with investigation and enforcement powers. If they are unwilling to do this, giving formal ability to higher education institutions to refer matters to prosecuting authorities would be a start. 

  • Regulators should issue guidelines on the boundaries of essay mills in the context of AI, considering general purpose systems and systems that allow co-writing, outlining or research. 

  • Regulators, when established, should have a formal, international forum to create shared guidance, which they should have regard to when enforcing. Legislation should be amended to give formal powers of joint investigation and cooperation through this forum. 

  • Legislation should be amended to give general-purpose AI systems a safe harbour from criminal consideration as an essay mill, insofar as they meet a series of criteria designed to lower their risk in this regard. We propose watermarking, regulatory co-operation, and time-limited data retention and querying capacity based on queries provided by educational institutions, as mechanisms to consider. 

  • Higher education institutions should share funding to organise individuals to monitor advertising archives and other services for essay mills, report these to prosecutors in relevant jurisdictions, and have adverts for these services taken down rapidly. Reporting should be wide, including to payment service providers, who may be able to stop profit from these regimes, and to AI service providers.

20 August 2023

Thaler

Another unsurprising loss in the latest Thaler judgment, with Howell J in Stephen Thaler v Shira Perlmutter (Register of Copyrights and Director of the United States Copyright Office, et al) endorsing the Register's rejection of Thaler's attempt to register a computer-generated (as distinct from computer-assisted) work with the computer as author. 

The judgment states

Plaintiff Stephen Thaler owns a computer system he calls the “Creativity Machine,” which he claims generated a piece of visual art of its own accord. He sought to register the work for a copyright, listing the computer system as the author and explaining that the copyright should transfer to him as the owner of the machine. The Copyright Office denied the application on the grounds that the work lacked human authorship, a prerequisite for a valid copyright to issue, in the view of the Register of Copyrights. Plaintiff challenged that denial, culminating in this lawsuit against the United States Copyright Office and Shira Perlmutter, in her official capacity as the Register of Copyrights and the Director of the United States Copyright Office (“defendants”). Both parties have now moved for summary judgment, which motions present the sole issue of whether a work generated entirely by an artificial system absent human involvement should be eligible for copyright. .... 

For the reasons explained below, defendants are correct that human authorship is an essential part of a valid copyright claim, and therefore plaintiff’s pending motion for summary judgment is denied and defendants’ pending cross-motion for summary judgment is granted. 

I. BACKGROUND 

Plaintiff develops and owns computer programs he describes as having “artificial intelligence” (“AI”) capable of generating original pieces of visual art, akin to the output of a human artist. .... One such AI system—the so-called “Creativity Machine”—produced the work at issue here, titled “A Recent Entrance to Paradise:” ... 

After its creation, plaintiff attempted to register this work with the Copyright Office. In his application, he identified the author as the Creativity Machine, and explained the work had been “autonomously created by a computer algorithm running on a machine,” but that plaintiff sought to claim the copyright of the “computer-generated work” himself “as a work-for-hire to the owner of the Creativity Machine.” ... see also id. at 2 (listing “Author” as “Creativity Machine,” the work as “[c]reated autonomously by machine,” and the “Copyright Claimant” as “Steven [sic] Thaler” with the transfer statement, “Ownership of the machine”). 

The Copyright Office denied the application on the basis that the work “lack[ed] the human authorship necessary to support a copyright claim,” noting that copyright law only extends to works created by human beings. ... 

Plaintiff requested reconsideration of his application, confirming that the work “was autonomously generated by an AI” and “lack[ed] traditional human authorship,” but contesting the Copyright Office’s human authorship requirement and urging that AI should be “acknowledge[d] . . . as an author where it otherwise meets authorship criteria, with any copyright ownership vesting in the AI’s owner.” 

Again, the Copyright Office refused to register the work, reiterating its original rationale that “[b]ecause copyright law is limited to ‘original intellectual conceptions of the author,’ the Office will refuse to register a claim if it determines that a human being did not create the work.”... Plaintiff made a second request for reconsideration along the same lines as his first, see id., Ex. G, Second Request for Reconsideration at 2, ECF No. 13-7, and the Copyright Office Review Board affirmed the denial of registration, agreeing that copyright protection does not extend to the creations of non-human entities, Final Refusal Letter at 4, 7. 

Plaintiff timely challenged that decision in this Court, claiming that defendants’ denial of copyright registration to the work titled “A Recent Entrance to Paradise,” was “arbitrary, capricious, an abuse of discretion and not in accordance with the law, unsupported by substantial evidence, and in excess of Defendants’ statutory authority,” in violation of the Administrative Procedure Act (“APA”), 5 U.S.C. § 706(2). See Compl. ¶¶ 62–66, ECF No. 1. 

The parties agree upon the key facts narrated above to focus, in the pending cross-motions for summary judgment, on the sole legal issue of whether a work autonomously generated by an AI system is copyrightable. ... Those motions are now ripe for resolution. ... 

DISCUSSION 

Under the Copyright Act of 1976, copyright protection attaches “immediately” upon the creation of “original works of authorship fixed in any tangible medium of expression,” provided those works meet certain requirements. Fourth Estate Public Benefit Corp. v. Wall-Street.com, LLC, 139 S. Ct. 881, 887 (2019); 17 U.S.C. § 102(a). A copyright claimant can also register the work with the Register of Copyrights. Upon concluding that the work is indeed copyrightable, the Register will issue a certificate of registration, which, among other advantages, allows the claimant to pursue infringement claims in court. 17 U.S.C. §§ 410(a), 411(a); Unicolors, Inc. v. H&M Hennes & Mauritz, L.P., 142 S. Ct. 941, 944–45 (2022). A valid copyright exists upon a qualifying work’s creation and “apart” from registration, however; a certificate of registration merely confirms that the copyright has existed all along. See Fourth Estate, 139 S. Ct. at 887. Conversely, if the Register denies an application for registration for lack of copyrightable subject matter—and did not err in doing so—then the work at issue was never subject to copyright protection at all. 

In considering plaintiff’s copyright registration application as to “A Recent Entrance to Paradise,” the Register concluded that “this particular work will not support a claim to copyright” because the work lacked human authorship and thus no copyright existed in the first instance. First Refusal Letter at 1; see also Final Refusal Letter at 3 (providing the same rationale in the final reconsideration decision). By design in plaintiff’s framing of the registration application, then, the single legal question presented here is whether a work generated autonomously by a computer falls under the protection of copyright law upon its creation. 

Plaintiff attempts to complicate the issues presented by devoting a substantial portion of his briefing to the viability of various legal theories under which a copyright in the computer’s work would transfer to him, as the computer’s owner; for example, by operation of common law property principles or the work-for-hire doctrine. ... These arguments concern to whom a valid copyright should have been registered, and in so doing put the cart before the horse. [In pursuing these arguments, plaintiff elaborates on his development, use, ownership, and prompting of the AI generating software in the so-called “Creativity Machine,” implying a level of human involvement in this case entirely absent in the administrative record. As detailed, supra, in Part I, plaintiff consistently represented to the Register that the AI system generated the work “autonomously” and that he played no role in its creation, see Application at 2, and judicial review of the Register’s final decision must be based on those same facts. ]

By denying registration, the Register concluded that no valid copyright had ever existed in a work generated absent human involvement, leaving nothing at all to register and thus no question as to whom that registration belonged. The only question properly presented, then, is whether the Register acted arbitrarily or capriciously or otherwise in violation of the APA in reaching that conclusion. 

The Register did not err in denying the copyright registration application presented by plaintiff. United States copyright law protects only works of human creation. Plaintiff correctly observes that throughout its long history, copyright law has proven malleable enough to cover works created with or involving technologies developed long after traditional media of writings memorialized on paper. See, e.g., Goldstein v. California, 412 U.S. 546, 561 (1973) (explaining that the constitutional scope of Congress’s power to “protect the ‘Writings’ of ‘Authors’” is “broad,” such that “writings” is not “limited to script or printed material,” but rather encompasses “any physical rendering of the fruits of creative intellectual or aesthetic labor”); Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53, 58 (1884) (upholding the constitutionality of an amendment to the Copyright Act to cover photographs).  

In fact, that malleability is explicitly baked into the modern incarnation of the Copyright Act, which provides that copyright attaches to “original works of authorship fixed in any tangible medium of expression, now known or later developed.” 17 U.S.C. § 102(a) (emphasis added). Copyright is designed to adapt with the times. Underlying that adaptability, however, has been a consistent understanding that human creativity is the sine qua non at the core of copyrightability, even as that human creativity is channeled through new tools or into new media. In Sarony, for example, the Supreme Court reasoned that photographs amounted to copyrightable creations of “authors,” despite issuing from a mechanical device that merely reproduced an image of what is in front of the device, because the photographic result nonetheless “represent[ed]” the “original intellectual conceptions of the author.” Sarony, 111 U.S. at 59. 

A camera may generate only a “mechanical reproduction” of a scene, but does so only after the photographer develops a “mental conception” of the photograph, which is given its final form by that photographer’s decisions like “posing the [subject] in front of the camera, selecting and arranging the costume, draperies, and other various accessories in said photograph, arranging the subject so as to present graceful outlines, arranging and disposing the light and shade, suggesting and evoking the desired expression, and from such disposition, arrangement, or representation” crafting the overall image. Id. at 59–60. Human involvement in, and ultimate creative control over, the work at issue was key to the conclusion that the new type of work fell within the bounds of copyright. Copyright has never stretched so far, however, as to protect works generated by new forms of technology operating absent any guiding human hand, as plaintiff urges here. Human authorship is a bedrock requirement of copyright. 

That principle follows from the plain text of the Copyright Act. The current incarnation of the copyright law, the Copyright Act of 1976, provides copyright protection to “original works of authorship fixed in any tangible medium of expression, now known or later developed, from which they can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device.” 17 U.S.C. § 102(a). The “fixing” of the work in the tangible medium must be done “by or under the authority of the author.” Id. § 101. In order to be eligible for copyright, then, a work must have an “author.” To be sure, as plaintiff points out, the critical word “author” is not defined in the Copyright Act. See Pl.’s Mem. at 24. “Author,” in its relevant sense, means “one that is the source of some form of intellectual or creative work,” “[t]he creator of an artistic work; a painter, photographer, filmmaker, etc.” ... 

By its plain text, the 1976 Act thus requires a copyrightable work to have an originator with the capacity for intellectual, creative, or artistic labor. Must that originator be a human being to claim copyright protection? The answer is yes. 

The 1976 Act’s presumption that “authorship” means human authorship rests on centuries of settled understanding. The Constitution enables the enactment of copyright and patent law by granting Congress the authority to “promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.” The issue of whether non-human sentient beings may be covered by “person” in the Copyright Act is only “fun conjecture for academics,” Justin Hughes, 'Restating Copyright Law’s Originality Requirement', 44 Columbia J L & Arts 383, 408–09 (2021), though useful in illuminating the purposes and limits of copyright protection as AI is increasingly employed. 

Nonetheless, delving into this debate is an unnecessary detour since “[t]he day sentient refugees from some intergalactic war arrive on Earth and are granted asylum in Iceland, copyright law will be the least of our problems.” 

As James Madison explained, “[t]he utility of this power will scarcely be questioned,” for “[t]he public good fully coincides in both cases [of copyright and patent] with the claims of individuals.” The Federalist No. 43 (James Madison). At the founding, both copyright and patent were conceived of as forms of property that the government was established to protect, and it was understood that recognizing exclusive rights in that property would further the public good by incentivizing individuals to create and invent. The act of human creation—and how to best encourage human individuals to engage in that creation, and thereby promote science and the useful arts—was thus central to American copyright from its very inception. Non-human actors need no incentivization with the promise of exclusive rights under United States law, and copyright was therefore not designed to reach them. 

The understanding that “authorship” is synonymous with human creation has persisted even as the copyright law has otherwise evolved. The immediate precursor to the modern copyright law—the Copyright Act of 1909—explicitly provided that only a “person” could “secure copyright for his work” under the Act. 

Copyright under the 1909 Act was thus unambiguously limited to the works of human creators. There is absolutely no indication that Congress intended to effect any change to this longstanding requirement with the modern incarnation of the copyright law. To the contrary, the relevant congressional report indicates that in enacting the 1976 Act, Congress intended to incorporate the “original work of authorship” standard “without change” from the previous 1909 Act. 

The human authorship requirement has also been consistently recognized by the Supreme Court when called upon to interpret the copyright law. As already noted, in Sarony, the Court’s recognition of the copyrightability of a photograph rested on the fact that the human creator, not the camera, conceived of and designed the image and then used the camera to capture the image. See Sarony, 111 U.S. at 60. The photograph was “the product of [the photographer’s] intellectual invention,” and given “the nature of authorship,” was deemed “an original work of art . . . of which [the photographer] is the author.” Id. at 60–61. Similarly, in Mazer v. Stein, the Court delineated a prerequisite for copyrightability to be that a work “must be original, that is, the author’s tangible expression of his ideas.” 347 U.S. 201, 214 (1954). Goldstein v. California, too, defines “author” as “an ‘originator,’ ‘he to whom anything owes its origin,’” 412 U.S. at 561 (quoting Sarony, 111 U.S. at 58). In all these cases, authorship centers on acts of human creativity. 

Accordingly, courts have uniformly declined to recognize copyright in works created absent any human involvement, even when, for example, the claimed author was divine. The Ninth Circuit, when confronted with a book “claimed to embody the words of celestial beings rather than human beings,” concluded that “some element of human creativity must have occurred in order for the Book to be copyrightable,” for “it is not creations of divine beings that the copyright laws were intended to protect.” Urantia Found. v. Kristen Maaherra, 114 F.3d 955, 958–59 (9th Cir. 1997) (finding that because the “members of the Contact Commission chose and formulated the specific questions asked” of the celestial beings, and then “select[ed] and arrange[d]” the resultant “revelations,” the Urantia Book was “at least partially the product of human creativity” and thus protected by copyright); see also Penguin Books U.S.A., Inc. v. New Christian Church of Full Endeavor, 96-cv-4126 (RWS), 2000 WL 1028634, at *2, 10–11 (S.D.N.Y. July 25, 2000) (finding a valid copyright where a woman had “filled nearly thirty stenographic notebooks with words she believed were dictated to her” by a “‘Voice’ which would speak to her whenever she was prepared to listen,” and who had worked with two human co-collaborators to revise and edit those notes into a book, a process which involved enough creativity to support human authorship); Oliver v. St. Germain Found., 41 F. Supp. 296, 297, 299 (S.D. Cal. 1941) (finding no copyright infringement where plaintiff claimed to have transcribed “letters” dictated to him by a spirit named Phylos the Thibetan, and defendant copied the same “spiritual world messages for recordation and use by the living” but was not charged with infringing plaintiff’s “style or arrangement” of those messages). Similarly, in Kelley v. Chicago Park District, the Seventh Circuit refused to “recognize[] copyright” in a cultivated garden, as doing so would “press[] too hard on the[] basic principle[]” that “[a]uthors of copyrightable works must be human.” 635 F.3d 290, 304–06 (7th Cir. 2011). The garden “ow[ed] [its] form to the forces of nature,” even if a human had originated the plan for the “initial arrangement of the plants,” and as such lay outside the bounds of copyright. Id. at 304. Finally, in Naruto v. Slater, the Ninth Circuit held that a crested macaque could not sue under the Copyright Act for the alleged infringement of photographs this monkey had taken of himself, for “all animals, since they are not human” lacked statutory standing under the Act. 888 F.3d 418, 420 (9th Cir. 2018). While resolving the case on standing grounds, rather than the copyrightability of the monkey’s work, the Naruto Court nonetheless had to consider whom the Copyright Act was designed to protect and, as with those courts confronted with the nature of authorship, concluded that only humans had standing, explaining that the terms used to describe who has rights under the Act, like “‘children,’ ‘grandchildren,’ ‘legitimate,’ ‘widow,’ and ‘widower[,]’ all imply humanity and necessarily exclude animals.” Id. at 426. Plaintiff can point to no case in which a court has recognized copyright in a work originating with a non-human. 

Undoubtedly, we are approaching new frontiers in copyright as artists put AI in their toolbox to be used in the generation of new visual and other artistic works. The increased attenuation of human creativity from the actual generation of the final work will prompt challenging questions regarding how much human input is necessary to qualify the user of an AI system as an “author” of a generated work, the scope of the protection obtained over the resultant image, how to assess the originality of AI-generated works where the systems may have been trained on unknown pre-existing works, how copyright might best be used to incentivize creative works involving AI, and more. See, e.g., Letter from Senators Thom Tillis and Chris Coons to Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of the U.S. Patent and Trademark Office, and Shira Perlmutter, Register of Copyrights and Director of the U.S. Copyright Office (Oct. 27, 2022), https://www.copyright.gov/laws/hearings/Letter-to-USPTO-USCO-on-National-Commission-on-AI-1.pdf (requesting that the United States Patent and Trademark Office and the United States Copyright Office “jointly establish a national commission on AI” to assess, among other topics, how intellectual property law may best “incentivize future AI related innovations and creations”). 

This case, however, is not nearly so complex. While plaintiff attempts to transform the issue presented here, by asserting new facts that he “provided instructions and directed his AI to create the Work,” that “the AI is entirely controlled by [him],” and that “the AI only operates at [his] direction,” Pl.’s Mem. at 36–37—implying that he played a controlling role in generating the work—these statements directly contradict the administrative record. Judicial review of a final agency action under the APA is limited to the administrative record, because “[i]t is black-letter administrative law that in an [APA] case, a reviewing court should have before it neither more nor less information than did the agency when it made its decision.” ... 

Here, plaintiff informed the Register that the work was “[c]reated autonomously by machine,” and that his claim to the copyright was only based on the fact of his “[o]wnership of the machine.” Application at 2. The Register therefore made her decision based on the fact the application presented that plaintiff played no role in using the AI to generate the work, which plaintiff never attempted to correct. See First Request for Reconsideration at 2 (“It is correct that the present submission lacks traditional human authorship—it was autonomously generated by an AI.”); Second Request for Reconsideration at 2 (same). Plaintiff’s effort to update and modify the facts for judicial review on an APA claim is too late. 

On the record designed by plaintiff from the outset of his application for copyright registration, this case presents only the question of whether a work generated autonomously by a computer system is eligible for copyright. In the absence of any human involvement in the creation of the work, the clear and straightforward answer is the one given by the Register: No. 

Given that the work at issue did not give rise to a valid copyright upon its creation, plaintiff’s myriad theories for how ownership of such a copyright could have passed to him need not be further addressed. Common law doctrines of property transfer cannot be implicated where no property right exists to transfer in the first instance. The work-for-hire provisions of the Copyright Act, too, presuppose that an interest exists to be claimed. See 17 U.S.C. § 201(b) (“In the case of a work made for hire, the employer . . . owns all of the rights comprised in the copyright.”). Here, the image autonomously generated by plaintiff’s computer system was never eligible for copyright, so none of the doctrines invoked by plaintiff conjure up a copyright over which ownership may be claimed.

In any event, plaintiff’s attempts to cast the work as a work-for-hire must fail as both definitions of a “work made for hire” available under the Copyright Act require that the individual who prepares the work is a human being. The first definition provides that “a ‘work made for hire’ is . . . a work prepared by an employee within the scope of his or her employment,” while the second qualifies certain eligible works “if the parties expressly agree in a written instrument signed by them that the work shall be considered a work made for hire.” 17 U.S.C. § 101 (emphasis added). The use of personal pronouns in the first definition clearly contemplates only human beings as eligible “employees,” while the second necessitates a meeting of the minds and exchange of signatures in a valid contract not possible with a non-human entity.