Data and digital technologies are the modern engines of economic growth. Emerging technologies like artificial intelligence (AI), which can extract useful insights from massive datasets in a fraction of a second, could transform the global economy and speed up productivity growth.
Australia needs to harness the consumer and productivity benefits of data and digital technology while managing and mitigating the downside risks. There is a role for government in setting the rules of the game to foster innovation and ensure that Australians reap the benefits of the data and digital opportunity.
The economic potential of AI is clear, and we are still in the early stages of its development and adoption. Early studies provide a broad range of estimates for the impact of AI on productivity. The Productivity Commission considers that multifactor productivity gains above 2.3% are likely over the next decade, though there is considerable uncertainty. This would translate into about 4.3% labour productivity growth over the same period. But poorly designed regulation could stifle the adoption and development of AI and limit its benefits. Australian governments should take an outcomes-based approach to AI regulation – one that uses our existing laws and regulatory structures to minimise harms and introduces technology-specific regulations only as a last resort.
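To see how a multifactor productivity (MFP) gain of that size can translate into a larger labour productivity gain, a stylised growth-accounting identity helps. The capital share used below is an illustrative assumption – the report's modelling parameters are not stated here – so the arithmetic is a sketch of the mechanism, not the PC's calculation.

```latex
% Stylised growth-accounting sketch (illustrative assumptions only)
\begin{align}
\Delta \ln(Y/L) &= \Delta \ln(\text{MFP}) + \alpha\,\Delta \ln(K/L)
  && \text{(growth-accounting identity)} \\
\Delta \ln(Y/L) &\approx \frac{\Delta \ln(\text{MFP})}{1-\alpha}
  && \text{(long run, with a stable capital-output ratio)} \\
2.3\% \,/\, (1 - 0.47) &\approx 4.3\%
  && \text{(assumed capital income share $\alpha \approx 0.47$)}
\end{align}
```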
Data access and use can fuel productivity growth: insights from data can help reduce costs, increase the quality of products and services and lead to the creation of entirely new products. But some requirements in the Privacy Act, the main piece of legislation for protecting privacy, are constraining innovation without providing meaningful protection to individuals. For example, complying with the controls and processes baked into the Act can make consent and notification a ‘tick-box’ exercise – where businesses comply with the letter of the law but not the spirit of it. The Australian Government should amend the Privacy Act to introduce an alternative compliance pathway that enables firms to fulfil their privacy obligations by meeting outcomes-based criteria.
Data about individuals and businesses underpins growth and value in the digital economy. But often those same individuals and businesses cannot easily access and use this data themselves. Under the right conditions, giving people and businesses better access to data that relates to them can stimulate competition and allow businesses to develop innovative products and services. A mature data sharing regime could add up to $10 billion to Australia’s annual economic output.
Experience shows that we need a flexible approach to facilitating data access across the economy, where obligations placed on data holders and the level of government involvement can match the needs and digital maturity of different sectors. New lower-cost and more flexible regulatory pathways would help guide expanded data access throughout the digital economy, focusing first on sectors where the gains are significant and relatively easy to achieve.
Financial reports provide essential information about a company’s financial performance, ensuring transparency and accountability while informing the decisions of investors, businesses and regulators. Government can further spark productivity by making digital financial reporting the default – that is, mandatory lodgement of financial reports in machine-readable form. At the same time, the Australian Government should remove the outdated requirement that financial reports be submitted in hard copy or PDF format. This change would increase the efficiency and accuracy with which information is extracted and analysed.
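To illustrate what machine-readable lodgement enables: in structured formats such as XBRL, each reported figure is tagged so that software can extract it directly, with its period and unit attached, rather than being scraped from a PDF. The sketch below is a minimal, hypothetical illustration in Python – the taxonomy namespace and element names are invented for the example; real filings use the applicable accounting taxonomy (e.g. IFRS/AASB elements).

```python
# Minimal sketch: extracting tagged facts from a simplified XBRL-style
# instance document. The namespace and element names are hypothetical;
# real filings use the applicable accounting taxonomy.
import xml.etree.ElementTree as ET

INSTANCE = """
<xbrl xmlns:ifrs="http://example.com/hypothetical-taxonomy">
  <ifrs:Revenue contextRef="FY2024" unitRef="AUD" decimals="0">52000000</ifrs:Revenue>
  <ifrs:ProfitLoss contextRef="FY2024" unitRef="AUD" decimals="0">6400000</ifrs:ProfitLoss>
</xbrl>
"""

NS = {"ifrs": "http://example.com/hypothetical-taxonomy"}

root = ET.fromstring(INSTANCE)
for tag in ("ifrs:Revenue", "ifrs:ProfitLoss"):
    fact = root.find(tag, NS)
    # Each fact carries its value plus machine-readable context (period, unit),
    # so no text extraction or guesswork is needed.
    print(tag, "=", int(fact.text), fact.attrib["unitRef"], fact.attrib["contextRef"])
```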
The draft recommendations are:
Artificial intelligence
Draft recommendation 1.1 Productivity growth from AI will be built on existing legal foundations
Gap analyses of current rules need to be expanded and completed. Australian governments play a key role in promoting investment in digital technology, including AI, by providing a stable regulatory environment. Any regulatory responses to potential harms from using AI must be proportionate, risk-based, outcomes-based and technology-neutral where possible.
The Australian Government should continue, complete, publish and act on ongoing reviews into the potential regulatory gaps posed by AI as soon as possible.
Where relevant gap analyses have not begun, they should begin immediately.
All reviews of the regulatory gaps posed by AI should consider:
• the uses of AI
• the additional risk of harm posed by AI (compared to the status quo) in a specific use case
• whether existing regulatory frameworks cover these risks, potentially with improved guidance and enforcement; and, if not, how to modify existing regulatory frameworks to mitigate the additional risks.
Draft recommendation 1.2 AI-specific regulation should be a last resort
AI-specific regulations should only be considered as a last resort for uses of AI that meet two criteria:
• where existing regulatory frameworks cannot be sufficiently adapted to handle the issue
• where technology-neutral regulations are not feasible.
Draft recommendation 1.3 Pause steps to implement mandatory guardrails for high-risk AI
The Australian Government should only apply the proposed ‘mandatory guardrails for high-risk AI’ to circumstances involving harms that cannot be mitigated by existing regulatory frameworks and where new technology-neutral regulation is not possible. Until the reviews of the gaps posed by AI to existing regulatory structures are completed, steps to mandate the guardrails should be paused.
Data access
Draft recommendation 2.1 Establish lower-cost and more flexible regulatory pathways to expand basic data access for individuals and businesses
The Australian Government should support new pathways to allow individuals and businesses to access and share data that relates to them. These regulatory pathways will differ by sector, recognising that the benefits (and the implementation costs) of data access and sharing differ by sector. This could include approaches such as:
• industry-led data access codes that support basic use cases by enabling consumers to export relatively non-sensitive data on a periodic (snapshot) basis
• standardised data transfers, with government helping to formalise minimum technical standards to support use cases requiring high-frequency data transfers and interoperability.
These pathways should be developed alongside efforts that are already underway to improve the Consumer Data Right (which will continue to provide for use cases that warrant its additional safeguards and technical infrastructure) and the My Health Record system.
The new pathways should begin in sectors where better data access could generate large benefits at relatively low cost and there is clear value to consumers. Potential examples include:
• enabling farmers to combine real-time data feeds from their machinery and equipment to optimise their operations and easily switch between different manufacturers
• giving tenants on-demand access to their rental ledgers, which they can share to prove on-time payments to new landlords or lenders (a hypothetical sketch of such an export appears below)
• allowing retail loyalty card holders to export an itemised copy of their purchase history to budgeting and price comparison tools that can analyse spending and suggest cheaper alternatives.
The scope of the data access pathways should expand over time, based on industry and consumer consultation, where new technology, overseas experience or domestic developments show that there are clear net benefits to Australia.
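To make the rental ledger example concrete, a standardised, consumer-initiated export might look something like the following sketch. The field names, structure and derived ‘on-time rate’ are assumptions for illustration only, not a proposed technical standard.

```python
# Hypothetical sketch of a standardised, consumer-initiated data export:
# a rental ledger a tenant could share with a new landlord or lender.
# Field names and structure are illustrative, not a proposed standard.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class LedgerEntry:
    due_date: date
    paid_date: date
    amount_aud: float

@dataclass
class RentalLedgerExport:
    tenant_id: str          # pseudonymous identifier, not a name
    property_ref: str
    entries: list

    def on_time_rate(self) -> float:
        """Share of payments made on or before the due date."""
        on_time = sum(1 for e in self.entries if e.paid_date <= e.due_date)
        return on_time / len(self.entries)

ledger = RentalLedgerExport(
    tenant_id="T-1042",
    property_ref="NSW-2000-17A",
    entries=[
        LedgerEntry(date(2025, 1, 1), date(2025, 1, 1), 650.0),
        LedgerEntry(date(2025, 2, 1), date(2025, 1, 30), 650.0),
    ],
)
print(f"on-time payment rate: {ledger.on_time_rate():.0%}")
print(json.dumps(asdict(ledger), default=str, indent=2))  # portable export
```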
Privacy regulation
Draft recommendation 3.1 An alternative compliance pathway for privacy
The Australian Government should amend the Privacy Act 1988 (Cth) to provide an alternative compliance pathway that enables regulated entities to fulfil their privacy obligations by meeting criteria that are targeted at outcomes, rather than controls-based rules.
Draft recommendation 3.2 Do not implement a right to erasure
The Australian Government should not amend the Privacy Act 1988 (Cth) to introduce a ‘right to erasure’, as this would impose a high compliance burden on regulated entities, with uncertain privacy benefits for individuals.
Digital financial reporting
Draft recommendation 4.1 Make digital financial reporting the default
The Australian Government should amend the Corporations Act 2001 (Cth) and the Corporations Regulations 2001 (Cth) to make digital financial reporting mandatory for disclosing entities. The requirement for financial reports to be submitted in hard copy or PDF format should also be removed for those entities.
The report goes on:
AI-specific regulation should be a last resort
Economy-wide efforts to regulate AI should be paused until all gap analyses are complete and their findings implemented
In August 2024, the Australian Government Department of Industry, Science and Resources released a set of 10 voluntary AI safety standards, or guardrails, based on risk-management standards such as ISO/IEC 42001:2023 (Information technology – Artificial intelligence – Management system) and the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework (AI RMF 1.0) (DISR 2024b, p. 5). The guardrails cover aspects of AI development and application and require several risk-management processes, including testing models, developing a risk plan and providing transparency to users of AI tools and to owners of copyrighted materials used in training models. The guardrails outline reasonable risk-management practices for many organisations. In this way, they have been an important and useful step in AI governance in Australia, equipping businesses with voluntary, structured and internationally recognised standards to support and guide their adoption of AI.
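The guardrails themselves are governance processes rather than code, but a rough sketch may help show how a business could track its status against processes of this kind. Everything below – the guardrail names, fields and structure – is hypothetical and illustrative, not the DISR wording or an actual compliance tool.

```python
# Hypothetical sketch only: one way a business might record its status
# against voluntary guardrails of this kind. Names and fields are
# illustrative, not the DISR wording.
from dataclasses import dataclass, field

@dataclass
class GuardrailRecord:
    name: str
    implemented: bool
    evidence: str = ""   # e.g. a link to a test report or risk register

@dataclass
class AIRiskPlan:
    system_name: str
    records: list = field(default_factory=list)

    def outstanding(self):
        """Guardrails not yet implemented for this AI system."""
        return [r.name for r in self.records if not r.implemented]

plan = AIRiskPlan(
    system_name="customer-support-chatbot",
    records=[
        GuardrailRecord("model testing before deployment", True, "test report v1.2"),
        GuardrailRecord("risk management plan", True, "risk register"),
        GuardrailRecord("transparency notice to users", False),
    ],
)
print("outstanding guardrails:", plan.outstanding())
```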
The guardrails are particularly useful for smaller businesses that do not have comprehensive risk-management procedures in place. By contrast, submissions from participants to this inquiry (and submissions to the mandatory guardrails consultation process, discussed below) showed that many larger organisations have already implemented risk-management protocols that are similar in spirit to the guardrails.
Mandating the guardrails is not necessary
In September 2024, the Australian Government released a proposals paper for a set of mandatory guardrails for AI in high-risk settings (DISR 2024a). The proposal is to turn the voluntary guidelines into mandatory regulations for AI development and application.
The PC is concerned about two aspects of the guardrails being made mandatory. First, the proposals paper argued that the mandatory guardrails would apply to all high-risk uses of AI – regardless of whether the risks could be better mitigated through outcomes-based regulations. Second, the proposals paper argued that general-purpose AI models – which would include many generative AI tools – above a certain threshold of capability should be classified as high risk by default. The proposals paper did not settle on any particular measure or threshold of technical capability, though it could include metrics such as FLOPS (DISR 2024a, p. 18). It argued that these models can perform so many functions that their risks cannot be adequately foreseen. This could result in the guardrails being applied to common generative AI tools such as ChatGPT, Claude and Grok, depending on the threshold and measure of technical capability chosen.
In general, high-risk uses of AI can be split into three broad types.
1. High-risk uses that can be adequately controlled by existing regulatory frameworks (potentially with some modification) – this could include issues with privacy law (which the PC considers can be resolved within existing frameworks, with modification to make the regulations more outcomes-focused; see chapter 2).
2. High-risk uses that can be adequately controlled with new technology-neutral regulations – this could include (non-consensual) sexually explicit deepfake images, which the Australian Government has recently banned (through the Criminal Code Amendment (Deepfake Sexual Material) Act 2024).
3. High-risk use cases that require technology-specific regulations – these would be use cases identified in the various gap analyses as having no technology-neutral solution.
The PC’s concern with the guardrails is that they would not distinguish between these categories. In our view, this raises significant issues, as the first two categories can, by definition, already be dealt with adequately by other regulatory mechanisms. It might also result in most commercial chatbots being classified as high risk regardless of the efficacy of existing regulations. The result would be that many AI models would need to comply with two different sets of regulation to achieve the same outcome.
For example, the TGA’s review noted that, with respect to medical devices, all ten proposed guardrails had close parallels in existing regulations (2025, pp. 27–30). That is, firms providing AI-based medical devices in Australia would likely already be fulfilling the objectives of the guardrails if they are operating legally under the TGA’s existing regulations. But if the guardrails are mandated, the provider of a medical device would need to demonstrate compliance with both the TGA regulations and the guardrails, raising the regulatory burden with no change in outcomes.
Mandating the guardrails is only appropriate in circumstances where existing regulatory frameworks or new technology-neutral regulations cannot adequately mitigate the risk of harm. Once the Australian Government has completed and acted on all gap analyses of its existing policy framework, it will know which regulatory holes cannot be plugged by existing frameworks or new technology-neutral legislation. Consideration of economy-wide efforts to mandate the guardrails should be paused until these gap analyses are complete.
Draft recommendation 1.3 Pause steps to implement mandatory guardrails for high-risk AI
The Australian Government should only apply the proposed ‘mandatory guardrails for high-risk AI’ to circumstances involving harms that cannot be mitigated by existing regulatory frameworks and where new technology-neutral regulation is not possible. Until the reviews of the gaps posed by AI to existing regulatory structures are completed, steps to mandate the guardrails should be paused.
On copyright, the PC states:
Copyright violation is an example of a harm that AI could exacerbate by changing economic incentives. Previous waves of innovation in information and communication technology have made the sharing of copyrighted materials much cheaper and easier, creating challenges for copyright. In most instances, copyright law could be adapted (or better enforced) to mitigate the harm, making it unnecessary to regulate the technology directly – by, for example, regulating computer software or hardware to prevent copyright breaches. The PC’s view is that the copyright issues posed by AI can similarly be resolved through adapting existing copyright law frameworks rather than introducing AI-specific regulation.
What is copyright?
Copyright law prohibits a person from using original works without the permission of the copyright holder – usually the author (AGD 2022a). The types of works that are protected include text, artistic works, music, computer code, sound recordings and films (ACC 2024a). It does not protect the underlying ideas or information (AGD 2022a). In some cases, data and datasets may be protected, ‘largely depend[ing] on how the data has been arranged, structured or presented’ (Allens 2020, p. 3).
The rise of AI technology has led to new challenges for copyright law:
The emergence of AI also raises some additional, principle based questions about how the copyright framework (as part of Australia’s broader intellectual property regime) works to benefit society by encouraging creation and innovation, rewarding intellectual effort and achievement, and supporting the dissemination of knowledge and ideas. (AGD 2023c, p. 12)
In 2023, the Attorney-General established the Copyright and Artificial Intelligence Reference Group, which acts as ‘a standing mechanism to engage with stakeholders across a wide range of sectors on issues at the intersection of AI and copyright’ (AGD 2023a). Since then, the group has met on several occasions to discuss issues relating to AI technology and copyright law (AGD 2023a).
This section explores one issue particularly relevant to productivity: whether current Australian copyright law is a barrier to building and training AI models. There are other legal issues relating to the outputs of AI models that are less relevant to productivity – such as whether those outputs attract copyright protection and what happens when AI outputs infringe a third party’s copyright (Evans et al. 2024).
Training AI models
Building and refining AI models requires the use of large amounts of data.
The term ‘AI model training’ refers to this process: feeding the algorithm data, examining the results, and tweaking the model output to increase accuracy and efficacy. To do this, algorithms need massive amounts of data that capture the full range of incoming data. (Chen 2023)
The datasets used to train AI models often contain digital copies of media such as web pages, books, videos, images and music. These media are often the subject of copyright protection, which means that their use to train AI models requires permission from the copyright holder.
Permission is required because AI models must ‘copy’ the protected material, at least temporarily, to undertake the training process. The use of copyrighted materials to train an AI model is a separate issue from the copyright status of anything the model produces. As discussed above, AI outputs raise their own copyright challenges.
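To make the ‘copying’ point concrete, the toy sketch below ‘trains’ a trivial bigram model from a folder of text files: each work must be loaded into memory – that is, reproduced – before any statistics can be learned from it. This is a minimal illustration under an assumed directory layout, not how production-scale models are trained.

```python
# Toy illustration of why training entails copying: even this trivial
# bigram "language model" must load full copies of the source texts into
# memory before it can learn anything from them. Paths are hypothetical.
from collections import Counter
from pathlib import Path

corpus_dir = Path("corpus/")  # hypothetical directory of text files
bigram_counts = Counter()

for doc in corpus_dir.glob("*.txt"):
    text = doc.read_text(encoding="utf-8")   # a full in-memory copy is made here
    for a, b in zip(text, text[1:]):
        bigram_counts[(a, b)] += 1           # 'training': updating model statistics

# The trained 'model' is the fitted statistics, not the copied texts,
# but producing it required reproducing each work at least transiently.
print(bigram_counts.most_common(5))
```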
A survey of the Copyright and Artificial Intelligence Reference Group indicated that, in practice, a range of copyrighted materials are used to train AI models – including literary and artistic works, sound recordings, films and musical works (AGD 2024, p. 12).
There is evidence to suggest that large AI models are already being trained on copyrighted materials without consent or compensation (APA and ASA, qr. 39, pp. 3–4; APDG, qr. 6, p. 4; APRA AMCOS, qr. 58, p. 4; ARIA and PPCA, qr. 65, p. 5, Creative Australia, qr. 62, p. 3). It should be noted that Australian copyright law only applies to copying that occurs within Australia’s boundaries – in other words, the training of AI models overseas is subject to the relevant laws of the jurisdiction in which it occurs. Lawsuits have been brought against technology companies – including Meta, Microsoft and OpenAI – in some overseas jurisdictions about the unlicensed use of copyrighted works to train AI models (Ryan 2023).
There are concerns that the Australian copyright regime is not keeping pace with the rise of AI technology – whether because it does not adequately facilitate the use of copyrighted works or because AI developers can too easily sidestep existing licensing and enforcement mechanisms. There are several policy options, including:
• no policy change – that is, copyright owners would continue to enforce their rights under the existing copyright framework, including through the court system
• policy measures to better facilitate the licensing of copyrighted materials, such as through collecting societies
• amending the Copyright Act to include a fair dealing exception that would cover text and data mining.
The PC is seeking feedback on what reforms are needed to bring the copyright regime up to date.
Is there a need to bolster the licensing or enforcement regime?
Several participants expressed concern about the unauthorised use of copyrighted materials to train AI models. For example, Creative Australia said:
Much of the data has been used reportedly without consent from the original creator, and without acknowledgement or remuneration. The global nature of the technology industry has made it difficult for the owners of creative work to enforce their intellectual property rights and be remunerated for the use of their work. (qr. 62, p. 3)
There are two points at which concerns of this type could be addressed. First, they could be addressed before the fact, through copyright licensing. Licensing is the key mechanism through which a copyright holder grants permission for others to use their work, and often involves some form of payment. In Australia, licensing is often done through collecting societies, which are organisations that represent copyright holders. This can streamline the licensing process, because a collecting society can negotiate licences on behalf of multiple copyright holders at once. As the Copyright Agency said:
We can help these sectors use third party content for AI related activities. Our annual licence for businesses now allows staff to use news media content in prompts for AI tools (e.g. for summarisation or analysis). We are extending this to other third party content later in the year. We are also in discussions with our members and licensees about other collective licensing solutions, including the use of datasets for AI related activities. (qr. 7, pp. 2–3)
The issue of unauthorised use of copyrighted materials could also be addressed after the fact, through enforcement. This encompasses a range of possible measures, including take-down notices, alternative dispute resolution and court action. In 2022-23, the Attorney-General’s Department undertook a Copyright Enforcement Review to assess ‘whether existing copyright enforcement mechanisms remain effective and proportionate’ (AGD 2022b). That review found that additional regulatory measures are needed to achieve an effective copyright enforcement regime, and work is currently underway to identify options for:
• reducing barriers for Australians to use the legal system to enforce copyright, including examining simple options to resolve ‘small value’ copyright infringements
• improving understanding and awareness about copyright. (AGD 2023b)
In light of this ongoing work, the issue of copyright enforcement is not in scope for this inquiry.
Is there a case for a text and data mining exception?
Another option is to expand the existing ‘fair dealing’ regime, which provides certain exceptions to the requirement to obtain permission from the copyright holder (box 1.6). Currently, there is no exception that covers AI model training per se (The University of Notre Dame Australia 2024). However, depending on the case, a different exception could apply. For example, AI models built as part of research could fall within the scope of the ‘research or study’ exception.
Box 1.6 – What are fair dealing exceptions?
Fair dealing exceptions allow for the use of copyright material without permission from the copyright owner, so long as it is used for one of several specified purposes and the use is considered fair.
What are the specified purposes?
The Copyright Act specifies several purposes where the exception may apply. These include: research or study, criticism or review, parody or satire, reporting news, and enabling a person with a disability to access the material (Copyright Act 1968 (Cth), Part III, Div 3; Part VIA, Div 2).
What counts as ‘fair’?
Fairness is determined with regard to all the relevant circumstances – that is, it depends on the facts. Some purposes have specified criteria that must be taken into account. For example, where the use is for research or study, the following considerations apply:
• the purpose and character of the dealing
• the nature of the work
• whether the work can be obtained within a reasonable time at an ordinary commercial price
• the effect of the dealing upon the potential market for, or value of, the work
• the amount and substantiality of the part of the work that was copied (Copyright Act 1968 (Cth), s 40(2)).
The ‘fair use’ doctrine – an alternative approach
Some overseas jurisdictions (notably the United States) take a ‘fair use’ approach to copyright exceptions. Under this doctrine, any type of use can be considered non-infringing, provided it is ‘fair’ – in other words, the use need not fall within one of several defined categories. Several reviews (including by the Australian Law Reform Commission and the Productivity Commission) have recommended the adoption of the fair use doctrine in Australia, but this has not occurred. Source: ACC (2024b); ALRC (2013); Copyright Act 1968 (Cth); PC (2021, p. 187).
In its report on Copyright and the Digital Economy, the Australian Law Reform Commission recommended amendments to enable text and data mining by adopting a fair use approach to copyright exceptions (box 1.6) – or, failing that, through a new fair dealing exception. It explained:
There has been growing recognition that data and text mining should not be infringement because it is a ‘non-expressive’ use. Non-expressive use leans on the fundamental principle that copyright law protects the expression of ideas and information and not the information or data itself. (2013, p. 261)
The Australian Government has since indicated that it is not inclined to introduce a fair use regime (Australian Government 2017, p. 7). Therefore, the PC is considering whether there is a case for a new fair dealing exception that explicitly covers text and data mining (a ‘TDM exception’). TDM exceptions exist in several comparable overseas jurisdictions (box 1.7).
Such an exception would cover not just AI model training but all forms of analytical techniques that use machine-read material to identify patterns, trends and other useful information. For example, text and data mining techniques are commonly used in research sectors to produce large datasets that can be interrogated through statistical analysis.
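As a concrete (and deliberately simple) illustration of text and data mining that is not AI model training, the sketch below counts how often a term appears in a document collection, year by year, to surface a trend. The directory, file-naming scheme and search term are hypothetical.

```python
# Minimal text-and-data-mining sketch: identify a trend (term frequency
# per year) across a document collection. Paths and terms are hypothetical.
import re
from collections import defaultdict
from pathlib import Path

term = "drought"
hits_by_year = defaultdict(int)

# Assume files named like 'report_2021.txt', 'report_2022.txt', ...
for doc in Path("articles/").glob("report_*.txt"):
    year = re.search(r"(\d{4})", doc.stem).group(1)
    text = doc.read_text(encoding="utf-8").lower()
    hits_by_year[year] += len(re.findall(rf"\b{term}\b", text))

for year in sorted(hits_by_year):
    print(year, hits_by_year[year])
```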
Box 1.7 – Text and data mining around the world
European Union: There are two text and data mining (TDM) exceptions embedded in the Digital Single Market Directive (EU 2019/790) – one for scientific research (article 3) and another for general use (article 4). The Artificial Intelligence Act (Regulation (EU) 2024/1689) specifically characterises the training of AI models as involving ‘text and data mining techniques’ (recital 105) and refers to the TDM exception (article 53). The recent case of Kneschke v. LAION [2024] endorsed the view that the TDM exception extends to cover AI training (Goldstein et al. 2024a, 2024b).
United States: It has been argued that training AI models falls within the scope of the fair use doctrine (Khan 2024; Klosek and Blumenthal 2024). However, the case Thomson Reuters v. Ross [2023] 694 F.Supp.3d 467 highlights that whether AI training is covered by the doctrine depends on whether the fair use factors are met in the circumstances (ReedSmith 2025).
United Kingdom: There is a TDM exception that applies to non-commercial research (UK Intellectual Property Office 2014). There have been proposals to expand the exception to cover all uses, though these are still under consideration (Pinsent Masons 2023; UK Government 2024).
Japan: The Japanese Copyright Act includes broad statutory exemptions for TDM (article 30-4(ii)), provided the work is used for ‘non-enjoyment’ purposes (Senftleben 2022, p. 1494). In essence, the requirement for ‘non-enjoyment’ distinguishes between whether the work is being consumed as a work or as data, and is broadly equivalent to the distinction between expressive and non-expressive uses.
Singapore: The Singaporean Copyright Act includes a specific TDM exception, as well as a broader fair use exception (Ng-Loy 2024).
To assist its consideration of this option, the PC is seeking feedback about the likely effects of a TDM exception on the AI market, the creative sector and productivity in general – particularly in light of the following considerations.
• At present, large AI models (including generative AI and large language models) are generally available to be used in Australia. The introduction (or not) of a TDM exception is unlikely to affect whether AI models continue to be available and used in Australia (PC 2024c, p. 13).
• At present, large AI models are trained overseas, not in Australia. It is unclear whether the introduction of a TDM exception would change this trend.
• As discussed above, large AI models are already being trained on unlicensed copyrighted materials.
• A TDM exception could make a difference to whether smaller, low-compute models (such as task-specific models) can be built and trained in Australia, for example by Australian research institutions, medical technology firms and research service providers.
It should also be noted that a TDM exception would not be a ‘blank cheque’ for all copyrighted materials to be used as inputs into all AI models. As discussed in box 1.6, the use must also be considered ‘fair’ in the circumstances – this requirement would act as a check on copyrighted works being used unfairly, preserving the integrity of the copyright holder’s legal and commercial interests in the work. There may be a need for legislative criteria or regulatory guidance about what types of uses are likely to be considered fair.
Information request 1.1
The PC is seeking feedback on the issue of copyrighted materials being used to train AI models.
• Are reforms to the copyright regime (including licensing arrangements) required? If so, what are they and why?
The PC is also seeking feedback on the proposal to amend the Copyright Act 1968 (Cth) to include a fair dealing exception for text and data mining.
• How would an exception covering text and data mining affect the development and use of AI in Australia? What are the costs, benefits and risks of a text and data mining exception likely to be?
• How should the exception be implemented in the Copyright Act – for example, should it be through a broad text and data mining exception or one that covers non-commercial uses only?
• Is there a need for legislative criteria or regulatory guidance to help provide clarity about what types of uses are fair?