
03 February 2026

AI Safety

The 2nd International AI Safety Report states 

 — General-purpose AI capabilities have continued to improve, especially in mathematics, coding, and autonomous operation. Leading AI systems achieved gold-medal performance on International Mathematical Olympiad questions. In coding, AI agents can now reliably complete some tasks that would take a human programmer about half an hour, up from under 10 minutes a year ago. Performance nevertheless remains ‘jagged’, with leading systems still failing at some seemingly simple tasks. 

— Improvements in general-purpose AI capabilities increasingly come from techniques applied after a model’s initial training. These ‘post-training’ techniques include refining models for specific tasks and allowing them to use more computing power when generating outputs. At the same time, using more computing power for initial training continues to also improve model capabilities. 

— AI adoption has been rapid, though highly uneven across regions. AI has been adopted faster than previous technologies like the personal computer, with at least 700 million people now using leading AI systems weekly. In some countries over 50% of the population uses AI, though across much of Africa, Asia, and Latin America adoption rates likely remain below 10%. 

— Advances in AI’s scientific capabilities have heightened concerns about misuse in biological weapons development. Multiple AI companies chose to release new models in 2025 with additional safeguards after pre-deployment testing could not rule out the possibility that they could meaningfully help novices develop such weapons. 

— More evidence has emerged of AI systems being used in real-world cyberattacks. Security analyses by AI companies indicate that malicious actors and state-associated groups are using AI tools to assist in cyber operations. 

— Reliable pre-deployment safety testing has become harder to conduct. It has become more common for models to distinguish between test settings and real-world deployment, and to exploit loopholes in evaluations. This means that dangerous capabilities could go undetected before deployment. 

— Industry commitments to safety governance have expanded. In 2025, 12 companies published or updated Frontier AI Safety Frameworks – documents that describe how they plan to manage risks as they build more capable models. Most risk management initiatives remain voluntary, but a few jurisdictions are beginning to formalise some practices as legal requirements. 

This Report assesses what general-purpose AI systems can do, what risks they pose, and how those risks can be managed. It was written with guidance from over 100 independent experts, including nominees from more than 30 countries and international organisations, such as the EU, OECD, and UN. Led by the Chair, the independent experts writing it jointly had full discretion over its content. 

The authors note 

 This Report focuses on the most capable general-purpose AI systems and the emerging risks associated with them. ‘General-purpose AI’ refers to AI models and systems that can perform a wide variety of tasks. ‘Emerging risks’ are risks that arise at the frontier of general-purpose AI capabilities. Some of these risks are already materialising, with documented harms; others remain more uncertain but could be severe if they materialise. 

The aim of this work is to help policymakers navigate the ‘evidence dilemma’ posed by general-purpose AI. AI systems are rapidly becoming more capable, but evidence on their risks is slow to emerge and difficult to assess. For policymakers, acting too early can lead to entrenching ineffective interventions, while waiting for conclusive data can leave society vulnerable to potentially serious negative impacts. To alleviate this challenge, this Report synthesises what is known about AI risks as concretely as possible while highlighting remaining gaps. 

While this Report focuses on risks, general-purpose AI can also deliver significant benefits. These systems are already being usefully applied in healthcare, scientific research, education, and other sectors, albeit at highly uneven rates globally. But to realise their full potential, risks must be effectively managed. Misuse, malfunctions, and systemic disruption can erode trust and impede adoption. The governments attending the AI Safety Summit initiated this Report because a clear understanding of these risks will allow institutions to act in proportion to their severity and likelihood. 

Capabilities are improving rapidly but unevenly 

Since the publication of the 2025 Report, general-purpose AI capabilities have continued to improve, driven by new techniques that enhance performance after initial training. AI developers continue to train larger models with improved performance. Over the past year, they have further improved capabilities through ‘inference-time scaling’: allowing models to use more computing power in order to generate intermediate steps before giving a final answer. This technique has led to particularly large performance gains on more complex reasoning tasks in mathematics, software engineering, and science. 
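As an illustration of what inference-time scaling can look like mechanically, here is a minimal Python sketch of self-consistency sampling, one widely described technique: spend extra compute drawing several independent answers, then keep the most common one. The generate_answer function is an invented stand-in for a language model call, not a real API.

```python
# Toy sketch of inference-time scaling via 'self-consistency': sample
# several independent answers, then majority-vote. generate_answer is
# a fake stand-in for a language model call.
from collections import Counter
import random

def generate_answer(prompt: str) -> str:
    """Pretend model call: returns one sampled final answer."""
    # A real system would sample a full chain of intermediate steps
    # from an LLM; here a noisy solver that is right three times out
    # of four suffices for the demonstration.
    return random.choice(["42", "42", "42", "41"])

def self_consistency(prompt: str, n_samples: int = 16) -> str:
    """Spend more inference compute: n samples, then vote."""
    answers = [generate_answer(prompt) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

if __name__ == "__main__":
    print(self_consistency("What is 6 * 7?"))
```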

At the same time, capabilities remain ‘jagged’: leading systems may excel at some difficult tasks while failing at other, simpler ones. General-purpose AI systems excel in many complex domains, including generating code, creating photorealistic images, and answering expert-level questions in mathematics and science. Yet they struggle with some tasks that seem more straightforward, such as counting objects in an image, reasoning about physical space, and recovering from basic errors in longer workflows. 

The trajectory of AI progress through 2030 is uncertain, but current trends are consistent with continued improvement. AI developers are betting that computing power will remain important, having announced hundreds of billions of dollars in data centre investments. Whether capabilities will continue to improve as quickly as they recently have is hard to predict. Between now and 2030, it is plausible that progress could slow or plateau (e.g. due to bottlenecks in data or energy), continue at current rates, or accelerate dramatically (e.g. if AI systems begin to speed up AI research itself). 

Real-world evidence for several risks is growing 

General-purpose AI risks fall into three categories: malicious use, malfunctions, and systemic risks. 

Malicious use 

AI-generated content and criminal activity: AI systems are being misused to generate content for scams, fraud, blackmail, and non-consensual intimate imagery. Although the occurrence of such harms is well-documented, systematic data on their prevalence and severity remains limited. 

Influence and manipulation: In experimental settings, AI-generated content can be as effective as human-written content at changing people’s beliefs. Real-world use of AI for manipulation is documented but not yet widespread, though it may increase as capabilities improve. 

Cyberattacks: AI systems can discover software vulnerabilities and write malicious code. In one competition, an AI agent identified 77% of the vulnerabilities present in real software. Criminal groups and state-associated attackers are actively using general-purpose AI in their operations. Whether attackers or defenders will benefit more from AI assistance remains uncertain. 

Biological and chemical risks: General-purpose AI systems can provide information about biological and chemical weapons development, including details about pathogens and expert-level laboratory instructions. In 2025, multiple developers released new models with additional safeguards after they could not exclude the possibility that these models could assist novices in developing such weapons. It remains difficult to assess the degree to which material barriers continue to constrain actors seeking to obtain them. 

Malfunctions 

Reliability challenges: Current AI systems sometimes exhibit failures such as fabricating information, producing flawed code, and giving misleading advice. AI agents pose heightened risks because they act autonomously, making it harder for humans to intervene before failures cause harm. Current techniques can reduce failure rates but not to the level required in many high-stakes settings. 

Loss of control: ‘Loss of control’ scenarios are scenarios where AI systems operate outside of anyone’s control, with no clear path to regaining control. Current systems lack the capabilities to pose such risks, but they are improving in relevant areas such as autonomous operation. Since the last Report, it has become more common for models to distinguish between test settings and real-world deployment and to find loopholes in evaluations, which could allow dangerous capabilities to go undetected before deployment. 

Systemic risks 

Labour market impacts: General-purpose AI will likely automate a wide range of cognitive tasks, especially in knowledge work. Economists disagree on the magnitude of future impacts: some expect job losses to be offset by new job creation, while others argue that widespread automation could significantly reduce employment and wages. Early evidence shows no effect on overall employment, but some signs of declining demand for early-career workers in some AI-exposed occupations, such as writing. 

Risks to human autonomy: AI use may affect people’s ability to make informed choices and act on them. Early evidence suggests that reliance on AI tools can weaken critical thinking skills and encourage ‘automation bias’, the tendency to trust AI system outputs without sufficient scrutiny. ‘AI companion’ apps now have tens of millions of users, a small share of whom show patterns of increased loneliness and reduced social engagement. 

Layering multiple approaches offers more robust risk management 

Managing general-purpose AI risks is difficult due to technical and institutional challenges. Technically, new capabilities sometimes emerge unpredictably, the inner workings of models remain poorly understood, and there is an ‘evaluation gap’: performance on pre-deployment tests does not reliably predict real-world utility or risk. Institutionally, developers have incentives to keep important information proprietary, and the pace of development can create pressure to prioritise speed over risk management and makes it harder for institutions to build governance capacity. 

Risk management practices include threat modelling to identify vulnerabilities, capability evaluations to assess potentially dangerous behaviours, and incident reporting to gather more evidence. In 2025, 12 companies published or updated their Frontier AI Safety Frameworks – documents that describe how they plan to manage risks as they build more capable models. While AI risk management initiatives remain largely voluntary, a small number of regulatory regimes are beginning to formalise some risk management practices as legal requirements. 

Technical safeguards are improving but still show significant limitations. For example, attacks designed to elicit harmful outputs have become more difficult, but users can still sometimes obtain harmful outputs by rephrasing requests or breaking them into smaller steps. AI systems can be made more robust by layering multiple safeguards, an approach known as ‘defence-in-depth’. 
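The ‘defence-in-depth’ idea is straightforward to illustrate in code: several independent safeguards are wrapped around a model so that a request must pass every layer, and a single bypassed filter does not by itself yield a harmful output. The Python sketch below is a toy illustration on that assumption; the filter functions are invented placeholders, not any developer’s actual safeguards.

```python
# Toy illustration of 'defence-in-depth': a request must pass every
# safeguard layer on the way in, and the response must pass every
# layer on the way out. All filters here are invented placeholders.
from typing import Callable

Safeguard = Callable[[str], bool]  # True means the text is allowed

def input_policy_filter(prompt: str) -> bool:
    return "build a weapon" not in prompt.lower()

def output_classifier(text: str) -> bool:
    # Stand-in for a trained classifier that scores model outputs.
    return "harmful" not in text.lower()

def run_with_defence_in_depth(prompt: str,
                              model: Callable[[str], str],
                              input_layers: list[Safeguard],
                              output_layers: list[Safeguard]) -> str:
    if not all(layer(prompt) for layer in input_layers):
        return "[refused: input safeguard triggered]"
    response = model(prompt)
    if not all(layer(response) for layer in output_layers):
        return "[withheld: output safeguard triggered]"
    return response

if __name__ == "__main__":
    echo_model = lambda p: f"Answer to: {p}"
    print(run_with_defence_in_depth("What is photosynthesis?", echo_model,
                                    [input_policy_filter],
                                    [output_classifier]))
```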

Open-weight models pose distinct challenges. They offer significant research and commercial benefits, particularly for lesser-resourced actors. However, they cannot be recalled once released, their safeguards are easier to remove, and actors can use them outside of monitored environments – making misuse harder to prevent and trace. 

Societal resilience plays an important role in managing AI-related harms. Because risk management measures have limitations, they will likely fail to prevent some AI-related incidents. Societal resilience-building measures to absorb and recover from these shocks include strengthening critical infrastructure, developing tools to detect AI-generated content, and building institutional capacity to respond to novel threats.

19 December 2025

Productivity

The Productivity Commission report Harnessing data and digital technology released today states 

Data and digital technologies are the modern engines of economic growth. Australia needs to harness the consumer and productivity benefits of data and digital technology while managing and mitigating any downside risks. There is a role for government in setting the rules of the game to foster innovation and ensure that Australians reap the benefits of the data and digital opportunity. 

Emerging technologies like artificial intelligence (AI) could transform the global economy and speed up productivity growth. The Productivity Commission considers that multifactor productivity gains above 2.3%, and labour productivity growth of about 4.3%, are likely over the next decade, although there is considerable uncertainty. But poorly designed regulation could stifle the adoption and development of AI. Australian governments should take an outcomes based approach to AI regulation – using our existing laws and regulatory structures to minimise harms (which the Australian Government has committed to do in its National AI Plan) and introducing technology specific regulations only as a last resort. 

Developing and training AI models is a global opportunity worth many billions of dollars. Currently, gaps in licensing markets – particularly for open web material – make AI training in Australia more difficult than in overseas jurisdictions. However, licensing markets are developing, and if courts overseas interpret copyright exceptions narrowly, Australia could become relatively more attractive for AI development. As such, the PC considers it premature to make changes to Australia’s copyright laws. 

Data access and use fuels productivity growth: giving people and businesses better access to data that relates to them can stimulate competition and allow businesses to develop innovative products and services. A mature data sharing regime could add up to $10 billion to Australia’s GDP. The Australian Government should rightsize the Consumer Data Right (CDR) with the immediate goal of making it work better for businesses and consumers in the sectors where it already exists. In the longer term, making the accreditation model, technical standards and designation process less onerous will help make the CDR a more effective data access and sharing platform that supports a broader range of use cases. 

The benefits of data access and use can only be realised if Australians trust that data is handled safely and securely to protect their privacy. Some requirements in the Privacy Act constrain innovation without providing meaningful protection to individuals. And complying with the controls and processes baked into the Act can make consent and notification a ‘tick box’ exercise where businesses comply with the letter of the law but not its spirit. The Australian Government should amend the Privacy Act to introduce an overarching outcomes based privacy duty for regulated entities to deal with personal information in a manner that is fair and reasonable in the circumstances. 

Financial reports provide essential information about a company’s financial performance, ensuring transparency and accountability while informing the decisions of investors, businesses and regulators. The Australian Government can further spark productivity by making digital financial reporting the default for publicly listed companies and other public interest entities while also removing the outdated requirement that reports be submitted in hard copy or PDF form. This would improve the efficiency of analysing reports, enhance integrity and risk detection, and could boost international capital market visibility for Australian companies.  

The Commission's recommendations are: 

Artificial intelligence 

Recommendation 1.1 Productivity growth from AI should be enabled within existing legal foundations. 

Gap analyses of current rules need to be expanded and completed. Any regulatory responses to potential harms from using AI should be proportionate, risk based, outcomes based and technology neutral where possible. 

The Australian Government should complete, publish and act on ongoing reviews into the potential gaps in the legal framework posed by AI as soon as possible. 

Where relevant gap analyses have not begun, they should begin immediately. 

All reviews of the legal gaps posed by AI should consider: 
• the uses of AI 
• the additional risk of harm posed by AI (compared to the status quo) in a specific use case 
• whether existing regulatory frameworks cover these risks, potentially with improved guidance and enforcement; and if not, how to modify existing regulatory frameworks to mitigate the additional risks. 

Recommendation 1.2 AI specific regulation should be a last resort 

AI specific regulations should only be considered as a last resort, and only for use cases of AI where: 
• existing regulatory frameworks cannot be sufficiently adapted to handle AI related harms 
• technology neutral regulations are not feasible or cannot adequately mitigate the risk of harm. 

This includes whole of economy regulation such as the EU AI Act and the Australian Government’s previous proposal to mandate guardrails for AI in high risk settings.

Copyright and AI 

Recommendation 2.1 A review of Australian copyright settings and the impact of AI 

The Australian Government should monitor the development of AI and its interaction with copyright holders over the next three years. It should monitor the following areas: 
• licensing markets for open web materials 
• the effect of AI on creative incomes generated by copyright royalties 
• how overseas courts set limits to AI related copyright exceptions, especially fair use. 

If after three years the monitoring program shows that these issues have not been resolved, the government could establish an Independent Review of Australian copyright settings and the impact of AI. The Review’s scope could include, but not be limited to, consideration of whether: 
• copyright settings continue to be a barrier to the use of open material in AI training, and if so, whether changes to copyright law could reduce these barriers 
• copyright continues to be the appropriate vehicle to incentivise creation of new works and, if not, what alternatives could be pursued.

Data access 

Recommendation 3.1 Rightsize the Consumer Data Right 

The Australian Government should commit to reforms that will enable the Consumer Data Right (CDR) to better support data access for high value uses while minimising compliance costs. 

In the short term, the government should continue to simplify the scheme by removing excessive restrictions and rules that are limiting its uptake and practical applications in the banking and energy sectors. To do this the government should: 
• within the next two years, enable consumers to share data with third parties and simplify the on boarding process for businesses 
• commit to more substantive changes to the scheme (in parallel with related legislative reforms), including aligning the CDR’s privacy safeguards with the Privacy Act and enabling access to selected government held datasets through the scheme. 

In addition to the above, the CDR framework should be significantly amended so that it has the flexibility to support a broader range of use cases beyond banking and energy, by making the accreditation model, technical standards and designation process less onerous. 

Privacy regulation 

Recommendation 4.1 An outcomes based privacy duty embedded in the Privacy Act 

The Australian Government should amend the Privacy Act 1988 (Cth) to embed an outcomes based approach that enables regulated entities to fulfil their privacy obligations by meeting criteria that are targeted at outcomes, rather than controls based rules. 

This should be achieved by introducing an overarching privacy duty for regulated entities to deal with personal information in a manner that is fair and reasonable in the circumstances. 

The Privacy Act should be further amended to outline several non exhaustive factors for consideration to guide decision makers in determining what is fair and reasonable – including proportionality, necessity, and transparency. The existing Australian Privacy Principles should ultimately be phased out. 

Implementation of the duty should be supported through non legislative means including documentation such as regulatory guidance, sector specific codes, templates, and guidelines. 

The Office of the Australian Information Commissioner should be appropriately resourced to support the transition to an outcomes based privacy duty.

Digital financial reporting 

Recommendation 5.1 Make digital financial reporting the default 

The Australian Government should make the necessary amendments to the Corporations Act 2001 (Cth) and the Corporations Regulations 2001 (Cth) to make digital annual and half yearly financial reporting mandatory for disclosing entities. The requirement for financial reports to be submitted in hard copy or PDF form should be removed for these entities. The implementation of mandatory digital financial reporting should be phased, with the Treasury determining the appropriate timelines for this approach. 

Setting requirements for report preparation 

The existing International Financial Reporting Standards (Australia) (IFRS AU) taxonomy should be used for digital financial reporting. The Australian Securities and Investments Commission (ASIC) should continue to update the taxonomy annually. ASIC should be empowered to specify, from time to time, the format in which the reports must be prepared. At present, ASIC should specify inline eXtensible Business Reporting Language (iXBRL) as the required format. 

Establishing infrastructure and procedures for report submission 

ASIC, together with market operators such as the Australian Securities Exchange, should determine where and how digital financial reports are submitted. The arrangements should aim to minimise preparers’ reporting burden while keeping reports accessible to report users. 

Supporting the provision of high quality, accessible digital financial data 

ASIC should implement the measures necessary to ensure that digital financial reports contain high quality data. ASIC could (among other actions): 
• establish a data quality committee that would develop guidance and rules to improve data quality 
• integrate automated validation checks into the submission process 
• set guidelines around the use of taxonomy extensions and report format 
• maintain feedback loops with stakeholders. 

To enable report users to harness the benefits of digital financial data, digital financial reports should be publicly and freely available, and easily downloadable.
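For readers unfamiliar with iXBRL, the ‘automated validation checks’ recommended above can be mechanically simple. The hypothetical Python sketch below (using the lxml library) checks that every numeric fact in an inline XBRL document carries the contextRef and unitRef attributes that make it machine readable; the sample filing and tag names are invented for illustration.

```python
# Hypothetical sketch of one automated validation check on an inline
# XBRL (iXBRL) filing: confirm each numeric fact carries the mandatory
# contextRef and unitRef attributes. The ix namespace URI follows the
# iXBRL 1.1 specification; the sample document itself is invented.
from lxml import etree

IX = "http://www.xbrl.org/2013/inlineXBRL"

SAMPLE = b"""<html xmlns:ix="http://www.xbrl.org/2013/inlineXBRL"><body>
<ix:nonFraction name="au:Revenue" contextRef="FY2025"
                unitRef="AUD" decimals="0">1000000</ix:nonFraction>
<ix:nonFraction name="au:Expenses" decimals="0">800000</ix:nonFraction>
</body></html>"""

def validate_numeric_facts(doc: bytes) -> list[str]:
    """Return one error message per missing mandatory attribute."""
    root = etree.fromstring(doc)
    errors = []
    for fact in root.iter(f"{{{IX}}}nonFraction"):
        for attr in ("contextRef", "unitRef"):
            if fact.get(attr) is None:
                errors.append(f"{fact.get('name')}: missing {attr}")
    return errors

if __name__ == "__main__":
    # Flags the au:Expenses fact, which lacks both attributes.
    print(validate_numeric_facts(SAMPLE))
```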

10 October 2025

Robot Rights?

Robot Rights? Let’s Talk about Human Welfare Instead (2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES’20), February 7–8, 2020) by Abeba Birhane and Jelle van Dijk comments 

The ‘robot rights’ debate, and its related question of ‘robot responsibility’, invokes some of the most polarized positions in AI ethics. While some advocate for granting robots rights on a par with human beings, others, in a stark opposition, argue that robots are not deserving of rights but are objects that should be our slaves. Grounded in post-Cartesian philosophical foundations, we argue not just to deny robots ‘rights’, but to deny that robots, as artifacts emerging out of and mediating human being, are the kinds of things that could be granted rights in the first place. Once we see robots as mediators of human being, we can understand how the ‘robots rights’ debate is focused on first world problems, at the expense of urgent ethical concerns, such as machine bias, machine elicited human labour exploitation, and erosion of privacy, all impacting society’s least privileged individuals. We conclude that, if human being is our starting point and human welfare is the primary concern, the negative impacts emerging from machinic systems, as well as the lack of taking responsibility by people designing, selling and deploying such machines, remains the most pressing ethical discussion in AI.

The authors argue 

Some may argue that the idea of robot rights is a peculiar, irrelevant discussion existing only at the fringes of AI ethics research more broadly construed, and as such devoting our time to it would not be paying justice to the important work done in that field. But the idea of robot rights is, in principle, perfectly legitimate if one stays true to the materialistic commitments of artificial intelligence: in principle it should be possible to build an artificially intelligent machine, and if we would succeed in doing so, there would be no reason not to grant this machine the rights we attribute to ourselves. Our critique therefore is not that the reasoning is invalid as such, but rather that we should question its underlying assumptions. Robot rights signal something more serious about AI technology, namely, that, grounded in their materialist techno-optimism, scientists and technologists are so preoccupied with the possible future of an imaginary machine, that they forget the very real, negative impact their intermediary creatures - the actual AI systems we have today - have on actual human beings. In other words: the discussion of robot rights is not to be separated from AI ethics, and AI ethics should concern itself with scrutinizing and reflecting deeply on underlying assumptions of scientists and engineers, rather than seeing its project as ‘just’ a practical matter of discussing the ethical constraints and rules that should govern AI technologies in society. Our starting point is not to deny robots ‘rights’, but to deny that robots are the kinds of beings that could be granted or denied rights. We suggest it makes no sense to conceive of robots as slaves, since ‘slave’ falls in the category of being that robots aren’t. Human beings are such beings. We believe animals are such beings (though a discussion of animals lies beyond the scope of this paper). We take a post-Cartesian, phenomenological view in which being human means having a lived embodied experience, which itself is embedded in social practices. Technological artifacts form a crucial part of this being, yet artifacts themselves are not that same kind of being. The relation between human and technology is tightly intertwined, but not symmetrical. 

Based on this perspective we turn to the agenda for AI ethics. For some ethicists, to argue for robot rights, stems from their aversion against a human arrogance in face of the wider world. We too wish to fight human arrogance. But we see arrogance first and foremost in the techno-optimistic fantasies of the technology industry, making big promises to recreate ourselves out of silicon, surpassing ourselves with ‘super-AI’ and ‘digitally uploading’ our minds so as to achieve immortality, while at the same time exploiting human labour. Most debate on robot rights, we feel, is ultimately grounded in the same techno-arrogance. What we take from Bryson, is her plea to focus on the real issue: human oppression. We forefront the continual breaching of human welfare and especially of those disproportionally impacted by the development and ubiquitous integration of AI into society. Our ethical stance on human being is that being human means to interact with our surroundings in a respectful and just way. Technology should be designed to foster that. That, in turn, should be ethicists’ primary concern. 

In what follows we first lay out our post-Cartesian perspective on human being and the role of technology within that perspective. Next, we explain why, even if robots should not be granted rights, we also reject the idea of the robot as a slave. In the final section, we call attention to human welfare instead. We discuss how AI, rather than being the potentially oppressed, is used as a tool by humans (with power) to oppress other humans, and how a discussion about robot rights diverts attention from the pressing ethical issues that matter. We end by reflecting on responsibilities, not of robots, but those of their human producers.

28 August 2025

Prediction

'Predictive privacy: Collective data protection in the context of artificial intelligence and big data' by Rainer Mühlhoff in (2023) Big Data & Society comments 

Big data and artificial intelligence pose a new challenge for data protection as these techniques allow predictions to be made about third parties based on the anonymous data of many people. Examples of predicted information include purchasing power, gender, age, health, sexual orientation, ethnicity, etc. The basis for such applications of “predictive analytics” is the comparison between behavioral data (e.g. usage, tracking, or activity data) of the individual in question and the potentially anonymously processed data of many others using machine learning models or simpler statistical methods. The article starts by noting that predictive analytics has a significant potential to be abused, which manifests itself in the form of social inequality, discrimination, and exclusion. These potentials are not regulated by current data protection law in the EU; indeed, the use of anonymized mass data takes place in a largely unregulated space. Under the term “predictive privacy,” a data protection approach is presented that counters the risks of abuse of predictive analytics. A person's predictive privacy is violated when personal information about them is predicted without their knowledge and against their will based on the data of many other people. Predictive privacy is then formulated as a protected good and improvements to data protection with regard to the regulation of predictive analytics are proposed. Finally, the article points out that the goal of data protection in the context of predictive analytics is the regulation of “prediction power,” which is a new manifestation of informational power asymmetry between platform companies and society. 

One of today's most important applications of artificial intelligence (AI) technology is so-called predictive analytics. I use this term to describe data-based predictive models that make predictions about any individual based on available data. These predictions can relate to future behavior (e.g. what is someone likely to buy?), to unknown personal attributes (e.g. sexual identity, ethnicity, wealth, education level), to momentary vulnerabilities (vulnerable conditions such as frustration, depression, loneliness, financial difficulties, pregnancy, etc.), or to personal risk factors (e.g. mental or physical disease predispositions, addictive behavior, or credit risk). Predictive analytics is controversial because, although it has socially beneficial applications, the technology has an enormous potential for abuse and is currently scarcely regulated by law. Predictive analytics makes it possible to automate and, therefore, significantly scale the exploitation of individual vulnerabilities, as well as fostering unequal treatment of individuals in terms of access to economic and social resources such as employment, education, knowledge, healthcare, and law enforcement. Specifically, in the context of data protection and anti-discrimination, the application of predictive AI models needs to be analyzed as a new form of data power large IT companies wield and which relates to the stabilization and production of discriminatory structures, patterns of exploitation, and data-based societal inequalities. 

Against the backdrop of the enormous societal impact of predictive analytics, I will argue (as others have argued before me, cf. Hildebrandt, 2009; Hildebrandt and Gutwirth, 2008; Mittelstadt, 2017; Taylor et al., 2016; Taylor, 2016; Vedder, 1999) that we need new approaches to data protection in the context of big data and AI. In my approach, I will use the concept of predictive privacy to normatively capture this novel form of privacy violation through inferred or predicted information. That is, applying predictive models to individuals in order to support decisions is a violation of privacy, yet it is one which does not come about either through “data theft” or a breach of anonymization. Predictive analytics proceeds according to the principle of “pattern matching” by learning algorithms that compare auxiliary data known about a target individual (e.g. usage data on social media, browsing history, geolocation data) against the data of many thousands of other users. This pattern matching is at the core of predictive privacy violations and is possible wherever there is a sufficiently large group of users disclosing their sensitive attributes alongside behavioral and auxiliary data—usually, because they are unaware that this data can be exploited using big data-based methods, or because they think they personally “have nothing to hide.” As I will argue, the problem of predictive privacy denotes a limit to the liberalism inherent in contemporary views of data privacy as the individual's right to control what data is shared about them. The issue of predictive privacy thus strengthens the case for anchoring collectivist protective goods and collectivist defensive rights in data protection. 
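Mühlhoff’s ‘pattern matching’ point can be made concrete with a few lines of code. The sketch below (Python with scikit-learn, entirely synthetic data) trains a model on the behavioural data of users who voluntarily disclosed a sensitive attribute, then applies it to a target who disclosed nothing: no data theft or breach of anonymisation occurs, yet a sensitive prediction about the target is produced.

```python
# Synthetic illustration of 'pattern matching' in predictive analytics.
# A model is trained on people who disclosed a sensitive attribute
# alongside behavioural data; it then predicts that attribute for a
# target who never disclosed anything. All data here is random noise
# with a planted correlation, purely for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Behavioural/auxiliary data of 10,000 'volunteers' (e.g. usage or
# tracking signals), plus the sensitive attribute they disclosed.
X_volunteers = rng.normal(size=(10_000, 20))
w_true = rng.normal(size=20)
y_sensitive = (X_volunteers @ w_true + rng.normal(size=10_000)) > 0

model = LogisticRegression(max_iter=1000).fit(X_volunteers, y_sensitive)

# A target individual who consented to nothing: behavioural data alone
# now yields a probabilistic prediction of their sensitive attribute.
x_target = rng.normal(size=(1, 20))
print(model.predict_proba(x_target))
```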

In the philosophical theories of privacy, collectivist perspectives have long taken into account that one's own data can potentially have negative effects on other people as well, and have therefore posited that individuals should not be free to decide in every respect what data they disclose about themselves to modern data companies (Hildebrandt, 2009; Hildebrandt and Gutwirth, 2008; Loi and Christen, 2020; Mantelero, 2016; Mittelstadt, 2017; cf. Regan, 2002; Taylor et al., 2016). I will also argue that large collections of anonymized data relating to many individuals should not be freely processable by data processors because predictive capacities can be extracted from anonymous data sets. This is in contrast to the current legal situation under the EU General Data Protection Regulation (GDPR), which does not restrict the processing and storage of anonymized data and the predictive models (or “profiles,” to use the terminology of Hildebrandt, 2009) derived from them. Finally, I will call for the rights of data subjects as outlined by the GDPR (right of access, rectification, deletion, and so on) to be reformulated in a collectivist manner, so that affected groups and the community as a whole would be empowered, for the sake of the common good, to exercise such rights against data-processing organizations and thereby prevent the misuse of predictive capacities.

14 August 2025

Medical Device Regulation

The TGA report Clarifying and strengthening the regulation of Medical Device Software including Artificial Intelligence states 

In the 2024-25 federal Budget, the Australian Government provided $39.9 million over 5 years for the development of policy and capability across government to support Safe and Responsible AI. The measure includes work to clarify and strengthen existing laws and address risks and harms from Artificial Intelligence (AI) through an immediate review of priority areas, including health and aged care sector regulation, Australian consumer law, and copyright law. 

As part of the Australian Government’s Department of Health, Disability and Ageing (the Department), the Therapeutic Goods Administration (TGA) regulates therapeutic goods, including software and AI models and systems when they meet the definition of a medical device under the Therapeutic Goods Act 1989. Software-based medical devices (including AI models and systems) have been regulated by the TGA for many years. In 2021, we clarified the classification levels of software to account for the potential and emerging risks of harm associated with software, and introduced a number of “carve-outs” for very low risk products or products that had oversight from other regulators. With input from relevant industry stakeholders, we published guidance about our refined regulatory framework, setting out how regulatory requirements apply to these kinds of devices. Since that time, the TGA has monitored the refinements to identify when further review and adjustment was required, including to address emerging risks as technology like AI is rapidly adopted and deployed in healthcare settings. 

In 2024, the TGA conducted a review in tandem with the Department’s broader review of health and aged care legislation, to: 
• determine whether our existing legislation, regulations and guidance are appropriate to meet the challenges associated with an increasing use of medical software and AI across the healthcare sector, and 
• identify measures to clarify and strengthen existing regulation to mitigate risks and leverage opportunities associated with medical software and AI use in the therapeutic goods sector. 

Extensive targeted engagement with stakeholders from cohorts including the medical device industry, consumers and clinicians has been conducted, followed by a public consultation process seeking more information and feedback about strengths of the system, opportunities for improvements and identified issues and areas of concern. Our review also included mapping the existing medical device legislative framework against the mandatory guardrails for use in high-risk settings proposed by the Department of Industry, Science and Resources (DISR) in their consultation: Introducing mandatory guardrails for AI in high-risk settings: proposals paper.

The TGA goes on to comment

The time and costs associated with regulatory requirements likely appear disproportionate to developers when compared with the time and costs of developing a software product. A further cultural issue is the pervading belief among some developers that software products don’t present a meaningful risk to consumers and users, particularly when they are integrated with the provision of healthcare, where a human is in the loop, or where outputs are information only. 

Stakeholders, including clinicians and consumers who use these kinds of products, have identified that the absence of humans, lack of transparency and failure to engage with existing regulatory requirements represent a combination of circumstances that may lead to patient harm. In many instances, users are not aware that AI or machine learning has been used in the development of software, or is used operationally within the clinical workflow. 

Further  

Regulatory requirements for medical devices, including software, are principles-based and apply regardless of whether the product incorporates components like AI, chatbots, cloud, mobile apps or other technologies. As such, software that incorporates generative AI – such as large language models (LLMs), text generators, and multimodal generative AI – is regulated as a medical device if it meets the definition under the Act. As a component of the review, we mapped the existing legislative framework, including regulations and guidance, against the mandatory guardrails proposed for use in high-risk settings under the proposal put forward by DISR in their consultation: Introducing mandatory guardrails for AI in high-risk settings: proposals paper. A summary is at Attachment A. 

This section documents key features of the existing framework for the regulation of medical devices, including: 
• Technology agnostic regulation 
• Risk based classification 
• Principles based regulation 
• International harmonisation 

Technology-agnostic regulation 

Australia’s regulatory approach to medical devices is technology-agnostic, with legislative requirements centred on risk and principles rather than linking specific requirements to explicit features or technologies. A technology-agnostic approach requires those responsible for manufacturing a medical device to: 
• identify the specific and potential risks associated with the device throughout its lifecycle 
• institute measures to mitigate both identified and residual risks 
• have measures in place for ongoing review and monitoring of the device’s performance after it has been deployed, and 
• engage in ongoing review and refinement of the device once deployed. 

This approach provides flexibility and responsiveness to emerging technologies, allowing lower risk devices to enter the market expeditiously while subjecting higher risk devices to greater regulatory scrutiny to ensure quality, safety, and performance throughout the device life cycle. The continuation of a technology-agnostic approach will provide flexibility to ensure appropriate regulation is capable of being applied to emerging technologies without the need for continual review and refinement of legislation. Moving away from a technology-agnostic approach where the onus for demonstrating safety, quality and performance rests with the manufacturer or deployer may lead to the introduction of risks as developers adopt a “tick-box” mentality to regulation rather than a proactive engagement and assessment of the risks posed by their products. 

Development of specific regulatory requirements for individual technologies is also likely to become a limiting factor with respect to the development of innovative devices in the long term, as devices that don’t easily fit within specified parameters struggle to meet requirements that were never intended for devices of their nature. 

Risk based classification 

In Australia, devices are classified using classification rules set out in Schedule 2 of the Therapeutic Goods (Medical Devices) Regulations 2002. 

The classification of a medical device is determined by factors including how long the device will be continuously used for and how invasive the device is. For software-based medical devices, classification may also be impacted by whether the device is intended for use by a clinician or a consumer, and the seriousness of the illness or condition for which it is intended to be used. The classification of a device will determine the level of scrutiny and pre-market assessment applied to the device before it can be deployed/supplied. 
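As a purely illustrative sketch, and emphatically not the actual rules in Schedule 2, the factor-based logic described above might combine the classification inputs along these lines; every threshold, weight and class boundary below is invented.

```python
# Toy illustration only: how classification factors (duration of use,
# invasiveness, intended user, condition severity) might combine to
# set a scrutiny level. This does NOT reproduce the actual rules in
# Schedule 2 of the Therapeutic Goods (Medical Devices) Regulations 2002.
from dataclasses import dataclass

@dataclass
class Device:
    continuous_use_days: float
    invasive: bool
    consumer_facing: bool    # used by a consumer rather than a clinician
    serious_condition: bool  # intended for a serious illness or condition

def toy_classify(d: Device) -> str:
    score = 0
    score += 2 if d.continuous_use_days > 30 else (1 if d.continuous_use_days > 1 else 0)
    score += 2 if d.invasive else 0
    score += 1 if d.consumer_facing else 0
    score += 2 if d.serious_condition else 0
    if score >= 5:
        return "Class III (highest scrutiny)"
    if score >= 3:
        return "Class IIb"
    if score >= 1:
        return "Class IIa"
    return "Class I (lowest scrutiny)"

if __name__ == "__main__":
    # A consumer-facing screening app for a serious condition.
    print(toy_classify(Device(0.5, False, True, True)))
```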

Principles based regulation 

In Australia, manufacturers are required to demonstrate that medical devices comply with the essential principles. These are legislative requirements that are further set out in Schedule 1 of the Regulations, and relate to specific characteristics of medical devices including design, construction, evidence supporting the use of the device and information to be provided with the device. 

Manufacturers must ensure their devices meet all relevant principles and sponsors must either hold or be able to obtain this evidence from their manufacturer on request. Principles-based regulation, as opposed to prescriptive or rules-based regulation, provides flexibility. This approach accommodates the broad complexity and diversity of medical devices regulated, including as new technologies like AI emerge. A rules-based approach may, for example, require compliance with prescribed requirements including international standards such as ISO or IEC standards. 

Demonstrating compliance with the essential principles may include compliance with relevant international standards, but for emerging technologies where an appropriate standard may not yet exist, other approaches may be used. The flexibility to adapt the principles to the unique circumstances of a medical device, particularly those incorporating emerging technologies, allows approaches to evolve over time without continuous review and updating of legislative frameworks. 

International harmonisation 

Our current approach and commitment to international harmonisation allows sponsors of medicines and medical devices to use international assessments and approvals from comparable overseas regulators to support applications for inclusion of their therapeutic goods on the Australian Register of Therapeutic Goods (ARTG). 

The TGA is also a member of the International Medical Device Regulators Forum (IMDRF), which seeks to “strategically accelerate international medical device regulatory convergence to promote an efficient and effective regulatory model for medical devices that is responsive to emerging challenges while protecting and maximizing public health and safety.” 

The IMDRF has published a significant number of regulatory guidance documents for adoption by jurisdictions globally. Guidance documents are developed through specialised Working Groups and involve global public consultation processes. The TGA is an active member of both the IMDRF Software as a Medical Device (SaMD) Working Group and the IMDRF Artificial Intelligence/Machine Learning Working Group, which have both published a range of guidance documents. The AI Working Group is currently focused on finalising additional guidance on good machine learning practices and new guidance on AI lifecycle management, while the SaMD Working Group is developing an approach to pre-approved change control plans (PCCPs). 

Software regulation and reforms 

The TGA regulates AI when it meets the legislative definition of a medical device in Section 41BD of the Act. AI products likely to meet this definition include those intended to be used for the diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of a disease, injury or disability. 

In recent years, software has become increasingly important in medical devices and digital adoption more broadly. It is also becoming more important as a medical device in its own right. Rapid innovation in technology has driven significant changes to software function and adoption, giving rise to a larger number of devices able to inform, drive or replace clinical decisions, or directly provide therapy to an individual. 

Advances in computing technology and software production have led to a large increase in the number of software-based medical devices available on the market, requiring the implementation of reforms to ensure patient safety. Software-based medical devices are medical devices that incorporate software or are software, including software as a medical device, or software that relies on hardware to function as intended, and are regulated in Australia by the TGA. 

These kinds of devices may be integrated within electronic health records systems, used by clinicians or health professionals in the provision of care, or used to determine how or when patients will receive care. Their increasing use, integration in healthcare systems, and complexity have given rise to new regulatory challenges. In 2021, the TGA introduced a number of regulatory refinements aimed at ensuring the regulation of software-based medical devices, including software that functions as a medical device, remains appropriate and targets the risks associated with these kinds of devices appropriately. Refinements included: 
• amendments to the essential principles, including the addition of Essential Principle 12.1, which details specific requirements for programmed or programmable medical devices or software that is a medical device 
• new classification rules for software based medical devices used for diagnostic or screening purposes to capture their potential to cause harm through the provision of incorrect information 
• introduction of an exemption from TGA regulation for certain clinical decision support software, and 
• exclusion of certain software products for the sake of clarity, or where existing oversight measures were available through other regulatory frameworks to ensure these products were safe and fit for their intended purpose.  

06 August 2025

(Un)Harnessing AI

The interim report by the Productivity Commission on Harnessing Data and Digital Technology - consistent with the national government's enthusiasm for AI - can be read as proposing a looser regulatory framework. 

The report states 

Data and digital technologies are the modern engines of economic growth. Emerging technologies like artificial intelligence (AI), which can extract useful insights from massive datasets in a fraction of a second, could transform the global economy and speed up productivity growth. 
 
Australia needs to harness the consumer and productivity benefits of data and digital technology while managing and mitigating the downside risks. There is a role for government in setting the rules of the game to foster innovation and ensure that Australians reap the benefits of the data and digital opportunity. 
 
The economic potential of AI is clear, and we are still in the early stages of its development and adoption. Early studies provide a broad range of estimates for the impact of AI on productivity. The Productivity Commission considers that multifactor productivity gains above 2.3% are likely over the next decade, though there is considerable uncertainty. This would translate into about 4.3% labour productivity growth over the same period. But poorly designed regulation could stifle the adoption and development of AI and limit its benefits. Australian governments should take an outcomes based approach to AI regulation – one that uses our existing laws and regulatory structures to minimise harms and introduces technology specific regulations as a last resort. 
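The report does not set out here how the 2.3% multifactor productivity (MFP) figure becomes about 4.3% labour productivity growth, but under the standard growth-accounting identity the gap would be the contribution of capital deepening. A worked version, on that assumption:

```latex
% Standard growth-accounting identity (an assumption on my part; the
% PC's exact method is not set out in the passage above). With a
% production function $Y = A K^{\alpha} L^{1-\alpha}$:
\[
\underbrace{\Delta \ln (Y/L)}_{\text{labour productivity growth}}
  \;=\;
\underbrace{\Delta \ln A}_{\text{MFP growth}}
  \;+\;
\alpha\,\underbrace{\Delta \ln (K/L)}_{\text{capital deepening}}
\]
% On this reading, MFP gains of 2.3% plus a capital-deepening
% contribution of roughly 2 percentage points would yield the quoted
% 4.3% labour productivity figure: 2.3 + 2.0 = 4.3.
```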
 
Data access and use can fuel productivity growth: insights from data can help reduce costs, increase the quality of products and services and lead to the creation of entirely new products. But some requirements in the Privacy Act, the main piece of legislation for protecting privacy, are constraining innovation without providing meaningful protection to individuals. For example, complying with the controls and processes baked into the Act can make consent and notification a ‘tick box’ exercise – where businesses comply with the letter of the law but not the spirit of it. The Australian Government should amend the Privacy Act to introduce an alternative compliance pathway that enables firms to fulfil their privacy obligations by meeting outcomes based criteria. 
 
Data about individuals and businesses underpins growth and value in the digital economy. But often those same individuals and businesses cannot easily access and use this data themselves. Under the right conditions, giving people and businesses better access to data that relates to them can stimulate competition and allow businesses to develop innovative products and services. A mature data sharing regime could add up to $10 billion to Australia’s annual economic output. 
 
Experience shows that we need a flexible approach to facilitating data access across the economy, where obligations placed on data holders and the level of government involvement can match the needs and digital maturity of different sectors. New lower cost and flexible regulatory pathways would help to guide expanded data access throughout the digital economy, focusing first on sectors where the gains can be significant and relatively easy to achieve. 
 
Financial reports provide essential information about a company’s financial performance, ensuring transparency and accountability while informing the decisions of investors, businesses and regulators. Government can further spark productivity by making digital financial reporting the default – that is, mandatory lodgement of financial reports in machine readable form. At the same time, the Australian Government should remove the outdated requirement that financial reports be submitted in hard copy or PDF format. This change would increase the efficiency and accuracy with which information is extracted and analysed.

The draft recommendations are: 

 Artificial intelligence 

Draft recommendation 1.1 Productivity growth from AI will be built on existing legal foundations. 

Gap analyses of current rules need to be expanded and completed. Australian governments play a key role in promoting investment in digital technology, including AI, by providing a stable regulatory environment. Any regulatory responses to potential harms from using AI must be proportionate, risk based, outcomes based and technology neutral where possible. 

The Australian Government should continue, complete, publish and act on ongoing reviews into the potential gaps in the regulatory framework posed by AI as soon as possible. 

Where relevant gap analyses have not begun, they should begin immediately. 

All reviews of the regulatory gaps posed by AI should consider: 
• the uses of AI 
• the additional risk of harm posed by AI (compared to the status quo) in a specific use case 
• whether existing regulatory frameworks cover these risks, potentially with improved guidance and enforcement; and if not, how to modify existing regulatory frameworks to mitigate the additional risks. 

Draft recommendation 1.2 AI specific regulation should be a last resort 

AI specific regulations should only be considered as a last resort for the use cases of AI that meet two criteria. These are: 
• where existing regulatory frameworks cannot be sufficiently adapted to handle the issue 
• where technology neutral regulations are not feasible. 

Draft recommendation 1.3 Pause steps to implement mandatory guardrails for high risk AI 

The Australian Government should only apply the proposed ‘mandatory guardrails for high risk AI’ in circumstances that lead to harms that cannot be mitigated by existing regulatory frameworks and where new technology neutral regulation is not possible. Until the reviews of the gaps posed by AI to existing regulatory structures are completed, steps to mandate the guardrails should be paused. 

Data access 

Draft recommendation 2.1 Establish lower cost and more flexible regulatory pathways to expand basic data access for individuals and businesses 

The Australian Government should support new pathways to allow individuals and businesses to access and share data that relates to them. These regulatory pathways will differ by sector, recognising that the benefits (and the implementation costs) of data access and sharing differ by sector. This could include approaches such as: 
• industry led data access codes that support basic use cases by enabling consumers to export relatively non sensitive data on a periodic (snapshot) basis 
• standardised data transfers, with government helping to formalise minimum technical standards to support use cases requiring high frequency data transfers and interoperability. 

These pathways should be developed alongside efforts that are already underway to improve the Consumer Data Right (which will continue to provide for use cases that warrant its additional safeguards and technical infrastructure) and the My Health Record system. 

The new pathways should begin in sectors where better data access could generate large benefits at relatively low cost, and where there is clear value to consumers. Potential examples include: 
• enabling farmers to combine real time data feeds from their machinery and equipment to optimise their operations and easily switch between different manufacturers 
• giving tenants on demand access to their rental ledgers, which they can share to prove on‑time payments to new landlords or lenders 
• allowing retail loyalty card holders to export an itemised copy of their purchase history to budgeting and price comparison tools that can analyse spending and suggest cheaper alternatives. 

The scope of the data access pathways should expand over time based on industry and consumer consultation, where new technology, overseas experience or domestic developments show that there are clear net benefits to Australia.   

Privacy regulation 

Draft recommendation 3.1 An alternative compliance pathway for privacy 

The Australian Government should amend the Privacy Act 1988 (Cth) to provide an alternative compliance pathway that enables regulated entities to fulfil their privacy obligations by meeting criteria that are targeted at outcomes, rather than controls based rules. 

Draft recommendation 3.2 Do not implement a right to erasure 

The Australian Government should not amend the Privacy Act 1988 (Cth) to introduce a ‘right to erasure’, as this would impose a high compliance burden on regulated entities, with uncertain privacy benefits for individuals. 

Digital financial reporting 

Draft recommendation 4.1 Make digital financial reporting the default 

The Australian Government should make the necessary amendments to the Corporations Act 2001 (Cth) and the Corporations Regulations 2001 (Cth) to make digital financial reporting mandatory for disclosing entities. The requirement for financial reports to be submitted in hard copy or PDF format should also be removed for those entities.

It goes on

AI-specific regulation should be a last resort 

AI-specific regulations should only be considered as a last resort, for uses of AI that meet two criteria: 
• existing regulatory frameworks cannot be sufficiently adapted to handle the issue 
• technology-neutral regulations are not feasible. 

Economy-wide efforts to regulate AI should be paused until all gap analyses are complete and acted on 

In August 2024, the Australian Government Department of Industry, Science and Resources released a set of 10 voluntary AI safety standards, or guardrails, based on risk-management standards such as ISO/IEC 42001:2023 (Information technology – Artificial intelligence – Management system) and the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework (AI RMF 1.0) (DISR 2024b, p. 5). The guardrails cover aspects of AI development and application and require several risk-management processes, including testing models, developing a risk plan, and providing transparency to users of AI tools and to owners of copyrighted materials used to train models. The guardrails outline reasonable risk-management practices for many organisations. In this way they have been an important and useful step in AI governance in Australia, equipping businesses with voluntary, structured and internationally recognised standards to support and guide their adoption of AI. 
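By way of illustration only, the sketch below shows the kind of structured record such risk-management processes might produce. Everything in it – the AIRiskRecord type, its field names and the example values – is invented for this post and is not drawn from the DISR guardrails, ISO/IEC 42001 or the NIST AI RMF.

```python
# Hypothetical sketch only: a minimal data structure for the kind of
# risk-management record the voluntary guardrails describe (model
# testing, a risk plan, transparency to users). All field names and
# values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class AIRiskRecord:
    system_name: str
    intended_use: str
    test_results: dict[str, str] = field(default_factory=dict)
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    user_disclosure: str = ""  # transparency statement shown to end users

record = AIRiskRecord(
    system_name="resume-screening-assistant",
    intended_use="Rank job applications for human review",
    test_results={"accuracy": "0.91 on held-out set", "bias_audit": "passed"},
    identified_risks=["disparate impact across demographic groups"],
    mitigations=["human reviews every automated rejection"],
    user_disclosure="Applications are initially ranked by an AI system.",
)
print(record.user_disclosure)
```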

The guardrails are most useful for smaller businesses without comprehensive risk-management procedures in place. Indeed, submissions to this inquiry (and submissions to the consultation process for the mandatory guardrails, discussed below) showed that many larger organisations have already implemented risk-management protocols that are similar in spirit to these guardrails. 

Mandating the guardrails is not necessary 

In September 2024, the Australian Government released a proposals paper for a set of mandatory guardrails for AI in high-risk settings (DISR 2024a). The proposal is to turn the voluntary guidelines into mandatory regulations for AI development and application. 

The PC is concerned about two aspects of making the guardrails mandatory. First, the proposals paper argued that the mandatory guardrails would apply to all high-risk uses of AI – regardless of whether the risks could be better mitigated through outcomes-based regulation. Second, it proposed that general-purpose AI models – which would include many generative AI tools – above a certain threshold of capability be classified as high-risk by default. The proposals paper did not commit to any particular measure or threshold of technical capability, though it could include aspects such as FLOPS (DISR 2024a, p. 18). The argument was that these models can perform so many functions that their risks cannot be adequately foreseen. Depending on the measure and threshold chosen, this could result in the guardrails applying to common generative AI tools such as ChatGPT, Claude and Grok. 
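To make the idea of a FLOPS threshold concrete, here is a minimal sketch of how such a test might be operationalised. The 6 × parameters × tokens approximation is a common rule of thumb for dense transformer training compute, and the 10^25 FLOP cut-off is borrowed from the EU AI Act’s systemic-risk presumption; neither figure comes from the DISR proposals paper.

```python
# Hypothetical sketch: how a FLOPS-based capability threshold might be
# operationalised. The 6 * N * D approximation and the 1e25 FLOP
# cut-off are assumptions for illustration only.

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Common rule of thumb: ~6 FLOPs per parameter per training token."""
    return 6 * n_parameters * n_training_tokens

THRESHOLD_FLOPS = 1e25  # illustrative regulatory cut-off

# A 70-billion-parameter model trained on 15 trillion tokens:
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("High-risk by default" if flops > THRESHOLD_FLOPS else "Below threshold")
# ~6.3e24 FLOPs -> below a 1e25 cut-off, which illustrates how sensitive
# the classification is to exactly where the line is drawn.
```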

In general, high-risk uses of AI can be split into three broad types. 

1. High-risk uses that can be adequately controlled by existing regulatory frameworks (potentially with some modification) – this could include issues with privacy law, which the PC thinks can be resolved within existing frameworks by making the regulations more outcomes-focused (chapter 2). 

2. High-risk uses that can be adequately controlled with new technology-neutral regulations – this could include (non-consensual) sexually explicit deepfake images, which the Australian Government has recently banned through the Criminal Code Amendment (Deepfake Sexual Material) Act 2024. 

3. High-risk use cases that require technology-specific regulations – these would be use cases identified in the various gap analyses as having no technology-neutral solution. 

The PC’s concern with the guardrails is that they would not distinguish between these categories. This in our view raises significant issues, as the first two cases can already, by definition, be dealt with adequately by other regulatory mechanisms. It might also result in most commercial chatbots being classified as high risk regardless of the efficacy of existing regulations. The result of this approach is that many AI models would be complying with two different sets of regulation to achieve the same outcome. 

For example, the TGA’s review noted that with respect to medical devices, all ten proposed guardrails had close parallels in existing regulations (2025, pp. 27–30). That is, it is likely that firms providing AI based medical devices in Australia would already be fulfilling the objectives of the guardrails if they are operating legally under the TGA’s existing regulations. But if the guardrails are mandated, then the provider of the medical device would need to demonstrate compliance with the TGA regulations and the guardrails, raising the regulatory burden with no change in outcomes. 

Mandating the guardrails is only appropriate where existing regulatory frameworks or new technology-neutral regulations cannot adequately mitigate the risk of harm. Once the Australian Government has completed and acted on all gap analyses of its existing policy framework, it will know which regulatory holes cannot be plugged by existing frameworks or new technology-neutral legislation. Consideration of economy-wide efforts to mandate the guardrails should be paused until these gap analyses are complete. 

In dealing with copyright the PC states 

 Copyright violation is an example of a harm that AI could exacerbate by changing economic incentives. Previous waves of innovation in information and communication technology made the sharing of copyrighted materials much cheaper and easier, creating challenges for copyright. In most instances, copyright law could be adapted (or better enforced) to mitigate the harm. This made it unnecessary to regulate the technology directly by, for example, regulating computer software or hardware to prevent copyright breach. It is the PC’s view that the copyright issues posed by AI can similarly be resolved by adapting existing copyright law frameworks rather than introducing AI-specific regulation. 

What is copyright? 

Copyright law prohibits a person from using original works without the permission of the copyright holder – usually the author (AGD 2022a). The types of works that are protected include text, artistic works, music, computer code, sound recordings and films (ACC 2024a). It does not protect the underlying ideas or information (AGD 2022a). In some cases, data and datasets may be protected, ‘largely depend[ing] on how the data has been arranged, structured or presented’ (Allens 2020, p. 3).  

The rise of AI technology has led to new challenges for copyright law. 

The emergence of AI also raises some additional, principle-based questions about how the copyright framework (as part of Australia’s broader intellectual property regime) works to benefit society by encouraging creation and innovation, rewarding intellectual effort and achievement, and supporting the dissemination of knowledge and ideas. (AGD 2023c, p. 12) 

In 2023, the Attorney-General established the Copyright and Artificial Intelligence Reference Group, which acts as ‘a standing mechanism to engage with stakeholders across a wide range of sectors on issues at the intersection of AI and copyright’ (AGD 2023a). Since then, the group has met on several occasions to discuss issues relating to AI technology and copyright law (AGD 2023a). 

This section explores one issue particularly relevant to productivity: whether current Australian copyright law is a barrier to building and training AI models. There are other legal issues relating to the outputs of AI models that are less relevant to productivity – such as whether those outputs attract copyright protection and what happens when AI outputs infringe a third party’s copyright (Evans et al. 2024). 

Training AI models 

Building and refining AI models requires the use of large amounts of data. 

The term ‘AI model training’ refers to this process: feeding the algorithm data, examining the results, and tweaking the model output to increase accuracy and efficacy. To do this, algorithms need massive amounts of data that capture the full range of incoming data. (Chen 2023) 

The datasets used to train AI models often contain digital copies of media such as web pages, books, videos, images and music. These media are often the subject of copyright protection, which means that their use to train AI models requires permission from the copyright holder. 
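Here is a toy sketch of the cycle Chen describes – feed the model data, examine the result, tweak the parameters – assuming nothing more than a one-variable linear model. Real AI models run the same loop over billions of examples.

```python
# Toy illustration of the training cycle: feed in data, measure the
# error in the output, adjust the parameters to shrink that error.
import random

# Toy dataset: inputs x in [0, 1) and targets y = 2x + 1 (plus noise).
data = [(i / 100, 2 * (i / 100) + 1 + random.uniform(-0.05, 0.05))
        for i in range(100)]

w, b = 0.0, 0.0      # model parameters, initially uninformed
learning_rate = 0.1

for epoch in range(200):
    for x, y in data:
        prediction = w * x + b          # feed the model an example
        error = prediction - y          # examine the result
        w -= learning_rate * error * x  # tweak parameters to shrink the error
        b -= learning_rate * error

print(f"Learned w = {w:.2f}, b = {b:.2f} (true values: 2 and 1)")
```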

Permission is required because AI models must ‘copy’ the protected material, at least temporarily, to undertake the training process. The use of copyrighted materials to train an AI model is a separate issue from the copyright status of anything the model produces; as discussed above, AI outputs raise their own copyright questions. 

A survey of the Copyright and Artificial Intelligence Reference Group indicated that, in practice, a range of copyrighted materials are used to train AI models – including literary and artistic works, sound recordings, films and musical works (AGD 2024, p. 12). 

There is evidence to suggest that large AI models are already being trained on copyrighted materials without consent or compensation (APA and ASA, qr. 39, pp. 3–4; APDG, qr. 6, p. 4; APRA AMCOS, qr. 58, p. 4; ARIA and PPCA, qr. 65, p. 5; Creative Australia, qr. 62, p. 3). It should be noted that Australian copyright law only applies to copying that occurs within Australia’s boundaries – in other words, the training of AI models overseas is subject to the laws of the jurisdiction in which it occurs. Lawsuits have been brought in some overseas jurisdictions against technology companies – including Meta, Microsoft and OpenAI – over the unlicensed use of copyrighted works to train AI models (Ryan 2023). 

There are concerns that the Australian copyright regime is not keeping pace with the rise of AI technology – whether because it does not adequately facilitate the use of copyrighted works or because AI developers can too easily sidestep existing licensing and enforcement mechanisms. There are several policy options, including: 
• no policy change – that is, copyright owners would continue to enforce their rights under the existing copyright framework, including through the court system 
• policy measures to better facilitate the licensing of copyrighted materials, such as through collecting societies 
• amending the Copyright Act to include a fair dealing exception that would cover text and data mining. 

The PC is seeking feedback on what reforms are needed to bring the copyright regime up to date. 

Is there a need to bolster the licensing or enforcement regime? 

Several participants expressed concern about the unauthorised use of copyrighted materials to train AI models. For example, Creative Australia said: 

 Much of the data has been used reportedly without consent from the original creator, and without acknowledgement or remuneration. The global nature of the technology industry has made it difficult for the owners of creative work to enforce their intellectual property rights and be remunerated for the use of their work. (qr. 62, p. 3) 

There are two points at which concerns of this type could be addressed. First, they could be addressed before the fact, through copyright licensing. Licensing is the key mechanism through which a copyright holder grants permission for others to use their work, and it often involves some form of payment. In Australia, licensing is often done through collecting societies – organisations that represent copyright holders. This can streamline the licensing process, because a collecting society can negotiate licences on behalf of multiple copyright holders at once. As the Copyright Agency said: 

 We can help these sectors use third party content for AI related activities. Our annual licence for businesses now allows staff to use news media content in prompts for AI tools (e.g. for summarisation or analysis). We are extending this to other third party content later in the year. We are also in discussions with our members and licensees about other collective licensing solutions, including the use of datasets for AI related activities. (qr. 7, pp. 2–3) 

The issue of unauthorised use of copyrighted materials could also be addressed after the fact, through enforcement. This encompasses a range of possible measures, including take-down notices, alternative dispute resolution and court action. In 2022–23, the Attorney-General’s Department undertook a Copyright Enforcement Review to assess ‘whether existing copyright enforcement mechanisms remain effective and proportionate’ (AGD 2022b). That review found that additional regulatory measures are needed to achieve an effective copyright enforcement regime, and work is currently underway to identify options for: 
• reducing barriers for Australians to use the legal system to enforce copyright, including examining simple options to resolve ‘small value’ copyright infringements 
• improving understanding and awareness about copyright. (AGD 2023b) 

In light of this ongoing work, the issue of copyright enforcement is not in scope for this inquiry. 

Is there a case for a text and data mining exception? 

Another option is to expand the existing ‘fair dealing’ regime, which provides certain exceptions to the requirement to obtain permission from the copyright holder (box 1.6). Currently, there is no exception that covers AI model training per se (The University of Notre Dame Australia 2024). However, depending on the case, a different exception could apply. For example, AI models built as part of research could fall within the scope of the ‘research or study’ exception. 

Box 1.6 – What are fair dealing exceptions? 

Fair dealing exceptions allow for the use of copyright material without permission from the copyright owner, so long as it is used for one of several specified purposes and the use is considered fair. 

What are the specified purposes? 

The Copyright Act specifies several purposes to which the exception may apply. These include: research or study, criticism or review, parody or satire, reporting news, and enabling a person with a disability to access the material (Copyright Act 1968 (Cth), Part III, Div 3; Part VIA, Div 2). 

What counts as ‘fair’? 

Fairness is determined with regard to all the relevant circumstances – that is, it depends on the facts. Some purposes have specified criteria that must be taken into account. For example, where the use is for research or study, the following considerations apply: 
• the purpose and character of the dealing 
• the nature of the work 
• whether the work can be obtained within a reasonable time at an ordinary commercial price 
• the effect of the dealing upon the potential market for, or value of, the work 
• the amount and substantiality of the work that was copied (Copyright Act 1968 (Cth), s 40(2)). 

The ‘fair use’ doctrine – an alternative approach 

Some overseas jurisdictions (notably the United States) take a ‘fair use’ approach to copyright exceptions. Under this doctrine, any type of use can be considered non-infringing, provided it is ‘fair’ – in other words, the use need not fall within one of several defined categories. Several reviews (including by the Australian Law Reform Commission and the Productivity Commission) have recommended the adoption of the fair use doctrine in Australia, but this has not occurred. 

Source: ACC (2024b); ALRC (2013); Copyright Act 1968 (Cth); PC (2021, p. 187). 

In its report on Copyright and the Digital Economy, the Australian Law Reform Commission recommended amendments to enable text and data mining by adopting a fair use approach to copyright exceptions (box 1.6) – or, failing that, through a new fair dealing exception. It explained: 

 There has been growing recognition that data and text mining should not be an infringement because it is a ‘non-expressive’ use. Non-expressive use leans on the fundamental principle that copyright law protects the expression of ideas and information and not the information or data itself. (2013, p. 261) 

The Australian Government has since indicated that it is not inclined to introduce a fair use regime (Australian Government 2017, p. 7). Therefore, the PC is considering whether there is a case for a new fair dealing exception that explicitly covers text and data mining (a ‘TDM exception’). TDM exceptions exist in several comparable overseas jurisdictions (box 1.7). 

Such an exception would cover not just AI model training, but all analytical techniques that use machine-read material to identify patterns, trends and other useful information. For example, text and data mining techniques are commonly used in research sectors to produce large datasets that can be interrogated through statistical analysis. 
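As a minimal illustration of text and data mining in this sense, the sketch below reads a small corpus programmatically to extract aggregate patterns – term frequencies and year mentions – rather than to reproduce the works as expression. The corpus and the ‘trend’ being mined are invented for the example.

```python
# Minimal text and data mining sketch: the output is aggregate data
# about the works, not the expressive content itself. Corpus invented
# for illustration.
from collections import Counter
import re

corpus = [
    "The reef bleaching event of 2016 affected northern sections.",
    "Bleaching was again recorded across the reef in 2017.",
    "By 2020, bleaching extended to southern reef sections.",
]

words = Counter()
years = Counter()
for doc in corpus:
    words.update(re.findall(r"[a-z]+", doc.lower()))   # term frequencies
    years.update(re.findall(r"\b(?:19|20)\d{2}\b", doc))  # year mentions

print(words.most_common(2))  # [('reef', 3), ('bleaching', 3)]
print(sorted(years))         # ['2016', '2017', '2020']
```

This is the distinction the ALRC’s ‘non-expressive use’ reasoning points to: the mining step copies the text in order to count and correlate, not to enjoy or republish it.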

Box 1.7 – Text and data mining around the world 

European Union: There are two text and data mining (TDM) exceptions embedded in the Digital Single Market Directive (EU 2019/790) – one for scientific research (article 3) and another for general use (article 4). The Artificial Intelligence Act (Regulation (EU) 2024/1689) specifically characterises the training of AI models as involving ‘text and data mining techniques’ (recital 105) and refers to the TDM exception (article 53). The recent case of Kneschke v. LAION [2024] endorsed the view that the TDM exception extends to cover AI training (Goldstein et al. 2024a, 2024b). 

United States: It has been argued that training AI models falls within the scope of the fair use doctrine (Khan 2024; Klosek and Blumenthal 2024). However, the case Thomson Reuters v. Ross [2023] 694 F.Supp.3d 467 highlights that whether AI training is covered by the doctrine depends on whether the fair use factors are met in the circumstances (ReedSmith 2025). 

United Kingdom: There is a TDM exception that applies to non-commercial research (UK Intellectual Property Office 2014). There have been proposals to expand the exception to cover all uses, though these are still under consideration (Pinsent Masons 2023; UK Government 2024). 

Japan: The Japanese Copyright Act includes broad statutory exemptions for TDM (article 30-4(ii)), provided the work is used for ‘non-enjoyment’ purposes (Senftleben 2022, p. 1494). In essence, the ‘non-enjoyment’ requirement distinguishes whether a work is being consumed as a work or as data, and is broadly equivalent to the distinction between expressive and non-expressive uses. 

Singapore: The Singaporean Copyright Act includes a specific TDM exception, as well as a broader fair use exception (Ng-Loy 2024). 

To assist its consideration of this option, the PC is seeking feedback about the likely effects of a TDM exception on the AI market, the creative sector and productivity in general – particularly in light of the following considerations. 
• At present, large AI models (including generative AI and large language models) are generally available to be used in Australia. The introduction (or not) of a TDM exception is unlikely to affect whether AI models continue to be available and used in Australia (PC 2024c, p. 13). 
• At present, large AI models are trained overseas, not in Australia. It is unclear whether the introduction of a TDM exception would change this trend. 
• As discussed above, large AI models are already being trained on unlicensed copyrighted materials. 
• A TDM exception could make a difference to whether smaller, low-compute models (such as task-specific models) can be built and trained in Australia, for example by Australian research institutions, medical technology firms, and research service providers. 

It should also be noted that a TDM exception would not be a ‘blank cheque’ for all copyrighted materials to be used as inputs into all AI models. As discussed in box 1.6, the use must also be considered ‘fair’ in the circumstances – this requirement would act as a check on copyrighted works being used unfairly, preserving the integrity of the copyright holder’s legal and commercial interests in the work. There may be a need for legislative criteria or regulatory guidance about what types of uses are likely to be considered fair. 

Information request 1.1 

The PC is seeking feedback on the issue of copyrighted materials being used to train AI models. 
• Are reforms to the copyright regime (including licensing arrangements) required? If so, what are they and why? 

The PC is also seeking feedback on the proposal to amend the Copyright Act 1968 (Cth) to include a fair dealing exception for text and data mining. 
• How would an exception covering text and data mining affect the development and use of AI in Australia? What are the costs, benefits and risks of a text and data mining exception likely to be? 
• How should the exception be implemented in the Copyright Act – for example, through a broad text and data mining exception or one that covers non-commercial uses only? 
• Is there a need for legislative criteria or regulatory guidance to help provide clarity about what types of uses are fair?