Showing posts with label Regulation. Show all posts

03 February 2026

AI Safety

The 2nd International AI Safety Report states 

— General-purpose AI capabilities have continued to improve, especially in mathematics, coding, and autonomous operation. Leading AI systems achieved gold-medal performance on International Mathematical Olympiad questions. In coding, AI agents can now reliably complete some tasks that would take a human programmer about half an hour, up from under 10 minutes a year ago. Performance nevertheless remains ‘jagged’, with leading systems still failing at some seemingly simple tasks. 

— Improvements in general-purpose AI capabilities increasingly come from techniques applied after a model’s initial training. These ‘post-training’ techniques include refining models for specific tasks and allowing them to use more computing power when generating outputs. At the same time, using more computing power for initial training continues to also improve model capabilities. 

— AI adoption has been rapid, though highly uneven across regions. AI has been adopted faster than previous technologies like the personal computer, with at least 700 million people now using leading AI systems weekly. In some countries over 50% of the population uses AI, though across much of Africa, Asia, and Latin America adoption rates likely remain below 10%. 

— Advances in AI’s scientific capabilities have heightened concerns about misuse in biological weapons development. Multiple AI companies chose to release new models in 2025 with additional safeguards after pre-deployment testing could not rule out the possibility that they could meaningfully help novices develop such weapons. 

— More evidence has emerged of AI systems being used in real-world cyberattacks. Security analyses by AI companies indicate that malicious actors and state-associated groups are using AI tools to assist in cyber operations. 

— Reliable pre-deployment safety testing has become harder to conduct. It has become more common for models to distinguish between test settings and real-world deployment, and to exploit loopholes in evaluations. This means that dangerous capabilities could go undetected before deployment. 

— Industry commitments to safety governance have expanded. In 2025, 12 companies published or updated Frontier AI Safety Frameworks – documents that describe how they plan to manage risks as they build more capable models. Most risk management initiatives remain voluntary, but a few jurisdictions are beginning to formalise some practices as legal requirements. 

This Report assesses what general-purpose AI systems can do, what risks they pose, and how those risks can be managed. It was written with guidance from over 100 independent experts, including nominees from more than 30 countries and international organisations, such as the EU, OECD, and UN. Led by the Chair, the independent experts writing it jointly had full discretion over its content. 

The authors note 

 This Report focuses on the most capable general-purpose AI systems and the emerging risks associated with them. ‘General-purpose AI’ refers to AI models and systems that can perform a wide variety of tasks. ‘Emerging risks’ are risks that arise at the frontier of general-purpose AI capabilities. Some of these risks are already materialising, with documented harms; others remain more uncertain but could be severe if they materialise. 

The aim of this work is to help policymakers navigate the ‘evidence dilemma’ posed by general-purpose AI. AI systems are rapidly becoming more capable, but evidence on their risks is slow to emerge and difficult to assess. For policymakers, acting too early can lead to entrenching ineffective interventions, while waiting for conclusive data can leave society vulnerable to potentially serious negative impacts. To alleviate this challenge, this Report synthesises what is known about AI risks as concretely as possible while highlighting remaining gaps. 

While this Report focuses on risks, general-purpose AI can also deliver significant benefits. These systems are already being usefully applied in healthcare, scientific research, education, and other sectors, albeit at highly uneven rates globally. But to realise their full potential, risks must be effectively managed. Misuse, malfunctions, and systemic disruption can erode trust and impede adoption. The governments attending the AI Safety Summit initiated this Report because a clear understanding of these risks will allow institutions to act in proportion to their severity and likelihood. 

Capabilities are improving rapidly but unevenly 

Since the publication of the 2025 Report, general-purpose AI capabilities have continued to improve, driven by new techniques that enhance performance after initial training. AI developers continue to train larger models with improved performance. Over the past year, they have further improved capabilities through ‘inference-time scaling’: allowing models to use more computing power in order to generate intermediate steps before giving a final answer. This technique has led to particularly large performance gains on more complex reasoning tasks in mathematics, software engineering, and science. 

At the same time, capabilities remain ‘jagged’: leading systems may excel at some difficult tasks while failing at other, simpler ones. General-purpose AI systems excel in many complex domains, including generating code, creating photorealistic images, and answering expert-level questions in mathematics and science. Yet they struggle with some tasks that seem more straightforward, such as counting objects in an image, reasoning about physical space, and recovering from basic errors in longer workflows. 

The trajectory of AI progress through 2030 is uncertain, but current trends are consistent with continued improvement. AI developers are betting that computing power will remain important, having announced hundreds of billions of dollars in data centre investments. Whether capabilities will continue to improve as quickly as they recently have is hard to predict. Between now and 2030, it is plausible that progress could slow or plateau (e.g. due to bottlenecks in data or energy), continue at current rates, or accelerate dramatically (e.g. if AI systems begin to speed up AI research itself). 

Real-world evidence for several risks is growing 

General-purpose AI risks fall into three categories: malicious use, malfunctions, and systemic risks. 

Malicious use 

AI-generated content and criminal activity: AI systems are being misused to generate content for scams, fraud, blackmail, and non-consensual intimate imagery. Although the occurrence of such harms is well-documented, systematic data on their prevalence and severity remains limited. 

Influence and manipulation: In experimental settings, AI-generated content can be as effective as human-written content at changing people’s beliefs. Real-world use of AI for manipulation is documented but not yet widespread, though it may increase as capabilities improve. 

Cyberattacks: AI systems can discover software vulnerabilities and write malicious code. In one competition, an AI agent identified 77% of the vulnerabilities present in real software. Criminal groups and state-associated attackers are actively using general-purpose AI in their operations. Whether attackers or defenders will benefit more from AI assistance remains uncertain. 

Biological and chemical risks: General-purpose AI systems can provide information about biological and chemical weapons development, including details about pathogens and expert-level laboratory instructions. In 2025, multiple developers released new models with additional safeguards after they could not exclude the possibility that these models could assist novices in developing such weapons. It remains difficult to assess the degree to which material barriers continue to constrain actors seeking to obtain them. 

Malfunctions 

Reliability challenges: Current AI systems sometimes exhibit failures such as fabricating information, producing flawed code, and giving misleading advice. AI agents pose heightened risks because they act autonomously, making it harder for humans to intervene before failures cause harm. Current techniques can reduce failure rates but not to the level required in many high-stakes settings. 

Loss of control: ‘Loss of control’ scenarios are scenarios where AI systems operate outside of anyone’s control, with no clear path to regaining control. Current systems lack the capabilities to pose such risks, but they are improving in relevant areas such as autonomous operation. Since the last Report, it has become more common for models to distinguish between test settings and real-world deployment and to find loopholes in evaluations, which could allow dangerous capabilities to go undetected before deployment. 

Systemic risks 

Labour market impacts: General-purpose AI will likely automate a wide range of cognitive tasks, especially in knowledge work. Economists disagree on the magnitude of future impacts: some expect job losses to be offset by new job creation, while others argue that widespread automation could significantly reduce employment and wages. Early evidence shows no effect on overall employment, but some signs of declining demand for early-career workers in some AI-exposed occupations, such as writing. 

Risks to human autonomy: AI use may affect people’s ability to make informed choices and act on them. Early evidence suggests that reliance on AI tools can weaken critical thinking skills and encourage ‘automation bias’, the tendency to trust AI system outputs without sufficient scrutiny. ‘AI companion’ apps now have tens of millions of users, a small share of whom show patterns of increased loneliness and reduced social engagement. 

Layering multiple approaches offers more robust risk management 

Managing general-purpose AI risks is difficult due to technical and institutional challenges. Technically, new capabilities sometimes emerge unpredictably, the inner workings of models remain poorly understood, and there is an ‘evaluation gap’: performance on pre-deployment tests does not reliably predict real-world utility or risk. Institutionally, developers have incentives to keep important information proprietary, and the pace of development can create pressure to prioritise speed over risk management and makes it harder for institutions to build governance capacity. 

Risk management practices include threat modelling to identify vulnerabilities, capability evaluations to assess potentially dangerous behaviours, and incident reporting to gather more evidence. In 2025, 12 companies published or updated their Frontier AI Safety Frameworks – documents that describe how they plan to manage risks as they build more capable models. While AI risk management initiatives remain largely voluntary, a small number of regulatory regimes are beginning to formalise some risk management practices as legal requirements. 

Technical safeguards are improving but still show significant limitations. For example, attacks designed to elicit harmful outputs have become more difficult, but users can still sometimes obtain harmful outputs by rephrasing requests or breaking them into smaller steps. AI systems can be made more robust by layering multiple safeguards, an approach known as ‘defence-in-depth’. 

Open-weight models pose distinct challenges. They offer significant research and commercial benefits, particularly for lesser-resourced actors. However, they cannot be recalled once released, their safeguards are easier to remove, and actors can use them outside of monitored environments – making misuse harder to prevent and trace. 

Societal resilience plays an important role in managing AI-related harms. Because risk management measures have limitations, they will likely fail to prevent some AI-related incidents. Societal resilience-building measures to absorb and recover from these shocks include strengthening critical infrastructure, developing tools to detect AI-generated content, and building institutional capacity to respond to novel threats.

19 December 2025

Productivity

The Productivity Commission report Harnessing data and digital technology released today states 

Data and digital technologies are the modern engines of economic growth. Australia needs to harness the consumer and productivity benefits of data and digital technology while managing and mitigating any downside risks. There is a role for government in setting the rules of the game to foster innovation and ensure that Australians reap the benefits of the data and digital opportunity. 

Emerging technologies like artificial intelligence (AI) could transform the global economy and speed up productivity growth. The Productivity Commission considers that multifactor productivity gains above 2.3%, and labour productivity growth of about 4.3%, are likely over the next decade, although there is considerable uncertainty. But poorly designed regulation could stifle the adoption and development of AI. Australian governments should take an outcomes based approach to AI regulation – using our existing laws and regulatory structures to minimise harms (which the Australian Government has committed to do in its National AI Plan) and introducing technology specific regulations only as a last resort. 

Developing and training AI models is a global opportunity worth many billions of dollars. Currently, gaps in licensing markets – particularly for open web material – make AI training in Australia more difficult than in overseas jurisdictions. However, licensing markets are developing, and if courts overseas interpret copyright exceptions narrowly, Australia could become relatively more attractive for AI development. As such, the PC considers it premature to make changes to Australia’s copyright laws. 

Data access and use fuels productivity growth: giving people and businesses better access to data that relates to them can stimulate competition and allow businesses to develop innovative products and services. A mature data sharing regime could add up to $10 billion to Australia’s GDP. The Australian Government should rightsize the Consumer Data Right (CDR) with the immediate goal of making it work better for businesses and consumers in the sectors where it already exists. In the longer term, making the accreditation model, technical standards and designation process less onerous will help make the CDR a more effective data access and sharing platform that supports a broader range of use cases. 

The benefits of data access and use can only be realised if Australians trust that data is handled safely and securely to protect their privacy. Some requirements in the Privacy Act constrain innovation without providing meaningful protection to individuals. And complying with the controls and processes baked into the Act can make consent and notification a ‘tick box’ exercise where businesses comply with the letter of the law but not its spirit. The Australian Government should amend the Privacy Act to introduce an overarching outcomes based privacy duty for regulated entities to deal with personal information in a manner that is fair and reasonable in the circumstances. 

Financial reports provide essential information about a company’s financial performance, ensuring transparency and accountability while informing the decisions of investors, businesses and regulators. The Australian Government can further spark productivity by making digital financial reporting the default for publicly listed companies and other public interest entities while also removing the outdated requirement that reports be submitted in hard copy or PDF form. This would improve the efficiency of analysing reports, enhance integrity and risk detection, and could boost international capital market visibility for Australian companies.  

The Commission's recommendations are: 

Artificial intelligence 

Recommendation 1.1 Productivity growth from AI should be enabled within existing legal foundations. 

Gap analyses of current rules need to be expanded and completed. Any regulatory responses to potential harms from using AI should be proportionate, risk based, outcomes based and technology neutral where possible. 

The Australian Government should complete, publish and act on ongoing reviews into the potential gaps in the legal framework posed by AI as soon as possible. 

Where relevant gap analyses have not begun, they should begin immediately. 

All reviews of the legal gaps posed by AI should consider: 
• the uses of AI 
• the additional risk of harm posed by AI (compared to the status quo) in a specific use case 
• whether existing regulatory frameworks cover these risks, potentially with improved guidance and enforcement, and if not, how to modify existing regulatory frameworks to mitigate the additional risks. 

Recommendation 1.2 AI specific regulation should be a last resort 

AI specific regulations should only be considered as a last resort and only for use cases of AI where: 
• existing regulatory frameworks cannot be sufficiently adapted to handle AI related harms 
• technology neutral regulations are not feasible or cannot adequately mitigate the risk of harm. 

This includes whole of economy regulation such as the EU AI Act and the Australian Government’s previous proposal to mandate guardrails for AI in high risk settings.

Copyright and AI 

Recommendation 2.1 A review of Australian copyright settings and the impact of AI 

The Australian Government should monitor the development of AI and its interaction with copyright holders over the next three years. It should monitor the following areas: 
• licensing markets for open web materials 
• the effect of AI on creative incomes generated by copyright royalties 
• how overseas courts set limits to AI related copyright exceptions, especially fair use. 

If after three years the monitoring program shows that these issues have not resolved, the government could establish an Independent Review of Australian copyright settings and the impact of AI. The Review’s scope could include, but not be limited to, consideration of whether: 
• copyright settings continue to be a barrier to the use of open material in AI training, and if so whether changes to copyright law could reduce these barriers 
• copyright continues to be the appropriate vehicle to incentivise creation of new works and if not, what alternatives could be pursued.

Data access 

Recommendation 3.1 Rightsize the Consumer Data Right 

The Australian Government should commit to reforms that will enable the Consumer Data Right (CDR) to better support data access for high value uses while minimising compliance costs. 

In the short term, the government should continue to simplify the scheme by removing excessive restrictions and rules that are limiting its uptake and practical applications in the banking and energy sectors. To do this the government should: 
• within the next two years, enable consumers to share data with third parties and simplify the onboarding process for businesses 
• commit to more substantive changes to the scheme (in parallel with related legislative reforms), including aligning the CDR’s privacy safeguards with the Privacy Act and enabling access to selected government held datasets through the scheme. 

In addition to the above, the CDR framework should be significantly amended so that it has the flexibility to support a broader range of use cases beyond banking and energy, by making the accreditation model, technical standards and designation process less onerous. 

Privacy regulation 

Recommendation 4.1 An outcomes based privacy duty embedded in the Privacy Act 

The Australian Government should amend the Privacy Act 1988 (Cth) to embed an outcomes based approach that enables regulated entities to fulfil their privacy obligations by meeting criteria that are targeted at outcomes, rather than controls based rules. 

This should be achieved by introducing an overarching privacy duty for regulated entities to deal with personal information in a manner that is fair and reasonable in the circumstances. 

The Privacy Act should be further amended to outline several non exhaustive factors for consideration to guide decision makers in determining what is fair and reasonable – including proportionality, necessity, and transparency. The existing Australian Privacy Principles should ultimately be phased out. 

Implementation of the duty should be supported through non legislative means including documentation such as regulatory guidance, sector specific codes, templates, and guidelines. 

The Office of the Australian Information Commissioner should be appropriately resourced to support the transition to an outcomes based privacy duty.

Digital financial reporting 

Recommendation 5.1 Make digital financial reporting the default 

The Australian Government should make the necessary amendments to the Corporations Act 2001 (Cth) and the Corporations Regulations 2001 (Cth) to make digital annual and half yearly financial reporting mandatory for disclosing entities. The requirement for financial reports to be submitted in hard copy or PDF form should be removed for these entities. The implementation of mandatory digital financial reporting should be phased, with the Treasury determining the appropriate timelines for this approach. 

Setting requirements for report preparation 

The existing International Financial Reporting Standards (Australia) (IFRS AU) taxonomy should be used for digital financial reporting. The Australian Securities and Investments Commission (ASIC) should continue to update the taxonomy annually. ASIC should be empowered to specify, from time to time, the format in which the reports must be prepared. At present, ASIC should specify inline eXtensible Business Reporting Language (iXBRL) as the required format. 

Establishing infrastructure and procedures for report submission 

ASIC, together with market operators such as the Australian Securities Exchange, should determine where and how digital financial reports are submitted. The arrangements should aim to minimise preparers’ reporting burden while keeping reports accessible to report users. 

Supporting the provision of high quality, accessible digital financial data 

ASIC should implement the measures necessary to ensure that digital financial reports contain high quality data. ASIC could (among other actions): 
• establish a data quality committee that would develop guidance and rules to improve data quality 
• integrate automated validation checks into the submission process 
• set guidelines around the use of taxonomy extensions and report format 
• maintain feedback loops with stakeholders. 

To enable report users to harness the benefits of digital financial data, digital financial reports should be publicly and freely available, and easily downloadable.

14 August 2025

Medical Device Regulation

The TGA report Clarifying and strengthening the regulation of Medical Device Software including Artificial Intelligence states 

In the 2024-25 federal Budget, the Australian Government provided $39.9 million over 5 years for the development of policy and capability across government to support Safe and Responsible AI. The measure includes work to clarify and strengthen existing laws and address risks and harms from Artificial Intelligence (AI) through an immediate review of priority areas, including health and aged care sector regulation, Australian consumer law, and copyright law. 

As part of the Australian Government’s Department of Health, Disability and Ageing (the Department), the Therapeutic Goods Administration (TGA) regulates therapeutic goods, including software and AI models and systems when they meet the definition of a medical device under the Therapeutic Goods Act 1989. Software-based medical devices (including AI models and systems) have been regulated by the TGA for many years. In 2021, we clarified the classification levels of software to account for the potential and emerging risks of harm associated with software, and introduced a number of “carve-outs” for very low risk products or products that had oversight from other regulators. With input from relevant industry stakeholders, we published guidance about our refined regulatory framework, setting out how regulatory requirements apply to these kinds of devices. Since that time, the TGA has monitored the refinements to identify when further review and adjustment was required, including to address emerging risks as technology like AI is rapidly adopted and deployed in healthcare settings. 

In 2024, the TGA conducted a review in tandem with the Department’s broader review of health and aged care legislation, to: 
• determine whether our existing legislation, regulations and guidance are appropriate to meet the challenges associated with an increasing use of medical software and AI across the healthcare sector, and 
• identify measures to clarify and strengthen existing regulation to mitigate risks and leverage opportunities associated with medical software and AI use in the therapeutic goods sector. 

Extensive targeted engagement with stakeholders from cohorts including the medical device industry, consumers and clinicians has been conducted, followed by a public consultation process seeking more information and feedback about strengths of the system, opportunities for improvements and identified issues and areas of concern. Our review also included mapping the existing medical device legislative framework against the mandatory guardrails for use in high-risk settings proposed by the Department of Industry, Science and Resources (DISR) in their consultation: Introducing mandatory guardrails for AI in high-risk settings: proposals paper.

The TGA goes on to comment

The time and costs associated with regulatory requirements likely appear disproportionate to developers when compared with the time and costs of developing a software product. A further cultural issue is the pervasive belief among some developers that software products don’t present a meaningful risk to consumers and users, particularly when they are integrated with the provision of healthcare, where a human is in the loop, or where outputs are information only. 

Stakeholders, including clinicians and consumers who use these kinds of products, have identified that the absence of humans, lack of transparency and failure to engage with existing regulatory requirements represent a combination of circumstances that may lead to patient harm. In many instances, users are not aware that AI or machine learning has been used in the development of software, or is used operationally within the clinical workflow. 

Further  

Regulatory requirements for medical devices, including software, are principles-based and apply regardless of whether the product incorporates components like AI, chatbots, cloud, mobile apps or other technologies. As such, software that incorporates generative AI such as large language models (LLMs), text generators, and multimodal generative AI are all regulated as a medical device if they meet the definition under the Act. As a component of the review, we mapped the existing legislative framework, including regulations and guidance, against the mandatory guardrails proposed for use in high-risk settings under the proposal put forward by DISR in their consultation: Introducing mandatory guardrails for AI in high-risk settings: proposals paper. A summary is at Attachment A. 

This section documents key features of the existing framework for the regulation of medical devices, including: 
• Technology-agnostic regulation 
• Risk-based classification 
• Principles-based regulation 
• International harmonisation 

Technology-agnostic regulation 

Australia’s regulatory approach to medical devices is technology-agnostic, with legislative requirements centred on risk and principles rather than linking specific requirements to explicit features or technologies. A technology-agnostic approach requires those responsible for manufacturing a medical device to: 
• identify the specific and potential risks associated with the device throughout its lifecycle 
• institute measures to mitigate both identified and residual risks 
• have measures in place for ongoing review and monitoring of the device’s performance after it has been deployed, and 
• engage in ongoing review and refinement of the device once deployed. 

This approach provides flexibility and responsiveness to emerging technologies, allowing lower risk devices to enter the market expeditiously while subjecting higher risk devices to greater regulatory scrutiny to ensure quality, safety, and performance throughout the device life cycle. The continuation of a technology-agnostic approach will provide flexibility to ensure appropriate regulation is capable of being applied to emerging technologies without the need for continual review and refinement of legislation. Moving away from a technology-agnostic approach where the onus for demonstrating safety, quality and performance rests with the manufacturer or deployer may lead to the introduction of risks as developers adopt a “tick-box” mentality to regulation rather than a proactive engagement and assessment of the risks posed by their products. 

Development of specific regulatory requirements for individual technologies is also likely to become a limiting factor with respect to the development of innovative devices in the long term, as devices that don’t easily fit within specified parameters struggle to meet requirements that were never intended for devices of their nature. 

Risk-based classification 

In Australia, devices are classified using classification rules set out in Schedule 2 of the Therapeutic Goods (Medical Devices) Regulations 2002. 

The classification of a medical device is determined by factors including how long the device will be continuously used for and how invasive the device is. For software-based medical devices, classification may also be impacted by whether the device is intended for use by a clinician or a consumer, and the seriousness of the illness or condition for which it is intended to be used. The classification of a device will determine the level of scrutiny and pre-market assessment applied to the device before it can be deployed/supplied. 

Principles-based regulation 

In Australia, manufacturers are required to demonstrate that medical devices comply with the essential principles. These are legislative requirements that are further set out in Schedule 1 of the Regulations, and relate to specific characteristics of medical devices including design, construction, evidence supporting the use of the device and information to be provided with the device. 

Manufacturers must ensure their devices meet all relevant principles and sponsors must either hold or be able to obtain this evidence from their manufacturer on request. Principles-based regulation, as opposed to prescriptive or rules-based regulation, provides flexibility. This approach accommodates the broad complexity and diversity of medical devices regulated, including as new technologies like AI emerge. A rules-based approach may, for example, require compliance with prescribed requirements including international standards such as ISO or IEC standards. 

Demonstrating compliance with the essential principles may include compliance with relevant international standards, but for emerging technologies where an appropriate standard may not yet exist, other approaches may be used. The flexibility to adapt the principles to the unique circumstances of a medical device, particularly those incorporating emerging technologies, allows approaches to evolve over time without continuous review and updating of legislative frameworks. 

International harmonisation 

Our current approach and commitment to international harmonisation allows sponsors of medicines and medical devices to use international assessment and approvals from comparable overseas regulators to support applications for inclusion of their therapeutic goods on the ARTG. 

The TGA is also a member of the IMDRF, which seeks to “strategically accelerate international medical device regulatory convergence to promote an efficient and effective regulatory model for medical devices that is responsive to emerging challenges while protecting and maximizing public health and safety.” 

The IMDRF has published a significant number of regulatory guidance documents for adoption by jurisdictions globally. Guidance documents are developed through specialised Working Groups and involve global public consultation processes. The TGA is an active member of both the IMDRF Software as a Medical Device (SaMD) Working Group and the IMDRF Artificial Intelligence/Machine Learning Working Group, which have both published a range of guidance documents. The AI Working Group is currently focused on finalising additional guidance on good machine learning practices and new guidance on AI lifecycle management, while the SaMD Working Group is developing an approach to pre-approved change control plans (PCCPs). 

Software regulation and reforms 

The TGA regulates AI when it meets the legislative definition of a medical device in Section 41BD of the Act. AI products likely to meet this definition include those intended to be used for the diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of a disease, injury or disability. 

In recent years, software has become increasingly important in medical devices and digital adoption more broadly. It is also becoming more important as a medical device in its own right. Rapid innovation in technology has driven significant changes to software function and adoption, giving rise to a larger number of devices able to inform, drive or replace clinical decisions, or directly provide therapy to an individual. 

Advances in computing technology and software production have led to a large increase in the number of software-based medical devices available on the market, requiring the implementation of reforms to ensure patient safety. Software-based medical devices are devices that either incorporate software or are themselves software, including software as a medical device and software that relies on hardware to function as intended; they are regulated in Australia by the TGA. 

These kinds of devices may be integrated within electronic health records systems, used by clinicians or health professionals in the provision of care, or used to determine how or when patients will receive care. Their increasing use, integration in healthcare systems, and complexity have given rise to new regulatory challenges. In 2021, the TGA introduced a number of regulatory refinements aimed at ensuring the regulation of software-based medical devices, including software that functions as a medical device, remains appropriate and properly targeted at the risks associated with these kinds of devices. Refinements included: 

• amendments to the essential principles, including the addition of Essential Principle 12.1, which details specific requirements for programmed or programmable medical devices or software that is a medical device 
• new classification rules for software-based medical devices used for diagnostic or screening purposes, to capture their potential to cause harm through the provision of incorrect information 
• introduction of an exemption from TGA regulation for certain clinical decision support software, and 
• exclusion of certain software products for the sake of clarity, or where existing oversight measures were available through other regulatory frameworks to ensure these products were safe and fit for their intended purpose. 

06 August 2025

(Un)Harnessing AI

The interim report by the Productivity Commission on Harnessing Data and Digital Technology - consistent with the national government's enthusiasm for AI - can be read as proposing a looser regulatory framework. 

The report states 

Data and digital technologies are the modern engines of economic growth. Emerging technologies like artificial intelligence (AI), which can extract useful insights from massive datasets in a fraction of a second, could transform the global economy and speed up productivity growth. 
 
Australia needs to harness the consumer and productivity benefits of data and digital technology while managing and mitigating the downside risks. There is a role for government in setting the rules of the game to foster innovation and ensure that Australians reap the benefits of the data and digital opportunity. 
 
The economic potential of AI is clear, and we are still in the early stages of its development and adoption. Early studies provide a broad range of estimates for the impact of AI on productivity. The Productivity Commission considers that multifactor productivity gains above 2.3% are likely over the next decade, though there is considerable uncertainty. This would translate into about 4.3% labour productivity growth over the same period. But poorly designed regulation could stifle the adoption and development of AI and limit its benefits. Australian governments should take an outcomes-based approach to AI regulation – one that uses our existing laws and regulatory structures to minimise harms and introduces technology-specific regulations as a last resort. 
 
Data access and use can fuel productivity growth: insights from data can help reduce costs, increase the quality of products and services and lead to the creation of entirely new products. But some requirements in the Privacy Act, the main piece of legislation for protecting privacy, are constraining innovation without providing meaningful protection to individuals. For example, complying with the controls and processes baked into the Act can make consent and notification a ‘tick box’ exercise – where businesses comply with the letter of the law but not the spirit of it. The Australian Government should amend the Privacy Act to introduce an alternative compliance pathway that enables firms to fulfil their privacy obligations by meeting outcomes-based criteria. 
 
Data about individuals and businesses underpins growth and value in the digital economy. But often those same individuals and businesses cannot easily access and use this data themselves. Under the right conditions, giving people and businesses better access to data that relates to them can stimulate competition and allow businesses to develop innovative products and services. A mature data sharing regime could add up to $10 billion to Australia’s annual economic output. 
 
Experience shows that we need a flexible approach to facilitating data access across the economy, where obligations placed on data holders and the level of government involvement can match the needs and digital maturity of different sectors. New lower-cost and flexible regulatory pathways would help to guide expanded data access throughout the digital economy, focusing first on sectors where the gains can be significant and relatively easy to achieve. 
 
Financial reports provide essential information about a company’s financial performance, ensuring transparency and accountability while informing the decisions of investors, businesses and regulators. Government can further spark productivity by making digital financial reporting the default – that is, mandatory lodgement of financial reports in machine readable form. At the same time, the Australian Government should remove the outdated requirement that financial reports be submitted in hard copy or PDF format. This change would increase the efficiency and accuracy with which information is extracted and analysed.

The draft recommendations are

 Artificial intelligence 

Draft recommendation 1.1 Productivity growth from AI will be built on existing legal foundations. 

Gap analyses of current rules need to be expanded and completed. Australian governments play a key role in promoting investment in digital technology, including AI, by providing a stable regulatory environment. Any regulatory responses to potential harms from using AI must be proportionate, risk-based, outcomes-based and technology-neutral where possible. 

The Australian Government should continue, complete, publish and act on ongoing reviews into the potential gaps in the regulatory framework posed by AI as soon as possible. 

Where relevant gap analyses have not begun, they should begin immediately. 

All reviews of the regulatory gaps posed by AI should consider: 

• the uses of AI 
• the additional risk of harm posed by AI (compared to the status quo) in a specific use case 
• whether existing regulatory frameworks cover these risks, potentially with improved guidance and enforcement, and if not, how to modify existing regulatory frameworks to mitigate the additional risks. 

Draft recommendation 1.2 AI-specific regulation should be a last resort 

AI-specific regulations should only be considered as a last resort for the use cases of AI that meet two criteria. These are: 

• where existing regulatory frameworks cannot be sufficiently adapted to handle the issue 
• where technology-neutral regulations are not feasible. 

Draft recommendation 1.3 Pause steps to implement mandatory guardrails for high-risk AI 

The Australian Government should only apply the proposed ‘mandatory guardrails for high-risk AI’ in circumstances that lead to harms that cannot be mitigated by existing regulatory frameworks and where new technology-neutral regulation is not possible. Until the reviews of the gaps posed by AI to existing regulatory structures are completed, steps to mandate the guardrails should be paused. 

Data access 

Draft recommendation 2.1 Establish lower-cost and more flexible regulatory pathways to expand basic data access for individuals and businesses 

The Australian Government should support new pathways to allow individuals and businesses to access and share data that relates to them. These regulatory pathways will differ by sector, recognising that the benefits (and the implementation costs) from data access and sharing differ by sector. This could include approaches such as: 

• industry-led data access codes that support basic use cases by enabling consumers to export relatively non-sensitive data on a periodic (snapshot) basis 
• standardised data transfers, with government helping to formalise minimum technical standards to support use cases requiring high-frequency data transfers and interoperability. 

These pathways should be developed alongside efforts that are already underway to improve the Consumer Data Right (which will continue to provide for use cases that warrant its additional safeguards and technical infrastructure) and the My Health Record system. 

The new pathways should begin in sectors where better data access could generate large benefits for relatively low cost, and there is clear value to consumers. Potential examples include: 

• enabling farmers to combine real-time data feeds from their machinery and equipment to optimise their operations and easily switch between different manufacturers 
• giving tenants on-demand access to their rental ledgers, which they can share to prove on-time payments to new landlords or lenders 
• allowing retail loyalty card holders to export an itemised copy of their purchase history to budgeting and price comparison tools that can analyse spending and suggest cheaper alternatives. 

The scope of the data access pathways should expand over time, based on industry and consumer consultation, where new technology, overseas experience or domestic developments show that there are clear net benefits to Australia. 

Privacy regulation 

Draft recommendation 3.1 An alternative compliance pathway for privacy 

The Australian Government should amend the Privacy Act 1988 (Cth) to provide an alternative compliance pathway that enables regulated entities to fulfil their privacy obligations by meeting criteria that are targeted at outcomes, rather than controls-based rules. 

Draft recommendation 3.2 Do not implement a right to erasure 

The Australian Government should not amend the Privacy Act 1988 (Cth) to introduce a ‘right to erasure’, as this would impose a high compliance burden on regulated entities, with uncertain privacy benefits for individuals. 

Digital financial reporting 

Draft recommendation 4.1 Make digital financial reporting the default 

The Australian Government should make the necessary amendments to the Corporations Act 2001 (Cth) and the Corporations Regulations 2001 (Cth) to make digital financial reporting mandatory for disclosing entities. The requirement for financial reports to be submitted in hard copy or PDF format should also be removed for those entities.

It goes on

AI-specific regulation should be a last resort 

AI-specific regulations should only be considered as a last resort for the use cases of AI that meet two criteria. These are: 

• where existing regulatory frameworks cannot be sufficiently adapted to handle the issue 
• where technology-neutral regulations are not feasible. 

Economy wide efforts to regulate AI should be paused until all gap analyses are complete and implemented 

In August 2024, the Australian Government Department of Industry, Science and Resources released a set of 10 voluntary AI safety standards, or guardrails, based on risk-management standards such as ISO/IEC 42001:2023 (Information technology – Artificial intelligence – Management system) and the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework (AI RMF 1.0) (DISR 2024b, p. 5). The guardrails cover aspects of AI development and application and require several risk-management processes, including testing of models, developing a risk plan and providing transparency to users of AI tools and to owners of copyrighted materials used in the training of models. The guardrails outline reasonable risk-management practices for many organisations. In this way they have served as an important and useful step in AI governance in Australia, equipping businesses with voluntary, structured and internationally recognised standards to support and guide their adoption of AI. 

The guidelines are particularly useful for smaller businesses without comprehensive risk-management procedures in place. Indeed, submissions from participants to this inquiry (and submissions to the consultation process for the mandatory guardrails, discussed below) showed that many larger organisations have implemented risk-management protocols that are similar in spirit to these guardrails. 

Mandating the guardrails is not necessary 

In September 2024, the Australian Government released a proposals paper for a set of mandatory guardrails for AI in high-risk settings (DISR 2024a). The proposal is to turn the voluntary guidelines into mandatory regulations for AI development and application. 

The PC is concerned with two aspects of the guardrails being made mandatory. First, the proposals paper argued that the mandatory guardrails would apply to all high-risk uses of AI – regardless of whether risks can be better mitigated through outcomes-based regulations. Second, the proposals paper argued that General Purpose AI models – which would include many generative AI tools – above a certain threshold of capability should be classified as high risk by default. The proposals paper did not settle on any particular measure or threshold of technical capability, though it could include aspects like FLOPS (DISR 2024a, p. 18). It was argued that these models can perform so many functions that their risks cannot be adequately foreseen. This could result in the guardrails being applied to common generative AI tools such as ChatGPT, Claude and Grok, depending on what is chosen as the threshold and measure of technical capability. 

In general, high risk uses of AI can be split into three broad types. 

1. High-risk uses that can be adequately controlled by existing regulatory frameworks (potentially with some modification) – this could include issues with privacy law (which the PC thinks can be resolved within existing frameworks with modification to make the regulations more outcomes-focused, chapter 2). 

2. High-risk uses that can be adequately controlled with new technology-neutral regulations – this could include (non-consensual) sexually explicit deepfake images, which the Australian Government has recently banned (through the Criminal Code Amendment (Deepfake Sexual Material) Act 2024). 

3. High-risk use cases that require technology-specific regulations – these would be use cases identified in the various gap analyses as having no technology-neutral solution. 

The PC’s concern with the guardrails is that they would not distinguish between these categories. This, in our view, raises significant issues, as the first two cases can already, by definition, be dealt with adequately by other regulatory mechanisms. It might also result in most commercial chatbots being classified as high risk regardless of the efficacy of existing regulations. The result of this approach is that many AI models would have to comply with two different sets of regulation to achieve the same outcome. 

For example, the TGA’s review noted that with respect to medical devices, all ten proposed guardrails had close parallels in existing regulations (2025, pp. 27–30). That is, it is likely that firms providing AI-based medical devices in Australia would already be fulfilling the objectives of the guardrails if they are operating legally under the TGA’s existing regulations. But if the guardrails are mandated, then the provider of the medical device would need to demonstrate compliance with both the TGA regulations and the guardrails, raising the regulatory burden with no change in outcomes. 

The mandating of the guardrails is only appropriate in circumstances where existing regulatory frameworks or new technology-neutral regulations are not able to adequately mitigate the risk of harm. Once the Australian Government has completed and acted on all gap analyses of its existing policy framework, it will know what regulatory holes cannot be plugged by existing frameworks or new technology-neutral legislation. Consideration of economy-wide efforts to mandate the guardrails should be paused until these gap analyses are complete. 

Pause steps to implement mandatory guardrails for high-risk AI 

The Australian Government should only apply the proposed ‘mandatory guardrails for high-risk AI’ in circumstances that lead to harms that cannot be mitigated by existing regulatory frameworks and where new technology-neutral regulation is not possible. Until the reviews of the gaps posed by AI to existing regulatory structures are completed, steps to mandate the guardrails should be paused.

In dealing with copyright the PC states 

Copyright violation is an example of a harm that AI could exacerbate by changing economic incentives. Previous waves of innovation in information and communication technology have made the sharing of copyrighted materials much cheaper and easier, creating challenges for copyright. In most instances, copyright law was able to be adapted (or better enforced) to mitigate the harm. This made it unnecessary to directly regulate technology by, for example, regulating computer software or hardware to prevent copyright breach. It is the PC’s view that the copyright issues posed by AI can similarly be resolved through adapting existing copyright law frameworks rather than introducing AI-specific regulation. 

What is copyright? 

Copyright law prohibits a person from using original works without the permission of the copyright holder – usually the author (AGD 2022a). The types of works that are protected include text, artistic works, music, computer code, sound recordings and films (ACC 2024a). It does not protect the underlying ideas or information (AGD 2022a). In some cases, data and datasets may be protected, ‘largely depend[ing] on how the data has been arranged, structured or presented’ (Allens 2020, p. 3).  

The rise of AI technology has led to new challenges for copyright law. 

The emergence of AI also raises some additional, principle based questions about how the copyright framework (as part of Australia’s broader intellectual property regime) works to benefit society by encouraging creation and innovation, rewarding intellectual effort and achievement, and supporting the dissemination of knowledge and ideas. (AGD 2023c, p. 12) 

In 2023, the Attorney-General established the Copyright and Artificial Intelligence Reference Group, which acts as ‘a standing mechanism to engage with stakeholders across a wide range of sectors on issues at the intersection of AI and copyright’ (AGD 2023a). Since then, the group has met on several occasions to discuss issues relating to AI technology and copyright law (AGD 2023a). 

This section explores one issue particularly relevant to productivity: whether current Australian copyright law is a barrier to building and training AI models. There are other legal issues relating to the outputs of AI models that are less relevant to productivity – such as whether those outputs attract copyright protection and what happens when AI outputs infringe a third party’s copyright (Evans et al. 2024). 

Training AI models 

Building and refining AI models requires the use of large amounts of data. 

The term ‘AI model training’ refers to this process: feeding the algorithm data, examining the results, and tweaking the model output to increase accuracy and efficacy. To do this, algorithms need massive amounts of data that capture the full range of incoming data. (Chen 2023) 

The datasets used to train AI models often contain digital copies of media such as web pages, books, videos, images and music. These media are often the subject of copyright protection, which means that their use to train AI models requires permission from the copyright holder. 

Permission is required because AI models must ‘copy’ the protected material at least temporarily to undertake the training process. The use of copyrighted materials to train an AI model is a separate issue to the copyright status of anything the model produces. As discussed above, AI outputs may have their own copyright challenges. 

A survey of the Copyright and Artificial Intelligence Reference Group indicated that, in practice, a range of copyrighted materials are used to train AI models – including literary and artistic works, sound recordings, films and musical works (AGD 2024, p. 12). 

There is evidence to suggest that large AI models are already being trained on copyrighted materials without consent or compensation (APA and ASA, qr. 39, pp. 3–4; APDG, qr. 6, p. 4; APRA AMCOS, qr. 58, p. 4; ARIA and PPCA, qr. 65, p. 5, Creative Australia, qr. 62, p. 3). It should be noted that Australian copyright law only applies to copying that occurs within Australia’s boundaries – in other words, the training of AI models overseas is subject to the relevant laws of the jurisdiction in which it occurs. Lawsuits have been brought against technology companies – including Meta, Microsoft and OpenAI – in some overseas jurisdictions about the unlicensed use of copyrighted works to train AI models (Ryan 2023). 

There are concerns that the Australian copyright regime is not keeping pace with the rise of AI technology – whether because it does not adequately facilitate the use of copyrighted works or because AI developers can too easily sidestep existing licensing and enforcement mechanisms. There are several policy options, including: 

• no policy change – that is, copyright owners would continue to enforce their rights under the existing copyright framework, including through the court system 
• policy measures to better facilitate the licensing of copyrighted materials, such as through collecting societies 
• amending the Copyright Act to include a fair dealing exception that would cover text and data mining. 

The PC is seeking feedback on what reforms are needed to bring the copyright regime up to date. 

Is there a need to bolster the licensing or enforcement regime? 

Several participants expressed concern about the unauthorised use of copyrighted materials to train AI models. For example, Creative Australia said: Much of the data has been used reportedly without consent from the original creator, and without acknowledgement or remuneration. The global nature of the technology industry has made it difficult for the owners of creative work to enforce their intellectual property rights and be remunerated for the use of their work. (qr. 62, p. 3) 

There are two points at which concerns of this type could be addressed. First, they could be addressed before the fact, through copyright licensing. Licensing is the key mechanism through which a copyright holder grants permission for others to use their work and often involves some form of payment. In Australia, licensing is often done through collecting societies, which are organisations that represent copyright holders. This can streamline the licensing process, because the collecting society can negotiate licences on behalf of multiple copyright holders at once. As the Copyright Agency said: We can help these sectors use third party content for AI related activities. Our annual licence for businesses now allows staff to use news media content in prompts for AI tools (e.g. for summarisation or analysis). We are extending this to other third party content later in the year. We are also in discussions with our members and licensees about other collective licensing solutions, including the use of datasets for AI related activities. (qr. 7, pp. 2–3) 

The issue of unauthorised use of copyrighted materials could also be addressed after the fact, through enforcement. This encompasses a range of possible measures, including take-down notices, alternative dispute resolution and court action. In 2022-23, the Attorney-General’s Department undertook a Copyright Enforcement Review to assess ‘whether existing copyright enforcement mechanisms remain effective and proportionate’ (AGD 2022b). That review found that additional regulatory measures are needed to achieve an effective copyright enforcement regime, and work is currently underway to identify options for: 

• reducing barriers for Australians to use the legal system to enforce copyright, including examining simple options to resolve ‘small value’ copyright infringements 
• improving understanding and awareness about copyright. (AGD 2023b) 

In light of this ongoing work, the issue of copyright enforcement is not in scope for this inquiry. 

Is there a case for a text and data mining exception? 

Another option is to expand the existing ‘fair dealing’ regime, which provides certain exceptions to the requirement to obtain permission from the copyright holder (box 1.6). Currently, there is no exception that covers AI model training per se (The University of Notre Dame Australia 2024). However, depending on the case, a different exception could apply. For example, AI models built as part of research could fall within the scope of the ‘research or study’ exception. 

Box 1.6 – What are fair dealing exceptions? 

Fair dealing exceptions allow for the use of copyright material without permission from the copyright owner, so long as it is used for one of several specified purposes and is considered fair. 

What are the specified purposes? 

The Copyright Act specifies several purposes where the exception may apply. These include: research or study, criticism or review, parody or satire, reporting news, and enabling a person with a disability to access the material (Copyright Act 1968 (Cth), Part III, Div 3; Part VIA, Div 2). 

What counts as ‘fair’? 

Fairness is determined with regard to all the relevant circumstances – that is, it depends on the facts. Some purposes have specified criteria that must be taken into account. For example, where the use is for research or study, the following considerations apply: 

• the purpose and character of the dealing 
• the nature of the work 
• whether the work can be obtained within a reasonable time at an ordinary commercial price 
• the effect of the dealing upon the potential market for, or value of, the work 
• the amount and substantiality of the work that was copied (Copyright Act 1968 (Cth), s 40(2)). 

The ‘fair use’ doctrine – an alternative approach 

Some overseas jurisdictions (notably the United States) take a ‘fair use’ approach to copyright exceptions. Under this doctrine, any type of use can be considered non-infringing, provided that it is considered ‘fair’ – in other words, the use need not fall within one of several defined categories. Several reviews have recommended the adoption of the fair use doctrine in Australia (including by the Australian Law Reform Commission and the Productivity Commission), but this has not occurred. Source: ACC (2024b); ALRC (2013); Copyright Act 1968 (Cth); PC (2021, p. 187). 

In its report on Copyright and the Digital Economy, the Australian Law Reform Commission recommended amendments to enable text and data mining by adopting a fair use approach to copyright exceptions (box 1.6) – or, failing that, through a new fair dealing exception. It explained: There has been growing recognition that data and text mining should not be infringement because it is a ‘non-expressive’ use. Non-expressive use leans on the fundamental principle that copyright law protects the expression of ideas and information and not the information or data itself. (2013, p. 261) 

The Australian Government has since indicated that it is not inclined to introduce a fair use regime (Australian Government 2017, p. 7). Therefore, the PC is considering whether there is a case for a new fair dealing exception that explicitly covers text and data mining (a ‘TDM exception’). TDM exceptions exist in several comparable overseas jurisdictions (box 1.7). 

Such an exception would cover not just AI model training, but all forms of analytical techniques that use machine-read material to identify patterns, trends and other useful information. For example, the use of text and data mining techniques is common in research sectors to produce large datasets that can be interrogated through statistical analysis. 

Box 1.7 – Text and data mining around the world 

European Union: There are two text and data mining (TDM) exceptions embedded in the Digital Single Market Directive (EU 2019/790) – one for scientific research (article 3) and another for general use (article 4). The Artificial Intelligence Act (Regulation (EU) 2024/1689) specifically characterises the training of AI models as involving ‘text and data mining techniques’ (recital 105) and refers to the TDM exception (article 53). The recent case of Kneschke v. LAION [2024] endorsed the view that the TDM exception extends to cover AI training (Goldstein et al. 2024a, 2024b). 

United States: It has been argued that training AI models falls within the scope of the fair use doctrine (Khan 2024; Klosek and Blumenthal 2024). However, the case Thomson Reuters v. Ross [2023] 694 F.Supp.3d 467 highlights that whether AI training is covered by the doctrine depends on whether the fair use factors are met in the circumstances (ReedSmith 2025). 

United Kingdom: There is a TDM exception that applies to non-commercial research (UK Intellectual Property Office 2014). There have been proposals to expand the exception to cover all uses, though these are still under consideration (Pinsent Masons 2023; UK Government 2024). 

Japan: The Japanese Copyright Act includes broad statutory exemptions for TDM (article 30-4(ii)), provided the work is used for ‘non-enjoyment’ purposes (Senftleben 2022, p. 1494). In essence, the requirement for ‘non-enjoyment’ distinguishes between whether the work is being consumed as a work or as data, and is broadly equivalent to the distinction between expressive and non-expressive uses. 

Singapore: The Singaporean Copyright Act includes a specific TDM exception, as well as a broader fair use exception (Ng-Loy 2024). 

To assist its consideration of this option, the PC is seeking feedback about the likely effects of a TDM exception on the AI market, the creative sector and productivity in general – particularly in light of the following considerations.

  • At present, large AI models (including generative AI and large language models) are generally available to be used in Australia. The introduction (or not) of a TDM exception is unlikely to affect whether AI models continue to be available and used in Australia (PC 2024c, p. 13). 

  • At present, large AI models are trained overseas, not in Australia. It is unclear whether the introduction of a TDM exception would change this trend. 

  • As discussed above, large AI models are already being trained on unlicensed copyrighted materials. 

  • A TDM exception could make a difference to whether smaller, low compute models (such as task specific models) can be built and trained in Australia, such as by Australian research institutions, medical technology firms, and research service providers. 

It should also be noted that a TDM exception would not be a ‘blank cheque’ for all copyrighted materials to be used as inputs into all AI models. As discussed in box 1.4, the use must also be considered ‘fair’ in the circumstances – this requirement would act as a check on copyrighted works being used unfairly, preserving the integrity of the copyright holder’s legal and commercial interests in the work. There may be a need for legislative criteria or regulatory guidance about what types of uses are likely to be considered fair. 

Information request 1.1 

The PC is seeking feedback on the issue of copyrighted materials being used to train AI models.

  • Are reforms to the copyright regime (including licensing arrangements) required? If so, what are they and why? 

The PC is also seeking feedback on the proposal to amend the Copyright Act 1968 (Cth) to include a fair dealing exception for text and data mining.

  • How would an exception covering text and data mining affect the development and use of AI in Australia? What are the costs, benefits and risks of a text and data mining exception likely to be? 

  • How should the exception be implemented in the Copyright Act – for example, should it be through a broad text and data mining exception or one that covers non commercial uses only? 

  • Is there a need for legislative criteria or regulatory guidance to help provide clarity about what types of uses are fair?

03 April 2025

AI in Cth public sector

The Australian Parliament's Joint Committee of Public Accounts and Audit, in its report Inquiry into the use and governance of artificial intelligence systems by public sector entities - 'Proceed with Caution', offers the following recommendations 

1  The Committee recommends that the Australian Public Service Commission introduces questions on the use and understanding of artificial intelligence and other emerging technologies into its annual APS Employee Census. This should be done as soon as possible in consultation with the Digital Transformation Agency and other entities with the relevant domain expertise. The following should be among the information sought by this part of the census:

  • the specific types and sources of the technology being used, from automated decision-making through to generative AI 

  • activities and tasks for which these technologies are being utilised and the impact of this on specific decisions and actions 

  • the levels and types of training that had been provided for these systems and the level of confidence of the respondent to effectively use them 

  • the control and management of the outputs of these technologies 

  • the level of understanding of the risks associated with AI-generated decision-making, including potential biases. 

2 The Committee recommends that the Australian Government convenes a whole of Government working group within 12 months of this report to consider the following:

  • standalone legislation that will govern the use of artificial intelligence and other emerging technologies for the benefit of the Australian public 

  • updates to the Archives Act 1983 to account for emerging technologies in accordance with the advice of the National Archives of Australia 

  • the establishment of mandatory rules and governance requirements for the use of artificial intelligence and other emerging technologies across the Commonwealth public sector 

  • all mandatory requirements to stipulate that AI systems must not be deployed for the surveillance of public sector staff and must establish clear thresholds for AI involvement in decision making processes 

  • a standardised glossary of terms and definitions that will apply across government 

  • a consistent and coordinated training framework that is mandatory for all Commonwealth entities 

  • where the responsibility for enforcing any new legislation or mandatory rules should sit and 

  • the frameworks that will be used to ensure compliance 

  • increased cooperation between the Australian Government and international partners (Allies and multilateral forums) to maximise the benefits of AI while mitigating risk and utilising best practice 

These frameworks must ensure any risks, including sovereign risks, and biases that result from the adoption of these technologies can be effectively mitigated. This includes inadequate datasets, biometric biases and inaccuracies, disinformation and propaganda, foreign and electoral interference, online harm, cyber-crime and copyright violations. 

3 The Committee recommends that the Australian Government establishes a statutory Joint Committee on Artificial Intelligence and Emerging Technologies to provide effective and continuous Parliamentary oversight of the adoption of these systems across the Australian government and more widely. 

The Act that establishes this Committee should include provisions that:

  • any legislation that concerns the use of these technologies must automatically be referred to the Committee for inquiry 

  • the committee reviews relevant legislation or regulations to ensure no loopholes exist in the protection of human rights, democracy and freedoms 

  • the enactment or amendment of any legislative rules regarding the use of these technologies must be approved by the Committee 

  • the implementation and amendment of any Commonwealth rules and guidelines regarding the use of these technologies by the public sector must be approved by the Committee 

  • any statutory appointments that directly relate to the use of these technologies must be referred to the Committee for approval 

  • the Committee report to the Parliament annually on the use of AI systems across the federal government. 

4  The Committee recommends that any guidance issued by the Digital Transformation Agency, or any other Australian Government agency, clearly define all AI systems and applications. Given the significant differences between some of these technologies, separate guidance should be developed for each.

31 January 2025

Governance

The Senate Education and Employment Legislation Committee has launched an inquiry into governance at Australian higher education providers. 

The Terms of Reference are 

 The adequacy of the powers available to the Tertiary Education Quality and Standards Agency to perform its role in identifying and addressing corporate governance issues at Australian higher education providers, with particular reference to: 
a. The composition of providers' governing bodies and the transparency, accountability and effectiveness of their functions and processes, including in relation to expenditure, risk management and conflicts of interest; 
b. The standard and accuracy of providers' financial reporting, and the effectiveness of financial safeguards and controls; 
c. Providers' compliance with legislative requirements, including compliance with workplace laws and regulations; 
d. The impact of providers' employment practices, executive remuneration, and the use of external consultants, on staff, students and the quality of higher education offered; and 
e. Any related matters. 

The committee will present its report on 4 April 2025, with submissions due by 3 March 2025.

17 December 2024

Meta Settlement

Amid excitement about today's announcement of a settlement between Meta Platforms, Inc and the OAIC, privacy and regulatory analysts might wonder whether Meta got off lightly: a trivial amount and the ending of litigation in the Federal Court. 

The Enforceable Undertaking reads

1. Background 

1.1. This enforceable undertaking is given by Meta Platforms, Inc. (Meta) to the Australian Information Commissioner (Commissioner) under section 114 of the Regulatory Powers (Standard Provisions) Act 2014 (Regulatory Powers Act) in conjunction with the discontinuance of Federal Court of Australia Proceeding No NSD 246 of 2020 (the Civil Penalty Proceedings) against all Respondents, on a without prejudice basis and without any admission of liability. The Civil Penalty Proceedings followed investigations by the OAIC concerning the Cambridge Analytica Incident, the facts of which are described below together with a background to the Civil Penalty Proceedings. 

1.2. Meta offers this enforceable undertaking in its capacity as the provider of the Facebook service to users in Australia from 14 July 2018 onwards. Prior to 14 July 2018, and during the period in which the Cambridge Analytica Incident described below occurred, Meta Platforms Ireland Limited provided the Facebook service to users in Australia. 

The Cambridge Analytica Incident 

1.3. In April 2010, Meta launched the Graph Application Programming Interface (Graph API). The Graph API allowed third party apps to access, with permission from users who installed the third party app using the Facebook Login tool, certain information, e.g., their name, birthdate, etc., from installers of the app and their friends (if both users’ privacy settings allowed it). Under the first version of Graph API (Graph API Version 1), which was in place from 21 April 2010 to 30 April 2015 for pre-existing apps, third party apps could request access to certain information (1) from the installing user’s account; and (2) that the installing user’s Facebook friends had chosen to share with the installing user. The Graph API would provide the information sought on an automated basis, so long as the installing user authorised the request, the user and their friends had not opted out of the Facebook platform (which would allow the user to opt out of providing access to information to third party apps), subject to the privacy and application settings of the user and their friends. 

1.4. In November 2013, Dr Aleksandr Kogan, a professor at Cambridge University, launched a third party app relevantly known as “thisisyourdigitallife” (the Life App) using Graph API Version 1. Before doing so, Dr Kogan agreed to Meta’s terms of service and its terms for developers of third party apps using the Facebook platform and the Graph API. The Life App, which presented itself to users as a quiz app, requested via a dialog box at the time of installation, installing users’ permission to access certain categories of their information as well as certain categories of information that their Facebook friends shared with them. 

1.5. In December 2015, upon learning from media reports that Dr Kogan and his company, Global Science Research Limited (GSR), may have been transferring user information to Cambridge Analytica (UK) Ltd, a British data analytics company, and its parent company, Strategic Communication Laboratories (together, SCL) (in contravention of contractual obligations owed to Meta), Meta launched an investigation and terminated the Life App’s use of the Graph API and access to Facebook Login. 

1.6. Based on this investigation, Meta concluded that Dr Kogan and GSR had violated its terms in several respects. Meta subsequently obtained certifications that Dr. Kogan, GSR, and other third parties (including SCL) with whom Dr Kogan had shared user information had deleted the information. The information that was transferred to SCL related primarily to users in the United States. Neither Meta, nor Meta Platforms Ireland Limited, are aware of any evidence that Dr Kogan provided SCL with information on Facebook users from Australia. 

The OAIC’s Investigation and the Civil Penalty Proceedings 

1.7. On 5 April 2018, the Commissioner initiated an investigation under section 40(2) of the Privacy Act 1988 (Cth) (Privacy Act) in relation to reports that Australian users’ information may have been improperly shared with Cambridge Analytica (UK) Ltd via the Life App. During the investigation, which extended to Meta, Meta Platforms Ireland Limited and Facebook Australia Pty Ltd, the Commissioner raised concerns that Meta may have interfered with the privacy of Australian individuals in contravention of Australian Privacy Principles (APPs) 1.2, 5, 6, 10 and 11 of the Privacy Act (Investigation). 

1.8. On 9 March 2020, the Commissioner commenced the Civil Penalty Proceedings and concluded the above investigation. In the Civil Penalty Proceedings, as further particularised in the Amended Statement of Claim dated 2 June 2023, the Commissioner alleged that Meta’s systems and practices raised concerns about the protection of personal information of Australian Facebook users in relation to the Cambridge Analytica incident, and that, based on its Investigation, Meta and Meta Platforms Ireland Limited may have contravened section 13G of the Privacy Act through serious or repeated breaches of APPs 6.1 and 11.1. The Commissioner alleged that, throughout the time the Life App was available to Facebook users, approximately: 1.8.1. 53 Facebook users located in Australia installed the Life App; and 1.8.2. 311,074 Facebook users located in Australia could have had their personal information requested by the Life App as friends of installing Facebook users. 

2. Meta’s Response to the Cambridge Analytica Incident 

2.1. Meta acknowledges: 2.1.1. that under the Privacy Act, Meta must not do an act, or engage in a practice, that breaches an APP; 2.1.2. the Commissioner’s concerns identified in paragraphs 1.7 and 1.8. 

2.2. Meta represents, and the Commissioner acknowledges, that: 2.2.1. Meta no longer permits third party app developers to access from Meta an installing user’s friend’s information, unless that friend has also installed the app and authorised it to have access to that information; 2.2.2. since the period relevant to the Civil Penalty Proceedings, being 12 March 2014 to 1 May 2015 (Relevant Period), Meta has dedicated significant and increased resources to monitoring third party apps and enforcing Meta’s terms and policies; 2.2.3. since the Relevant Period, Meta substantially reduced the number of information fields available that third party app developers (via Facebook Login) may request an installing user’s permission to access, examples of information fields that have been removed include: (i) the installing user’s friends’ information, excluding the circumstances specified in paragraph 2.2.1; and (ii) the installing user’s religion, political views and relationship details; 2.2.4. since the Relevant Period, Meta has continued to implement granular data permissions processes to allow a user who installs a third party app to decide which categories of certain information they will share with the third party app; and 2.2.5. Meta monitors the compliance of third party app developers of consumer apps with Meta’s Platform Terms through measures including, but not limited to, ongoing manual reviews and automated scans, and regular assessments, audits, or other technical and operational testing at least once every 12 months. 

3. Meta’s Enforceable Undertaking to the Commissioner 

3.1. Meta offers this enforceable undertaking to the Commissioner under section 114 of the Regulatory Powers Act, including to address the concerns in paragraphs 1.7 and 1.8. 

3.2. This undertaking comes into effect when: 3.2.1. it is executed by Meta; and 3.2.2. this undertaking, so executed, is accepted by the Commissioner (the Commencement Date). 

3.3. This undertaking ceases to have effect upon the completion of the Payment Program (as defined at paragraph 4.1 below). 

4. Undertaking to Establish Payment Program 

4.1. Meta undertakes to implement a payment program open to Eligible Australian Users in recognition of the Commissioner’s concern that those users may have suffered loss or damage as a result of interferences with their privacy arising from the conduct the subject of the Commissioner’s concerns as identified in paragraphs 1.7 and 1.8 above in accordance with Parts 5 and 6 of this enforceable undertaking and fulfill each of its obligations set out in Parts 4 to 7 of this enforceable undertaking (Payment Program). 

4.2. Meta undertakes to: 4.2.1. engage an independent third party administrator (the Administrator); 4.2.2. direct the Administrator to administer the Payment Program in accordance with: 4.2.2.1. Parts 5 and 6 of this enforceable undertaking; and 4.2.2.2. any instructions for the Payment Program given to the Administrator by Meta (Scheme Instructions); and 4.2.3. complete the Payment Program within 2 years from the Commencement Date or such longer period as agreed between the Commissioner and Meta. 

5. Eligible Australian Users 

5.1. A person is an “Eligible Australian User” if the person: 5.1.1. held a Facebook Account at any time during the period of 2 November 2013 and 17 December 2015 (Eligibility Period); 5.1.2. was located in Australia for 30 days or more during the Eligibility Period; and 5.1.3. during the Eligibility Period, either: 5.1.3.1. installed the Life App using Facebook Login; or 5.1.3.2. did not install the Life App but was Facebook friends with another Facebook user who had installed the Life App using Facebook Login. 

5.2. Subject to paragraphs 5.3 to 5.5, an Eligible Australian User can register with the Administrator as a “Claimant” under the Payment Program if they submit to the Administrator within the registration period prescribed by the Administrator (Registration Period) a valid Registration Form and evidence in such form as prescribed, verifying that the person: 5.2.1. is an Eligible Australian User under paragraph 5.1; 5.2.2. holds a genuine belief that as a direct consequence of the conduct the subject of the Commissioner’s concerns identified in paragraphs 1.7 and 1.8, they have suffered loss or damage, being either: 5.2.2.1. specific economic and/or non-economic loss and/or damage (beyond a generalised concern or embarrassment) (Class 1); or 5.2.2.2. a generalised concern or embarrassment (Class 2). 

5.3. The Registration Form will be prepared by the Administrator in consultation with Meta and may set the standard of verification and evidence that a Claimant must provide for each eligibility criterion by the end of the Registration Period, including by way of statutory declaration or identity verification as considered appropriate. 5.3.1. For paragraphs 5.1.3 and 5.2.2.2, Meta must direct the Administrator to not require more than a valid statutory declaration. 

5.4. Notwithstanding paragraphs 5.2 and 5.3, the Administrator may, in its absolute discretion, determine that a person will not be: 5.4.1. an Eligible Australian User where the Administrator is unable to verify that the person meets the requirements of Part 5 of this enforceable undertaking based on the information available to the Administrator; 5.4.2. a Claimant where the Administrator determines that: 5.4.2.1. the person provided the Administrator with false information, or that the person’s registration is otherwise fraudulent; 5.4.2.2. the person has previously registered as a Claimant; 5.4.2.3. if the person registered to receive payment from Meta, or any of its affiliated or related entities, in a proceeding, investigation or other legal action in any jurisdiction outside of Australia that relates to, or arose out of, the factual background detailed in paragraphs 1.3 to 1.6 of this enforceable undertaking, such as the US settlement of In re: Facebook, Inc. Consumer Privacy User Profile Litigation, Case No. 3:18-md-02843-VC (N.D. Cal.); or 5.4.2.4. the person is not otherwise eligible in accordance with the Scheme Instructions. 

5.5. For the avoidance of any doubt, a person: 5.5.1. is not a Claimant if the person has not registered in accordance with paragraphs 5.2 and 5.3 during the Registration Period; and 5.5.2. cannot register as a Claimant in both Class 1 and Class 2. 

6. Payment Program 

6.1. Meta undertakes to, within 60 days of the Commissioner filing a Notice of Discontinuance in the Civil Penalty Proceedings, pay an amount of $50 million (the Contribution Amount) to the Administrator for the Administrator to use to make payments to Claimants (Payments) in accordance with paragraphs 6.2 to 6.9. 

6.2. Following the payment of the Contribution Amount by Meta in accordance with paragraph 6.1, Meta will: 6.2.1. notify the Commissioner that the Contribution Amount has been paid to the Administrator; 6.2.2. direct the Administrator to make information available on a website established by the Administrator regarding the Payment Program, including how Eligible Australian Users can register with the Administrator as a Claimant; 6.2.3. use reasonable best efforts to: 6.2.3.1. identify, based on Meta’s available records, persons that may be Eligible Australian Users; and 6.2.3.2. facilitate electronic notice of the Payment Program to those persons; 6.2.4. direct the Administrator to take reasonable steps to publicise the Payment Program within Australia. 

6.3. The Payment that a Claimant receives will depend on whether the Administrator determines that the Claimant is a Class 1 or Class 2 Claimant. 

6.4. In performing its obligations under Parts 5 and 6, the Administrator will apply any Scheme Instructions, including any cap to apply to Payments made to Claimants and the principle that all Class 2 Claimants be paid the same amount. 

6.5. Subject to the Scheme Instructions, following the end of the Registration Period, the Administrator will: 6.5.1. evaluate and determine, using evidence available to the Administrator at that time, in the Administrator’s absolute discretion whether: 6.5.1.1. a person is an Eligible Australian User (in accordance with Part 5); and 6.5.1.2. if a person registers as a Claimant in Class 1, the person has provided sufficient supporting evidence to substantiate their claim that they have suffered loss or damage in Class 1; 6.5.2. determine the number of Claimants in each of Class 1 and Class 2; 6.5.3. commence the process for determining the Payment that each Class 1 and Class 2 Claimant is entitled to receive, in accordance with this Part 6; and 6.5.4. notify Meta that the process referred to in paragraph 6.5.3 above has begun, at which point Meta will within 24 hours notify the Commissioner thereof. 

6.6. The Scheme Instructions will provide for the Administrator to include a timely internal review avenue for: 6.6.1. any decision by the Administrator to reject a Claimant’s Class 1 registration and allocate the Claimant to Class 2; and 6.6.2. assessment of any Payment amount that is to be made to a Claimant in Class 1. 

6.7. Following the conclusion of the process in 6.5, in accordance with paragraphs 6.3 and 6.4, the Administrator will: 6.7.1. finalise its determination including any internal review of any Payment that is to be made to a Claimant in either Class 1 or Class 2; 6.7.2. once all determinations are completed in accordance with paragraph 6.7.1, notify Meta of: 6.7.2.1. the total number of Claimants; and 6.7.2.2. the aggregated amount to be distributed to all Claimants; and 6.7.3. make a timely Payment to each such Claimant. 

6.8. Following receipt of the notification set out at paragraph 6.7.2, Meta will within 24 hours notify the Commissioner thereof. 

6.9. If the total aggregate sum of Payments made to Claimants under paragraph 6.7 is less than the Contribution Amount, Meta will direct the Administrator to pay the residual amount to the Australian Government’s Consolidated Revenue Fund. 

6.10. If, when performing its obligations under Parts 5 and 6 of this enforceable undertaking, the Administrator informs Meta that it will not be able to comply with any deadline specified in this undertaking, Meta will: 6.10.1. promptly inform the Commissioner, and the OAIC, of the extent and reasons for the delay; 6.10.2. in consultation with the Administrator, determine a date by which the Administrator will reasonably be able to complete the actions specified; 6.10.3. propose the modified date(s) to the Commissioner and seek to agree any necessary extension; and 6.10.4. cause the Administrator to notify Claimants of the delay and the amended date(s) agreed with the Commissioner (if applicable). 

7. Compliance 

7.1. Subject to any confidentiality obligations owed by Meta, the OAIC may request in writing from time to time and Meta will provide to it, documents and information that are reasonably necessary for the purpose of assessing Meta’s compliance with Parts 4 to 6 of this enforceable undertaking. 

7.2. Meta will use its best endeavours to provide documents and information in response to any request under paragraph 7.1 within 14 days of the request. 

8. Other matters 

8.1. Meta acknowledges that the Commissioner: 8.1.1. will publish this enforceable undertaking as well as a summary of the undertaking, on the OAIC website; 8.1.2. may issue a statement on acceptance of this enforceable undertaking referring to its terms and to the circumstances which led to the Commissioner’s acceptance of the undertaking; and 8.1.3. may from time to time publicly refer to this enforceable undertaking, including any breach of this enforceable undertaking by Meta. 

8.2. Meta acknowledges that: 8.2.1. The Commissioner’s acceptance of this enforceable undertaking does not preclude the Commissioner’s power to investigate, power not to investigate further, or the exercise of any of the Commissioner’s functions under the Privacy Act in relation to: (i) the representative investigation opened by the Commissioner under sub-section 40(1) of the Privacy Act on 21 October 2019 (referred to by the Commissioner using the reference number CP18/01262); or (ii) any contravention that concerns conduct that is outside the scope of the Civil Penalty Proceedings or Investigation. 8.2.2. If the Commissioner considers that Meta has breached this enforceable undertaking, the Commissioner may apply to the Federal Court or Federal Circuit Court to enforce the undertaking under s 115 of the Regulatory Powers Act. 

8.3. The Commissioner’s acceptance of this enforceable undertaking is not a finding that Meta has contravened the Privacy Act or the APPs. 

8.4. Meta gives this enforceable undertaking on a without prejudice basis, and without any admission of liability as to the matters raised in the Investigation or Civil Penalty Proceedings. Any representations made or acknowledgments given by Meta in this enforceable undertaking, whether express or implied, are made without prejudice or admission of liability. In giving this enforceable undertaking, neither Meta nor any of its affiliated or associated entities are precluded from taking any position or relying on any facts or factual statements in any legal or regulatory proceedings in Australia or in any other jurisdiction in relation to any matter that was within the scope of the Commissioner’s investigations referred to in paragraphs 1.7 and 8.2.1, the Civil Penalty Proceedings or which otherwise relate to the Cambridge Analytica Incident described at paragraphs 1.3 to 1.6. 

9. Confidentiality 

9.1. The Commissioner acknowledges that information provided by Meta, or the Administrator, to the Commissioner and OAIC in accordance with this enforceable undertaking may contain sensitive commercial information (Commercial-in-confidence Information). 

9.2. The Commissioner acknowledges that any such Commercial-in-confidence Information is provided by Meta, or the Administrator, in confidence. 

9.3. The Commissioner: 9.3.1. will only publish or otherwise disclose any Commercial-in-confidence Information with Meta’s written agreement, unless otherwise required by law; and 9.3.2. will only use any Commercial-in-confidence Information for the purpose of exercising the Commissioner’s powers, or performing functions or duties in the Privacy Act.