03 February 2026

AI Safety

The 2nd International AI Safety Report states 

 — General-purpose AI capabilities have continued to improve, especially in mathematics, coding, and autonomous operation. Leading AI systems achieved gold-medal performance on International Mathematical Olympiad questions. In coding, AI agents can now reliably complete some tasks that would take a human programmer about half an hour, up from under 10 minutes a year ago. Performance nevertheless remains ‘jagged’, with leading systems still failing at some seemingly simple tasks. 

— Improvements in general-purpose AI capabilities increasingly come from techniques applied after a model’s initial training. These ‘post-training’ techniques include refining models for specific tasks and allowing them to use more computing power when generating outputs. At the same time, using more computing power for initial training continues to also improve model capabilities. 

— AI adoption has been rapid, though highly uneven across regions. AI has been adopted faster than previous technologies like the personal computer, with at least 700 million people now using leading AI systems weekly. In some countries over 50% of the population uses AI, though across much of Africa, Asia, and Latin America adoption rates likely remain below 10%. 

— Advances in AI’s scientific capabilities have heightened concerns about misuse in biological weapons development. Multiple AI companies chose to release new models in 2025 with additional safeguards after pre-deployment testing could not rule out the possibility that they could meaningfully help novices develop such weapons. 

— More evidence has emerged of AI systems being used in real-world cyberattacks. Security analyses by AI companies indicate that malicious actors and state-associated groups are using AI tools to assist in cyber operations. 

— Reliable pre-deployment safety testing has become harder to conduct. It has become more common for models to distinguish between test settings and real-world deployment, and to exploit loopholes in evaluations. This means that dangerous capabilities could go undetected before deployment. 

— Industry commitments to safety governance have expanded. In 2025, 12 companies published or updated Frontier AI Safety Frameworks – documents that describe how they plan to manage risks as they build more capable models. Most risk management initiatives remain voluntary, but a few jurisdictions are beginning to formalise some practices as legal requirements. 

This Report assesses what general-purpose AI systems can do, what risks they pose, and how those risks can be managed. It was written with guidance from over 100 independent experts, including nominees from more than 30 countries and international organisations, such as the EU, OECD, and UN. Led by the Chair, the independent experts writing it jointly had full discretion over its content. 

The authors note 

 This Report focuses on the most capable general-purpose AI systems and the emerging risks associated with them. ‘General-purpose AI’ refers to AI models and systems that can perform a wide variety of tasks. ‘Emerging risks’ are risks that arise at the frontier of general-purpose AI capabilities. Some of these risks are already materialising, with documented harms; others remain more uncertain but could be severe if they materialise. 

The aim of this work is to help policymakers navigate the ‘evidence dilemma’ posed by general-purpose AI. AI systems are rapidly becoming more capable, but evidence on their risks is slow to emerge and difficult to assess. For policymakers, acting too early can lead to entrenching ineffective interventions, while waiting for conclusive data can leave society vulnerable to potentially serious negative impacts. To alleviate this challenge, this Report synthesises what is known about AI risks as concretely as possible while highlighting remaining gaps. 

While this Report focuses on risks, general-purpose AI can also deliver significant benefits. These systems are already being usefully applied in healthcare, scientific research, education, and other sectors, albeit at highly uneven rates globally. But to realise their full potential, risks must be effectively managed. Misuse, malfunctions, and systemic disruption can erode trust and impede adoption. The governments attending the AI Safety Summit initiated this Report because a clear understanding of these risks will allow institutions to act in proportion to their severity and likelihood. 

Capabilities are improving rapidly but unevenly 

Since the publication of the 2025 Report, general-purpose AI capabilities have continued to improve, driven by new techniques that enhance performance after initial training. AI developers continue to train larger models with improved performance. Over the past year, they have further improved capabilities through ‘inference-time scaling’: allowing models to use more computing power in order to generate intermediate steps before giving a final answer. This technique has led to particularly large performance gains on more complex reasoning tasks in mathematics, software engineering, and science. 

At the same time, capabilities remain ‘jagged’: leading systems may excel at some difficult tasks while failing at other, simpler ones. General-purpose AI systems excel in many complex domains, including generating code, creating photorealistic images, and answering expert-level questions in mathematics and science. Yet they struggle with some tasks that seem more straightforward, such as counting objects in an image, reasoning about physical space, and recovering from basic errors in longer workflows. 

The trajectory of AI progress through 2030 is uncertain, but current trends are consistent with continued improvement. AI developers are betting that computing power will remain important, having announced hundreds of billions of dollars in data centre investments. Whether capabilities will continue to improve as quickly as they recently have is hard to predict. Between now and 2030, it is plausible that progress could slow or plateau (e.g. due to bottlenecks in data or energy), continue at current rates, or accelerate dramatically (e.g. if AI systems begin to speed up AI research itself). 

Real-world evidence for several risks is growing 

General-purpose AI risks fall into three categories: malicious use, malfunctions, and systemic risks. 

Malicious use 

AI-generated content and criminal activity: AI systems are being misused to generate content for scams, fraud, blackmail, and non-consensual intimate imagery. Although the occurrence of such harms is well-documented, systematic data on their prevalence and severity remains limited. 

Influence and manipulation: In experimental settings, AI-generated content can be as effective as human-written content at changing people’s beliefs. Real-world use of AI for manipulation is documented but not yet widespread, though it may increase as capabilities improve. 

Cyberattacks: AI systems can discover software vulnerabilities and write malicious code. In one competition, an AI agent identified 77% of the vulnerabilities present in real software. Criminal groups and state-associated attackers are actively using general-purpose AI in their operations. Whether attackers or defenders will benefit more from AI assistance remains uncertain. 

Biological and chemical risks: General-purpose AI systems can provide information about biological and chemical weapons development, including details about pathogens and expert-level laboratory instructions. In 2025, multiple developers released new models with additional safeguards after they could not exclude the possibility that these models could assist novices in developing such weapons. It remains difficult to assess the degree to which material barriers continue to constrain actors seeking to obtain them. 

Malfunctions 

Reliability challenges: Current AI systems sometimes exhibit failures such as fabricating information, producing flawed code, and giving misleading advice. AI agents pose heightened risks because they act autonomously, making it harder for humans to intervene before failures cause harm. Current techniques can reduce failure rates but not to the level required in many high-stakes settings. 

Loss of control: ‘Loss of control’ scenarios are scenarios where AI systems operate outside of anyone’s control, with no clear path to regaining control. Current systems lack the capabilities to pose such risks, but they are improving in relevant areas such as autonomous operation. Since the last Report, it has become more common for models to distinguish between test settings and real-world deployment and to find loopholes in evaluations, which could allow dangerous capabilities to go undetected before deployment. 

Systemic risks 

Labour market impacts: General-purpose AI will likely automate a wide range of cognitive tasks, especially in knowledge work. Economists disagree on the magnitude of future impacts: some expect job losses to be offset by new job creation, while others argue that widespread automation could significantly reduce employment and wages. Early evidence shows no effect on overall employment, but some signs of declining demand for early-career workers in some AI-exposed occupations, such as writing. 

Risks to human autonomy: AI use may affect people’s ability to make informed choices and act on them. Early evidence suggests that reliance on AI tools can weaken critical thinking skills and encourage ‘automation bias’, the tendency to trust AI system outputs without sufficient scrutiny. ‘AI companion’ apps now have tens of millions of users, a small share of whom show patterns of increased loneliness and reduced social engagement. 

Layering multiple approaches offers more robust risk management 

Managing general-purpose AI risks is difficult due to technical and institutional challenges. Technically, new capabilities sometimes emerge unpredictably, the inner workings of models remain poorly understood, and there is an ‘evaluation gap’: performance on pre-deployment tests does not reliably predict real-world utility or risk. Institutionally, developers have incentives to keep important information proprietary, and the pace of development can create pressure to prioritise speed over risk management and makes it harder for institutions to build governance capacity. 

Risk management practices include threat modelling to identify vulnerabilities, capability evaluations to assess potentially dangerous behaviours, and incident reporting to gather more evidence. In 2025, 12 companies published or updated their Frontier AI Safety Frameworks – documents that describe how they plan to manage risks as they build more capable models. While AI risk management initiatives remain largely voluntary, a small number of regulatory regimes are beginning to formalise some risk management practices as legal requirements. 

Technical safeguards are improving but still show significant limitations. For example, attacks designed to elicit harmful outputs have become more difficult, but users can still sometimes obtain harmful outputs by rephrasing requests or breaking them into smaller steps. AI systems can be made more robust by layering multiple safeguards, an approach known as ‘defence-in-depth’. 

Open-weight models pose distinct challenges. They offer significant research and commercial benefits, particularly for lesser-resourced actors. However, they cannot be recalled once released, their safeguards are easier to remove, and actors can use them outside of monitored environments – making misuse harder to prevent and trace. 

Societal resilience plays an important role in managing AI-related harms. Because risk management measures have limitations, they will likely fail to prevent some AI-related incidents. Societal resilience-building measures to absorb and recover from these shocks include strengthening critical infrastructure, developing tools to detect AI-generated content, and building institutional capacity to respond to novel threats.

01 February 2026

Personhoods

'Legal personhood for cultural heritage? Some preliminary reflections' by Alberto Frigerio in (2026) International Journal of Cultural Property 1-8 comments

 Cultural heritage occupies a paradoxical position in law: It is protected as property but experienced as a repository of identity, memory, and dignity. This article examines whether cultural heritage could, in principle, be recognized as a subject of law, drawing on emerging developments in environmental and nonhuman personhood. After tracing the historical and conceptual evolution of legal personhood—from human and corporate subjects to nature and ecosystems—it explores the moral, relational, and symbolic dimensions that might justify extending personhood to heritage. The analysis highlights both the potential benefits of such recognition, including stronger ethical and representational protections, and the associated risks, such as legal inflation, state appropriation, and conflicts with ownership and restitution law. Ultimately, it argues that rethinking heritage through the lens of relational personhood reveals the need for a more pluralistic and ethically responsive legal imagination. 

'Legal Personhood for Artwork' by Sergio Alberto Gramitto Ricci in (2025) 76(5/6) University of California San Francisco Law Journal 1429 states 

Artwork is unique and irreplaceable. It is signifier and signified. The signified of a work of art is its coherent purpose. But the signified of a work of art can be altered when not protected. The ramifications of unduly altering the signified of a work of art are consequential for both living and future generations. While the law provides protection to artists and art owners, it fails to grant rights to works of art themselves. The current legal paradigm, designed around the interest of owners and artists, also falls short of protecting Indigenous art aimed at conserving traditions and cultural identity, rather than monetizing creativity. This Article provides a theoretical framework for recognizing legal personhood for works of art, in the interests of art in and of itself as well as of current and future generations of human beings. This new paradigm protects artwork through the features of legal personhood.

23 January 2026

Pseudolaw

Yet another judgment re pseudolaw. In Commonwealth Bank of Australia v Cahill & Anor [2025] VCC 1860 the Court notes 

The amended defences deny the existence of any lawful credit agreement between the parties, assert that CBA is a “corporate fiction,” and contend that no valid mortgage was created or that CBA lacks standing to enforce it. The defendants also dispute the quantum of the debt and demand production of “wet-ink” originals of various loan and title documents. Judge’s amended counterclaim makes bald and sweeping allegations that CBA engaged in misleading or deceptive conduct, relied on an unfair standard form contract contrary to the Australian Consumer Law, and “securitised” the mortgage in breach of the Corporations Act 2001 (Cth), thereby losing the right to enforce it. It further alleges that enforcement of the mortgage constitutes modern slavery and seeks, among other relief, the return of all payments made, the discharge of the mortgage, and damages.

In referring to 'Sovereign Citizens and pseudo law' the judgment states

 The documents and submissions made by the defendants fall into a by now well-known quasi-philosophy known as the “sovereign citizen” movement. The guiding philosophy appears to be that these persons consider that they are not subject to the laws of the Commonwealth of Australia unless they have expressly “contracted” or consented to be so bound. This philosophy has no basis in law and has been rejected in many cases to date. All persons living under the protection of the Crown in right of the Commonwealth or State are, as a matter of law, subject to the laws of the Commonwealth. Any suggestion to the contrary is both dangerous and undermines the orderly arrangement of any society. The courts of this country will give no credence to such philosophy. 

The documents and submissions filed by the defendants are informed by half-baked statements that contain traces of legal tit-bits scraped from current and ancient sources otherwise also referred to as “pseudo-law”. They are legal gibberish and do not constitute proper statements of principles known to law. 

In Re Coles Supermarkets Australia Pty Ltd [2022] VSC 438, Hetyey AsJ said the following of such submissions:

The defendants appear to be seeking to draw a distinction between themselves as ‘natural’ or ‘living’ persons, on the one hand, and their status as ‘legal’ personalities, on the other. However, contemporary Australian law does not distinguish between a human being and their legal personality. Any such distinction would potentially leave a human being without legal rights, which would be unacceptable in modern society. The contentions put forward by the defendants in this regard are artificial and have no legal consequence. 

I adopt the analysis of John Dixon J in Stefan v McLachlan [2023] VSC 501, dealing with the fictional concept of the ‘living man’, stating that:

The law recognises a living person as having status in law and any person is, in this sense, a legal person. Conceptually, there may be differences between the legal status of a person and that of an entity that is granted a like legal status, but whatever they might be they have no application on this appeal. In asserting that he is a ‘living man’, the appellant does no more than identify that he is a person, an individual. Every person, every individual, and every entity accorded status as a legal person is subject to the rule of law. There are no exceptions in Australian society. 

I also refer to AsJ Gobbo’s decision in Nelson v Greenman & Anor [2024] VSC 704 in which her Honour gives a comprehensive treatment of the fallacies underlying the sovereign citizen and pseudo law movements. I concur with and adopt her Honour’s treatment of the subject at paragraphs [53] – [78].

03 January 2026

Security

The NSW Auditor General December 2025 report on Cyber security in Local Health Districts states 

 NSW Health is not effectively managing cyber security risks to clinical systems that support healthcare delivery in Local Health Districts. In addition, Local Health Districts have not met the minimum NSW Government cyber security requirements that have been outlined in NSW Cyber Security Policy since 2019. Systemic non-compliance with those requirements, including maintaining adequate cyber security response plans, business continuity planning and disaster recovery for cyber security incidents, means that Local Health Districts could not demonstrate that they are prepared for, or resilient to, cyber threats. This exposes the risk that a preventable cyber security incident could disrupt access to healthcare services and compromise the security of sensitive patient information. eHealth NSW has not clearly defined or communicated its roles and the expected roles of Local Health Districts regarding cyber security. This has led to confusion amongst Local Health Districts on the cyber security risks they manage, including for crown jewel assets (the ICT assets regarded as valuable or operationally vital for service delivery), and identifying and mitigating critical vulnerabilities, threats and risks. Local Health District management of cyber security is hampered by a lack of support, coordination and oversight from eHealth NSW in cyber security matters.  

The report states 

 The New South Wales (NSW) public health system includes more than 220 public hospitals, community and other public health services. 15 Local Health Districts across NSW administer the hospitals and other health services. eHealth NSW was established in 2014 to provide statewide leadership on the planning, implementation and support of information communication technologies (ICT) and digital capabilities across NSW Health. Health service delivery is increasingly reliant on digital systems, which in turn requires the effective management of cyber security risks. Cyber attacks can harm health service delivery and may include the theft of information, breaches of private health information, denial of access to critical technology or even the hijacking of systems for profit or malicious intent. These outcomes can adversely affect the community and damage trust in government. 

Audit objective 

This audit assessed whether NSW Health is effectively safeguarding clinical systems, required to support healthcare delivery in Local Health Districts, from cyber threats. The audit assessed this with the following questions: Do relevant NSW Health organisations effectively manage cyber security risks to clinical systems? Do relevant NSW Health organisations effectively respond to cyber attacks that affect the clinical systems that are essential for service delivery? To focus the audit, 4 of the 15 Local Health Districts were selected for audit. These districts are referred to as ‘the audited Local Health Districts’ throughout this report. The audit further focused on one facility in each of the audited Local Health Districts that provided a common type of healthcare service. The names of the audited Local Health Districts, selected facilities and healthcare services are not disclosed. 

Conclusion 

NSW Health is not effectively managing cyber security risks to clinical systems that support healthcare delivery in Local Health Districts. In addition, Local Health Districts have not met the minimum NSW Government cyber security requirements that have been outlined in NSW Cyber Security Policy since 2019. Local Health Districts are not adequately prepared to respond effectively to cyber security incidents. Systemic non-compliance with NSW Government cyber security requirements, including maintaining adequate cyber security response plans, business continuity planning and disaster recovery for cyber security incidents, means that Local Health Districts could not demonstrate that they are prepared for, or resilient to, cyber threats. This exposes the risk that a preventable cyber security incident could disrupt access to healthcare services and compromise the security of sensitive patient information. eHealth NSW has not clearly defined or communicated its roles and the expected roles of Local Health Districts regarding cyber security. This has led to confusion amongst Local Health Districts on the cyber security risks they manage, including for crown jewel assets (the ICT assets regarded as valuable or operationally vital for service delivery), and identifying and mitigating critical vulnerabilities, threats and risks. Local Health District management of cyber security is hampered by a lack of support, coordination and oversight from eHealth NSW in cyber security matters.

Key findings are 

  Local Health Districts do not manage cyber security risks effectively 

Local Health Districts generate, use and maintain large volumes of sensitive personal and health information about patients. The NSW Cyber Security Policy sets out an expectation that cyber security efforts are commensurate with the potential effect of a successful cyber breach. Under NSW Health policy, Local Health Districts, in collaboration with eHealth NSW, are responsible for managing cyber security and resourcing a fit-for-purpose cyber security function. The current NSW Cyber Security Policy 2023–2024 recognises that agencies providing critical or high-risk services, such as Local Health Districts, should implement a wider range of controls and aim for broader coverage and effective implementation of additional controls. However, the audited Local Health Districts have not complied with the minimum requirements of the NSW Cyber Security Policy since it was introduced in 2019. None of the four districts had effective cyber security plans. Local Health Districts that do not have effective cyber security plans cannot articulate their approach to managing cyber security risks and are not adequately prepared to respond to and manage cyber security risks and incidents. 

Local Health Districts do not have plans and processes in place to respond effectively to a cyber attack 

None of the audited Local Health Districts had effective cyber security response plans. Nor did Local Health District business continuity plans and disaster recovery plans consider cyber security risks. Local Health Districts that do not have effective cyber security response, disaster recovery or business continuity plans that include considerations of cyber security, may not be able to safeguard clinical systems against potential cyber security incidents. This may also hamper responses during an incident because roles and responsibilities may not be understood, and actions to address cyber security incidents may not be undertaken as quickly as required, affecting the delivery of services to patients. 

NSW Health has not clearly communicated cyber security roles and responsibilities amongst NSW Health organisations 

eHealth NSW coordinates cyber security matters within NSW Health. However, eHealth NSW has not clearly defined and communicated its roles and the expected roles of Local Health Districts for cyber security. This has led to confusion amongst Local Health Districts on the cyber security risks they manage, including for crown jewel assets (the ICT assets regarded as valuable or operationally vital for service delivery) and identifying and mitigating critical vulnerabilities, threats and risks. 

eHealth NSW does not provide Local Health Districts with sufficient support to manage cyber security risks, and Local Health Districts have not applied the tools provided by eHealth NSW to all clinically important systems 

eHealth NSW has developed and distributed cyber security frameworks, guidance and training to all Local Health Districts. eHealth NSW has developed whole-of-system tools to meet key requirements of the NSW Cyber Security Policy and improve the effectiveness of Local Health Districts’ cyber security activities. These tools include risk assessment frameworks. However, eHealth NSW has not ensured that its tools have been implemented in Local Health Districts, nor assessed whether Local Health Districts have the capability or capacity to do so. In the audited Local Health Districts, the effectiveness of eHealth’s cyber threat identification tools is hampered by incomplete application to all clinically important ICT assets. This means that critical systems used by Local Health Districts to deliver, or support the delivery of, clinical treatment are not effectively protected from cyber security incidents. 

Local Health Districts do not have an effective cyber security culture 

In all audited Local Health Districts, critical cyber security controls are not consistently applied by clinical staff who perceive a tension between the urgency of clinical service delivery and the importance of cyber security policies. This has led to normalisation of non-compliance with cyber security controls. This audit observed clinical staff non-compliance at all audited Local Health Districts with multiple cyber security controls that Local Health Districts had put in place. Despite known systemic non-compliance by clinical staff, the audited Local Health Districts have not assessed the effectiveness of the controls they have put in place, nor have they identified any alternatives that might balance the need for clinical urgency with effective cyber security practice. In addition, they have not considered investing in alternative ICT solutions that better meet the needs of clinical staff while also addressing cyber security concerns. 

NSW Health’s Cyber Security Policy attestation lacks transparency on the level of cyber security capability within the health system 

The NSW Cyber Security Policy requires an agency head to attest to the agency’s compliance with the policy. In 2023, eHealth NSW surveyed all NSW Health organisations, including Local Health Districts, on their self-assessed maturity against the NSW Cyber Security Policy in developing a summary assessment for NSW Health to inform its attestation of NSW Cyber Security Policy compliance. That summary showed that Local Health Districts had immature cyber security controls, including for the Essential Eight controls – the most effective set of controls identified by the Australian Cyber Security Centre. However, in 2024, the survey was not completed, so NSW Health aggregated its assessment of whether NSW Health organisations had met NSW Cyber Security Policy requirements. This audit identified systemic Local Health District non-compliance with NSW Cyber Security Policy. The 2024 attestation therefore obscures the risks that exist in Local Health Districts. If NSW Health continues to attest to Cyber Security Policy compliance in the aggregate, the risk is that neither NSW Health nor Cyber Security NSW fully understand where and what the cyber security risks are across NSW Health organisations. 

Recommendations 

The Ministry of Health should: 

• by October 2025, collate and validate information on compliance with NSW Cyber Security Policy by each entity that reports to or via the Ministry of Health prior to annual attestation 
• by December 2025, finalise and communicate cyber security roles and responsibilities within the NSW Health system. 

By December 2025, eHealth NSW should: 

• work with the Ministry of Health to develop clear guidance for Local Health Districts on the obligation to manage the need to deliver clinical services while meeting critical cyber security requirements 
• determine and apply sufficient resources to support the Privacy and Security Assessment Framework and Cyber Security Risk Assessments in Local Health Districts 
• support Local Health Districts to improve cyber security capability by: 
  - articulating a whole-of-health cyber security risk appetite statement 
  - providing direct assistance to localise centrally developed tools and frameworks 
  - ensuring all Local Health District crown jewel assets are monitored by the Health Security Operations Centre. 

By December 2025, Local Health Districts should: 

design and implement a fit-for-purpose cyber security risk management framework incorporating: 
• an enterprise cyber security risk appetite statement, which aligns with the whole-of-health statement 
• complete up-to-date cyber security and cyber security response plans, which are regularly tested and updated 
• investment in establishing and maintaining the Essential Eight cyber controls 
• cyber security controls that identify and address the root causes of non-compliance and balance the need for clinical urgency with effective cyber security 
• consideration of cyber security needs in the implementation of any new clinical systems.

19 December 2025

Productivity

The Productivity Commission report Harnessing data and digital technology released today states 

Data and digital technologies are the modern engines of economic growth. Australia needs to harness the consumer and productivity benefits of data and digital technology while managing and mitigating any downside risks. There is a role for government in setting the rules of the game to foster innovation and ensure that Australians reap the benefits of the data and digital opportunity. 

Emerging technologies like artificial intelligence (AI) could transform the global economy and speed up productivity growth. The Productivity Commission considers that multifactor productivity gains above 2.3% and labour productivity growth of about 4.3% are likely over the next decade, although there is considerable uncertainty. But poorly designed regulation could stifle the adoption and development of AI. Australian governments should take an outcomes based approach to AI regulation – using our existing laws and regulatory structures to minimise harms (which the Australian Government has committed to do in its National AI Plan) and introducing technology specific regulations only as a last resort. 

Developing and training AI models is a global opportunity worth many billions of dollars. Currently, gaps in licensing markets – particularly for open web material – make AI training in Australia more difficult than in overseas jurisdictions. However, licensing markets are developing, and if courts overseas interpret copyright exceptions narrowly, Australia could become relatively more attractive for AI development. As such, the PC considers it premature to make changes to Australia’s copyright laws. 

Data access and use fuels productivity growth: giving people and businesses better access to data that relates to them can stimulate competition and allow businesses to develop innovative products and services. A mature data sharing regime could add up to $10 billion to Australia’s GDP. The Australian Government should rightsize the Consumer Data Right (CDR) with the immediate goal of making it work better for businesses and consumers in the sectors where it already exists. In the longer term, making the accreditation model, technical standards and designation process less onerous will help make the CDR a more effective data access and sharing platform that supports a broader range of use cases. 

The benefits of data access and use can only be realised if Australians trust that data is handled safely and securely to protect their privacy. Some requirements in the Privacy Act constrain innovation without providing meaningful protection to individuals. And complying with the controls and processes baked into the Act can make consent and notification a ‘tick box’ exercise where businesses comply with the letter of the law but not its spirit. The Australian Government should amend the Privacy Act to introduce an overarching outcomes based privacy duty for regulated entities to deal with personal information in a manner that is fair and reasonable in the circumstances. 

Financial reports provide essential information about a company’s financial performance, ensuring transparency and accountability while informing the decisions of investors, businesses and regulators. The Australian Government can further spark productivity by making digital financial reporting the default for publicly listed companies and other public interest entities while also removing the outdated requirement that reports be submitted in hard copy or PDF form. This would improve the efficiency of analysing reports, enhance integrity and risk detection, and could boost international capital market visibility for Australian companies.  

The Commission’s recommendations are - 

Artificial intelligence 

Recommendation 1.1 Productivity growth from AI should be enabled within existing legal foundations. 

Gap analyses of current rules need to be expanded and completed. 

Any regulatory responses to potential harms from using AI should be proportionate, risk based, outcomes based and technology neutral where possible. 

The Australian Government should complete, publish and act on ongoing reviews into the potential gaps in the legal framework posed by AI as soon as possible. 

Where relevant gap analyses have not begun, they should begin immediately. 

All reviews of the legal gaps posed by AI should consider: 
• the uses of AI 
• the additional risk of harm posed by AI (compared to the status quo) in a specific use case 
• whether existing regulatory frameworks cover these risks potentially with improved guidance and enforcement; and if not, how to modify existing regulatory frameworks to mitigate the additional risks. 

Recommendation 1.2 AI specific regulation should be a last resort 

AI specific regulations should only be considered as a last resort and only for use cases of AI where: 
• existing regulatory frameworks cannot be sufficiently adapted to handle AI related harms 
• technology neutral regulations are not feasible or cannot adequately mitigate the risk of harm. 
This includes whole of economy regulation such as the EU AI Act and the Australian Government’s previous proposal to mandate guardrails for AI in high risk settings.

Copyright and AI 

Recommendation 2.1 A review of Australian copyright settings and the impact of AI 

The Australian Government should monitor the development of AI and its interaction with copyright holders over the next three years. It should monitor the following areas: 
• licensing markets for open web materials 
• the effect of AI on creative incomes generated by copyright royalties 
• how overseas courts set limits to AI related copyright exceptions, especially fair use. 
If after three years the monitoring program shows that these issues have not resolved, the government could establish an Independent Review of Australian copyright settings and the impact of AI. The Review’s scope could include, but not be limited to, consideration of whether: 
• copyright settings continue to be a barrier to the use of open material in AI training, and if so whether changes to copyright law could reduce these barriers 
• copyright continues to be the appropriate vehicle to incentivise creation of new works and if not, what alternatives could be pursued.

Data access 

Recommendation 3.1 Rightsize the Consumer Data Right 

The Australian Government should commit to reforms that will enable the Consumer Data Right (CDR) to better support data access for high value uses while minimising compliance costs. 

In the short term, the government should continue to simplify the scheme by removing excessive restrictions and rules that are limiting its uptake and practical applications in the banking and energy sectors. To do this the government should: 
• within the next two years, enable consumers to share data with third parties and simplify the onboarding process for businesses 
• commit to more substantive changes to the scheme (in parallel with related legislative reforms), including aligning the CDR’s privacy safeguards with the Privacy Act and enabling access to selected government held datasets through the scheme. 

In addition to the above, the CDR framework should be significantly amended so that it has the flexibility to support a broader range of use cases beyond banking and energy, by making the accreditation model, technical standards and designation process less onerous. 

Privacy regulation 

Recommendation 4.1 An outcomes based privacy duty embedded in the Privacy Act 

The Australian Government should amend the Privacy Act 1988 (Cth) to embed an outcomes based approach that enables regulated entities to fulfil their privacy obligations by meeting criteria that are targeted at outcomes, rather than controls based rules. 

This should be achieved by introducing an overarching privacy duty for regulated entities to deal with personal information in a manner that is fair and reasonable in the circumstances. 

The Privacy Act should be further amended to outline several non exhaustive factors for consideration to guide decision makers in determining what is fair and reasonable – including proportionality, necessity, and transparency. The existing Australian Privacy Principles should ultimately be phased out. 

Implementation of the duty should be supported through non legislative means including documentation such as regulatory guidance, sector specific codes, templates, and guidelines. 

The Office of the Australian Information Commissioner should be appropriately resourced to support the transition to an outcomes based privacy duty.

Digital financial reporting 

Recommendation 5.1 Make digital financial reporting the default 

The Australian Government should make the necessary amendments to the Corporations Act 2001 (Cth) and the Corporations Regulations 2001 (Cth) to make digital annual and half yearly financial reporting mandatory for disclosing entities. The requirement for financial reports to be submitted in hard copy or PDF form should be removed for these entities. The implementation of mandatory digital financial reporting should be phased, with the Treasury determining the appropriate timelines for this approach. 

Setting requirements for report preparation 

The existing International Financial Reporting Standards (Australia) (IFRS AU) taxonomy should be used for digital financial reporting. The Australian Securities and Investments Commission (ASIC) should continue to update the taxonomy annually. ASIC should be empowered to specify, from time to time, the format in which the reports must be prepared. At present, ASIC should specify inline eXtensible Business Reporting Language (iXBRL) as the required format. 

Establishing infrastructure and procedures for report submission 

ASIC, together with market operators such as the Australian Securities Exchange, should determine where and how digital financial reports are submitted. The arrangements should aim to minimise preparers’ reporting burden while keeping reports accessible to report users. 

Supporting the provision of high quality, accessible digital financial data 

ASIC should implement the measures necessary to ensure that digital financial reports contain high quality data. ASIC could (among other actions): 
• establish a data quality committee that would develop guidance and rules to improve data quality 
• integrate automated validation checks into the submission process 
• set guidelines around the use of taxonomy extensions and report format 
• maintain feedback loops with stakeholders. 
To enable report users to harness the benefits of digital financial data, digital financial reports should be publicly and freely available, and easily downloadable.

01 November 2025

Academia

In Stella v Griffith University [2025] QCA 203 Bradley JA comments 

 [5] The respondent (the University) is a university established by statute to advance, develop and disseminate knowledge and promote scholarship, with a focus on study, research, and recognition in the form of conferring degrees. 

[6] A well-tempered university serves the public interest, including in ways identified in Schedule 4, Part 2 of the Right to Information Act 2009 (Qld) (RTI Act). It can promote open discussion of public affairs, contribute to positive and informed debate on important issues or matters of serious interest, ensure effective oversight of expenditure of public funds, allow or assist inquiry into possible deficiencies in the conduct or administration of an agency or official, advance the fair treatment of individuals and other entities in accordance with the law in their dealings with agencies, contribute to the protection of the environment, reveal environmental or health risks or measures relating to public health and safety, contribute to the maintenance of peace and order, contribute to the administration of justice generally or for a person, and contribute to innovation and the facilitation of research. 

[7] Students share a status as members of the university with the chancellor and others in administration, and the academic staff in teaching and research roles. The relationship between university members has been described as domestic. Its quality may vary from time to time. It may be at its best when aligned and directed to the university’s noble objects; and at its least when splintered or transactional. The relationship is not that between a customer and a service provider. Mr Stella’s analysis of a student as a “client” of a university is wholly inadequate. 

[8] Whatever the state of the relationship, it is important that university decision-makers can engage openly with colleagues about how they should deal with a complaint about a student’s conduct. Such a decision concerns the interactions between and amongst students, staff, and administrators. It affects the relationship between the members of the university. In Mr Stella’s case, the complaint involved questions about intellectual freedom – in the form of critical and open debate and inquiry in a public forum, and the desire “to afford others respect and courtesy in the manner of its exercise.” Such decisions call for prudent judgment. They were part of the internal management and domestic affairs of the University.

10 October 2025

Robot Rights?

Robot Rights? Let’s Talk about Human Welfare Instead (2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES’20), February 7–8, 2020) by Abeba Birhane and Jelle van Dijk comments 

The ‘robot rights’ debate, and its related question of ‘robot responsibility’, invokes some of the most polarized positions in AI ethics. While some advocate for granting robots rights on a par with human beings, others, in stark opposition, argue that robots are not deserving of rights but are objects that should be our slaves. Grounded in post-Cartesian philosophical foundations, we argue not just to deny robots ‘rights’, but to deny that robots, as artifacts emerging out of and mediating human being, are the kinds of things that could be granted rights in the first place. Once we see robots as mediators of human being, we can understand how the ‘robot rights’ debate is focused on first world problems, at the expense of urgent ethical concerns, such as machine bias, machine elicited human labour exploitation, and erosion of privacy, all impacting society’s least privileged individuals. We conclude that, if human being is our starting point and human welfare is the primary concern, the negative impacts emerging from machinic systems, as well as the lack of taking responsibility by people designing, selling and deploying such machines, remains the most pressing ethical discussion in AI.

The authors argue 

Some may argue that the idea of robot rights is a peculiar, irrelevant discussion existing only at the fringes of AI ethics research more broadly construed, and as such devoting our time to it would not be paying justice to the important work done in that field. But the idea of robot rights is, in principle, perfectly legitimate if one stays true to the materialistic commitments of artificial intelligence: in principle it should be possible to build an artificially intelligent machine, and if we would succeed in doing so, there would be no reason not to grant this machine the rights we attribute to ourselves. Our critique therefore is not that the reasoning is invalid as such, but rather that we should question its underlying assumptions. Robot rights signal something more serious about AI technology, namely, that, grounded in their materialist techno-optimism, scientists and technologists are so preoccupied with the possible future of an imaginary machine, that they forget the very real, negative impact their intermediary creatures - the actual AI systems we have today - have on actual human beings. In other words: the discussion of robot rights is not to be separated from AI ethics, and AI ethics should concern itself with scrutinizing and reflecting deeply on underlying assumptions of scientists and engineers, rather than seeing its project as ’just’ a practical matter of discussing the ethical constraints and rules that should govern AI technologies in society. Our starting point is not to deny robots ‘rights’, but to deny that robots are the kinds of beings that could be granted or denied rights. We suggest it makes no sense to conceive of robots as slaves, since ‘slave’ falls in the category of being that robots aren’t. Human beings are such beings. We believe animals are such beings (though a discussion of animals lies beyond the scope of this paper). We take a post-Cartesian, phenomenological view in which being human means having a lived embodied experience, which itself is embedded in social practices. Technological artifacts form a crucial part of this being, yet artifacts themselves are not that same kind of being. The relation between human and technology is tightly intertwined, but not symmetrical. 

Based on this perspective we turn to the agenda for AI ethics. For some ethicists, to argue for robot rights stems from their aversion against a human arrogance in face of the wider world. We too wish to fight human arrogance. But we see arrogance first and foremost in the techno-optimistic fantasies of the technology industry, making big promises to recreate ourselves out of silicon, surpassing ourselves with ‘super-AI’ and ‘digitally uploading’ our minds so as to achieve immortality, while at the same time exploiting human labour. Most debate on robot rights, we feel, is ultimately grounded in the same techno-arrogance. What we take from Bryson is her plea to focus on the real issue: human oppression. We forefront the continual breaching of human welfare and especially of those disproportionally impacted by the development and ubiquitous integration of AI into society. Our ethical stance on human being is that being human means to interact with our surroundings in a respectful and just way. Technology should be designed to foster that. That, in turn, should be ethicists’ primary concern. 

In what follows we first lay out our post-Cartesian perspective on human being and the role of technology within that perspective. Next, we explain why, even if robots should not be granted rights, we also reject the idea of the robot as a slave. In the final section, we call attention to human welfare instead. We discuss how AI, rather than being the potentially oppressed, is used as a tool by humans (with power) to oppress other humans, and how a discussion about robot rights diverts attention from the pressing ethical issues that matter. We end by reflecting on responsibilities, not of robots, but those of their human producers.