03 February 2026

AI Safety

The 2nd International AI Safety Report states 

 — General-purpose AI capabilities have continued to improve, especially in mathematics, coding, and autonomous operation. Leading AI systems achieved gold-medal performance on International Mathematical Olympiad questions. In coding, AI agents can now reliably complete some tasks that would take a human programmer about half an hour, up from under 10 minutes a year ago. Performance nevertheless remains ‘jagged’, with leading systems still failing at some seemingly simple tasks. 

— Improvements in general-purpose AI capabilities increasingly come from techniques applied after a model’s initial training. These ‘post-training’ techniques include refining models for specific tasks and allowing them to use more computing power when generating outputs. At the same time, using more computing power for initial training continues to also improve model capabilities. 

— AI adoption has been rapid, though highly uneven across regions. AI has been adopted faster than previous technologies like the personal computer, with at least 700 million people now using leading AI systems weekly. In some countries over 50% of the population uses AI, though across much of Africa, Asia, and Latin America adoption rates likely remain below 10%. 

— Advances in AI’s scientific capabilities have heightened concerns about misuse in biological weapons development. Multiple AI companies chose to release new models in 2025 with additional safeguards after pre-deployment testing could not rule out the possibility that they could meaningfully help novices develop such weapons. 

— More evidence has emerged of AI systems being used in real-world cyberattacks. Security analyses by AI companies indicate that malicious actors and state-associated groups are using AI tools to assist in cyber operations. 

— Reliable pre-deployment safety testing has become harder to conduct. It has become more common for models to distinguish between test settings and real-world deployment, and to exploit loopholes in evaluations. This means that dangerous capabilities could go undetected before deployment. 

— Industry commitments to safety governance have expanded. In 2025, 12 companies published or updated Frontier AI Safety Frameworks – documents that describe how they plan to manage risks as they build more capable models. Most risk management initiatives remain voluntary, but a few jurisdictions are beginning to formalise some practices as legal requirements. 

This Report assesses what general-purpose AI systems can do, what risks they pose, and how those risks can be managed. It was written with guidance from over 100 independent experts, including nominees from more than 30 countries and international organisations, such as the EU, OECD, and UN. Led by the Chair, the independent experts writing it jointly had full discretion over its content. 

The authors note 

 This Report focuses on the most capable general-purpose AI systems and the emerging risks associated with them. ‘General-purpose AI’ refers to AI models and systems that can perform a wide variety of tasks. ‘Emerging risks’ are risks that arise at the frontier of general-purpose AI capabilities. Some of these risks are already materialising, with documented harms; others remain more uncertain but could be severe if they materialise. 

The aim of this work is to help policymakers navigate the ‘evidence dilemma’ posed by general-purpose AI. AI systems are rapidly becoming more capable, but evidence on their risks is slow to emerge and difficult to assess. For policymakers, acting too early can lead to entrenching ineffective interventions, while waiting for conclusive data can leave society vulnerable to potentially serious negative impacts. To alleviate this challenge, this Report synthesises what is known about AI risks as concretely as possible while highlighting remaining gaps. 

While this Report focuses on risks, general-purpose AI can also deliver significant benefits. These systems are already being usefully applied in healthcare, scientific research, education, and other sectors, albeit at highly uneven rates globally. But to realise their full potential, risks must be effectively managed. Misuse, malfunctions, and systemic disruption can erode trust and impede adoption. The governments attending the AI Safety Summit initiated this Report because a clear understanding of these risks will allow institutions to act in proportion to their severity and likelihood. 

Capabilities are improving rapidly but unevenly 

Since the publication of the 2025 Report, general-purpose AI capabilities have continued to improve, driven by new techniques that enhance performance after initial training. AI developers continue to train larger models with improved performance. Over the past year, they have further improved capabilities through ‘inference-time scaling’: allowing models to use more computing power in order to generate intermediate steps before giving a final answer. This technique has led to particularly large performance gains on more complex reasoning tasks in mathematics, software engineering, and science. 
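The idea behind inference-time scaling can be made concrete with a minimal sketch (illustrative only, not from the Report): one common recipe is to sample several independent reasoning chains and return the majority answer, so that extra computation at answer time buys reliability. The generate_answer callable below is a hypothetical stand-in for any model API.

from collections import Counter
from typing import Callable

def best_of_n(prompt: str,
              generate_answer: Callable[[str], str],
              n: int = 16) -> str:
    # Inference-time scaling, self-consistency style: spend more compute at
    # answer time by sampling n independent reasoning chains, then return
    # the most common final answer. Larger n means more computing power per
    # query and, typically, higher accuracy on reasoning tasks.
    answers = [generate_answer(prompt) for _ in range(n)]
    answer, _count = Counter(answers).most_common(1)[0]
    return answer

# Toy demonstration with a stand-in "model" that is right 60% of the time:
if __name__ == "__main__":
    import random
    toy_model = lambda _prompt: "42" if random.random() < 0.6 else "41"
    print(best_of_n("What is 6 x 7?", toy_model, n=25))  # usually "42"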

At the same time, capabilities remain ‘jagged’: leading systems may excel at some difficult tasks while failing at other, simpler ones. General-purpose AI systems excel in many complex domains, including generating code, creating photorealistic images, and answering expert-level questions in mathematics and science. Yet they struggle with some tasks that seem more straightforward, such as counting objects in an image, reasoning about physical space, and recovering from basic errors in longer workflows. 

The trajectory of AI progress through 2030 is uncertain, but current trends are consistent with continued improvement. AI developers are betting that computing power will remain important, having announced hundreds of billions of dollars in data centre investments. Whether capabilities will continue to improve as quickly as they recently have is hard to predict. Between now and 2030, it is plausible that progress could slow or plateau (e.g. due to bottlenecks in data or energy), continue at current rates, or accelerate dramatically (e.g. if AI systems begin to speed up AI research itself). 

Real-world evidence for several risks is growing 

General-purpose AI risks fall into three categories: malicious use, malfunctions, and systemic risks. 

Malicious use 

AI-generated content and criminal activity: AI systems are being misused to generate content for scams, fraud, blackmail, and non-consensual intimate imagery. Although the occurrence of such harms is well-documented, systematic data on their prevalence and severity remains limited. 

Influence and manipulation: In experimental settings, AI-generated content can be as effective as human-written content at changing people’s beliefs. Real-world use of AI for manipulation is documented but not yet widespread, though it may increase as capabilities improve. 

Cyberattacks: AI systems can discover software vulnerabilities and write malicious code. In one competition, an AI agent identified 77% of the vulnerabilities present in real software. Criminal groups and state-associated attackers are actively using general-purpose AI in their operations. Whether attackers or defenders will benefit more from AI assistance remains uncertain. 

Biological and chemical risks: General-purpose AI systems can provide information about biological and chemical weapons development, including details about pathogens and expert-level laboratory instructions. In 2025, multiple developers released new models with additional safeguards after they could not exclude the possibility that these models could assist novices in developing such weapons. It remains difficult to assess the degree to which material barriers continue to constrain actors seeking to obtain them. 

Malfunctions 

Reliability challenges: Current AI systems sometimes exhibit failures such as fabricating information, producing flawed code, and giving misleading advice. AI agents pose heightened risks because they act autonomously, making it harder for humans to intervene before failures cause harm. Current techniques can reduce failure rates but not to the level required in many high-stakes settings. 

Loss of control: ‘Loss of control’ scenarios are scenarios where AI systems operate outside of anyone’s control, with no clear path to regaining control. Current systems lack the capabilities to pose such risks, but they are improving in relevant areas such as autonomous operation. Since the last Report, it has become more common for models to distinguish between test settings and real-world deployment and to find loopholes in evaluations, which could allow dangerous capabilities to go undetected before deployment. 

Systemic risks 

Labour market impacts: General-purpose AI will likely automate a wide range of cognitive tasks, especially in knowledge work. Economists disagree on the magnitude of future impacts: some expect job losses to be offset by new job creation, while others argue that widespread automation could significantly reduce employment and wages. Early evidence shows no effect on overall employment, but there are signs of declining demand for early-career workers in some AI-exposed occupations, such as writing. 

Risks to human autonomy: AI use may affect people’s ability to make informed choices and act on them. Early evidence suggests that reliance on AI tools can weaken critical thinking skills and encourage ‘automation bias’, the tendency to trust AI system outputs without sufficient scrutiny. ‘AI companion’ apps now have tens of millions of users, a small share of whom show patterns of increased loneliness and reduced social engagement. 

Layering multiple approaches offers more robust risk management 

Managing general-purpose AI risks is difficult due to technical and institutional challenges. Technically, new capabilities sometimes emerge unpredictably, the inner workings of models remain poorly understood, and there is an ‘evaluation gap’: performance on pre-deployment tests does not reliably predict real-world utility or risk. Institutionally, developers have incentives to keep important information proprietary, and the pace of development can create pressure to prioritise speed over risk management and makes it harder for institutions to build governance capacity. 

Risk management practices include threat modelling to identify vulnerabilities, capability evaluations to assess potentially dangerous behaviours, and incident reporting to gather more evidence. In 2025, 12 companies published or updated their Frontier AI Safety Frameworks – documents that describe how they plan to manage risks as they build more capable models. While AI risk management initiatives remain largely voluntary, a small number of regulatory regimes are beginning to formalise some risk management practices as legal requirements. 

Technical safeguards are improving but still show significant limitations. For example, attacks designed to elicit harmful outputs have become more difficult, but users can still sometimes obtain harmful outputs by rephrasing requests or breaking them into smaller steps. AI systems can be made more robust by layering multiple safeguards, an approach known as ‘defence-in-depth’. 
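As a concrete illustration of the defence-in-depth approach (a minimal sketch under assumed interfaces, not any developer's actual pipeline), independent safeguards can be chained so that a request and its response must each clear every layer, and any single layer can block the output:

from typing import Callable, List

Safeguard = Callable[[str], bool]  # returns True if the text is acceptable

def defence_in_depth(request: str,
                     model: Callable[[str], str],
                     input_filters: List[Safeguard],
                     output_filters: List[Safeguard]) -> str:
    # Layered safeguards: the request must clear every input filter and the
    # generation must clear every output filter. A single layer can be
    # bypassed (e.g. by a rephrased request) without the whole system
    # failing, because the remaining layers still apply.
    if not all(check(request) for check in input_filters):
        return "Request refused."
    response = model(request)
    if not all(check(response) for check in output_filters):
        return "Response withheld."
    return response

# Toy usage with hypothetical filters and a stand-in model:
refuse_weapons = lambda text: "weapon" not in text.lower()
redact_check = lambda text: "password" not in text.lower()
echo_model = lambda req: "Answer to: " + req
print(defence_in_depth("How do clouds form?", echo_model,
                       [refuse_weapons], [redact_check]))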

Open-weight models pose distinct challenges. They offer significant research and commercial benefits, particularly for lesser-resourced actors. However, they cannot be recalled once released, their safeguards are easier to remove, and actors can use them outside of monitored environments – making misuse harder to prevent and trace. 

Societal resilience plays an important role in managing AI-related harms. Because risk management measures have limitations, they will likely fail to prevent some AI-related incidents. Societal resilience-building measures to absorb and recover from these shocks include strengthening critical infrastructure, developing tools to detect AI-generated content, and building institutional capacity to respond to novel threats.

01 February 2026

Personhoods

'Legal personhood for cultural heritage? Some preliminary reflections' by Alberto Frigerio in (2026) International Journal of Cultural Property 1-8 comments

 Cultural heritage occupies a paradoxical position in law: It is protected as property but experienced as a repository of identity, memory, and dignity. This article examines whether cultural heritage could, in principle, be recognized as a subject of law, drawing on emerging developments in environmental and nonhuman personhood. After tracing the historical and conceptual evolution of legal personhood—from human and corporate subjects to nature and ecosystems—it explores the moral, relational, and symbolic dimensions that might justify extending personhood to heritage. The analysis highlights both the potential benefits of such recognition, including stronger ethical and representational protections, and the associated risks, such as legal inflation, state appropriation, and conflicts with ownership and restitution law. Ultimately, it argues that rethinking heritage through the lens of relational personhood reveals the need for a more pluralistic and ethically responsive legal imagination. 

'Legal Personhood for Artwork' by Sergio Alberto Gramitto Ricci in (2025) 76(5/6) University of California San Francisco Law Journal 1429 states 

Artwork is unique and irreplaceable. It is signifier and signified. The signified of a work of art is its coherent purpose. But the signified of a work of art can be altered when not protected. The ramifications of unduly altering the signified of a work of art are consequential for both living and future generations. While the law provides protection to artists and art owners, it fails to grant rights to works of art themselves. The current legal paradigm, designed around the interest of owners and artists, also falls short of protecting Indigenous art aimed at conserving traditions and cultural identity, rather than monetizing creativity. This Article provides a theoretical framework for recognizing legal personhood for works of art, in the interests of art in and of itself as well as of current and future generations of human beings. This new paradigm protects artwork through the features of legal personhood.

23 January 2026

Pseudolaw

Yet another judgment re pseudolaw. In Commonwealth Bank of Australia v Cahill & Anor [2025] VCC 1860 the Court notes 

The amended defences deny the existence of any lawful credit agreement between the parties, assert that CBA is a “corporate fiction,” and contend that no valid mortgage was created or that CBA lacks standing to enforce it. The defendants also dispute the quantum of the debt and demand production of “wet-ink” originals of various loan and title documents. Judge’s amended counterclaim makes bald and sweeping allegations that CBA engaged in misleading or deceptive conduct, relied on an unfair standard form contract contrary to the Australian Consumer Law, and “securitised” the mortgage in breach of the Corporations Act 2001 (Cth), thereby losing the right to enforce it. It further alleges that enforcement of the mortgage constitutes modern slavery and seeks, among other relief, the return of all payments made, the discharge of the mortgage, and damages.

In referring to 'Sovereign Citizens and pseudo law' the judgment states

 The documents and submissions made by the defendants fall into a by now well-known quasi-philosophy known as the “sovereign citizen” movement. The guiding philosophy appears to be that these persons consider that they are not subject to the laws of the Commonwealth of Australia unless they have expressly “contracted” or consented to be so bound. This philosophy has no basis in law and has been rejected in many cases to date. All persons living under the protection of the Crown in right of the Commonwealth or State are, as a matter of law, subject to the laws of the Commonwealth. Any suggestion to the contrary is both dangerous and undermines the orderly arrangement of any society. The courts of this country will give no credence to such philosophy. 

The documents and submissions filed by the defendants are informed by half-baked statements that contain traces of legal tit-bits scraped from current and ancient sources, otherwise also referred to as “pseudo-law”. They are legal gibberish and do not constitute proper statements of principles known to law. 

In Re Coles Supermarkets Australia Pty Ltd [2022] VSC 438, Hetyey AsJ said the following of such submissions:

The defendants appear to be seeking to draw a distinction between themselves as ‘natural’ or ‘living’ persons, on the one hand, and their status as ‘legal’ personalities, on the other. However, contemporary Australian law does not distinguish between a human being and their legal personality. Any such distinction would potentially leave a human being without legal rights, which would be unacceptable in modern society. The contentions put forward by the defendants in this regard are artificial and have no legal consequence. 

I adopt the analysis of John Dixon J in Stefan v McLachlan [2023] VSC 501, dealing with the fictional concept of the ‘living man’, stating that:

The law recognises a living person as having status in law and any person is, in this sense, a legal person. Conceptually, there may be differences between the legal status of a person and that of an entity that is granted a like legal status, but whatever they might be they have no application on this appeal. In asserting that he is a ‘living man’, the appellant does no more than identify that he is a person, an individual. Every person, every individual, and every entity accorded status as a legal person is subject to the rule of law. There are no exceptions in Australian society. 

I also refer to AsJ Gobbo’s decision in Nelson v Greenman & Anor [2024] VSC 704 in which her Honour gives a comprehensive treatment of the fallacies underlying the sovereign citizen and pseudo law movements. I concur with and adopt her Honour’s treatment of the subject at paragraphs [53]–[78].

03 January 2026

Security

The NSW Auditor General December 2025 report on Cyber security in Local Health Districts states 

 NSW Health is not effectively managing cyber security risks to clinical systems that support healthcare delivery in Local Health Districts. In addition, Local Health Districts have not met the minimum NSW Government cyber security requirements. Systemic non-compliance with these requirements, including maintaining adequate cyber security response plans, business continuity planning and disaster recovery for cyber security incidents, means that Local Health Districts could not demonstrate that they are prepared for, or resilient to, cyber threats. This exposes the risk that a preventable cyber security incident could disrupt access to healthcare services and compromise the security of sensitive patient information. eHealth NSW has not clearly defined or communicated its roles and the expected roles of Local Health Districts regarding cyber security. This has led to confusion amongst Local Health Districts on the cyber security risks they manage, including for crown jewel assets (the ICT assets regarded as valuable or operationally vital for service delivery), and identifying and mitigating critical vulnerabilities, threats and risks. Local Health District management of cyber security is hampered by a lack of support, coordination and oversight from eHealth NSW in cyber security matters. 

The report states 

 The New South Wales (NSW) public health system includes more than 220 public hospitals, community and other public health services. 15 Local Health Districts across NSW administer the hospitals and other health services. eHealth NSW was established in 2014 to provide statewide leadership on the planning, implementation and support of information communication technologies (ICT) and digital capabilities across NSW Health. Health service delivery is increasingly reliant on digital systems, which in turn requires the effective management of cyber security risks. Cyber attacks can harm health service delivery and may include the theft of information, breaches of private health information, denial of access to critical technology or even the hijacking of systems for profit or malicious intent. These outcomes can adversely affect the community and damage trust in government. 

Audit objective 

This audit assessed whether NSW Health is effectively safeguarding clinical systems required to support healthcare delivery in Local Health Districts from cyber threats. The audit assessed this with the following questions: Do relevant NSW Health organisations effectively manage cyber security risks to clinical systems? Do relevant NSW Health organisations effectively respond to cyber attacks that affect the clinical systems that are essential for service delivery? To focus the audit, 4 of the 15 Local Health Districts were selected for audit. These districts are referred to as ‘the audited Local Health Districts’ throughout this report. The audit further focused on one facility in each of the audited Local Health Districts that provided a common type of healthcare service. The names of the audited Local Health Districts, selected facilities and healthcare services are not disclosed. 

Conclusion 

NSW Health is not effectively managing cyber security risks to clinical systems that support healthcare delivery in Local Health Districts. In addition, Local Health Districts have not met the minimum NSW Government cyber security requirements that have been outlined in NSW Cyber Security Policy since 2019. Local Health Districts are not adequately prepared to respond effectively to cyber security incidents. Systemic non-compliance with NSW Government cyber security requirements, including maintaining adequate cyber security response plans, business continuity planning and disaster recovery for cyber security incidents, means that Local Health Districts could not demonstrate that they are prepared for, or resilient to, cyber threats. This exposes the risk that a preventable cyber security incident could disrupt access to healthcare services and compromise the security of sensitive patient information. 

eHealth NSW has not clearly defined or communicated its roles and the expected roles of Local Health Districts regarding cyber security. This has led to confusion amongst Local Health Districts on the cyber security risks they manage, including for crown jewel assets (the ICT assets regarded as valuable or operationally vital for service delivery), and identifying and mitigating critical vulnerabilities, threats and risks. Local Health District management of cyber security is hampered by a lack of support, coordination and oversight from eHealth NSW in cyber security matters.

Key findings are 

  Local Health Districts do not manage cyber security risks effectively 

Local Health Districts generate, use and maintain large volumes of sensitive personal and health information about patients. The NSW Cyber Security Policy sets out an expectation that cyber security efforts are commensurate with the potential effect of a successful cyber breach. Under NSW Health policy, Local Health Districts, in collaboration with eHealth NSW, are responsible for managing cyber security and resourcing a fit-for-purpose cyber security function. The current NSW Cyber Security Policy 2023–2024 recognises that agencies providing critical or high-risk services, such as Local Health Districts, should implement a wider range of controls and aim for broader coverage and effective implementation of additional controls. However, the audited Local Health Districts have not complied with the minimum requirements of the NSW Cyber Security Policy since it was introduced in 2019. None of the four districts had effective cyber security plans. Local Health Districts that do not have effective cyber security plans cannot articulate their approach to managing cyber security risks and are not adequately prepared to respond to and manage cyber security risks and incidents. 

Local Health Districts do not have plans and processes in place to respond effectively to a cyber attack 

None of the audited Local Health Districts had effective cyber security response plans. Nor did Local Health District business continuity plans and disaster recovery plans consider cyber security risks. Local Health Districts that do not have effective cyber security response, disaster recovery or business continuity plans that include considerations of cyber security, may not be able to safeguard clinical systems against potential cyber security incidents. This may also hamper responses during an incident because roles and responsibilities may not be understood, and actions to address cyber security incidents may not be undertaken as quickly as required, affecting the delivery of services to patients. 

NSW Health has not clearly communicated cyber security roles and responsibilities amongst NSW Health organisations 

eHealth NSW coordinates cyber security matters within NSW Health. However, eHealth NSW has not clearly defined and communicated its roles and the expected roles of Local Health Districts for cyber security. This has led to confusion amongst Local Health Districts on the cyber security risks they manage, including for crown jewel assets (the ICT assets regarded as valuable or operationally vital for service delivery) and identifying and mitigating critical vulnerabilities, threats and risks. 

eHealth NSW does not provide Local Health Districts with sufficient support to manage cyber security risks, and Local Health Districts have not applied the tools provided by eHealth NSW to all clinically important systems 

eHealth NSW has developed and distributed cyber security frameworks, guidance and training to all Local Health Districts. eHealth NSW has developed whole-of-system tools to meet key requirements of the NSW Cyber Security Policy and improve the effectiveness of Local Health Districts’ cyber security activities. These tools include risk assessment frameworks. However, eHealth NSW has not ensured that its tools have been implemented in Local Health Districts, nor confirmed whether Local Health Districts have the capability or capacity to do so. In the audited Local Health Districts, the effectiveness of eHealth’s cyber threat identification tools is hampered by incomplete application to all clinically important ICT assets. This means that critical systems used by Local Health Districts to deliver, or support the delivery of, clinical treatment are not effectively protected from cyber security incidents.

Local Health Districts do not have an effective cyber security culture 

In all audited Local Health Districts, critical cyber security controls are not consistently applied by clinical staff, who perceive a tension between the urgency of clinical service delivery and the importance of cyber security policies. This has led to normalisation of non-compliance with cyber security controls. This audit observed, at all audited Local Health Districts, clinical staff non-compliance with multiple cyber security controls that the districts had put in place. Despite known systemic non-compliance by clinical staff, the audited Local Health Districts have not assessed the effectiveness of the controls they have put in place, nor have they identified any alternatives that might balance the need for clinical urgency with effective cyber security practice. In addition, they have not considered investing in alternative ICT solutions that better meet the needs of clinical staff while also addressing cyber security concerns. 

NSW Health’s Cyber Security Policy attestation lacks transparency on the level of cyber security capability within the health system 

The NSW Cyber Security Policy requires an agency head to attest to the agency’s compliance with the policy. In 2023, eHealth NSW surveyed all NSW Health organisations, including Local Health Districts, on their self-assessed maturity against the NSW Cyber Security Policy in developing a summary assessment for NSW Health to inform its attestation of NSW Cyber Security Policy compliance. That summary showed that Local Health Districts had immature cyber security controls, including for the Essential Eight controls – the most effective set of controls identified by the Australian Cyber Security Centre. However, in 2024, the survey was not completed, so NSW Health aggregated its assessment of whether NSW Health organisations had met NSW Cyber Security Policy requirements. This audit identified systemic Local Health District non-compliance with the NSW Cyber Security Policy. The 2024 attestation therefore obscures the risks that exist in Local Health Districts. If NSW Health continues to attest to Cyber Security Policy compliance in the aggregate, the risk is that neither NSW Health nor Cyber Security NSW fully understands where and what the cyber security risks are across NSW Health organisations. 

Recommendations 

The Ministry of Health should: 

— by October 2025, collate and validate information on compliance with NSW Cyber Security Policy by each entity that reports to or via the Ministry of Health prior to annual attestation 

— by December 2025, finalise and communicate cyber security roles and responsibilities within the NSW Health system. 

By December 2025, eHealth NSW should: 

— work with the Ministry of Health to develop clear guidance for Local Health Districts on the obligation to manage the need to deliver clinical services while meeting critical cyber security requirements 

— determine and apply sufficient resources to support the Privacy and Security Assessment Framework and Cyber Security Risk Assessments in Local Health Districts 

— support Local Health Districts to improve cyber security capability by: articulating a whole-of-health cyber security risk appetite statement; providing direct assistance to localise centrally developed tools and frameworks; and ensuring all Local Health District crown jewel assets are monitored by the Health Security Operations Centre. 

By December 2025, Local Health Districts should: 

design and implement a fit-for-purpose cyber security risk management framework incorporating: 

— an enterprise cyber security risk appetite statement, which aligns with the whole-of-health statement 

— complete, up-to-date cyber security and cyber security response plans, which are regularly tested and updated 

— investment in establishing and maintaining the Essential Eight cyber controls 

— cyber security controls that identify and address the root causes of non-compliance and balance the need for clinical urgency with effective cyber security 

— consideration of cyber security needs in the implementation of any new clinical systems.