22 April 2026

Trans

The Australian Human Rights Commission report ‘Equal Identities: A human rights review of the experiences of trans and gender diverse people in Australia’ features the following recommendations:

Recommendation 1 Federal, state and territory governments should introduce consistent legislation to protect LGBTIQA+ people and their associates from vilification, incitement of hatred and threats of physical harm. Governments should design these laws in consultation with LGBTIQA+ communities, including trans and gender diverse communities, and should include both civil prohibitions and criminal offences.

Recommendation 2 The Australian Government Department of Social Services should require and report on LGBTIQA+ and trans and gender diverse representation in their workforce and on key advisory groups, committees and rapid reviews in key areas such as housing, domestic, sexual and family violence prevention, and community services.

Recommendation 3 The Domestic, Family and Sexual Violence Commission (DFSVC) should establish an ongoing LGBTIQA+ working group, including trans and gender diverse representation, to: a. provide advice on initiatives to prevent and respond to gender-based violence, including implementation of the National Plan to End Violence Against Women and Children 2022–2032 b. develop initiatives to build workforce capacity and understanding of how intersecting forms of discrimination can affect trans and gender diverse people’s experiences of domestic, family and sexual violence c. strengthen relationships and cross-capacity building between the DFSVC, crisis response services and trans and gender diverse stakeholders.

Recommendation 4 The Australian Government Attorney-General’s Department, along with state and territory governments, should establish LGBTIQA+ justice working groups that include trans and gender diverse representation. The working groups should protect the human rights of trans and gender diverse people by: a. working with criminal justice systems (police, courts and prison systems) to design and monitor policies and practices b. working with the trans and gender diverse community to develop methods to identify and track hate crimes, including community reporting mechanisms c. advancing priority areas of justice and law reform, including decriminalisation of appropriate offences, justice reinvestment and measures to address and prevent discriminatory behaviours.

Recommendation 5 Federal, state and territory governments should provide sustainable, targeted funding to address capacity gaps in legal service provision for trans and gender diverse people, as identified in the 2025 report ‘A Blueprint for Equality: Resourcing LGBTIQA+ Community Legal Centres’. 

Recommendation 6 Federal, state and territory governments should ensure crisis accommodation and homelessness support services offer inclusive support and are adequately funded to do so. This includes increasing sector-wide awareness, understanding and capabilities about intersecting marginalisations which affect trans and gender diverse people from diverse backgrounds. 

Recommendation 7 All government, government-affiliated and government-funded bodies that collect demographic data should ensure data on gender, sexuality and innate variations of sex characteristics (sometimes known as intersex variations) is collected in line with the ABS Standard for Sex, Gender, Variations of Sex Characteristics and Sexual Orientation Variables (2020). This includes: a. collecting data on gender identity from everybody to ensure that health and support services have the data necessary to meet the needs of trans and gender diverse children and adolescents b. implementing new data collection protocols in partnership with LGBTIQA+ and trans and gender diverse specific organisations to establish community trust and ensure privacy and sensitivity concerns are understood. 

Recommendation 8 The Australian Government Department of Health, Disability and Ageing should require and report on LGBTIQA+ and trans and gender diverse representation in their workforce and on key advisory groups, committees and rapid reviews. The Department should also establish a specific ongoing LGBTIQA+ Health Advisory Group to: a. provide advice on matters relating to trans and gender diverse health, and LGBTIQA+ health more broadly b. provide advice on relevant government initiatives affecting LGBTIQA+ communities, such as the National Suicide Prevention Strategy 2025–2035 and the National Action Plan for the Health and Wellbeing of LGBTIQA+ People 2025–2035 c. advise on LGBTIQA+ health data collection and contribute to the continuous improvement of the Health Data Portal and key national data sets. 

Recommendation 9 Federal, state and territory governments should reduce barriers that prevent trans and gender diverse people from accessing all forms of healthcare, including gender-affirming healthcare. Reducing barriers includes: a. increasing staff and service resourcing to meet urgent needs on existing waitlists for publicly funded hospitals and clinics b. running proactive public awareness campaigns that address misinformation and disinformation which target trans and gender diverse people’s healthcare c. funding service access for trans and gender diverse people in remote, rural and regional communities.

Recommendation 10 Federal, state and territory governments should introduce or amend legislation to ban conversion or suppression practices. This legislation should follow these principles: a. design the legislative framework in consultation with survivors of conversion or suppression practices b. apply the ban on conversion and suppression practices to both religious and secular settings c. make it unlawful to take someone out of the jurisdiction for conversion or suppression practices d. allow reporting by third parties e. carefully define and provide examples of what is and is not a conversion or suppression practice f. include an education plan which covers: i. who is protected by the law ii. how to identify conversion or suppression practices iii. awareness of harm caused by conversion or suppression practices.

Recommendation 11 Healthcare providers and education and training institutions (i.e. universities, TAFEs) should ensure that all healthcare and healthcare-adjacent workers and students receive education and ongoing professional development on inclusive care for trans and gender diverse people. This includes awareness of how intersecting forms of discrimination can affect trans and gender diverse people’s health and access to healthcare services.

Recommendation 12 Federal, state and territory governments should: a. end pauses on puberty suppressants and other hormone therapies for children and young people b. ensure that, in line with other areas of adolescent medicine, Gillick competence and clinical standards of care are the framework guiding the provision of healthcare to trans and gender diverse children and young people.

Recommendation 13 The Australian Government should repeal Section 43A of the Sex Discrimination Act 1984 (Cth). 

Recommendation 14 The Australian Government should: a. amend section 37(1)(d) and repeal section 38 of the Sex Discrimination Act 1984 (Cth) and make consequential amendments to the Fair Work Act 2009 (Cth), as recommended by the Australian Law Reform Commission in its 2024 report ‘Maximising the Realisation of Human Rights: Religious Educational Institutions and Anti-Discrimination Laws’ b. request the Australian Law Reform Commission to further review and make recommendations about how to amend the exemption for religious bodies under section 37(1)(d) of the Sex Discrimination Act 1984 (Cth). 

Recommendation 15 State and territory governments should review and amend their anti-discrimination legislation to ensure that trans and gender diverse people have equal access to publicly funded services, including those provided by religious bodies. 

Recommendation 16 The Australian Government Department of Education should require LGBTIQA+ and trans and gender diverse representation on key advisory groups, committees and rapid reviews. The Department should also establish an LGBTIQA+ Youth Advisory Group to provide input into: a. education policy settings b. the role of teachers c. curriculum content d. targeted anti-bullying program support. 

Recommendation 17 Federal, state and territory education departments should review their current policies, practices and curricula to ensure that they support an inclusive model. This model should embed inclusion of trans and gender diverse students as part of teacher training and professional development for all staff across all levels of government funded education institutions. 

Recommendation 18 Educational institutions receiving government funding should have policies to prevent discrimination and harassment of trans and gender diverse students, staff and parents. 

Recommendation 19 The Australian Government should expand the positive duty in the Sex Discrimination Act 1984 (Cth) to cover protected attributes outlined in sections 5A, 5B and 5C of the Act. 

16 April 2026

Moral Rights

In McCallum v Projector Films Pty Ltd (Liability Hearing) [2026] FCA 173 the Court considered moral rights.

The introduction to the judgment states:

 The central dispute between the parties to these proceedings is who is the “principal director” of the documentary film entitled “Never Get Busted!” (the Documentary or NGB). Most (but not all) of the other disputes between the parties depend on the outcome of that central question. 

The Documentary examines the colourful life of Mr Barry Cooper, who was at one time a Texan-based narcotics officer during the height of the “war on drugs” in the 1990s. The Documentary has taken over five years to complete. It has already screened at the prestigious Sundance Film Festival in Utah in the United States and had its Australian premiere at the Melbourne International Film Festival. It has also screened at other film festivals. Potential offers from streaming services await. What should have been hailed as a success by all those involved in the making of the Documentary has become the subject of an acrimonious dispute. The key protagonists have betrayed the adage applicable to journalists but equally applicable to filmmakers that they not become part of the story. 

As I stated in the interlocutory decision in these proceedings, to lay members of the public, the identification of the dispute as to who is the “principal director” may beg the question as to what is the difference between a “principal director” and a “director” of a film: McCallum v Projector Films Pty Ltd [2025] FCA 903; 187 IPR 191 at [3]. The answer to this question lies in certain interlocking provisions of Pt IX of the Copyright Act 1968 (Cth) (Copyright Act) which deals with “moral rights” of attribution in respect of cinematographic works. For the purposes of attribution (and, conversely, proscribing false attribution) of moral rights in respect of a “cinematograph film” where two or more individuals are involved in directing that film, s 191 of the Copyright Act provides that “a reference in this Part to the director ... is a reference to the principal director of the film and does not include a reference to any subsidiary director, whether described as an associate director, line director, assistant director or in any other way” (emphasis added). It can thus be seen that, where there is more than one person who is said to be the director of a film, the attribution of a person as the “principal director” has considerable significance to the moral rights of the relevant person. 

On one side of the dispute is the applicant (Mr McCallum) who maintains he is the principal director of the Documentary. He was engaged under successive agreements to be the director of the Documentary, first, under a Crew Agreement entered into on 24 February 2020 and then under the Director’s Agreement which was varied by a Deed of Variation in or about March 2023. Clause 9.1 of the Director’s Agreement provides that, so long as Mr McCallum fulfills his obligations under that Agreement, he is entitled to be credited as the director of the Documentary with the credit, “Directed by Stephen McCallum”. This has not occurred in any version of the Documentary that has been screened to date. Mr McCallum says that the failure to attribute him as a director of the Documentary with the credit “Directed by Stephen McCallum” amounts to an infringement of his moral rights as protected under the Copyright Act, as well as being a breach of cl 9.1 of the Director’s Agreement. 

On the other side of the dispute are the respondents. The first respondent is Projector Films; it is the counterparty to the Director’s Agreement. The second respondent is Mr Ngo. Mr Ngo has a unique role in the making of the Documentary. That is because it is common ground that Mr Ngo, together with Ms Erin Williams-Weir (who is Mr Ngo’s wife), created the idea for the Documentary. It is also common ground that Mr Ngo is a producer of the Documentary and also its principal writer. The other producers of the Documentary are Ms Williams-Weir and Mr Daniel Joyce (Mr Joyce). Mr Joyce and Mr Ngo are business partners; they are the company directors and shareholders of Projector Films. 

Initially, the respondents contended that it was Mr Ngo alone, and not Mr McCallum, who was the principal director of the Documentary. The respondents advanced this position based on a claim that in January 2022, Mr McCallum said that he would not be performing any of the editing and post-production work involved in making the Documentary, and that all this work was thereafter performed by Mr Ngo. However, the respondents no longer maintain that Mr Ngo alone is the principal director of the Documentary. The respondents now say that both Mr Ngo and Mr McCallum are principal directors of the Documentary: Defence to the Further Amended Statement of Claim (FASOC) [4(f)]. 

Having taken the position that both men are principal directors, one might have expected the respondents to ensure that both would be attributed as such in the opening and closing credits of the different versions of the Documentary that have been screened. But that has not happened. Instead, the respondents say that even though both men are the principal directors of the Documentary, Mr Ngo is the main one and deserves an enhanced credit relative to Mr McCallum. That has given rise to a dispute as to whether the credit “Directed by” in favour of Mr Ngo and the credit “Director Stephen McCallum” would signify that the former is the principal director of the Documentary to the exclusion of the latter. 

One would have also expected that once the respondents admitted that Mr McCallum was a principal director of the Documentary, there would no longer be any dispute that Mr McCallum discharged his duties as a director. But that too has not happened. Projector Films has filed and maintained a cross-claim (in the form of the Amended Statement of Cross-Claim) in which it contends that Mr McCallum did not discharge his duties as a director in breach of the Director’s Agreement. 

The issues that fall for determination involve the determination of novel legal issues that have not previously been determined under Australian law such as what it means to be a “director” or “principal director” for the purpose of the Copyright Act, whether Mr McCallum has moral rights under that Act to be attributed as the sole principal director of the Documentary, whether such rights may be waived by a “general waiver” and, if not, whether such a waiver may be regarded as a lawful consent to an infringement of those rights. The resolution of these issues turns upon disputed facts regarding who was more involved in making the Documentary. That has included disputes as tedious as who came up with the idea that Mr Cooper should wear a Hawaiian-styled floral shirt during an interview. The parties have also advanced several subsidiary claims. In all, the following issues arise for determination:

(a) the Moral Rights Claims: (i) what is the meaning of the words “director” and “principal director” for the purpose of Part IX of the Copyright Act; (ii) whether Mr McCallum is the sole principal director of the version of the Documentary that was screened at the Sundance Film Festival (the Sundance Version) and in respect of any further or future versions (the Further Versions), including the feature length version which was screened at the Melbourne International Film Festival (the Feature Version), or whether both Mr Ngo and Mr McCallum are principal directors of those Versions; (iii) whether cl 6.2 of the Director’s Agreement amounts to a lawful “general waiver” of all of Mr McCallum’s moral rights under the Copyright Act or, alternatively, whether by that clause Mr McCallum lawfully consented to the infringement of his moral rights under s 195AW of the Copyright Act; (iv) if there has been no general waiver or consent to an infringement, whether Projector Films has infringed Mr McCallum’s moral rights by failing to attribute him as the principal director of the different versions of the Documentary (including by not giving him the credit “Directed by Stephen McCallum”) and falsely attributing Mr Ngo as the sole principal director; and (v) whether Mr Ngo has infringed Mr McCallum’s moral rights under the Copyright Act;

(b) the Misleading and Deceptive Conduct Claims: (i) whether Projector Films engaged in misleading and deceptive conduct contrary to s 18 of the Australian Consumer Law (ACL) (as contained in Schedule 2 of the Competition and Consumer Act 2010 (Cth)) by making certain representations: (A) on the website called “IMDb” (historically known as the Internet Movie Database); (B) in the screening of the Documentary at the Sundance Film Festival and the Melbourne International Film Festival; (C) in the promotional materials relating to those festivals; and (D) in communications with Screen Australia;

(c) the Breach of Contract Claims: (i) whether Projector Films breached cl 9.1 of the Director’s Agreement by failing to give Mr McCallum the credit “Directed by Stephen McCallum”; (ii) whether Projector Films breached cl 9.2 of the Director’s Agreement by failing to seek Mr McCallum’s agreement as to the inclusion of credits for Mr Ngo as a director of the Documentary; (iii) whether Projector Films has breached cl 3 of the Director’s Agreement, as varied by the Deed of Variation, by failing to pay two invoices issued by Mr McCallum; and (iv) whether Projector Films has breached cl 5 of the Director’s Agreement by failing to provide Mr McCallum with various cuts and edits of the Documentary for his approval;

(d) the Cross-Claims: (i) whether Mr McCallum breached cll 2.1(a) and (b) of the Director’s Agreement by failing to discharge his duties as a director; (ii) whether Mr McCallum breached cl 7.1(c)(iv) of the Director’s Agreement by bringing adverse publicity or notoriety to the Documentary and/or Projector Films; and (iii) whether Mr McCallum engaged in misleading or deceptive conduct by making representations or causing them to be made to third parties in relation to the IMDb website and the attribution of directorship of the Documentary. 

The parties asked by consent that I only determine issues of liability at this stage, and I agreed to take that course. The questions of relief, remedy and other orders are to be decided separately. I have structured my reasons to address Mr McCallum’s moral rights claims in Part B, his breach of contract claims in Part C, his claims under the ACL in Part D, and Projector Films’ Cross-Claim in Part E. 

SUMMARY OF FINDINGS 

By way of summary, my key findings are as follows. Having regard to the totality of the facts, and on the proper construction of the words “director” and “principal director”, I am satisfied that Mr McCallum is the sole principal director of the Documentary for the purpose of the Copyright Act. Whilst I am satisfied that Mr Ngo is a director of the Documentary, I am not satisfied that he is a principal director. 

For the reasons set out in Part B 10, the text, context and purpose of Part IX of the Copyright Act do not support a “general waiver” of the moral rights recognised by that Part. It follows that cl 6.2 of the Director’s Agreement is not enforceable to the extent that it seeks to operate as a general waiver of Mr McCallum’s moral rights under the Copyright Act. Nor does that clause (properly construed) give rise to a general consent to the infringements of the Copyright Act that Mr McCallum has claimed in these proceedings. 

I am satisfied that Projector Films infringed Mr McCallum’s moral right of attribution by failing to attribute him as the principal director of the Documentary by not giving him the credit “Directed by Stephen McCallum” when regard is had to the specific context of the opening and end credits of the Documentary. I am also satisfied that Projector Films has infringed Mr McCallum’s moral right against false attribution by conveying that Mr Ngo is the sole principal director of the Documentary in the specific context of those opening and end credits. I am further satisfied that Mr Ngo also infringed Mr McCallum’s moral rights under the Copyright Act. 

In relation to the breach of contract claims advanced by Mr McCallum, I am satisfied that Projector Films breached the Director’s Agreement by: (a) failing to give Mr McCallum the credit “Directed by Stephen McCallum”; (b) failing to seek Mr McCallum’s agreement as to the positioning of the credits that were included in the versions of the Documentary that have been screened which attribute Mr Ngo as a director of the Documentary; (c) failing to pay two invoices issued by Mr McCallum; and (d) failing to provide Mr McCallum with various cuts and edits of the Documentary for his approval. 

In relation to Mr McCallum’s ACL claims, I am satisfied that Projector Films engaged in misleading and deceptive conduct contrary to s 18 of the ACL by making or causing to be made: (a) the “First IMDb Representation” and the “Third IMDb Representation” (as defined below); (b) the “Sundance Website Representations” and the “Sundance Version Director Representation” (as defined below); and (c) the “First MIFF Representation” (as defined below). 

As to Projector Films’ Cross-Claim, I am not satisfied that Mr McCallum breached cll 2.1(a) and (b) of the Director’s Agreement by failing to discharge his duties as a director. Nor did he engage in misleading or deceptive conduct as alleged by Projector Films. While Mr McCallum did not breach cl 7.1(c)(iv) of the Director’s Agreement by bringing adverse publicity or notoriety to the Documentary, I am satisfied that he did breach cl 7.1(c)(iv) in one respect by bringing notoriety to Projector Films.

GenAI

The Federal Court of Australia has released a new Practice Note on the use of Generative AI in proceedings before the Court. 

 The Practice Note outlines the Court’s expectations, highlights the potential benefits of Generative AI, and sets clear guidance on responsible use, accountability and disclosure obligations. It also identifies areas where particular caution is required, including pleadings, submissions, evidence and confidential material. 

 For further information ... 


  Notice 

  Media release

Genes

The Genetic Discrimination Bill has now passed into law. 

The Treasury Laws Amendment (Genetic Testing Protections in Life Insurance and Other Measures) Act 2026 received Royal Assent on 8 April and will come into full effect from 8 October 2026.

19 March 2026

TIA

Commonwealth Ombudsman, Oversight of Covert Electronic Surveillance – Report to the Minister for Home Affairs on agencies’ compliance with the Telecommunications (Interception and Access) Act 1979 and the Telecommunications Act 1997 from Commonwealth Ombudsman inspections conducted from 1 July 2024 to 30 June 2025 

 https://www.ombudsman.gov.au/__data/assets/pdf_file/0022/325390/Oversight-of-Covert-Electronic-Surveillance-Report-2024-2025-AMENDED.pdf

03 February 2026

AI Safety

The 2nd International AI Safety Report states:

 — General-purpose AI capabilities have continued to improve, especially in mathematics, coding, and autonomous operation. Leading AI systems achieved gold-medal performance on International Mathematical Olympiad questions. In coding, AI agents can now reliably complete some tasks that would take a human programmer about half an hour, up from under 10 minutes a year ago. Performance nevertheless remains ‘jagged’, with leading systems still failing at some seemingly simple tasks. 

— Improvements in general-purpose AI capabilities increasingly come from techniques applied after a model’s initial training. These ‘post-training’ techniques include refining models for specific tasks and allowing them to use more computing power when generating outputs. At the same time, using more computing power for initial training continues to also improve model capabilities. 

— AI adoption has been rapid, though highly uneven across regions. AI has been adopted faster than previous technologies like the personal computer, with at least 700 million people now using leading AI systems weekly. In some countries over 50% of the population uses AI, though across much of Africa, Asia, and Latin America adoption rates likely remain below 10%. 

— Advances in AI’s scientific capabilities have heightened concerns about misuse in biological weapons development. Multiple AI companies chose to release new models in 2025 with additional safeguards after pre-deployment testing could not rule out the possibility that they could meaningfully help novices develop such weapons. 

— More evidence has emerged of AI systems being used in real-world cyberattacks. Security analyses by AI companies indicate that malicious actors and state-associated groups are using AI tools to assist in cyber operations. 

— Reliable pre-deployment safety testing has become harder to conduct. It has become more common for models to distinguish between test settings and real-world deployment, and to exploit loopholes in evaluations. This means that dangerous capabilities could go undetected before deployment. 

— Industry commitments to safety governance have expanded. In 2025, 12 companies published or updated Frontier AI Safety Frameworks – documents that describe how they plan to manage risks as they build more capable models. Most risk management initiatives remain voluntary, but a few jurisdictions are beginning to formalise some practices as legal requirements. 

This Report assesses what general-purpose AI systems can do, what risks they pose, and how those risks can be managed. It was written with guidance from over 100 independent experts, including nominees from more than 30 countries and international organisations, such as the EU, OECD, and UN. Led by the Chair, the independent experts writing it jointly had full discretion over its content. 

The authors note 

 This Report focuses on the most capable general-purpose AI systems and the emerging risks associated with them. ‘General-purpose AI’ refers to AI models and systems that can perform a wide variety of tasks. ‘Emerging risks’ are risks that arise at the frontier of general-purpose AI capabilities. Some of these risks are already materialising, with documented harms; others remain more uncertain but could be severe if they materialise. 

The aim of this work is to help policymakers navigate the ‘evidence dilemma’ posed by general-purpose AI. AI systems are rapidly becoming more capable, but evidence on their risks is slow to emerge and difficult to assess. For policymakers, acting too early can lead to entrenching ineffective interventions, while waiting for conclusive data can leave society vulnerable to potentially serious negative impacts. To alleviate this challenge, this Report synthesises what is known about AI risks as concretely as possible while highlighting remaining gaps. 

While this Report focuses on risks, general-purpose AI can also deliver significant benefits. These systems are already being usefully applied in healthcare, scientific research, education, and other sectors, albeit at highly uneven rates globally. But to realise their full potential, risks must be effectively managed. Misuse, malfunctions, and systemic disruption can erode trust and impede adoption. The governments attending the AI Safety Summit initiated this Report because a clear understanding of these risks will allow institutions to act in proportion to their severity and likelihood. 

Capabilities are improving rapidly but unevenly 

Since the publication of the 2025 Report, general-purpose AI capabilities have continued to improve, driven by new techniques that enhance performance after initial training. AI developers continue to train larger models with improved performance. Over the past year, they have further improved capabilities through ‘inference-time scaling’: allowing models to use more computing power in order to generate intermediate steps before giving a final answer. This technique has led to particularly large performance gains on more complex reasoning tasks in mathematics, software engineering, and science. 

At the same time, capabilities remain ‘jagged’: leading systems may excel at some difficult tasks while failing at other, simpler ones. General-purpose AI systems excel in many complex domains, including generating code, creating photorealistic images, and answering expert-level questions in mathematics and science. Yet they struggle with some tasks that seem more straightforward, such as counting objects in an image, reasoning about physical space, and recovering from basic errors in longer workflows. 

The trajectory of AI progress through 2030 is uncertain, but current trends are consistent with continued improvement. AI developers are betting that computing power will remain important, having announced hundreds of billions of dollars in data centre investments. Whether capabilities will continue to improve as quickly as they recently have is hard to predict. Between now and 2030, it is plausible that progress could slow or plateau (e.g. due to bottlenecks in data or energy), continue at current rates, or accelerate dramatically (e.g. if AI systems begin to speed up AI research itself). 

Real-world evidence for several risks is growing 

General-purpose AI risks fall into three categories: malicious use, malfunctions, and systemic risks. 

Malicious use 

AI-generated content and criminal activity: AI systems are being misused to generate content for scams, fraud, blackmail, and non-consensual intimate imagery. Although the occurrence of such harms is well-documented, systematic data on their prevalence and severity remains limited. 

Influence and manipulation: In experimental settings, AI-generated content can be as effective as human-written content at changing people’s beliefs. Real-world use of AI for manipulation is documented but not yet widespread, though it may increase as capabilities improve. 

Cyberattacks: AI systems can discover software vulnerabilities and write malicious code. In one competition, an AI agent identified 77% of the vulnerabilities present in real software. Criminal groups and state-associated attackers are actively using general-purpose AI in their operations. Whether attackers or defenders will benefit more from AI assistance remains uncertain. 

Biological and chemical risks: General-purpose AI systems can provide information about biological and chemical weapons development, including details about pathogens and expert-level laboratory instructions. In 2025, multiple developers released new models with additional safeguards after they could not exclude the possibility that these models could assist novices in developing such weapons. It remains difficult to assess the degree to which material barriers continue to constrain actors seeking to obtain them. 

Malfunctions 

Reliability challenges: Current AI systems sometimes exhibit failures such as fabricating information, producing flawed code, and giving misleading advice. AI agents pose heightened risks because they act autonomously, making it harder for humans to intervene before failures cause harm. Current techniques can reduce failure rates but not to the level required in many high-stakes settings. 

Loss of control: ‘Loss of control’ scenarios are those in which AI systems operate outside of anyone’s control, with no clear path to regaining control. Current systems lack the capabilities to pose such risks, but they are improving in relevant areas such as autonomous operation. Since the last Report, it has become more common for models to distinguish between test settings and real-world deployment and to find loopholes in evaluations, which could allow dangerous capabilities to go undetected before deployment. 

Systemic risks 

Labour market impacts: General-purpose AI will likely automate a wide range of cognitive tasks, especially in knowledge work. Economists disagree on the magnitude of future impacts: some expect job losses to be offset by new job creation, while others argue that widespread automation could significantly reduce employment and wages. Early evidence shows no effect on overall employment, but some signs of declining demand for early-career workers in some AI-exposed occupations, such as writing. 

Risks to human autonomy: AI use may affect people’s ability to make informed choices and act on them. Early evidence suggests that reliance on AI tools can weaken critical thinking skills and encourage ‘automation bias’, the tendency to trust AI system outputs without sufficient scrutiny. ‘AI companion’ apps now have tens of millions of users, a small share of whom show patterns of increased loneliness and reduced social engagement. 

Layering multiple approaches offers more robust risk management 

Managing general-purpose AI risks is difficult due to technical and institutional challenges. Technically, new capabilities sometimes emerge unpredictably, the inner workings of models remain poorly understood, and there is an ‘evaluation gap’: performance on pre-deployment tests does not reliably predict real-world utility or risk. Institutionally, developers have incentives to keep important information proprietary, and the pace of development can create pressure to prioritise speed over risk management and make it harder for institutions to build governance capacity. 

Risk management practices include threat modelling to identify vulnerabilities, capability evaluations to assess potentially dangerous behaviours, and incident reporting to gather more evidence. In 2025, 12 companies published or updated their Frontier AI Safety Frameworks – documents that describe how they plan to manage risks as they build more capable models. While AI risk management initiatives remain largely voluntary, a small number of regulatory regimes are beginning to formalise some risk management practices as legal requirements. 

Technical safeguards are improving but still show significant limitations. For example, attacks designed to elicit harmful outputs have become more difficult, but users can still sometimes obtain harmful outputs by rephrasing requests or breaking them into smaller steps. AI systems can be made more robust by layering multiple safeguards, an approach known as ‘defence-in-depth’. 

Open-weight models pose distinct challenges. They offer significant research and commercial benefits, particularly for lesser-resourced actors. However, they cannot be recalled once released, their safeguards are easier to remove, and actors can use them outside of monitored environments – making misuse harder to prevent and trace. 

Societal resilience plays an important role in managing AI-related harms. Because risk management measures have limitations, they will likely fail to prevent some AI-related incidents. Societal resilience-building measures to absorb and recover from these shocks include strengthening critical infrastructure, developing tools to detect AI-generated content, and building institutional capacity to respond to novel threats.

01 February 2026

Personhoods

'Legal personhood for cultural heritage? Some preliminary reflections' by Alberto Frigerio in (2026) International Journal of Cultural Property 1-8 comments

 Cultural heritage occupies a paradoxical position in law: It is protected as property but experienced as a repository of identity, memory, and dignity. This article examines whether cultural heritage could, in principle, be recognized as a subject of law, drawing on emerging developments in environmental and nonhuman personhood. After tracing the historical and conceptual evolution of legal personhood—from human and corporate subjects to nature and ecosystems—it explores the moral, relational, and symbolic dimensions that might justify extending personhood to heritage. The analysis highlights both the potential benefits of such recognition, including stronger ethical and representational protections, and the associated risks, such as legal inflation, state appropriation, and conflicts with ownership and restitution law. Ultimately, it argues that rethinking heritage through the lens of relational personhood reveals the need for a more pluralistic and ethically responsive legal imagination. 

'Legal Personhood for Artwork' by Sergio Alberto Gramitto Ricci in (2025) 76(5/6) University of California San Francisco Law Journal 1429 states 

Artwork is unique and irreplaceable. It is signifier and signified. The signified of a work of art is its coherent purpose. But the signified of a work of art can be altered when not protected. The ramifications of unduly altering the signified of a work of art are consequential for both living and future generations. While the law provides protection to artists and art owners, it fails to grant rights to works of art themselves. The current legal paradigm, designed around the interest of owners and artists, also falls short of protecting Indigenous art aimed at conserving traditions and cultural identity, rather than monetizing creativity. This Article provides a theoretical framework for recognizing legal personhood for works of art, in the interests of art in and of itself as well as of current and future generations of human beings. This new paradigm protects artwork through the features of legal personhood.