16 April 2026

Moral Rights

In McCallum v Projector Films Pty Ltd (Liability Hearing) [2026] FCA 173 the Court considered the moral rights of attribution in respect of a documentary film.

The introduction to the judgment states

 The central dispute between the parties to these proceedings is who is the “principal director” of the documentary film entitled “Never Get Busted!” (the Documentary or NGB). Most (but not all) of the other disputes between the parties depend on the outcome of that central question. 

The Documentary examines the colourful life of Mr Barry Cooper, who was at one time a Texan-based narcotics officer during the height of the “war on drugs” in the 1990s. The Documentary has taken over five years to complete. It has already screened at the prestigious Sundance Film Festival in Utah in the United States and had its Australian premiere at the Melbourne International Film Festival. It has also screened at other film festivals. Potential offers from streaming services await. What should have been hailed as a success by all those involved in the making of the Documentary has become the subject of an acrimonious dispute. The key protagonists have betrayed the adage applicable to journalists but equally applicable to filmmakers that they not become part of the story. 

As I stated in the interlocutory decision in these proceedings, to lay members of the public, the identification of the dispute as to who is the “principal director” may beg the question as to what is the difference between a “principal director” and a “director” of a film: McCallum v Projector Films Pty Ltd [2025] FCA 903; 187 IPR 191 at [3]. The answer to this question lies in certain interlocking provisions of Pt IX of the Copyright Act 1968 (Cth) (Copyright Act) which deals with “moral rights” of attribution in respect of cinematographic works. For the purposes of attribution (and, conversely, proscribing false attribution) of moral rights in respect of a “cinematograph film” where two or more individuals are involved in directing that film, s 191 of the Copyright Act provides that “a reference in this Part to the director ... is a reference to the principal director of the film and does not include a reference to any subsidiary director, whether described as an associate director, line director, assistant director or in any other way” (emphasis added). It can thus be seen that, where there is more than one person who is said to be the director of a film, the attribution of a person as the “principal director” has considerable significance to the moral rights of the relevant person.

On one side of the dispute is the applicant (Mr McCallum) who maintains he is the principal director of the Documentary. He was engaged under successive agreements to be the director of the Documentary, first, under a Crew Agreement entered into on 24 February 2020 and then under the Director’s Agreement which was varied by a Deed of Variation in or about March 2023. Clause 9.1 of the Director’s Agreement provides that, so long as Mr McCallum fulfills his obligations under that Agreement, he is entitled to be credited as the director of the Documentary with the credit, “Directed by Stephen McCallum”. This has not occurred in any version of the Documentary that has been screened to date. Mr McCallum says that the failure to attribute him as a director of the Documentary with the credit “Directed by Stephen McCallum” amounts to an infringement of his moral rights as protected under the Copyright Act, as well as being a breach of cl 9.1 of the Director’s Agreement. 

On the other side of the dispute are the respondents. The first respondent is Projector Films; it is the counterparty to the Director’s Agreement. The second respondent is Mr Ngo. Mr Ngo has a unique role in the making of the Documentary. That is because it is common ground that Mr Ngo, together with Ms Erin Williams-Weir (who is Mr Ngo’s wife), created the idea for the Documentary. It is also common ground that Mr Ngo is a producer of the Documentary and also its principal writer. The other producers of the Documentary are Ms Williams-Weir and Mr Daniel Joyce (Mr Joyce). Mr Joyce and Mr Ngo are business partners; they are the company directors and shareholders of Projector Films. 

Initially, the respondents contended that it was Mr Ngo alone, and not Mr McCallum, who was the principal director of the Documentary. The respondents advanced this position based on a claim that in January 2022, Mr McCallum said that he would not be performing any of the editing and post-production work involved in making the Documentary, and that all this work was thereafter performed by Mr Ngo. However, the respondents no longer maintain that Mr Ngo alone is the principal director of the Documentary. The respondents now say that both Mr Ngo and Mr McCallum are principal directors of the Documentary: Defence to the Further Amended Statement of Claim (FASOC) [4(f)]. 

Having taken the position that both men are principal directors, one might have expected the respondents to ensure that both would be attributed as such in the opening and closing credits of the different versions of the Documentary that have been screened. But that has not happened. Instead, the respondents say that even though both men are the principal directors of the Documentary, Mr Ngo is the main one and deserves an enhanced credit relative to Mr McCallum. That has given rise to a dispute as to whether the credit “Directed by” in favour of Mr Ngo and the credit “Director Stephen McCallum” would signify that the former is the principal director of the Documentary to the exclusion of the latter. 

One would have also expected that once the respondents admitted that Mr McCallum was a principal director of the Documentary, there would no longer be any dispute that Mr McCallum discharged his duties as a director. But that too has not happened. Projector Films has filed and maintained a cross-claim (in the form of the Amended Statement of Cross-Claim) in which it contends that Mr McCallum did not discharge his duties as a director in breach of the Director’s Agreement. 

The issues that fall for determination involve the determination of novel legal issues that have not previously been determined under Australian law such as what it means to be a “director” or “principal director” for the purpose of the Copyright Act, whether Mr McCallum has moral rights under that Act to be attributed as the sole principal director of the Documentary, whether such rights may be waived by a “general waiver” and, if not, whether such a waiver may be regarded as a lawful consent to an infringement of those rights. The resolution of these issues turns upon disputed facts regarding who was more involved in making the Documentary. That has included disputes as tedious as who came up with the idea that Mr Cooper should wear a Hawaiian-styled floral shirt during an interview. The parties have also advanced several subsidiary claims. In all, the following issues arise for determination:

(a) the Moral Rights Claims:

(i) what is the meaning of the words “director” and “principal director” for the purpose of Part IX of the Copyright Act;

(ii) whether Mr McCallum is the sole principal director of the version of the Documentary that was screened at the Sundance Film Festival (the Sundance Version) and in respect of any further or future versions (the Further Versions), including the feature length version which was screened at the Melbourne International Film Festival (the Feature Version), or whether both Mr Ngo and Mr McCallum are principal directors of those Versions;

(iii) whether cl 6.2 of the Director’s Agreement amounts to a lawful “general waiver” of all of Mr McCallum’s moral rights under the Copyright Act or, alternatively, whether by that clause Mr McCallum lawfully consented to the infringement of his moral rights under s 195AW of the Copyright Act;

(iv) if there has been no general waiver or consent to an infringement, whether Projector Films has infringed Mr McCallum’s moral rights by failing to attribute him as the principal director of the different versions of the Documentary (including by not giving him the credit “Directed by Stephen McCallum”) and falsely attributing Mr Ngo as the sole principal director; and

(v) whether Mr Ngo has infringed Mr McCallum’s moral rights under the Copyright Act;

(b) the Misleading and Deceptive Conduct Claims:

(i) whether Projector Films engaged in misleading and deceptive conduct contrary to s 18 of the Australian Consumer Law (ACL) (as contained in Schedule 2 of the Competition and Consumer Act 2010 (Cth)) by making certain representations:

(A) on the website called “IMDb” (historically known as the Internet Movie Database);

(B) in the screening of the Documentary at the Sundance Film Festival and the Melbourne International Film Festival;

(C) in the promotional materials relating to those festivals; and

(D) in communications with Screen Australia;

(c) the Breach of Contract Claims:

(i) whether Projector Films breached cl 9.1 of the Director’s Agreement by failing to give Mr McCallum the credit “Directed by Stephen McCallum”;

(ii) whether Projector Films breached cl 9.2 of the Director’s Agreement by failing to seek Mr McCallum’s agreement as to the inclusion of credits for Mr Ngo as a director of the Documentary;

(iii) whether Projector Films has breached cl 3 of the Director’s Agreement, as varied by the Deed of Variation, by failing to pay two invoices issued by Mr McCallum; and

(iv) whether Projector Films has breached cl 5 of the Director’s Agreement by failing to provide Mr McCallum with various cuts and edits of the Documentary for his approval;

(d) the Cross-Claims:

(i) whether Mr McCallum breached cll 2.1(a) and (b) of the Director’s Agreement by failing to discharge his duties as a director;

(ii) whether Mr McCallum breached cl 7.1(c)(iv) of the Director’s Agreement by bringing adverse publicity or notoriety to the Documentary and/or Projector Films; and

(iii) whether Mr McCallum engaged in misleading or deceptive conduct by making representations or causing them to be made to third parties in relation to the IMDb website and the attribution of directorship of the Documentary.

The parties asked by consent that I only determine issues of liability at this stage, and I agreed to take that course. The questions of relief, remedy and other orders are to be decided separately. I have structured my reasons to address Mr McCallum’s moral rights claims in Part B, his breach of contract claims in Part C, his claims under the ACL in Part D, and Projector Films’ Cross-Claim in Part E. 

SUMMARY OF FINDINGS 

By way of summary, my key findings are as follows. Having regard to the totality of the facts, and on the proper construction of the words “director” and “principal director”, I am satisfied that Mr McCallum is the sole principal director of the Documentary for the purpose of the Copyright Act. Whilst I am satisfied that Mr Ngo is a director of the Documentary, I am not satisfied that he is a principal director. 

For the reasons set out in Part B 10, the text, context and purpose of Part IX of the Copyright Act do not support a “general waiver” of the moral rights recognised by that Part. It follows that cl 6.2 of the Director’s Agreement is not enforceable to the extent that it seeks to operate as a general waiver of Mr McCallum’s moral rights under the Copyright Act. Nor does that clause (properly construed) give rise to a general consent to the infringements of the Copyright Act that Mr McCallum has claimed in these proceedings. 

I am satisfied that Projector Films infringed Mr McCallum’s moral right of attribution by failing to attribute him as the principal director of the Documentary by not giving him the credit “Directed by Stephen McCallum” when regard is had to the specific context of the opening and end credits of the Documentary. I am also satisfied that Projector Films has infringed Mr McCallum’s moral right against false attribution by conveying that Mr Ngo is the sole principal director of the Documentary in the specific context of those opening and end credits. I am further satisfied that Mr Ngo also infringed Mr McCallum’s moral rights under the Copyright Act. 

In relation to the breach of contract claims advanced by Mr McCallum, I am satisfied that Projector Films breached the Director’s Agreement by: (a) failing to give Mr McCallum the credit “Directed by Stephen McCallum”; (b) failing to seek Mr McCallum’s agreement as to the positioning of the credits that were included in the versions of the Documentary that have been screened which attribute Mr Ngo as a director of the Documentary; (c) failing to pay two invoices issued by Mr McCallum; and (d) failing to provide Mr McCallum with various cuts and edits of the Documentary for his approval. 

In relation to Mr McCallum’s ACL claims, I am satisfied that Projector Films engaged in misleading and deceptive conduct contrary to s 18 of the ACL by making or causing to be made: (a) the “First IMDb Representation” and the “Third IMDb Representation” (as defined below); (b) the “Sundance Website Representations” and the “Sundance Version Director Representation” (as defined below); and (c) the “First MIFF Representation” (as defined below). 

As to Projector Films’ Cross-Claim, I am not satisfied that Mr McCallum breached cll 2.1(a) and (b) of the Director’s Agreement by failing to discharge his duties as a director. Nor did he engage in misleading or deceptive conduct as alleged by Projector Films. While Mr McCallum did not breach cl 7.1(c)(iv) of the Director’s Agreement by bringing adverse publicity or notoriety to the Documentary, I am satisfied that he did breach cl 7.1(c)(iv) in one respect by bringing notoriety to Projector Films.

GenAI

The Federal Court of Australia has released a new Practice Note on the use of Generative AI in proceedings before the Court. 

 The Practice Note outlines the Court’s expectations, highlights the potential benefits of Generative AI, and sets clear guidance on responsible use, accountability and disclosure obligations. It also identifies areas where particular caution is required, including pleadings, submissions, evidence and confidential material. 

For further information ...

Notice

Media release

Genes

The Genetic Discrimination Bill has now passed into law. 

The Treasury Laws Amendment (Genetic Testing Protections in Life Insurance and Other Measures) Act 2026 received Royal Assent on 8 April 2026 and comes into full effect on 8 October 2026.

19 March 2026

TIA

Commonwealth Ombudsman Oversight of Covert Electronic Surveillance – Report to the Minister for Home Affairs on agencies’ compliance with the Telecommunications (Interception and Access) Act 1979 and the Telecommunications Act 1997 from Commonwealth Ombudsman inspections conducted from 1 July 2024 to 30 June 2025

 https://www.ombudsman.gov.au/__data/assets/pdf_file/0022/325390/Oversight-of-Covert-Electronic-Surveillance-Report-2024-2025-AMENDED.pdf

03 February 2026

AI Safety

The 2nd International AI Safety Report states 

 — General-purpose AI capabilities have continued to improve, especially in mathematics, coding, and autonomous operation. Leading AI systems achieved gold-medal performance on International Mathematical Olympiad questions. In coding, AI agents can now reliably complete some tasks that would take a human programmer about half an hour, up from under 10 minutes a year ago. Performance nevertheless remains ‘jagged’, with leading systems still failing at some seemingly simple tasks. 

— Improvements in general-purpose AI capabilities increasingly come from techniques applied after a model’s initial training. These ‘post-training’ techniques include refining models for specific tasks and allowing them to use more computing power when generating outputs. At the same time, using more computing power for initial training continues to also improve model capabilities. 

— AI adoption has been rapid, though highly uneven across regions. AI has been adopted faster than previous technologies like the personal computer, with at least 700 million people now using leading AI systems weekly. In some countries over 50% of the population uses AI, though across much of Africa, Asia, and Latin America adoption rates likely remain below 10%. 

— Advances in AI’s scientific capabilities have heightened concerns about misuse in biological weapons development. Multiple AI companies chose to release new models in 2025 with additional safeguards after pre-deployment testing could not rule out the possibility that they could meaningfully help novices develop such weapons. 

— More evidence has emerged of AI systems being used in real-world cyberattacks. Security analyses by AI companies indicate that malicious actors and state-associated groups are using AI tools to assist in cyber operations. 

— Reliable pre-deployment safety testing has become harder to conduct. It has become more common for models to distinguish between test settings and real-world deployment, and to exploit loopholes in evaluations. This means that dangerous capabilities could go undetected before deployment. 

— Industry commitments to safety governance have expanded. In 2025, 12 companies published or updated Frontier AI Safety Frameworks – documents that describe how they plan to manage risks as they build more capable models. Most risk management initiatives remain voluntary, but a few jurisdictions are beginning to formalise some practices as legal requirements. 

This Report assesses what general-purpose AI systems can do, what risks they pose, and how those risks can be managed. It was written with guidance from over 100 independent experts, including nominees from more than 30 countries and international organisations, such as the EU, OECD, and UN. Led by the Chair, the independent experts writing it jointly had full discretion over its content. 

The authors note 

 This Report focuses on the most capable general-purpose AI systems and the emerging risks associated with them. ‘General-purpose AI’ refers to AI models and systems that can perform a wide variety of tasks. ‘Emerging risks’ are risks that arise at the frontier of general-purpose AI capabilities. Some of these risks are already materialising, with documented harms; others remain more uncertain but could be severe if they materialise. 

The aim of this work is to help policymakers navigate the ‘evidence dilemma’ posed by general-purpose AI. AI systems are rapidly becoming more capable, but evidence on their risks is slow to emerge and difficult to assess. For policymakers, acting too early can lead to entrenching ineffective interventions, while waiting for conclusive data can leave society vulnerable to potentially serious negative impacts. To alleviate this challenge, this Report synthesises what is known about AI risks as concretely as possible while highlighting remaining gaps. 

While this Report focuses on risks, general-purpose AI can also deliver significant benefits. These systems are already being usefully applied in healthcare, scientific research, education, and other sectors, albeit at highly uneven rates globally. But to realise their full potential, risks must be effectively managed. Misuse, malfunctions, and systemic disruption can erode trust and impede adoption. The governments attending the AI Safety Summit initiated this Report because a clear understanding of these risks will allow institutions to act in proportion to their severity and likelihood.

Capabilities are improving rapidly but unevenly 

Since the publication of the 2025 Report, general-purpose AI capabilities have continued to improve, driven by new techniques that enhance performance after initial training. AI developers continue to train larger models with improved performance. Over the past year, they have further improved capabilities through ‘inference-time scaling’: allowing models to use more computing power in order to generate intermediate steps before giving a final answer. This technique has led to particularly large performance gains on more complex reasoning tasks in mathematics, software engineering, and science. 

At the same time, capabilities remain ‘jagged’: leading systems may excel at some difficult tasks while failing at other, simpler ones. General-purpose AI systems excel in many complex domains, including generating code, creating photorealistic images, and answering expert-level questions in mathematics and science. Yet they struggle with some tasks that seem more straightforward, such as counting objects in an image, reasoning about physical space, and recovering from basic errors in longer workflows. 

The trajectory of AI progress through 2030 is uncertain, but current trends are consistent with continued improvement. AI developers are betting that computing power will remain important, having announced hundreds of billions of dollars in data centre investments. Whether capabilities will continue to improve as quickly as they recently have is hard to predict. Between now and 2030, it is plausible that progress could slow or plateau (e.g. due to bottlenecks in data or energy), continue at current rates, or accelerate dramatically (e.g. if AI systems begin to speed up AI research itself). 

Real-world evidence for several risks is growing 

General-purpose AI risks fall into three categories: malicious use, malfunctions, and systemic risks. 

Malicious use 

AI-generated content and criminal activity: AI systems are being misused to generate content for scams, fraud, blackmail, and non-consensual intimate imagery. Although the occurrence of such harms is well-documented, systematic data on their prevalence and severity remains limited.

Influence and manipulation: In experimental settings, AI-generated content can be as effective as human-written content at changing people’s beliefs. Real-world use of AI for manipulation is documented but not yet widespread, though it may increase as capabilities improve. 

Cyberattacks: AI systems can discover software vulnerabilities and write malicious code. In one competition, an AI agent identified 77% of the vulnerabilities present in real software. Criminal groups and state-associated attackers are actively using general-purpose AI in their operations. Whether attackers or defenders will benefit more from AI assistance remains uncertain. 

Biological and chemical risks: General-purpose AI systems can provide information about biological and chemical weapons development, including details about pathogens and expert-level laboratory instructions. In 2025, multiple developers released new models with additional safeguards after they could not exclude the possibility that these models could assist novices in developing such weapons. It remains difficult to assess the degree to which material barriers continue to constrain actors seeking to obtain them.

Malfunctions 

Reliability challenges: Current AI systems sometimes exhibit failures such as fabricating information, producing flawed code, and giving misleading advice. AI agents pose heightened risks because they act autonomously, making it harder for humans to intervene before failures cause harm. Current techniques can reduce failure rates but not to the level required in many high-stakes settings. 

Loss of control: ‘Loss of control’ scenarios are scenarios where AI systems operate outside of anyone’s control, with no clear path to regaining control. Current systems lack the capabilities to pose such risks, but they are improving in relevant areas such as autonomous operation. Since the last Report, it has become more common for models to distinguish between test settings and real-world deployment and to find loopholes in evaluations, which could allow dangerous capabilities to go undetected before deployment. 

Systemic risks 

Labour market impacts: General-purpose AI will likely automate a wide range of cognitive tasks, especially in knowledge work. Economists disagree on the magnitude of future impacts: some expect job losses to be offset by new job creation, while others argue that widespread automation could significantly reduce employment and wages. Early evidence shows no effect on overall employment, but some signs of declining demand for early-career workers in some AI-exposed occupations, such as writing.

Risks to human autonomy: AI use may affect people’s ability to make informed choices and act on them. Early evidence suggests that reliance on AI tools can weaken critical thinking skills and encourage ‘automation bias’, the tendency to trust AI system outputs without sufficient scrutiny. ‘AI companion’ apps now have tens of millions of users, a small share of whom show patterns of increased loneliness and reduced social engagement.

Layering multiple approaches offers more robust risk management 

Managing general-purpose AI risks is difficult due to technical and institutional challenges. Technically, new capabilities sometimes emerge unpredictably, the inner workings of models remain poorly understood, and there is an ‘evaluation gap’: performance on pre-deployment tests does not reliably predict real-world utility or risk. Institutionally, developers have incentives to keep important information proprietary, and the pace of development can create pressure to prioritise speed over risk management and makes it harder for institutions to build governance capacity. 

Risk management practices include threat modelling to identify vulnerabilities, capability evaluations to assess potentially dangerous behaviours, and incident reporting to gather more evidence. In 2025, 12 companies published or updated their Frontier AI Safety Frameworks – documents that describe how they plan to manage risks as they build more capable models. While AI risk management initiatives remain largely voluntary, a small number of regulatory regimes are beginning to formalise some risk management practices as legal requirements.

Technical safeguards are improving but still show significant limitations. For example, attacks designed to elicit harmful outputs have become more difficult, but users can still sometimes obtain harmful outputs by rephrasing requests or breaking them into smaller steps. AI systems can be made more robust by layering multiple safeguards, an approach known as ‘defence-in-depth’.

Open-weight models pose distinct challenges. They offer significant research and commercial benefits, particularly for lesser-resourced actors. However, they cannot be recalled once released, their safeguards are easier to remove, and actors can use them outside of monitored environments – making misuse harder to prevent and trace. 

Societal resilience plays an important role in managing AI-related harms. Because risk management measures have limitations, they will likely fail to prevent some AI-related incidents. Societal resilience-building measures to absorb and recover from these shocks include strengthening critical infrastructure, developing tools to detect AI-generated content, and building institutional capacity to respond to novel threats.

01 February 2026

Personhoods

'Legal personhood for cultural heritage? Some preliminary reflections' by Alberto Frigerio in (2026) International Journal of Cultural Property 1-8 comments

 Cultural heritage occupies a paradoxical position in law: It is protected as property but experienced as a repository of identity, memory, and dignity. This article examines whether cultural heritage could, in principle, be recognized as a subject of law, drawing on emerging developments in environmental and nonhuman personhood. After tracing the historical and conceptual evolution of legal personhood—from human and corporate subjects to nature and ecosystems—it explores the moral, relational, and symbolic dimensions that might justify extending personhood to heritage. The analysis highlights both the potential benefits of such recognition, including stronger ethical and representational protections, and the associated risks, such as legal inflation, state appropriation, and conflicts with ownership and restitution law. Ultimately, it argues that rethinking heritage through the lens of relational personhood reveals the need for a more pluralistic and ethically responsive legal imagination. 

'Legal Personhood for Artwork' by Sergio Alberto Gramitto Ricci in (2025) 76(5/6) University of California San Francisco Law Journal 1429 states 

Artwork is unique and irreplaceable. It is signifier and signified. The signified of a work of art is its coherent purpose. But the signified of a work of art can be altered when not protected. The ramifications of unduly altering the signified of a work of art are consequential for both living and future generations. While the law provides protection to artists and art owners, it fails to grant rights to works of art themselves. The current legal paradigm, designed around the interest of owners and artists, also falls short of protecting Indigenous art aimed at conserving traditions and cultural identity, rather than monetizing creativity. This Article provides a theoretical framework for recognizing legal personhood for works of art, in the interests of art in and of itself as well as of current and future generations of human beings. This new paradigm protects artwork through the features of legal personhood.

23 January 2026

Pseudolaw

Yet another judgment re pseudolaw. In Commonwealth Bank of Australia v Cahill & Anor [2025] VCC 1860 the Court notes 

The amended defences deny the existence of any lawful credit agreement between the parties, assert that CBA is a “corporate fiction,” and contend that no valid mortgage was created or that CBA lacks standing to enforce it. The defendants also dispute the quantum of the debt and demand production of “wet-ink” originals of various loan and title documents. Judge’s amended counterclaim makes bald and sweeping allegations that CBA engaged in misleading or deceptive conduct, relied on an unfair standard form contract contrary to the Australian Consumer Law, and “securitised” the mortgage in breach of the Corporations Act 2001 (Cth), thereby losing the right to enforce it. It further alleges that enforcement of the mortgage constitutes modern slavery and seeks, among other relief, the return of all payments made, the discharge of the mortgage, and damages.

In referring to 'Sovereign Citizens and pseudo law', the judgment states

 The documents and submissions made by the defendants fall into a by now well-known quasi-philosophy known as the “sovereign citizen” movement. The guiding philosophy appears to be that these persons consider that they are not subject to the laws of the Commonwealth of Australia unless they have expressly “contracted” or consented to be so bound. This philosophy has no basis in law and has been rejected in many cases to date. All persons living under the protection of the Crown in right of the Commonwealth or State are, as a matter of law, subject to the laws of the Commonwealth. Any suggestion to the contrary is both dangerous and undermines the orderly arrangement of any society. The courts of this country will give no credence to such philosophy. 

The documents and submissions filed by the defendants are informed by half-baked statements that contain traces of legal tit-bits scraped from current and ancient sources otherwise also referred to as “pseudo-law”. They are legal gibberish and do not constitute proper statements of principles known to law. 

In Re Coles Supermarkets Australia Pty Ltd [2022] VSC 438, Hetyey AsJ said the following of such submissions:

The defendants appear to be seeking to draw a distinction between themselves as ‘natural’ or ‘living’ persons, on the one hand, and their status as ‘legal’ personalities, on the other. However, contemporary Australian law does not distinguish between a human being and their legal personality. Any such distinction would potentially leave a human being without legal rights, which would be unacceptable in modern society. The contentions put forward by the defendants in this regard are artificial and have no legal consequence. 

I adopt the analysis of John Dixon J in Stefan v McLachlan [2023] VSC 501, dealing with the fictional concept of the ‘living man’, stating that:

The law recognises a living person as having status in law and any person is, in this sense, a legal person. Conceptually, there may be differences between the legal status of a person and that of an entity that is granted a like legal status, but whatever they might be they have no application on this appeal. In asserting that he is a ‘living man’, the appellant does no more than identify that he is a person, an individual. Every person, every individual, and every entity accorded status as a legal person is subject to the rule of law. There are no exceptions in Australian society. 

I also refer to AsJ Gobbo’s decision in Nelson v Greenman & Anor [2024] VSC 704 in which her Honour gives a comprehensive treatment of the fallacies underlying the sovereign citizen and pseudo law movements. I concur with and adopt her Honour’s treatment of the subject at paragraphs [53] – [78].