06 November 2024

Faith

'Human Law, Human Lawyers and the Emerging AI Faith' by Giulia Gentile in (2024) LSE Public Policy Review comments 

 The advent of AI has generated remarkable interest in the legal sector. A new ‘faith’ in the transformative power of AI has emerged among law practitioners. According to this new religion, AI would significantly improve the law and the legal profession thanks to automation and the ensuing gains. This development has a messianic flavour insofar as it would support lawyers in dealing with increasingly complex legal frameworks and a rising demand for legal services. Should lawyers embrace this new faith and allow themselves to be guided by algorithmic power in the development of their practice? As with all new faiths emerging in times of crisis, this paper argues, caution is needed. The implications of the AI religion in the legal sector are far-reaching and shake the very understanding of human law and human lawyers. A critical perspective should be embraced by individual operators, firms and regulators when reflecting on the potential of AI for the legal sector. 

The law and legal professionals are currently experiencing a profound rethinking driven by the advancement of legal artificial intelligence (AI) and data-driven automation (DDA). As history teaches us, in moments of crisis new gods and religions surface (1). Likewise, a novel faith in the transformative force of AI has emerged among legal professionals (2, 3, 4, 5, 6). These developments (AI and DDA) are creating new narratives and beliefs in the power of technology and its impact on law and the legal profession. 

With the emergence of AI systems has come the expectation that they may transform the law and the legal profession. AI and DDA are still developing, yet the hype among lawyers is high, and not completely unfounded. AI and data-driven automation are both quantitatively and qualitatively noteworthy. Numerous scholars and professionals (2, 3, 4, 5, 6) argue that it is just a matter of time until the technology overtakes the legal profession. On this view, now that AI has arrived, nothing will be the same for the law and lawyers. AI and DDA are mushrooming, beginning to colonise all aspects of the legal world, but there are real concerns about the quality of these tools. A telling example involves a US lawyer who used ChatGPT to draft briefs for a case. The judge hearing the case later discovered that the briefs included citations that did not exist and that had been fabricated by ChatGPT (7). 

It is evident that, while AI may be transformative, we must be cautious. The promised new land of artificial law and artificial lawyers may not be as proximate (or as promising) as one might think. Currently, the legal sector is floating somewhere between tradition and automation. 

This offers us the opportunity to go back to first principles. What distinguishes human law and human lawyers from AI law and AI lawyers? What does AI promise to bring to the legal sector and what may it take away? Reflecting on these issues is crucial to ensure the adequacy of prospective public policy on the development of the legal sector, and to avoid lapsing into uncritical acceptance of the novel faith in AI. 

This paper explores these questions and offers reflections on the future of the law and lawyers in the AI era. It first sets out the main scholarly theories about human law and human lawyers, before exploring recent developments in the legal sector in the 21st century. It closes with remarks on the implications of the novel AI religion for human law and human lawyers.

05 November 2024

Jurisdiction

'Māori Rejections of the State’s Criminal Jurisdiction Over Māori in Aotearoa New Zealand’s Courts' by Fleur Te Aho and Julia Tolmie in (2023) 30 New Zealand Universities Law Review comments 

A significant and little-known protest is happening in Aotearoa New Zealand's criminal court. For years, on an almost daily basis, Māori defendants have been rejecting the state's exercise of criminal jurisdiction over them - claims that have been repeatedly rejected by the courts. In this article, we examine the extent and nature of this jurisdictional protest in the criminal court and offer some initial reflections on the implications of the protest and the court's response to date. 

We suggest that this protest is notable both for its scale and, at times, sophistication but that the court's response has been simplistic - dismissing without truly addressing the defendants' arguments. In our view, the courts cannot authentically address such claims without first acknowledging that their jurisdiction - and the state's authority to govern Māori - is founded on an illegitimate and unilateral assumption of power.

Ouch

'Could a robot feel pain?' by Amanda Sharkey in (2024) AI and Society comments 

 Questions about robots feeling pain are important because the experience of pain implies sentience and the ability to suffer. Pain is not the same as nociception, a reflex response to an aversive stimulus. The experience of pain in others has to be inferred. Danaher’s (Sci Eng Ethics 26(4):2023–2049, 2020. https://doi.org/10.1007/s11948-019-00119-x) ‘ethical behaviourist’ account claims that if a robot behaves in the same way as an animal that is recognised to have moral status, then its moral status should also be assumed. Similarly, under a precautionary approach (Sebo in Harvard Rev Philos 25:51–70, 2018. https://doi.org/10.5840/harvardreview20185913), entities from foetuses to plants and robots are given the benefit of the doubt and assumed to be sentient. However, there is a growing consensus about the scientific criteria used to indicate pain and the ability to suffer in animals (Birch in Anim Sentience, 2017. https://doi.org/10.51291/2377-7478.1200; Sneddon et al. in Anim Behav 97:201–212, 2014. https://doi.org/10.1016/j.anbehav.2014.09.007). These include the presence of a central nervous system, changed behaviour in response to pain, and the effects of analgesic pain relief. Few of these criteria are met by robots, and there are risks to assuming that they are sentient and capable of suffering pain. Since robots lack nervous systems and living bodies there is little reason to believe that future robots capable of feeling pain could (or should) be developed. 

Questions have been asked about whether or not a robot might be able to feel pain (Danaher 2020; Smids 2020; Sebo 2018). This issue is of particular interest because of the relationship between the experience of pain and sentience. An entity that has the phenomenological experience of pain must be sentient, because the ability to feel pain requires sentience. Those that can experience pain can suffer, and hence should be afforded moral status. 

What does it mean to have moral status? DeGrazia and Millum (2021) define moral status as follows: ‘To have moral status, an individual must be vulnerable to harm or wrongdoing. More specifically, a being has moral status only if it is for that being’s sake that the being should not be harmed, disrespected, or treated in some other morally problematic fashion.’ Terms closely related to moral status include moral patient, moral standing, moral considerability, personhood, and moral subject (Muehlhauser 2017). An entity that has moral status is one for which we should have moral concern. Sebo (2018) writes, ‘Where there is sentience there is reason for moral concern, for an entity that can experience pain can suffer’. Balcombe (2016) in his book ‘What a fish knows’ is clear about the relationship between pain, suffering and sentience: ‘Organisms that can feel pain can suffer, and therefore have an interest in avoiding pain and suffering. Being able to feel pain is not a trifling thing. It requires conscious experience’ (p 71). 

If robots were shown to be able to feel pain, they would also deserve moral status. Conversely, if they are unable to feel pain, it is not clear that they would deserve moral concern. Sparrow (2004) reports the view that ‘unless machines can be said to suffer, they cannot be appropriate objects for concern at all’. Nussbaum (2022), writing about animals, concludes that ‘We do no harm to non-sentient creatures when we kill them, and since they do not feel pain we need not worry too much about the manner’. 

Being able to experience pain is not the only indication of sentience—sentient beings can also feel pleasure and other emotions and will have a subjective view of the world. As Nussbaum (2022) writes, ‘the world looks like something to them, and they strive for the good as they see it. Sometimes sentience is reduced to the ability to feel pain; but it is really a much broader notion, the notion of having a subjective view of the world’. Nonetheless, having the ability to feel pain requires sentience. 

It is possible for a being to be sentient yet unable to feel pain, as illustrated by congenital analgesia, a rare genetic disorder in which humans do not feel pain. Individuals with congenital analgesia are clearly still sentient, for they have conscious experience of the world. This possibility is not explored further here, and the question of whether there could one day be machines deemed sentient yet unable to experience pain is likewise beyond the scope of this paper, since the focus is on the experience of pain and what it means. The emphasis here is on the idea that if an entity has the phenomenological experience of pain, it must be sentient and capable of suffering. The experience of pain is like a litmus test for sentience. 

The terms ‘sentience’ and ‘consciousness’ are often treated as meaning the same, although some authors prefer one or the other. Damasio and Damasio (2022) use the term ‘consciousness’ rather than sentience. In what they describe as ‘a new theory of consciousness’, they distinguish between “the simpler ability to ‘sense’ or ‘detect’ objects and conditions in the environment” and consciousness, which “occurs when the contents of mind are ‘spontaneously identified as belonging to a specific owner’” (p 2231). They point out that there are living species such as bacteria and plants that can sense or detect objects and conditions in the environment without having either a nervous system or internal representations of those objects or conditions. By contrast, they argue, consciousness involves internal representations. For them, ‘consciousness is present in living organisms capable of constructing sensory representations of components and states of their own bodies, but not in organisms limited to sensing/detecting’ (p 2234). Nussbaum (2022) talks about sentience rather than consciousness. She describes how living creatures, from mammals to fish and birds, are assumed to be sentient. 

In this paper, it is assumed that sentience and consciousness are the same—a common assumption. Some writers do make a distinction between sentience and consciousness. For instance, Nani et al. (2021) suggest that plants may be sentient, but not conscious. For them, sentience represents ‘the immediate perception of an organism that something internal or external is actually happening to itself—it requires, therefore, feedback through a basic system of transmission signals.’ Although Nani et al. propose a conception ‘of different degrees of sentience, ranging from non-conscious sentience to conscious and self-conscious sentience’ they acknowledge that this is unusual ‘as it is commonly assumed that being sentient is the same as being conscious’. 

Discussions about the possibility of robots feeling pain, and of how we might determine whether they do, have tended to rely on speculations about future possibilities, as discussed in Sect. 2. Connections have been drawn between accounts of animal rights and robots (e.g. Gellers 2020; Gunkel 2012; Ryland 2020). However, these discussions pay little attention to the scientific experimental methods that have recently been used for exploring animal sentience (see Sect. 4). 

Many philosophers have speculated about the subjective experiences, or lack of experience, of animals. For Descartes, animals were effectively clockwork mechanisms without subjective awareness or reasoning powers. His follower, Malebranche, provides an account that represents this view (1689): dogs, cats and other animals ‘eat without pleasure; they cry without pain; they believe without knowing it; they desire nothing; they know nothing’ (Malebranche 1689; translated in Huxley 1896). Kant also saw animals as little more than machines, although he objected to the cruel treatment of animals by humans on the grounds that it would make the perpetrators more likely to be cruel towards fellow humans. He argued that we have an indirect moral responsibility towards them (Kant, Lectures on Ethics, 1997). The utilitarians were sensitive to animal suffering and wished to prevent it, as indicated by the quotation above from Bentham (1780), and further elaborated by Singer (1975) in his book on animal rights. 

Far more scientific evidence is available now about animal experiences and reasoning ability than was available to Descartes, or even to Kant or Bentham. Some of that evidence is summarised in the following Sect. 4 on ‘Inferring pain in animals’. Increasingly that evidence is being taken into account by writers such as Korsgaard (2018) and Nussbaum (2022). At the same time, there are those such as Danaher (2020), and Gordon and Gunkel (2022) who speculate about the possibility of robot suffering with little reference to available scientific evidence. In this consideration of the possibility of pain and suffering in robots, we begin with a brief description of pain itself. We then turn to an examination of the idea of pain in robots. The current progress towards developing robots that react to aversive stimuli is reviewed, followed by a discussion of how robot pain and sentience might be inferred or recognised. This is contrasted to the scientific approach to determining the experience of pain in animals. Some arguments against the possibility of robots feeling pain are presented and, in a final section, the consequences that follow from an assumption of sentience for both animals and robots are considered.

03 November 2024

Kafka

'Kafka in the Age of AI and the Futility of Privacy as Control' by Daniel Solove and Woodrow Hartzog in (2024) 104 Boston University Law Review 1021 comments 

Although writing more than a century ago, Franz Kafka captured the core problem of digital technologies – how individuals are rendered powerless and vulnerable. During the past fifty years, and especially in the 21st century, privacy laws have been sprouting up around the world. These laws are often based heavily on an Individual Control Model that aims to empower individuals with rights to help them control the collection, use, and disclosure of their data. 

In this Essay, we argue that although Kafka starkly shows us the plight of the disempowered individual, his work also paradoxically suggests that empowering the individual isn’t the answer to protecting privacy, especially in the age of artificial intelligence. In Kafka’s world, characters readily submit to authority, even when they aren’t forced and even when doing so leads to injury or death. The victims are blamed, and they even blame themselves. 

Although Kafka’s view of human nature is exaggerated for darkly comedic effect, it nevertheless captures many truths that privacy law must reckon with. Even if dark patterns and dirty manipulative practices are cleaned up, people will still make bad decisions about privacy. Despite warnings, people will embrace the technologies that hurt them. When given control over their data, people will give it right back. And when people’s data is used in unexpected and harmful ways, people will often blame themselves. 

Kafka provides key insights for regulating privacy in the age of AI. The law can’t empower individuals when it is the system that renders them powerless. Ultimately, privacy law’s primary goal should not be to give individuals control over their data. Instead, the law should focus on ensuring a societal structure that brings the collection, use, and disclosure of personal data under control.

02 November 2024

Biometrics

A detailed review by the Scottish Biometrics Commissioner of biometric data retention practices under sections 18 to 19C of the Criminal Procedure (Scotland) Act 1995 considers the collection, retention, and destruction of biometric data such as fingerprints, DNA profiles, and custody images by Police Scotland, whose approach is assessed as relying on broader criminal record retention rules rather than a specific biometric data retention policy. 

The report states 

1. Following the 2018 report of the Independent Advisory Group (IAG) on the use of biometric data in Scotland[1], the Scottish Government (SG) committed to reviewing the retention of biometric data provided for under sections 18 to 19C of the Criminal Procedure (Scotland) Act 1995 ("the 1995 Act") as recommended by the IAG. The 1995 Act is the primary Scottish legislation allowing the collection and retention of fingerprints and other biometric samples (including hair samples and nail clippings) from a person arrested by the police. Although the 1995 Act does not include reference to facial images, the IAG did include these in its review. 

2. Since 2018, it is recognised that many of the IAG's findings relevant to the laws of retention have been at least partly addressed or mitigated by subsequent developments in the intervening period, such as: the commencement of the United Kingdom (UK)-wide Data Protection Act 2018 ("the 2018 Act") and in particular the provision for data controllers to ensure that retention is subject to periodical review; the European Court of Human Rights' judgement in Gaughran v UK ("the Gaughran judgement") in 2020; the commencement of the Scottish Biometrics Commissioner (SBC) Act 2020 ("the 2020 Act") and the appointment of the SBC in 2021, which provides oversight of the collection, use, retention and destruction of biometric data by Police Scotland, the Scottish Police Authority (SPA) and the Police Investigations and Review Commissioner (PIRC); the implementation of the SBC's Code of Practice ("the Code") and public complaints mechanism in 2022; the planned rolling programme of annual compliance assessments with the Code undertaken by the SBC, which began from late 2022; the annual thematic assurance reviews already undertaken or proposed by the SBC on particular subject themes; and the establishment of the Police Scotland Biometrics Oversight Board (which meets every six months and covers the following areas of business: biometric data; forensic biometrics; biometrics technology; and ethics). 

3. In the course of 2023, the SG and the SBC (the review team) therefore considered whether a review of retention periods as recommended by the IAG was still required. Such considerations also included consultation with the SBC's own independent advisory group. The review team concluded that such a review could still be beneficial. 

4. The purpose of this review is therefore to generate findings in respect of the retention of biometric data for policing purposes at this current time. The review makes recommendations as considered necessary – with a view to ensuring that the approach taken to the retention of biometric data is lawful, ethical, effective and proportionate. .... 

5. It is recognised that biometric retention in Scotland is informed by a combination of legislative requirements and police retention policies. The review therefore considers three distinct areas - the legal provision concerning the retention of biometric data; the current policy and procedures adopted by Police Scotland; and the current available evidence base in order to support its findings. The review also recognises the importance of ethical and human rights considerations on such matters. It is expected that the findings from this review may inform the policies and procedures of the SG, the SBC and policing partners going forward. 

6. For the avoidance of doubt, the biometric data considered in this report comprise fingerprints, deoxyribonucleic acid (DNA) profiles and custody images. Although images do not feature in the 1995 Act, the review team have included them in the course of their work for completeness. This reflects their longstanding use by police in the detection and prevention of crime and their inclusion in the 2018 IAG report, from which this review originates. 

7. The review team also wish to highlight that they were unable to ascertain a legal definition of what constitutes indefinite retention of data. As such, the review team interpreted it as meaning retention of long duration with no specified end date.

Further 

11. Biometric data is important to the detection of crime - bringing perpetrators to justice and exonerating the innocent. How biometric data is managed following acquisition continues to raise questions for society as a whole in ensuring that this is undertaken by the police in a lawful, effective, ethical and proportionate way. 

12. This review has therefore focused on the current legislative landscape around retention; the retention policies and procedures currently adopted by Police Scotland; and the current research and evidence base available on biometric retention in the UK, European Union (EU) and beyond. 

Key Findings and Recommendations 

13. The review finds that the current research and evidence base on biometric retention in the UK, EU, and beyond is not sufficiently developed to enable a robust proposal to be made on alternatives to current Scottish law and operational rules. It is considered, at this time, that there is no gold standard of retention that Scotland should seek to follow, as a variety of approaches are being taken by different countries in regard to the retention of biometric data. 

14. The review finds that the law on the retention of biometric data as set out under Section 18 of the 1995 Act complies with human rights and recent legal judgments, based on the available evidence. In terms of future-proofing the legislation, the review found that a more robust evidence base was required in order to determine whether and how Scotland should change its existing legislation for biometric retention. The review does, however, acknowledge that (although not included in the scope of the review) issues around the acquisition and use of biometric data - particularly images - would benefit from further exploration going forward. 

15. The review finds that Police Scotland does not currently have a bespoke policy on the retention of biometric data taken for policing purposes. The retention of biometrics is instead aligned to a separate retention policy for retaining a person's criminal record and retaining productions as part of evidence in criminal proceedings. 

16. The review finds that Police Scotland and the SPA do not hold sufficient management information on biometric data to determine a suitably overarching policy on retention. However, the review team acknowledges that as a result of self-assessment activity relating to the SBC's Code of Practice, Police Scotland and the SPA Forensic Services are currently developing distinct biometric strategies as part of strengthening their internal governance. 

17. The review finds, in principle that the one-month timeframe for the taking of samples and prints under section 19 of the 1995 Act (where previous acquisition proved unsuitable/inadequate) could benefit from further review – subject to further evidence being provided by Police Scotland to support a case for change to that timeframe. 

The following recommendations aim to address these findings:

1. The existing retention periods for the biometric data of non-convicted persons should remain as set out in the 1995 Act. 

2. For now, the current legislative silence in the 1995 Act should be retained with regard to the retention period for the biometric data of convicted persons, subject to the outcome of Police Scotland reviewing its retention policies and the findings of a robust evidence base once this has been assembled by Police Scotland. 

3. Police Scotland to set up a Short Life Working Group to develop an options appraisal for their retention policies for the biometric data of convicted persons, which is evidence-based; observes the need for proportionality and necessity; and complies with the law and relevant legal rulings of the European Court of Human Rights (ECtHR), particularly Article 8 ECHR. The options must expressly prohibit indefinite retention without periodic review. The options should be consulted on, and new policies should be put in place by 31 October 2025. 

4. Police Scotland should, as a matter of routine, collect and retain accurate and robust management information in respect of the retention of biometric data going forward. This information should provide a solid and transparent evidence base to support future assurances that such retention policies are lawful, ethical, effective and proportionate. 

5. Police Scotland should accelerate their current review of retention periods for volunteer data and put changes into place by 31 October 2025. 

6. Police Scotland should collect management information to ascertain whether the one-month timeframe under Section 19 has caused any operational difficulty. If such evidence exists to support the need for change, the SG should consider bringing forward primary legislation, subject to consultation.
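
Recommendations 3 and 4 together amount to a familiar data-governance pattern: no biometric record should sit in effectively indefinite retention without a periodic review, and the management information needed to demonstrate that should be collected routinely. What follows is a minimal sketch of such a periodic-review check; the record fields, the annual review interval, and the flagging rule are illustrative assumptions of mine, not anything specified in the report or drawn from Police Scotland practice.

```python
# Minimal sketch of a periodic-review check for retained biometric records.
# Field names, the 12-month review interval, and the flagging policy are
# illustrative assumptions only, not drawn from the report.

from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

REVIEW_INTERVAL = timedelta(days=365)  # assumed annual review cycle

@dataclass
class BiometricRecord:
    record_id: str
    data_type: str               # e.g. "fingerprint", "DNA profile", "custody image"
    retained_since: date
    last_reviewed: Optional[date]  # None means the record has never been reviewed

def needs_review(record: BiometricRecord, today: date) -> bool:
    """A record is flagged if it has never been reviewed, or if its last
    review is older than the assumed interval, i.e. no record may sit in
    effectively indefinite retention without a periodic check."""
    anchor = record.last_reviewed or record.retained_since
    return today - anchor > REVIEW_INTERVAL

records = [
    BiometricRecord("A-001", "DNA profile", date(2020, 3, 1), None),
    BiometricRecord("A-002", "custody image", date(2023, 6, 15), date(2024, 5, 1)),
]

today = date(2024, 11, 2)
for r in records:
    if needs_review(r, today):
        print(f"{r.record_id} ({r.data_type}): review overdue")
```

The substantive questions (which retention periods are proportionate for convicted and non-convicted persons) are, of course, exactly what the review leaves to the evidence base that Police Scotland is asked to assemble.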

01 November 2024

Robot Rights?

'Debunking robot rights metaphysically, ethically, and legally' by Abeba Birhane, Jelle van Dijk and Frank Pasquale in (2024) 29(4) First Monday comments

For some theorists of technology, the question of AI ethics must extend beyond consideration of what robots and computers may or may not do to persons. Rather, persons may have some ethical and legal duties to robots – up to and including recognizing robots’ “rights”. Gunkel (2018) hailed this Copernican shift in robot ethics debates as “the other question: can and should robots have rights?” Or, to adapt John F. Kennedy’s classic challenge: Ask not what robots can do for you, but rather, what you can, should, or must do for robots. 

In this work we challenge arguments for robot rights on metaphysical, ethical and legal grounds. Metaphysically, we argue that machines are not the kinds of things that could be denied or granted rights. Ethically, we argue that, given machines’ current and potential harms to the most marginalized in society, limits on (rather than rights for) machines should be at the centre of current AI ethics debate. From a legal perspective, the best analogy to robot rights is not human rights but corporate rights, rights which have frequently undermined the democratic electoral process, as well as workers’ and consumers’ rights. The idea of robot rights, we conclude, acts as a smoke screen, allowing theorists to fantasize about benevolently sentient machines, while so much of current AI and robotics is fuelling surveillance capitalism, accelerating environmental destruction, and entrenching injustice and human suffering. 

Building on theories of phenomenology and post-Cartesian approaches to cognitive science, we ground our position in the lived reality of actual humans in an increasingly ubiquitously connected, automated and surveilled society. What we find is the seamless integration of machinic systems into daily lives in the name of convenience and efficiency. The last thing these systems need is legally enforceable “rights” to ensure persons defer to them. Rights are exceptionally powerful legal constructs that, when improvidently granted, short-circuit exactly the type of democratic debate and empirical research on the relative priority of claims to autonomy that are necessary in our increasingly technologized world (Dworkin, 1977). Conversely, the ‘fully autonomous intelligent machine’ is, for the foreseeable future, a sci-fi fantasy, primarily functioning now as a meme masking the environmental costs and human labour which form the backbone of contemporary AI. The robot rights debate further mystifies and obscures these problems. And it paves the way for a normative rationale for permitting powerful entities developing and selling AI to be absolved from accountability and responsibility, once they can program their technology to claim rights to be left alone by the state. 

Existing robotic and AI systems (from large language models (LLMs), “general-purpose AI” and chatbots to humanoid robots) are often portrayed as fully autonomous systems, which is part of the appeal of granting them rights. However, these systems are never fully autonomous, but always human-machine systems that run on exploited human labour and environmental resources. They are socio-technical systems, human through and through — from the source of training data to model development to societal uptake following deployment, they necessarily depend on humans. Yet the “rights” debate too often proceeds from the assumption that the entity in question is somewhat autonomous, or, worse, that it is devoid of exploited human labour and not a tool that disproportionately harms society’s disenfranchised and minoritized. Current realities require instead a reimagining of technologies from the perspectives, needs, and rights of the most marginalized and underserved. This means that any robot rights discussion that overlooks the underpaid and exploited populations that serve as the backbone for “robots” (as well as the environmental cost required to develop AI) risks being disingenuous. As a matter of public policy, the question should not be whether robotic systems deserve rights, but rather: if we grant or deny rights to a robotic system, what consequences and implications arise for the people owning, using, profiting from, developing, and affected by actual robotic and AI systems?  

The time has come to change the narrative, from “robot rights” to the duties and responsibilities of the corporations and powerful persons now profiting from sociotechnical systems (including, but not limited to, robots). Damages, harm and suffering have been repeatedly documented as a result of the integration of AI systems into the social world. Rather than speculating about the desert of hypothetical machines, the far more urgent conversation concerns robots and AI as concrete artifacts built by powerful corporations, further invading our private, public, and political space, and perpetuating injustice. A purely intellectual and theoretical debate obscures the real threat: that many of the actual robotic and AI systems that powerful corporations are building are harming people both directly and indirectly, and that a premature and speculative robot rights discourse risks even further unravelling our frail systems of accountability for technological harm. 

The rise of “gun owners’ rights” in the U.S. is but one of many prefigurative phenomena that should lead to deep and abiding caution about fetishization of technology via rights claims. U.S. gun owners’ emotional attachments to their weapons are often intense. The U.S. has more gun violence than any other developed country, and endures frequent and bloody mass shootings. Nevertheless, the U.S. Supreme Court has advanced a strained interpretation of the U.S. Constitution’s Second Amendment to promote gun owners’ rights above public safety. We should be very careful about robot rights discourse, lest similar developments empower judiciaries to immunize exploitive, harmful, and otherwise socially destructive technologies from necessary regulations and accountability. 

The rest of the paper is structured as follows: Section 2 sets the scene by outlining the robot rights debate. In Section 3, we delve into what exactly “robot” entails in this debate. Section 4 clarifies some of the core underlying errors committed by robot rights advocates surrounding persons, human cognition, and machines and presents our arguments, followed by a brief overview of embodied and enactive perspectives on cognition in Section 5. In Section 6, we examine posthumanism, a core philosophical approach of proponents of robot rights, and illustrate the problems surrounding it. Section 7 outlines the legal arguments, followed by Section 8, which illustrates how rights talk “cashes out” in actionable claims in courts of law, demonstrating the real danger of robot rights talk. We conclude in Section 9 by emphasizing the enduring irresponsibility of robot/AI rights talk.

In 'Taking AI Welfare Seriously' Robert Long, Patrick Butlin, Jacqueline Harding, Jeff Sebo, Jonathan Birch, Kathleen Finlinson, Kyle Fish, Toni Sims and David Chalmers argue that 

 there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future. That means that the prospect of AI welfare and moral patienthood — of AI systems with their own interests and moral significance — is no longer an issue only for sci-fi or the distant future. It is an issue for the near future, and AI companies and other actors have a responsibility to start taking it seriously. We also recommend three early steps that AI companies and other actors can take: They can (1) acknowledge that AI welfare is an important and difficult issue (and ensure that language model outputs do the same), (2) start assessing AI systems for evidence of consciousness and robust agency, and (3) prepare policies and procedures for treating AI systems with an appropriate level of moral concern. To be clear, our argument in this report is not that AI systems definitely are — or will be — conscious, robustly agentic, or otherwise morally significant. Instead, our argument is that there is substantial uncertainty about these possibilities, and so we need to improve our understanding of AI welfare and our ability to make wise decisions about this issue. Otherwise there is a significant risk that we will mishandle decisions about AI welfare, mistakenly harming AI systems that matter morally and/or mistakenly caring for AI systems that do not.

The authors refer to 'A transitional moment for AI welfare' 

Plausible philosophical and scientific theories, which accord with mainstream expert views in the relevant fields, have striking implications for this issue, for which we are not adequately prepared. We need to take steps toward improving our understanding of AI welfare and making wise decisions moving forward. 

... For most of the past decade, AI companies appeared to mostly treat AI welfare as either an imaginary problem or, at best, as a problem only for the far future. As a result, there appeared to be little or no acknowledgment that AI welfare is an important and difficult issue; little or no effort to understand the science and philosophy of AI welfare; little or no effort to develop policies and procedures for mitigating welfare risks for AI systems if and when the time comes; little or no effort to navigate a social and political context in which many people have mixed views about AI welfare; and little or no effort to seek input from experts or the general public on any of these issues. 

Recently, however, some AI companies have started to acknowledge that AI welfare might emerge soon, and thus merits consideration today. For example, Sam Bowman, an AI safety research lead at Anthropic, recently argued (in a personal capacity) that Anthropic needs to “lay the groundwork for AI welfare commitments,” and to begin to “build out a defensible initial understanding of our situation, implement low-hanging-fruit interventions that seem robustly good, and cautiously try out formal policies to protect any interests that warrant protecting.” Google recently announced that they are seeking a research scientist to work on “cutting-edge societal questions around machine cognition, consciousness and multi-agent systems”. High-ranking members of other companies have expressed concerns as well. 

This growing recognition at AI companies that AI welfare is a credible and legitimate issue reflects a similar transitional moment taking place in the research community. Many experts now believe that AI welfare and moral significance is not only possible in principle, but also a realistic possibility in the near future. And even researchers who are skeptical of AI welfare and moral significance in the near term advocate for caution; for example, leading neuroscientist and consciousness researcher Anil Seth writes, “While some researchers suggest that conscious AI is close at hand, others, including me, believe it remains far away and might not be possible at all. But even if unlikely, it is unwise to dismiss the possibility altogether [emphasis ours].” 

Our aim in this report is to provide context and guidance for this transitional moment. To improve our understanding and decision-making regarding AI welfare, we need more precise empirical frameworks for evaluating AI systems for consciousness, robust agency, and other welfare-relevant features. We also need more precise normative frameworks for interacting with potentially morally significant AI systems and for navigating disagreement and uncertainty about these issues as a society. This report outlines several steps that AI companies can take today in order to start preparing for the possible emergence of morally significant AI systems in the near future, as a precautionary measure.  

We begin in section 1 by explaining why AI welfare is an important and difficult issue. Leaders in this space have a responsibility to understand this issue as best they can, because errors in either direction — either over-attributing or under-attributing moral significance to AI systems — could lead to grave harm. However, understanding this issue will be challenging, since forecasting the mental capacities and moral significance of near-future AI systems requires improving our understanding of topics like the nature of consciousness, the nature of morality, and the future of AI. It also requires overcoming well-known human biases, including a tendency to both over-attribute and under-attribute capacities like consciousness to nonhuman minds. 

In section 2, we argue that given the best information and arguments currently available, there is a realistic possibility of morally significant AI in the near future. We focus on two mental capacities that plausibly suffice for moral significance: consciousness and robust agency. In each case, we argue that caution and humility require allowing for a realistic possibility that (1) this capacity suffices for moral significance and (2) there are certain computations that (2a) suffice for this capacity and (2b) will exist in near-future AI systems. Thus, while there might not be certainty about these issues in either direction, there is a risk of morally significant AI in the near future, and AI companies have a responsibility to take this risk seriously now. 

We argue that, according to the best evidence currently available, there is a realistic possibility that some AI systems will be welfare subjects and moral patients in the near future. 

We close, in section 3, by presenting three procedural steps that AI companies can take today, in order to start taking AI welfare risks seriously. Specifically, AI companies can (1) acknowledge that AI welfare is an issue, (2) take steps to assess AI systems for indicators of consciousness, robust agency, and other potentially morally significant capacities, and (3) take steps to prepare policies and procedures that will allow them to treat AI systems with an appropriate level of moral concern in the future. In each case we also present principles and potential templates for doing this work, emphasizing the importance of developing ecumenical, pluralistic decision procedures that draw from expert and public input. 

Recommendations. 

We recommend that AI companies take these minimal first steps towards taking AI welfare seriously. 

Acknowledge. Acknowledge that AI welfare is an important and difficult issue, and that there is a realistic, non-negligible chance that some AI systems will be welfare subjects and moral patients in the near future. That means taking AI welfare seriously in any relevant internal or external statements you might make. It means ensuring that language model outputs take the issue seriously as well. 

Assess. Develop a framework for estimating the probability that particular AI systems are welfare subjects and moral patients, and that particular policies are good or bad for them. We have templates that we can use as sources of inspiration, including the “marker method” that we use to make estimates about nonhuman animals. We can consider these templates when developing a probabilistic, pluralistic method for assessing AI systems. 

Prepare. Develop policies and procedures that will allow AI companies to treat potentially morally significant AI systems with an appropriate level of moral concern. We have many templates to consider, including AI safety frameworks, research ethics frameworks, and forums for expert and public input in policy decisions. These frameworks can be sources of inspiration — and, in some cases, of cautionary tales. 

These steps are necessary but far from sufficient. AI companies and other actors have a responsibility to start considering and mitigating AI welfare risks. 

Before we begin, it will help to emphasize five important features of our discussion. First, our discussion will concern whether near-future AI systems might be welfare subjects and moral patients. An entity is a moral patient when that entity morally matters for its own sake, and an entity is a welfare subject when that entity has morally significant interests and, relatedly, is capable of being benefited (made better off) and harmed (made worse off). Being a welfare subject makes you a moral patient — when an entity can be harmed, we have a responsibility to (at least) avoid harming that entity unnecessarily. But there may be other ways of being a moral patient; our approach is compatible with many different perspectives on these issues. 

Second, our discussion often focuses on large language models (LLMs) as a central case study for the sake of simplicity and specificity, and because we expect that LLMs — as well as broader systems that include LLMs, such as language agents — will continue to be a focal point in public debates regarding AI welfare. But while some of our recommendations are specific to such systems (primarily, our recommendations regarding how AI companies should train these systems to discuss their own potential moral significance), our three general procedural recommendations (acknowledge, assess, and prepare) apply for any AI system whose architecture is complex enough to at least potentially have features associated with consciousness or robust agency. 

Third, our discussion often focuses on initial steps that AI companies can take to address these issues. These recommendations are incomplete in two key respects. First, AI companies are not the only actors with a responsibility to take AI welfare seriously. Many other actors have this responsibility too, including researchers, policymakers, and the general public. Second, these steps are not the only steps that AI companies have a responsibility to take. They are the minimum necessary first steps for taking this issue seriously. Still, we emphasize these steps in this report because by taking them now, AI companies can help lay the groundwork for further steps — at AI companies and elsewhere — that might be sufficient. 

Fourth, our aim in what follows is not to argue that AI systems will definitely be welfare subjects or moral patients in the near future. Instead, our aim is to argue that given current evidence, there is a realistic possibility that AI systems will have these properties in the near future. Thus, our analysis is not an expression of anything like consensus or certainty about these issues. On the contrary, it is an expression of caution and humility in the face of what we can expect will be substantial ongoing disagreement and uncertainty. In our view, this kind of caution and humility is the only stance that one can responsibly take about this issue at this stage. It is also all that we need to support our conclusions and recommendations here. 

Finally, and relatedly, our aim in what follows is not to argue for any particular view about how humans should interact with AI systems in the event that they do become welfare subjects and moral patients. We would need to examine many further issues to make progress on this topic, including: how much AI systems matter, what counts as good or bad for them, what humans and AI systems owe each other, and how AI welfare interacts with AI safety and other important issues. These issues are all important and difficult as well, and we intend to examine them in upcoming work. However, we do not take a stand on any of these issues in this report, nor does one need to take a stand on any of them to accept our conclusions or recommendations here.
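
The “marker method” mentioned in the Assess recommendation lends itself to a simple illustration. What follows is a minimal sketch under stated assumptions: the markers, weights, and scores are entirely hypothetical, and the weighted-average aggregation is just one rule among the many a “probabilistic, pluralistic” framework could adopt; the report itself prescribes none of these specifics.

```python
# Minimal sketch of a "marker method" style assessment.
# All markers, weights, and scores below are hypothetical illustrations;
# the report prescribes no particular markers or aggregation rule.

from dataclasses import dataclass

@dataclass
class Marker:
    name: str      # feature taken as evidence of consciousness or robust agency
    weight: float  # how strongly the marker bears on the assessment (0..1)
    score: float   # assessor's credence that the system exhibits it (0..1)

def rough_credence(markers: list[Marker]) -> float:
    """Weighted average of marker scores, read as a rough credence that
    the system is a welfare subject. One aggregation rule among many."""
    total_weight = sum(m.weight for m in markers)
    if total_weight == 0:
        return 0.0
    return sum(m.weight * m.score for m in markers) / total_weight

# Hypothetical assessment of a hypothetical system:
markers = [
    Marker("global broadcast of representations", weight=0.8, score=0.3),
    Marker("unified belief-desire-intention profile", weight=0.6, score=0.4),
    Marker("flexible, goal-directed planning", weight=0.7, score=0.5),
]

print(f"rough credence: {rough_credence(markers):.2f}")
```

A real framework would also have to aggregate across rival theories of consciousness and agency rather than within a single list of markers, which is precisely why the authors stress pluralism and expert and public input.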

Activism and NZ Administrative Law

Hammond J in Lab Tests Auckland Ltd v Auckland District Health Board [2008] NZCA 385; [2009] 1 NZLR 776 states 

 [348] I agree with the result of this appeal as set out in the judgment of Arnold J and, in general, with the reasoning by which that result was arrived at. 

[349] Because this is an important administrative law case, I propose to add some broad comments on the proper scope of judicial review in a case such as this. I emphasise that they are not intended to detract from the actual resolution of this case as set out in the judgment of Arnold J, to which the entire panel has subscribed. 

[350] As a matter of convenience, I have grouped my comments under four heads, which might be called four “P’s”: the point of entry of judicial review; the purpose of judicial review; the principles of judicial review; and the place of judicial review in New Zealand today. I will then add some brief comments on this particular case. 

The point of entry of judicial review 

[351] The point at which judicial review may be resorted to is a matter of distinct importance. While, in principle, any decision of a public nature is potentially reviewable, there seems to be a growing misconception that just about any decision is amenable to judicial review. However, there are some “no-go” areas, as well as “twilight” contexts which have occasioned real, and still largely unresolved, arguments as to the appropriateness of making judicial review available in those areas. 

[352] One of these twilight areas is public sector contracting, where governmental bodies provide or arrange for the provision of services to the public by means of contractual relations with private sector enterprises. “Government by contract” has had major ramifications for administrative law theory and practice as it has become the dominant paradigm for the provision of public services over the last quarter of a century. See Harlow “Law and New Public Management: Ships that Pass in the Night” in Gordon (ed) Judicial Review in the New Millennium (2003) at 5 – 18; McLean “Contracting in the Corporatised and Privatised Environment” (1996) 7 PLR 223; and Allars “Administrative Law, Government Contracts and the Level Playing Field” [1989] UNSWLawJl 7; (1989) 12 UNSWLJ 114.

[353] Leaving to one side any applicable statutory provisions, the problem for the law, stated in the simplest terms, is whether to apply private law principles, public law principles, or some admixture of the two. See, for example, Oliver Common Values and the Public-Private Divide (1999) and Taggart “‘The Peculiarities of the English’: Resisting the Public/Private Law Distinction” in Craig and Rawlings (eds) Law and Administration in Europe: Essays in Honour of Carol Harlow (2003) 107 at 120. Some commentators have suggested that the courts should develop a stand-alone set of “government contract” principles which are to be applied. See, for instance, Davies Accountability: A Public Law Analysis of Government by Contract (2001). For a comparative common law and continental perspective, see Auby “Comparative Approaches to the Rise of Contract in the Public Sphere” (2007) PL 40. 

[354] There has been real ambivalence on the part of both commentators and courts on this issue. Professor Freedland, a prominent commentator on “government by contract”, started out by arguing for the application of public law principles: “Government by Contract and Public Law” (1994) PL 86. Yet more recently, Professor Freedland has oriented his overall approach more firmly in the direction of private law (“Government by Contract Re-examined – Some Functional Issues” in Craig and Rawlings (eds) Law and Administration in Europe: Essays in Honour of Carol Harlow (2003) 123 at 133): My real reason for sketching out an area of public/private enterprise law, which is not specially oriented towards public law, is not so much the view that ‘government by contract’ should be regulated by a body of law which is not specially oriented towards public law, but rather a prediction that English law will on the whole tend to generate a mixed but private law-based body of law for that purpose. Indeed, Professor Freedland now goes so far as to suggest that (at 134): ... we might expect that the techniques of private law in the areas of contract, tort, and restraint of trade will be the tools mainly used to address issues arising from the tension or conflict between the public contracting role and the public/private market-making function, and that our primary concern should be to ensure that these private law-based instruments are tuned to register the sound of public interest. 

[355] A contrary view can be found in Collins Regulating Contracts (1999), which argues that markets do not provide an appropriate mechanism for distributing public services and questions the efficacy of contract law principles in this area. For a critical discussion of Collins’ analysis, see Cane “Administrative Law as Regulation” in Parker and others (eds) Regulating Law (2004) 206 at 210 – 213. 

[356] Unsurprisingly, courts have had the same sort of difficulties as to what approach the law should adopt. As a general proposition, which I can only sketch here, the early cases around the British Commonwealth and in New Zealand did not favour judicial review. But some courts then began to adopt a stance that judicial review is available if there is a sufficient “public” component. The high water-mark of that approach is R v Panel on Take-overs and Mergers, ex parte Datafin plc [1986] EWCA Civ 8; [1987] QB 815 (CA) which evidenced a shift from a “source of the power” test for reviewability to a “nature of the function” approach: Hunt “Constitutionalism and the Contractualisation of Government in the United Kingdom” in Taggart (ed) The Province of Administrative Law (1997) 21 at 29. In the “government by contract” context, that kind of thinking rests on a market contract paradigm which somehow becomes sufficiently suffused with public characteristics, or has a sufficient impact on the public, so as to render events attendant on it reviewable. 

[357] With respect, this analysis is much too simplistic. The stereotype of the market contract involves a purchaser going into a market, which offers many opportunities (or service providers) for the transaction in question. That purchaser then has the option of purchasing the services in question either in a single transaction or a number of distinct transactions. 

[358] There are, however, two characteristics which differentiate “government by contract” from the market orthodoxy. The first is that government contracting arrangements are functionally a form of regulation. (This conclusion is shared by Walsh and others Contracting for Change: Contracts in Health, Social Care, and Other Local Government Services (1997).) The second is that these kinds of agreements are a classic example of what I have referred to elsewhere as “relational” contracts: Dymocks Franchise Systems (NSW) Pty Ltd v Bilgola Enterprises Ltd (1999) 8 TCLR 612 at [93] (HC). The contracting parties routinely provide that the contract will run for some time, involving ongoing evolutionary elements, and obligations of good faith and the like. In short, they are not closed market contracts. Moreover, the government has a powerful interest in ensuring that goods or services are supplied in accordance with a contract. If a contractor defaults, the continuity of essential public services may be jeopardised. Thus, these contracts involve what we could loosely call wider public interests. 

[359] The characteristics I have noted might suggest that, as with any other government activity, government contracting should ultimately take place within a framework of public law precepts, modified to the particular contractual and statutory context, but nonetheless underpinned by constitutional values such as respect for the rule of law and democratic principles. But the pull in favour of private law still remains strong. 

[360] My purpose in making these general points is not to attempt to resolve the present case in an abstract way. Each case will have its own complexities, as Arnold J has convincingly demonstrated, and the statutory and contractual context will be of the greatest importance. My concern is that I would not want it to be thought in other cases that, on the basis of what has happened in the case in front of us at this time, counsel can automatically assume reviewability in this subject area. 

[361] In this case, we are faced with a somewhat unusual position. Typically a case of the kind which is before us would have attracted strenuous debate as to its amenability to judicial review in the first place. Here, both Diagnostic Medlab Limited (DML) and Lab Tests Auckland Limited (Lab Tests) have accepted that judicial review is appropriate. But they are poles apart as to why and how reviewability should come into play. For Lab Tests, Mr Curry takes a very narrow line. He contends that the wider arguments as to reviewability do not really matter very much in this case because the only possible ground for review is the admittedly narrow statement of Lord Templeman, for the Judicial Committee of the Privy Council, in Mercury Energy Ltd v Electricity Corporation of NZ Ltd [1994] 2 NZLR 385 at 391 that “[i]t does not seem likely that a decision by a [SOE] to enter into or determine a commercial contract to supply goods or services will ever be the subject of judicial review in the absence of fraud, corruption or bad faith.” Mr Hodder, on the other hand, doubtless delighted to have got over the preliminary hurdle of reviewability without opposition, has advanced a far-reaching basis for judicial review: namely an ability in the High Court to constrain, at least in some respects, decisions “tainted by a serious lack of integrity, i.e., fraud, corruption, bad faith or any other material departure from accepted public sector ethical standards which requires judicial intervention” (emphasis added). I will enlarge on what Mr Hodder meant by that later in this judgment. 

The purpose of judicial review 

[362] Broadly, there are two schools of thought about the Judge’s task when engaged in judicial review. 

[363] The traditional stance is that the Judge’s predominant task is to ensure that administrative authorities remain within the powers granted to them by law. Whatever the Court may do by way of judicial intervention, that intervention must be linked, in one way or another, to the legal powers of the relevant public authority. This orthodox approach to administrative law has been defended, most magisterially, by Sir William Wade: Wade and Forsyth Administrative Law (9ed 2004) at 4-5. There can hardly be any argument that the legality principle is the first and most important limb of judicial review. While cases decided under the legality rubric routinely throw up difficult issues of statutory construction, that is nevertheless a “comfortable” task for a court, which can set about it without any disconcerting suggestion that the court is outside its proper bailiwick. 

[364] On this traditional approach, the only long stop for challenging the decision itself, as opposed to what led to it, was so-called Wednesbury review for unreasonableness: Associated Provincial Picture Houses Ltd v Wednesbury Corporation [1947] EWCA Civ 1; [1948] 1 KB 223 (CA). The primary decision is that of the first instance decision maker and courts have a highly constrained ability to interfere with respect to the decision actually taken. 

[365] Wednesbury review is logically circular, distinctly indeterminate and functions as a “cloak” which, on the one hand, has the potential to seduce lawyers and courts into the merits rather than the legality of decisions and, on the other hand, can lead to abject caution. See Le Sueur “The Rise and Ruin of Unreasonableness?” (2005) JR 32 at 32 and, more generally, Taggart “Reinventing Administrative Law” in Bamforth and Leyland (eds) Public Law in a Multi-layered Constitution (2003) at 311 – 335. Famously, Lord Cooke of Thorndon in R (Daly) v Secretary of State for the Home Department [2001] UKHL 26; [2001] 2 AC 532 (HL) regarded Wednesbury as (at 549):

... an unfortunately retrogressive decision in English administrative law, in so far as it suggested that there are degrees of unreasonableness and that only a very extreme degree can bring an administrative decision within the legitimate scope of judicial invalidation. The depth of judicial review and the deference due to administrative discretion vary with the subject matter. It may well be, however, that the law can never be satisfied in any administrative field merely by a finding that the decision under review is not capricious or absurd.

[366] Instances of successful intervention on the basis of Wednesbury unreasonableness appear to be much more common in the United Kingdom than in New Zealand. In “The Rise and Ruin of Unreasonableness?” (above at [365]), Le Sueur observes that close to half of the Wednesbury unreasonableness/irrationality cases (some 40 cases) heard between January 2000 and July 2003 in the UK succeeded on those grounds (at 44 – 51). Even then, Lord Woolf has suggested extra-judicially that judicial review is still excessively executive-friendly in the UK: “Judicial Review – The Tensions Between the Executive and the Judiciary” in Campbell-Holt (ed) The Pursuit of Justice (2008) 131 at 142. In New Zealand, success under this head is a distinct rarity. I am reminded of an observation by Bauer CJ in the United States of America: to attract review, the decision must “strike us [as] wrong with the force of a five-week old dead, unrefrigerated fish”: Parts and Electric Motors Inc v Sterling Electrical Inc 866 F2d 228 at 233 (7th Cir 1988), cert denied 493 US 847 (1989).

[367] The second, and more modern, school of thought challenges the traditional orthodoxy. At heart it holds that High Court judges have always had, and still have, an independent capacity to intervene by way of judicial review to restrain the abuse of power and to secure good administration. Protagonists of this school of thought include, amongst commentators, Professors Oliver and Craig in the United Kingdom, and Professor Cane in Australia. Amongst the senior judiciary its adherents include Sir John Laws and Sir Stephen Sedley. At rock bottom the broad concern is to identify what might be termed “core public law values” and secure better governance. 

[368] Again, these two schools of thought are reflected in the position of the parties before us: Mr Curry stands firmly on what I have called the “traditional” orthodoxy while Mr Hodder on this occasion advances a thoroughly “modernist” argument. 

[369] As a matter of fairness, to exercise a putative right of reply for Mr Curry, there are a number of decisions in courts of the highest authority (particularly the High Court of Australia) to the effect that judicial review should not allow courts to impose ideas about “good administration” or “good governance” on the executive or other governmental bodies. Historically, or so the argument runs, judicial review involved a “power grab” by the courts which is bearable and even beneficial, so long as it is kept within its traditional bounds and goes no further than it already has. 

[370] In light of this division, it is obvious that one of the fundamental difficulties afflicting judicial review is that there is widespread disagreement about the fundamental task of the reviewing judge. It is true that all the basic building blocks of the law attract some measure of disagreement about “purposes”, but none has the difficulties, or the “edge”, that judicial review attracts, given its impact on government and governance. And when fundamental disputes about “purpose” are leavened with confusion as to the principles on which courts will intervene (often called the “grounds for review”), the state of the law is rendered distinctly problematic.

The principles of judicial review 

[371] The Chief Justice of New Zealand, writing extra-judicially, has suggested that “the Courts are largely adrift” in dealing with cases where the decision maker has (to put it broadly) got the decision wrong: Elias “The Impact of International Conventions on Domestic Law” (Address to the Conference of International Association of Refugee Law Judges, March 2000) at 8. 

[372] The nautical metaphor can be pressed further. William Prosser, the doyen of American torts scholars, once recounted something said by a West Coast North American Indian sitting on a rock and looking out to sea:

Lighthouse, him no good for fog. Lighthouse, him whistle, him blow, him ring bell, him flash light, him raise hell; but fog come in just the same.

Prosser went on:

That quotation has been haunting me. I have the feeling that it has some application to something connected with the law, but I do not know exactly what. I have shown it to a number of lawyers, and some of them have told me that it summarizes for them a lifetime of argument before the courts. Some of the judges seem to think that it describes the thankless task of writing opinions for the bar to read. To some morose and melancholy attorneys it calls at once to mind their relations with their clients. One man was sure that it must have something to do with the income-tax regulations, although he was by no means clear as to precisely how. Among only one group have I found general and enthusiastic agreement. I have yet to show that quotation to any professor of law who did not immediately say, with a lofty disregard of the laws of English grammar, “That’s me!”

See “Lighthouse No Good” (1948) 1 J Leg Ed 257 at 257.

[373] The public law practitioner could also say: “That’s me!” The reason is that judicial review is a critically important beacon and guard against abuses of power. But it does presently stand in something of a fog of mushy dogma. And lighthouses do not work by themselves. They function effectively only in concert with complete and precise charts. It is a pressing task for the courts to ameliorate the problem of fog in judicial review. 

[374] There is one possibility I can get out of the way at the outset. Every so often a senior judge attempts to formulate a unified theory of judicial review, by reducing everything to one theorem. 

[375] One example was the extra-judicial suggestion by Sir Robin Cooke (as he then was) that “it might not be an altogether absurd over-simplification to say that the day might come when the whole of administrative law could be summed up in the proposition that the administrator must act fairly and reasonably”: “The Struggle for Simplicity in Administrative Law” in Taggart (ed) Judicial Review of Administrative Action in the 1980s: Problems and Prospects (1986) 1 at 5. 

[376] More recently, in “Administrative law in Australia: Themes and values” in Groves and Lee (eds) Australian Administrative Law: Fundamentals, Principles and Doctrines (2007) 15, the newly appointed Chief Justice of Australia, Robert French, has suggested that (at 23):

... [A]dministrative justice in the sense administered by the courts may be identified as follows:
Lawfulness – that official decisions are authorised by statute, prerogative or constitution.
Good faith – that official decisions are made honestly and conscientiously.
Rationality – that official decisions comply with the logical framework created by the grant of power under which they are made.
Fairness – that official decisions are reached fairly, that is impartially in fact and appearance and with a proper opportunity to persons affected to be heard.

The learned Chief Justice explicitly gives his “grand theory” objective (and background in physics) away, when he goes on to note that “the identification of these elements of administrative justice is a little like the identification of ‘fundamental’ particles in physics” (at 24).

[377] Even senior appellate courts are not immune from this sort of approach. Recently, the Supreme Court of Canada opted for a dual standard of review, “correctness” and “reasonableness”, which one suspects will bring its own very real share of difficulties: Dunsmuir v New Brunswick 2008 SCC 9 at [34]. 

[378] Both practitioners and representatives of governmental bodies will rightly state the obvious: that grand theorem approaches fail to drill down far enough to enable respectable advice to be given to parties who are supposed to abide by the law. In short, better charts are needed, without simply exchanging one shibboleth for another. 

[379] Another concern is that things like spectrums of response and “deference” in this subject area are ultimately quite unhelpful, and even unworkable. To say that something rests somewhere on a “continuum” is a conclusion, not a principle; it does not tell us how that point in a spectrum is reached. And courts do not defer to anything or anybody: the job of courts is to decide what is lawful and what is not. 

[380] As far as the grounds of review are concerned, the difficulty stems partly from the lack of an agreed classification or taxonomy, accompanied by properly developed substantive principles as to when a court will intervene by way of judicial review, particularly in “merits” cases. Then too, there will always be problems of application in the law, but when the underlying principles are obfuscated, there is cause for real concern. The costs of litigation are extremely high in this area, and “uncertainty” is, I think, a major contributing factor to those costs. This in turn restricts access to the courts, which is most undesirable in judicial review. 

[381] Perhaps the best way to understand the concerns which judicial review endeavours to reach is to consider the various grounds in functional rather than doctrinal terms. One good reason for a functional rather than doctrinal analysis is that it helps to transcend unhelpful semantic or terminological quibbles. 

[382] First, there are procedural grounds of review. These focus on the conduct of the decision maker and include procedural fairness requirements, fair hearing rules, and rules against bias. These sorts of rules are well enough settled.

[383] Secondly, there may be concern over the decision maker’s reasoning processes. This is where the vast majority of judicial review cases fit, given that this category includes things like misappreciation of the law, unauthorised delegation, and the perennial problem of control of the exercise of a discretion. All of this is the stuff of legality and everyday lawyering and, in fairness, the principles “fog” is not at its densest here.

[384] Thirdly, there are grounds which in one sense or another relate to the decision itself, rather than the procedures adopted or the reasoning process. This is easily the most contentious functionalist category of the grounds for judicial review. The argument here is that there should be substantive grounds of review, even where a decision maker has assiduously followed all required procedures and has made no errors of reasoning. But here the fog is presently a “pea souper”. 

[385] One thing should be said at the outset. Every so often some commentator suggests that “activist” judges are somehow intent on taking over and making “merits” decisions for themselves. However, in my experience, judges do not like making merit decisions. They are relieved when “government” makes a clear or at least workable decision. Knowing – or purporting to know – what is best for somebody or something else is a dangerous enterprise; judges, of all people, see in their daily work instances of ill or insufficiently considered actions which can cause great difficulties in the lives of others. And they appreciate that judicial review is not an appeal: it is a “review” of what has occurred, but with an emphasis upon principles which ought, in terms of Prosser’s fog metaphor, to be respectably well defined. 

[386] If, therefore, judges are going to approach the merits of a decision, the analysis has to be undergirded by something other than concern about the decision as such. That is, there has to be something or some things in a sense standing “outside” the particular decision which rightly attracts judicial concern. The most obvious candidate is the concept of abuse of power, which lies at the very heart of administrative law. See Sedley LJ in R (Bancoult) v Secretary of State for Foreign and Commonwealth Affairs (No 2) [2007] EWCA Civ 498; [2008] QB 365 at [60] (CA): “[Abuse of power] is what the courts of public law are there to identify and, in proper cases, to correct ...”. The French would say that abuse of power is a stand-alone type of illegality: see Auby “The Abuse of Power in French Administrative Law” (1970) 18 Am. J. Comp. L. 549. The term “abuse of power” should not be understood as necessarily pejorative: to act outside one’s powers, in genuine error, is still an abuse of power and the traditional “four-corners” doctrine reels in the large majority of abuses of power. The central issue is, what beyond that orthodoxy ought to be addressed, and how? 

[387] There is here a preliminary issue which has vexed pre-eminent social and political philosophers worldwide, and is an issue for lawyers: the very nature of power. There are broadly two possible responses. 

[388] The continental school tends to see power as a thing in itself. De Tocqueville suggested that it is men who build up institutions and enslave themselves in a universally tragic way. M. de Jouvenel in Du Pouvoir treated power as if it were a morbid pathology, rather like a terrible god with deterministic outcomes. The notion of power as a thing in itself can be seen in the writings of Kant and Nietzsche.

[389] The English school is more pragmatic: it does not see power as a thing in itself. Lord Radcliffe of Werneth put it wonderfully well in Lecture VII of his 1951 Reith Lectures (published as The Problem of Power (1952) at 99 – 100):

Take away the abstract idea and there remains nothing but the conduct of men, human beings, who occupy in their turn the seats of authority. It does not seem to me that there is only one possible attitude towards authority or one inevitable set of rules that govern its exercise. Attitudes change with the social conditions which surround authority and, as we have seen, men in their turn exalt and denigrate power under the impulse of their general attitude towards life itself. You can see it your own way, so long as you know what that way is. It reminds me of an old saying: ‘Take what you want’, said God; ‘take it, and pay for it.’

[390] If Lord Radcliffe is right, and I think he is, it would follow that it ought to be possible to do something practical about the problem of abuse of power through the development of distinct substantive principles in relation to merit decisions. 

[391] It is not possible in a judgment to describe what a full scheme of principles based on that fundamental objective might look like. But the law is already moving slowly in the direction of building on that concept. For instance, one area which is now relatively well recognised by the Anglo-New Zealand judiciary is that, in the area of human rights, an otherwise lawful response must still be a proportionate one. 

[392] Another possible doctrine is that of substantive unfairness, to be deployed in situations where a result is arrived at which is within the powers of the particular authority but which is so grossly unfair that it ought to be impugned. That is what I effectively held in NZFP Pulp and Paper Ltd v Thames Valley Electric Power Board HC HAM CP35/93 1 November 1993. Although that approach was not favoured by this Court on appeal (see Thames Valley Electric Power Board v NZFP Pulp and Paper Ltd [1994] 2 NZLR 641), in Pharmaceutical Management Agency Ltd v Roussel Uclaf Australia Pty Ltd [1998] NZAR 58, this Court held that (at 66):

The concept of substantive fairness ... also requires further consideration. The law in this country applicable to situations of that kind will no doubt be developed on a case by case basis.

[393] In this instance, I did not understand Mr Hodder to be arguing for an incremental gloss on the well-known Mercury Energy “fraud, corruption or bad faith” test. His argument, at least as I apprehended it before us, was that there should be a distinct substantive principle on which the merits of a decision can be attacked. Mr Hodder put it this way in oral argument:

Public powers and resources under our system are to be used in the public interest, and they are misused or abused if they are used and diverted to private advantage, obviously, apart from statutory authorised grants or where there is contract for mutual benefit. But that’s the essence of the responsibility of public power. It has to be used in the public interest not for private interests.

[394] I will deal with this proposition of a “no-conflict” principle in government contracts later in this judgment. I mention it at this point only because, as I apprehended it, this is where Mr Hodder’s principle would fit in the sort of taxonomy I have been discussing. It must be at least implicit, if not explicit, in Mr Hodder’s proposition that this is a substantive principle which we need in New Zealand today. That brings me to the next subset of comment. 

Place 

[395] Francis Cooke QC has recently noted that in New Zealand administrative law, “we still take our lead from the United Kingdom”: “Relief at Last” in Administrative Law (New Zealand Law Society Intensive, August 2008) 31 at 31. Whilst a respectful eye will doubtless continue to be cast on judicial review developments in England, I agree that New Zealand has to develop its own solutions in terms of its own needs and aspirations. There are some difficulties which ought to be made explicit here. 

[396] One is the question of “opportunity”. Professor Burrows has remarked that case-made law “scores its runs in singles”. That is a real difficulty in a small country like New Zealand, with only an irregular supply of cases (“the bowling”), and consequently the run accumulation technique becomes highly problematic. Commentators in New Zealand routinely fail to focus sufficiently on the “supply” side of bowling from which a respectable innings may be fashioned. It is difficult for senior judges to work at the problem systematically. There is instead an intermittent and somewhat mad-headed chase after the “latest case” on the part of the bar and commentators, and seminars sprout up as if there has been a seismic shift when one case is decided. 

[397] A second and related problem is, if I may resort to Willis Airey’s splendid phrase of a “Small Democracy”, that single judicial review decisions in New Zealand have a disproportionate impact. In recent years in the United Kingdom, first Lord Woolf and then Lord Bingham have had to deal with the tensions which arise between the judiciary and the executive when the judiciary exercises a firmer hand. The English judiciary has survived, and many may think it has undertaken its task admirably across a real run of cases. But quite how things would go in a much smaller and more visible “Small Democracy”, where a pebble in a pond has the effect of a boulder, is more problematic.

[398] Thirdly, we should not overlook the problem that, if the goal of administrative law is to be defined partly in terms of somewhat broader objectives – such as, for instance, the promotion of good governance – one would normally expect close regard to be paid to empirical evidence that administrative law can actually achieve that end. Regrettably, there is little in the way of empirical evidence in the New Zealand context as to whether administrative law actually works as a behaviour modification mechanism in government. Such empirical evidence as there is in other jurisdictions tends to suggest that administrative law is likely to be able to make only a modest contribution to the promotion of external goals. If that is right, it may suggest that such substantive doctrines as are developed for merits review should go only to what might be termed “true excesses”.