05 November 2024

Jurisdiction

'Māori Rejections of the State’s Criminal Jurisdiction Over Māori in Aotearoa New Zealand’s Courts' by Fleur Te Aho and Julia Tolmie in (2023) 30 New Zealand Universities Law Review comments 

A significant and little-known protest is happening in Aotearoa New Zealand's criminal courts. For years, on an almost daily basis, Māori defendants have been rejecting the state's exercise of criminal jurisdiction over them - claims that the courts have repeatedly dismissed. In this article, we examine the extent and nature of this jurisdictional protest in the criminal courts and offer some initial reflections on the implications of the protest and the courts' response to date. We suggest that this protest is notable both for its scale and, at times, its sophistication, but that the courts' response has been simplistic - rejecting the defendants' arguments without truly addressing them. In our view, the courts cannot authentically address such claims without first acknowledging that their jurisdiction - and the state's authority to govern Māori - is founded on an illegitimate and unilateral assumption of power.

Ouch

'Could a robot feel pain?' by Amanda Sharkey in (2024) AI and Society comments 

Questions about robots feeling pain are important because the experience of pain implies sentience and the ability to suffer. Pain is not the same as nociception, a reflex response to an aversive stimulus. The experience of pain in others has to be inferred. Danaher’s (Sci Eng Ethics 26(4):2023–2049, 2020. https://doi.org/10.1007/s11948-019-00119-x) ‘ethical behaviourist’ account claims that if a robot behaves in the same way as an animal that is recognised to have moral status, then its moral status should also be assumed. Similarly, under a precautionary approach (Sebo in Harvard Rev Philos 25:51–70, 2018. https://doi.org/10.5840/harvardreview20185913), entities from foetuses to plants and robots are given the benefit of the doubt and assumed to be sentient. However, there is a growing consensus about the scientific criteria used to indicate pain and the ability to suffer in animals (Birch in Anim Sentience, 2017. https://doi.org/10.51291/2377-7478.1200; Sneddon et al. in Anim Behav 97:201–212, 2014. https://doi.org/10.1016/j.anbehav.2014.09.007). These include the presence of a central nervous system, changed behaviour in response to pain, and the effects of analgesic pain relief. Few of these criteria are met by robots, and there are risks to assuming that they are sentient and capable of suffering pain. Since robots lack nervous systems and living bodies, there is little reason to believe that future robots capable of feeling pain could (or should) be developed.

Questions have been asked about whether a robot might be able to feel pain (Danaher 2020; Smids 2020; Sebo 2018). This issue is of particular interest because of the relationship between the experience of pain and sentience: an entity that has the phenomenological experience of pain must be sentient, since the capacity to feel pain presupposes conscious experience. Those that can experience pain can suffer, and hence should be afforded moral status.

What does it mean to have moral status? DeGrazia and Millum (2021) define moral status as follows: ‘To have moral status, an individual must be vulnerable to harm or wrongdoing. More specifically, a being has moral status only if it is for that being’s sake that the being should not be harmed, disrespected, or treated in some other morally problematic fashion.’ Terms closely related to moral status include moral patient, moral standing, moral considerability, personhood, and moral subject (Muehlhauser 2017). An entity that has moral status is one for which we should have moral concern. Sebo (2018) writes, ‘Where there is sentience there is reason for moral concern, for an entity that can experience pain can suffer’. Balcombe (2016), in his book What a Fish Knows, is clear about the relationship between pain, suffering and sentience: ‘Organisms that can feel pain can suffer, and therefore have an interest in avoiding pain and suffering. Being able to feel pain is not a trifling thing. It requires conscious experience’ (p 71).

If robots were shown to be able to feel pain, they would also deserve moral status. Conversely, if they are unable to feel pain, it is not clear that they would deserve moral concern. Sparrow (2004) reports the view that ‘unless machines can be said to suffer, they cannot be appropriate objects for concern at all’. Nussbaum (2022), writing about animals, concludes that ‘We do no harm to non-sentient creatures when we kill them, and since they do not feel pain we need not worry too much about the manner’. 

Being able to experience pain is not the only indication of sentience—sentient beings can also feel pleasure and other emotions and will have a subjective view of the world. As Nussbaum (2022) writes, ‘the world looks like something to them, and they strive for the good as they see it. Sometimes sentience is reduced to the ability to feel pain; but it is really a much broader notion, the notion of having a subjective view of the world’. Nonetheless, having the ability to feel pain requires sentience. 

It is possible for a being to be sentient yet unable to feel pain, as illustrated by congenital analgesia, a rare genetic disorder in which humans do not feel pain. Individuals with congenital analgesia are clearly still sentient, for they have conscious experience of the world. The possibility that there could one day be machines deemed sentient yet unable to experience pain is, however, beyond the scope of this paper. The emphasis here is on the idea that if an entity has the phenomenological experience of pain, it must be sentient and capable of suffering: the experience of pain is like a litmus test for sentience.

The terms ‘sentience’ and ‘consciousness’ are often treated as meaning the same, although some authors prefer one or the other. Damasio and Damasio (2022) use the term ‘consciousness’ rather than sentience. In what they describe as ‘a new theory of consciousness’, they distinguish between “the simpler ability to ‘sense’ or ‘detect’ objects and conditions in the environment” and consciousness, which “occurs when the contents of mind are ‘spontaneously identified as belonging to a specific owner’” (p 2231). They point out that there are living species such as bacteria and plants that can sense or detect objects and conditions in the environment without having either a nervous system or internal representations of those objects or conditions. By contrast, they argue, consciousness involves internal representations. For them, ‘consciousness is present in living organisms capable of constructing sensory representations of components and states of their own bodies, but not in organisms limited to sensing/detecting’ (p 2234). Nussbaum (2022) talks about sentience rather than consciousness. She describes how living creatures, from mammals to fish and birds, are assumed to be sentient.

In this paper, it is assumed that sentience and consciousness are the same. Some writers, however, do make a distinction between sentience and consciousness. For instance, Nani et al. (2021) suggest that plants may be sentient, but not conscious. For them, sentience represents ‘the immediate perception of an organism that something internal or external is actually happening to itself—it requires, therefore, feedback through a basic system of transmission signals.’ Although Nani et al. propose a conception ‘of different degrees of sentience, ranging from non-conscious sentience to conscious and self-conscious sentience’, they acknowledge that this is unusual ‘as it is commonly assumed that being sentient is the same as being conscious’.

Discussions about the possibility of robots feeling pain, and of how we might determine whether they do, have tended to rely on speculations about future possibilities, as discussed in Sect. 2. Connections have been drawn between accounts of animal rights and robots (e.g. Gellers 2020; Gunkel 2012; Ryland 2020). However, these discussions pay little attention to the scientific experimental methods that have recently been used for exploring animal sentience (see Sect. 4). 

Many philosophers have speculated about the subjective experiences, or lack of experience, of animals. For Descartes, animals were effectively clockwork mechanisms without subjective awareness or reasoning powers. His follower Malebranche provides an account representative of this view: dogs, cats and other animals ‘eat without pleasure; they cry without pain; they believe without knowing it; they desire nothing; they know nothing’ (Malebranche 1689; translated in Huxley 1896). Kant also saw animals as little more than machines, although he objected to the cruel treatment of animals by humans on the grounds that it would make the perpetrators more likely to be cruel towards fellow humans; he argued that we have an indirect moral responsibility towards them (Kant, Lectures on Ethics, 1997). The utilitarians were sensitive to animal suffering and wished to prevent it, as indicated by Bentham (1780) and further elaborated by Singer (1975) in Animal Liberation.

Far more scientific evidence is available now about animal experiences and reasoning ability than was available to Descartes, or even to Kant or Bentham. Some of that evidence is summarised in Sect. 4 on ‘Inferring pain in animals’. Increasingly, that evidence is being taken into account by writers such as Korsgaard (2018) and Nussbaum (2022). At the same time, there are those, such as Danaher (2020) and Gordon and Gunkel (2022), who speculate about the possibility of robot suffering with little reference to the available scientific evidence. In this consideration of the possibility of pain and suffering in robots, we begin with a brief description of pain itself. We then turn to an examination of the idea of pain in robots. The current progress towards developing robots that react to aversive stimuli is reviewed, followed by a discussion of how robot pain and sentience might be inferred or recognised. This is contrasted with the scientific approach to determining the experience of pain in animals. Some arguments against the possibility of robots feeling pain are presented and, in a final section, the consequences that follow from an assumption of sentience for both animals and robots are considered.

03 November 2024

Kafka

'Kafka in the Age of AI and the Futility of Privacy as Control' by Daniel Solove and Woodrow Hartzog in (2024) 104 Boston University Law Review 1021 comments 

Although writing more than a century ago, Franz Kafka captured the core problem of digital technologies – how individuals are rendered powerless and vulnerable. During the past fifty years, and especially in the 21st century, privacy laws have been sprouting up around the world. These laws are often based heavily on an Individual Control Model that aims to empower individuals with rights to help them control the collection, use, and disclosure of their data. 

In this Essay, we argue that although Kafka starkly shows us the plight of the disempowered individual, his work also paradoxically suggests that empowering the individual isn’t the answer to protecting privacy, especially in the age of artificial intelligence. In Kafka’s world, characters readily submit to authority, even when they aren’t forced and even when doing so leads to injury or death. The victims are blamed, and they even blame themselves. 

Although Kafka’s view of human nature is exaggerated for darkly comedic effect, it nevertheless captures many truths that privacy law must reckon with. Even if dark patterns and dirty manipulative practices are cleaned up, people will still make bad decisions about privacy. Despite warnings, people will embrace the technologies that hurt them. When given control over their data, people will give it right back. And when people’s data is used in unexpected and harmful ways, people will often blame themselves. 

Kafka’s work provides key insights for regulating privacy in the age of AI. The law can’t empower individuals when it is the system that renders them powerless. Ultimately, privacy law’s primary goal should not be to give individuals control over their data. Instead, the law should focus on ensuring a societal structure that brings the collection, use, and disclosure of personal data under control.