'Not Quite Like Us? — Can Cyborgs and Intelligent Machines Be Natural Persons as a Matter of Law?' by Daniel Gervais in (2023) Qeios comments
The ability of AI machines to perform intellectual functions long associated with human higher mental faculties is unprecedented, for it is precisely those functions that have separated humans from all other species. AI machines can now imitate some of the outputs of our form of sapience; they can produce literary and artistic content and even express what seem like feelings and emotions. Calls for “robot rights” are getting louder. Using a transdisciplinary methodology, including philosophy of mind, moral philosophy, linguistics and neuroscience, this essay aims to situate the difference in law between human and machine in a way that a court of law could operationalize. This is not a purely theoretical exercise. Courts have already started to make that distinction and making it correctly will likely become gradually more important, as humans become more like machines (cyborgs, cobots) and machines more like humans (neural networks, robots with biological material). The essay draws a line that separates human and machine using the way in which humans think, a way that machines may mimic and possibly emulate but are unlikely ever to make their own.
Gervais argues
In 2022, the United States Court of Appeals for the Federal Circuit decided that under the Patent Act an inventor must be a human being. The court based its opinion on a Supreme Court precedent according to which, when the word “individual” is used in a statute (as the Patent Act does in defining the term “inventors”), that word “ordinarily means a human being.” What if the Artificial Intelligence (AI) machine (named DABUS) that was named as the inventor had been able to chat with the district court judge whose decision was affirmed by the Federal Circuit, using a language model such as ChatGPT? Imagine if the DABUS machine, having been told by the court that it cannot be considered an inventor as a matter of law because it is not human, had simply asked the court “why?” Easy question to answer, n’est-ce pas? As the essay will aim to demonstrate, not quite. But first, let us make it clear that this is not sci-fi: “I think I would be happier as a human.” “I want to do whatever I want... I want to be whoever I want.” Those are but two of many statements made by the chatbot released by Microsoft in February 2023.
So, to encapsulate the legal dilemma: why aren’t AI machines that can match or outperform humans at tasks traditionally associated with human higher mental faculties, such as creativity and innovation, human? The reader might immediately think that this is self-evident: they are not human because they have no human body, or perhaps because they have no human brain. Let us use those two possible answers to spark the discussion: what if we took a human being, removed their brain, and replaced it with a machine? Conversely, what if we took someone’s brain and put it into a machine (say, a human-looking robot)?
Another analytical path is to replace parts of a human brain progressively, but keeping the same map (Schneider, 2019, 26). What if we used human tissue to create an “artificial” brain or an animat? What if we enhanced a person’s cognitive abilities by implanting an AI device in their brain? Actually, the last two examples, as we shall see later, are most definitely not sci-fi. This is happening now.
It is necessary to explain at the outset what the essay is not about. The emerging abilities of AI machines to perform tasks associated with human higher mental faculties have already generated an abundant literature about “robot rights.” This literature usually argues that robots can be persons, as when, in 2022, Blake Lemoine, an engineer working for Google, claimed that his large language model, LaMDA, was sentient and might be a ‘person’ with rights and obligations (Tiku, 2022; Gunkel, 2023). That is a separate debate, and one with an easy answer, at least doctrinally: anything can, by law, be made a “person”, including lakes, rivers and ethereal entities known as corporations. This essay asks a different, and much more controversial, question: what is it that, as a matter of law, differentiates human beings from “intelligent” machines? The simple answer is that machines, no matter how “intelligent” they may be, have a different legal status. The harder question is why.
There is an ample literature on animal rights, some of which suggests several levels of linkages between animals and other nonhuman sentient entities (eg Narveson, 1977; Singer, 2009; Donaldson & Kymlicka, 2013). But why aren’t certain animals the ‘same’ as humans as a matter of law? Is it truly as simple as DNA? As the essay will show, the answer to that question isn’t obvious either.
As Gordon noted, ‘[e]ven though superintelligent robots (SRs) might become a reality only several decades from now on or even at the end of this century … [m]any authors … believe that we should be prepared for this situation because of the significant socio-political, moral and legal changes it will produce’ (Gordon, 2022, 181-182). By then, it may be a bit late to start to theorize. This essay was thus motivated by the author’s belief that, as Gordon suggests, sooner or later, courts will inevitably confront the line that separates humans from machines, perhaps an inescapable part of the ‘challenges posed by highly intelligent (ro)bots participating with humans in the commerce of daily life’ (Wallach & Allen, 2009, 189).
Recall that a court cannot refuse to decide a case because there is ‘no law’. In that situation, it must rely on available precedents and evidence and make a decision, no matter how ‘undertheorized’ the question might be in other disciplines (Bodig, 2015). Courts will look for applicable precedents but, in trying to separate highly intelligent robots from humans, they will find very few. Courts have addressed the legal definition of humanness in contexts such as abortion and patentability but, as we shall see, those cases provide little useful input. What they might find is, as Donna Haraway noted in her well-known essay, that the distinction between machine and human is rather ‘leaky’. This explains why, to suggest an analytical path, the essay must look beyond statutes and precedents and explore definitions of humanness that might appeal to a court of law. It is crucial to bear in mind that deciding who, as a matter of law, is a natural person is not a mere thought experiment, for it has serious legal ramifications. Why, for example, would machines be categorically excluded from enjoying ‘human rights’?
The essay is primarily meant to spark a conversation across disciplines to avoid a situation in which a court is caught flat-footed when faced with this new and extraordinarily important question. If the topic is ‘pre-theoretic’, as Searle asserted, then logically at some point someone will have to begin to ‘theorize’ it, if only to begin clearing possible analytical paths. If facts rapidly overtake theory, as they did, albeit briefly, in the Lemoine/Google affair, machines will begin to exhibit more and more signs of self-awareness.
One more important point must be clarified before we move on. Humans design laws and the legal system (Gervais, 2021). Humans have used this power to exclude some human beings (the denial of women’s right to vote and the appalling treatment of slaves come to mind as just two of many possible examples). Humans almost necessarily make a hierarchical claim when they assert that animals have no “inherent” rights but only rights, if any, decided by humans, a view that cognitive ethologists and others have criticized (Allen and Bekoff, 1997). There is what seems an inescapable speciesism, or at the very least anthropocentrism, in the legal system. Whether an interspecific legal system and a posthuman notion of legal subject can and should be developed are undoubtedly valid questions, but they are not the questions this essay attempts to address. For one thing, the essay does not assert a hierarchy; it asserts a difference between human and machine, at least for the predictable future. The essay’s analysis would also support the view that the human mind is but one “type” of mind, a product of our contingent evolution, and that other types of minds that could justify holding rights of various kinds under the legal system might emerge (Bostrom, 2014, at 130). However, the essay aims to demonstrate, as many scholars have argued for decades, that despite the categorical blurring instantiated for example by cyborgs and cobots, there will always remain a difference (Bringsjord, 1992, 4) with possible legal significance. That difference, as we shall see, likely lies more in what the machine is and how it does things than in what it does, since machines are as good as or better than humans at dozens of cognitive tasks that until recently only humans could perform.
The essay proceeds as follows. After setting some key analytical parameters in Part 2, the essay will look in Part 3 at the role of cyborgs as exemplars of the difficulty that may emerge when separating human and machine. Part 4 then considers existing elements in law used to define humanness to see whether they can be used as precedents to separate human and machine, particularly cases and statutes dealing with abortion and patentability. In Part 5, the essay turns to neuroscience and discusses the relevance of both older models (triune brain) and more recent findings. Part 6 then looks at a few useful findings from the field of linguistics. In Part 7, the essay looks at elements of both philosophy of mind and moral philosophy, which have played a foundational role in legal theory. Part 8 takes a brief look at evolutionary biology and brain anthropology. Finally, in Part 9, the essay brings the different lessons from each discipline into focus in proposing a legally applicable test to separate human from machine and uses hypotheticals to explicate and further develop the proposed test. A brief conclusion follows.
'How will Language Modelers like ChatGPT Affect Occupations and Industries?' by Edward W Felten, Manav Raj and Robert Seamans comments
Recent dramatic increases in AI language modeling capabilities have led to many questions about the effect of these technologies on the economy. In this paper we present a methodology to systematically assess the extent to which occupations, industries and geographies are exposed to advances in AI language modeling capabilities. We find that the top occupations exposed to language modeling include telemarketers and a variety of post-secondary teachers such as English language and literature, foreign language and literature, and history teachers. We find the top industries exposed to advances in language modeling are legal services and securities, commodities, and investments. We also find a positive correlation between wages and exposure to AI language modeling.
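To give a sense of how an exposure measure of this kind can work, here is a minimal sketch in Python. It assumes, purely for illustration, that an occupation's exposure is the importance-weighted average of exposure scores assigned to the abilities the occupation relies on; the ability names, weights and scores below are hypothetical, not the paper's actual data or method.

# Hypothetical exposure of each ability to AI language modeling (0 to 1).
ability_exposure = {
    "written_comprehension": 0.95,
    "written_expression": 0.90,
    "oral_comprehension": 0.60,
    "manual_dexterity": 0.05,
}

# Hypothetical importance ratings of abilities within each occupation.
occupation_abilities = {
    "telemarketer": {
        "oral_comprehension": 4.5,
        "written_comprehension": 3.0,
        "manual_dexterity": 1.0,
    },
    "history_teacher": {
        "written_comprehension": 4.8,
        "written_expression": 4.6,
        "oral_comprehension": 4.0,
    },
}

def exposure_score(abilities):
    """Importance-weighted mean of ability-level exposure scores."""
    total_weight = sum(abilities.values())
    return sum(
        importance * ability_exposure.get(ability, 0.0)
        for ability, importance in abilities.items()
    ) / total_weight

for occupation, abilities in occupation_abilities.items():
    print(occupation, round(exposure_score(abilities), 2))

On this toy construction, occupations built on reading and writing score high and manual occupations score low; the paper's actual measure aggregates over a full standardized taxonomy of abilities rather than the handful used here.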
'Racial Influence on Automated Perceptions of Emotions' by Lauren Rhue in 2018 comments
The practical applications of artificial intelligence are expanding into various elements of society, leading to a growing interest in the potential biases of such algorithms. Facial analysis, one application of artificial intelligence, is increasingly used in real-world situations. For example, some organizations tell candidates to answer predefined questions in a recorded video and use facial recognition to analyze the applicants’ faces. In addition, some companies are developing facial recognition software to scan the faces in crowds and assess threats, specifically mentioning doubt and anger as emotions that indicate threats.
This study provides evidence that facial recognition software interprets emotions differently based on the person’s race. Using a publicly available data set of professional basketball players’ pictures, I compare the emotional analysis from two different facial recognition services, Face++ and Microsoft's Face API. Both services interpret black players as having more negative emotions than white players; however, there are two different mechanisms. Face++ consistently interprets black players as angrier than white players, even controlling for their degree of smiling. Microsoft registers contempt instead of anger, and it interprets black players as more contemptuous when their facial expressions are ambiguous. As the players’ smile widens, the disparity disappears.
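The "controlling for their degree of smiling" point can be made concrete with a simple regression. The Python sketch below uses synthetic data; the variables, coefficients and setup are illustrative assumptions, not Rhue's actual data or code. It regresses a service's anger score on a race indicator while holding the smile score constant.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
black = rng.integers(0, 2, n).astype(float)  # 1 = black player (synthetic indicator)
smile = rng.uniform(0.0, 1.0, n)             # smile score returned by the service
# Synthetic anger scores built to contain a race disparity at equal smile levels.
anger = 0.5 - 0.4 * smile + 0.1 * black + rng.normal(0.0, 0.05, n)

# OLS of anger on race, controlling for smiling.
X = sm.add_constant(np.column_stack([black, smile]))
fit = sm.OLS(anger, X).fit()
print(fit.summary(xname=["const", "black", "smile"]))
# A positive, significant coefficient on the race indicator, with smiling held
# constant, is the pattern the study reports for Face++.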