30 September 2023

Carbon Chauvinism and Personhood?

'Is it Possible that Robots Will Not One Day Become Persons?' by Michael J Reiss in (2023) Zygon: Journal of Religion and Science comments

That robots might become persons is increasingly explored in popular fiction and films and is receiving growing academic analysis. Here, I ask what would be necessary for robots not to become persons at some point. After examining the meanings of “robots” and “persons,” I discuss whether robots might not become persons from a range of perspectives: evolution (which has led over time from species that do not exhibit personhood to species that do), development (personhood is something into which each of us grows), chemistry (must persons be carbon-based and must robots be non–carbon-based?), history (we now consider more entities to be persons than was once the case), and theology (are humans privileged over the rest of creation, and how relevant is panpsychism?). I end by considering some of the implications if/once robots do become persons.

The idea that robots might one day be persons, though widely explored in fiction, sounds ridiculous to some people, unimaginable. But so were lots of things that have now come to pass—women playing rugby or being bishops, continents moving, humans as relatives of monkeys, a lot of twentieth-century physics. In this article, I turn the question on its head and examine what needs to be the case if robots are never to become persons. Of course, many (almost certainly most) robots will never be persons. They will spend their days ensuring our fridges are stocked as we would like, our pets and children receive their medication on time, and the air pressure in our car tyres is appropriate. The question, therefore, is what needs to be the case for no robot ever to be a person.

What Are Robots? 

There are many definitions of robots but a fairly standard, middle-of-the-road one is that “A robot is a type of automated machine that can execute specific tasks with little or no human intervention and with speed and precision” (TechTarget 2021). Robots may resemble humans (in which case they are known as androids) but most don’t. By and large, software alone is not usually considered sufficient to qualify as a robot—hence the term “machine” in the definition quoted above. The hardware component of a robot can profitably be thought of as a form of embodiment. As is well known, there are already things that some robots can do better than any humans (largely, at the present time, to do with performing predictable tasks rapidly and near faultlessly) but my focus is not on such questions as whether robots will replace (more likely, work alongside) doctors, teachers, lawyers, cooks, long-distance lorry drivers, and others but whether some of them will be persons.

What Are Persons? 

There is an enormous literature on the meaning of the term “person” and personhood has been and still is understood in a range of ways (Williams 2018; Williams and Bengtsson 2018). A number of positions can be identified. The most frequent is to attempt to determine criteria that are sufficient for an entity to be deemed a person. Such a person has a nontrivial degree of self-awareness (aka self-consciousness) and manifests rationality and moral awareness to at least a certain extent. (I should also add that all current persons are embodied though it is difficult to be certain how essential embodiment is. Might disembodied software manifest personhood? It seems unlikely.) Those who criticize this approach to understanding personhood point out such problems as the case of people (i.e., members of the species Homo sapiens) who are asleep, unconscious, very young (e.g., new-born babies), or living with advanced dementia. Responses to such criticisms mainly fall into two camps (a third camp responds by abandoning the notion of personhood altogether—cf. Farah and Heberlein 2007): either to accept them (i.e., to deny that such people are persons—something that is easier to do in some cases, e.g., new-born babies, than others, e.g., someone in the prime of life who just happens to be taking a nap)—or to reject them, for instance by appealing to notions of personal identity over time, though this raises new problems as considered by Parfit (1984) and others. Another approach is to adopt a significance-based (i.e., relational) view of personhood, though this seems to suffer from the problem that I am perhaps less of a person than you simply because no one cares for me whereas you have friends, neighbors, and relatives galore (an example of “from him who has not, even what he has will be taken away”). There are those who equate “person” with “human being”—where the latter term is understood as a member of the species Homo sapiens.
Even if we set aside the debate as to whether personhood in humans starts at the moment of conception, the argument that only members of the species Homo sapiens can be persons faces other difficulties. One such difficulty is illustrated by a thought experiment. Imagine that our planet becomes colonized by hyperintelligent, sentient creatures (not members of Homo sapiens) from outer space who consider themselves to be persons on the standard definitions of personhood but that these creatures reject our claim that we too are persons, and thus worthy of moral consideration, on the grounds that only members of their species can be persons. This would not be good news for us (i.e., members of the species Homo sapiens). We might term the equation of “person” with “human being” the Alpha Centauri fallacy after Mary Doria Russell’s 1999 novel The Sparrow, which is set in a world in the vicinity of Alpha Centauri and explores the issue as one of its themes, as did H. G. Wells in his 1895 The Time Machine. A more tangible objection to the “only humans are persons” argument begins by noting that this would mean that no other species already present on Earth are persons. Increasing numbers of people find this argument difficult to defend when they consider such familiar animals as dogs, cats, cows, pigs, parrots, and corvids, not to mention such great apes as chimpanzees and gorillas—cf. The Great Ape Project (Cavalieri and Singer 1993). More generally, until the second half of the nineteenth century, humans were seen by almost everyone as being entirely distinct from other species. What helped shift that perception was, above all, the work of Charles Darwin in his On the Origin of Species by Means of Natural Selection (Darwin 1859) and The Descent of Man (Darwin 1871). Post-Darwin it is difficult for biologists to see humans and animals as being fundamentally different in kind—it is more a matter of degree (Reiss in press).
Research on animal behavior has shown that what were thought to be dividing lines between humans and animals (i.e., nonhuman animals), such as tool use, are less clear-cut than had been supposed. In October 1960, Jane Goodall observed a chimpanzee bend a twig, strip off its leaves, and use it to “fish” for termites in their nest (Van Lawick-Goodall 1971). Much the same story can be told about just about any other feature once held absolutely to distinguish humans from animals—the use of language, a sense of morality, an aesthetic awareness, the ability to count, rational decision making, and so on. What we often see in animals is what seems to be something akin to the early stages (developmentally or evolutionarily) of the human behavior in question. 

If it is accepted that personhood does not simply equate with being human, one needs to find some other account of what constitutes personhood. It is unlikely that a single attribute will be found to be adequate. Existing attempts (e.g., Cavalieri and Singer 1993) generally presume that personhood requires a certain degree of self-awareness, also known as self-consciousness. 

Of course, determining whether another species is self-conscious is not straightforward. One widespread approach is to use the mirror test. In essence, the animal is marked while it is asleep—for example, by the application of some temporary red dye—on a part of its body, such as its forehead, that it cannot normally see but that is in its field of view when it looks at a mirror. When the animal is subsequently awake, it is given access to a large mirror. If it investigates the mark, this is taken as evidence that the animal “presumes” the reflected image shows itself and, in this sense, is self-aware. The mirror test has been critiqued and sets quite a high bar. Other markers of personhood that have been used include a degree of rationality and the presence of moral awareness.

So long as it is agreed that nonhumans could, in principle, be persons, whether or not some existing nonhumans, such as chimpanzees (which pass the mirror test and seem to exhibit rationality and have a moral sense), actually are persons, we cannot rule out absolutely the possibility that (some) robots might one day be persons. I now proceed to examine, from several perspectives, what is needed for only humans to be persons (cf. Reiss 2021).

'Will We Know Them When We Meet Them? Human Cyborg and Non-Human Personhood' by Léon Turner in the same issue of Zygon comments 

In this article, I assess (1) whether some cyborgs and AI robots can theoretically be considered persons; and (2) how we will know if/when they have attained personhood. Since our discourses of personhood are inherently pluralistic and our concepts of both humanness and personhood are inherently nebulous, both some cyborgs, and some AI robots, I conclude, could theoretically be considered persons depending on what, exactly, one means by “person.” The practical problem of how we distinguish them from nonpersonal AI entities is, however, both more important, and much more difficult to solve. In conversation with various secular and theological accounts of relational personhood, I argue that only by treating AI entities as persons by default might we avoid the potentially catastrophic consequences of mistakenly denying personhood to an entire group of eligible entities. 

All human beings are also persons in some sense. The qualifier “in some sense” is necessary, because, like the term “human,” “person” resists easy definition, acquiring many different meanings as parts of multiple and frequently disparate discourses across the arts, humanities, and sciences. There is no especially good reason to believe that any meaning is superior to, or more basic than any other, and some human individuals may not be persons in all possible senses. Nevertheless, we manage to apply the term accurately and consistently to entities sufficiently like ourselves. As Eric Olson (1999, 53) writes, “There is a fair consensus about what things count as people: no one doubts that you and I and Boris Yeltsin are people, and that houses and bronze statues of people aren't. Although there are disputed cases (foetuses, infants, adults suffering from severe senile dementia), their number is small compared with the number of items we can confidently classify as people or non-people. In this respect ‘person’ is no worse off than most other nouns… The word ‘person’ is well enough understood for there to be philosophical problems about people.” To paraphrase Supreme Court Justice Potter Stewart's famous declaration, we know them when we see them, even if we find them difficult to define, and we disagree about what they are and how they work. But how confident should we be that this will always be the case? After all, much of modern technology, from the most basic telephone answering machines to the most sophisticated robots and generative AI systems, is specifically designed to mimic particular human capacities and abilities, and sometimes to replace human persons altogether. Our ability to distinguish human persons from nonhuman, nonpersonal entities is now challenged on a daily basis.

For some, the dissolution of the boundary between human and nonhuman is a fait accompli. Donna Haraway, in her article “A Manifesto for cyborgs,” first published almost 40 years ago, argued that because technology already permeated all areas of human life, the boundaries between organism and machine were already forever blurred: “Insofar as we know ourselves in formal discourse (e.g. biology), and in daily practice … we find ourselves to be cyborgs, hybrids, mosaics, chimeras. Biological organisms have become biotic systems, communications devices like others. There is no fundamental, ontological, separation in our formal knowledge of machine and organism, of technical and organic” (1985, 42). Haraway's concept of the cyborg has been extremely influential, but the abandonment of all possible means of separating organic human from inorganic life still looks premature. Her concept of the cyborg is subtle, and operates at multiple levels, but in a very simple, practical sense, despite the great complexification of the mechanical scaffolding of all areas of human life in the last four decades, there are still very clear boundaries between machines and living organisms, regardless of how much they depend on each other (cf. Geertsema 2006). 

We do not treat our home computers, internet-connected smart refrigerators, mobile phone voice assistants, or even fully synthetic robots in the same ways we treat any form of organic life, let alone other human beings. This is not to say we don't care about them in some sense, but we do not relate to them in the same way as our human friends or relations, or even total strangers for that matter. We do not usually worry about the emotional lives of machines because we do not usually believe they have emotions, and we are happy to upgrade them with newer models when they break because they are functionally almost identical. People are not so easy to replace, even if we can find functional equivalents for some of those with whom we have relationships—second spouses, new employers, or alternative shop assistants, for example. At the very least, almost everyone would concede that the relationships we have with inorganic objects, including those designed specifically with human interaction in mind, are nothing like the relationships we have with other human beings (cf. Smith 2022). 

Nevertheless, recent technological developments raise the possibility that however much we may wish to preserve the status quo (Bryson 2010), our ability to distinguish human from nonhuman will be deeply compromised by the appearance of a new generation of hybrid and synthetic entities, which will incorporate technology in ways unimaginable in 1985 (cf. Barfield 2019). There may come a time, therefore, when deciding what is and isn't human will depend upon resources rather more penetrating than visual acuity. These potential developments inspire much stronger feelings than the gradual diffusion and infiltration of technology documented by Haraway, and are a cause of serious anxiety for some (Sharon 2013).

Since Alan Turing developed his famous test for artificial intelligence in 1950, the increasing degree of resemblance between human and machine behavior has been imbued with great philosophical and sociocultural significance, and as the discernible difference between them diminishes at a frightening pace, serious discussion about the conceptual boundaries between human persons and intelligent machines is ever more pressing. My primary focus here is on two practically inseparable, if theoretically distinct, questions: (1) whether some cyborgs and fully synthetic AI robots can be squeezed into the diverse range of concepts of persons already at our disposal; and (2) how will we be able to tell if/when some machines deserve to be considered persons? The first question is simple to answer. Once we acknowledge the nebulousness of our current concepts of persons and the inherent pluralism of discourses of personhood, the answer is almost certainly “it depends on what you mean by person.” The second question may seem more difficult, but it too might have a simple answer, albeit one likely to antagonize those who prize conceptual and methodological precision—“we'll know them when we meet them.”

In exploring these two questions, I will examine two possible routes we might take toward qualifying cyborgs and fully synthetic AI robots as persons. The first confers personhood upon entities on the basis of their resemblance to the only completely uncontroversial examples we have of persons—human beings. The closer the resemblance to a human being, both biologically and in terms of physical and mental abilities or capacities, the more likely an entity is to qualify as a person. The second seemingly severs the connection between personhood and humanness on the basis that a superordinate concept of personhood cannot be abstracted from concrete, particular individual human beings. According to this view, personhood is not an abstract quality, but is better understood dynamically as what individual persons do in the process of relating to other persons and their environments. It is irreducible to any constellation of attributes or capacities. If nonhuman entities can also do it, then they deserve to be treated as persons, since persons are simply those entities that appear to understand themselves as persons, and who are treated as persons by other persons.

There are, of course, many other possible routes we might take (see Hubbard 2010), but these two expose key conceptual issues underlying the prospective personhood of AI entities very clearly. They are also of great help in advancing a further major claim of this article—that however we conceive of persons and personhood, there will always be borderline cases that are almost impossible to adjudicate. Whereas, as Olson observes, there are already disputed cases of persons, the unfolding technological revolution seems certain to multiply these entities exponentially. Even if one stipulates that only human beings can be persons, as some Christian theologians have suggested, the proliferation of borderline cases seems unavoidable. Precisely how these entities ought to be treated may turn out to be a much more important and complex issue than whether some intelligent machines can be considered persons. 

One final qualification is necessary before proceeding. What follows is not based on any reasoned critique of current or future technology. We are still many years from the advent of anything approximating a synthetic humanlike being. Regardless, the following discussion is not limited to the discussion of currently available technology, or so-called “therapeutic” modifications of the human person (see Chadwick 2009). Rather, I proceed on the basis that human beings may be technologically enhanced in all the ways imagined by science fiction, and that synthetic humanlike minds and bodies will one day be possible. Not everyone agrees this is likely to occur (e.g., Geertsema 2006; Coeckelbergh 2010; Dorobantu 2021), but this maximally different future has especially interesting implications for the understanding and development of the concept of personhood, and it can be explored without embracing the certainty professed by some that it will all come to pass.