'The Full Rights Dilemma for AI Systems of Debatable Moral Personhood' by Eric Schwitzgebel in (2023) 4 Robonomics: The Journal of the Automated Economy comments
An Artificially Intelligent system (an AI) has debatable moral personhood if it is epistemically possible either that the AI is a moral person or that it falls far short of personhood. Debatable moral personhood is a likely outcome of AI development and might arise soon. Debatable AI personhood throws us into a catastrophic moral dilemma: Either treat the systems as moral persons and risk sacrificing real human interests for the sake of entities without interests worth the sacrifice, or do not treat the systems as moral persons and risk perpetrating grievous moral wrongs against them. The moral issues become even more perplexing if we consider cases of possibly conscious AI that are subhuman, superhuman, or highly divergent from us in their morally relevant properties. ...
We might soon build artificially intelligent entities – AIs – of debatable moral personhood. We will then need to decide whether to grant these entities the full range of rights and moral consideration that we normally grant to fellow humans. Our systems and habits of ethical thinking are currently as unprepared for this decision as medieval physics was for space flight.
If there is even a small chance that some technological leap could soon produce AI systems with a reasonable claim to personhood, the issue deserves careful consideration in advance. We will have ushered a new type of entity into existence – an entity perhaps as morally significant as Homo sapiens, and one likely to possess radically new forms of existence. Few human achievements have such potential moral importance and such potential for moral catastrophe. An entity has debatable moral personhood, as I intend the phrase, if it is reasonable to think that the entity might be a person in the sense of deserving the same type of moral consideration that we normally give, or ought to give, to human beings, and if it is also reasonable to think that the entity might fall far short of deserving such moral consideration. I intend “personhood” as a rich, demanding moral concept. If an entity is a moral person, they normally deserve to be treated as an equal of other persons, including for example – to the extent appropriate to their situation and capacities – deserving “human” rights, care and concern similar to that of other people, and equal protection under the law. Personhood, in this sense, entails moral status, moral standing, or moral considerability fully equal to that of ordinary human beings (Jaworska and Tannenbaum 2013/2021). By “moral personhood” I do not, for example, mean merely the legal personhood sometimes attributed to corporations for certain purposes.
An AI’s personhood is “debatable”, as I will use the term, if it is reasonable to think that the AI might be a person but also reasonable to think that the AI might fall far short of personhood. Substantial doubt is appropriate – not just minor doubts about the precise place to draw the line in a borderline case. Note that debatable personhood in this sense is both epistemic and relational: An entity’s status as a person is debatable if we (we in some epistemic community, however defined) are not compelled, given our available epistemic resources, either to reject its personhood or to reject the possibility that it falls far short. Other entities or communities, or our future selves, with different epistemic resources, might know perfectly well whether the entity is a person. Debatable personhood is thus not an intrinsic feature of an entity but rather a feature of our epistemic relationship to that entity.
I will defend four theses. First, debatable personhood is a likely outcome of AI development. Second, AI systems of debatable personhood might arise soon. Third, debatable AI personhood throws us into a catastrophic moral dilemma: Either treat the systems as moral persons and risk sacrificing real human interests for the sake of entities without interests worth the sacrifice, or don’t treat the systems as moral persons and risk perpetrating grievous moral wrongs against them. Fourth, the moral issues become even more perplexing if we consider cases of possibly conscious AI that are subhuman, superhuman, or highly divergent from us in their morally relevant properties.
'Hybrid theory of corporate legal personhood and its application to artificial intelligence' by Siina Raskulla in (2023) 3(78) SN Social Sciences comments
Artificial intelligence (AI) is often compared to corporations in legal studies when discussing AI legal personhood. This article also uses this analogy between AI and companies to study AI legal personhood but contributes to the discussion by utilizing the hybrid model of corporate legal personhood. The hybrid model simultaneously applies the real entity, aggregate entity, and artificial entity models. This article adopts a legalistic position, in which anything can be a legal person. However, there might be strong pragmatic reasons not to confer legal personhood on non-human entities. The article recognizes that artificial intelligence is autonomous by definition and has greater de facto autonomy than corporations and, consequently, greater potential for de jure autonomy. Therefore, AI has a strong attribute to be a real entity. Nevertheless, the article argues that AI has key characteristics from the aggregate entity and artificial entity models. Therefore, the hybrid entity model is more applicable to AI legal personhood than any single model alone. The discussion recognises that AI might be too autonomous for legal personhood. Still, it concludes that the hybrid model is a useful analytical framework as it incorporates legal persons with different levels of de jure and de facto autonomy. ...
Artificial intelligence is compared to fire, oil, and electricity in taking humankind to the next level of development. In legal studies, artificial intelligence has been compared to nature, animals, children, idols and money. (See, e.g., Gordon 2021; Gunkel & Wales 2021; Kurki 2019; Solaiman 2017; Beck 2016; Solum 1992.) Nevertheless, the analogy between AI and companies is perhaps the most widely used. The analogy between AI and corporations also works at the level of fire, oil and electricity: Corporate personhood is a legal invention that significantly added value to society during the Roman period, the Middle Ages and the colonial era. Corporate persons continue to create added value in today’s market economy. (See, e.g., Micklethwait and Wooldridge 2003; Berle 1952; Dodd 1948; Savigny 1884, pp. 86−88).
While the previous discussion on AI legal personhood has been extensive, the contribution of this article is that it utilizes the hybrid model of corporate legal personhood. It applies three distinct models of corporate legal personhood simultaneously: the real entity, aggregate entity, and artificial entity models. (Raskulla 2022, p. 324.) Simultaneous application of several models has also been envisioned by Chatman (2018), with the significant difference that Chatman only applies the real entity and artificial entity models. The article returns to Chatman’s model and its application in Chapter 6.
Many studies argue that artificial intelligence, particularly one with moral autonomy, challenges pre-existing models, rendering them useless or risky. They suggest that new models of legal personhood are required. (See, e.g., Novelli et al. 2022, p. 202; Laukyte 2020, p. 445; Chen and Burgess 2019, p. 76. See also Mocanu 2021; Kurki 2019.) While the suggestion is warranted, this article studies whether a less radical approach could be adopted. It examines whether the hybrid model of legal personhood could be applied to AI and whether it would offer a new perspective for the challenges we face.
Chapter 2 of the article introduces key concepts: artificial intelligence and legal personhood. Chapter 3 introduces the theoretical framework by outlining three competing models of corporate legal personhood and discussing the hybrid model. Chapter 4 then aims to resolve whether AI legal personhood is best described by the real entity model, aggregate entity model, artificial entity model, or hybrid model. The objective is not to answer the question of whether AI should be provided with legal personhood but whether it could be provided with legal personhood. These are two very different questions. Nevertheless, Chapter 5 reviews the discussion on the possible risks and advantages of AI legal personhood and considers whether the hybrid model could be useful in resolving any of the issues recognised. Finally, Chapter 6 discusses the article’s contribution, and Chapter 7 concludes by summarizing key outcomes and formulating questions for future research.