01 November 2024

Robot Rights?

'Debunking robot rights metaphysically, ethically, and legally' by Abeba Birhane, Jelle van Dijk and Frank Pasquale in (2024) 29(4) First Monday comments

For some theorists of technology, the question of AI ethics must extend beyond consideration of what robots and computers may or may not do to persons. Rather, persons may have some ethical and legal duties to robots, up to and including recognizing robots’ “rights”. Gunkel (2018) hailed this Copernican shift in robot ethics debates as “the other question: can and should robots have rights?” Or, to adapt John F. Kennedy’s classic challenge: Ask not what robots can do for you, but rather, what you can, should, or must do for robots.

In this work we challenge arguments for robot rights on metaphysical, ethical and legal grounds. Metaphysically, we argue that machines are not the kinds of things that could be denied or granted rights. Ethically, we argue that, given machines’ current and potential harms to the most marginalized in society, limits on (rather than rights for) machines should be at the centre of current AI ethics debate. From a legal perspective, the best analogy to robot rights is not human rights but corporate rights, rights which have frequently undermined the democratic electoral process, as well as workers’ and consumers’ rights. The idea of robot rights, we conclude, acts as a smoke screen, allowing theorists to fantasize about benevolently sentient machines, while so much of current AI and robotics is fuelling surveillance capitalism, accelerating environmental destruction, and entrenching injustice and human suffering. 

Building on theories of phenomenology and post-Cartesian approaches to cognitive science, we ground our position in the lived reality of actual humans in an increasingly ubiquitously connected, automated and surveilled society. What we find is the seamless integration of machinic systems into daily lives in the name of convenience and efficiency. The last thing these systems need is legally enforceable “rights” to ensure persons defer to them. Rights are exceptionally powerful legal constructs that, when improvidently granted, short-circuit exactly the type of democratic debate and empirical research on the relative priority of claims to autonomy that are necessary in our increasingly technologized world (Dworkin, 1977). Conversely, the ‘fully autonomous intelligent machine’ is, for the foreseeable future, a sci-fi fantasy, primarily functioning now as a meme masking the environmental costs and human labour which form the backbone of contemporary AI. The robot rights debate further mystifies and obscures these problems. And it paves the way for a normative rationale for permitting powerful entities developing and selling AI to be absolved from accountability and responsibility, once they can program their technology to claim rights to be left alone by the state. 

Existing robotic and AI systems (from large language models (LLMs), “general-purpose AI” and chatbots to humanoid robots) are often portrayed as fully autonomous systems, which is part of the appeal of granting them rights. However, these systems are never fully autonomous, but always human-machine systems that run on exploited human labour and environmental resources. They are socio-technical systems, human through and through — from the sourcing of training data, to model development, to societal uptake following deployment, they necessarily depend on humans. Yet the “rights” debate too often proceeds from the assumption that the entity in question is somewhat autonomous, or, worse, that it is devoid of exploited human labour and not a tool that disproportionately harms society’s disenfranchised and minoritized. Current realities require instead a reimagining of technologies from the perspectives, needs, and rights of the most marginalized and underserved. This means that any robot rights discussion that overlooks the underpaid and exploited populations who serve as the backbone for “robots” (as well as the environmental cost required to develop AI) risks being disingenuous. As a matter of public policy, the question should not be whether robotic systems deserve rights, but rather: if we grant or deny rights to a robotic system, what consequences and implications arise for the people owning, using, profiting from, developing, and affected by actual robotic and AI systems?

The time has come to change the narrative from “robot rights” to the duties and responsibilities of the corporations and powerful persons now profiting from sociotechnical systems (including, but not limited to, robots). Damages, harm and suffering have been repeatedly documented as a result of the integration of AI systems into the social world. Rather than speculation about the desert of hypothetical machines, the far more urgent conversation concerns robots and AI as concrete artifacts built by powerful corporations, further invading our private, public, and political space, and perpetuating injustice. A purely intellectual and theoretical debate obscures the real threat: that many of the actual robotic and AI systems that powerful corporations are building are harming people both directly and indirectly, and that a premature and speculative robot rights discourse risks even further unravelling our frail systems of accountability for technological harm.

The rise of “gun owners’ rights” in the U.S. is but one of many prefigurative phenomena that should lead to deep and abiding caution about the fetishization of technology via rights claims. U.S. gun owners’ emotional attachments to their weapons are often intense. The U.S. has more gun violence than any other developed country, and endures frequent and bloody mass shootings. Nevertheless, the U.S. Supreme Court has advanced a strained interpretation of the U.S. Constitution’s Second Amendment to promote gun owners’ rights above public safety. We should be very careful about robot rights discourse, lest similar developments empower judiciaries to immunize exploitive, harmful, and otherwise socially destructive technologies from necessary regulations and accountability.

The rest of the paper is structured as follows: Section 2 sets the scene by outlining the robot rights debate. In Section 3, we delve into what exactly “robot” entails in this debate. Section 4 clarifies some of the core underlying errors committed by robot rights advocates concerning persons, human cognition, and machines, and presents our arguments, followed by a brief overview of embodied and enactive perspectives on cognition in Section 5. In Section 6, we examine posthumanism, a core philosophical approach adopted by proponents of robot rights, and illustrate the problems surrounding it. Section 7 outlines the legal arguments, and Section 8 illustrates how rights talk “cashes out” in actionable claims in courts of law, demonstrating the real danger of robot rights talk. We conclude in Section 9 by emphasizing the enduring irresponsibility of robot/AI rights talk.

In 'Taking AI Welfare Seriously' Robert Long, Patrick Butlin, Jacqueline Harding, Jeff Sebo, Jonathan Birch, Kathleen Finlinson, Kyle Fish, Toni Sims and David Chalmers argue that 

 there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future. That means that the prospect of AI welfare and moral patienthood — of AI systems with their own interests and moral significance — is no longer an issue only for sci-fi or the distant future. It is an issue for the near future, and AI companies and other actors have a responsibility to start taking it seriously. We also recommend three early steps that AI companies and other actors can take: They can (1) acknowledge that AI welfare is an important and difficult issue (and ensure that language model outputs do the same), (2) start assessing AI systems for evidence of consciousness and robust agency, and (3) prepare policies and procedures for treating AI systems with an appropriate level of moral concern. To be clear, our argument in this report is not that AI systems definitely are — or will be — conscious, robustly agentic, or otherwise morally significant. Instead, our argument is that there is substantial uncertainty about these possibilities, and so we need to improve our understanding of AI welfare and our ability to make wise decisions about this issue. Otherwise there is a significant risk that we will mishandle decisions about AI welfare, mistakenly harming AI systems that matter morally and/or mistakenly caring for AI systems that do not.

The authors refer to 'A transitional moment for AI welfare' 

Plausible philosophical and scientific theories, which accord with mainstream expert views in the relevant fields, have striking implications for this issue, for which we are not adequately prepared. We need to take steps toward improving our understanding of AI welfare and making wise decisions moving forward. 

... For most of the past decade, AI companies appeared to mostly treat AI welfare as either an imaginary problem or, at best, as a problem only for the far future. As a result, there appeared to be little or no acknowledgment that AI welfare is an important and difficult issue; little or no effort to understand the science and philosophy of AI welfare; little or no effort to develop policies and procedures for mitigating welfare risks for AI systems if and when the time comes; little or no effort to navigate a social and political context in which many people have mixed views about AI welfare; and little or no effort to seek input from experts or the general public on any of these issues. 

Recently, however, some AI companies have started to acknowledge that AI welfare might emerge soon, and thus merits consideration today. For example, Sam Bowman, an AI safety research lead at Anthropic, recently argued (in a personal capacity) that Anthropic needs to “lay the groundwork for AI welfare commitments,” and to begin to “build out a defensible initial understanding of our situation, implement low-hanging-fruit interventions that seem robustly good, and cautiously try out formal policies to protect any interests that warrant protecting.” Google recently announced that they are seeking a research scientist to work on “cutting-edge societal questions around machine cognition, consciousness and multi-agent systems”. High-ranking members of other companies have expressed concerns as well. 

This growing recognition at AI companies that AI welfare is a credible and legitimate issue reflects a similar transitional moment taking place in the research community. Many experts now believe that AI welfare and moral significance is not only possible in principle, but also a realistic possibility in the near future. And even researchers who are skeptical of AI welfare and moral significance in the near term advocate for caution; for example, leading neuroscientist and consciousness researcher Anil Seth writes, “While some researchers suggest that conscious AI is close at hand, others, including me, believe it remains far away and might not be possible at all. But even if unlikely, it is unwise to dismiss the possibility altogether [emphasis ours].” 

Our aim in this report is to provide context and guidance for this transitional moment. To improve our understanding and decision-making regarding AI welfare, we need more precise empirical frameworks for evaluating AI systems for consciousness, robust agency, and other welfare-relevant features. We also need more precise normative frameworks for interacting with potentially morally significant AI systems and for navigating disagreement and uncertainty about these issues as a society. This report outlines several steps that AI companies can take today in order to start preparing for the possible emergence of morally significant AI systems in the near future, as a precautionary measure.

We begin in section 1 by explaining why AI welfare is an important and difficult issue. Leaders in this space have a responsibility to understand this issue as best they can, because errors in either direction — either over-attributing or under-attributing moral significance to AI systems — could lead to grave harm. However, understanding this issue will be challenging, since forecasting the mental capacities and moral significance of near-future AI systems requires improving our understanding of topics like the nature of consciousness, the nature of morality, and the future of AI. It also requires overcoming well-known human biases, including a tendency to both over-attribute and under-attribute capacities like consciousness to nonhuman minds. 

In section 2, we argue that given the best information and arguments currently available, there is a realistic possibility of morally significant AI in the near future. We focus on two mental capacities that plausibly suffice for moral significance: consciousness and robust agency. In each case, we argue that caution and humility require allowing for a realistic possibility that (1) this capacity suffices for moral significance and (2) there are certain computations that (2a) suffice for this capacity and (2b) will exist in near-future AI systems. Thus, while there might not be certainty about these issues in either direction, there is a risk of morally significant AI in the near future, and AI companies have a responsibility to take this risk seriously now. 

We argue that, according to the best evidence currently available, there is a realistic possibility that some AI systems will be welfare subjects and moral patients in the near future. 

We close, in section 3, by presenting three procedural steps that AI companies can take today, in order to start taking AI welfare risks seriously. Specifically, AI companies can (1) acknowledge that AI welfare is an issue, (2) take steps to assess AI systems for indicators of consciousness, robust agency, and other potentially morally significant capacities, and (3) take steps to prepare policies and procedures that will allow them to treat AI systems with an appropriate level of moral concern in the future. In each case we also present principles and potential templates for doing this work, emphasizing the importance of developing ecumenical, pluralistic decision procedures that draw from expert and public input. 

Recommendations. 

We recommend that AI companies take these minimal first steps towards taking AI welfare seriously. 

Acknowledge. Acknowledge that AI welfare is an important and difficult issue, and that there is a realistic, non-negligible chance that some AI systems will be welfare subjects and moral patients in the near future. That means taking AI welfare seriously in any relevant internal or external statements you might make. It means ensuring that language model outputs take the issue seriously as well. 

Assess. Develop a framework for estimating the probability that particular AI systems are welfare subjects and moral patients, and that particular policies are good or bad for them. We have templates that we can use as sources of inspiration, including the “marker method” that we use to make estimates about nonhuman animals. We can consider these templates when developing a probabilistic, pluralistic method for assessing AI systems. 

Prepare. Develop policies and procedures that will allow AI companies to treat potentially morally significant AI systems with an appropriate level of moral concern. We have many templates to consider, including AI safety frameworks, research ethics frameworks, and forums for expert and public input in policy decisions. These frameworks can be sources of inspiration — and, in some cases, of cautionary tales. 

These steps are necessary but far from sufficient. AI companies and other actors have a responsibility to start considering and mitigating AI welfare risks. 

Before we begin, it will help to emphasize five important features of our discussion. First, our discussion will concern whether near-future AI systems might be welfare subjects and moral patients. An entity is a moral patient when that entity morally matters for its own sake, and an entity is a welfare subject when that entity has morally significant interests and, relatedly, is capable of being benefited (made better off) and harmed (made worse off). Being a welfare subject makes you a moral patient — when an entity can be harmed, we have a responsibility to (at least) avoid harming that entity unnecessarily. But there may be other ways of being a moral patient; our approach is compatible with many different perspectives on these issues.

Second, our discussion often focuses on large language models (LLMs) as a central case study for the sake of simplicity and specificity, and because we expect that LLMs — as well as broader systems that include LLMs, such as language agents — will continue to be a focal point in public debates regarding AI welfare. But while some of our recommendations are specific to such systems (primarily, our recommendations regarding how AI companies should train these systems to discuss their own potential moral significance), our three general procedural recommendations (acknowledge, assess, and prepare) apply for any AI system whose architecture is complex enough to at least potentially have features associated with consciousness or robust agency. 

Third, our discussion often focuses on initial steps that AI companies can take to address these issues. These recommendations are incomplete in two key respects. First, AI companies are not the only actors with a responsibility to take AI welfare seriously. Many other actors have this responsibility too, including researchers, policymakers, and the general public. Second, these steps are not the only steps that AI companies have a responsibility to take. They are the minimum necessary first steps for taking this issue seriously. Still, we emphasize these steps in this report because by taking them now, AI companies can help lay the groundwork for further steps — at AI companies and elsewhere — that might be sufficient. 

Fourth, our aim in what follows is not to argue that AI systems will definitely be welfare subjects or moral patients in the near future. Instead, our aim is to argue that given current evidence, there is a realistic possibility that AI systems will have these properties in the near future. Thus, our analysis is not an expression of anything like consensus or certainty about these issues. On the contrary, it is an expression of caution and humility in the face of what we can expect will be substantial ongoing disagreement and uncertainty. In our view, this kind of caution and humility is the only stance that one can responsibly take about this issue at this stage. It is also all that we need to support our conclusions and recommendations here. 

Finally, and relatedly, our aim in what follows is not to argue for any particular view about how humans should interact with AI systems in the event that they do become welfare subjects and moral patients. We would need to examine many further issues to make progress on this topic, including: how much AI systems matter, what counts as good or bad for them, what humans and AI systems owe each other, and how AI welfare interacts with AI safety and other important issues. These issues are all important and difficult as well, and we intend to examine them in upcoming work. However, we do not take a stand on any of these issues in this report, nor does one need to take a stand on any of them to accept our conclusions or recommendations here.