10 October 2025

Robot Rights?

Robot Rights? Let’s Talk about Human Welfare Instead (2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES’20), February 7–8, 2020) by Abeba Birhane and Jelle van Dijk

The ‘robot rights’ debate, and its related question of ‘robot responsibility’, invokes some of the most polarized positions in AI ethics. While some advocate for granting robots rights on a par with human beings, others, in stark opposition, argue that robots are not deserving of rights but are objects that should be our slaves. Grounded in post-Cartesian philosophical foundations, we argue not just to deny robots ‘rights’, but to deny that robots, as artifacts emerging out of and mediating human being, are the kinds of things that could be granted rights in the first place. Once we see robots as mediators of human being, we can understand how the ‘robot rights’ debate is focused on first world problems, at the expense of urgent ethical concerns, such as machine bias, machine elicited human labour exploitation, and erosion of privacy, all impacting society’s least privileged individuals. We conclude that, if human being is our starting point and human welfare is the primary concern, the negative impacts emerging from machinic systems, as well as the lack of taking responsibility by people designing, selling and deploying such machines, remain the most pressing ethical discussion in AI.

The authors argue:

Some may argue that the idea of robot rights is a peculiar, irrelevant discussion existing only at the fringes of AI ethics research more broadly construed, and as such devoting our time to it would not be paying justice to the important work done in that field. But the idea of robot rights is, in principle, perfectly legitimate if one stays true to the materialistic commitments of artificial intelligence: in principle it should be possible to build an artificially intelligent machine, and if we would succeed in doing so, there would be no reason not to grant this machine the rights we attribute to ourselves. Our critique therefore is not that the reasoning is invalid as such, but rather that we should question its underlying assumptions. Robot rights signal something more serious about AI technology, namely, that, grounded in their materialist techno-optimism, scientists and technologists are so preoccupied with the possible future of an imaginary machine, that they forget the very real, negative impact their intermediary creatures - the actual AI systems we have today - have on actual human beings. In other words: the discussion of robot rights is not to be separated from AI ethics, and AI ethics should concern itself with scrutinizing and reflecting deeply on underlying assumptions of scientists and engineers, rather than seeing its project as ‘just’ a practical matter of discussing the ethical constraints and rules that should govern AI technologies in society. Our starting point is not to deny robots ‘rights’, but to deny that robots are the kinds of beings that could be granted or denied rights. We suggest it makes no sense to conceive of robots as slaves, since ‘slave’ falls in the category of being that robots aren’t. Human beings are such beings. We believe animals are such beings (though a discussion of animals lies beyond the scope of this paper). We take a post-Cartesian, phenomenological view in which being human means having a lived embodied experience, which itself is embedded in social practices. Technological artifacts form a crucial part of this being, yet artifacts themselves are not that same kind of being. The relation between human and technology is tightly intertwined, but not symmetrical.

Based on this perspective we turn to the agenda for AI ethics. For some ethicists, arguing for robot rights stems from their aversion to human arrogance in the face of the wider world. We too wish to fight human arrogance. But we see arrogance first and foremost in the techno-optimistic fantasies of the technology industry, making big promises to recreate ourselves out of silicon, surpassing ourselves with ‘super-AI’ and ‘digitally uploading’ our minds so as to achieve immortality, while at the same time exploiting human labour. Most debate on robot rights, we feel, is ultimately grounded in the same techno-arrogance. What we take from Bryson is her plea to focus on the real issue: human oppression. We forefront the continual breaching of human welfare, especially of those disproportionally impacted by the development and ubiquitous integration of AI into society. Our ethical stance on human being is that being human means to interact with our surroundings in a respectful and just way. Technology should be designed to foster that. That, in turn, should be ethicists’ primary concern.

In what follows we first lay out our post-Cartesian perspective on human being and the role of technology within that perspective. Next, we explain why, even if robots should not be granted rights, we also reject the idea of the robot as a slave. In the final section, we call attention to human welfare instead. We discuss how AI, rather than being the potentially oppressed, is used as a tool by humans (with power) to oppress other humans, and how a discussion about robot rights diverts attention from the pressing ethical issues that matter. We end by reflecting on responsibilities: not those of robots, but those of their human producers.