'The Kant-inspired indirect argument for non-sentient robot rights' by Tobias Flattery in (2023) AI and Ethics comments:
Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. However, a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or even animal-like robots could condition our treatment of humans: treat these robots well, as we would treat humans, or else risk eroding good moral behavior toward humans. But then, this argument also seems to justify giving rights to robots, even if robots lack intrinsic moral status. In recent years, however, this indirect argument in support of robot rights has drawn a number of objections. In this paper, I have three goals. First, I will formulate and explicate the Kant-inspired indirect argument meant to support robot rights, making clearer than before its empirical commitments and philosophical presuppositions. Second, I will defend the argument against a number of objections. The result is the fullest explication and defense to date of this well-known and influential but often-criticized argument. Third, however, I myself will raise a new concern about the argument's use as a justification for robot rights. This concern is answerable to some extent, but it cannot be dismissed fully. It shows that, surprisingly, the argument's advocates have reason to resist, at least somewhat, producing the sorts of robots that, on their view, ought to receive rights.
Flattery argues:
Given the growing interest and steady progress in social robotics, it is becoming more and more likely that robots looking and acting like humans or other animals will become widespread in society. Robots are already, for instance, increasingly used for childhood social development [27, 61], elder care [64], sexual companionship (Nyholm 2020: ch. 5), and even as pets. In 2017, Saudi Arabia went so far as to grant citizenship to Sophia, a humanoid robot developed by Hanson Robotics, making Sophia the first robot to be granted citizenship by a sovereign nation [37]. If Saudi Arabia takes Sophia's citizenship seriously, presumably Sophia also has the same set of rights enjoyed by human Saudi citizens. But are there moral reasons to give rights to Sophia? Given our track record as humans, which already includes many instances of human abuse toward robots [9], it seems obvious that, if these sorts of robots indeed become widespread in society, so also will human violence and other forms of harsh treatment toward these robots. But even so, robots are machines, not humans. Are there compelling moral reasons to give robots rights?
In much of the philosophical literature, the question whether we ever ought to give rights to robots hinges on the further question whether robots will ever have moral status in virtue of having the same sorts of mental lives that humans have, or at least mental lives of a similar sort. In other words, if it is possible for robots literally to hope or fear, suffer or enjoy, understand or intend, or at least possible for them to gain these capacities in the not-too-distant future, then it makes sense to consider seriously whether they ought to be given rights. Many would accept this conditional claim, but for those who would deny the antecedent, the conditional will not drive an argument for robot rights. However, a number of technology ethicists have argued, to one extent or another, that even if robots have no mental lives at all, we nevertheless ought to treat some robots with some of the same respect due our fellow humans. Drawing on one of Immanuel Kant's arguments for indirect duties to animals, these authors argue, or at least suggest, that we have a moral reason to treat anthropomorphic or social robots well, that is, to treat them in some ways as if they have moral status, since how we treat these robots is likely to condition how we treat humans. But then we also seem to have a good reason to institute laws or norms that would require us to treat robots well, which is to say, norms that would grant rights to robots. However, this indirect argument in support of robot rights has attracted a number of objections that have thus far not been answered adequately.
Given how influential this Kant-inspired indirect argument has been, both inside and outside the scholarly literature, it is important that we evaluate it properly so that its merits can be fairly assessed. And given the seemingly inevitable spread of robots through human society, fully assessing the leading arguments concerning robot rights is critical. With these ends in mind, I have three main aims in this paper. First, I will formulate the Kant-inspired indirect argument often advanced in support of robot rights, making clearer than before this argument's empirical commitments as well as the philosophical presuppositions driving it. The result is the fullest explication of this argument to date. Second, I will defend the argument against a number of objections leveled at it in recent years, resulting in the most sustained defense of this argument in the literature thus far. Third, despite arguing that most objections to the argument can be answered, I also raise a new concern about its use as a justification for robot rights. While this concern is answerable to some extent, it cannot be dismissed fully. It shows that, surprisingly, a proper understanding of the argument, along with its empirical commitments and philosophical presuppositions, reveals that its advocates ought to support a prima facie moral principle for robot design, according to which we ought to minimize the production of the sorts of robots to which we would, on their view, have reason to give rights.
In the following section, I discuss some preliminaries, frame a debate about robot rights, situate the argument on which I will focus, and lay out some important philosophical presuppositions of those who would advance the argument. In §3, I formulate Kant's argument for indirect duties toward animals, which serves as the inspiration for the analogous argument concerning robots. In §4, I defend the latter argument against a range of recent objections. In the process, consideration of these objections makes the argument's specific empirical commitments clearer than they have been previously. In §5, I address two concerns about using the argument as a justification for robot rights, the second of which shows that proponents of the argument are committed to a prima facie moral principle for robot design.