'Will intelligent machines become moral patients?' by Parisa Moosavi, Philosophy and Phenomenological Research (2023)
Recent advances in Artificial Intelligence (AI) and machine learning have raised many ethical questions. A popular one concerns the moral status of artificially intelligent machines (AIs). AIs are increasingly capable of emulating intelligent human behaviour. From speech recognition and natural-language processing to moral reasoning, they are continually improving at performing tasks we once thought only humans could do. Their powerful self-learning capabilities give them an important sense of autonomy and independence: they can act in ways that are not directly determined by us. Their problem-solving abilities sometimes even surpass ours. What's more, they are taking on social roles such as caregiving and companionship, and thereby seem to merit a social and emotional response on our part. All this has led many philosophers and technologists to seriously consider the possibility that we will someday have to grant moral protections to AIs. In other words, we would have to "expand the moral circle" and include AIs among moral patients, i.e., entities that are owed moral consideration.
This question of moral patiency is the focus of my paper. Roughly speaking, the question is whether future AIs will be the kind of entities that can be morally wronged and need moral protection. My position, in a nutshell, is that concerns about the moral status of AI are unjustified. Contrary to the claims of many authors (e.g., Coeckelbergh 2010, 2014; Gunkel 2018, 2019; Danaher 2020), I argue that we have no reason to believe we will have to provide moral protections to future AIs. I consider this to be a commonsense view regarding the moral status of AI, albeit one that has not been successfully defended in the philosophical literature. This is the kind of defence that I plan to provide in this paper.
... We may start by clarifying the concept of moral patiency further. Frances Kamm's account of this moral status is particularly helpful. According to Kamm (2007, pp. 227–229), an entity has moral status in the relevant sense if it counts morally in its own right and for its own sake. Let's consider each of these conditions in turn: (i) what it is for an entity to count morally, (ii) what it is for it to count morally in its own right, and (iii) what it is for it to count morally for its own sake.
To say that an entity counts morally is to say that it is in some way morally significant. More specifically, it means that there are ways of behaving toward the entity that would be morally problematic or impermissible. An entity that counts morally gives us moral reasons to do certain things and act in certain ways toward it, such as to treat it well and not harm it. We typically consider humans as entities that count morally in this sense, and ordinary rocks as entities that do not. But almost anything can count morally in the right context. If an ordinary rock, for instance, is used as a murder weapon and becomes a piece of evidence, it may be morally impermissible to tamper with it.
There are, however, different ways to count morally, not all of which amount to being a moral patient. An entity can count morally, but merely instrumentally so—i.e., because our treatment of it has a morally significant effect on others. To have the relevant moral status, the entity in question must count morally in its own right, i.e., non-instrumentally. In other words, it must be valued as an end, and not merely as a means. The above-mentioned rock does not meet this condition, but our fellow humans do: no further end needs to be served by the way we treat them for us to have a reason to treat them well.
Moreover, an entity with moral patiency counts morally for its own sake, which is to say that we have reason to treat it in a certain way for the sake of the entity itself. Note that an entity might be valued as an end but not for the sake of itself. For instance, the aesthetic value of the Mona Lisa can give us reason to preserve it independently of the pleasure or enlightenment it can bring. This, however, does not mean that we have reason to preserve the Mona Lisa for the sake of the painting itself. We do not think of preservation as something that is good for the painting. We rather think we have reason to preserve it because the painting has value for us. We value the painting as an end, but, to borrow Korsgaard's term, this non-instrumental value is still "tethered" to us: we are the beneficiaries of this value. In contrast, our moral reasons to save a human being from drowning are reasons to do something for their sake. They get something out of being saved that the Mona Lisa does not.
Thus, on Kamm's account, an entity has moral patiency when it can give us reason to treat it well, independently of any further ends that such treatment might serve, and precisely because being treated well is good for the entity itself.
This is the conception of moral patiency that I will adopt going forward. The question I am interested in is, therefore, whether future AIs will be the kinds of entities that can give us moral reasons of this specific kind: reasons that have to do with what is good for the entity itself. I am not concerned with whether there will be other sorts of moral reasons to treat them in a certain way. I am only asking whether they will qualify for moral patiency proper.