11 September 2023

Minds

'Moral Uncertainty and Our Relationships with Unknown Minds' by John Danaher in (2023) 32(4) Cambridge Quarterly of Healthcare Ethics 482-495 comments 

In June 2022, Blake Lemoine, a Google-based AI scientist and ethicist, achieved brief notoriety for claiming, apparently in earnest, that a Google AI program called LaMDA may have attained sentience. Lemoine quickly faced ridicule and ostracization. He was suspended from work and, ultimately, fired. What had convinced him that LaMDA might be sentient? In support of his case, Lemoine released snippets of conversations he had with LaMDA. Its verbal fluency and dexterity were impressive. It appeared to understand the questions it was being asked. It claimed to have a sense of self and personhood, to have fears, hopes, and desires, just like a human. Critics were quick to point out that Lemoine was being tricked. LaMDA was just a very sophisticated predictive text engine, trained on human language databases. It was good at faking human responses; there was no underlying mind or sentience behind it. 

Whatever the merits of Lemoine’s claims about LaMDA, his story illustrates an ethical-epistemic challenge we all face: How should we understand our relationships with uncertain or contested minds? In other words, if we have an entity that appears to display mind-like properties or behaviors but we are unsure whether it truly possesses a mind, how should we engage with it? Should we treat it “as if” it has a mind? Could we pursue deeper relationships with it, perhaps friendship or love? This is an epistemic challenge because in these cases, we have some difficulty accessing evidence or information that can confirm, definitively, whether the entity has a mind. It is an ethical challenge because our classification of the other entity—our decision as to whether or not it has a mind—has ethical consequences. At a minimum, it can be used to determine whether the entity has basic moral standing or status. It can also be used to determine the kinds of value we can realize in our interactions with it. 

Our relationships with AI and robots are but one example of a situation in which we face this challenge. We also face it with humans whose minds are fading (e.g., those undergoing some cognitive decline) or difficult to access (e.g., those with “locked-in” syndrome). And we face it with animals, both wild and domestic. Our default assumptions vary across each of these cases. Many people are willing to presume that humans, whatever the evidence might suggest, have minds and that their basic moral status is unaffected by our epistemic difficulties in accessing those minds. They might be less willing to presume that the value of the relationships they have is unaffected by these epistemic difficulties. Some people are willing to presume that animals have minds, at least to some degree, and that they deserve some moral consideration as a result. Many of them are also willing to pursue meaningful relationships with animals, particularly pets. Finally, most people, as of right now, tend to be skeptical about claims that AI or robots (what I will call “artificial beings” for the remainder of this article) could have minds. This is clear from the reaction to Blake Lemoine’s suggestions about LaMDA. 

In this article, I want to consider, systematically, what our normative response to uncertain minds should be. For illustrative purposes, I will focus on the case study of artificial beings, but what I have to say should have broader significance. I will make three main arguments. First, the correct way to approach our moral relationships with uncertain minds is to use a “risk asymmetry” framework of analysis. This is a style of analysis that is popular in the debate about moral uncertainty and has obvious applications here. Second, deploying that argumentative framework, I will suggest that we may have good reason to be skeptical of claims about the moral status of artificial beings. More precisely, I will argue that the risks of over-inclusivity when it comes to moral status may outweigh the risks of under-inclusivity. Third, and somewhat contrary to the previous argument, I will suggest that we should, perhaps, be more open to the idea of pursuing meaningful relationships with artificial beings—that the risks of relationship exclusion, at least marginally, outweigh those of inclusion. 

In deploying the risk asymmetry framework to resolve the ethical-epistemic challenge, I do not claim any novelty. Other authors have applied it to debates about uncertain minds before. In the remainder of this article, I will reference the work of four authors in particular, namely Erica Neely, Jeff Sebo, Nicholas Agar, and Eric Schwitzgebel—each of whom has employed a variation of this argument when trying to determine the moral status of unknown minds. The novelty in my analysis, such as it is, comes from the attempt to use empirical data and psychological research to determine the risks of discounting or overcounting uncertain minds. My assessment of this evidence leads me to endorse conclusions that differ from those usually endorsed in this debate (though, I should say, similar to those reached by Nicholas Agar). The other contribution I hope to make is to be more systematic and formal in my presentation of the risk asymmetry framework. In other words, irrespective of the conclusions I reach, I hope to demonstrate a useful method of analysis that can be applied to other debates about uncertain minds.