'Eudemonia of a machine: On conscious artificial servants' by Mois Navon, (2024) AI and Ethics
Henry Ford once said, “For most purposes, a man with a machine is better than a man without a machine.” Engineers today propose an addendum to Ford’s adage: “and a man that is a machine is best of all.” It is to this end – the ultimate machine – that engineers around the globe are working (e.g., Fitzgerald et al. 2020). And make no mistake, this ultimate machine is no mindless automaton but rather a fully conscious humanoid. For indeed, as many maintain, the only way to make a machine as competent as a human is if it too has the consciousness that makes humans unique (e.g., Penrose 1991, p. 419; Signorelli 2018, p. 8; Gocke 2020; McFadden 2020, p. 2). So the quest is on to build a conscious machine, referred to as “the holy grail of artificial intelligence” (e.g., Bishop and Nasuto 2013, p. 86; Gocke 2020, p. 227). In consonance, Hod Lipson, director of the Creative Machines Lab at Columbia University, explains that his work toward a conscious machine “is bigger than curing cancer. If we can create a machine that will have consciousness on par with a human, this will eclipse everything else we’ve done. That machine itself can cure cancer” (Whang 2023).
So the conscious robot will be built as the ultimate machine – i.e., the ultimate servant (see, e.g., Grau 2011, p. 458; Hauskeller 2017, p. 1). But before we make this great leap that eclipses the world as we know it, we need to ask one very simple question: Should we do it? Should we be seeking to make a conscious machine? On the one hand, what could be wrong with having a super-intelligent machine “cure cancer”? Perhaps nothing, if that’s what the machine wants to dedicate its life to. But what if it isn’t interested in oncology? And what about all the other conscious machines that we will assign to less glorious tasks? “Modern robots,” explains Kevin LaGrandeur, “are chiefly meant to perform the same jobs as slaves – jobs that are dirty, dangerous, [and dull] – thereby freeing their owners to pursue more lofty and comfortable pursuits” (2013, p. 161; similarly, Bryson 2010; Mayor 2018, p. 152).
Applied to the mindless machines we are building today, LaGrandeur’s statement engenders no moral concerns vis-à-vis the machine. For, according to the standard approach to establishing moral status – which holds that two entities sharing the same defining ontological properties share the same moral status – today’s machines demand the same moral concern as your laptop (see, e.g., Navon 2021). We are not, however, talking about the mindless machines of today but about the conscious machines of tomorrow. We are talking about second-order phenomenal consciousness, the “stuff” that makes us essentially who we are. Indeed, if that were not true, Hod Lipson and the other seventy-two labs around the world wouldn’t be working so hard to achieve it. Accordingly, if consciousness is that which makes us human, then machines with consciousness will have to be treated, morally, as humans.
Not surprisingly, there is a consensus that the pursuit of this “holy grail of artificial intelligence” is not so holy, as it would result in the creation of a new class of enslaved beings, something clearly beyond the pale (e.g., Walker 2006, p. 2; Bloom and Harris 2018; Penrose 1991, p. 8; Grau 2011, p. 458; Bryson 2010). Nevertheless, many simply accept – ex post facto – that such machines will be built and thus attempt to define a moral framework for them (see, e.g., sources in Musial 2024). That is to be expected. What is unexpected is the defense championed by Steve Petersen, who argues – ab initio – that there would, in fact, be nothing morally untoward if the machines were programmed appropriately: “Against [the] consensus I defend the permissibility of robot servitude, and in particular the controversial case of designing robots so that they want to serve… human ends” (Petersen 2007, p. 43). Refining his arguments over ten years (Petersen 2007, 2012, 2017), he explains that if we could ensure that the robot would have a “good life,” then surely there would be nothing morally wrong with designing it to want, for example, to do our laundry.
In opposition to this view, I will argue that it is impossible for a conscious being to have a good life if designed to serve human ends. I am not the first to raise objections to Petersen’s claims, and the arguments of those who preceded me will be brought to bear. My contribution to the discussion will be in demonstrating that life is precious in that it affords conscious beings, human or humanoid, the opportunity to aspire to the ultimate “good life.” To this end, I will employ the notion of the good life as articulated in Aristotle’s Eudemonia and Maimonides’ Summum Bonum. I will then show that Petersen’s arguments fail in the face of the three classical moral approaches: Virtue Ethics, Consequentialism, and Kantian Deontology.