'Digital Duplicates and the Scarcity Problem: Might AI Make Us Less Scarce and Therefore Less Valuable?' by John Danaher and Sven Nyholm in (2024) 37(106) Philosophy and Technology
Recent developments in AI and robotics enable people to create personalised digital duplicates – these are artificial, at least partial, recreations or simulations of real people. The advent of such duplicates enables people to overcome their individual scarcity. But this comes at a cost. There is a common view among ethicists and value theorists suggesting that individual scarcity contributes to or heightens the value of a life or parts of a life. In this paper, we address this topic. We make five main points. First, that there is a plausible prima facie case for the scarcity threat: AI may undermine the value of an individual human life by making us less scarce. Second, notwithstanding this prima facie threat, the role of scarcity in individual value is disputable and always exists in tension with the contrasting view that scarcity is a tragedy that limits our value. Third, there are two distinct forms of scarcity – instrumental and intrinsic – and they contribute to value in different ways. Fourth, digital duplication technology may undermine instrumental scarcity, to at least some extent, but the axiological consequences of this are highly variable. Fifth, digital duplication technology does not affect intrinsic scarcity, and may actually heighten it.
There is only one of you to go around, and you haven’t got long to live. If you are lucky, you might get 80–90 years. In that time, you might do a thing or two. But you won’t do everything. You might become a doctor, a musician, a lawyer, or an entrepreneur, but probably not all four or, if you do manage it, you will have to compromise on some of those options. You might travel around the world, and meet a few people, but you won’t go everywhere and meet everyone. To put it bluntly, you are a scarce resource, and your scarcity is one of the defining features of your existence.
Or is it? Recent developments in AI and robotics offer some hope for individuals to overcome their scarcity. For example, the latest iterations of large language and multi-modal models – such as GPT, Claude, Gemini and others – allow us to create personalised digital duplicates: AI models or agents, created through so-called fine-tuning, that can serve as at least partial recreations of ourselves. A philosophically relevant illustration is the DigiDan chatbot created by Schwitzgebel et al. (2023). This was an AI model trained on the writings of the late philosopher Daniel Dennett, which could produce plausibly Dennettian responses to philosophical inquiries. It was a sufficiently good duplication of Dennett’s philosophical sensibilities to make it difficult for Dennett experts to tell the difference between the real Dan Dennett and DigiDan. Similarly, in April 2024, the philosopher Luciano Floridi was duplicated in AI form by two students (one still in high school). They created the LuFlot Bot, a free online educational tool that could answer philosophical questions in the style of the real Luciano Floridi. With advances in embodied social robotics, it may soon be possible to go a step further and create walking-talking duplicates. The roboticist Hiroshi Ishiguro is a pioneer in this field, notable for creating a highly realistic robotic duplicate of himself.
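For readers unfamiliar with what 'fine-tuning' involves in practice, the following minimal sketch (in Python, using the open-source Hugging Face libraries) illustrates the general idea: a pre-trained language model is further trained on a corpus of one person's writings so that its outputs imitate that person's style. The base model, file name, and hyperparameters here are illustrative assumptions; this is not the pipeline behind DigiDan or the LuFlot Bot.

```python
# Illustrative sketch: fine-tuning a small causal language model on a corpus of
# one person's writings. "person_writings.txt" is a hypothetical plain-text file
# of that person's essays and interviews; "gpt2" stands in for a larger base model.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no padding token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load and tokenise the person's corpus.
dataset = load_dataset("text", data_files={"train": "person_writings.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Standard causal-language-modelling fine-tuning loop.
trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="duplicate-model",
        num_train_epochs=3,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()  # the result imitates the person's writing style, not the person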
These examples just scratch the surface of what is now becoming possible with this technology, and there are, of course, important philosophical questions to be asked about the extent to which these digital duplicates can really be said to copy individuals, rather than merely create partial representations of them. Some of these questions will be addressed later in this article. Nevertheless, on the face of it, this new technological possibility gives rise to an axiological conundrum. The capacity to copy and duplicate ourselves (to whatever extent this is possible given current and future technology) threatens something we can call the scarcity thesis:

Scarcity Thesis: An individual’s scarcity contributes to or heightens the value of their life or the parts of their life (e.g. the choices or events within them).
In this context, ‘scarcity’ is understood to have several dimensions. It can refer to an individual’s temporal and geographical limitations, i.e. the fact that they live for a finite period of time and can only occupy one physical space at a time. It can also refer to an individual’s ontological uniqueness and non-replaceability, i.e. the fact that there is only one version of them. In short, the scarcity thesis maintains that because there is only one of you, and because you can only live in or occupy a limited amount of space and time, your life has some additional or distinctive value that it would not have if you were not scarce.
The scarcity thesis has a good deal of support among ethicists and value theorists. It has also been highlighted by critical commentators on contemporary AI. For instance, the computer scientist and AI ethicist Joanna Bryson (2010, 2018) is well known for her claims that AIs should not be recognised as co-equal members of our moral communities. One of the reasons she adduces in support of this view is that humans, unlike AIs, are not scalable and copyable. As she put it in an interview: “Humans don’t scale in that way… And it would be a little scary if we did. We would lose a lot of our individuality.”
Along with her co-author Andreas Theodorou, she has remarked that destroying AI tools doesn’t carry the same moral significance as would destroying a human because “there is no hazard of loss of a unique perspective…as there would be if we lose a single human life, or even a unique copy of an old book or fossil” (Bryson & Theodorou, 2019: 320). Bryson is not alone in these thoughts. Before he died, Dennett (2023) expressed concerns about the copyability and scalability of digital counterfeit people, going so far as to argue that they pose a threat to our civilisation and should, consequently, be banned. And Sweeney (2023a) has argued that robots are importantly different from humans because they lack uniqueness.
Are these critics correct? Should we be concerned about AI’s threat to the scarcity thesis? Or is this technology no more of a threat to individual scarcity than a photo or video of an individual might be? In this paper, we address these questions. We will make five main points. First, that there is a plausible prima facie case for the scarcity threat: AI may undermine the value of an individual human life by making us less scarce. Second, notwithstanding this prima facie threat, the role of scarcity in individual value is disputable and always exists in tension with the contrasting view that scarcity is a tragedy that limits our value. Third, there are significant philosophical complexities associated with the idea of copying or duplicating real people. Specifically, there are distinct forms of individual scarcity – instrumental and intrinsic – and they contribute to value in different ways. Fourth, digital duplication technology may undermine instrumental scarcity, to at least some extent, but the axiological consequences of this are highly variable. Fifth, digital duplication technology does not affect intrinsic scarcity, and may actually heighten it. Somewhat paradoxically, this implies that AI may heighten individual value despite the prima facie threat to scarcity. This argument turns on the idea that it is effectively impossible to truly recreate or duplicate an individual in a complete sense. We can only ever create partial extensions or representations of an individual.
We proceed through each of these five points in the remaining sections of the paper. In presenting our view, we set aside ethical and moral concerns associated with digital duplicates that do not relate to scarcity. We are aware that such concerns exist and that they are as important as, if not more important than, the scarcity concern. Digital duplicates raise issues relating to autonomy, consent, identity theft, fraud, exploitation, commodification, transparency, and trust – to name but a few examples. A detailed examination of those issues is necessary, and others have started to provide it (Earp et al., 2024; Iglesias et al., 2024; Mann et al., 2023; Fabry & Alfano, 2024; Lindemann, 2022; Dennett, 2023; Danaher & Nyholm, 2024), but it lies beyond the scope of this paper.