'Recognising rights for robots: Can we? Will we? Should we?' by Belinda Bennett and Angela Daly, (2020) Law, Innovation and Technology, comments:
This article considers the law’s response to the emergence of robots and artificial intelligence (AI), and whether they should be considered as legal persons and accordingly the bearers of legal rights. We analyse the regulatory issues raised by robot rights through three questions: (i) could robots be granted rights? (ii) will robots be granted rights? and (iii) should robots be granted rights? On the question of whether we can recognise robot rights we examine how the law has treated different categories of legal persons and non-persons historically, finding that the concept of legal personhood is fluid and so arguably could be extended to include robots. However, as can be seen from the current debate in Intellectual Property (IP) law, AI and robots have not been recognised as the bearers of IP rights despite their ability to create and innovate, suggesting that the answer to the question of whether we will grant rights to robots is less certain. Finally, whether we should recognise rights for robots will depend on the intended purpose of regulatory reform.
The authors argue
The question of whether machines can think is not new, having been posited prominently by Turing in the 1950s. However, the increasing sophistication of machines with developments in artificial intelligence (‘AI’) and robotics is raising new questions about the role for law in regulating these technologies. Responding to these developments, in 2017 the European Parliament passed a Resolution calling on the European Commission to develop civil law rules on robotics and artificial intelligence. In a wide-ranging set of recommendations, which included calls for ‘a common European definition for smart autonomous robots’ and for the introduction of ‘a comprehensive Union system of registration of advanced robots’, the Parliament proposed a Charter on Robotics. The Resolution called on the Commission ‘to explore, analyse and consider the implications of’ inter alia:
creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.
The European Parliament’s proposal led to considerable debate, including an open letter from experts to the European Commission arguing against recognising electronic personhood for robots. The European Parliament’s suggestion, that the increasing sophistication of robots raises questions about their appropriate legal status, arises against a backdrop of a growing debate about robots and rights which highlights the complexities of our current conceptualisations of rights, revealing the tensions between different categories and understandings of what it means to be human. With Saudi Arabia granting citizenship to a ‘female’ robot, and Japan granting residency rights to a ‘boy’ chatbot, this issue is far from a mere academic or theoretical inquiry.
Although robots and AI are often discussed in tandem, it is important to recognise the distinction between them. The term ‘robot’, coming from a Czech language term for ‘serf labour’, dates back to a Czech play from the 1920s and usually refers to a physical entity, often with humanoid-type characteristics. Increasingly robots are used to provide assistance or companionship. In contrast, artificial intelligence (AI) is associated with machine learning. As a 2016 report by the UK’s House of Commons Science and Technology Committee explained:
Robots can (and, for the most part, do) operate without possessing any artificial intelligence. It is anticipated, however, that this will gradually change over time, with robots becoming the ‘hardware’ that use, for example, machine learning algorithms to perform a manual or cognitive task. AI and robotics will, therefore, have an important degree of interdependency.

Although it is difficult to anticipate precisely how these technologies will develop from our current capabilities, it is clear that they are likely to have a transformative effect on society. It is against this backdrop that we seek to analyse the legal challenges associated with articulating the nature of both rights and responsibilities associated with robotics and AI. In addressing these challenges we seek to analyse robot rights through three questions: (i) could robots be granted rights? (ii) will robots be granted rights? and (iii) should robots be granted rights?
In Part 2 we analyse the question of whether robots could be granted rights. In answering this question we assess categories of personhood in the common law tradition where human beings have either not been recognised as persons, or not been recognised as entitled to full human rights. We also examine those categories where legal personality is attached to non-human entities. As we argue in Part 2, the categories of legal personality are sufficiently flexible to allow for recognition of robots’ rights. We explore the possibilities for categorising robots and AI according to their capabilities and level of sophistication, and consider the possibility of some advanced robots being categorised as legal persons. In Part 3 we consider whether robots will be granted rights. We use the example of intellectual property (IP) rights for robot-created works to analyse the discussion in that area of law concerning rights being ascribed to robots and AI, given their incursions into creativity and inventiveness. Finally, in Part 4 we consider whether robots should be granted rights. In considering this question we analyse Roger Brownsword’s work, which argues that the way in which questions about new technologies are answered may depend on our conceptual starting point: how we conceptualise new technology shapes our regulatory responses to it.