'Extending Legal Rights To Social Robots' by Kate Darling explores the human tendency to anthropomorphize social robots. It suggests that projecting emotions onto robotic companions could induce the desire to protect them, similar to our eagerness to protect animals that we care about. The practice of assigning rights to non-human entities is not new. Given societal demand, laws protecting social robots could fit into our current legal system parallel to animal abuse laws. While the nature of this analysis is descriptive, it aims to provide a basis for normative discussion.
This Article recognizes that legal discourse involving science-fictional scenarios of robots with human-like cognition or emotion is premature. It argues, however, that current technology and foreseeable future developments may warrant a different approach to “robot rights”. It seems timely to consider the societal implications of anthropomorphism and how they could be addressed by our legal system.
Darling comments that
Assuming there is societal demand, one argument in favor of granting rights to social robots sees the purpose of law as a social contract. We construct behavioral rules that most of us agree on, and we hold everyone to the agreement. In theory, the interest of the majority prevails in democratic societies, and the law is tailored to reflect social norms and preferences. If this is the purpose of the legal system, then societal desire for robot rights should be taken into account and translated to law. There is also the view, however, that laws should be used to govern behavior for the greater good of society. In other words, laws should be used to influence people’s preferences, rather than the other way around. In this case, the question of whether we should extend legal rights to social robots becomes more complex. The costs and benefits to society as a whole must be weighed.
Whether or not one believes that the majority makes the best decisions for society in general, and even if one believes in a natural rights theory of higher truths, there could be reasons to support accommodating societal preferences. Legislatively ignoring that people feel strongly about an issue can lead to discontent and even a lack of compliance with the law as people attempt to take “justice” into their own hands. Depending on the circumstances, this could cause more problems than would simply legislating the social demand. This is not to say that denying robots rights would lead to anarchy. But if there is an easy way to adjust the law to best reflect people’s preferences, it may be worth doing so for this utilitarian reason.
Another benefit to protecting social robots could be the above-mentioned effect of promoting socially desirable behavior. The Kantian philosophical argument for preventing cruelty to animals is that our actions towards non-humans reflect our morality — if we treat animals in inhumane ways, we become inhumane persons. This logically extends to the treatment of robotic companions. Granting them protection may encourage us and our children to behave in a way that we generally regard as morally correct, or at least in a way that makes our cohabitation more agreeable or efficient.
There could, however, also be costs to legally protecting social robots. It is subject to debate whether extending rights to robotic companions would promote socially desirable values. Some argue that the development and dissemination of such technology encourages a society that no longer differentiates between real and fake, thereby potentially undermining values we may want to preserve. Another cost could be the danger of commercial or other exploitation of our emotional bonds to social robots. While these issues must be addressed in light of modern technology whether there is legal protection for social robots or not, they are worth considering here — in particular because a change in law could accelerate development and commercial distribution of social robots (for example by increasing their market value).
Depending on its implementation, legal intervention could also cause the opposite effect on social robot technology by distorting market incentives, changing prices, and reducing not only the commercial production of social robots, but also potentially desirable robotics research and development in general. There could be other, indirect economic costs that arise due to the introduction of new laws, especially since they would interfere with people’s property rights. Furthermore, there are direct costs associated with establishing and enforcing the law.
Some practical difficulties could include defining “social robot” in legal terms, especially in light of rapidly changing technology. The extent of protection would need to be clearly established, raising questions as to what constitutes “death”, what constitutes “mistreatment”, and so forth. Many of these issues could be resolved by analogy with animal abuse laws, but there are likely to be some difficult edge cases.
Summing up, the question of whether we should legally protect robotic companions is by no means simple. However, whether or not we end up deciding to extend second-order rights to robots, it seems timely to begin thinking about potential ways to address the general implications of anthropomorphism.