The 108-page 'Remedies for Robots' (Stanford Law and Economics Olin Working Paper No. 523) by Mark A. Lemley and Bryan Casey asks
What happens when artificially intelligent robots misbehave? The question is not just hypothetical. As robotics and artificial intelligence (AI) systems increasingly integrate into our society, they will do bad things. They have already killed people.
These new technologies present a number of interesting substantive law questions, from predictability, to transparency, to liability for high-stakes decision making in complex computational systems. Our focus here is different. We seek to explore what remedies the law can and should provide once a robot has caused harm.
The authors state
Where substantive law defines who wins legal disputes, remedies law asks, “What do I get when I win?” Remedies are sometimes designed to make plaintiffs whole by restoring them to the condition they would have been in “but for” the wrong. But they can also contain elements of moral judgment, punishment, and deterrence. For instance, the law will often act to deprive a defendant of its gains even if the result is a windfall to the plaintiff, because we think it is unfair to let defendants keep those gains. In other instances, the law may order defendants to do something, or to stop doing something unlawful or harmful.
Each of these goals of remedies law, however, runs into difficulties when the bad actor in question is neither a person nor a corporation but a robot. We might order a robot—or, more realistically, the designer or owner of the robot—to pay for the damages it causes. (Though, as we will see, even that presents some surprisingly thorny problems.) But it turns out to be much harder for a judge to “order” a robot, rather than a human, to engage in or refrain from certain conduct. Robots can’t directly obey court orders not written in computer code. And bridging the translation gap between natural language and code is often harder than we might expect. This is particularly true of modern AI techniques that empower machines to learn and modify their decision making over time. If we don’t know how the robot “thinks,” we won’t know how to tell it to behave in a way likely to cause it to do what we actually want it to do.
Moreover, if the ultimate goal of a legal remedy is to encourage good behavior or discourage bad behavior, punishing owners or designers for the behavior of their robots may not always make sense—if only for the simple reason that they didn’t act wrongfully in any meaningful way. The same problem affects injunctive relief. Courts are used to ordering people and companies to do (or stop doing) certain things, with a penalty of contempt of court for noncompliance. But ordering a robot to abstain from certain behavior won’t be trivial in many cases. And ordering it to take affirmative acts may prove even more problematic.
In this paper, we begin to think about how we might design a system of remedies for robots. It may, for example, make sense to focus less of our doctrinal attention on moral guilt and more of it on no-fault liability systems (or at least ones that define fault differently) to compensate plaintiffs. But addressing payments for injury solves only part of the problem. Often we want to compel defendants to do (or not do) something in order to prevent injury. Injunctions, punitive damages, and even remedies like disgorgement are all aimed, directly or indirectly, at modifying or deterring behavior. But deterring robot misbehavior, too, is going to look very different from deterring human misbehavior. Our existing doctrines often take advantage of “irrational” human behavior like cognitive biases and risk aversion. Courts, for instance, can rely on the fact that most of us don’t want to go to jail, so we tend to avoid conduct that might lead to that result. But robots will be deterred only to the extent that their algorithms are modified to include sanctions as part of the risk-reward calculus. These limitations may even require us to institute a “robot death penalty” as a sort of specific deterrence against certain bad behaviors. Today, speculation of this sort may sound far-fetched. But the field already includes examples of misbehaving robots being taken offline permanently—a trend which only appears likely to increase in the years ahead.
Finally, remedies law also has an expressive component that will be complicated by robots. We sometimes grant punitive damages—or disgorge ill-gotten gains—to show our displeasure with wrongdoers. If our goal is just to feel better about ourselves, perhaps we might also punish robots simply for the sake of punishing them. But if our goal is to send a slightly more nuanced signal than that through the threat of punishment, robots will require us to rethink many of our current doctrines. Thinking through remedies for robots also offers important insights into the law of remedies we already apply to people and corporations.
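Lemley and Casey's observation that robots "will be deterred only to the extent that their algorithms are modified to include sanctions as part of the risk-reward calculus" can be made concrete with a minimal sketch. The Python below is purely illustrative and is not from the paper; the function names and figures are assumptions chosen for the example:

```python
# Toy model of a machine agent's risk-reward calculus. A legal sanction
# deters the agent only if it is explicitly priced into its objective.

def expected_value(gain: float, p_sanction: float, sanction_cost: float) -> float:
    """Expected payoff of an action once the expected sanction is subtracted."""
    return gain - p_sanction * sanction_cost

def should_act(gain: float, p_sanction: float, sanction_cost: float) -> bool:
    # The agent acts whenever the expected payoff is positive; unlike a
    # risk-averse human, it has no fear of jail beyond this arithmetic.
    return expected_value(gain, p_sanction, sanction_cost) > 0

# With no sanction term, a profitable-but-harmful action is always taken;
# deterrence appears only once detection probability and penalty are encoded.
print(should_act(gain=100.0, p_sanction=0.0, sanction_cost=500.0))  # True
print(should_act(gain=100.0, p_sanction=0.5, sanction_cost=500.0))  # False
```

The point of the sketch is that deterrence becomes a design parameter: if either the detection probability or the penalty encoded in the agent's objective is too low, no amount of expressive condemnation changes its behavior.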
'Should Robots Pay Taxes? Tax Policy in the Age of Automation' by Ryan Abbott and Bret Bogenschneider in (2018) 12 Harvard Law and Policy Review 145-176 comments
Existing technologies can already automate most work functions, and the cost of these technologies is decreasing at a time when human labor costs are increasing. This, combined with ongoing advances in computing, artificial intelligence, and robotics, has led experts to predict that automation will lead to significant job losses and worsening income inequality. Policy makers are actively debating how to deal with these problems, with most proposals focusing on investing in education to train workers in new job types, or investing in social benefits to distribute the gains of automation.
The importance of tax policy has been neglected in this debate, which is unfortunate because such policies are critically important. The tax system incentivizes automation even in cases where it is not otherwise efficient. This is because the vast majority of tax revenues are now derived from labor income, so firms avoid taxes by eliminating employees. Also, when a machine replaces a person, the government loses a substantial amount of tax revenue, potentially hundreds of billions of dollars a year in the aggregate. All of this is the unintended result of a system designed to tax labor rather than capital. Such a system no longer works once the labor is capital. Robots are not good taxpayers.
We argue that existing tax policies must be changed. The system should be at least "neutral" as between robot and human workers, and automation should not be allowed to reduce tax revenue. This could be achieved through some combination of disallowing corporate tax deductions for automated workers, creating an "automation tax" which mirrors existing unemployment schemes, granting offsetting tax preferences for human workers, levying a corporate self-employment tax, and increasing the corporate tax rate.
'Digitalisation and the Future of National Tax Systems: Taxing Robots?' by Joachim Englisch comments
It is generally assumed that already in the next decade, the use of labour-saving robots with implemented artificial intelligence will lead to a dramatic transition of the workforce in almost all sectors of production and services. The ensuing loss of jobs that have traditionally been performed by human employees is likely to result, at least temporarily, in reduced wage tax and payroll tax revenues, increasing income inequality, and a disruption of the labour market. Against this backdrop, the idea of taxing the use of robots that replace human workforce, or even taxing the robots themselves, has emerged in politics and scholarly writings. Several justifications have been brought forward by its proponents: the robot tax has been regarded, respectively, as a corollary to a soon-to-be-expected concession of civil law personhood to robots, as a tax on imputed income earned by means of the robot, as an equalisation levy to restore the level playing field regarding the taxation of robots and of human workers, as an instrument for economically efficient wage compression between winners and losers of automation among the human workforce, or as a corrective tax to slow down the disruption of the labour market.
This paper argues that, upon a closer look, the case for taxing robots or their use is relatively weak, except where specific conditions are met. There is currently no compelling argument to make robots themselves taxable persons, neither for the purposes of income taxation nor for the purposes of indirect taxes on consumption expenditure. Moreover, significant objections can also be raised regarding suggestions to tax the use of robots. Some of the concepts advanced in the literature rely on presumptions that are either conceptually flawed or lack credible empirical support. Other proposals have their merits, but when weighing their potential benefits, policymakers will also have to take into account that any tax on robots is liable to result in distortions, complexities, and reduced growth. Besides, proponents of a robot tax tend to underestimate how capital mobility and international tax competition could easily undermine the respective objective of such a tax. As a Pigouvian tax, a robot tax will therefore likely have a very limited field of reasonable application. Regarding income redistribution and revenue-raising objectives, the taxation of robots should only be considered as a measure of last resort, and in any event a provisional one. Where politically feasible, priority should instead be given to intensified efforts to tax the return on capital investments and on profits in general, including an adequate taxation of ultimate shareholders. In any event, increasing automation should have implications for the international allocation of taxing rights.
'A tax on robots? Some food for thought' by Germana Bottone argues
Tax administrations face new and ever greater challenges. The available data show that the use of industrial robots has been increasing since 1990; their entry into our lives is therefore no longer a fantasy, and it will generate significant changes in legal, economic and social systems. As far as tax policy is concerned, if robots have a high elasticity of substitution with labour, a fall in tax revenue is expected, as labour taxes represent a significant portion of tax revenue. In addition, since robotization seems to jeopardise routine and low-skilled workers in particular, governments need growing public resources to invest in education and training systems. Starting from these premises, the paper deals with the possible design and the effects of the introduction of a robot tax.
Bottone comments
The coming age of “robots” raises many questions with regard to the social, economic and legal order. It may therefore be helpful to begin thinking about how to face the issues that robotization will raise in the future, and to try to control them, rather than being mere bystanders. First of all, the paper deals with the definition of artificial intelligence (AI), which seems to bring about a technological revolution different in many ways from past ones, given that AI may reproduce human cognitive capabilities. The second step is to establish the actual diffusion of AI, mainly in productive activities, and to try to predict future scenarios. These premises are preparatory to a discussion of the introduction of a robot tax, provided that robots progressively substitute for labour and policy makers have to face massive unemployment and a lack of public resources. Two issues are at stake: a) labour taxes supply a large portion of tax revenue almost everywhere, so if robots progressively substitute for labour, a fall in tax revenue is expected; b) most of the economic literature suggests investing in education and training to face the unemployment brought about by AI, since data show that robotization hampers routine/low-skilled workers in particular. As a consequence, the need for public resources may increase. Summing up, a robot tax, by restoring tax neutrality among productive inputs, may slow down the growth of unemployment and provide the necessary public resources.
In addition, the literature in favour of a robot tax highlights that labour taxes are very high, as they also include payroll taxes, while capital taxation is more favourable, partly because policy makers aim at fostering private investment, infringing the principle of neutrality with a view to promoting economic growth. This is a relevant issue in the discussion about a robot tax, especially considering globalization and tax competition among global jurisdictions. Therefore, the conclusion of this paper is that, if we agree on the desirability of introducing a robot tax, a global effort is required to include this topic in the international agreements already in place to fight global tax competition.
'Taxing the Robots' by Orly Mazur in (2018-2019) 46 Pepperdine Law Review 277-330 comments
Robots and other artificial intelligence-based technologies are increasingly outperforming humans in jobs previously thought safe from automation. This has led to growing concerns about the future of jobs, wages, economic equality, and government revenues. To address these issues, there have been multiple calls around the world to tax the robots. Although the concerns that have led to the recent robot tax proposals may be valid, this Article cautions against the use of a robot tax. It argues that a tax that singles out robots is the wrong tool to address these critical issues and warns of the unintended consequences of such a tax, including limiting innovation. Rather, advances in robotics and other forms of artificial intelligence merely exacerbate the issues already caused by a tax system that undertaxes capital income and overtaxes labor income. Thus, this Article proposes tax policy measures that seek to rebalance our tax system so that capital income and labor income are taxed in parity. Because tax policy alone cannot solve all of the issues raised by the robotics revolution, this Article also recommends non-tax policy measures that seek to improve the labor market, support displaced workers, and encourage innovation. Together, these changes have the potential to manage the threat of automation while also maximizing its advantages, thereby easing our transition into this new automation era.
'Vital, Sophia, and Co. — The Quest for the Legal Personhood of Robots' by Ugo Pagallo in (2018) 9(9) Information 230 comments
The paper examines today’s debate on the legal status of AI robots, and how often scholars and policy makers confuse the legal agenthood of these artificial agents with the status of legal personhood. By taking into account current trends in the field, the paper suggests a twofold stance. First, policy makers should seriously mull over the possibility of establishing novel forms of accountability and liability for the activities of AI robots in contracts and business law, e.g., new forms of legal agenthood in cases of complex distributed responsibility. Second, any hypothesis of granting AI robots full legal personhood has to be discarded in the foreseeable future. However, how should we deal with Sophia, which in October 2017 became the first AI application to receive the citizenship of any country, namely Saudi Arabia? Admittedly, granting someone, or something, legal personhood is—as it always has been—a highly sensitive political issue that does not simply hinge on rational choices and empirical evidence. Discretion, arbitrariness, and even bizarre decisions play a role in this context. However, the normative reasons why legal systems grant human and artificial entities, such as corporations, their status help us take sides in today’s quest for the legal personhood of AI robots. Is citizen Sophia really conscious, or capable of suffering the slings and arrows of outrageous scholars?
Pagallo argues
The legal personhood of robots has been a popular topic of today’s debate on the normative challenges brought about by this technology. In 2007, for example, Carson Reynolds and Masatoshi Ishikawa explored the scenarios of Robot Thugs, namely, machines that choose to commit and, ultimately, carry out a crime: their aim was to determine whether and to what extent these machines can be held accountable [1]. Three years later, I expanded this analysis on agency and criminal responsibility, to the fields of contracts and extra-contractual liability [2]. In homage to Reynolds and Ishikawa’s creature Picciotto Roboto, my next paper then provided a concise phenomenology on how smart AI systems may affect pillars of the law, such as matters of criminal accountability, negligence, or human intent [3]. In 2013, I summed this analysis up with my monograph on The Laws of Robots [4]. There, I suggested a threefold level of abstraction, so as to properly address today’s debate on the legal personhood of robots and smart AI systems, that is:
(i) The legal personhood of robots as proper legal “persons” with their constitutional rights (for example, it is noteworthy that the European Union existed for almost two decades without enjoying its own legal personhood);
(ii) The legal accountability of robots in contracts and business law (for example, slaves were neither legal persons nor proper humans under ancient Roman law and still were accountable to a certain degree in business law);
(iii) New types of human responsibility for others’ behaviour, e.g., extra-contractual responsibility or tortious liability for AI activities (for example, cases of liability for defective products; although national legislation may include data and information in the notion of a product, it remains far from clear whether the adaptive and dynamic nature of AI, through machine learning techniques, updates, or revisions, may entail or create a defect in the “product”).
Against this framework, the aim of the paper is to shed further light on this threefold status that AI robots may have in the legal domain, by taking into account what has happened in this domain of science, technology, and their normative challenges over the past years. Most legal systems have so far regulated the behaviour of AI robots as simple tools of human interaction, and hence as a source of responsibility for other agents in the system [4]; have advancements in technology affected this traditional framework? Do certain specimens of AI technology, such as smart humanoid robots, recommend that we should be ready to grant some of these robots full legal personhood and citizenship? Or would such legislative action be morally unnecessary and legally troublesome, in that holding AI robots accountable outweighs the “highly precarious moral interests that AI legal personhood might protect” [5]?
To offer a hopefully comprehensive view of these issues, the paper is presented in three parts. First, the focus is on current trends in AI technology and robotics, so as to stress both the benefits and the threats of this field. Then, attention is drawn to the confusion that prevails in most of today’s debate between the legal personhood of AI robots and their legal accountability in contracts and business law. Finally, the analysis dwells on the pros and cons of granting AI robots full legal personhood, as opposed to the status of legal accountability, or as a source of responsibility for other agents in the legal system. At the risk of being lambasted for reactionary anthropocentrism, the conclusion of the paper is that such a quest for the legal personhood of AI robots should not have priority over the regulation of more urgent issues triggered by the extraordinary developments in this field.