'The Reasonable Computer: Disrupting the Paradigm of Tort Liability' by Ryan Abbott in (2018) 86 George Washington Law Review comments:
Artificial intelligence is part of our daily lives. Whether working as chauffeurs, accountants, or police, computers are taking over a growing number of tasks once performed by people. As this occurs, computers will also cause the injuries inevitably associated with these activities. Accidents happen, and now computer-generated accidents happen. The recent fatality involving Tesla's autonomous driving software is just one example in a long series of "computer-generated torts."
Yet hysteria over such injuries is misplaced. In fact, machines are, or at least have the potential to be, substantially safer than people. Self-driving cars will cause accidents, but they will cause fewer accidents than human drivers. Because automation will result in substantial safety benefits, tort law should encourage its adoption as a means of accident prevention.
Under current legal frameworks, suppliers of computer tortfeasors are likely strictly responsible for their harms. This Article argues that where a supplier can show that an autonomous computer, robot, or machine is safer than a reasonable person, the supplier should be liable in negligence rather than strict liability. The negligence test would focus on the computer's act instead of its design, and in a sense, it would treat a computer tortfeasor as a person rather than a product. Negligence-based liability would incentivize automation when doing so would reduce accidents, and it would continue to reward suppliers for improving safety.
More importantly, principles of harm avoidance suggest that once computers become safer than people, human tortfeasors should no longer be measured against the standard of the hypothetical reasonable person that has been employed for hundreds of years. Rather, individuals should be judged against computers. To appropriate the immortal words of Justice Holmes, we are all "hasty and awkward" compared to the reasonable computer.

Abbott argues:
An automation revolution is coming, and it is going to be hugely disruptive. Ever cheaper, faster, and more sophisticated computers are able to do the work of people in a wide variety of fields and on an unprecedented scale. They may do this at a fraction of the cost of existing workers, and in some instances, they already outperform their human competition. Today's automation is not limited to manual labor; modern machines are already diagnosing disease, conducting legal due diligence, and providing translation services. For better or worse, automation is the way of the future: the economics are simply too compelling for any other outcome. But what of the injuries these automatons will inevitably cause? What happens when a machine fails to diagnose a cancer, ignores an incriminating email, or inadvertently starts a war? How should the law respond to computer-generated torts?
Tort law has answers to these questions based on a system of common law that has evolved over centuries to deal with unintended harms. The goals of this body of law are many: to reduce accidents, promote fairness, provide a peaceful means of dispute resolution, reallocate and spread losses, promote positive social values, and so forth. Whether tort law is the best means for achieving all of these goals is debatable, but jurists are united in considering accident reduction as one of the central, if not the primary, aims of tort law. By creating a framework for loss shifting from injured victims to tortfeasors, tort law deters unsafe conduct. A purely financially motivated rational actor will reduce potentially harmful activity to the extent that the cost of accidents exceeds the benefits of the activity. This liability framework has far-reaching and sometimes complex impacts on behavior. It can either accelerate or impede the introduction of new technologies.
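This cost-benefit logic has a classic formalization in tort doctrine, Judge Learned Hand's formula from United States v. Carroll Towing Co. (not discussed in this excerpt, but a standard reference point): an actor is negligent if the burden of taking precautions, B, is less than the probability of harm, P, multiplied by the magnitude of the resulting loss, L; that is, if B < P x L. On this view, a rational actor invests in precautions, including automation, precisely up to the point where further precautions cost more than the expected accidents they would prevent.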
Most injuries people cause are evaluated under a negligence standard, where unreasonable conduct establishes liability. When computers cause the same injuries, however, a strict liability standard applies. This distinction has financial consequences and a corresponding impact on the rate of technology adoption. It discourages automation, because machines incur greater liability than people. It also means that in cases where automation would improve safety, a framework intended to prevent accidents has the opposite effect.
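A hypothetical set of figures (illustrative only, not drawn from the Article) makes the distortion concrete. Suppose a human driver causes $100 of expected accident harm per year but, because negligence compensates only unreasonably caused harms, pays out just $20 in judgments. An autonomous vehicle that halves expected harm to $50 would nonetheless expose its supplier to the full $50 under strict liability. The safer actor faces the larger liability bill ($50 versus $20), so a purely cost-minimizing supplier is pushed away from the very technology that cuts total harm from $100 to $50.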
This Article argues that the acts of autonomous computer tortfeasors should be evaluated under a negligence standard, rather than a strict liability standard, in cases where an autonomous computer is occupying the position of a reasonable person in the traditional negligence paradigm and where automation is likely to improve safety. For the purposes of ultimate financial liability, the computer's suppliers (e.g., manufacturers and retailers) should still be responsible for satisfying judgments under standard principles of product liability law.
This Article employs a functional approach to distinguish an autonomous computer, robot, or machine from an ordinary product. Society's relationship with technology has changed. Computers are no longer just inert tools directed by individuals. Rather, in at least some instances, computers are given tasks to complete and determine for themselves how to complete those tasks. For instance, a person could instruct a self-driving car to take them from point A to point B, but would not control how the machine does so. By contrast, a person driving a conventional vehicle from point A to point B controls how the machine travels. This distinction is analogous to the distinction between employees and independent contractors, which centers on the degree of control and independence. As this Article uses such terms, autonomous machines or computer tortfeasors control the means of completing tasks, regardless of their programming.
The most important implication of this line of reasoning is that just as computer tortfeasors should be compared to human tortfeasors, so too should humans be compared to computers. Once computers become safer than people and practical to substitute, computers should set the baseline for the new standard of care. This means that human defendants would no longer have their liability based on what a hypothetical, reasonable person would have done in their situation, but on what a computer would have done. In time, as computers come to increasingly outperform people, this rule would mean that someone's best efforts would no longer be sufficient to avoid liability. In the interests of freedom and autonomy, the rule would not mandate automation, but people who forgo it would engage in certain activities at their own peril. Such a rule is entirely consistent with the rationale for the objective standard of the reasonable person, and it would benefit the general welfare. Eventually, the continually improving "reasonable computer" standard should even apply to computer tortfeasors, such that computers will be held to the standard of other computers. By this time, computers will cause so little harm that the primary effect of the standard would be to make human tortfeasors essentially strictly liable for their harms.
This Article uses self-driving cars as a case study to demonstrate the need for a new torts paradigm. There is public concern over the safety of self-driving cars, but a staggering ninety-four percent of crashes involve human error. These errors contribute to over 37,000 fatalities a year in the United States at a cost of about $242 billion. Automated vehicles may already be safer than human drivers, but if not, they will be soon. Shifting to negligence would accelerate the adoption of driverless technologies, which, according to a report by the consulting firm McKinsey & Company, may otherwise not be widespread until the middle of the century.
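A back-of-the-envelope calculation (mine, not the Article's) suggests the stakes: if automation could eventually eliminate the roughly ninety-four percent of crashes attributable to human error, that would correspond to on the order of 0.94 x 37,000, or about 34,800, of the annual United States fatalities, and a comparable share of the $242 billion in costs, assuming costs scale with crash counts.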
Automated vehicles may be the most prominent and disruptive upcoming example of robots changing society, but this analysis applies to any context with computer tortfeasors. For instance, IBM's flagship artificial intelligence system, Watson, is working with clinicians at Memorial Sloan Kettering to analyze patient medical records and provide evidence-based cancer treatment options. It even provides human physicians with literature supporting its recommendations. Like self-driving cars, Watson does not need to be perfect to improve safety; it just needs to be better than people. In that respect, the bar is unfortunately low. Medical error is one of the leading causes of death. A 2016 study in the British Medical Journal reported that it is the third leading cause of death in the United States, ranking just behind cardiovascular disease and cancer. Some companies already claim their artificial intelligence systems outperform doctors, and that claim is not hard to swallow. Why should a computer not be able to outperform doctors when the computer can access the entire wealth of medical literature with perfect recall, benefit from the experience of having directly treated millions of patients, and be immune to fatigue?
This Article is divided into three Parts. Part I provides background on the historical development of injuries caused by machines and how the law has evolved to address these harms. It discusses the role of tort law in injury prevention and the development of negligence and strict product liability. Part II argues that while some forms of automation should prevent accidents, tort law may act as a deterrent to adopting safer technologies. To encourage automation and improve safety, this Part proposes a new categorization of "computer-generated torts" for a subset of machine injuries. This would apply to cases in which an autonomous computer, robot, or machine is occupying the position of a reasonable person in the traditional negligence paradigm and where automation is likely to improve safety. This Part contends that the acts of computer tortfeasors should be evaluated under a negligence standard rather than under principles of product liability, and it goes on to propose rules for implementing the system.
Finally, Part III argues that once computer operators become safer than people and automation is practical, the "reasonable computer" should become the new standard of care. It explains how this standard would work, argues that the reasonable computer standard works better than one based on a reasonable person using an autonomous machine, and considers when the standard should apply to computer tortfeasors. At some point, computers will be so safe that the standard's most significant effect would be to internalize the cost of accidents on human tortfeasors.
This Article is focused on the effects of automation on accidents, but automation implicates a host of social concerns. It is important that policymakers act to ensure that automation benefits everyone. Automation may increase productivity and wealth, but it may also contribute to unemployment, financial disparities, and decreased social mobility. These and other concerns are certainly important to consider in the automation discussion, but tort liability may not be the best mechanism to address every issue related to automation.