'U.S. Tort Liability for Large-Scale Artificial Intelligence Damages: A Primer for Developers and Policymakers' (RAND, 2024) by Ketan Ramakrishnan, Gregory Smith, and Conor Downey comments:
Leading artificial intelligence (AI) developers and researchers, as well as government officials and policymakers, are investigating the harms that advanced AI systems might cause. In this report, the authors describe the basic features of U.S. tort law and analyze their significance for the liability of AI developers whose models inflict, or are used to inflict, large-scale harm.
Highly capable AI systems are a growing presence in widely used consumer products, industrial and military enterprise, and critical societal infrastructure. Such systems may soon become a significant presence in tort cases as well—especially if their ability to engage in autonomous or semi-autonomous behavior, or their potential for harmful misuse, grows over the coming years.
The authors find that AI developers face considerable liability exposure under U.S. tort law for harms caused by their models, particularly if those models are developed or released without utilizing rigorous safety procedures and industry-leading safety practices.
At the same time, however, developers can mitigate their exposure by taking rigorous precautions and heightened care in developing, storing, and releasing advanced AI systems. By taking due care, developers can reduce the risk that their activities will cause harm to other people and reduce the risk that they will be held liable if their activities do cause such harm.
The report is intended to be useful to AI developers, policymakers, and other nonlegal audiences who wish to understand the liability exposure that AI development may entail and how this exposure might be mitigated.
The report's key findings are:
• Tort law is a significant source of legal risk for developers that do not take adequate precautions to guard against causing harm when developing, storing, testing, or deploying advanced AI systems.
Under existing tort law, there is a general duty to take reasonable care not to cause harm to the person or property of others. This duty applies by default to AI developers—even if targeted liability rules to govern AI development are never enacted by legislatures or regulatory agencies. In Chapter 3, we discuss the requirements of the duty to take reasonable care, and how AI developers might comply (or fail to comply) with these requirements.
• There is substantial uncertainty, in important respects, about how existing tort doctrine will be applied to AI development.
Jurisdictional variation and uncertainty about how legal standards will be interpreted and applied may generate substantial liability risk and costly legal battles for AI developers. Courts in different states may reach different conclusions on important issues of tort doctrine, especially in novel fact situations. Tort law varies significantly across both domestic and international jurisdictions. In the United States, each state has a different body of tort law, which coexists alongside federal tort law. Which state's tort law applies to a dispute depends on complex choice-of-law rules, which in turn depend on the location of the tortious harm at issue (among other things). Moreover, tort decisions often depend on highly context-specific applications of broad legal standards (such as the negligence standard of "reasonable care") by lay juries. As a result, tort liability can be difficult to predict, particularly with respect to emergent technologies that pose novel legal questions. In the wake of large-scale harms with effects spread across many states, AI developers may face many costly suits across multiple jurisdictions, each with potentially different liability rules. The tort liability incurred by irresponsible AI development may be sufficiently onerous, in the case of sufficiently large-scale damage, to render an AI developer insolvent or force it to declare bankruptcy. Given the cost and risk of litigating a plausible tort suit, moreover, there will often be a strong financial incentive for an AI developer (or its liability insurer) to agree to a costly settlement before a verdict is reached.
• AI developers that do not employ industry-leading safety practices, such as rigorous red-teaming and safety testing or the installation of robust safeguards against misuse, among others, may substantially increase their liability exposure.
Tort law gives significant weight to industry custom, standards, and practice when determining whether an actor has acted negligently (and is thus liable for the harms it has caused). If most or many industry actors take a certain sort of precaution, this fact will typically be regarded as strong evidence that failing to take this precaution is negligent. Developers who forgo common safety practices in the AI industry, without instituting comparably rigorous safety practices in their stead, may thus increase the likelihood that they will be found negligent should their models cause or enable harm. Therefore, AI developers may wish to consider employing state-of-the-art safety procedures by, for instance, evaluating models for dangerous capabilities, fine-tuning models to limit unsafe behavior, monitoring and moderating models hosted via an application programming interface (API) for dangerous behavior, investing in strong information security measures for model weights, installing reasonably robust safeguards against misuse in potentially dangerous AI systems, and releasing these systems in ways that minimize the chance that third parties will remove the safeguards installed in them.
• While developers face significant liability exposure from the risk that third parties will misuse their models, there is considerable uncertainty about how this issue will be treated in the courts, and different states may take markedly different approaches.
Most American courts today maintain that a defendant will be liable for negligently enabling a third party to cause harm, maliciously or inadvertently, if this possibility was reasonably foreseeable to the defendant. But “foreseeability” is a pliable concept, and in practice some courts will only hold a defendant liable for enabling third-party misbehavior if such behavior was readily or especially foreseeable. The risks of misuse of advanced AI systems are being actively discussed and debated, and several leading AI developers take significant precautions to guard against such risks; these facts will tend to support the determination that such misuse was foreseeable, in the event that it occurs. The fact that many of these risks are of a novel kind, and have not previously materialized, may cut in the opposite direction. In some cases, moreover, courts may decline to hold defendants liable for negligently enabling third parties to cause harm even when the possibility of such misuse is foreseeable. For these reasons, and others, there is a good deal of uncertainty about how liability for third-party misuse will be adjudicated in the courts. It would not be surprising if different states took different positions on this issue, just as different states have taken different positions on liability for enabling the misuse of other dangerous instrumentalities (such as guns). Thus, a careless AI developer could face a series of complex and costly legal battles if its model is misused to inflict harm across many jurisdictions.
• Safety-focused policymakers, developers, and advocates can strengthen AI developers' incentives to employ cutting-edge safety techniques by developing, implementing, and publicizing new safety standards and procedures and by formally promulgating them through industry bodies.
The popularization and proliferation of safe and secure AI development practices by safety-conscious developers and industry bodies can help set industry standards and “customs” that courts may consider when evaluating the liability of other developers, creating stronger incentives for safe and responsible AI development.
• Policymakers may wish to clarify or modify liability standards for AI developers and/or develop complementary regulatory standards for AI development.
Our analysis suggests that there remains significant uncertainty as to how existing liability law will be applied if harms are caused by advanced AI systems. This uncertainty could conceivably lead some developers to be too cautious, while pushing other developers to neglect the liability risks associated with unsafe development. Clarifying or modifying liability law might thus facilitate responsible innovation and increase the tort system's ability to incentivize safe behavior. Legislation might also help to remedy the inherent limitations of the tort liability system. For example, tort liability cannot easily address the fact that certain AI developers might discount serious risks on the basis of idiosyncratic views, or that an AI company's liability exposure (which is limited by its total assets) might fail to provide it with adequately strong incentives for taking due care. Carefully designed legislation might remedy these shortcomings through the creation of a well-tailored regulatory regime, the clarification or improvement of existing liability law to more clearly identify when a developer or another party is liable for harms, or the establishment of minimum safety requirements for forms of AI development that pose especially significant risks to national security or public welfare.