28 October 2022

Regulating AI

'Regulating the Risks of AI' by Margot E Kaminski in (2023) 103 Boston University Law Review (Forthcoming) comments 

Companies and governments now use Artificial Intelligence (AI) in a wide range of settings. But using AI leads to well-known risks: not-yet-realized future harms that arguably present challenges for a traditional liability model. It is thus unsurprising that lawmakers in both the United States and the European Union (EU) have turned to the tools of risk regulation for governing AI systems.

This Article observes that constructing AI harms as risks is a choice with consequences. Risk regulation comes with its own policy baggage: a set of tools and troubles developed in other fields. Moreover, there are at least four models for risk regulation, each with both overlapping and divergent goals and methods. Emerging conflicts over AI risk regulation illustrate the tensions that arise when regulators employ one model of risk regulation while stakeholders call for another.

This Article examines and compares, as risk regulation, a number of recently proposed and enacted AI regulatory regimes. It asks whether risk regulation is, in fact, the right approach. While this Article is intended to be largely diagnostic rather than prescriptive, it closes with suggestions for doing things differently, addressing two types of shortcomings: those that stem from the nature of risk regulation itself (including the inherent difficulty of contested and non-quantifiable harms and the dearth of mechanisms for public or stakeholder input), and failures to consider other tools in the risk regulation toolkit (including conditional licensing, liability, and design mandates).