29 May 2024

Responsibility

'Humans Outside the Loop' by Charlotte A. Tschider, 26 Yale Journal of Law & Technology 324, comments:

Artificial Intelligence (AI) is not all artificial. Despite the need for high-powered machines that can create complex algorithms and routinely improve them, humans are instrumental in every step used to create AI. From data selection, decisional design, training, testing, and tuning to managing AI's deployment in the human world, humans exert agency and control over the choices and practices underlying AI products. AI is now ubiquitous: it is part of every sector of the economy and many people's everyday lives. When AI development companies create unsafe products, however, we might be surprised to discover that very few legal options exist to remedy any wrongs.

This Article introduces the myriad choices humans make to create safe and effective AI products and explores key issues in existing liability models. Significant problems in negligence and products liability schemes, including contractual limitations on liability, distance the organizations creating AI products from the actual harm they cause, obscure the origin of issues relating to the harm, and reduce the likelihood of plaintiff recovery. Principally, AI offers a unique vantage point for analyzing the limits of tort law, challenging long-held divisions and theoretical constructs. From the perspectives of both businesses licensing AI and AI users, this Article identifies key impediments to realizing tort doctrine's goals and proposes an alternative regulatory scheme that shifts liability from humans in the loop to humans outside the loop.