"Humans Outside the Loop" by Charlotte Tschider
Artificial Intelligence is not all artificial. Despite the need for high-powered machines that can create complex algorithms and routinely improve them, humans are instrumental in every step of creating AI. From data selection, decisional design, training, testing, and tuning to managing AI's development as it operates in the human world, humans exert agency and control over these choices and practices. AI is now ubiquitous: it is part of every sector and, for most people, part of everyday life. When AI development companies create unsafe products, however, we might be surprised to discover that very few legal options exist to actually remedy any wrongs.
This paper introduces the myriad choices humans make to create safe and effective AI products, then explores key issues in existing liability models. Significant problems in negligence and products liability schemes, including contractual limitations on liability, separate the organizations creating AI products from the resulting harm, obscure the origin of defects, and reduce the likelihood of plaintiff recovery. Principally, AI offers a unique vantage point for analyzing the relative limits of tort law as applied to these technologies, challenging long-held divisions and theoretical constructs and frustrating tort law's goals. From the perspectives of both businesses licensing AI and AI users, this paper identifies key impediments to realizing those goals and proposes an alternative regulatory scheme that reframes liability from the human in the loop to the humans outside the loop.