21 March 2019

AI Personhood and Liability

'Advanced Artificial Intelligence and Contract' by John Linarelli in Uniform Law Review (Special Issues on Transnational Commercial Law and the Technology/Digital Economy) comments 
 The aim of this article is to inquire whether contract law can operate in a state of affairs in which artificial general intelligence (AGI) exists and has the cognitive abilities to interact with humans to exchange promises or otherwise engage in the sorts of exchanges typically governed by contract law. AGI is a long way off but its emergence may be sudden and come in the lifetimes of some people alive today. How might contract law adapt to a situation in which at least one of the contract parties could, from the standpoint of capacity to engage in promising and exchange, be an AGI? This is not a situation in which AI operates as an agent of a human or a firm, a frequent occurrence right now. Rather, the question is whether an AGI could constitute a principal – a contract party on its own. Contract law is a good place to start a discussion about adapting the law for an AGI future because it already incorporates a version of what is known as weak AI in its objective standard for contract formation and interpretation. Contract law in some limited sense takes on issues of relevance from philosophy of mind. AGI holds the potential to transform a solution to an epistemological problem of how to prove a contract exists into a solution to an ontological problem about the capacity to contract. An objection might be that contract law presupposes the existence of a person the law recognizes as possessing the capacity to contract. Contract law itself may not be able to answer the prior question of legally recognized personhood. The answer will be to focus on how AGI cognitive architecture could be designed for compatibility with human interaction. This article focuses on that question as well.
'When AIs Outperform Doctors: Confronting the Challenges of a Tort-Induced Over-Reliance on Machine Learning' by A Michael Froomkin, Ian Kerr and Joelle Pineau in (2019) 61 Arizona Law Review 33 comments 
Someday, perhaps soon, diagnostics generated by machine learning (ML) will have demonstrably better success rates than those generated by human doctors. What will the dominance of ML diagnostics mean for medical malpractice law, for the future of medical service provision, for the demand for certain kinds of doctors, and – in the long run – for the quality of medical diagnostics itself? This Article argues that once ML diagnosticians, such as those based on neural networks, are shown to be superior, existing medical malpractice law will require superior ML-generated medical diagnostics as the standard of care in clinical settings. Further, unless implemented carefully, a physician’s duty to use ML systems in medical diagnostics could, paradoxically, undermine the very safety standard that malpractice law set out to achieve. Although at first doctor + machine may be more effective than either alone because humans and ML systems might make very different kinds of mistakes, in time, as ML systems improve, effective ML could create overwhelming legal and ethical pressure to delegate the diagnostic process to the machine. 
Ultimately, a similar dynamic might extend to treatment also. If we reach the point where the bulk of clinical outcomes collected in databases are ML-generated diagnoses, this may result in future decisions that are not easily audited or understood by human doctors. Given the well-documented fact that treatment strategies are often not as effective when deployed in clinical practice compared to preliminary evaluation, the lack of transparency introduced by the ML algorithms could lead to a decrease in quality of care. This Article describes salient technical aspects of this scenario particularly as it relates to diagnosis and canvasses various possible technical and legal solutions that would allow us to avoid these unintended consequences of medical malpractice law. Ultimately, we suggest there is a strong case for altering existing medical liability rules to avoid a machine-only diagnostic regime. We argue that the appropriate revision to the standard of care requires maintaining meaningful participation by physicians in the loop.
'Negligence and AI’s Human Users' by Andrew D Selbst in (2019) Boston University Law Review (forthcoming) comments
Negligence law is often asked to adapt to new technologies. So it is with artificial intelligence (AI). But AI is different. Drawing on examples in medicine, financial advice, data security, and driving in semi-autonomous vehicles, this Article argues that AI poses serious challenges for negligence law. By inserting a layer of inscrutable, unintuitive, and statistically-derived code in between a human decisionmaker and the consequences of that decision, AI disrupts our typical understanding of responsibility for choices gone wrong. The Article argues that AI’s unique nature introduces four complications into negligence: 1) the unforeseeability of specific errors that AI will make; 2) capacity limitations when humans interact with AI; 3) the introduction of AI-specific software vulnerabilities into decisions not previously mediated by software; and 4) distributional concerns based on AI’s statistical nature and potential for bias. Tort scholars have mostly overlooked these challenges …