'From Deep Blue to Deep Learning: A Quarter Century of Progress for Artificial Minds' by Dina Moussa and Garrett Windle in The Georgetown Law Technology Review (2016)
comments
In a future that is nearly upon us, machines outthink human beings. In many specialized domains, machines already do; beyond the nearly instantaneous math and text processing that has become mundane, computer systems have overtaken humans in tasks as complex as image and facial recognition, learning to play simple video games, and guessing where the nearest McDonald’s might be. Artificial intelligence (“AI”) systems have already entered the workforce, replacing grocery store cashiers, bank tellers, and, soon, taxi drivers. If the age of sentient machines is upon us, how must our law adapt?
Exploring the issue in 1992, Professor Lawrence Solum published 'Legal Personhood for Artificial Intelligences', in which he laid out two thought experiments. In the first, Solum imagines what the law might require before an AI agent could be allowed to serve as an independent trustee. In the second thought experiment, Solum evaluates such an AI’s claim to rights under the Constitution.
In this essay, we examine Solum’s theory and predictions in light of the intervening developments in technology and scholarship. We will first survey important technological developments in AI research, focusing on the deep learning algorithms that challenge previous assumptions about the pace and scope of the changes to come. We will then proceed to apply Solum’s dual thought experiments to these new technologies. Solum introduced the insight that for an AI system, we might separate the concepts of legal duties and legal rights. Applying a contemporary understanding of the facts and theory, we reimagine whether and how an AI system might shoulder legal duties such as trusteeship, and when such a system might have a colorable claim of constitutional rights. Finally, we synthesize these findings into an updated theory, in keeping with the framework that Solum first offered in 1992.
'Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law' by Dana Remus and Frank S. Levy
comments
We assess frequently advanced arguments that automation will soon replace much of the work currently performed by lawyers. Our assessment addresses three core weaknesses in the existing literature: (i) a failure to engage with technical details to appreciate the capacities and limits of existing and emerging software; (ii) an absence of data on how lawyers divide their time among various tasks, only some of which can be automated; and (iii) inadequate consideration of whether algorithmic performance of a task conforms to the values, ideals, and challenges of the legal profession.
Combining a detailed technical analysis with a unique data set on time allocation in large law firms, we estimate that automation has an impact on the demand for lawyers’ time that, while measurable, is far less significant than popular accounts suggest. We then argue that the existing literature’s narrow focus on employment effects should be broadened to include the many ways in which computers are changing (as opposed to replacing) the work of lawyers. We show that the relevant evaluative and normative inquiries must begin with the ways in which computers perform various lawyering tasks differently than humans. These differences inform the desirability of automating various aspects of legal practice, while also shedding light on the core values of legal professionalism.