'Halting, Intuition, Heuristics, and Action: Alan Turing and the Theoretical Constraints on AI-Lawyering' by Jeffrey M. Lipshaw in
Savannah Law Review (Forthcoming)
This is a reflection on the relationship between lawyering and artificial intelligence. Its goal is a better understanding of the theoretical constraints on the latter. The first part assesses one particular and crucially important aspect of the theory of machine thinking: determining whether the program being run will reach a conclusion. This is known as the “Halting Problem.” One question at the far reaches of AI capability is whether any physical machine presently conceivable could always, on its own, for every possible program, determine whether the program will ultimately generate an answer. The essence of the Halting Problem is that the answer to that specific question is “no.” Hence, unless a human programs the machine to decide short of a final answer being generated, the machine cannot itself decide whether it has thought enough and it is time to fish or cut bait. The second part is a philosophical reflection on what it means to decide something, as opposed merely to thinking about it. Humans don’t have a Halting Problem. Even if they think as logically and formally as a machine, they also act. The thesis is that, for every problem, humans seem able to stop thinking and start doing, even if they don’t know whether the thinking is or ever will be complete. The third part assesses what a law school of the future ought to look like, given this moderate view of the interaction between thinking machines and deciding humans.
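For readers unfamiliar with why the answer is “no,” the standard proof is a short self-reference argument (Turing’s diagonalization). The sketch below is illustrative only: `halts` is a hypothetical oracle assumed for contradiction, not a function anyone could actually write, and `paradox` is the name chosen here for the program that defeats it.

```python
def halts(program, arg):
    """Hypothetical oracle: returns True iff program(arg) would halt.

    Assumed to exist for the sake of argument; the contradiction below
    shows no such total decider can be implemented.
    """
    raise NotImplementedError("No such total decider can exist.")


def paradox(program):
    """Do the opposite of whatever the oracle predicts."""
    if halts(program, program):
        while True:       # oracle says we halt, so loop forever
            pass
    return "halted"       # oracle says we loop, so halt immediately


# Now ask: does paradox(paradox) halt?
#  - If halts(paradox, paradox) is True, paradox(paradox) loops forever.
#  - If it is False, paradox(paradox) halts at once.
# Either way the oracle is wrong about paradox, so it cannot exist.
```

This is why, in practice, a human must impose an external stopping rule, such as a step or time limit, rather than ask the machine to decide for itself whether its own deliberation will ever finish.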
'Toward a Fourth Law of Robotics: Preserving Attribution, Responsibility, and Explainability in an Algorithmic Society' by Frank A. Pasquale in (2017) 78
Ohio State Law Journal
Jack Balkin makes several important contributions to legal theory and ethics in his lecture, “The Three Laws of Robotics in the Age of Big Data.” He proposes “laws of robotics” for an “algorithmic society” characterized by “social and economic decision making by algorithms, robots, and AI agents.” These laws both elegantly encapsulate, and add new principles to, a growing movement for accountable design and deployment of algorithms. My comment aims to 1) contextualize his proposal as a kind of “regulation of regulation,” familiar from the perspective of administrative law, 2) expand the range of methodological perspectives capable of identifying “algorithmic nuisance,” a key concept in Balkin’s lecture, and 3) propose a fourth law of robotics to ensure the viability of Balkin’s three laws.