25 October 2022

Accountability

'Artificial Intelligence Accountability of Public Administration' by Francesca Bignami in (2022) 70 (Supp 1) The American Journal of Comparative Law i312–i346 comments 

In law and policy debates, there are many terms that get thrown around—big data, algorithms, artificial intelligence. One way to understand the technology is to think of big data as the fuel, algorithms as the rocket, and artificial intelligence as the planet that computer scientists seek to reach. That is, to go in reverse order, the goal of artificial intelligence is to empower computers to replicate all the things that humans can do—see, hear, speak, even think. And how is that to be accomplished? By algorithms that use big data. Or, more precisely, by machine learning algorithms that use big data. 

Machine learning algorithms represent a newer generation of computer science. Old-style algorithms are based on complete models with relatively few explanatory variables and contain a comprehensive set of if-then statements that give instructions to a computer. Machine learning algorithms are very different: given an initial algorithm, the data inputs, and the desired output, they do the work of generating what can be an extraordinarily complex operating algorithm. To quote the computer scientist Pedro Domingos:

Every algorithm has an input and an output: the data goes into the computer, the algorithm does what it will with it, and out comes the result. Machine learning turns this around: in goes the data and the desired result and out comes the algorithm that turns one into the other. Learning algorithms—also known as learners—are algorithms that make other algorithms. With machine learning, computers write their own programs, so we don’t have to.
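Domingos's point can be made concrete with a deliberately tiny, hypothetical sketch. The function below is a "learner" in his sense: in goes labeled data and out comes a new algorithm (here, a one-threshold classifier). The dataset and the threshold-search strategy are invented for illustration only and bear no resemblance to state-of-the-art systems.

```python
# A toy "learner": given data and desired outputs, it writes a new
# algorithm (a classifier function) rather than returning a fixed result.
# Hypothetical data and method, purely for illustration.

def learn_threshold_classifier(examples):
    """Search for the threshold that best separates the labeled examples.

    examples: list of (value, label) pairs, where label is 0 or 1.
    Returns a new function -- the algorithm the learner "wrote".
    """
    candidates = sorted(v for v, _ in examples)
    best_threshold, best_correct = None, -1
    for t in candidates:
        correct = sum(1 for v, label in examples if (v >= t) == (label == 1))
        if correct > best_correct:
            best_threshold, best_correct = t, correct

    def classifier(value):  # the generated algorithm
        return 1 if value >= best_threshold else 0

    return classifier

# In goes the data and the desired result...
training_data = [(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)]
# ...and out comes the algorithm that turns one into the other.
model = learn_threshold_classifier(training_data)
print(model(2), model(8))  # the learned rule classifies new inputs: 0 1
```

In this toy case the generated rule is a single, fully inspectable threshold; the legal difficulty discussed next arises because real learners generate rules over hundreds or thousands of interacting variables.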

The difficulty with machine learning, at least for the law, is that the actual content of that algorithm is often not fully known or knowable to the humans operating the code, not even to the scientific expert:

State-of-the-art machine learning deploys far more complex models to learn about the relationship across hundreds or even thousands of variables. Model complexity can make it difficult to isolate the contribution of any particular variable to the result. . . [R]elatedly, the machine learning outputs are often nonintuitive—that is, they operate according to rules that are so complex, multi-faceted, and interrelated that they defy practical inspection, do not comport with any practical human belief about how the world works, or simply lie beyond human-scale reasoning. Even if data scientists can spell out the embedded rule, such rules may not tell a coherent story about the world as humans understand it, defeating conventional modes of explanation. 
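The difficulty of isolating any one variable's contribution can be illustrated with a minimal, hypothetical sketch. In the invented dataset below the outcome depends on an interaction between two variables (an exclusive-or pattern), so examining either variable on its own reveals no signal at all, even though the joint rule predicts perfectly:

```python
# Hypothetical illustration: when the outcome is an interaction between
# variables, no single variable "explains" the result on its own.

data = [
    ((0, 0), 0),
    ((0, 1), 1),
    ((1, 0), 1),
    ((1, 1), 0),
]

def accuracy(predict):
    """Fraction of examples a candidate rule classifies correctly."""
    return sum(predict(x) == y for x, y in data) / len(data)

# Single-variable "explanations": predict from the first or second
# variable alone. Each does no better than chance.
print(accuracy(lambda x: x[0]))         # 0.5
print(accuracy(lambda x: x[1]))         # 0.5

# The joint rule, considering both variables together, is perfect.
print(accuracy(lambda x: x[0] ^ x[1]))  # 1.0
```

With two variables the interaction is still easy to state; with thousands of interrelated variables, as the quoted passage notes, the embedded rule may defy practical inspection altogether.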

Only recently has the law sought to address the new scientific and human reality. The novelty that the emerging legal frameworks seek to capture is the ability of machines to do what only a few decades ago most people thought only humans could do. So far, in the United States, no single legal definition of the technology and the policy problem has emerged as dominant. As the Administrative Conference of the United States puts it:

There is no universally accepted definition of “artificial intelligence,” and the rapid state of evolution in the field, as well as the proliferation of use cases, makes coalescing around any such definition difficult. . . Generally speaking, AI systems tend to have characteristics such as the ability to learn to solve complex problems, make predictions, or undertake tasks that heretofore have relied on human decision making or intervention. There are many illustrative examples of AI that can help frame the issue for the purpose of this Statement. They include, but are not limited to, AI assistants, computer vision systems, biomedical research, unmanned vehicle systems, advanced game-playing software, and facial recognition systems as well as application of AI in both information technology and operational technology.

There are a couple of definitions that have gained currency in the law. The first is contained in the John S. McCain National Defense Authorization Act for Fiscal Year 2019, and is used in the AI in Government Act of 2020, Executive Order 13960, and Office of Management and Budget, M-21-06. It reads as follows:

(1) Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets. (2) An artificial system developed in computer software, physical hardware, or another context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action. (3) An artificial system designed to think or act like a human, including cognitive architectures and neural networks. (4) A set of techniques, including machine learning, that is designed to approximate a cognitive task. (5) An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision making, and acting.

The second definition is contained in the National Artificial Intelligence Initiative Act of 2020. It reads:

The term artificial intelligence means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to—(A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action.

Lastly, there are definitions that have been formulated as part of sector-specific proposed legislation, for instance bills on facial recognition, driverless cars, and social media. One example is the proposed Algorithmic Justice and Online Platform Transparency Act, which is targeted at commercial uses of algorithms on online platforms. Section 3 defines “Algorithmic process” as:

a computational process, including one derived from machine learning or other artificial intelligence techniques, that processes personal information or other data for the purpose of determining the order or manner that a set of information is provided, recommended to, or withheld from a user of an online platform, including the provision of commercial content, the display of social media posts, or any other method of automated decision making, content selection, or content amplification.

In keeping with the state of terminological flux, this Report uses the terms AI and algorithms interchangeably. Where, as is often the case, the law finds its origins in the 1970s and earlier computer practices of public administration, the term algorithm can refer to either old-style computer programming or machine learning. There are a couple of other preliminaries to keep in mind before turning to the questionnaire. Unless otherwise stated, the discussion below refers to federal public administration, not to the state law governing the operation of the administrative agencies of the fifty states. Finally, following conventional practice, the discussion of administration excludes national security agencies and defense agencies because of the different concerns and legal frameworks that apply in those domains.