‘The Impoverished Publicness of Algorithmic Decision Making’ by Neli Frost, (2024) Oxford Journal of Legal Studies
The increasing use of machine learning (ML) in public administration requires that we think carefully about the political and legal constraints imposed on public decision making. These developments confront us with the following interrelated questions: can algorithmic public decisions be truly ‘public’? And, to what extent does the use of ML models compromise the ‘publicness’ of such decisions? This article is part of a broader inquiry into the myriad ways in which digital and AI technologies transform the fabric of our democratic existence by mutating the ‘public’. Focusing on the site of public administration, the article develops a conception of publicness that is grounded in a view of public administrations as communities of practice. These communities operate through dialogical, critical and synergetic interactions that allow them to track—as faithfully as possible—the public’s heterogeneous view of its interests, and reify these interests in decision making. Building on this theorisation, the article suggests that the use of ML models in public decision making inevitably generates an impoverished publicness, and thus undermines the potential of public administrations to operate as a locus of democratic construction. The article thus advocates for a reconsideration of the ways in which administrative law problematises and addresses the harms of algorithmic decision making.
The use of algorithmic—including machine learning (ML)—models in public decision making to assist or replace human administrators in their routine decision-making tasks has garnered much attention in recent years. Scandals such as the ‘Robodebt Scheme’ in Australia or the childcare benefits scheme in the Netherlands offer stark examples of why legal scholars are increasingly concerned by this use. From a legal standpoint, such use may certainly contribute to vital features of a properly functioning public administration, such as efficiency and expediency (together referred to as ‘scalability’) and ‘Weberian instrumental rationality’. But it also potentially undermines ethical and legal principles that are decidedly significant in this arena, such as fairness, due process, the rule of law, a range of human rights and principles of justice, as well as individual autonomy and dignity. Concerns that centre on these principles all meaningfully problematise the use of algorithmic models in public administration. Algorithmic decision making is indeed often biased, is typically opaque and unexplainable, and can result in unjust, rights-infringing decisions at least some of the time.
The present article joins these concerns, but offers a different, complementary frame to problematise the use of algorithmic models—particularly ML—in public administration. Broadly preoccupied with the tensions between novel technologies and democracy, this frame centres a unit of analysis that is constitutive of the very idea and fabric of democracy: the ‘public’. The article grapples with the following interrelated questions: can ML-driven public administrative decisions be truly ‘public’? And, to what extent does the use of ML models compromise the publicness of public decision making and decisions? These questions are part of a broader inquiry into the myriad ways in which digital and artificial intelligence technologies transform the very fabric of our democratic existence by mutating the ‘public’.
The main claim I advance in the article is two pronged. First, I propose that we view public administrations as important sites of democratic construction insofar as they maintain a quality of publicness. To make this argument, I offer a theory of publicness that is tailored to the arena of public administration, and explain the importance of this attribute for the potential of bureaucracies to function as coherent entities in modern democratic states. Second, I argue that the increasing deployment of ML technologies compromises the publicness of administrative decision making and decisions, generating an impoverished publicness and thereby destabilising this site’s democratic potential.
Together, these two prongs contribute to thinking in the fields of law and technology, public law theory and democratic jurisprudence, and administrative law. To the first, the article cautions that the challenges it identifies are likely to persist even if technological advances eventually overcome the concerns about the bias and opacity of ML models that currently occupy much of the literature. To the second, it offers a view of publicness that complements parallel treatments of this concept, but that is tailored to the specific site of public administration and its unique features, and is also well suited to address the challenges that bureaucracies face today in the wake of technological developments. To the last, it highlights administrative law’s existing limits in fully addressing the plights of algorithmic decision making, and points to novel sites for regulatory intervention.
The arguments the article puts forward unfold as follows. Section 2 addresses the first prong of the argument. It theorises what publicness means in the context of public administration—as both a norm and an institutional practice. Publicness in this account centres on the web of interactions between public administrators themselves and between them and the public’s elected representatives. It is theorised as an attribute that relates to public administrations’ praxis of decision making and to their decisional outcomes. Briefly, it is grounded in a view of public administrations as ‘communities of practice’ that operate through dialogical, critical and synergetic interactions driven not only by explicit knowledge, but also by tacit, practical forms of knowledge. Publicness is further grounded in the view that this unique feature of the fabric and operations of public administrations potentially allows them to track—as faithfully as possible—the public’s intricate view of its interests and to produce decisions that reify those interests.
This theory of publicness is grounded in a theory of democracy that explains its normative value. The normativity of publicness lies in the claim that it imbues public administration—that unelected, democratically suspect stratum of state functionaries—with democratic legitimacy by institutionalising the neo-republican ideal of liberty as non-domination within the bureaucracy. Publicness is also grounded in a theory of the state that helps frame its ontology. I draw here on the work of Martin Loughlin, and his account of power, politics and representation, to explain the political and institutional constraints in which public administrations operate and which shape their task of governing. Importantly, my concept of publicness is equally grounded in empirical accounts of how public administrations operate in practice, which demonstrate its plausibility. I draw here on literature in both law and political science that empirically examines the nature of interactions between public administrators themselves and their interactions with the public’s elected representatives.
The article then proceeds in section 3 to address the second limb of the argument, which problematises the use of ML models in public administration. I begin here by situating the claim in the broader literature on algorithmic public decision making, and offer an overview of the types of normative concerns that have attracted legal scholars’ attention in this context. I then move to offer a different problematisation that draws on my analysis of publicness and the necessary conditions for its viability. Here, I suggest that the use of ML models in public administration is inimical to the operations of communities of practice. The claim, in brief, is that the knowledge and operational logic of ML models are largely incompatible with the types of knowledge and interactions that drive communities of practice. ML models thus undermine the quality of publicness, so that public decision making that incorporates these models will feature an impoverished publicness. On this account, publicness is not only deeply political, but also deeply human. I conclude with the observation that if this is the case, the analysis of publicness should inform how we shape the law that regulates ML systems.