Orly Lobel's 'The Law of AI for Good' (San Diego Legal Studies Paper No. 23-001) comments
Legal policy and scholarship are increasingly focused on regulating technology to safeguard against risks and harms, neglecting the ways in which the law should direct the use of new technology, and in particular artificial intelligence (AI), for positive purposes. This article pivots the debates about automation, finding that the focus on AI wrongs is descriptively inaccurate, undermining a balanced analysis of the benefits, potential, and risks involved in digital technology. Further, the focus on AI wrongs is normatively and prescriptively flawed, narrowing and distorting the law reforms currently dominating tech policy debates. The law-of-AI-wrongs focuses on reactive and defensive solutions to potential problems while obscuring the need to proactively direct and govern increasingly automated and datafied markets and societies. Analyzing a new Federal Trade Commission (FTC) report, the Biden administration’s 2022 AI Bill of Rights and American and European legislative reform efforts, including the Algorithmic Accountability Act of 2022, the Data Privacy and Protection Act of 2022, the European General Data Protection Regulation (GDPR) and the new draft EU AI Act, the article finds that governments are developing regulatory strategies that almost exclusively address the risks of AI while giving short shrift to its benefits. The policy focus on the risks of digital technology is pervaded by logical fallacies and faulty assumptions, failing to evaluate AI in comparison to human decision-making and the status quo. The article presents a shift from the prevailing absolutist approach to one of comparative cost-benefit analysis. The role of public policy should be to oversee digital advancements, verify capabilities, and scale and build public trust in the most promising technologies.
A more balanced regulatory approach to AI also illuminates tensions between current AI policies. Because AI requires better, more representative data, the right to privacy can conflict with the right to fair, unbiased, and accurate algorithmic decision-making. This article argues that the dominant policy frameworks regulating AI risks—emphasizing the right to human decision-making (human-in-the-loop) and the right to privacy (data minimization)—must be complemented with new corollary rights and duties: a right to automated decision-making (human-out-of-the-loop) and a right to complete and connected datasets (data maximization). Moreover, a shift to proactive governance of AI reveals the necessity for behavioral research on how to establish not only trustworthy AI, but also human rationality and trust in AI. Ironically, many of the legal protections currently proposed conflict with existing behavioral insights on human-machine trust. The article presents a blueprint for policymakers to engage in the deliberate study of how irrational aversion to automation can be mitigated through education, private-public governance, and smart policy design.
'Trustworthy artificial intelligence and the European Union AI act: On the conflation of trustworthiness and acceptability of risk' by Johann Laux, Sandra Wachter and Brent Mittelstadt in (2023) Regulation and Governance comments
The global race to establish technological leadership in artificial intelligence (AI) is accompanied by an effort to develop “trustworthy AI.” Numerous policy frameworks and regulatory proposals make principled suggestions as to which features render AI “trustworthy” [cf. the overviews in Vesnic-Alujevic et al., 2020 and Thiebes et al., 2021]. Private companies such as auditing firms are offering their clients support in designing and deploying “trustworthy AI” (Mökander & Floridi, 2021). The emphasis on trustworthiness serves a strategic purpose: to induce people to place trust in AI so that they will use it more and, hence, unlock the technology's economic and social potential.
This strategy is not unfounded. Trust cannot be created on command. Signaling trustworthiness is thus the most promising option for regulators and technologists who seek to create the initial trust needed for a broader uptake of AI (Drake et al., 2021; O'Neill, 2012). Success, however, is not guaranteed. Even allegedly trustworthy persons, institutions, and technologies might not be trusted after all. For example, populations which have historically faced discrimination may reasonably distrust broadly accepted signals of trustworthiness (Scheman, 2020).
As part of the global trustworthiness effort, the European Commission recently proposed a legal framework for trustworthy AI, the “AI Act” (European Commission, 2021b). The AI Act explicitly pursues the dual purpose of promoting the uptake of the technology and addressing the risks associated with its use (AI Act, Recital 81 and p. 1). At the time of writing, the proposal is being discussed by the Council of the European Union and the European Parliament, both of which must agree on a common text before the AI Act can pass into law.
As this article will show, in its proposal the Commission chose to understand “trustworthiness” narrowly in terms of the “acceptability” of AI's risks, with the latter being primarily assessed through conformity assessments carried out by technology experts (see Section 2.1). This regulatory conflation of trustworthiness with the acceptability of risks invites further reflection.
Based on a systematic narrative literature review on trust research, this article argues that the European Union (EU) is overselling its regulatory ambition and oversimplifying a highly complex and heterogeneous set of closely related concepts. First, while there is an inherent relationship between trust, trustworthiness, and the perceived acceptability of risks (Poortinga & Pidgeon, 2005), the AI Act will itself require citizens' trust to succeed in promoting the uptake of AI. Second, the concept of trustworthiness serves an important normative function. It allows one to assess whether people's actual levels of trust are normatively “justified” (cf. Lee, 2022) or “well-placed.” This justification depends on whether their degree of trust in something matches its degree of trustworthiness. A person's trust can be “blind” or misplaced; so too can their mistrust. There is a rich philosophical debate as to whether AI even has the capacity to be a genuine object of trust. Its lack of human qualities such as intentionality could prohibit such attributions. AI may then be merely reliable, but not trustable [Miller & Freiman, 2020; for the debate, see further Rieder et al. (2021), Weydner-Volkmann and Feiten (2021), Ryan (2020), Grodzinsky et al. (2020), Nickel et al. (2010), and Taddeo (2009)].
Conflating trust and trustworthiness with the acceptability of risks blurs the distinction between acceptability judgments made by domain experts and the trustworthiness of AI systems implemented in society. Others have previously criticized the AI Act for outsourcing decisions about which risks are “acceptable” to AI providers with an economic interest in marketing the AI system (Smuha et al., 2021). We argue that trustworthiness is not a seal of approval but a longitudinal concept, one that necessitates an iterative process of controls, communication, and accountability to establish and maintain across both AI technologies and the institutions using them. The AI Act suggests an unfounded bright-line distinction between acceptable and unacceptable risks and hence between trustworthy and non-trustworthy AI. This approach is incompatible with the conceptualization of trustworthiness as a longitudinal process as opposed to a binary characteristic of systems and the risks they pose. This article therefore aims to provide an intervention into the EU's policy effort to develop “trustworthy AI” through risk regulation, based on a review of the multi-disciplinary literature on trust. Instead of working out a coherent theory of trust, it aims to demonstrate the conceptual futility of labeling a complex AI system “trustworthy” prior to placing it on the market.
We limit our analysis to the use of AI in public institutions. The potential of AI for the public sector is rapidly gaining interest (Gesk & Leyer, 2022; see also de Sousa et al., 2019). AI systems have already been introduced in public institutions (Desouza et al., 2017), with promises of higher quality services and increased efficiency (Sun & Medaglia, 2019). At the same time, AI's characteristics have led to considerable debate about whether and how the public sector should deploy the technology (Green, 2022). Many AI systems “reason by association”: they detect statistical patterns in data but do not offer causal explanations (Bishop, 2021). In addition, an AI system might include so many parameters that its outcome is opaque, resembling a “black box”: there is too much information to interpret its outcome clearly (Dignum, 2019). These features arguably set AI systems apart from other digital technologies already in use by public institutions.
Through the proposed AI Act and other instruments, the European Commission nevertheless seeks to “make the public sector a trailblazer for using AI” (European Commission, 2021a). Its 2020 “White Paper” on AI (European Commission, 2020) holds it “essential” that the public sector, especially in healthcare and transport, begins to “rapidly” deploy products and services that rely on AI (White Paper, p. 8). The European Commission also supports the uptake of AI in the domain of justice (European Commission, 2018).
While making AI trustworthy has garnered substantial political momentum, equal attention needs to be paid to AI's potential to erode the trustworthiness of public institutions and, with it, their own ability to produce trust in the population (Bodó, 2021). Without trust, the public sector risks losing citizens' support and compliance.
Some publicly documented uses of automated decision systems have led to widespread criticism and the cessation of operations. Consider, for example, the algorithmic prediction of social welfare fraud in marginalized neighborhoods in the Netherlands or the algorithmic profiling of families for the early detection of vulnerable children in Denmark (Kayser-Bril, 2020; Vervloesem, 2020). AI in the public sector can quickly become politicized, not least because of the public sector's dual role: it is at the same time drawn to using AI to increase its efficiency and under an obligation to protect citizens from harm caused by AI (Kuziemski & Misuraca, 2020).
Citizens' concerns about AI in the public sector have likewise been identified as one of the major obstacles to broader implementation (Gesk & Leyer, 2022, pp. 1–2). However, while the use of (non-AI-based) information and communication technology in the public sector has been widely researched—often under the rubric of “eGovernment”—the use of AI in the public sector and its acceptance by citizens is still understudied [Gesk & Leyer, 2022; drawing on Sun and Medaglia (2019); Wang and Liao (2008)]. At the same time, insights gained from the private sector cannot easily be transferred to the public sector, not least because the latter's aim is not to maximize profits from customers [see the references in Gesk and Leyer (2022, p. 1)]. Moreover, public services' adoption of AI differs from the private sector's in that it can have a coercive element: citizens will often have no choice but to use and pay for the services (through taxes or insurance premiums) whether or not they prefer an AI system to be involved (Aoki, 2021). As the coercive power of public authority requires justification (Simmons, 1999), AI in the public sector thus also raises questions of legitimacy.
Politicization can add further justificatory pressure. Trust researchers note that, in highly politicized contexts of AI implementation, conflicts about what constitutes a “right” or “fair” decision are likely to erupt (de Bruijn et al., 2022; drawing on Bannister & Connolly, 2011b). The stakes of implementing AI in public services are thus high, invoking the foundational concepts of trust in and legitimacy of public authority.
This article proceeds as follows. Section 2 begins with a trust-theoretical reconstruction of the conflation of “trustworthiness” with the “acceptability of risks” in the EU's AI policy. We then turn to our review of the literature on trust in AI implemented within public institutions. One simple definition of “trust” is the willingness of one party to expose themselves to a position of vulnerability towards a second party under conditions of risk and uncertainty as regards the intentions of that second party (similarly, Bannister & Connolly, 2011b, p. 139). However, the term “trust” has been given multiple definitions within and across the social science disciplines, so much so that the state of defining trust has been labeled one of “conceptual confusion” (McKnight & Chervany, 2001). This makes comparing and evaluating trust research across disciplines (and sometimes even within one discipline) extremely difficult.
Section 3, therefore, develops a prescriptive set of variables for reviewing trust research in the context of AI. We differentiate between normative and empirical research as well as between the subjects, objects, and roles of trust. Section 4 then uses those variables as a structure for a narrative review of prior research on trust and trustworthiness in AI in the public sector. We identify common themes in the reviewed literature and reflect on the heterogeneity of the field and, thus, the many ways in which trust in AI can be defined, measured, incentivized, and governed.
This article concludes in Sections 5 and 6 by relating the findings of the literature review to the EU's AI policy and especially its proposed AI Act. It highlights the uncertain prospects of the AI Act succeeding in engineering citizens' trust: there remains a threat of misalignment between actual levels of trust and the trustworthiness of applied AI. The conflation of “trustworthiness” with the “acceptability of risks” in the AI Act will thus be shown to be inadequate.