27 September 2023

AI Ethics

'The technology triad: disruptive AI, regulatory gaps and value change' by Jeroen K. G. Hopster and Matthijs M. Maas in (2023) AI and Ethics comments 

Emerging technologies such as artificial intelligence are engines of social change. Such change can manifest itself directly in a range of domains (healthcare, military, governance, industry, etc.) [1]. For instance, technologies can drive shifts in power relations at the societal level [2, 3], as well as internationally [4,5,6]. Less visible, but no less significant, technological change can also have “soft impacts” [7], by challenging and changing entrenched norms, values, and beliefs [8, 9]. In virtue of such societally disruptive “second-order effects” [10]—which go far beyond the domain-specific changes of “first-order” market disruptions [11]—emerging technologies such as AI have been described as “socially disruptive” [12] or “transformative” [13]. 

For instance, while there is still considerable uncertainty over AI technology’s future trajectory, AI experts expect continued progress towards increasingly capable systems [14,15,16]. Further capability developments are likely to make AI’s eventual societal impacts considerable, possibly on par with previous radical and irreversible societal transformations such as the industrial revolution [13, 17]. Even under a baseline scenario that (implausibly) assumes no further progress in AI, the mere proliferation of many existing AI techniques to existing actors, and their integration with pre-existing digital infrastructures, would suffice to drive extensive societal impacts [18, pp. 56–82]. Indeed, Dafoe has argued that AI’s transformative implications may be grasped by considering it as the next step in a long line of ‘information technologies’ broadly conceived, spanning back to earlier ‘technologies’ such as speech and culture, writing, the printing press, digital services, and communications technologies; or as the next ‘intelligence technology’, following previous mechanisms such as “price mechanisms in a free market, language, bureaucracy, peer review in science, and evolved institutions like the justice system and law” [19]. Accordingly, we take AI to be a paradigmatic example of an emerging Socially Disruptive Technology [12]—i.e., a technology with the potential to affect important pillars of human life and society, in a way that raises perennial ethical and political questions [20]. 

The rise of AI has provoked increasing public concern over the technology’s potential ethical impacts [21,22,23], which has translated into growing calls for regulation and ethical guidance [24]. The European Commission has begun to draft an “Artificial Intelligence Act” [25]; Chinese government bodies have articulated new regulatory moves for AI governance, setting out requirements for algorithmic transparency and explainability [26]. There have also been notable steps in governance for AI at the global level [27,28,29], such as the establishment of the ‘Global Partnership on Artificial Intelligence’ (GPAI) [30] and the UNESCO ‘Recommendation on the Ethics of Artificial Intelligence’ [31], the first such global agreement. Such initiatives reflect the growing view that the sociotechnical impacts of transformative AI should not be left to run their own course without supervision [32], but may require intervention and accountability to safeguard core values such as justice, fairness, and democracy [33]. Yet scholars, policymakers and the public continue to grapple with questions over how AI is concretely impacting societies, what values it impinges upon, and how these societies can and should best respond. 

One challenge to formulating adequate responses to the ‘first-order problems’ posed by AI is that these responses can be derailed or suspended by the technology’s underlying second-order disruptions of the foundations and normative categories of both ethics and law (see Table 1 for key concepts). We understand first-order problems as those that can be adequately addressed in terms of pre-existing norms or prescriptions, such as pre-existing ethical norms or legal codes. For instance, the question of how pre-existing standards of jus in bello can be applied to warfare with autonomous weapons systems is a first-order problem. Second-order problems or disruptions, by contrast, call into question the appropriateness or adequacy of existing ethical and regulatory schemas. For instance, it has been argued that autonomous weapons systems create responsibility gaps that make the very idea of jus in bello inapplicable [34], and it is not obvious how this problem should be resolved. Given its major societal impact, it seems very likely that AI will drive second-order disruptions of various kinds, affecting ethical norms and values as well as systems of regulation. How can we rely on ethical and regulatory frameworks to cope with emerging technologies when those frameworks are themselves being changed by technology? 

In this paper, we propose a conceptual approach that helps to mitigate this challenge by addressing the disruptive implications of emerging technologies for ethics and regulation in tandem. To date, the fields of Technology Ethics (TechEthics) and Technology Law (TechLaw) have developed sophisticated frameworks that explore the co-evolutionary interaction of technology with existing (moral or legal) systems, both to analyze these impacts and to normatively prescribe appropriate responses. However, these frameworks have remained isolated from one another and insufficiently acknowledge that the norms of TechEthics and the regulations of TechLaw co-evolve. We propose to integrate the dyadic models of TechLaw and TechEthics, shifting focus to the triadic relations and mutual shaping of values, technology, and regulation. We claim that a triadic values-technology-regulation model is more descriptively accurate and highlights a broader portfolio of ethical, technical, and regulatory interventions that can enable effective ethical triage of Socially Disruptive Technologies. 

We spell out this claim in the subsequent sections of this paper. In Sect. 2, we further clarify what second-order disruptions amount to and how they challenge TechEthics and TechLaw. In Sect. 3, we present succinct mappings of the dyadic models of TechEthics and TechLaw and point out some of their limitations. Specifically, we zoom in on AI technology and explain why second-order disruptions by AI cannot easily be captured by the dyadic models. In Sect. 4, we sketch a triadic model (the “Technology Triad”) that aims to synthesize these two frameworks, showing how it helps to grapple with the second-order societal impacts of AI both analytically and prescriptively. In Sect. 5, we evaluate this model, arguing that it is both more descriptively accurate (as it allows the mapping of second-order impacts on values and norms, through changes in legal systems—or on legal systems, through changes in values and norms) and more instrumentally useful (and normatively valuable) in responding to these changes than either of the dyadic models used in isolation. We accordingly provide a step-by-step operationalization of this framework through a series of questions that can be asked of historical, ongoing, or anticipated technology-driven societal disruptions, and we illustrate its application with two cases: one historical (how the adoption of the GDPR channeled and redirected the evolution of the ethical value of ‘privacy’ after it had been put under pressure by digital markets), and one anticipatory (looking at the disruptions expected from the ongoing wave of generative AI systems). We conclude that approaching disruptive AI through the lens of the “Technology Triad” can lead to more resilient ethical and regulatory responses.