07 October 2019

Infernal Engines

'The Immoral Machine' by John Harris in (2019) Cambridge Quarterly of Healthcare Ethics comments
In a recent paper in Nature entitled ‘The Moral Machine Experiment’, Edmond Awad, et al. make a number of breathtakingly reckless assumptions, both about the decisionmaking capacities of current so-called ‘autonomous vehicles’ and about the nature of morality and the law. Accepting their bizarre premise that the holy grail is to find out how to obtain cognizance of public morality and then program driverless vehicles accordingly, the following are the four steps to the Moral Machinists’ argument:
(1) Find out what ‘public morality’ will prefer to see happen; 
(2) On the basis of this discovery, claim both popular acceptance of the preferences and persuade would-be owners and manufacturers that the vehicles are programmed with the best solutions to any survival dilemmas they might face; 
(3) Citizen agreement thus characterized is then presumed to deliver moral license for the chosen preferences; 
(4) This yields ‘permission’ to program vehicles to spare or condemn those outside the vehicles when their deaths will preserve vehicle and occupants.
This paper argues that the Moral Machine Experiment fails dramatically on all four counts.
Harris' critique concerns 'The Moral Machine experiment' by Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon and Iyad Rahwan in (2018) 563 Nature 59–64, in which the authors comment
With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents’ demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics.
They argue
We are entering an age in which machines are tasked not only to promote well-being and minimize harm, but also to distribute the well-being they create, and the harm they cannot eliminate. Distribution of well-being and harm inevitably creates tradeoffs, whose resolution falls in the moral domain. Think of an autonomous vehicle that is about to crash, and cannot find a trajectory that would save everyone. Should it swerve onto one jaywalking teenager to spare its three elderly passengers? Even in the more common instances in which harm is not inevitable, but just possible, autonomous vehicles will need to decide how to divide up the risk of harm between the different stakeholders on the road. Car manufacturers and policymakers are currently struggling with these moral dilemmas, in large part because they cannot be solved by any simple normative ethical principles such as Asimov’s laws of robotics. 
Asimov’s laws were not designed to solve the problem of universal machine ethics, and they were not even designed to let machines distribute harm between humans. They were a narrative device whose goal was to generate good stories, by showcasing how challenging it is to create moral machines with a dozen lines of code. And yet, we do not have the luxury of giving up on creating moral machines. Autonomous vehicles will cruise our roads soon, necessitating agreement on the principles that should apply when, inevitably, life-threatening dilemmas emerge. The frequency at which these dilemmas will emerge is extremely hard to estimate, just as it is extremely hard to estimate the rate at which human drivers find themselves in comparable situations. Human drivers who die in crashes cannot report whether they were faced with a dilemma; and human drivers who survive a crash may not have realized that they were in a dilemma situation. Note, though, that ethical guidelines for autonomous vehicle choices in dilemma situations do not depend on the frequency of these situations. Regardless of how rare these cases are, we need to agree beforehand how they should be solved. 
The key word here is ‘we’. As emphasized by former US president Barack Obama, consensus in this matter is going to be important. Decisions about the ethical principles that will guide autonomous vehicles cannot be left solely to either the engineers or the ethicists. For consumers to switch from traditional human-driven cars to autonomous vehicles, and for the wider public to accept the proliferation of artificial intelligence-driven vehicles on their roads, both groups will need to understand the origins of the ethical principles that are programmed into these vehicles. In other words, even if ethicists were to agree on how autonomous vehicles should solve moral dilemmas, their work would be useless if citizens were to disagree with their solution, and thus opt out of the future that autonomous vehicles promise in lieu of the status quo. Any attempt to devise artificial intelligence ethics must be at least cognizant of public morality. 
Accordingly, we need to gauge social expectations about how autonomous vehicles should solve moral dilemmas. This enterprise, however, is not without challenges. The first challenge comes from the high dimensionality of the problem. In a typical survey, one may test whether people prefer to spare many lives rather than few; or whether people prefer to spare the young rather than the elderly; or whether people prefer to spare pedestrians who cross legally, rather than pedestrians who jaywalk; or yet some other preference, or a simple combination of two or three of these preferences. But combining a dozen such preferences leads to millions of possible scenarios, requiring a sample size that defies any conventional method of data collection. 
The second challenge makes sample size requirements even more daunting: if we are to make progress towards universal machine ethics (or at least to identify the obstacles thereto), we need a fine-grained understanding of how different individuals and countries may differ in their ethical preferences. As a result, data must be collected worldwide, in order to assess demographic and cultural moderators of ethical preferences. 
As a response to these challenges, we designed the Moral Machine, a multilingual online ‘serious game’ for collecting large-scale data on how citizens would want autonomous vehicles to solve moral dilemmas in the context of unavoidable accidents. The Moral Machine attracted worldwide attention, and allowed us to collect 39.61 million decisions from 233 countries, dependencies, or territories (Fig. 1a). In the main interface of the Moral Machine, users are shown unavoidable accident scenarios with two possible outcomes, depending on whether the autonomous vehicle swerves or stays on course (Fig. 1b). They then click on the outcome that they find preferable. Accident scenarios are generated by the Moral Machine following an exploration strategy that focuses on nine factors: sparing humans (versus pets), staying on course (versus swerving), sparing passengers (versus pedestrians), sparing more lives (versus fewer lives), sparing men (versus women), sparing the young (versus the elderly), sparing pedestrians who cross legally (versus jaywalking), sparing the fit (versus the less fit), and sparing those with higher social status (versus lower social status). Additional characters were included in some scenarios (for example, criminals, pregnant women or doctors), who were not linked to any of these nine factors. These characters mostly served to make scenarios less repetitive for the users. After completing a 13-accident session, participants could complete a survey that collected, among other variables, demographic information such as gender, age, income, and education, as well as religious and political attitudes. Participants were geolocated so that their coordinates could be used in a clustering analysis that sought to identify groups of countries or territories with homogeneous vectors of moral preferences. 
Here we report the findings of the Moral Machine experiment, focusing on four levels of analysis, and considering for each level of analysis how the Moral Machine results can trace our path to universal machine ethics. First, what are the relative importances of the nine preferences we explored on the platform, when data are aggregated worldwide? Second, does the intensity of each preference depend on the individual characteristics of respondents? Third, can we identify clusters of countries with homogeneous vectors of moral preferences? And fourth, do cultural and economic variations between countries predict variations in their vectors of moral preferences?
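As a rough illustration of the scale and aggregation problem the authors describe, the sketch below (mine, not theirs) shows why nine binary contrasts already generate a large scenario space, and how pairwise "spare this side or that side" choices might be tallied into per-country preference vectors whose distances a clustering step could then work from. The country names, propensities and factor labels are invented for illustration; the paper's own analysis uses conjoint-style estimation over the real Moral Machine data, not this toy tally.

# Illustrative sketch only -- not the authors' code, platform, or data.
import itertools
import random
from collections import defaultdict

# The nine binary contrasts described in the paper (labels are my shorthand).
FACTORS = ["species", "intervention", "passengers_vs_pedestrians", "number",
           "gender", "age", "legality", "fitness", "social_status"]

# (a) Scenario space: even the binary contrasts alone multiply quickly, before
# varying how many and which characters appear on each side of the dilemma.
print("combinations of the nine binary contrasts:", 2 ** len(FACTORS))  # 512

# (b) Toy aggregation: each record says which contrast a dilemma tested and
# whether the respondent spared the "first" side. Propensities are invented.
random.seed(0)
lean = {"Country A": 0.80, "Country B": 0.75, "Country C": 0.45}
records = [(c, random.choice(FACTORS), random.random() < lean[c])
           for c in lean for _ in range(3000)]

tallies = defaultdict(lambda: [0, 0])      # (country, factor) -> [spared, shown]
for country, factor, spared in records:
    tallies[(country, factor)][0] += int(spared)
    tallies[(country, factor)][1] += 1

# Preference vector: per country, the share of dilemmas in which side 1 was spared,
# one entry per factor.
vectors = {c: [tallies[(c, f)][0] / max(tallies[(c, f)][1], 1) for f in FACTORS]
           for c in lean}

# Pairwise distances between preference vectors -- the kind of quantity a
# clustering method would use to group countries with similar moral profiles.
def l1(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

for c1, c2 in itertools.combinations(lean, 2):
    print(f"{c1} vs {c2}: {l1(vectors[c1], vectors[c2]):.2f}")

Run as written, the toy tally places Country A and Country B close together and Country C further away, which is the shape of result the clustering analysis in the paper is after, albeit derived there from real responses and a far richer scenario design.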
Harris had earlier considered 'Who owns my autonomous vehicle: Ethics and responsibility in artificial and human intelligence' in (2018) 27(4) Cambridge Quarterly of Healthcare Ethics 500–509.

'Law and Technology: Two Modes of Disruption, Three Legal Mindsets and the Big Picture of Regulatory Responsibilities' by Roger Brownsword in (2018) 14 Indian Journal of Law and Technology comments
This article introduces three ideas that are central to understanding the ways in which law and legal thinking are disrupted by emerging technologies and to maintaining a clear focus on the responsibilities of regulators. The first idea is that of a double disruption that technological innovation brings to the law. While the first disruption tells us that the old rules are no longer fit for purpose and need to be revised and renewed, the second tells us that, even if the rules have been changed, regulators might now be able to dispense with the use of rules (the rules are redundant) and rely instead on technological instruments. 
The second idea is that the double disruption leads to a three-way legal and regulatory mind-set that is divided between: (i) traditional concerns for coherence in the law; (ii) modern concerns with instrumental effectiveness; and (iii) a continuing concern with instrumental effectiveness and risk management but now focused on the possibility of employing technocratic solutions. The third idea is one of a hierarchy of regulatory responsibilities. Most importantly, regulators have a 'stewardship' responsibility for maintaining the 'commons'; then they have a responsibility to respect the fundamental values of a particular human social community; and, finally, they have a responsibility to seek out an acceptable balance of legitimate interests within their community. 
Such disruptions notwithstanding, it is argued that those who have regulatory responsibilities need to be able to think through the regulatory noise to frame questions in the right way and to respond in ways that are rationally defensible and reasonable. In an age of smart machines and new possibilities for technological fixes, traditional institutional designs might need to be reviewed.
Brownsword states
In a series of articles, I have argued that lawyers need to engage more urgently with the regulatory effects of new technologies; and, while I have argued this in relation to the full spectrum of technological interventions, whether they are 'soft' and 'assistive' or 'hard' and fully 'managerial', my concerns have been primarily with the employment of hard technologies. For, whereas assistive technologies (such as those surveillance and identification technologies that are employed in criminal justice systems) reinforce the prohibitions and requirements that are prescribed by legal rules, full-scale technological management introduces a radically different regulatory approach by redefining the practical options that are available to regulatees. Instead of seeking to channel the conduct of regulatees by prescribing what they 'ought' or 'ought not' to do, regulators focus on controlling what regulatees actually can or cannot do in particular situations. Instead of finding themselves reminded of their legal obligations, regulatees find themselves obliged or 'forced' to act in certain ways. 
If lawyers are to get to grips with these new articulations of regulatory power, I have suggested that they frame their inquiries by employing a broad concept of the 'regulatory environment' (one that recognises both normative rule-based and non-normative technology-based regulatory mechanisms); I have identified the 'complexion' of the regulatory environment as an important focus for inquiry (because the use of technological management can compromise the context for the possibility of both autonomous and moral human action); and I have argued that it is imperative that the use of regulatory technologies is authorised in accordance with the ideal of the Rule of Law. 
I have also posed a number of questions about the future of traditional rules of law where 'regulators' (broadly conceived) turn away from rules in favour of technological solutions or where historic regulatory objectives are simply taken care of by automation - such as will be the case, for example, when it is the design of autonomous vehicles that takes care of concerns about human health and safety that have hitherto been addressed by legal rules directed at human drivers of vehicles. Hence, if we look ahead, what does the increasing use of technological management signify for traditional rules of criminal law, torts, and contracts? Will these rules be rendered redundant, will they be directed at different human addressees, or will they simply be revised? In short, how are traditional laws disrupted by technological innovation and, in an age of technological management, how are rule-based regulatory strategies disrupted? It is questions of this kind that I want to begin to address in the present article. 
Yet, why linger over such questions? After all, the prospect of technological management implies that rules of any kind have a limited future. To the extent that technological management takes on the regulatory roles traditionally performed by legal rules, those rules seem to be redundant; and, to the extent that technological management does not supersede but co-exists with legal rules, while some rules will be redirected, others will need to be refined and revised (imagine, for example, a legal framework that covers both autonomous and driven vehicles sharing the same roads). Accordingly, the short answer to these questions is that the destiny of legal rules is to be found somewhere in the range of redundancy, replacement, redirection, revision and refinement. Precisely which rules are replaced, which refined, which revised and so on, will depend on both technological development and the way in which particular communities respond to the idea that technologies, as much as rules, are available as regulatory instruments - indeed, that legal rules are just one species of regulatory technologies. 
This short answer, however, does not do justice to the deeper and distinctive disruptive effects of technological development on both legal rules and the regulatory mind-set. Accordingly, in this article, I want to sketch a back-story that features two overarching ideas: one is the idea of a double technological disruption and the other is the idea of a regulatory mind-set that is divided in three ways. With regard to the first of these overarching ideas, the double disruption has an impact on: (i) the substance of traditional legal rules; and then (ii) on the use - or, rather, non-use - of legal rules as the regulatory modality. With regard to the second overarching idea, the ensuing three-way legal and regulatory mind-set is divided between: (i) traditional concerns for coherence in the law; (ii) modern concerns with instrumental effectiveness (relative to specified regulatory purposes) and particularly with seeking an acceptable balance of the interests in beneficial innovation and management of risk; and (iii) a continuing concern with instrumental effectiveness and risk management but now focused on the possibility of employing technocratic solutions. 
If what the first disruption tells us is that the old rules are no longer fit for purpose and need to be revised and renewed, then the second disruption tells us that, even if the rules have been changed, regulators might now be able to dispense with the use of rules (the rules are redundant) and rely instead on technological instruments. Moreover, what the disruptions further tell us is that we can expect to find a plurality of competing mind-sets seeking to guide the regulatory enterprise. However, what none of this tells us is how regulators should engage with these disruptions. When there is pressure on regulators to think like coherentists (focusing on the internal consistency and integrity of legal doctrine), when regulators are expected to think in a way that is sensitive to risk and to make instrumentally effective responses, and when there is now pressure to think beyond rules to technological fixes, what exactly are the responsibilities of, and priorities for, regulators? Without some critical distance and a sense of the bigger picture, how are regulators to plot a rational and reasonable course through a conflicted and confusing regulatory discourse? Although these are large questions, they are ones that I also want to begin to address in this article. 
Accordingly, the shape of the article, which is in four principal Parts, is as follows. We start (in Parts II and III) with some questions about the future of traditional legal rules, the backstory to which is one of a double disruption that technological innovation brings to the law and, in consequence, a three-way re-configuration of the legal and regulatory mind-set. While the double disruption is elaborated in Part II of the article, the three elements of the re-configured legal and regulatory mind-set (namely, the coherentist, regulatory-instrumentalist, and technocratic elements) are elaborated in Part III. Given this re-configuration, we need to think about how regulators should engage with new technologies, whether viewing them as regulatory targets or as regulatory tools; and this invites thoughts about the bigger picture of regulatory responsibilities as well as regulatory roles and institutional competence. Some reflections on the bigger picture are presented in Part IV of the article; and, in Part V, I offer some initial thoughts on the competence of, respectively, the Courts and the Legislature to adopt the appropriate mind-set. While this discussion will not enable us to predict precisely what the future of legal rules will be, it will enable us to appreciate the significance of the disruption to traditional legal mind-sets, to understand the confusing plurality of voices that will be heard in our regulatory discourses, and to have a sense of the priorities for regulators.