Showing posts with label Smart Vehicles. Show all posts

30 July 2020

Government Access to Vehicle Data

The Australian National Transport Commission (NTC) discussion paper Government access to vehicle-generated data comments
 Vehicles are increasingly capturing a range of useful data about the road environment, the vehicle itself and the way it is used. The vehicle industry is also rapidly expanding the capability of vehicles to connect and share data. This could provide a new opportunity for governments to improve their transportation systems through access to and use of this new data. If government access is not considered in a nationally consistent way, governments risk creating a fragmented, overly burdensome or low-access data environment. 
Australia’s transport agencies have identified that this new data will be important for operating more dynamic and responsive transportation systems. This new vehicle-generated data has the potential to improve road safety, optimise the road network and better inform network planning. For this report we have defined vehicle-generated data as any data generated by a vehicle that produces information about the vehicle, the environment around the vehicle or the use of the vehicle. 
The purpose of this project is to develop policy options for government access to and use of vehicle-generated data for the purposes of road safety, network operations, investment, maintenance and planning. 
The purpose of this discussion paper is to:
  • discuss our understanding of key issues and challenges arising from government access to vehicle-generated data 
  • seek views on the opportunity statement and problem statements contained in this paper  
  • seek views on options that could address these problems. 
What are the opportunities and benefits? 
Improved road safety has been identified as a key need that could be addressed through greater access to vehicle-generated data. For example, sharing and reporting of traffic safety events could use vehicles to detect and warn occupants about dangerous road conditions, allowing transport agencies to respond more rapidly to incidents. It is also the area with the highest willingness among vehicle manufacturers to share data. Unlike other types of data such as vehicle movement data, this data is unique to the vehicle and cannot be as easily replicated from other data sources. 
To better understand the needs of transport agencies and industry, we hosted several co-design workshops. The workshops generated 23 use cases identifying different potential uses for vehicle-generated data. Transport agencies identified significant potential benefits for road safety, network planning and optimisation. Transport agencies saw that this data could better inform decision making and reduce road trauma. However, the detailed benefits and costs of these use cases are still unknown. We also found that further data requirement and business case development is needed on priority uses for vehicle-generated data. Further collaboration between industry and government to better understand the potential benefits and costs would be highly beneficial to achieve this and is strongly supported by stakeholders. 
Australia does not lag significantly behind international jurisdictions in government access to vehicle-generated data; however, there are early international collaboration efforts that Australia can learn from. Key among these is the European Union’s memorandum of understanding between government and industry on the exchange of vehicle-generated data to support eight safety-related use cases. 
What are the barriers and gaps? 
Vehicle-generated data can be costly to generate, carry, store and use, and can reveal sensitive information about users and businesses. Much of this data is not stored, broadcast or shared. There is currently a low market penetration of vehicles that can share this data. The key barriers to government access include:
  • There is no compelling reason or incentive for generators of vehicle-generated data to provide this information to transport agencies (with the exception of the road access, safety and productivity benefits provided to heavy vehicle operators through regulatory telematics). 
  • There are trust, cost and operational barriers to the exchange of vehicle-generated data and, outside of heavy vehicles, there is no data access framework to address these issues. 
  • In comparison with international markets, there are currently fewer vehicles capable of capturing and communicating vehicle-generated data on Australia’s roads, with only market-based mechanisms to encourage uptake. 
What are the opportunities and problems? 
We have identified one key opportunity for government to access vehicle-generated data:
1. There is an opportunity for stakeholder collaboration on exchange or sharing of vehicle-data for road safety purposes to understand:
  • what vehicle-generated data can be used to support road safety in Australia 
  • what an appropriate framework and forum might look like to support such an exchange. 
We have identified three problems that we will need to overcome to create wider government access:
1. Vehicle-generated data is currently not provided to transport agencies for purposes that may have publicly beneficial outcomes. This could be due to current vehicle capabilities and/or a lack of incentive or reason for industry and road users to provide the data (the exception to this being heavy vehicles enrolled in a current regulatory access or compliance scheme under the Heavy Vehicle National Law). 
2. There is a lack of a data access framework to provide the necessary trust, data exchange systems, data standards/definitions, understanding of data needs and governance to establish data access and use (the exception to this being heavy vehicles enrolled in a current regulatory access or compliance scheme). 
3. The level of uptake and penetration of connectivity across the Australian vehicle fleet may delay the benefits of vehicle-generated data, particularly related to safety-critical events.
What are we proposing to address the opportunity and problems? 
Recognising that vehicle-generated data is still at the nascent stage of development in Australia and that stakeholders remain unclear on priorities, there is an opportunity for governments to adopt a new policy approach. We propose that a new collaboration between industry and governments begin with a focus on road safety data as the priority and common mission. This approach is in line with the European Union’s approach and has early consensus from industry and government. We propose: For future development on government access to vehicle-generated data, road safety is the priority for exchanging vehicle-generated data between industry and government. Industry and government should collaborate on identifying opportunities for exchanging road safety data and adopt a principle of non-commercial sharing or exchange. 
We have identified three options to address problems 1 and 2, which are:
Option 1: Rely on existing arrangements between government and industry, with no changes to existing legislation or frameworks. 
Option 2: Establish a data exchange partnership between industry and government that will identify opportunities for exchanging vehicle-generated data as well as develop standards and consider proof of concept. 
Option 3: Introduce new legislation requiring industry to collect, store and retain vehicle-generated data while providing access to government.
The NTC’s preliminary preferred option is option 2. We believe this option can provide the best opportunity for government to better understand how to maximise the potential benefits and opportunities of vehicle-generated data while actively collaborating with industry. This option has received general early support from government and industry. 
To address problem 3 – a lack of stimulus to bring forward vehicle connectivity – we are proposing that the Commonwealth considers the costs, benefits and system requirements to require vehicles to send automated crash notification system messages and have these received and actioned. Europe has achieved this through introducing its eCall system. This would bind all vehicles to a capability to send data messages over private networks. This proposal could be enacted through the Commonwealth government considering adoption of international standards into an Australian Design Rule (with consequential amendments to the relevant state and territory in-service vehicle standards) and infrastructure and capability to receive and use emergency notification messages. This would result in a significant increase in the fleet penetration of connected vehicles in Australia. 
List of questions 
1 Do our problem and opportunity statements accurately define the key problems to be addressed, and do they capture the breadth of problems that would need to be addressed? 
2 In our table, have we accurately captured all the regulatory and legislative mechanisms government could currently use to access vehicle-generated data?
3 Are there other major local or international jurisdictional developments providing further access powers or arrangements for vehicle-generated data?
4 Do you agree with our assumptions on the currently low uptake and limited availability of technology that supports the generation of vehicle data and that there are few and limited current government access arrangements for vehicle-generated data?
5 What issues do you believe will be created if ExVe is adopted and that would need to be considered in Australia? 
6 Is there value in establishing a national data aggregator or trust broker? Could good data definitions, practices and cooperation between entities achieve the same outcome? 
7 Can you provide us with more information on either the costs or benefits for government access to vehicle-generated data for the use cases listed in Appendix B? 
8 Are there relevant international standards that should be adopted for vehicle-generated data? Are there any standards that could be locally developed? 
9 Have we accurately described the key barriers to accessing vehicle-generated data? Are there additional barriers? 
10 Do you agree that road safety data should be considered the priority purpose for which we seek to exchange data with industry? 
11 What are the key data needs of transport agencies beyond those already identified? 
12 What further benefits from vehicle-generated data should be considered? 
13 We contend that a prioritised starting point should be established from which data for other purposes can be further developed. Are there other approaches that could achieve this? 
14 Do you agree with the analysis presented in Table 7? What other opportunities are there for vehicle-generated data, and why? 
15 Have priorities changed for land transport policy and for data access from vehicles with the onset of COVID-19? 
16 Should road safety be adopted as the priority for developing use cases for government use of vehicle-generated data? If not, what other approach should Australia take? 
17 Can data other than for the purposes of road safety be exchanged on non-commercial terms? 
18 Does the NTC’s preferred approach (option 2) best address the problems we have identified? If not, what approach would better address these problems? 
19 Does the NTC’s proposed approach best address the problems we have identified? If not, what approach would better address these problems?

22 June 2020

Robotics and Automated Vehicle Testing

'Robo-Apocalypse cancelled? Reframing the automation and future of work debate' by Leslie Willcocks in (2020) Journal of Information Technology argues
Robotics and the automation of knowledge work, often referred to as AI (artificial intelligence), are presented in the media as likely to have massive impacts, for better or worse, on jobs, skills, organizations and society. The article deconstructs the dominant hype-and-fear narrative. Claims on net job loss emerge as exaggerated, but there will be considerable skills disruption and change in the major global economies over the next 12 years. The term AI has been hijacked, in order to suggest much more going on technologically than can be the case. The article reviews critically the research evidence so far, including the author’s own, pointing to eight major qualifiers to the dominant discourse of major net job loss from a seamless, overwhelming AI wave sweeping fast through the major economies. The article questions many assumptions: that automation creates few jobs short or long term; that whole jobs can be automated; that the technology is perfectible; that organizations can seamlessly and quickly deploy AI; that humans are machines that can be replicated; and that it is politically, socially and economically feasible to apply these technologies. A major omission in all studies is factoring in dramatic increases in the amount of work to be done. Adding in ageing populations, productivity gaps and skills shortages predicted across many G20 countries, the danger might be too little, rather than too much labour. The article concludes that, if there is going to be a Robo-Apocalypse, this will be from a collective failure to adjust to skills change over the next 12 years. But the debate needs to be widened to the impact of eight other technologies that AI insufficiently represents in the popular imagination and that, in combination, could cause a techno-apocalypse.
The National Transport Commission 2020 Review of ‘Guidelines for trials of automated vehicles in Australia’: Discussion paper
 reviews the National Transport Commission (NTC) and Austroads’ Guidelines for trials of automated vehicles in Australia. The guidelines were released in 2017 to support nationally consistent conditions for automated vehicle trials in Australia. The NTC has undertaken research and targeted consultation to present potential updates to the guidelines that aim to benefit trialling organisations and road transport agencies. Updates could include: further detail about requirements; alignment with the future commercial deployment framework; clarifying the application of the guidelines to other technologies; and improving administrative processes.
 The paper states
The National Transport Commission and Austroads’ Guidelines for trials of automated vehicles in Australia were released in May 2017 to support nationally consistent conditions for automated vehicle trials in Australia. The guidelines were intended to:
  • provide certainty and clarity to industry regarding expectations when trialling in Australia 
  • help agencies manage trials in their own jurisdictions as well as across state borders 
  • establish minimum standards of safety 
  • help assure the public that roads are being used safely 
  • help raise awareness and acceptance of automated vehicles in the community.
Transport and infrastructure ministers directed that the guidelines should be reviewed every two years. We began this review of the guidelines in 2019 and it is the first to take place since they were published. The purpose of this discussion paper is to assess how well the guidelines are working in practice and to seek broader stakeholder views on any required changes. 
Context 
Since the guidelines were published in May 2017 there have been a number of developments in trialling and the development of regulatory frameworks for automated vehicles:
  • Trials have now taken place in every Australian state and territory, and trialling organisations and road transport agencies can share their experience of the application, approval and operation of trials. 
  • There has been further development of the regulatory framework for the commercial deployment of automated vehicles, which will eventually succeed the trials framework. 
  • International guidance has further evolved.
The objectives of the review are to identify:
  • whether the guidelines have assisted governments and trialling organisations 
  • challenges faced by governments and trialling organisations using the guidelines or in applying for, approving, operating and evaluating trials 
  • additional requirements governments have placed on trialling organisations 
  • whether the guidelines should be updated to ensure a nationally consistent and safe approach to automated vehicle trials in Australia.
Consultation topics 
In late 2019 the NTC undertook targeted consultation and a review of international guidance to inform this discussion paper. Through this consultation we have learned that trialling organisations and road transport agencies have found the guidelines useful, particularly as a starting point to guide trialling organisations as they prepare their trial applications. We have also learned that the guidelines could provide further detail to assist trialling organisations and to provide some consistency in applications for road transport agencies. As well, we have learned that there are a number of differences in trial requirements and application processes across states and territories, which has led to differing experiences in gaining approvals for trials. 
Consultation topics in this discussion paper fall under four broad categories:
  • content and level of detail in the current guidelines (chapter 3) 
  • application of the guidelines (chapter 4) 
  • administrative processes and harmonisation (chapter 5) 
  • other automated vehicle trial issues outside the scope of the guidelines (chapter 6).
There could be a number of updates to the guidelines that will benefit both trialling organisations and road transport agencies. These include
  • further detail about safety, traffic management and data and information requirements; 
  • further alignment with future safety requirements for commercial deployment; 
  • clarifying the application of the guidelines to other technologies, operating domains and types of trials; and 
  • improving the efficiency of administrative processes at the point of application. 
We are seeking views from stakeholders on the potential updates discussed in this paper and on any other useful changes. We want to ensure the guidelines support safe and innovative trials in Australia. This will help Australia gain the safety and productivity benefits of this technology. 
List of questions 
1 Should the guidelines be updated to improve the management of trials (section 3 of the guidelines) and, if so, why? Consider in particular: 
2 Should the guidelines be updated to improve the safety management of trials (section 4 of the guidelines) and, if so, why? Consider in particular: 
3 What issues have been encountered when obtaining or providing insurance? 
4 Are the current insurance requirements sufficient (section 5 of the guidelines)? If not, how should they change? 
5 Should the guidelines be updated to improve the provision of relevant data and information (section 6 of the guidelines)? 
6 Is there any additional information the guidelines should include for trialling organisations? 
7 Should the guidelines apply to any other emerging technologies (discussed in chapter 4 or other technologies) and operating domains? 
8 Are there any additional criteria or additional matters relevant to the trials of automated heavy vehicles that should be included in the guidelines? 
9 Are there currently any regulatory or other barriers to running larger trials? If so, how should these barriers be addressed? (Consider the guidelines, state and territory exemption and permit schemes, and Commonwealth importation processes.) 
10 Should the guidelines continue to allow commercial passenger services in automated vehicle trials? If so, should the guidelines reference additional criteria that trialling organisations should be subject to, and what should these criteria be? 
11 What challenges have you faced with administrative processes when applying for or approving trials of automated vehicles, and how could these be addressed? 
12 Are there any other barriers to cross-border trials? Is there a need to change current arrangements for cross border trials? 
13 Should there be a more standardised government evaluation framework for automated vehicle trials? If so, what are the trial issues that should be evaluated?
14 Should the results of evaluations be shared between states and territories? If so, how should commercially sensitive information be treated? 
15 What works well in the automated vehicle importation process, and what are the challenges? 
16 Is there anything further that should be done to facilitate a transition from trial to commercial deployment? 
17 Are there any matters that the NTC should consider in its review of the guidelines?

13 April 2020

Autonomous Cars

'Autonomous cars: A driving force for change in motor liability and insurance' by Katie Atkinson in (2020) 17(1) SCRIPTed 125-151 comments
In this article, I review the legal and regulatory obstacles to the introduction of autonomous vehicles. I provide an overview of the key legislation which is relevant to the introduction of autonomous vehicles in England and Wales. I discuss the motor liability and insurance implications of the introduction of autonomous cars and the legal framework for the testing of autonomous vehicles on public roads. I conclude that there is likely to be significant volume of emerging legislation that car manufacturers and suppliers will be required to navigate as they launch increasingly autonomous driving systems. It is also likely that we will see an increase in the volume and complexity of litigation involving parties such as vehicle manufacturers, software companies, suppliers and mapping agencies.
'Autonomous Vehicles and Liability: What Will Juries Do?' by Gary Marchant and Rida Bazzi in (2020) 26(1) Boston University Journal of Science and Technology Law 67 comments
Autonomous vehicles (‘AVs’) that can be operated without a human driver are now being tested on public roads across America and are soon expected to be commercialized and widely available. One of the greatest roadblocks holding up more rapid deployment of AVs is manufacturers’ concerns about AV liability. This article provides a real-world assessment of AV liability risks, and concludes that manufacturers are indeed rightfully concerned about the extent and impacts of liability on AVs. 
The article first examines the application of product liability doctrine to AVs in various accident scenarios, drawing upon previous vehicle product liability cases. While AV manufacturers will likely and properly be held responsible for most accidents where the vehicle itself is responsible for the crash, the concern is that AV manufacturers may be sued and often held liable even when the AV was not the cause of the collision. This is because AVs have a much greater capability to avoid collisions than does a human-driven vehicle, and thus in almost any crash scenario it may be possible to argue that the AV should have detected and avoided the impending crash. Thus, even though the total number of vehicle accidents should decrease with AV deployment, the share and even net value of liability may go up for AV manufacturers. 
Next, the article considers jury tendencies and psychology, and concludes that jurors will be particularly harsh on AVs that draw on exotic artificial intelligence technology, and which may be involved in accidents that harm people notwithstanding their claims of improving overall vehicle safety. These factors are likely to result in more frequent and larger punitive damages than in past motor vehicle product liability. Given these findings, the article concludes by recognizing the need for some type of public policy intervention to prevent the tort system from having the contradictory effect of harming public safety.

07 October 2019

Infernal Engines

'The Immoral Machine' by John Harris in (2019) Cambridge Quarterly of Healthcare Ethics comments
In a recent paper in Nature entitled ‘The Moral Machine Experiment’, Edmond Awad et al make a number of breathtakingly reckless assumptions, both about the decision-making capacities of current so-called ‘autonomous vehicles’ and about the nature of morality and the law. Accepting their bizarre premise that the holy grail is to find out how to obtain cognizance of public morality and then program driverless vehicles accordingly, the following are the four steps to the Moral Machinists’ argument:
(1) Find out what ‘public morality’ will prefer to see happen; 
(2) On the basis of this discovery, claim both popular acceptance of the preferences and persuade would-be owners and manufacturers that the vehicles are programmed with the best solutions to any survival dilemmas they might face; 
(3) Citizen agreement thus characterized is then presumed to deliver moral license for the chosen preferences; 
(4) This yields ‘permission’ to program vehicles to spare or condemn those outside the vehicles when their deaths will preserve vehicle and occupants.
This paper argues that the Moral Machine Experiment fails dramatically on all four counts.
Harris' critique concerns 'The Moral Machine experiment' by Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon and Iyad Rahwan in (2018) 563 Nature 59–64, in which the authors comment
With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. This platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences, based on respondents’ demographics. Third, we report cross-cultural ethical variation, and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics.
They argue
We are entering an age in which machines are tasked not only to promote well-being and minimize harm, but also to distribute the well-being they create, and the harm they cannot eliminate. Distribution of well-being and harm inevitably creates tradeoffs, whose resolution falls in the moral domain. Think of an autonomous vehicle that is about to crash, and cannot find a trajectory that would save everyone. Should it swerve onto one jaywalking teenager to spare its three elderly passengers? Even in the more common instances in which harm is not inevitable, but just possible, autonomous vehicles will need to decide how to divide up the risk of harm between the different stakeholders on the road. Car manufacturers and policymakers are currently struggling with these moral dilemmas, in large part because they cannot be solved by any simple normative ethical principles such as Asimov’s laws of robotics. 
Asimov’s laws were not designed to solve the problem of universal machine ethics, and they were not even designed to let machines distribute harm between humans. They were a narrative device whose goal was to generate good stories, by showcasing how challenging it is to create moral machines with a dozen lines of code. And yet, we do not have the luxury of giving up on creating moral machines. Autonomous vehicles will cruise our roads soon, necessitating agreement on the principles that should apply when, inevitably, life-threatening dilemmas emerge. The frequency at which these dilemmas will emerge is extremely hard to estimate, just as it is extremely hard to estimate the rate at which human drivers find themselves in comparable situations. Human drivers who die in crashes cannot report whether they were faced with a dilemma; and human drivers who survive a crash may not have realized that they were in a dilemma situation. Note, though, that ethical guidelines for autonomous vehicle choices in dilemma situations do not depend on the frequency of these situations. Regardless of how rare these cases are, we need to agree beforehand how they should be solved. 
The key word here is ‘we’. As emphasized by former US president Barack Obama, consensus in this matter is going to be important. Decisions about the ethical principles that will guide autonomous vehicles cannot be left solely to either the engineers or the ethicists. For consumers to switch from traditional human-driven cars to autonomous vehicles, and for the wider public to accept the proliferation of artificial intelligence-driven vehicles on their roads, both groups will need to understand the origins of the ethical principles that are programmed into these vehicles. In other words, even if ethicists were to agree on how autonomous vehicles should solve moral dilemmas, their work would be useless if citizens were to disagree with their solution, and thus opt out of the future that autonomous vehicles promise in lieu of the status quo. Any attempt to devise artificial intelligence ethics must be at least cognizant of public morality. 
Accordingly, we need to gauge social expectations about how autonomous vehicles should solve moral dilemmas. This enterprise, however, is not without challenges. The first challenge comes from the high dimensionality of the problem. In a typical survey, one may test whether people prefer to spare many lives rather than few; or whether people prefer to spare the young rather than the elderly; or whether people prefer to spare pedestrians who cross legally, rather than pedestrians who jaywalk; or yet some other preference, or a simple combination of two or three of these preferences. But combining a dozen such preferences leads to millions of possible scenarios, requiring a sample size that defies any conventional method of data collection. 
The second challenge makes sample size requirements even more daunting: if we are to make progress towards universal machine ethics (or at least to identify the obstacles thereto), we need a fine-grained understanding of how different individuals and countries may differ in their ethical preferences. As a result, data must be collected worldwide, in order to assess demographic and cultural moderators of ethical preferences. 
As a response to these challenges, we designed the Moral Machine, a multilingual online ‘serious game’ for collecting large-scale data on how citizens would want autonomous vehicles to solve moral dilemmas in the context of unavoidable accidents. The Moral Machine attracted worldwide attention, and allowed us to collect 39.61 million decisions from 233 countries, dependencies, or territories (Fig. 1a). In the main interface of the Moral Machine, users are shown unavoidable accident scenarios with two possible outcomes, depending on whether the autonomous vehicle swerves or stays on course (Fig. 1b). They then click on the outcome that they find preferable. Accident scenarios are generated by the Moral Machine following an exploration strategy that focuses on nine factors: sparing humans (versus pets), staying on course (versus swerving), sparing passengers (versus pedestrians), sparing more lives (versus fewer lives), sparing men (versus women), sparing the young (versus the elderly), sparing pedestrians who cross legally (versus jaywalking), sparing the fit (versus the less fit), and sparing those with higher social status (versus lower social status). Additional characters were included in some scenarios (for example, criminals, pregnant women or doctors), who were not linked to any of these nine factors. These characters mostly served to make scenarios less repetitive for the users. After completing a 13-accident session, participants could complete a survey that collected, among other variables, demographic information such as gender, age, income, and education, as well as religious and political attitudes. Participants were geolocated so that their coordinates could be used in a clustering analysis that sought to identify groups of countries or territories with homogeneous vectors of moral preferences. 
Here we report the findings of the Moral Machine experiment, focusing on four levels of analysis, and considering for each level of analysis how the Moral Machine results can trace our path to universal machine ethics. First, what are the relative importances of the nine preferences we explored on the platform, when data are aggregated worldwide? Second, does the intensity of each preference depend on the individual characteristics of respondents? Third, can we identify clusters of countries with homogeneous vectors of moral preferences? And fourth, do cultural and economic variations between countries predict variations in their vectors of moral preferences?
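The first level of analysis (relative importance of the nine preferences in aggregated worldwide data) can be illustrated with a simple aggregation. The sketch below is a hedged toy version, not the paper's actual method (the study estimated average marginal component effects with controls for scenario structure): it simulates hypothetical binary decisions for an invented subset of the nine factors, computes a per-factor "sparing rate", and ranks the factors. All factor names, rates, and sample sizes here are invented for the example.

```python
import random
from collections import defaultdict

# Hypothetical subset of the nine Moral Machine factors, each with an
# invented "true" probability that respondents spare the first option.
TRUE_RATES = {
    "humans_vs_pets": 0.90,    # spare humans over pets
    "more_vs_fewer": 0.80,     # spare more lives over fewer
    "young_vs_elderly": 0.65,  # spare the young over the elderly
}

random.seed(42)

# Simulate decision records: (factor contrasted in the scenario, spared-first flag).
decisions = [(factor, random.random() < p)
             for factor, p in TRUE_RATES.items()
             for _ in range(2000)]

# Aggregate: per-factor proportion of decisions sparing the first option.
spared = defaultdict(int)
shown = defaultdict(int)
for factor, chose_first in decisions:
    shown[factor] += 1
    spared[factor] += chose_first  # bool counts as 0/1

rates = {f: spared[f] / shown[f] for f in shown}
ranking = sorted(rates, key=rates.get, reverse=True)
print(ranking)  # strongest preference first
```

With enough decisions per factor, the observed sparing rates converge on the underlying preferences, which is why worldwide scale matters for the fine-grained demographic and cultural comparisons the authors describe.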
Harris had earlier considered 'Who owns my autonomous vehicle: Ethics and responsibility in artificial and human intelligence' in (2018) 27(4) Cambridge Quarterly of Healthcare Ethics 500–509.

'Law and Technology: Two Modes of Disruption, Three Legal Mindsets and the Big Picture of Regulatory Responsibilities' by Roger Brownsword in (2018) 14 Indian Journal of Law and Technology comments
This article introduces three ideas that are central to understanding the ways in which law and legal thinking are disrupted by emerging technologies and to maintaining a clear focus on the responsibilities of regulators. The first idea is that of a double disruption that technological innovation brings to the law. While the first disruption tells us that the old rules are no longer fit for purpose and need to be revised and renewed, the second tells us that, even if the rules have been changed, regulators might now be able to dispense with the use of rules (the rules are redundant) and rely instead on technological instruments. 
The second idea is that the double disruption leads to a three-way legal and regulatory mind-set that is divided between: (i) traditional concerns for coherence in the law; (ii) modern concerns with instrumental effectiveness; and (iii) a continuing concern with instrumental effectiveness and risk management but now focused on the possibility of employing technocratic solutions. The third idea is one of a hierarchy of regulatory responsibilities. Most importantly, regulators have a 'stewardship' responsibility for maintaining the 'commons'; then they have a responsibility to respect the fundamental values of a particular human social community; and, finally, they have a responsibility to seek out an acceptable balance of legitimate interests within their community. 
Such disruptions notwithstanding, it is argued that those who have regulatory responsibilities need to be able to think through the regulatory noise to frame questions in the right way and to respond in ways that are rationally defensible and reasonable. In an age of smart machines and new possibilities for technological fixes, traditional institutional designs might need to be reviewed.
Brownsword states
In a series of articles, I have argued that lawyers need to engage more urgently with the regulatory effects of new technologies; and, while I have argued this in relation to the full spectrum of technological interventions, whether they are 'soft' and 'assistive' or 'hard' and fully 'managerial', my concerns have been primarily with the employment of hard technologies. For, whereas assistive technologies (such as those surveillance and identification technologies that are employed in criminal justice systems) reinforce the prohibitions and requirements that are prescribed by legal rules, full-scale technological management introduces a radically different regulatory approach by redefining the practical options that are available to regulatees. Instead of seeking to channel the conduct of regulatees by prescribing what they 'ought' or 'ought not' to do, regulators focus on controlling what regulatees actually can or cannot do in particular situations. Instead of finding themselves reminded of their legal obligations, regulatees find themselves obliged or 'forced' to act in certain ways. 
If lawyers are to get to grips with these new articulations of regulatory power, I have suggested that they frame their inquiries by employing a broad concept of the 'regulatory environment' (one that recognises both normative rule-based and non-normative technology-based regulatory mechanisms); I have identified the 'complexion' of the regulatory environment as an important focus for inquiry (because the use of technological management can compromise the context for the possibility of both autonomous and moral human action); and I have argued that it is imperative that the use of regulatory technologies is authorised in accordance with the ideal of the Rule of Law. 
I have also posed a number of questions about the future of traditional rules of law where 'regulators' (broadly conceived) turn away from rules in favour of technological solutions or where historic regulatory objectives are simply taken care of by automation, such as will be the case, for example, when it is the design of autonomous vehicles that takes care of concerns about human health and safety that have hitherto been addressed by legal rules directed at human drivers of vehicles. Hence, if we look ahead, what does the increasing use of technological management signify for traditional rules of criminal law, torts, and contracts? Will these rules be rendered redundant, will they be directed at different human addressees, or will they simply be revised? In short, how are traditional laws disrupted by technological innovation and, in an age of technological management, how are rule-based regulatory strategies disrupted? It is questions of this kind that I want to begin to address in the present article. 
Yet, why linger over such questions? After all, the prospect of technological management implies that rules of any kind have a limited future. To the extent that technological management takes on the regulatory roles traditionally performed by legal rules, those rules seem to be redundant; and, to the extent that technological management does not supersede but co-exists with legal rules, while some rules will be redirected, others will need to be refined and revised (imagine, for example, a legal framework that covers both autonomous and driven vehicles sharing the same roads). Accordingly, the short answer to these questions is that the destiny of legal rules is to be found somewhere in the range of redundancy, replacement, redirection, revision and refinement. Precisely which rules are replaced, which refined, which revised and so on, will depend on both technological development and the way in which particular communities respond to the idea that technologies, as much as rules, are available as regulatory instruments; indeed, legal rules are just one species of regulatory technologies. 
This short answer, however, does not do justice to the deeper and distinctive disruptive effects of technological development on both legal rules and the regulatory mind-set. Accordingly, in this article, I want to sketch a back-story that features two overarching ideas: one is the idea of a double technological disruption and the other is the idea of a regulatory mind-set that is divided in three ways. With regard to the first of these overarching ideas, the double disruption has an impact on: (i) the substance of traditional legal rules; and then (ii) on the use (or, rather, non-use) of legal rules as the regulatory modality. With regard to the second overarching idea, the ensuing three-way legal and regulatory mind-set is divided between: (i) traditional concerns for coherence in the law; (ii) modern concerns with instrumental effectiveness (relative to specified regulatory purposes) and particularly with seeking an acceptable balance of the interests in beneficial innovation and management of risk; and (iii) a continuing concern with instrumental effectiveness and risk management but now focused on the possibility of employing technocratic solutions. 
If what the first disruption tells us is that the old rules are no longer fit for purpose and need to be revised and renewed, then the second disruption tells us that, even if the rules have been changed, regulators might now be able to dispense with the use of rules (the rules are redundant) and rely instead on technological instruments. Moreover, what the disruptions further tell us is that we can expect to find a plurality of competing mind-sets seeking to guide the regulatory enterprise. However, what none of this tells us is how regulators should engage with these disruptions. When there is pressure on regulators to think like coherentists (focusing on the internal consistency and integrity of legal doctrine), when regulators are expected to think in a way that is sensitive to risk and to make instrumentally effective responses, and when there is now pressure to think beyond rules to technological fixes, what exactly are the responsibilities of, and priorities for, regulators? Without some critical distance and a sense of the bigger picture, how are regulators to plot a rational and reasonable course through a conflicted and confusing regulatory discourse? Although these are large questions, they are ones that I also want to begin to address in this article. 
Accordingly, the shape of the article, which is in four principal Parts, is as follows. We start (in Parts II and III) with some questions about the future of traditional legal rules, the backstory to which is one of a double disruption that technological innovation brings to the law and, in consequence, a three-way re-configuration of the legal and regulatory mind-set. While the double disruption is elaborated in Part II of the article, the three elements of the re-configured legal and regulatory mind-set (namely, the coherentist, regulatory-instrumentalist, and technocratic elements) are elaborated in Part III. Given this re-configuration, we need to think about how regulators should engage with new technologies, whether viewing them as regulatory targets or as regulatory tools; and this invites thoughts about the bigger picture of regulatory responsibilities as well as regulatory roles and institutional competence. Some reflections on the bigger picture are presented in Part IV of the article; and, in Part V, I offer some initial thoughts on the competence of, respectively, the Courts and the Legislature to adopt the appropriate mind-set. While this discussion will not enable us to predict precisely what the future of legal rules will be, it will enable us to appreciate the significance of the disruption to traditional legal mind-sets, to understand the confusing plurality of voices that will be heard in our regulatory discourses, and to have a sense of the priorities for regulators.

20 April 2019

Autonomous Vehicles

'Ethics and Public Health of Driverless Vehicle Collision Programming' by Samantha Godwin in (2018) 86 Tennessee Law Review 135 comments
 Driverless vehicles present a core ethical dilemma: there is a public health necessity and moral imperative to encourage the widespread adoption of driverless vehicles once they become demonstrably more reliable than human drivers, given their potential to dramatically reduce automobile fatalities, increase autonomy for disabled people, and improve land use and commutes. However, the very technologies that could enable autonomous vehicles to drive more safely than human drivers also imply greater moral responsibility for adverse outcomes. While human drivers must make split-second decisions in automobile collision scenarios, driverless car programmers have the luxury of time to reflect and choose deliberately how their vehicles should behave in collision scenarios. This implies greater responsibility and culpability, as well as the potential for greater scrutiny and regulation. Programmers must make premeditated decisions regarding whose safety to prioritize in inevitable collision scenarios—situations where a vehicle cannot avoid a collision altogether but can choose between colliding into different vehicles, objects, or persons. 
With the recent bipartisan passage of the SELF DRIVE Act in the House and the rapid development of driverless vehicle technology, we are now entering a critical time frame for considering what priorities should govern driverless car inevitable collision behavior. This Article shall argue that prescribed “ethics” programming must be regulated by law in order to avoid the likely collective action problem of a marketplace that will reward “occupant-favoring” designs, despite a probable public preference (and arguable moral necessity) for occupant indifferent designs. This Article then considers a variety of different options for systems of driverless vehicle ethics programming. The most justifiable ethics programming system would be one where road users are discouraged from externalizing the dangers incurred by their transportation choices onto those whose transportation choices, if more widely adopted, would comparatively improve aggregate safety. This ethical programming system, which I term “incentive-weighted programming,” would promote public safety while also striking the most equitably justifiable balance between different road users’ interests.

12 April 2019

Autonomous Vehicles, Liability and Incentives

'Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era' by Kenneth S Abraham and Robert L Rabin in (2019) 105 Virginia Law Review 127 comments
Over a century ago, industrialization and its accompanying increase in workplace injuries were placing substantial pressures on the tort system and its ability to compensate the victims of these injuries. Eventually, the interests of labor and management came together, giving rise to a new administrative compensation system. Unlike tort remedies, this new scheme imposed strict financial responsibility on employers for work-related injuries to their employees. This system of workers’ compensation is still the most far-reaching tort reform ever adopted – promoting safety and compensating for injuries more effectively than tort did both at the time and today. Workers’ compensation has its flaws, but there is no significant desire on anyone’s part to go back to tort. 
We are on the verge of another new era, requiring yet another revision to the legal regime. This time, it is our system of transportation that will be revolutionized. Over time, manually driven cars are going to be replaced by automated vehicles. The new era of automated vehicles will eventually require a legal regime that properly fits the radically new world of auto accidents. The new regime should promote safety and provide compensation both more sensibly and more effectively than what could be done under existing tort doctrines governing driver liability for negligence and manufacturer liability for product defects. Like labor and management a century ago, auto manufacturers, consumers, and the public at large – often currently at odds about the tort system – will need to have their interests come together if the new era of automated transportation is to be governed by an adequate legal regime. 
Any new approach will have to deal with the long and uneven transition to automated technology, impose substantial but appropriate financial responsibility for accidents on the manufacturers of highly automated vehicles, and provide satisfactory compensation to the victims of auto accidents in the new era. This Article develops and details our proposal for an approach that would accomplish these goals.
'Automatorts: How Should Accident Law Adapt to Autonomous Vehicles? Lessons from Law and Economics' (Hoover Institution Working Group on Intellectual Property, Innovation, and Prosperity, 2019) by Eric Talley comments
The introduction of autonomous vehicles (AVs) onto the nation’s motorways raises important questions about our legal system’s adaptability to novel risks and incentive problems presented by such technology. A significant part of the challenge comes in understanding how to navigate the transition period, as AVs interact routinely with conventional human actors. This paper extends a familiar multilateral precaution framework from the law and economics literature by analyzing interactions between algorithmic and human decision makers. My analysis demonstrates that several familiar negligence-based rules (for precautions and product safety) are able to accommodate such interactions efficiently. That said, a smooth transition will likely require substantial doctrinal/legal reforms in certain states, as well as a more general reconceptualization of fault standards across all states – not only for AVs but also for human actors themselves.
'The Optimal Agent: The Future of Autonomous Vehicles and Liability Theory' by Brian Seamus Haney comments
Autonomous Vehicles (“AVs”) are rapidly disrupting the $4 trillion auto industry. Indeed, questions surrounding AV regulation are some of the most important to be answered in the Twenty-First Century. Yet, legislators have yet to address or even identify some of the most critical issues relating to AV regulation. 
This paper explains the unique issues that deep reinforcement learning systems pose for AV technology, policy, and law. Additionally, this paper identifies two important regulatory problems that legislators and scholars need to address in the context of AV development. Legal scholars have made clear that there is a demanding need for some sort of regulatory system for AVs. However, those arguments focus on short-term regulation and generally misunderstand the evolutionary rate of AV technology. This paper takes an informatics-based approach to analyzing issues in AV regulation with a specific emphasis on the technical aspects of AV systems. Further, this paper discusses and explains the formal models that are currently being used as a foundation for AV development. Ultimately, AV technology will change the way humans move throughout the world and legislators must prepare immediately for the endeavor ahead in regulating AVs.
'Who’s Driving That Car?: An Analysis of Regulatory and Potential Liability Frameworks for Driverless Cars' by Madeline Roe in (2019) 60 Boston College Law Review 315 comments
Driverless, or autonomous, cars are being tested on public roadways across the United States. For example, California implemented a new regulation in 2018 that allows manufacturers to test driverless cars without a person inside the vehicle, so long as the manufacturers adhere to numerous requirements. The emergence of these vehicles raises questions about accident liability and the reach of state regulation regarding driverless cars. To address these questions, it is beneficial to look at the liability framework for another artificial intelligence system, such as surgical robots. This Note will explore possible frameworks of liability before arguing in support of further regulation of driverless cars and hypothesize that the liability for driverless car accidents will likely shift from the driver to the manufacturer. 
'The Case Against Taxing Robots' by Robert D. Atkinson of the Information Technology and Innovation Foundation comments 
 A disturbing trend in the world of public policy in recent years has been the extent to which fads and groupthink now shape public debates and galvanize support for ill-advised ideas and proposals. In the first phase of this process, someone — often a person of some notoriety, but not necessarily expertise — puts forth a new idea or claim, which is then amplified by a media increasingly focused on marketing the next new thing. Then comes a wave of articles, speeches, blogs, op-eds, and of course TED Talks, all providing supporting “evidence” and arguments for why the initial idea is the “best thing since sliced bread.” Voila: What begins as a loopy, even harmful, idea is now all the rage. Once this critical point of no return is reached — when “everyone” knows something is true — policymakers have only a short distance left to travel to turn what appears to be an inspired analysis into actual law. 
In the subcategory of science, technology, and innovation policy, there is no better case in point than today’s increasingly popular view that governments should increase taxes on capital equipment. Or, as the advocates say, “It’s time to tax the robots.” This idea has been around for a while, and gained considerable traction in 2017 when Microsoft founder Bill Gates argued, “At a time when people are saying that the arrival of that robot is a net loss because of displacement, you ought to be willing to raise the tax level and even slow down the speed of that adoption somewhat.” After all, as a technology pioneer and billionaire, Bill Gates is anything but a tin foil hat-wearing Luddite. Since then, the calls for taxing those job-killing robots have become a veritable tidal wave. One can barely go a week without reading yet another article or comment on the topic. 
Robot taxers make three main arguments in support of their position. As this paper will show, all three arguments are wrong. At the end of the day, robot taxers are suffering from and contributing to a techno-panic over jobs. “Help!” they cry, “Robots are coming for our jobs! We can’t just eliminate any policies that support automation; we need to proactively erect barriers to it.” In fact, moving in that direction would be the worst possible thing for policymakers to do. Given that the U.S. economy has been in an unprecedented productivity growth slump for more than a decade, and the massive baby boom retirement wave is rising, economies desperately need faster productivity growth to have any hope of increasing after-tax wages faster than some minimal rate of growth. The last thing policymakers should do is reduce the incentive for companies to invest in new machinery and equipment, as that would slow down needed productivity growth. Instead, with first-year expensing provisions set to expire automatically at the end of 2022, one of the best things Congress could do to ensure strong growth in the future would be to make that provision permanent and then couple it with an investment tax credit.

16 June 2018

Self-driving Vehicles and US Employment

America’s workforce and the self-driving future: Realizing Productivity Gains and Spurring Economic Growth from Securing America's Future Energy (SAFE) comments
In the last several years, the development and adoption of autonomous vehicles (AVs) has emerged as a central policy subject, both in the United States and across the world. The vision of a future where vehicles drive themselves has captured the imagination of the public, promising the potential for significant improvements in roadway safety, economic productivity, and accessibility, as well as reductions in fuel consumption and accompanying emissions.
At the same time, some have expressed concern about the long-term impacts of the technology, most intensely with regard to the question of the potentially far-reaching impacts of the technology on the U.S. labor force. The individual identities of Americans are often intertwined both with the vehicles they drive and their occupations. The potential for significant changes on both fronts in the years and decades to come is, understandably, an unsettling prospect for some. To ensure that policy decisions are made on the basis of solid evidence, SAFE engaged us to answer a series of questions that cut to the core of these issues. The questions were:
1. What precedents should we look to in thinking about the impacts AVs will have on society and the economy?
2. What are some concrete examples that illustrate the nature and magnitude of the economic and social benefits that AVs can offer?
3. What will be the medium- to long-term impacts of vehicle automation on the workforce? Upon what will the scale and timing of those impacts depend? What steps can be taken today to ensure the best outcome for both the public that stands to gain from AVs and the workers whose jobs could be impacted?
These questions were selected because of the importance of improving the social impact of the technology, the potential for impacts on the labor force, and the importance of these considerations to policymakers in weighing AV regulation. A deeper knowledge of the broader economic impacts of AVs will help to encourage constructive choices in a resource-constrained world. 
Over the last six months, we divided these questions amongst this group, with a report dedicated to each question. We performed independent and rigorous research utilizing well-accepted methods of economic analysis that culminated in three reports—referred to in this brief as the Compass Transportation report (focused on the question of precedents), the Montgomery AV benefits report (focused on the benefits of AVs) and the Groshen employment report (focused on the employment impacts)—that each addressed one of the questions posed above.
The authors state
Although they are not yet in widespread commercial use, there is intense public interest in autonomous vehicles (AVs). Much of the focus has been on the broad societal benefits this technology can offer. AVs also have the potential to influence society in a way unseen since the invention of the automobile. In addition to dramatically reducing traffic accidents and roadway fatalities, AVs hold the promise of improved mobility—critical for economic growth and quality of life. AVs can dramatically improve the lives of communities underserved by our current transportation system and those most vulnerable to its inefficiencies, namely Americans with disabilities, seniors, and wounded veterans. 
However, some have raised concerns about the potential for AVs to negatively impact workers and exacerbate wealth inequality. SAFE believes that AV-related labor displacement concerns—many of which have been expressed sensationally—must be addressed seriously rather than merely dismissed out of hand or repeated without verification. In response to these concerns, SAFE commissioned a panel of highly regarded transportation and labor economists to conduct a fact-based and rigorous assessment of the economic costs and benefits of AVs, including labor impacts.
The commissioned research painted a detailed outlook for the future economic and labor market impacts of AVs. They found:
• AVs have many of the characteristics of “catalyzing innovations” whose positive impacts are felt broadly throughout the economy. 
• Significant economic benefits from the widespread adoption of AVs could lead to nearly $800 billion in annual social and economic benefits by 2050, mostly from reducing the toll of vehicle crashes, but also from giving productive time back to commuters, improving energy security by reducing dependence on oil, and providing environmental benefits. 
• A study of traffic patterns and job locations found that some economically depressed regions could see improved access to large job markets for their residents through the deployment of AVs. 
• AVs will create new jobs that will, in time, replace jobs eliminated by automation. Strong workforce development infrastructure can both mitigate employment disruption and speed the evolution of worker skill requirements that will contribute to full employment and economic growth. 
• There is significant time before the impacts of AVs on employment are fully realized. Simulations of the impact of AVs on employment showed a range of impacts that would be felt starting in the early 2030s but would only increase the national unemployment rate by 0.06–0.13 percentage points at peak impact sometime between 2045 and 2050 before a return to full employment. 
• The economic and societal benefits offered by AVs in a single year of widespread deployment will dwarf the cost to workers incurred over the entire multidecadal deployment of AVs when measured in purely economic terms. The benefits of AVs are sufficiently large to enable investment of adequate resources in assisting impacted workers. 
• By pursuing a rapid deployment of AVs, combined with investments in workforce policies that seek to mitigate costs to workers and policies that address other risks or costs that might emerge alongside greater AV adoption, the United States can enjoy the full benefits of AVs as soon as possible while simultaneously preparing the workforce for the jobs of the future. 
Economic and Societal Impact

Many of the most compelling benefits of autonomous vehicle technology will be intangible or undetectable from modeling designed to capture incremental gains. Any economic estimates of these benefits should be understood as an attempt to capture just a portion of gains from AVs. This conservative microeconomic analysis estimates economic benefits of up to $800 billion per year with full deployment of AVs. Utilizing the projections for AV deployment that SAFE developed, the value of AV benefits through 2050 will likely be between $3.2 trillion and $6.3 trillion. This is a partial estimate looking at a narrow set of case studies—a full estimate would likely be significantly higher. 
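The gap between "up to $800 billion per year at full deployment" and a cumulative $3.2–6.3 trillion through 2050 is driven by how quickly adoption ramps up. As a hedged back-of-envelope illustration (the linear ramp and its start and full-deployment years below are assumptions invented for the sketch, not SAFE's model), cumulative benefits can be approximated by scaling the annual full-deployment benefit by an adoption fraction each year:

```python
def cumulative_benefit(annual_full_benefit, start_year, full_year, end_year):
    """Sum yearly benefits, assuming adoption ramps linearly from 0 at
    start_year to 100% at full_year, capped at 100% through end_year."""
    total = 0.0
    for year in range(start_year, end_year + 1):
        adoption = min(1.0, max(0.0, (year - start_year) / (full_year - start_year)))
        total += adoption * annual_full_benefit
    return total

# Illustrative parameters: $800bn/yr at full deployment, adoption ramping
# from 2030 toward full deployment in 2060, summed through 2050.
total = cumulative_benefit(800e9, 2030, 2060, 2050)
print(f"${total / 1e12:.1f} trillion")
```

Under these assumed parameters the cumulative figure happens to land inside SAFE's $3.2–6.3 trillion band; a faster ramp would push it toward or beyond the upper end.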
A projection of the annual consumer and societal benefits of AVs is in Figure A. The breakdown of these benefits (upon full adoption) is in Table A. 
Accident Reduction: In 2010, the National Highway Traffic Safety Administration (NHTSA) estimated the economic costs of car crashes to be $242 billion per year. When quality-of-life costs are added into the estimate, the total value of societal harm was approximately $836 billion per year. Extrapolating these values based on more recent crash and driving data puts the annual societal cost of crashes at over $1 trillion today. Using a conservative methodology in which we assume AVs would only address crashes resulting from a gross driver error (e.g. distraction, alcohol, and speeding), the annual benefit would exceed $500 billion. Given that human error contributes to over 94 percent of accidents, benefits could exceed this amount. 
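The crash-cost arithmetic in the passage above can be reproduced directly. In the sketch below, the ~20 percent extrapolation factor and the ~50 percent "gross driver error" share are assumptions chosen to match the figures quoted in the text, not published NHTSA parameters:

```python
# NHTSA 2010 figures, as quoted in the passage above.
economic_cost_2010 = 242e9   # direct economic cost of crashes, $/yr
societal_cost_2010 = 836e9   # including quality-of-life costs, $/yr

# Extrapolate to today's crash and driving volumes (assumed ~20% growth,
# picked so the total crosses the $1 trillion mark quoted in the text).
growth_factor = 1.20
societal_cost_today = societal_cost_2010 * growth_factor
assert societal_cost_today > 1e12

# Conservative AV benefit: only crashes from gross driver error are avoided.
# The ~50% share is an assumption matching the ">$500 billion" figure; with
# human error contributing to over 94% of crashes, it is a lower bound.
gross_error_share = 0.50
annual_av_benefit = societal_cost_today * gross_error_share
print(f"${annual_av_benefit / 1e9:.0f} billion per year")
```

This makes explicit why the text calls the $500 billion figure conservative: substituting the 94 percent human-error share for the assumed 50 percent roughly doubles the estimated annual benefit.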
Reduce Oil Consumption: Oil holds a virtual monopoly on vehicle fuels, with petroleum accounting for 92 percent of the fuel used to power the U.S. transportation system. By precipitating a shift away from petroleum as the dominant fuel source, AVs can substantially reduce America’s reliance on oil. An analysis of the energy security and environmental benefits of increased EV uptake as a result of AV deployment supports an estimated $58 billion societal benefit. 
Congestion: Crashes are a major source of road congestion; improved safety from AVs and better throughput (e.g. through reduced bottlenecks) could significantly reduce the current costs of congestion: close to 7 billion hours are lost in traffic and over 3 billion gallons of fuel are wasted every year. 
Improved Access to Retail and Jobs: SAFE modeling of road speeds around specific retail establishments found that the increased willingness of shoppers to travel—even by just two minutes each way—could increase a mall’s customer base by nearly 50 percent in some instances. Additionally, SAFE modeling identified numerous economically disadvantaged localities for whom better transportation options would lead to greater employment opportunities. For a group of four struggling cities (Gary, IN, Benton Harbor, MI, Elmira, NY, and Wilmington, DE), SAFE modeled how increased traffic speeds from AV adoption and greater willingness to travel could impact the number of jobs within reach. An illustrative example is in Figure B.
The Effect of AVs on the U.S. Labor Force
From the automobile to the internet, history has demonstrated time and again that new technologies lead to sizable economic and social benefits in the long run. However, with significant change always comes the specter of potential loss, particularly in the short term. As with many new technologies before it, public discourse around AVs has focused significantly on potential downsides, often with considerable exaggeration. Yet the potential losses must be balanced against the benefits from highly significant improvements in safety, reductions in vehicle crash fatalities, gains in productivity, reduced congestion and increased fuel efficiency that will result from AV deployment. Indeed, the benefits are sufficiently large to enable investment of adequate resources in assisting those affected.
A study of historical precedents for the impacts of new technologies found a common pattern: Adoption of new technologies improves productivity and increases quality of life. Widely adopted technologies can transform our way of life and improve economic well-being at a national scale. Often, technological progress leads to improved opportunities for workers in the short term; a recent study found that the rise of e-commerce has, on net, improved jobs for high school graduates.1 However, the impacts of those technologies can also present temporary challenges for the workforce, both for employers needing skilled workers, and for workers whose skills may no longer be as competitive in the labor market.
In the absence of concrete estimates, the media and public have a tendency to concentrate on the worst possible outcome. A recent report claimed that “more than four million jobs will likely be lost with a rapid transition to autonomous vehicles.”2 The methodology used to develop this number was simply to count driving jobs in the United States and assume that they would be rapidly lost as AVs deploy. Such assumptions and conclusions lack context, nuance, or grounding in labor market dynamics and the natural cycle of labor force evolution. Using the scenarios SAFE provided for the adoption of AVs, the Groshen employment report modeled the technology’s impact on the workforce. The study concluded that AVs would not lead to the long-term loss of jobs, although some number of workers could experience unemployment and wage losses. As there are far more professionally employed truck drivers than professionally employed car drivers, impacts would be tied more closely to the adoption of very high automation in trucks (defined as no driver “in the loop” for most of operation). In contrast, partial automation or teleoperation of trucks is not likely to have significant negative impacts on the workforce.
Figure C and Table A contextualize the job loss within a broadly understood metric—the unemployment rate. Relative to a baseline of full employment, the advent of AVs is projected to increase the unemployment rate to a small degree in the 2030s and to a somewhat larger degree in the late 2040s, with a peak, temporary addition to unemployment rates of 0.06–0.13 percentage points. Table A contextualizes the size of this employment impact against the shock of the recent Great Recession and a previous mild recession. Policy steps to address the evolution of the labor market must ultimately be placed in the context of the broader impacts of AVs in order to ensure the best outcome. Due to the large-scale societal benefits from the deployment of AVs, policies to address labor force issues must carefully consider their potential impact in delaying the deployment and thus the benefits of AVs. 
Delaying the deployment of AVs would represent a significant and deliberate injury to public welfare. Rather than delaying the benefits, policymakers could ensure that the interests of the people who may lose jobs are well protected through effective mitigation programs. Figure D illustrates the importance of balancing these two priorities. It plots both the conservative projected AV benefits and the range of projected wages that will be lost to individual workers due to AV-related unemployment. The range of projected wage loss reaches as high as $18 billion in 2044 and 2045. However, it is essential to note that this goes hand-in-hand with projected social benefits well in excess of $700 billion for each of those years. In fact, not only are the social and economic benefits of AV deployment significantly greater than their costs to workers on an annual basis, but the benefits of AVs each year far exceed the total cost to workers over the next 35 years combined (illustrated by the middle range of this graph).

07 August 2017

Driverless Vehicles

The National Transport Commission (NTC) last year released a paper on Regulatory reforms for automated road vehicles.

The Commission has recently released a discussion paper on Regulatory options to assure automated vehicle safety in Australia, outlining four regulatory options to govern the safety of driverless cars and other autonomous vehicles in Australia:
  • Continuing the current approach
  • Self-certification
  • Pre-market approval
  • Accreditation.
The paper states
Automated vehicles that do not require human driver input into the driving task for at least part of the journey are expected to arrive on our roads from around 2020. Currently there is no explicit regulation covering these automated driving functions. Manufacturers are aiming to ensure automated driving functionality improves road safety, but this technology may also create safety risks for road users. The purpose of this paper is to seek feedback on:
  • whether there is a need for explicit regulation of automated driving functions, above existing transport and consumer law 
  • if there is a need for regulation, what form this should take.
We are seeking feedback from governments, road safety experts, automated vehicle manufacturers, technology providers, insurers and other stakeholders on these questions. This paper examines:
  • how safety of automated vehicle functions should be assessed 
  • the options for a safety assurance system 
  • the criteria that should be used to decide among those options
  • institutional arrangements, road access and compliance.
Based on the feedback we receive, we will make recommendations to transport ministers in November 2017 on the preferred approach and the next steps to implement any required changes to legislation.
Australian governments have started work to remove legislative barriers to increasingly automated road vehicles. These barriers relate primarily to road traffic laws that implicitly require a human driver. Without further action, once these barriers have been removed, governments would have no regulatory mechanism to proactively ensure automated driving technologies are safe.
Automated driving technologies are progressively undertaking more of the driving task, and it is likely this technology will improve road safety, mobility, productivity and environmental outcomes. However, the technology is highly innovative and diverse and requires further testing and evaluation. From a regulatory perspective, there are four key issues:
  • Should governments have a role assuring the safety of automated vehicles? 
  • What are our measures of safety, and what is the level of safety required? 
  • How does a safety assurance system balance safety outcomes with innovation, certainty and regulatory efficiency?
  • Where does a safety assurance system fit within the existing regulatory framework for road transport, and how does it interact with existing laws?
In November 2016 the Transport and Infrastructure Council directed the National Transport Commission (NTC) to develop a national performance-based assurance regime designed to ensure the safe operation of automated vehicles. This will form a key component of an end-to-end regulatory framework to support the safe commercial operation of automated vehicles. Based on the feedback to this discussion paper, the NTC will recommend a preferred approach to ministers in November 2017, along with the next steps on regulatory reforms to support this approach.
In the absence of agreed Australian or international standards specific to automated vehicle technologies, governments need to consider the uncertain safety outcomes associated with different applications of automated driving, and whether the safety risk justifies additional government oversight and regulatory intervention. In Australia this type of oversight would be in addition to existing general consumer and product liability laws as well as extensive regulation covering vehicle standards and vehicle operation.
As the performance of the vehicle technology becomes increasingly safety-critical, new regulatory approaches may be needed to ensure initial and ongoing safety. Such approaches will need to cover all potential technology providers, from traditional automotive manufacturers to companies and individuals developing after-market devices to modify existing vehicles.
There is a risk that, without a national and coordinated response to automated vehicle reform, Australia’s complex regulatory framework will result in inconsistent regulation or over-regulation of automated vehicles across states and territories.
Regulatory options for safety assurance of automated vehicle functions
The NTC has developed four regulatory options for consultation for the safety assurance of automated vehicle functions. These are based on our assessment of the current regulatory framework and a review of safety literature and international developments. The four options are:
1. Continue current approach – no additional regulatory oversight, with an emphasis on existing safeguards in Australian Consumer Law and road transport laws. 
2. Self-certification – manufacturers make a statement of compliance against high-level safety criteria developed by government. This could be supported by a primary safety duty to provide safe automated vehicles. 
3. Pre-market approval – automated driving systems are certified by a government agency as meeting minimum prescribed technical standards prior to market entry. 
4. Accreditation – accreditation agency accredits an automated driving system entity. The accredited party demonstrates it has identified and managed safety risks to a legal standard of care.
We are seeking feedback on these regulatory options, recognising that the regulatory solution may draw upon elements across these options. Stakeholders are also welcome to propose new regulatory options. ... 
In many ways, the regulatory options reflect the risk appetite of the community and how the optimum role of government is perceived and understood by the community. In broad terms, the greater the risk appetite, the less we need explicit regulation, or a proactive role for governments to ensure automated vehicle safety.
In line with developments in other countries, the NTC proposes that the safety risks are sufficiently high or unknown to warrant some level of regulatory oversight and government involvement in the safety assurance system. ...
We have proposed eight assessment criteria against which the regulatory options for the safety assurance system have been evaluated. ... 
Proposed assessment criteria for the design of the safety assurance system
1. Safety 
  • The model should support automated vehicle safety, including the ongoing safety over the full lifespan of the vehicle. 
  • The model should provide certainty about who is responsible for testing, validating and managing safety risks. 
2. Innovation, flexibility and responsiveness 
  • The model should be technology-neutral and allow innovative solutions. 
  • The model should allow government to respond and adapt to the changing market and evolving technology. 
3. Accountability and probity 
  • The model should ensure the decision-making process is transparent, accountable and, where appropriate, appealable. 
  • There should always be an entity (whether an individual or a corporation) that is legally accountable for the automated driving system.
4. Regulatory efficiency 
  • The assurance process should be as efficient as possible and result in the least cost for industry and government, proportionate to the risk. 
  • The process of assurance should minimise structural, organisational and regulatory change necessary to implement the model. 
5. International and domestic consistency 
  • The model should support a single national approach, or state-based approaches that are nationally consistent. 
  • The model should be adaptable if and when there is international consistency. International approval processes and standards should be recognisable. 
6. Safe operational design domain 
  • The model should be able to take into consideration the operational design domain of an automated driving system. 
7. Other policy objectives 
  • The model should be able to support non-safety policy objectives including cybersecurity, traffic management, environmental protection and the provision of data for enforcement or insurance purposes. 
8. Timeliness  
  • The model should be able to be implemented and operational when the technology is ready.
Our initial assessment of the regulatory options suggests there are significant disadvantages associated with not developing a safety assurance system and continuing with the current approach (see Table 2). This is primarily because the Australian Design Rules (ADRs) do not have regard to automated driving technologies. Furthermore, existing safeguards, including vehicle recall powers, are focused on the technical integrity of the vehicle and do not consider environmental or human performance safety factors. This may lead to road safety risks, particularly in relation to vehicle modifications and after-market fitment.
Self-certification is a light-touch approach that, like the ‘continue current approach’ option, relies on existing safeguards but could introduce voluntary or mandatory compliance with automated vehicle safety principles and criteria. Showing compliance with these criteria would allow automated driving system entities to demonstrate to government that their vehicles are safe and therefore suitable to be registered under state and territory laws. Self-certification could be supported by a legislated primary safety duty for manufacturers, suppliers and automated driving system entities to provide safe automated vehicles.
Pre-market approval arguably provides the highest certainty for government and consumers that automated vehicles will be safe. However, this option is also regulation- and resource-intensive and could stifle safety-related innovation if testing standards and procedures do not keep pace with technology changes.
Accreditation provides a comprehensive, risk-based and proven framework within which safety can be regulated. It focuses on outcomes, risk management and continuing improvements to safety. The accreditation model has demonstrated safety benefits in other high-risk industries including mining, rail and aviation. However, accreditation would involve a major reform of road safety regulation, would entail substantial set-up costs and is not an approach that other countries are known to be exploring. ...
The paper comments
Implementation issues
A number of key issues need to be considered in implementing an approach to the safety assurance of automated vehicle functions, in particular:
  • how to evaluate and validate safety
  • institutional arrangements to support the approach 
  • how to manage access to the road network 
  • how to ensure compliance with the requirements of the approach selected.
How to evaluate and validate safety
Evaluation and validation of automated vehicle safety is a foundation issue for the development of the safety assurance system. The proportionate and appropriate role of a government agency to test the safety claims made by a manufacturer or technology provider will largely depend on the regulatory model adopted in Australia.
We are seeking feedback on whether safety should be defined and measured according to the rate of technical failure and incidents that result in harm to people, or be based on an agreed metric of safety such as crash rates.
The NTC is proposing that the onus be placed on the automated driving system entity to demonstrate the methods they have adopted to identify and manage safety risks.
Institutional arrangements to support the approach
If there is a role for government in safety assurance for automated vehicle functions, which government body should have that role? Responsibility for motor vehicle safety regulation is currently shared between the Commonwealth and the states and territories. This mix of regulatory responsibilities adds complexity to the development of a safety assurance system and to the potential institutional arrangements to oversee it. We are seeking feedback on institutional arrangements, including the types of government entities that could support a safety assurance system.
Institutional arrangements are heavily dependent on the safety assurance option chosen; therefore, the NTC is proposing that institutional models be further developed after a regulatory option has been agreed.
How to manage access to the road network
For the foreseeable future, automated vehicle functionality will be limited to parts of the road network (for example, only sealed roads). This raises the question of the role of registration authorities and road managers (including local governments) in managing access to the road as part of the safety assurance system. We are seeking feedback on the role of road managers and whether registration authorities and road managers should authorise automated vehicle access to their road network in addition to safety assurance processes.
The NTC is proposing that a national approach should be adopted that incorporates automated vehicle registration and network access into the safety assurance process. However, access issues should be further explored once a regulatory model has been agreed.
How to ensure compliance
How do governments ensure compliance with any safety assurance system? We are seeking feedback on how to ensure compliance – including what regulation (if any) is needed to ensure automated driving system entities and other parties comply with safety obligations.
We suggest that compliance could be ensured through a primary safety duty for parties to provide safe automated vehicles with associated penalties and/or specific sanctions and penalties for the automated driving system entity.
The best way to ensure compliance will depend significantly on the regulatory model agreed. Sanctions and penalties in road traffic laws could also cover automated driving system entities through the NTC reforms to driver legislation.
Consultation questions
1. Should government have a role in assessing the safety of automated vehicles or can industry and the existing regulatory framework manage this? What do you think the role of government should be in the safety assurance of automated vehicles? 
2. Should governments be aiming for a safety outcome that is as safe as, or significantly safer than, conventional vehicles and drivers? If so, what metrics or approach should be used? 
3. Should the onus be placed on the automated driving system entity to demonstrate the methods they have adopted to identify and mitigate safety risks? 
4. Are the proposed assessment criteria sufficient to decide on the best safety assurance option? If not, what other assessment criteria should be used for the design of the safety assurance system? 
5. Should governments adopt a transitional approach to the development of a safety assurance system? If so, how would this work? 
6. Is continuing the current approach to regulating vehicle safety the best option for the safety assurance of automated vehicle functions? If so, why? 
7. Is self-certification the best approach to regulating automated vehicle safety? If so, should this approach be voluntary or mandatory? Should self-certification be supported by a primary safety duty to ensure automated vehicle safety? 
8. Is pre-market approval the best approach to regulating automated vehicle safety? If so, what regulatory option would be the most effective to support pre-market approval? 
9. Is accreditation the best approach to regulating automated vehicle safety? If so, why? 
10. Based on the option for safety assurance of automated vehicle functions, what institutional arrangements should support this option? Why? 
11. How should governments manage access to the road network by automated vehicles? Do you agree with a national approach that does not require additional approval by a registration authority or road manager? 
12. How should governments ensure compliance with the safety assurance system?