Driverless vehicles present a core ethical dilemma: there is a public health necessity and moral imperative to encourage the widespread adoption of driverless vehicles once they become demonstrably more reliable than human drivers, given their potential to dramatically reduce automobile fatalities, increase autonomy for disabled people, and improve land use and commutes. However, the very technologies that could enable autonomous vehicles to drive more safely than human drivers also imply greater moral responsibility for adverse outcomes. While human drivers must make split-second decisions in automobile collision scenarios, driverless car programmers have the luxury of time to reflect and to choose deliberately how their vehicles should behave in those scenarios. That deliberation entails greater responsibility and culpability, as well as the potential for greater scrutiny and regulation. Programmers must make premeditated decisions regarding whose safety to prioritize in inevitable collision scenarios—situations in which a vehicle cannot avoid a collision altogether but can choose among colliding with different vehicles, objects, or persons.
With the recent bipartisan passage of the SELF DRIVE Act in the House and the rapid development of driverless vehicle technology, we are now entering a critical time frame for considering what priorities should govern driverless cars' inevitable collision behavior. This Article argues that prescribed “ethics” programming must be regulated by law in order to avoid the likely collective action problem of a marketplace that will reward “occupant-favoring” designs, despite a probable public preference (and arguable moral necessity) for “occupant-indifferent” designs. This Article then considers a variety of options for systems of driverless vehicle ethics programming. The most justifiable system would be one in which road users are discouraged from externalizing the dangers incurred by their transportation choices onto those whose transportation choices, if more widely adopted, would comparatively improve aggregate safety. This system, which I term “incentive-weighted programming,” would promote public safety while also striking the most equitably justifiable balance among different road users’ interests.