Computers are making an increasing number of important decisions in our lives. They fly airplanes, navigate traffic, and even recommend books. In the process, computers reason through automated algorithms and constantly send and receive information, sometimes in ways that mimic human expression. When can such communications, called here “algorithmic outputs,” claim First Amendment protection?

Wu argues that:
The question of “rights for robots,” if once limited to science fiction, has now entered the public debate. In recent years, firms like Verizon and Google have relied on First Amendment defenses against common-law and regulatory claims by arguing that some aspect of an automated process is speech protected by the Constitution. These questions will only grow in importance as computers become involved in more areas of human decisionmaking. A simple approach, favored by some commentators, says that the First Amendment presumptively covers algorithmic output so long as the program seeks to communicate some message or opinion to its audience. But while simplicity is attractive, so is being right. In practice, the approach yields results both absurd and disruptive; the example of the car alarm shows why. The modern car alarm is a sophisticated computer program that uses an algorithm to decide when to communicate its opinions, and when it does it seeks to send a particularized message well understood by its audience. It meets all the qualifications stated, yet clearly something is wrong with a standard that grants constitutional protection to an electronic annoyance device. Something is missing.
The big missing piece is functionality. More specifically, what’s being overlooked is the differential treatment courts should accord communications closely tied to some functional task. A close reading of the relevant cases suggests that courts, in fact, limit coverage in a way that reserves the power of the state to regulate the functional aspects of the communication process, while protecting its expressive aspects. Here, I go further and suggest that the law contains a de facto functionality doctrine that must be central to any consideration of machine speech.
The doctrine operates in two distinct ways. First, courts tend to withhold protection from carrier/conduits—actors who handle, transform, or process information, but whose relationship with speech or information is ultimately functional. Definitive examples are Federal Express and the telephone company, common carriers to whom the law does not grant speech rights. Those who merely carry information from place to place (courier services) generally don’t enjoy First Amendment protection, while those who select a distinct repertoire, like a newspaper or cable operator, do. Similarly, those who provide the facilities for job interviews are not recognized as speakers, nor are the manufacturers of technologies that record or transform information from one form into another—like a typewriter, photocopier, or loudspeaker.
Second, courts do not normally protect tools—works whose use of information is purely functional, such as navigational charts, court filings, or contracts. The reasons are complex, and related to a broader nonprotection of information that by its very communication performs some task. In the words of language philosophers these are “speech acts,” “illocutionary acts,” or “situation-altering utterances.” The broader category includes the communications embodied in criminal commands, commercial paper, nutritional information, and price-fixing conspiracies.
Combined, these two tendencies form a de facto functionality doctrine, which, as we shall see, is central to understanding the First Amendment in the context of algorithmic output (and, thankfully, excludes car alarms from the protections of the Constitution). For one thing, in many software cases the fact that an algorithm makes the decisions is in tension with the requirement of knowing selection or intimate identification. Other times, algorithmic output falls into the category of communication that acts by its very appearance. Warnings, status indications, directions, and similar signals are common outputs for computer software in this category. These outputs act to warn or instruct and are therefore similar analytically to something like a criminal command or conspiracy.
In an area as complex as this, a rule of thumb might be useful. Generally, we can distinguish software that serves as a “speech product” from that which is a “communication tool.” Communication tools fall into the categories just described: they primarily facilitate the communications of another person, or perform some task for the user. In contrast, speech products are technologies like blog posts, tweets, video games, newspapers, and so on, that are viewed as vessels for the ideas of a speaker, or whose content has been consciously curated.
The boundary between one and the other may be imperfect, but it must be drawn somewhere if the First Amendment is to be confined to its primary goal of protecting the expression of ideas, and if we are to prevent its abuse. If a software designer is primarily interested in facilitating some task for the user, he will be unlikely to have the space to communicate his own ideas. At a minimum, his ideas must bend to operations. Thus, the intent is not to communicate ideas, or, as the Supreme Court puts it, to “affect public attitudes and behavior in a variety of ways, ranging from direct espousal of a political or social doctrine to the subtle shaping of thought which characterizes all artistic expression.”
In what follows, I introduce these ideas more thoroughly and, along the way, consider the speech status of blogging and microblogging software like Twitter, GPS navigation software, search engines, and automated concierges. The importance of these matters cannot be overstated. Too little protection would disserve speakers who have evolved beyond the printed pamphlet. Too much protection would threaten to constitutionalize many areas of commerce and private concern without promoting the values of the First Amendment.

‘Sue My Car Not Me: Products Liability and Accidents Involving Autonomous Vehicles’ by Jeffrey Gurney in (2013) 2 Journal of Law, Technology and Policy 101 argues that:
Autonomous vehicles will revolutionize society in the near future. Computers, however, are not perfect, and accidents will occur while the vehicle is in autonomous mode. This Article answers the question of who should be liable when an accident is caused in autonomous mode. This Article addresses liability for autonomous vehicles by examining products liability through the use of four scenarios: the Distracted Driver; the Diminished Capabilities Driver; the Disabled Driver; and the Attentive Driver.
Based on those scenarios, this Article suggests that the autonomous technology manufacturer should be liable for accidents caused in autonomous mode because the autonomous vehicle probably caused the accident. Liability should shift back to the “driver” depending on the nature of the driver and the ability of that person to prevent the accident. Thus, this Article argues that an autonomous vehicle manufacturer should be liable for accidents caused in autonomous mode for the Disabled Driver and partially for the Diminished Capabilities Driver and the Distracted Driver. This Article argues that the Attentive Driver should be liable for most accidents caused in autonomous mode. Products liability does not currently allocate the financial responsibility for an accident to the party that is responsible for the accident, and this Article suggests that courts and legislatures need to address tort liability for accidents caused in autonomous mode to ensure that the responsible party bears responsibility for accidents.