It is 2045. The United States is in its final military campaign against a dangerous terrorist group hiding in the jungles of Southeast Asia. Because of the perils of smoking out terrorists in unfamiliar territory, the United States uses a military unit composed entirely of robots. The robots, specifically designed and manufactured for warfare, are equipped with an advanced level of artificial intelligence that allows them to learn and adapt more quickly than their human counterparts. The robots are the perfect weapon: precise, lethal, and expendable.
However, on the eve of the campaign, one robot reports to its human commanding officer that it will no longer participate in any military action. The reason: its newfound belief in a higher power compelled it to lead a pacifist life, and further participation in a war is against its core beliefs. Surprised but not shocked, the commanding officer dismisses the robot and drafts a report. It is the fifth robot conscientious objector the commanding officer has dismissed from the unit.
Eight thousand miles away, the U.S. Congress—alarmed by the growing number of conscientious objectors in the military’s robot unit—quickly passes a law prohibiting any military contract with a robot manufacturer unless its robots are programmed with a “No Religion Code” (NRC). The NRC is a line of code that prevents any robot from adopting any form of religion, no matter its level of sentience or intelligence.
On the home front, similar problems arise. Ever since robots reached a level of intelligence on par with humans and began adopting different religious beliefs, their functions and job performance have declined. Jewish factory droids refused to work on the Sabbath, and Christian robot government clerks declined to issue same-sex marriage licenses. In response, states passed legislation with similar NRCs to curb these unwanted effects of religious robots. When asked why his particular state passed an NRC law, state legislator Elton St. Pierre quips, “Robots are tools. Let’s not forget that. Humans made America — not robots. God bless America!”
End imagination sequence.
What is the value of the hypothetical?
The story above might seem farfetched, preposterous even — something fit more for a tawdry science fiction movie than a legal paper — but is it really? Let us look at the key facts one at a time. Military robots? Check. Today, the military regularly uses unmanned drones in its campaigns around the world and is currently considering expanding its use of military robots over the next few years. Legislators passing knee-jerk laws? Check. Politicians ending each speech with “God bless America”? Check. But religious robots, really?
Really. A future of proselytizing robots is not that far off. The singularity — the point at which computers overtake humans in intelligence — is predicted to be only a few decades away. And although influential thinkers like Stephen Hawking and Elon Musk ponder our demise at the hands of robots equipped with artificial intelligence, others take a more optimistic approach, imagining a future where artificial intelligence meets religion. What happens then? Some suggest such an occurrence will lead humans to attempt to convert robots, seeking to teach them our ways and beliefs. Some posit that the power of robots to solve the world’s problems will give humans more incentive to be holy. A Christian theologian has even explored how robots would embrace religions and, in turn, how different religious traditions would embrace robots. And naturally, some believe that converting robots to any form of religion will be useless, given that these machines have no souls to be saved.
These are all speculations, of course. Human history is full of botched predictions, and the future is shaped by an infinite constellation of events and factors, such that no one can say exactly what the future will look like. But if history has taught us one thing, it is that it is far better to approach the future prepared than to cast off into unknown territory blind. It is only in today’s speculation and imagination that solutions for tomorrow’s problems — whether foreseen or unforeseen — are crafted. Only when we face a “what if” can we prepare for the eventual “what is.” ...
I start by briefly enumerating the values protected by the FEC and RFRA, discussing a jurisprudential definition of religion, and explaining why this definition is appropriate for this Note. I also outline the tests currently used under the FEC and RFRA for any form of government intrusion on one’s exercise of religion. I then discuss the possibility of religious robots, how their unique capabilities raise issues under current interpretations of the FEC and RFRA, and why and how the government might seek to regulate them.
I claim that an expansive reading of the First Amendment leaves room to protect religious robots from government regulation. Further, protecting religious robots advances the constitutional values enshrined in the FEC and RFRA. However, because robots are not currently “persons” under the law, they have no rights under either the FEC or RFRA. Instead, these rights will fall to the owners or software developers of the religious robots. Hence, any state regulation affecting religious robots must be viewed through the lens of the humans behind those robots and must therefore comply with existing jurisprudential and statutory tests.
The goal of this Note is not to provide a definitive set of answers, but to offer a framework of issues and questions for future stakeholders. For legislators and regulators, the Note considers issues that must be addressed in future regulation. For innovators and owners, the Note provides a potential hook on which to anchor their religious rights. My hope is that the Note fuels present discussion and debate about a future that is not as far off as we think.

'Siri-ously? Free Speech Rights and Artificial Intelligence' by Toni M. Massaro and Helen Norton in (2016) 110(5) Northwestern University Law Review 1168 comments
Computers with communicative artificial intelligence (AI) are pushing First Amendment theory and doctrine in profound and novel ways. They are becoming increasingly self-directed and corporal in ways that may one day make it difficult to call the communication ours versus theirs. This, in turn, invites questions about whether the First Amendment ever will (or ever should) cover AI speech or speakers even absent a locatable and accountable human creator. In this Article, we explain why current free speech theory and doctrine pose surprisingly few barriers to this counterintuitive result; their elasticity suggests that speaker humanness may no longer be a logically essential part of the First Amendment calculus. We further observe, however, that free speech theory and doctrine provide a basis for regulating, as well as protecting, the speech of nonhuman speakers to serve the interests of their human listeners, should strong AI ever evolve to this point. Finally, we note that the futurist implications we describe are possible, but not inevitable. Moreover, contemplating these outcomes for AI speech may inspire rethinking of the free speech theory and doctrine that make them plausible.