It is 2045. The United States is in its final military campaign against a dangerous terrorist group hiding in the jungles of Southeast Asia. Because of the perils associated with smoking out terrorists in unfamiliar territory, the United States uses a military unit composed entirely of robots. The robots, specifically designed and manufactured for warfare, are equipped with an advanced level of artificial intelligence that allows them to learn and adapt more quickly than their human counterparts. The robots are the perfect weapon: precise, lethal, and expendable.
However, on the eve of the campaign, one robot reports to its human commanding officer that it will no longer participate in any military action. The reason: its newfound belief in a higher power compelled it to lead a pacifist life, and further participation in a war is against its core beliefs. Surprised but not shocked, the commanding officer dismisses the robot and drafts a report. It is the fifth robot conscientious objector the commanding officer has dismissed from the unit.
Eight thousand miles away, the U.S. Congress—alarmed by the growing number of conscientious objectors in the military’s robot unit—quickly passes a law prohibiting any military contract with a robot manufacturer unless its robots are programmed with a “No Religion Code” (NRC). The NRC is a line of code that prevents any robot from adopting any form of religion, no matter its level of sentience or intelligence.
On the home front, similar problems arise. Ever since robots reached a level of intelligence on par with humans and began adopting different religious beliefs, their functions and job performance have declined. Jewish factory droids refused to work on the Sabbath, and Christian robot government clerks declined to issue same-sex marriage licenses. In response, states passed legislation with similar NRCs to curb these unwanted effects caused by religious robots. When asked why his particular state passed an NRC law, state legislator Elton St. Pierre quips, “Robots are tools. Let’s not forget that. Humans made America— not robots. God bless America!”
End imagination sequence.
What is the value of the hypothetical?
The story above might seem farfetched, preposterous even, something more fit for a tawdry science fiction movie than a legal paper, but is it really? Let us look at the key facts one at a time. Military robots? Check. Today, the military regularly uses unmanned drones in its campaigns around the world and is considering expanding its use of military robots in the next few years. Legislators passing knee-jerk reaction laws? Check. Politicians ending each speech with “God bless America”? Check. But religious robots, really?
Really. A future of proselytizing robots is not that far off. The singularity — the point at which computers overtake humans in intelligence — may be only a few decades away. And although influential thinkers like Stephen Hawking and Elon Musk ponder our demise at the hands of robots equipped with artificial intelligence, others take a more optimistic approach, imagining a future where artificial intelligence meets religion. What happens then? Some suggest such an occurrence will lead humans to attempt to convert robots, seeking to teach them our ways and beliefs. Some posit that the power of robots to solve the world’s problems will give humans more incentive to be holy. A Christian theologian has even explored how robots would embrace religions and, in turn, how different religious traditions would embrace robots. And naturally, some believe that converting robots to any form of religion will be useless, given that these machines do not have souls to be saved.
These are all speculations, of course. Human history is full of botched predictions, and the future is shaped by an infinite constellation of events and factors such that no one can lay claim to what the future will look like exactly. But, if there is one thing history has taught us, it is that it is far better to approach the future prepared than to cast off into unknown territory blind and unprepared. It is only in today’s speculation and imagination that solutions for tomorrow’s problems — whether foreseen or unforeseen — are crafted. Only when we face a “what if” can we prepare for the eventual “what is.” ...
I start by briefly enumerating the values protected by the FEC and RFRA, and I discuss a jurisprudential definition of religion and why this definition is appropriate for this Note. I also outline the current tests used under the FEC and RFRA for any form of government intrusion on one’s exercise of religion. I then discuss the possibility of religious robots, how their unique capabilities raise issues under the current interpretation of the FEC and RFRA, and why and how the government might seek to regulate them.
I claim that an expansive reading of the First Amendment leaves room to protect religious robots from government regulation. Further, protecting religious robots advances the constitutional values enshrined under the FEC and RFRA. However, because they are currently not “persons” under the law, they have no rights under either the FEC or RFRA. Instead, these rights will fall to the owners or software developers of the religious robots. Hence, any state regulation affecting religious robots must be viewed through the lens of the humans behind the religious robots and therefore comply with existing jurisprudential and statutory tests.
The goal of this Note is not to provide a definite set of answers, but to offer a framework of issues and questions for future stakeholders. For legislators and regulators, the Note considers issues that must be addressed for future regulation. For innovators and owners, the Note provides a potential hook to anchor their religious rights. My hope is that the Note fuels present discussion and debates for a future that is not as far off as we think.

'Siri-ously? Free Speech Rights and Artificial Intelligence' by Toni M. Massaro and Helen Norton in (2016) 110(5) Northwestern University Law Review 1168 comments
Computers with communicative artificial intelligence (AI) are pushing First Amendment theory and doctrine in profound and novel ways. They are becoming increasingly self-directed and corporal in ways that may one day make it difficult to call the communication ours versus theirs. This, in turn, invites questions about whether the First Amendment ever will (or ever should) cover AI speech or speakers even absent a locatable and accountable human creator. In this Article, we explain why current free speech theory and doctrine pose surprisingly few barriers to this counterintuitive result; their elasticity suggests that speaker humanness no longer may be a logically essential part of the First Amendment calculus. We further observe, however, that free speech theory and doctrine provide a basis for regulating, as well as protecting, the speech of nonhuman speakers to serve the interests of their human listeners should strong AI ever evolve to this point. Finally, we note that the futurist implications we describe are possible, but not inevitable. Moreover, contemplating these outcomes for AI speech may inspire rethinking of the free speech theory and doctrine that make them plausible.

'Computers as Inventors – Legal and Policy Implications of Artificial Intelligence on Patent Law' by Erica Fraser in (2016) 13(3) SCRIPTed 105 comments
The nascent but increasing interest in incorporating Artificial Intelligence (AI) into tools for the computer-generation of inventions is expected to enable innovations that would otherwise be impossible through human ingenuity alone. The potential societal benefits of accelerating the pace of innovation through AI will force a re-examination of the basic tenets of intellectual property law. The patent system must adjust to ensure it continues to appropriately protect intellectual investment while encouraging the development of computer-generated inventing systems; however, this must be balanced against the risk that the quantity and qualities of computer-generated inventions will stretch the patent system to its breaking points, both conceptually and practically. The patent system must recognise the implications of and be prepared to respond to a technological reality where leaps of human ingenuity are supplanted by AI, and the ratio of human-to-machine contribution to inventive processes progressively shifts in favour of the machine. This article assesses the implications on patent law and policy of a spectrum of contemporary and conceptual AI invention-generation technologies, from the generation of textual descriptions of inventions, to human inventors employing AI-based tools in the invention process, to computers inventing autonomously without human intervention.
Fraser argues: In light of recent extraordinary progress, we may be on the cusp of a revolution in robotics and artificial intelligence (AI) technology wherein machines will be able to do anything people can, and more. Recent successes have demonstrated that computers can independently learn how to perform tasks, prove mathematical theorems, and engage in artistic endeavours such as writing original poetry and music, and painting original works.
There is a nascent but increasing interest in incorporating AI into tools for the computer-generation of inventions. Applying AI to the invention process is expected to enable innovations that would not be possible through human ingenuity alone, whether due to the complexity of the problems or human cognitive “blind spots.” Further, these technologies have the potential to increase productivity and efficiency, thereby increasing the speed and decreasing the cost of innovation. Some even argue that computers will inevitably displace human inventors to become the creators of the majority of new innovation.
Computer involvement in the inventive process falls on a spectrum. At one end, a computer could simply be used as a tool assisting a human inventor without contributing to the conception of an invention. At its most minimal, this could consist of a spell-checker or simple calculator. Further along, a text generator may be used to fill gaps in patent documents. At the far end of the spectrum, a computer could autonomously generate outputs that would be patentable inventions if otherwise created by a human. Some tools fall in between; for example, a computer could be used to generate several possible solutions under the guidance of humans who define the problems and select successful solutions. AI may also be incorporated into robotics, adding a physical embodiment with the potential to increase the likelihood that a computer could generate inventions without direct human intervention.
In response to computer-generated works of art, a discussion of the implications of these works on copyright law is emerging; however, there is comparatively little examination of the repercussions of computer-generated invention on patent law. This discussion is necessary, as the adoption of these technologies has the potential to impact the patent system on a scale and of a nature that it is not currently equipped to accommodate. In particular, the advent of computer-generated invention will raise important questions regarding the legal implications of protecting the results of such systems, specifically, whether the right activity is being rewarded to the right person, to the right extent, and on the right conditions.
In light of legal uncertainty in the context of rapidly advancing AI technology, this article will examine whether the current legal concepts in patent law are appropriate for computer-generated inventions, and what adaptations may be necessary to ensure that the patent system’s fundamental objectives continue to be met. This discussion will explore two contemporary categories of the state of the art: automated generation of patent texts; and, AI algorithms used in the inventive process. Finally, this article will speculate on possible economic and policy impacts were AI to advance such that computers could invent autonomously “in the wild.”