13 February 2015

AI and Personhood

'Machine Minds: Frontiers in Legal Consciousness' by Evan Joseph Zimmerman argues that:
Research is at the point where we might have to confront the possibility of what computer scientists call “strong AI” in the coming years. A strong AI could be intelligent by most reasonable definitions of the word, and possibly have a subjective experience. It stands to reason that we must seriously consider whether such a machine should have its will recognized, protected, and enabled. That is to say, such a sufficiently advanced machine might be said to carry responsibilities, and rights, on its own, as a legal person.
This question is wrapped up in significant philosophical and technical questions of acceptability and feasibility. The novelty of this Essay is that it provides a positive reason to grant such rights with a basis in technical law, along with a concrete definition and justification for personhood. The organizing principle this Essay arrives at is: Personhood exists to protect conscious individuals from suffering and to allow them to exercise their wills, subject to their intelligence.
This Essay examines several documents across a variety of fields, as well as historical records. Examples such as corporate law, personhood law, slavery law, and standing law demonstrate that the story of personhood is a history of grappling with what is fully conscious, and how to allow these consciousnesses to exercise their will and avoid pain. However, because law was made by and for humans, an examination of the law surrounding animal welfare and humans in a vegetative state shows that the law privileges humanity. Hence, our laws imply that a computer, if it is intelligent enough, should be considered conscious; yet our laws as they stand would arbitrarily deny it personhood simply because it is not made of flesh and blood.
Zimmerman states:
In one episode of that ever-prescient television show, Star Trek: The Next Generation, the android Data's right to refuse to be dismantled for research purposes is questioned. Data is an android that is one-of-a-kind in its (the crew calls it "his") intelligence, and is treated by his crew as a living being with feelings. Despite this, a scientist wants to dissect Data even though the procedure could destroy Data's personality, which causes Data to refuse it. In response, the scientist claims that as a robot, Data is Starfleet's property, and thus has no right to refuse any procedure. A court case ensues, in which one side appeals to, essentially, a form of species solipsism: organic beings are known to be life, and as Data is an android, he is thus clearly not alive. The opposing side argues that Data has feelings, and that his consciousness is as difficult to prove as any other person's. At stake is the potential destruction of Data's soul, which is not likely to survive the procedure. The argument, of course, is over whether such a thing even exists.
The complexity of computers, particularly their potential to become intelligent, raises profound legal questions. Although computers may not become brilliant overnight, several observers of the field consider such a thing a serious possibility, and technology has shown itself to be unpredictable. Despite the quip that artificial intelligence is always ten years away, a breakthrough may be just around the corner, and the law should be prepared. It is extremely important that this matter be treated wisely, carefully, and with an eye on an emerging future moving at the speed of light and imagination. Proper care depends on our ability to abstract beyond our own experiences, as a being need not think like us for it to be said that it is thinking. It also requires knowledge—though not necessarily expertise!—of difficult technical issues.
Those in software circles are fond of saying that "Technology moves fast, and the law moves slow." The likes of Europe's Luddites litter the 19th century. Yet, in the 21st, it seems impossible to avoid the fact that machines outfitted with advanced computational power form an increasingly large part of our lives and are changing more quickly than ever: in homes, in vehicles, at work, in the air. Many in the technology industry believe that machines have changed in front of our eyes while the law seems to have hardly noticed. Often the technologies are so complicated that the regulations may seem to our tech observer as built on an incomplete or faulty understanding: inadequate, cumbersome, outdated upon pronouncement, and obstructionist in reality. Whether or not this is true, outside of a few articles in the past two decades, the literature has mostly not addressed the issue of artificial intelligence and personhood.
Any such judgment will have to hinge on consciousness. The point of the law is not to protect tools. It is to ensure the protection, and allow the full life, of those who can feel, even if, as for animals, in degrees, and even if those feelings are alien and almost inconceivable compared to our own. A different type of being need not think the same way as a human being for it to be said that it is thinking, and consequently to be deserving of privileges and protections. Surely it carries obligations and liabilities; so, why not rights? Consequently, consciousness is the key factor in determining the legal status of intelligent computers. A recognition of this requires a willingness to admit that there are serious philosophical questions that the law may not be able to address, but may be forced to consider anyway. It is important to address these issues now, while we have years to consider the question, rather than scramble to piece together a last-minute solution. These are pressing concerns that go to the core of our legal system, and by highlighting and thinking about them now, perhaps for the first time in a long time, the law can be prepared for when the technology arrives.
Unlike the previous literature, instead of negating reasons for denying personhood, this Essay offers an affirmative reason to grant it, and instead of approaching the question philosophically, it proceeds from a technical legal standpoint. I examine case law and influential historical conceptions of personhood, including corporate personhood and liability, to determine that personhood is intended to privilege consciousness, as a conscious being has a will and can feel pain. By examining case law and statutes on such issues as slavery, women's suffrage and education, corporate personhood, humans in a vegetative state, children, and animals, I conclude that personhood exists in a tiered form, even if sometimes we dare not speak its name. Furthermore, popular historical justification is used to suggest that the basis for such stratification is the level of intelligence, i.e., the complexity and depth of the subjective experience, of the persons in question.
In Section I, this Essay frames the issue for future scholars considering such cases by explaining, in a technology primer, how these sophisticated machines work. In Section II, this Essay provides working legal definitions for terms like "intelligence" and "consciousness" that a court could use without being forced to take a position in perhaps the most significant and ancient debate in human society, that is, how to assess consciousness. In Section III, this Essay examines the historical record, statutes, and case law to assess what personhood really is, then asks whether machines should be granted personhood, and contrasts my conclusion with the existing literature. The main thrust of this Essay is ultimately to answer this question: Could a machine be a legal person, and is the law currently able to withstand the intellectual challenge of an intelligent machine?