'Will There Be a Neurolaw Revolution?' by Adam J. Kolber, (2014) 89 Indiana Law Journal 807-845, argues that
The central debate in the field of neurolaw has focused on two claims. Joshua Greene and Jonathan Cohen argue that we do not have free will and that advances in neuroscience will eventually lead us to stop blaming people for their actions. Stephen Morse, by contrast, argues that we have free will and that the kind of advances Greene and Cohen envision will not and should not affect the law. I argue that neither side has persuasively made the case for or against a revolution in the way the law treats responsibility.
There will, however, be a neurolaw revolution of a different sort. It will not necessarily arise from radical changes in our beliefs about criminal responsibility but from a wave of new brain technologies that will change society and the law in many ways, three of which I describe here: First, as new methods of brain imaging improve our ability to measure distress, the law will ease limitations on recoveries for emotional injuries. Second, as neuroimaging gives us better methods of inferring people’s thoughts, we will have more laws to protect thought privacy but less actual thought privacy. Finally, improvements in artificial intelligence will systematically change how law is written and interpreted. …
The emerging field of neurolaw addresses two major topics that have only limited overlap. The “neurolaw of responsibility” concerns how neuroscience will and should affect laws related to responsible action. It was traditionally addressed by punishment theory and the philosophy of action. The “neurolaw of technology,” by contrast, concerns the ways the law will and should respond to new brain-related technologies. It covers issues traditionally addressed by applied ethics. Both topics require familiarity with law and neuroscience, but they otherwise examine rather different issues. Nevertheless, since both fields happen to involve law and neuroscience, the neurolaw moniker seems to have stuck.
Greene, Cohen, and Morse write principally about the neurolaw of responsibility. They spend much of their energy defending their substantive views about free will, though none of them purport to offer a new argument to break the free will impasse. Greene and Cohen also claim that advances in neuroscience will change the way we think about punishment, but they have yet to persuasively defend the claim. Similarly, Morse may be right that we ought to understand the law in compatibilist terms, but current law may be rooted in contrary assumptions.
While prospects for a responsibility revolution remain hard to predict, I claim that there will be a technology-driven neurolaw revolution. The law will change in many ways, and I focus on three hypotheses: (1) the differences in how the law treats emotional and physical injuries will diminish as neuroscientists develop more objective methods of identifying and assessing emotional injuries; (2) new methods of “mind reading” will lead us to have less thought privacy but more thought privacy laws; and (3) as autonomous and semiautonomous machines become more integrated into human life, they will have systematic effects on the law and its interpretation, perhaps by increasing the concretization of the law. The precise details of how technology will develop are hard to predict, but by trying to predict the path of technology, we can hope to make the law better prepared for the changes to come.
Kolber argues that there will be "More Privacy Laws but Less Privacy":
Researchers are working on a variety of technologies aimed at what can loosely be referred to as mind reading. For example, based on measurements of brain activity, researchers can make pretty good guesses about what images are shown to a subject in a brain scanner, be it a still image or even, to some extent, a video. One recent study demonstrated that subjects under fMRI can be taught to mentally spell words in a manner that can be decoded in real time by researchers, a technique that could prove especially helpful for people with locked-in syndrome or other conditions that make it difficult to communicate. Neuroscientist Jack Gallant predicts that “[w]ithin a few years, we will be able to determine someone’s natural language thoughts using fMRI-based technology.”
These new brain imaging techniques point to a future where our thoughts will not be as private as they are now. We will not read minds directly in any spooky sense, but we will continue to get better at identifying correlations between brain activity and mental activity and using brain activity to make predictions about mental activity.
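The logic behind these techniques can be made concrete with a small sketch. The following Python snippet illustrates the general correlation-and-prediction approach; it is not a reconstruction of any study Kolber cites, and the data, the face/house viewing task, and all names are invented for illustration:

```python
# Illustrative only: synthetic numbers stand in for fMRI voxel signals, and the
# face/house viewing task is hypothetical. The point is the shared logic of the
# decoding studies described above -- learn correlations between brain activity
# and mental states, then predict the state from new activity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50           # hypothetical scanning trials and voxels
states = rng.integers(0, 2, n_trials)  # 0 = viewing a face, 1 = viewing a house

# Simulate voxel activity that is only weakly correlated with the mental state.
signal = np.outer(states, rng.normal(0.5, 0.1, n_voxels))
activity = signal + rng.normal(0.0, 1.0, (n_trials, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(
    activity, states, test_size=0.3, random_state=0)

decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
predictions = decoder.predict(X_test)

# Above-chance accuracy is all this kind of "mind reading" amounts to:
# statistical prediction from correlations, not direct access to thoughts.
print(f"decoding accuracy: {accuracy_score(y_test, predictions):.2f}")
```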
Legal scholars have focused their attention on efforts to develop more accurate lie detectors. Brain-based methods of deception detection are still in the early stages. Much of the research compares the brain activity of a group of “honest” subjects with that of a group of “dishonest” subjects. More helpful research aimed at determining whether a particular person is lying is beginning to accumulate, but the testing has always been done in somewhat artificial contexts. If we put aside concerns about how well these experiments apply to real-life contexts, most published studies report using fMRI to distinguish honesty from deception at accuracies “between 70% and slightly over 90%.”
But even if we develop a lie detector that works well with the cooperative subjects that tend to participate in experiments, very little research examines the possible countermeasures people could take to fool such a device. One fMRI study was 100% accurate in detecting the lies of individual subjects, but accuracy fell to 33% when subjects used countermeasures they were trained to apply. So even though at least two companies have marketed brain-based methods of lie detection, many neuroscientists are skeptical of the current state of the technology. Indeed, two recent attempts to introduce fMRI evidence of deception in court were unsuccessful.
Nevertheless, deception detection has so many potential uses that the incentives to improve it are quite strong. Someday, the technology will at least be a useful aid in assessing credibility. When that day comes, many questions will be raised about how, if at all, the technology should be used in court. The real question we ought to ask ourselves when considering some supposed lie detector is: will we tend to get more accurate outcomes with or without it?
The answer may depend on the context. Lie detection evidence offered by prosecutors to provide evidence of guilt beyond a reasonable doubt would have to be extremely accurate, while lie detection evidence offered by a defendant to generate a reasonable doubt could be much more imperfect. Deciding whether or not brain-based lie detection will improve outcomes, however, will put us in an awkward position: we will have to compare the error rates of lie detection technology to those of our current technology, namely, the jury, and we know relatively little about how well juries assess credibility. What we do know is that people are not very good at detecting deception, and there is little correlation between people’s confidence in their ability to detect deception and their accuracy. Our entrenched preference for jury decision making is largely a result of the path of history, rather than an empirically validated conclusion about how good juries are at discerning credibility.
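The asymmetry between incriminating and exculpating uses can be made concrete with some back-of-the-envelope arithmetic. This is an illustration rather than a calculation from the article: it assumes a detector with 90% sensitivity and 90% specificity, near the top of the published fMRI accuracy range quoted above, and applies Bayes' rule to see how much a "deceptive" result actually proves.

```python
# Illustrative arithmetic with assumed figures: a detector that flags deception
# with 90% sensitivity and 90% specificity, near the top of the published fMRI
# accuracy range quoted above.

def posterior_lying(prior, sensitivity=0.90, specificity=0.90):
    """P(witness is lying | detector flags deception), by Bayes' rule."""
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

for prior in (0.2, 0.5, 0.8):
    print(f"prior {prior:.0%} -> posterior {posterior_lying(prior):.0%}")

# Even with a 50% prior, a "deceptive" flag yields only about 90% confidence,
# well short of proof beyond a reasonable doubt, while a defendant who passes
# the same imperfect test may easily generate a reasonable doubt.
```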
In an opinion in United States v. Scheffer, Justice Clarence Thomas, joined by three other Justices, wrote that a rule banning all polygraph evidence in military trials serves the legitimate government interest of preserving jurors’ “core function of making credibility determinations in criminal trials.” According to Thomas, “[a] fundamental premise of our criminal trial system is that ‘the jury is the lie detector.’” His remarks admit the possibility that even perfectly accurate lie-detection evidence could be excluded from the courtroom on the ground that it would infringe the province of the jury.
In my view, excluding accurate lie-detection information to protect the province of the jury makes a mockery of the justice system. The most important role of trials is to uncover the truth as best we can. To do so, we ought to use the best technology that cost-effectively helps us do so. There are legitimate concerns that poor quality lie detection evidence could irrationally sway jurors. They may not understand how the technology works or how to interpret known rates of error. But it would be foolish to keep some high-quality future lie detector out of the courtroom — under a blanket rule — simply because credibility determinations have traditionally been made by jurors.
Of course, even a perfectly accurate lie detector could not usurp all jury functions. Some cases do not depend on credibility assessments at all. For example, whether or not conduct was consistent with that of a reasonably prudent person cannot be determined by a lie detector alone. Moreover, when cases do depend on witness credibility, there is an important difference between honesty and truth. Honest assertions are not necessarily true. A person may believe he committed a crime that, in fact, never occurred. Similarly, a dishonest assertion can turn out to be true. A gunman may believe he fired the coup de grâce shot that ended the life of a rival gang member. Denying that he killed the rival would be dishonest, even if, unbeknownst to him, the deceased was already dead before he fired.
If direct attempts at brain-based lie detection fail, other mind reading efforts may still prove helpful. The technologies discussed in the preceding section on the experiential future can serve as indirect methods of lie detection by telling us whether a person’s pain claims are likely to be false. (In fact, pain measurement techniques could give us information that cannot be obtained from truthful subjects. Even when a person honestly reports his pain as “9” on a scale of 1 to 10, we cannot easily compare his report to those of others.)
Researchers are improving their understanding of other experiences, as well, including sexual arousal. One study examined the brain activity of male pedophiles and male non-pedophiles when shown images of nude children. Researchers used brain activity to accurately classify the pedophilia status of more than 90% of subjects. While this technique may be subject to countermeasures, it may be less so than other techniques used to classify pedophiles. Another study examined the brain activity of subjects while they viewed male and female human genitalia. Researchers could determine sexual orientation with more than 85% accuracy.
Of course, all of this work on mind reading raises privacy concerns. The pedophilia research suggests that fMRI could someday be used to assess the likelihood that a person has committed or will commit a sex crime. The research on sexual orientation could potentially bear on the distribution of assets in a divorce or the way prisoners are segregated. Other neuroscientific research may uncover conscious or unconscious racial biases. People could be scanned for one purpose, say, to see how an advertising campaign affects their brains, while they inadvertently generate information that bears on their racial biases, sexual orientation, and other sexual preferences. One group of researchers recently demonstrated that the very simple electroencephalography (EEG) sensors in certain mass-market video games can already be used to make plausible inferences about gamers’ private “information related to credit cards, PIN numbers, the persons known to the user, or the user’s area of residence,” and may enable more confident inferences as these sensors improve.
Brain imaging may even inform questions about mens rea. It might help us assess a person’s capacity to generate some mental state or bear on the credibility of a person’s statements about his past mental states. Brain imaging might even have more direct applications. For example, one group of researchers is trying to use fMRI to identify the culpable mental states described by the Model Penal Code. Imagine a border crossing where someone is transporting a suspicious container. Before opening it, we could scan the brain of the person carrying the container to see if his brain is consistent with a culpable mental state of knowledge, recklessness, or negligence with respect to its contents. (The person might have to believe he was randomly selected for screening so that the mere fact of being selected does not significantly alter his beliefs.)
Accurate mind-reading technologies would raise a host of questions. For example, when, if ever, could prosecutors use brain-based lie detectors to incriminate defendants, or defendants use them to exculpate themselves? How would police and other investigators use such tools? Could they be used by employers to make hiring and firing decisions?
Even if accurate mind-reading techniques are still decades away, we already have reason to think about their implications because of what I call the technological look-back principle. If we develop an accurate lie detector thirty years from now, you can be asked in 2044 about your conduct today in 2014. When you are in such a scanner in 2044, your spouse could ask if you have ever been unfaithful, and the police could ask if you have ever killed someone. And just as campaigning politicians often make their tax returns public even though they are under no legal obligation to do so, voters may expect politicians to go into a scanner and tell them what their intentions really are and whether or not they have ever acted corruptly.
I am not arguing that we need legislation today to prepare for all of the potential future uses of mind-reading techniques. We would have little confidence that such legislation would survive the intervening period or that it would take the appropriate form. Moreover, we often worry too much about the privacy concerns raised by new technologies in ways that unnecessarily hinder their development.
But those expecting to be alive in coming decades or who care about those who will should begin to think about the privacy implications of mind-reading technologies. Many who shed their DNA while committing crimes before DNA sequencing became common are now in prison, prosecuted with evidence they never imagined could be used against them. Our memories may become the evidence that embarrasses or incriminates us in the future.
I offer two general predictions about how our rights to privacy may change in a world with better mind-reading technology. First, as the preceding suggests, we will have less mental privacy as advances in neuroscience make it easier to infer thoughts and thought patterns. We strike a balance between the societal value of making information public and the value to a person or group of people of keeping it private. These costs and benefits push and pull each other to reach a certain equilibrium. Neuroscience will reduce the costs of obtaining otherwise private information and will likely enable access to information that would otherwise be unavailable. Given that societal demand for information is likely to stay the same or increase, the equilibrium is likely to shift toward more information gathering.
In the days before the Internet, one could hire a private investigator to learn about people’s occupations, family members, and various likes and dislikes. Today, such information is frequently easy to obtain. Indeed, many people publicize it themselves on social networking sites. Even when people try to keep their own information private, their associates still generate information about them. As technology makes information easier to obtain, it becomes harder to keep private.
Second, I speculate that we will craft more laws to protect thought privacy. Right now, there is little we can do to penetrate the thoughts of people who prefer to keep them secret. Only when we have plausible methods of doing so will we fully see the need to create laws to protect thought privacy. For example, as polygraphs became more reliable and widespread, Congress passed the Employee Polygraph Protection Act of 1988 to prohibit most private employers from subjecting employees to polygraphs and other forms of lie detection. And just as we have seen an onslaught of laws to protect electronic privacy, we will see new laws directed at protecting the privacy of our thoughts. Laws addressing rights to read minds or to be free of mind reading will grow more prevalent, complex, and controversial in a world with more accurate neurotechnology. Hence, we will have more law protecting thought privacy but less actual thought privacy.