"Rethinking Privacy" by William H. Simon
Boston Review, 20 October 2014
Anxiety about surveillance and data mining has led many to embrace implausibly expansive and rigid conceptions of privacy. The premises of some current privacy arguments do not fit well with the broader political commitments of those who make them. In particular, liberals seem to have lost touch with the reservations about privacy expressed in the social criticism of some decades ago. They seem unable to imagine that preoccupation with privacy might amount to a “pursuit of loneliness” or how “eyes on the street” might have reassuring connotations. Without denying the importance of the effort to define and secure privacy values, I want to catalogue and push back against some key rhetorical tropes that distort current discussion and practice.
One problem is that privacy defenses often imply a degree of pessimism about the state that is inconsistent with the robust regulatory and social-welfare roles many of the same defenders favor for government. Another is a sentimental disposition toward past convention that obscures the potential contributions of new technologies to both order and justice. And a third is a narrow conception of personality that exalts extreme individual control over information at the expense of sharing and sociability.
Paranoia
In urban areas, most people’s activity outdoors and in the common spaces of buildings is recorded most of the time. Surveillance cameras are everywhere. When people move around, their paths are registered on building access cards or subway fare cards or automobile toll devices. Their telephone and email communications, Internet searches, and movements are tracked by telephone companies and other intermediaries. All their credit card transactions—which, for many people, means nearly all of their transactions—are documented by time, place, and substance. The health system extracts and records detailed information about their psychic and bodily functions. Anyone arrested, and many who fear arrest, typically surrenders a variety of personal information to the criminal justice system and often must submit to ongoing monitoring. Even within the home, water and energy consumption are monitored, and some people choose to install cameras to monitor children or protect against burglars.
To many people, this society looks like the panopticon—a prison built as a ring of cells around a central observation tower, so that inmates can be watched at any moment by an unseen authority. Jeremy Bentham originated the panopticon idea as a low-cost form of subjugation for convicted criminals. Michel Foucault adopted it as a metaphor for what he regarded as the insidiously pervasive forms of social control in contemporary society. For him, schools, hospitals, workplaces, and government agencies all engaged in repressive forms of surveillance analogous to the panopticon.
In the United States, the paranoid political style has traditionally been associated with the right and the less educated. But Foucault helped make it attractive to liberal intellectuals. His contribution was largely a matter of style. Foucault was the most moralistic of social theorists, but he purported to disdain morality (“normativity”) and refused to acknowledge, much less defend, the moral implications of his arguments. He gave intellectual respectability to the three principal tropes of the paranoid style.
First, there is the idea of guilt by association. The resemblance between some feature of a strikingly cruel or crackpot regime of the past or in fiction—especially in Nineteen Eighty-Four—and a more ambiguous contemporary one is emphasized in order to condemn the latter. Thus, the elaborate individualized calibration of tortures in eighteenth- and nineteenth-century penology is used to make us feel uncomfortable about the graduated responses to noncompliance in contemporary drug treatment courts. George Orwell’s image of television cameras transmitting images from inside the home to the political police is used to induce anxiety about devices that monitor electricity usage so that the hot water tank will re-heat during off-peak hours.
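Since the water heater does the rhetorical work here, it is worth seeing how little such a device actually needs to know. Below is a minimal sketch in Python; the off-peak window, the names, and the temperature setpoint are assumptions for illustration, not any utility's actual program:

```python
def is_off_peak(hour: int) -> bool:
    """Assumed off-peak window: 11 p.m. to 7 a.m."""
    return hour >= 23 or hour < 7

def should_reheat(hour: int, tank_temp_f: float, setpoint_f: float = 130.0) -> bool:
    """Re-heat only when the tank is cool and electricity is cheap.

    The controller consults the clock and the tank thermometer,
    nothing about who is home or what they are doing.
    """
    return tank_temp_f < setpoint_f and is_off_peak(hour)

# At 2 a.m. with a 110°F tank, the heater runs; at 6 p.m. it waits.
assert should_reheat(2, 110.0) is True
assert should_reheat(18, 110.0) is False
```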
The second trope of the paranoid style is the portrayal of virtually all tacit social pressure as insidious. What people experience as voluntary choice is substantially conditioned by unconscious internalized dispositions to conform to norms, and a key mechanism of such conformity is the actual, imagined, or anticipated gaze of others. Almost everyone who thinks about it recognizes that such pressures are potentially benign, but people differ in their rhetorical predispositions toward them. The individualist streak in American culture tends to exalt individual choice in a way that makes social influence suspect.
Foucault disdained individualism, but he introduced a conception of power that was so vague and sinister that it could be applied to make almost any social force seem creepy. When Neil Richards writes in the Harvard Law Review that surveillance “affects the power dynamic between the watcher and the watched, giving the watcher greater power to influence or direct the subject of surveillance,” he is channeling Foucault. So is Julie Cohen, when she writes in the Stanford Law Review: “Pervasive monitoring of every first move or false start will, at the margin, incline choices toward the bland and the mainstream.”
We have come a far cry from Jane Jacobs’s idea of “eyes on the street” as the critical foundation of urban vibrancy. For Jacobs, the experience of being observed by diverse strangers induces not anxiety or timidity but an empowering sense of security and stimulation. It makes people willing to go out into new situations and to experiment with new behaviors. Eyes-on-the-street implies a tacit social pact that people will intervene to protect each other’s safety but that they will refrain from judging their peers’ non-dangerous behavior. Electronic surveillance is not precisely the same thing as Jacobs’s eyes-on-the-street, but it does offer the combination of potentially benign intervention and the absence of censorious judgment that Jacobs saw as conducive to autonomy.
The third trope of the paranoid style is the slippery slope argument. The idea is that an innocuous step in a feared direction will inexorably lead to further steps that end in catastrophe. As The Music Man (1962) puts it in explaining why a pool table will lead to moral collapse in River City, Iowa: “medicinal wine from a teaspoon, then beer from a bottle.” In this spirit, Daniel Solove in Nothing to Hide (2011) explains why broad surveillance is a threat even when limited to detection of unlawful activity. First, surveillance will sometimes lead to mistaken conclusions that harm innocent people. Second, since “everyone violates the law sometimes” (think of moderate speeding on the highway), surveillance will lead to over-enforcement of low-stakes laws, presumably by lowering the costs of enforcement, or to the use of threatened enforcement of minor misconduct to force people to give up rights, as when police threaten to bring unrelated charges in order to induce a witness or co-conspirator to cooperate in the prosecution of another. And finally, even if we authorize broad surveillance for legitimate purposes, officials will use the authorization as an excuse to extend their activities in illegitimate ways.
Yet slippery slope arguments can be made against virtually any kind of law enforcement. Most law enforcement infringes privacy. (“Murder is the most private act a man can commit,” William Faulkner wrote.) And most law enforcement powers have the potential for abuse. What we can reasonably ask is, first, that the practices be calibrated effectively to identify wrongdoers; second, that the burden they put on law-abiding people be fairly distributed; and third, that officials be accountable for the lawfulness of their conduct both in designing and in implementing the practices.
The capacity of broad-based electronic surveillance—the sort that collects data on large or indeterminate numbers of people who are not identified in advance—to satisfy these conditions is in some respects higher than that of the more targeted and reactive approaches that privacy advocates prefer. Such approaches rely heavily on personal observation by police and witnesses, reports by informants of self-inculpatory statements by suspects, and confessions. But these strategies have their shortcomings. Scholars in recent years have emphasized the fallibility of human memory and observation. Witness reports of conduct by strangers are often mistaken and influenced by investigators. Those who report self-inculpatory statements often have dubious motivations, and, with surprising frequency, even confessions prove unreliable.
Inferences from broad-based electronic surveillance are not infallible, but they are often more reliable than reports of personal observation, and they can be less intrusive. Computers programmed to identify and photograph red-light violations make much more reliable determinations of the violation than a police officer relying on his own observation. And they are less intrusive: the camera can be set to record only when there’s a violation, whereas a police officer would observe and remember much more. Yet many civil libertarians, including some ACLU affiliates, oppose them. One of their key arguments is that the systems generate tickets in many situations where the driver might have had an excuse for not stopping in time that would have persuaded a police officer to dismiss the violation. (The case for an excuse can still be made in court, but for many drivers a court appearance would cost more than the ticket.) The argument is not frivolous, but it is a curiosity typical of this field that people concerned about the abuse of state power often oppose new technology in favor of procedures that give officials more discretion.
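The design point about recording only violations can be made concrete. Here is a minimal sketch of the gating logic such a system might use; the sensor fields and names are hypothetical, not a description of any deployed product:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IntersectionFrame:
    """Hypothetical sensor snapshot at a monitored intersection."""
    light_is_red: bool
    crossed_stop_line: bool
    image: bytes  # held briefly in a volatile buffer

def process(frame: IntersectionFrame) -> Optional[bytes]:
    """Retain the image only when a violation is detected.

    Unlike an officer, who sees and remembers everything at the
    intersection, this function discards every non-violating frame.
    """
    if frame.light_is_red and frame.crossed_stop_line:
        return frame.image  # retained as evidence
    return None  # nothing recorded, nothing remembered

# A law-abiding pass through a green light leaves no trace.
assert process(IntersectionFrame(False, True, b"jpeg-bytes")) is None
```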
Broad-based surveillance distributes its burdens widely, which may be fairer. ...
Simon states:
The substantive conception to which the advocates are most drawn is the notion of a right to control information about one’s self. James Whitman argues in the Yale Law Journal that this conception evolved through the democratization of aristocratic values. The aristocrat’s sense of self-worth and dignity depended on respect from peers and deference from subordinates, and both were a function of his public image. Image was thus treated as a kind of personal property. Whitman says this view continues to influence the European middle class in the age of equal citizenship. As the ideal was democratized, it came to be seen as a foundation for self-expression and individual development.
European law evolved to express this cultural change. Whitman showed that the idea of a right to control one’s public image underlies French and German privacy law, and it appears to animate European Union privacy law, which advocates admire for its stronger protections than those of U.S. law. For example, French and German law impose stricter limits on credit reporting and the use of consumer data than U.S. law. The EU directive mandates that individuals be given notice of the data collection practices of those with whom they deal and rights to correct erroneous data about them. More controversially, a proposed revision prohibits decisions based “solely on automatic data processing” for various purposes, including employment and credit. By contrast, U.S. law tends to be less protective and less general. Its privacy law tends to be sector-based, with distinctive regulations for health care, education, law enforcement, and other fields.
Whitman associates the weaker influence of the idea of personal-image control in the United States with the stronger influence here of competing libertarian notions that broadly protect speech and publication. Expansive notions of privacy require a more active state to enforce them. This was recently illustrated by a decision of the EU Court of Justice holding that the “right to be forgotten” may require a search engine to remove links to true but “no longer relevant” information about the plaintiff’s default on a debt. The prospect of courts reviewing Internet data to determine when personal information is “no longer relevant” highlights the potential conflict between privacy and other civil rights.
But reservations about the broad conception of dignity Whitman describes go deeper. There is a powerful moral objection to it grounded in ideals of sociability. Even in Europe, during the period in which the ideal was democratized, there was a prominent critique of it. A character in a nineteenth-century English novel preoccupied with controlling his public image is likely to be a charlatan or a loser. Not for nothing is Sherlock Holmes the most prominent hero in the canon. His talents are devoted to invading the privacy of those who would use their image-management rights to exploit others. And as he teaches that the façade of self-presentation can be penetrated by observation and analysis of such matters as frayed cuffs, scratches on a watch, or a halting gait, he sets up as a competing value the capacity to know and deal with people on our terms as well as theirs.
He goes on to argue that privacy advocates
object most strongly to data collection designed to yield specific conclusions about the individual, but their objections persist even when anonymized data is used to assess general patterns. Since anonymization is never perfectly secure, it exposes people to risk. Moreover, the privacy norm sometimes shades into a property norm. It turns out that some people carry around economically valuable information in their bodies—for example, the DNA code for an enzyme with therapeutic potential—and that information about everyone’s conduct and physical condition can, when aggregated, be sold for substantial sums. For some, the extraction of such information without consent looks like expropriation of property. They would like to see explicit extension of property rights to require consent and compensation for use of personal information. In Who Owns the Future? (2014) Jaron Lanier develops this line of thought, suggesting that we create institutions that enable individuals to monetize their personal data—individual accounts would be credited every time a piece of data is used.
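Lanier's accounting idea can be put in code to show what it would require. A toy sketch follows; the ledger class, the per-use royalty, and the provenance map are illustrative assumptions, not Lanier's specification:

```python
from collections import defaultdict

class DataRoyaltyLedger:
    """Toy version of the monetization proposal: every use of a piece
    of personal data credits its originator's account."""

    def __init__(self, royalty_per_use: float = 0.001):
        self.royalty = royalty_per_use
        self.accounts = defaultdict(float)  # person -> balance
        self.provenance = {}                # data item -> person

    def register(self, item_id: str, owner: str) -> None:
        """Record who originated a piece of data."""
        self.provenance[item_id] = owner

    def record_use(self, item_id: str) -> None:
        """Credit the originator each time the data is used."""
        owner = self.provenance.get(item_id)
        if owner is not None:
            self.accounts[owner] += self.royalty

# Example: a search query used twice credits its author twice.
ledger = DataRoyaltyLedger()
ledger.register("query-123", "alice")
ledger.record_use("query-123")
ledger.record_use("query-123")
assert abs(ledger.accounts["alice"] - 0.002) < 1e-12
```

Even this toy version makes the hard questions visible: who sets the royalty, who maintains the provenance map, and who audits the record of uses.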
In addressing such issues, a lot depends on how we understand consent. Consent can mean clicking on an “I agree to the terms” button that refers to a mass of small-print boilerplate that hardly anyone can be expected to read. Or it may mean simply the failure to find and click on the button that says “I refuse consent.” The advocates want something more demanding. Moreover, they don’t want the cost of the decision to be too high. If insisting on privacy means exclusion from Google’s search tool or Amazon’s retail service, many proponents would view that as unfair. If Google or Amazon charged a price for not mining your data, many would call it extortion—like asking someone to pay in order not to be assaulted. So the idea of “consent” touches on deep and unresolved issues of entitlement to information.
Such issues have arisen in connection with employer-sponsored wellness programs that encourage employees to get checkups that include a “health risk assessment” designed to generate prophylactic advice. At Pennsylvania State University such a program recently provoked a wave of privacy protests, apparently directed at parts of a questionnaire that addressed marital and job-related problems, among other things. The protesters also objected that the questionnaires would be analyzed by an outside consultant, even though the information would be subject to the confidentiality provisions of the federal Health Insurance Portability and Accountability Act. The university allowed employees to refuse to participate, subject to a $100-per-month surcharge.
No doubt such programs may be unnecessarily intrusive and may not safeguard information adequately, but the objections made in this case do not appear to have depended on such concerns. The $100 surcharge was based on an estimate of the average additional health costs attributable to refusal to participate. The premise of the protests seems to have been that the interest in not disclosing this information even under substantial safeguards is important enough that those who disclose should be asked to subsidize those who do not. ...
The reciprocity theme occasionally surfaces in privacy discussion. Lanier’s proposal to monetize data arises from a sense of injustice about the relative rewards to, on the one hand, data-mining entrepreneurs and high-tech knowledge workers, and on the other, the masses of people whose principal material endowment may be their control over their own personal information. In the health sector, doctors have been caught trying to derive patent rights from information embedded in their patients’ DNA without informing the patients.
But privacy advocates rarely acknowledge the possibility that average reciprocity of advantage will obviate over time the need for individual compensation in some areas. Might it be the case, as with airplanes and zoning laws, that people will do better if individual data (anonymized where appropriate) is made freely available except where risks to individuals are unreasonably high or gains or losses are detectably concentrated? There will always be a risk that some data will be disclosed in harmful ways, such as when personal data leaks out because of ineffective anonymization. However, the key question is whether we will make a social judgment about what level of risk is reasonable or whether we shall accord property rights that allow each individual to make her own risk calculus with respect to her own data.
The latter approach would likely preclude valuable practices in ways analogous to what would happen if airlines had to get owners’ consent for passing over private property. Moreover, strengthening rights in personal data could exacerbate, rather than mitigate, distributive fairness concerns. While it is surely unfair for doctors to earn large capital gains from DNA extracted without consent, wouldn’t it also be unfair (admittedly in a lower key) for FreedomBox users to benefit from the Centers for Disease Control’s mining of Google searches for new viruses while denying access to their own Internet searches?
The strong privacy position has disturbing implications for medical research. In the past, medicine has strongly separated research from treatment. Research is paradigmatically associated with randomized controlled clinical trials. Treatment experience has been considered less useful to research because treatment records do not describe the condition of the patient or the nature of the intervention with enough specificity to permit rigorous comparisons. But information technology is removing this limitation, and, as the capacity to analyze treatment information rigorously increases, the quality of research could improve even as its cost falls.
However, this development is in some tension with expansive conceptions of privacy. A prominent group of bioethicists led by Ruth Faden of Johns Hopkins has recently argued that the emerging “learning health care system” will require a moral framework that “depart[s] in important respects from contemporary conceptions of clinical and research ethics.” A key component of the framework is a newly recognized obligation on the part of patients to contribute to medical research. The obligation involves a duty to permit disclosure and use of anonymized treatment data for research purposes and perhaps also to undergo some unburdensome and non-invasive examination and testing required for research but not for individual treatment. (Anonymization is unlikely to be effective with data made generally available online, but regimes involving selective and monitored disclosure have proven reliable.) The group justifies its proposal in terms of reciprocity values. Since everyone has a good prospect of benefiting from research, refusing to contribute to it is unfair free riding.
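The parenthetical contrast above between open release and selective, monitored disclosure describes a concrete regime. Here is a minimal sketch of one version of it, keyed pseudonymization plus an audit log; the key handling, field names, and logging details are assumptions, not a description of any actual health-data system:

```python
import datetime
import hashlib
import hmac

SECRET_KEY = b"held-by-the-data-custodian-only"  # assumed: never shared

def pseudonymize(patient_id: str) -> str:
    """Keyed hash: stable enough to link a patient's records together,
    but not reversible without the custodian's secret key."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

audit_log = []

def disclose(records, researcher: str, purpose: str):
    """Release pseudonymized records and log who received what, and why."""
    released = []
    for r in records:
        out = {k: v for k, v in r.items() if k != "patient_id"}
        out["pseudonym"] = pseudonymize(r["patient_id"])
        released.append(out)
    audit_log.append({
        "researcher": researcher,
        "purpose": purpose,
        "count": len(released),
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return released

# Example: two treatment records go out under pseudonyms; the log records it.
data = [{"patient_id": "p1", "a1c": 6.9}, {"patient_id": "p2", "a1c": 8.2}]
shared = disclose(data, researcher="lab-42", purpose="diabetes outcomes study")
assert "patient_id" not in shared[0] and len(audit_log) == 1
```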
Of course, the reciprocity idea assumes that researchers will make the fruits of the research derived from patient information freely available. People would be reluctant to agree to make a gift of their information if researchers could use it to make themselves rich. Effective constraints on such conduct should be feasible. Much medical research, including much of the highest-value research, has been and continues to be done by salaried employees of charitable corporations.
Applied in this context, Lanier’s proposal to monetize individual data looks unattractive. There is a danger that a lot of valuable information would be withheld or that the costs of negotiating for it would divert a lot of resources from research and treatment. It is not clear what the resulting redistributive effects would be. Perhaps they would approximate a lottery in which the only winners would be a small number of people with little in common except that they happened to possess personal information that had high research value at the moment. At a point where we do not know who the winners will be, we would all be better off giving up our chances for a big payoff in return for assurance that we will have free access to valuable information. We can do this by treating the information as part of a common pool.