In his insightful article, "The Dangers of Surveillance," 126 Harvard Law Review 1934 (2013), Neil Richards offers a framework for evaluating the implications of government surveillance programs that is centered on protecting "intellectual privacy." Although we share his interest in recognizing and protecting privacy as a condition of personal and intellectual development, we worry in this essay that, as an organizing principle for policy, "intellectual privacy" is too narrow and politically fraught. Drawing on other work, we therefore recommend that judges, legislators, and executives focus instead on limiting the potential of surveillance technologies to effect programs of broad and indiscriminate surveillance. ...
Although we live in a world of total surveillance, we need not accept its dangers, at least not without a fight. As Richards rightly warns, unconstrained surveillance can be profoundly harmful to intellectual privacy. It would be wrong, however, to conflate symptom and cure. What is most concerning for us is the rapid adoption of technologies that increasingly facilitate persistent, continuous, and indiscriminate monitoring of our daily lives. Although harms to intellectual privacy are certainly central to our understanding of the interests at stake, it is this specter of a surveillance state that we think ought to be the center of judicial, legislative, and administrative solutions, not the particular intellectual privacy interests of individuals.
Citron and Gray write:
The ethos of our age is “the more data, the better.” In nearly every sector of our society, information technologies identify, track, analyze, and classify individuals by collecting and aggregating data. Law enforcement agencies, industry, employers, hospitals, transportation providers, Silicon Valley, and individuals are all engaged in the pervasive collection and analysis of data that ranges from the mundane to the deeply personal. Rather than being silos, these data gathering and surveillance systems are linked, shared, and integrated. Whether referred to as coveillance, sousveillance, bureaucratic surveillance, “surveillance-industrial complex,” “panvasive searches,” or business intelligence, total-information awareness is the objective. ...
The scope of surveillance capacities continues to grow. Fusion centers and projects like Virtual Alabama may already have access to broadband providers’ deep packet inspection (DPI) technologies, which store and examine consumers’ online activities and communications. This would provide government and private collaborators with a window into online activities, which could then be exploited using data-mining and statistical-analysis tools capable of revealing more about us and our lives than we are willing to share with even intimate family members. More unsettling still is the potential combination of surveillance technologies with neuroanalytics to reveal, predict, and manipulate instinctual behavioral patterns of which we are not even aware.
There can be no doubt that advanced surveillance technologies such as these raise serious privacy concerns. In his article, Professor Neil Richards offers a framework to “explain why and when surveillance is particularly dangerous and when it is not.” Richards contends that surveillance of intellectual activities is particularly harmful because it can undermine intellectual experimentation, which the First Amendment places at the heart of political freedom. Richards also raises concerns about governmental surveillance of benign activities because it gives undue power to governmental actors to unfairly classify, abuse, and manipulate those who are being watched; but it is clear that his driving concern is with intellectual privacy. We think that this focus is too narrow.
According to Richards, due to intellectual records’ relationship to First Amendment values, “surveillance of intellectual records — Internet search histories, email, web traffic, or telephone communications — is particularly harmful.” Richards argues that governmental surveillance seeking access to intellectual records should therefore be subjected to a high threshold of demonstrated need and suspicion before it is allowed by law. He also argues that individuals ought to be able to challenge in court “surveillance of intellectual activities.” Richards further proposes that “a reasonable fear of government surveillance that affects the subject’s intellectual activities (reading, thinking, and communicating) should be recognized as a harm sufficient to prove an injury in fact under standing doctrine.” ... Although Richards aptly captures the dangers to intellectual freedom posed by technologically enhanced surveillance, we fear his policy prescriptions are both too narrow and too broad because they focus on “intellectual activities” as a necessary trigger and metric for judicial scrutiny of surveillance technologies. Our concerns run parallel to arguments we have made elsewhere against the so-called “mosaic theory” of quantitative privacy advanced by the D.C. Circuit and four Justices of the Supreme Court in United States v. Jones. Our argument there supports our objection here: by focusing too much on what information is gathered rather than how it is gathered, efforts to protect reasonable expectations of privacy threatened by new and developing surveillance technologies will disserve the legitimate interests of both information aggregators and their subjects.
One reason we are troubled by Richards’s focus on “intellectual activities” as the primary trigger for regulating surveillance technology is that it dooms us to contests over which kinds of conduct, experiences, and spaces implicate intellectual engagement and which do not. Is someone’s participation in a message board devoted to video games sufficiently intellectual to warrant protection? What about a telephone company’s records showing that someone made twenty phone calls in ten minutes’ time to a particular number without anyone picking up? Would we consider the route someone took going to the library an intellectual activity? Is it the form of the activity or what is being accomplished that matters most?
Setting aside obvious practical concerns, the process of determining which things are intellectual necessarily raises the specter of oppression. Courts and legislators would be required to select among competing conceptions of the good life, marking some “intellectual” activities as worthy of protection, while denying that protection to other “non-intellectual” activities. Inevitable contests over the content and scope of “intellectual privacy” will be, by their nature, subject to the whims and emergencies of the hour. In the face of terrorist threats, decisionmakers will surely promote a narrow definition of “intellectual privacy,” one that is capable of licensing programs like Virtual Alabama and fusion centers. Historically, decisionmakers have limited civil liberties in times of crisis and reversed course in times of peace, but the post-9/11 period shows no sign of the pendulum’s swinging back. Given the nature of political and judicial decisionmaking in our state of perpetually heightened security, protection, even of “intellectual privacy,” is most likely to be denied to the very outsiders, fringe thinkers, and social experimenters whom Richards is most concerned with protecting.