'Privacy versus Security' by Derek Bambauer, (2013) 103(3)
Journal of Criminal Law and Criminology 667–683, argues that
Legal scholarship tends to conflate privacy and security. However, security and privacy can, and should, be treated as distinct concerns. Privacy discourse involves difficult normative decisions about competing claims to legitimate access to, use of, and alteration of information. It is about selecting among different philosophies and choosing how various rights and entitlements ought to be ordered. Security implements those choices—it mediates between information and privacy selections. This Article argues that separating privacy from security has important practical consequences. Security failings should be penalized more readily and more heavily than privacy ones, both because there are no competing moral claims to resolve and because security flaws make all parties worse off. Currently, security flaws are penalized too rarely, and privacy ones too readily. The Article closes with a set of policy questions highlighted by the privacy-versus-security distinction that deserve further research.
Bambauer notes that
Acxiom is one of the world’s foremost data mining companies. The company’s databases contain information on over half a billion consumers, with an average of 1,500 transactions or data points per consumer. It processes one billion such records each day. Each consumer receives a unique numeric identifier, allowing Acxiom to track and classify them by location, credit card usage history, and even interests. Acxiom earns over a billion dollars annually by selling this data to companies that want to market their wares more effectively. If Big Data has an epicenter, it is likely located in Conway, Arkansas, where Acxiom’s server farm can be found.
Even giants make mistakes. In February 2003, Acxiom provided a defense contractor with the Social Security numbers of passengers who flew on JetBlue flights. The contractor used one of those Social Security numbers in a PowerPoint presentation, and that passenger’s information quickly became public. The disclosure led to intense criticism of the company and to a complaint to the Federal Trade Commission.
And, in 2002 and 2003, hackers penetrated Acxiom’s computers, accessing records on millions of American consumers. Acxiom failed to detect the breaches; rather, the attacks were noticed first by local law enforcement and then by the Federal Bureau of Investigation (FBI). Indeed, in the 2003 case, Acxiom had no idea its systems had been compromised until a Cincinnati sheriff turned up compact discs filled with the company’s records while searching the home of a systems administrator for a marketing firm. It was only while the FBI was investigating the case that agents stumbled upon a second group of hackers who had broken into Acxiom’s server three times the prior year. The Cincinnati systems administrator captured the sensitive data while it was being transferred via File Transfer Protocol (FTP), without encryption, from a server outside Acxiom’s firewall—the equivalent, in security terms, of writing it on a postcard sent through regular mail.
Thus, Acxiom exposed sensitive consumer data three times—once through a deliberate choice and twice through incompetence. Privacy advocates were outraged in each instance. This Article argues, though, that these cases (the disclosure and the hacks) should be treated differently. The disclosure is a privacy problem, and the hacks are a security problem. While legal scholars tend to conflate privacy and security, they are distinct concerns. Privacy establishes a normative framework for deciding who should legitimately have the capability to access and alter information. Security implements those choices. A counterintuitive consequence of this distinction is that law should punish security failures more readily and harshly than privacy ones. Incompetence is worse than malice.
Security, in contrast to privacy, is the set of technological mechanisms (including, at times, physical ones) that mediates requests for access or control. If someone wants access to your online banking site, they need your username, password, and personal identification number (your credentials). The security of your online banking is determined by the software on the bank’s server and by who knows your credentials. If someone wants access to your paper health records, they need physical access to your physician’s file room. The security of your health records is determined by the physical configuration of the office and by who holds a copy of the key to it. As a privacy matter, you might want only your doctor and her medical staff to have access to your records. As a security matter, the office’s cleaning staff might have a key that lets them into the file room.
The differences between privacy and security matter. Security defines which privacy choices can be implemented. For example, if your entire electronic medical record is secured by a single mechanism (such as a password), it is not possible to enforce selective access, so that your dermatologist can see information about your sunscreen use but not about your antidepressant use. And privacy dictates how security’s options should be implemented, the circumstances under which they are appropriate, and the directions in which they ought to develop.
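The granularity point above can be made concrete with a short sketch. The code below is purely illustrative (the role names, record fields, and access-control list are invented for this example, not drawn from the Article): a single credential guarding an entire record yields all-or-nothing access, while a per-category permission table lets a privacy policy distinguish what a dermatologist may see from what a psychiatrist may see.

```python
# Illustrative sketch: why a single credential cannot enforce selective
# access to a record, while per-category permissions can.
# All names below (roles, fields, password) are hypothetical.

def single_lock_access(record: dict, password: str, correct: str) -> dict:
    """All-or-nothing security: one password guards the whole record."""
    return dict(record) if password == correct else {}

# Granular security: an access-control list mapping each reader role
# to the record categories it is permitted to see.
ACL = {
    "dermatologist": {"sunscreen_use"},
    "psychiatrist": {"antidepressant_use"},
    "primary_care": {"sunscreen_use", "antidepressant_use"},
}

def granular_access(record: dict, role: str) -> dict:
    """Return only the categories the given role may read."""
    allowed = ACL.get(role, set())
    return {field: value for field, value in record.items() if field in allowed}

record = {"sunscreen_use": "daily SPF 30", "antidepressant_use": "sertraline"}

# With one lock, anyone holding the password sees everything:
print(single_lock_access(record, "hunter2", "hunter2"))

# With per-category permissions, the privacy choice ("dermatologists see
# sunscreen data but not antidepressant data") becomes enforceable:
print(granular_access(record, "dermatologist"))
```

The design choice mirrors the Article's claim: the privacy decision is the contents of the ACL (who *should* see what); the security mechanism is the code that enforces it (who *can* see what).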
Distinguishing between privacy and security is unusual in legal scholarship. Most academics and advocates treat the two concerns as interchangeable or as inextricably intertwined. Jon Mills, for example, treats encryption and authentication—classic security technologies—as methods of protecting privacy. For Mills, any “disclosure without consent gives rise to privacy concerns.” Similarly, Viktor Mayer-Schönberger takes up the possibilities of digital rights management (DRM) technology as a privacy solution. Mayer-Schönberger contemplates using the locks and keys of DRM as a mechanism to implement restrictions on who can access personal information. Yet the difficulties he rightly recognizes in his proposal, such as comprehensiveness, resistance to circumvention, and granularity, are those of security, not privacy. DRM is not privacy at all: it is security. Placing it in the wrong category causes nearly insurmountable conceptual difficulties. In assessing privacy protections on social networking services, such as Facebook and Orkut, Ruben Rodrigues focuses on privacy controls (which enable users to limit access to information), and distinguishes data security mechanisms (which protect users from inadvertent breaches or deliberate hacks). Yet both, in fact, are aspects of security, not privacy. Here, too, the wrong classification creates problems. Rodrigues grapples with problems of access by third-party programs, which could be malware or a competitor’s migration tool; user practices of sharing login information; and authentication standards. Each issue is made clearer when realigned as a security matter.
While some privacy scholarship has recognized the privacy–security distinction, it has done so only murkily, and the distinction has not yet been explored rigorously or systematically. For example, Charles Sykes treats cryptography as conferring privacy, but then later quotes cypherpunk Eric Hughes, who writes, “Privacy in an open society requires cryptography. If I say something, I want it heard only by those for whom I intend it.” This correctly recognizes that privacy and security (as implemented through cryptography) are different, though complementary. Ira Rubenstein, Ronald Lee, and Paul Schwartz seem implicitly to understand the distinction, though they do not leverage it, in their analysis of privacy-enhancing technologies. Thus, in assessing why users have not embraced anonymization tools, they concentrate principally on security risks, such as the possibility of attacks against these tools or of drawing attention from government surveillance. Peter Swire and Lauren Steinfeld formally treat security and privacy separately, but conflate the roles of the two concepts. For example, Swire and Steinfeld discuss the Health Insurance Portability and Accountability Act’s (HIPAA) Privacy Rule but lump in security considerations. And Paul Schwartz and Ted Janger see analogous functioning by information privacy norms, which “insulate personal data from different kinds of observation by different parties.” That is exactly what security does, but unlike norms, security restrictions have real bite. Norms can be violated; security must be hacked. Rudeness is far easier to accomplish than decryption.
The one privacy scholar who comes closest to recognizing the distinction between security and privacy is Daniel Solove. In his article on identity theft, Solove analyzes the interaction (along the lines of work by Joel Reidenberg and Larry Lessig exploring how code can operate as law) between architecture and privacy. Solove’s view of architecture is a holistic one, incorporating analysis of physical architecture, code, communications media, information flow, and law. Solove assesses the way architecture shapes privacy. This is similar to, but distinct from, this Article’s argument, which is that security implements privacy. Moreover, the security concept is less holistic: it assesses precautions against a determined attacker, one unlikely to be swayed by social norms or even the threat of ex post punishment.
Finally, Helen Nissenbaum’s recent work is instructive about the differences between these two concepts, although it is not a distinction she draws directly. She argues that standard theories of privacy devolve, both descriptively and normatively, into focusing upon either constraints upon access to, or forms of control over, personal information. This encapsulation points out the problems inherent in failing to recognize how privacy differs from security. An individual may put forth a set of claims about who should be able to access her personal information or what level of control she should have over it. Those claims describe a desired end state—the world as she wants it to be regarding privacy. However, those claims are unrelated to who can access her personal information or what level of control she has over it at present. More important, those normative claims are unrelated to overall access and control, not only now, but into the future, and perhaps in the past. A given state of privacy may be desirable even if it is not achievable.