The current regulatory approach for protecting privacy involves what I refer to as the “privacy self-management model” – the law provides people with a set of rights to enable them to decide for themselves how to weigh the costs and benefits of the collection, use, or disclosure of their information. People’s consent legitimizes nearly any form of collection, use, and disclosure of personal data.
Although the privacy self-management model is certainly a laudable and necessary component of any regulatory regime, I contend in this essay that it is being asked to do work beyond its capabilities. Privacy self-management does not provide meaningful control. Empirical and social science research has undermined key assumptions about how people make decisions regarding their data, assumptions that underpin and legitimize the privacy self-management model.
Moreover, even if individuals were well-informed and rational, they still cannot appropriately self-manage their privacy due to a series of problems. For example, the problem of scale involves the fact that there are too many companies collecting and using data for a person to be able to manage privacy with every one. The problem of aggregation involves the fact that privacy harms often consist of an aggregation of disparate pieces of data, and there is no way for people to assess whether revealing any piece of information will sometime later on, when combined with other data, reveal something sensitive or cause harm. The essay also discusses a number of other problems.
In order to advance, privacy law and policy must confront a complex and confounding paradox with consent. Consent to collection, use, and disclosure of personal data is often not meaningful, and the most apparent solution – paternalistic measures – even more directly denies people the freedom to make consensual choices about their data. No matter which direction the law takes, consent will be limited, and a way out of this dilemma remains elusive.
In order for privacy regulation to make headway, the law needs a better and more coherent approach to consent with regard to privacy. Currently, the law has not sufficiently grappled with the social science literature that has been teaching us about the complexities and challenges involved in human decisionmaking. The law’s current view of consent is inconsistent, and the law treats consent as a simple binary (i.e., it either exists or it does not). Consent is far more nuanced, and privacy law needs a new approach that better accounts for this nuance without becoming too complex to be workable.
Any way forward will require the law to make difficult substantive decisions. Privacy self-governance attempts to remain neutral about the merits of particular forms of data collection, use, or disclosure and looks merely to whether there is consent. Under the privacy self-governance model, most forms of data collection, use, or disclosure are acceptable if consensual. If the law is to move forward, this kind of neutrality cannot be sustained.
The law should codify basic privacy norms. Such codification need not be overly paternalistic – it can take a form like that of the Uniform Commercial Code, where certain default rules can be waived. The norms of the UCC have become well-entrenched and oft-followed, and deviations from them are quite salient. Privacy law has thus far said far too little about the appropriate forms of collection, use, and disclosure of data. I am not suggesting a paternalistic regime in which the law rigidly prohibits; only at the outer boundaries should the law do so. But the law must weigh in more strongly about substance.
In essence, what many people want when it comes to privacy is for their data to be collected, used, and disclosed in ways that benefit them or that benefit society without harming them individually. If people have an objection to certain uses of data, they want a right to say no. But many people do not want to micromanage their privacy. They want to know that someone is looking out for their privacy and that they will be protected from harmful uses.
With the food we eat and the cars we drive, we trust that there will be a general level of safety. Much choice remains, but we do not have to fear that whenever we drive a car or drink milk, it will be unsafe. We trust that certain basic features will be available and that these products will fall within certain reasonable parameters of safety. We do not have to become experts on cars or milk. Establishing more substantive rules about data collection, use, and disclosure will help. These rules can consist of hard boundaries that block practices that are particularly troublesome, as well as softer default rules that can be bargained around, to establish a basic set of norms. Indeed, default rules can be crafted to be easier or harder to bargain around.
Of course, moving away from neutrality must avoid too much paternalism. When the law overrides people’s ability to consent, it typically does so because the harm of what they might consent to clearly outweighs the benefit. With privacy, the costs and benefits are often complicated to weigh. As Lior Strahilevitz notes in this volume, various restrictions on the collection, use, and disclosure of personal data lead to benefits for some people and detriments to others. Privacy has distributive effects, and this fact makes it more complicated to determine which choice is the right one to make. Moreover, as Omer Tene and Jules Polonetsky note and demonstrate with examples, the collection, use, and disclosure of personal data – even without consent – can lead to great benefits for individuals and society.
In many cases, benefits might not be apparent immediately at the time the data is collected. New ideas for combining data, new discoveries in data aggregation and analysis, and new techniques and technologies of data analysis might change the benefits side of the equation. They might change the costs side as well. Rules that require renewed consent for new uses of data might be too cost-prohibitive and serve as a de facto bar on these uses. Such an outcome might not be socially desirable, and it might not be the outcome preferred by most people whose data is involved. On the other hand, blanket consent that allows for a virtually unlimited array of new uses of data can be just as undesirable, as data can potentially be used in harmful ways people might not be able to anticipate or understand.
Moreover, measuring the costs of certain forms of collection, use, and disclosure of personal data is extremely difficult because privacy harms are so elusive to define. Ultimately, because of the dynamism of this area, assessing costs and benefits requires a fair degree of speculation about the future. Individuals are likely not able to make such decisions very well in advance, but neither is the law.
Such decisions would be better made at the time of particular uses of data. What is needed is for the law to weigh in and provide guidance about the types of new uses at the time they are proposed. Perhaps some ought to be restricted outright; some ought to be limited; some ought to require new consent; some ought to be permitted but with a right to revoke consent; and some ought to be permitted without new consent. Perhaps an agency should review proposals for new uses as they arise. The self-management model cannot be abandoned, nor can more paternalistic measures. There is no silver bullet, and so the most viable path forward appears to be a continuing, elaborate dance between self-management and paternalism.