'The Right to Be Forgotten: Issuing a Voluntary Recall' (Indiana University Robert H. McKinney School of Law Research Paper No. 2015-13) by R. George Wright
Recently, in Europe and elsewhere, some form of a “Right to Be Forgotten” in various internet and search engine contexts has been recognized. This Article contends, however, that for various largely practical reasons, no such broad-sweeping right should be adopted in the United States. More narrowly particularized defamation, privacy, confidentiality, and emotional distress claims, along with criminal record expungement statutes, jointly provide a better alternative path, especially when modified to address significant socio-economic class effects. Crucially, the superiority of narrower, particularized, contextual, and pluralistic approaches to the concerns underlying a “Right to Be Forgotten” flows from important systematic biases and asymmetries between persons seeking a de-linking or deletion of personal information on the one hand, and information aggregators such as Google on the other. ...
Given the evolving nature of the right to be forgotten, the focus below will not be on specific legal formulas, mechanisms, tests, or procedures. Instead, the focus will be on more basic underlying values, concerns, tendencies, experiences, patterns, risks, and costs. Controversial definitions of ideas such as privacy, autonomy, anonymity, and the public interest will be avoided as much as possible. The focus will instead be more practical. The argument below will rely not on dogmatic assertions about rights, but on a pragmatic sense of the inevitable basic problems in implementing such a right.
In the end, for pragmatic reasons, no broad European-style legal right to be forgotten should be adopted. Narrowly contextual, particularized statutory and common law privacy, non-defamation, confidentiality, and emotional distress damages rights, along with criminal expungement statutes, jointly provide a better alternative path. Narrowly focused holdings and statutes can readily be modified to address significant socio-economic class effects.
Any broad-sweeping legal right to be forgotten, beyond such narrow, particularized, context-sensitive accommodations, is ultimately likely, for practical reasons discussed throughout below, to be ill-advised. The superiority of a narrow, particularized, contextual, and pluralistic approach to a right to be forgotten flows from practically significant systematic biases and asymmetries between individuals seeking de-linking or deletion of personal information on the one hand and information aggregators such as Google on the other.
Wright goes on to conclude:
[A] broad right to be forgotten thus involves distinctive practical problems associated with a substantial element of government paternalism. Such an argument could easily be further developed. The main focus of this Article, however, has instead been on the range of relevant and mutually reinforcing biases and asymmetries, and on related practical considerations more generally. Each of the reinforcing practical concerns raised above adds further pragmatic complications, particularly to the operation of any broad-sweeping, generalized right to be forgotten.
At an extreme, though, even pragmatic concerns for the risks, costs, and sheer workability of any broad legal mechanism unavoidably begin to merge with fundamental relevant moral and political principles. Dystopias, in the extreme case, may manifest the above problems of systematic and mutually reinforcing biases, unnecessarily severe asymmetries of information and motivation, and the resulting policy distortions and inefficiencies. Any such political or moral concern may seem irrelevant if we simplistically think of a right to be forgotten as the merely straightforward universal empowerment of individuals, and perhaps of small groups. But for every individual who is empowered by a legally enforced cleaning of the personal internet slate, many other individuals, disproportionately those who cannot easily afford the most productive internet searches, may lose realistic access to usable information without their knowledge or consent.
One might argue in response that the lost information was authoritatively deemed to be outdated or irrelevant, at least for such purposes as the relevant decision maker was able to envision. We will not reiterate here the systematic biases and asymmetries that predictably tend to distort such determinations. Rather, the point here is that the determination of which items of information should be widely accessible is made not by an empowered individual but, ultimately, by a government or a court that sets the criteria for search engine corporations and other private actors to follow. How comfortable we should be with a for-profit entity like Google making discretionary public policy decisions is certainly worth asking. But if the corporate decision rules are ultimately set in general terms by governments, it is unclear that individual persons are thereby distinctly empowered.
In the end, it is certainly understandable that contemporary Europeans in particular, given the history of the twentieth century, would be especially anxious about personal privacy. But it is far from clear why the best response to nightmarish historical centralized governmental abuse of privacy is to entrust decisions as to the relevance and significance of information, in unforeseen contexts, to a few multinationals, and ultimately to centralized governments, rather than diffusely to the various more or less reasonable people who may be affected, positively or negatively, by how sensibly they respond to and discount the information in question. The exercise of discretionary authority to grant or withhold access to information, whether by one or more international profit-seeking entities or by centralized governments, even where it is not systematically skewed as described above, may well tend to track the perceived interests of that decision maker. Those interests may not correspond to the interests of affected individuals or the broader public. ...
The idea of a broad-sweeping right to be forgotten, in whatever form it is proposed, inevitably raises fundamental questions. It is tempting to try to make sound, broad legal policy by somehow pitting the value of informational privacy, however defined, against an equally vague and general right of access to information itself. A debate at that level of competing abstract rights, however, distorts the issues without encouraging genuine progress in understanding.
The approach taken in this Article has, instead, focused on the systematic and mutually reinforcing patterns of biases and asymmetries likely to distort, in predictable ways, the practical operation of any version of a generalized right to be forgotten. Some of the crucial biases and asymmetries reflect familiar psychologies, cognitive limits, incentives, and patterns of interests of persons and institutions, as recognized in common experience and supported by the available social science evidence. On this basis, this Article has recommended a range of better understood, fine-tuned, particularized, more contextualized common law and statutory privacy-oriented remedies, as continually amended in the interests of socio-economic fairness and equality. Given this more attractive alternative path, a broadly generalized right to be forgotten should be subject to a recall.