21 April 2017

German Privacy, Harm and Dignity

'A Pantomime of Privacy: Terror and Investigative Powers in German Constitutional Law' (Washington & Lee Legal Studies Paper No. 2017-5) by Russell Miller comments 
Germany is widely regarded as a global model for the privacy protection its constitutional regime offers against intrusive intelligence-gathering and law enforcement surveillance. There is some basis for Germany’s privacy “exceptionalism,” especially as the text of the German constitution (Basic Law) provides explicit textual protections that America’s 18th Century constitution lacks. The German Federal Constitutional Court has added to those doctrines with an expansive interpretation of the more general rights to dignity (Article 1 of the Basic Law) and the free development of one’s personality (Article 2 of the Basic Law). This jurisprudence includes constitutional liberty guarantees such as the absolute protection of a “core area of privacy,” a “right to informational self-determination,” and a right to the “security and integrity of information-technology systems.” On closer examination, however, Germany’s burnished privacy reputation may not be so well-deserved. The Constitutional Court’s assessment of challenged intelligence-gathering or investigative powers through the framework of the proportionality principle means, more often than not, that the intrusive measures survive constitutional scrutiny so long as they are adapted to accommodate an array of detailed, finely-tuned safeguards that are meant to minimize and mitigate infringements on privacy. Armed with a close analysis of its recent, seminal decision in the BKA-Act Case, in this article I argue that this adds up to a mere pantomime of privacy – a privacy of precise data retention and deletion timelines, for example – but not the robust “right to be let alone” that contemporary privacy advocates demand.
'Data Privacy and Dignitary Privacy: Google Spain, the Right to Be Forgotten, and the Construction of the Public Sphere' by Robert Post comments
 In 2014, the decision of the European Court of Justice in Google Spain SL v. Agencia Española de Protección de Datos (“Google Spain”) set off a firestorm by holding that the fair information practices set forth in EU Directive 95/46/EC, which is probably the most influential data privacy text in the world, require that Google remove from search results links to websites containing true information on the grounds that persons possess a “right to be forgotten.” As a result of Google Spain, Google has processed 703,910 requests to remove 1,948,737 URLs from its search engine, and some 43.2% of these URLs have been erased from searches made under the name of the person requesting removal. The world-wide influence of Google Spain is likely to become even greater when the EU’s General Data Protection Regulation (“GDPR”) takes effect in 2018.
At stake in Google Spain were both privacy values and freedom of expression values. Google Spain inadequately analyzes both. With regard to the latter, Google Spain fails to recognize that the circulation of texts of common interest among strangers makes possible the emergence of a “public” capable of forming “public opinion.” The creation of public opinion is essential for democratic self-governance and is a central purpose for protecting freedom of expression. As the rise of American newspapers in the 19th and 20th Centuries demonstrates, the press establishes the public sphere by creating a structure of communication that is independent of the content of any particular news story. Google underwrites the virtual public sphere by creating an analogous structure of communication.
With regard to privacy values, EU law, like the law of many nations, recognizes two distinct forms of privacy. The first is data privacy, which is protected by Article 8 of the Charter of Fundamental Rights of the European Union. Data privacy is safeguarded by fair information practices designed to ensure (among other things) that personal data is used only for the specified purposes for which it has been legally gathered. Data privacy operates according to an instrumental logic, and it applies whenever personal information is processed. Its object is to ensure that persons retain “control” over their personal data. Google Spain interprets the Directive to give persons a right to have their personal data “forgotten” or erased whenever it is “inadequate, irrelevant or no longer relevant, or excessive in relation to the purposes of the processing at issue carried out by the operator of the search engine.” It is not necessary to show that the processing of such data will cause harm.
In contrast to data privacy, Article 7 of the Charter of Fundamental Rights of the European Union is entitled “Respect for Family and Private Life.” Article 7 should be interpreted analogously to how the European Court of Human Rights interprets Article 8 of the European Convention. Article 7 of the Charter therefore protects the dignity of persons by regulating inappropriate communications that threaten to degrade, humiliate or mortify them. The privacy at issue in Article 7 follows a normative logic that tracks harms to personality caused by violations of civility rules. Article 7 protects the same privacy values as those safeguarded by the American tort of public disclosure of private facts. It protects what we may call “dignitary privacy.” Throughout the world, courts protect dignitary privacy by balancing the harm a communication may cause to the integrity of a person against the importance the communication may have to the public discourse necessary for democratic self-government.
The instrumental logic of data privacy is inapplicable to public discourse, which is why both the Directive and GDPR categorically exempt journalistic activities from the reach of fair information practices. The communicative action characteristic of the public sphere is made up of intersubjective dialogue, which is antithetical both to the instrumental rationality of data privacy and to its aspiration to ensure individual control of personal information. It was therefore a mistake for Google Spain to apply the fair information practices of the Directive to the Google search engine.
But the Google Spain opinion also glancingly mentions Article 7, and in the end the opinion creates doctrinal rules that are inconsistent with the Directive and roughly reminiscent of those used to protect dignitary privacy. The Google Spain opinion is thus deeply confused about the kind of privacy it wishes to protect. The opinion is pushed in the direction of dignitary privacy because courts have for more than a century sought to impose the values of dignitary privacy on both public discourse and the press. Although the normative logic of dignitary privacy is in tension with freedom of expression, because it limits what can be said, it is not ultimately incompatible with a right to freedom of expression insofar as that right seeks to foster democratic self-government. Public discourse cannot become an effective instrument of self-governance without a modicum of civility.
The Google Spain decision recognizes dignitary privacy only in a rudimentary and unsatisfactory way. It inadequately theorizes both the harm a Google link might cause and the contribution a link might make to public discourse. If it had more clearly applied the doctrine of dignitary privacy, moreover, Google Spain would not have held that the right to be forgotten applies to Google but not necessarily to the underlying websites to which the Google search engine links. Nor would Google Spain have outsourced the enforcement of the right to be forgotten to a private corporation like Google. Only government agencies are authorized to determine the balance between civility and freedom appropriate for these underlying websites.

Robots, Liability and Accidents

'The Rise of Robots and the Law of Humans' (Oxford Legal Studies Research Paper No. 27/2017) by Horst Eidenmueller is characterised as an
attempt to answer fundamental questions raised by the rise of robots and the emergence of ‘robot law’. The main theses developed in this article are the following: (i) robot regulation must be robot- and context-specific. This requires a profound understanding of the micro- and macro-effects of ‘robot behaviour’ in specific areas. (ii) (Refined) existing legal categories are capable of being sensibly applied to and regulating robots. (iii) Robot law is shaped by the ‘deep normative structure’ of a society. (iv) If that structure is utilitarian, smart robots should, in the not too distant future, be treated like humans. That means that they should be accorded legal personality, have the power to acquire and hold property and to conclude contracts. (v) The case against treating robots like humans rests on epistemological and ontological arguments. These relate to whether machines can think (they cannot) and what it means to be human. I develop these theses primarily in the context of self-driving cars – robots on the road with a huge potential to revolutionize our daily lives and commerce.
'Products Liability and the Internet of (Insecure) Things: Should Manufacturers Be Liable for Damage Caused by Hacked Devices?' by Alan Butler in University of Michigan Journal of Law Reform (forthcoming) comments
Despite the fact that discussions of liability for defective software go back more than forty years, there is no clear consensus on what theory governs liability for damage caused by ‘connected devices’ (or the ‘Internet of Things’). However, the proliferation of IoT devices may be the catalyst for a new field of ‘connected devices’ products liability law, which could provide a good model for determining liability for several reasons. First, attacks on IoT devices can cause and have caused significant damage to property, and are highly foreseeable given the widely acknowledged insecurity of connected devices and numerous high-profile attacks. Second, IoT devices are, in many cases, capable of being updated and secured remotely by the manufacturer, and patching well-known security flaws could significantly reduce the risk of future attacks. And third, holding manufacturers liable for downstream harms caused by their insecure devices is well aligned with the purposes of products liability law—to minimize harm by encouraging manufacturers (as the least-cost avoider) to invest in security measures.