16 September 2018

Obscurity

'Search Engines and the Right to be Forgotten: Squaring the Remedy with Canadian Values on Personal Information Flow' by Andrea Slane in (2018) 55(2) Osgoode Hall Law Journal 349 states:
The Office of the Privacy Commissioner of Canada (“OPC”) recently proposed that Canada’s private sector privacy legislation should apply in modified form to search engines. The European Union (“EU”) has required search engines to comply with its private sector data protection regime since the much-debated case regarding Google Spain in 2014. The EU and Canadian data protection authorities characterize search engines as commercial business ventures that collect, process, and package information, regardless of the public nature of their sources. Yet both also acknowledge that search engines serve important public interests by facilitating users’ search for relevant information. This article considers specifically what a Canadian right to be forgotten might look like when it is seen as an opportunity to re-balance the values at stake in information flow. This article aims to bring Canada’s existing legacy of balancing important values and interests regarding privacy and access to information to bear on our current information environment. 
Slane comments:
As evidenced by slogans like 'lest we forget' and 'let bygones be bygones,' 'remembering' and 'forgetting' play important social functions: We need to both learn from history and be able to move on from the past. The right to be forgotten that has entered global public consciousness in the last few years has inspired both concerns about suppressing history and reminders that total remembering is both new and damaging to data subjects and communities. However, the impetus behind the right to be forgotten is less about grand social values of remembering and forgetting, and more about managing personal information flows in the digital age: It is about trying to address vast power imbalances between data subjects and various digital information brokers, including information location service providers such as search engines. 
In Canada, the Office of the Privacy Commissioner of Canada (“OPC”) recently proposed that the data protection regime governing the private sector, the Personal Information Protection and Electronic Documents Act (“PIPEDA”), should be interpreted to obligate search engines to abide by fair information principles, in particular the principles of accuracy and appropriate purposes. Applying PIPEDA to search engines would be a new practice, even though the OPC claims that it is merely applying the current legislation. As recently as 2017, it seemed that only voluntary cooperation would be requested of search engines. For example, in the Federal Court’s affirmation of the OPC findings in AT v Globe24h.com, the defendant’s website was found to have violated PIPEDA when it scraped court and tribunal documents containing personal information from publicly accessible legal databases and allowed them to be indexed by general search engines. The Federal Court issued a declaratory court order, as endorsed by the OPC, which allowed the complainant to appeal to Google to honour its voluntary search alteration policies: The court did not directly issue an order to compel Google to do so. 
The European Union (“EU”), however, already requires search engines to honour complainants’ requests to remove personal information from search results in certain circumstances. The data protection regime in the EU characterizes search engines as primarily commercial business ventures that collect, process, and package information, regardless of the public nature of their sources. Search engine results are in this sense a product sold by the search engine company—not directly to the user, but rather to advertisers and other data brokers with an interest in search result content and compilation. If this understanding of search engine results as an information product were adopted in Canada, as currently proposed by the OPC, then a search engine company could be deemed to be subject to PIPEDA, in that it “collects, uses or discloses [personal information] in the course of commercial activities.” While the OPC has rightly suggested that it would be unreasonable to require search engines to abide by PIPEDA as a whole, in particular with regard to securing consent for all of their collection and use of personal information, there are nonetheless significant ways that PIPEDA could be applied in a workable and rights-balancing way. This article considers what a finding that search engines are subject to PIPEDA would mean, and how it could be justified and limited in a principled fashion that respects our commitment to privacy, access to information, and freedom of expression. In other words, what would a Canadian right to be forgotten look like? 
The right to be forgotten is generally recognized as arising from European sensibilities regarding personality rights. European privacy and identity rights provide strong protections for individual autonomy in the domain of identity formation and presentation, giving individuals more control over how they are discussed and portrayed in public. European data protection law operates as an outgrowth of this broader and stronger protection of citizens’ identity. This commitment is rooted in European emphasis on human dignity, respect for one’s ‘private life,’ and protection from damage to one’s reputation by either government or private actors. These rights are enshrined in multiple constitutional documents of the EU, and illustrate the more general trust that European legal culture places in government regulation to protect these interests, and their distrust of private markets to do so. 
The United States, on the other hand, is often regarded as having the opposite of European sensibilities when it comes to personal information flow. In the United States, privacy is rooted in liberty rather than dignity, as a right to be ‘free from’ government interference in one’s private life, with far fewer and more limited restrictions placed on private actors. Constitutional protection for privacy only extends to unreasonable search and seizure, and any private rights to privacy are consequently derived from statute or common law and often lose out to the notoriously strong constitutional protection for freedom of speech. US legal culture stresses an acute distrust of government regulation, and instead places much more trust in markets to deal with private problems. 
Canada tends to fall somewhere in between these two interpretations: Our Charter of Rights and Freedoms does not contain express protection for privacy beyond protection from unreasonable search and seizure — although Quebec’s additional Charter of Human Rights and Freedoms does, and is closer to the European approach to privacy, using similar language in fostering respect for “private life.” However, section 1 of the Canadian Charter has allowed privacy interests to be more readily balanced against freedom of expression than in the United States, as more restrictions can be justified as reasonable in a “free and democratic society.” Canada consequently approaches some issues of personal information flow differently than the United States—for example, publication bans to protect the privacy of some crime victims are constitutionally possible in Canada but not in the United States. However, Canada has not embraced personality rights to the extent that the EU has, and significant recent gains in Canada for freedom of expression (specifically regarding publication of defamatory content) illustrate that Canada places more value on freedom of expression and less on protecting reputation than Europe does. Canadian law on intermediary liability for information posted online by others is also less developed than in either of these jurisdictions. 
Discussions about the right to be forgotten are emerging along with the rapid development of our technology-based information landscape. Real concerns about actual and potential pervasive surveillance—from government, companies, peers, and the broader public—have resulted in heightened anxiety about being able to protect one’s identity and interests. Revelations of broad government surveillance of communications, commercial entities amassing vast quantities of data about consumer behaviour (including emotional responses to various stimuli, tracking online and app-enabled interactions with others, and geo-location technologies in many portable devices), as well as the explosion of social media, have fueled these concerns. Anonymity has always been a central strategy for protecting one’s privacy online, but it is becoming increasingly difficult to remain unidentified. We all now have large dossiers of data held by various public, private, and personal actors, with little knowledge of what is in them or how they are combined (including from both private and public sources). Proponents of the right to be forgotten are attempting to intervene against this power imbalance. 
Search engines have become the primary means by which we find information, including, of course, information about people: We search for people we know or hear about, and occasionally check our own names. Google has emerged as the worldwide leader in online search services, credited with over 90 per cent of the global market share. Google’s success has been attributed to its algorithms, by which the company processes information gathered from publicly available webpages and delivers the results in list form to a user, partly based on the user’s previous search history. The aim is to deliver the most relevant material at the top of the list. Information presented further down the list is deemed less relevant, and most people do not even look at search results beyond the first page or two. Online reputation management services have long profited from the willingness of companies and individuals to pay for techniques such as Search Engine Optimization (“SEO”) to manipulate search results so positive information rises to the top and negative information is pushed down the list. These services are expensive, however, so only wealthy individuals can benefit from this private regulation of information flow: Without a right to be forgotten, ordinary people are at the mercy of the algorithms. 
Online identity — the profile that emerges when online information connected to a person’s name or other identifier is aggregated and made available to others — has increasingly become a central component of our social and professional lives. Youth are increasingly being taught about ‘self-branding’ as an important part of educational and professional success: They understand that online identity is central to many forms of social evaluation. Lisa Austin described privacy as the regime by which we secure and bolster the conditions for self-formation and presentation, online and off. She argued that data protection principles establish the ground rules for creating and safeguarding an identity-favourable environment. The problem with pervasive surveillance, then, is its possible effects on identity formation, revision, and tailoring to suit various social interactions. It can stifle one’s capacity to express “yourself freely in the here and now.” Erving Goffman noted that every individual has multiple identities, and that social interaction is built on which ‘face’ is put forward in a particular relational context. With the explosion of data collection from so many different directions and via so many channels, we have been rapidly losing the capacity to meaningfully influence, much less control, this process. The right to be forgotten, in its various forms, aims to afford data subjects greater control over the flow of information about them. 
This article explores what a Canadian variant of the right to be forgotten might look like in relation to search engines as a particular type of business that collects and packages publicly available personal information about individuals. Part I will consider the different versions of this ‘right’ in the EU, specifically obscurity, oblivion, and erasure. In particular, it will explore how the EU deals with publicly and indirectly collected information, given that until now data protection regimes generally have not regulated the collection and processing of such information. Part II will consider the digital information dynamics related to publicly available personal information, and what the normative impetus behind regulating these information dynamics might be. It will include a discussion of the difference between what search engines do and what news sources do, and how it may be possible to restrict the former while preserving the importance of expression and access to information regarding the latter. Part III explores the possibility of dividing publicly available personal information into three subcategories: information that should not have been published in the first place; information that is publicly available from public sector sources, but to which public access has been legitimately restricted; and information that, while legitimately and publicly available, has been given more prominence than warranted by way of a search engine’s algorithm. Also important is whether this information has caused the data subject some harm. It also considers the current Canadian approach to each of these categories, and explores how the right to be forgotten might fit into our already established or developing normative approaches to personal information flow. Part III concludes by suggesting a creative solution to the especially complex and novel dynamics of information flow.