'How to Govern Visibility?: Legitimizations and Contestations of Visual Data Practices after the 2017 G20 Summit in Hamburg' by Rebecca Venema in (2020) 18(4) Surveillance & Society 522–539 comments:
Technological changes shift how visibility can be established, governed, and used. Ubiquitous visual technologies, the ability to distribute and use images from heterogeneous sources across different social contexts and publics, and increasingly powerful facial recognition tools afford new avenues for law enforcement. Concurrently, these changes also trigger fundamental concerns about privacy violations and all-encompassing surveillance. Using the example of police investigations after the 2017 G20 summit in Hamburg, the present article provides insights into how different actors in the political and public realm in Germany deal with these potentials and tensions in handling visual data. Based on a qualitative content analysis of newspaper articles (n=42), tweets (n=267), experts’ reports (n=3), and minutes of parliamentary debates and committee hearings (n=8), this study examines how visual data were collected, analyzed, and published and how different actors legitimated and contested these practices. The findings show that combined state, corporate, and privately produced visual data and the use of facial recognition tools allowed the police to monitor and track public life in large parts of the inner city of Hamburg during the summit days. Police authorities characterized visual data and algorithmic tools as objective, trustworthy, and indispensable evidence-providing tools but black-boxed the heterogeneity of sources, the analytical steps, and their potential implications. Critics, in turn, expressed concerns about infringements of civic rights, the trustworthiness of police authorities, and extensive police surveillance capacities. Based on these findings, this article discusses three topics that remained blind spots in the debates but merit further attention in discussions on norms for visual data management and for governing visibility: (1) collective responsibilities in visibility management, (2) trust in visual data and facial recognition technologies, and (3) social consequences of encompassing visual data collection and registered faceprints.
Venema argues:
“It is an amount of visual data never seen before in the criminal history in Germany” (Monroy 2017), “a new standard of proof” (Monroy 2018); “we enter uncharted technological territory” (Bürgerschaft der Freien und Hansestadt Hamburg 2018: 8). With these words, the chief inspector of Hamburg’s criminal investigation department praised the wealth of images and the pivotal role of facial recognition tools that were used for police investigations after the 2017 G20 summit in Hamburg. Protests had culminated in various violent confrontations between protesters and the police as well as in severe riots (for a detailed chronology and an in-depth analysis of the dynamics, see Malthaner, Teune, and Ullrich 2018). In the subsequent prosecutions against individuals accused of, for example, disturbing the peace, assault, civil disorder, damage to property, or looting, the police collected more than 100 TB of photographs and videos and analyzed them with the help of a third-party facial recognition tool. Moreover, the police published more than two hundred pictures of suspects online in several waves of national, and later European, public searches.
These practices reflect important shifts in how visibility can be established, governed, and used in highly visualized and datafied societies: both protests and public life in general are increasingly videotaped or captured in photographs, be it by the police, video surveillance cameras, people who attend an event, or those who simply pass by a given public place. Vast numbers of digital images taken and shared in private and public contexts can be widely distributed, combined with images from other sources, and (re)used across different social contexts and publics. Visual data, that is, the combination of a given photograph or video sequence with specific metadata, such as GPS coordinates or the date and time at which a picture or video was taken, can detail fundamental personal information such as a person’s whereabouts at a given time, individuals’ physical and facial traits, or how people interact with each other. Moreover, increasingly powerful tools for algorithmic analyses, such as facial recognition tools, now promise significant advancements in scanning large data sets, mapping facial features from a photograph or video, and identifying individuals or tracking their movements.
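To make this notion of visual data concrete, the short Python sketch below (mine, not from Venema's article) shows the two mechanics the passage describes: reading the metadata a camera or phone embeds in an ordinary photograph, and comparing a faceprint against faces found in a crowd image with an off-the-shelf library. It assumes the Pillow and open-source face_recognition packages; the file names and the 0.6 distance threshold are illustrative assumptions, not details from the case.

```python
# Illustrative sketch only: an image file doubles as "visual data"
# (pixels plus metadata), and a faceprint is a numeric vector that
# can be compared across images. File names are hypothetical.
from PIL import Image, ExifTags
import face_recognition

GPS_IFD = 0x8825  # EXIF pointer to the GPS sub-block

def read_photo_metadata(path):
    """Return the EXIF tags embedded in a JPEG: timestamp, device, GPS."""
    exif = Image.open(path).getexif()
    # Main tags: date/time, camera model, orientation, ...
    tags = {ExifTags.TAGS.get(k, k): v for k, v in exif.items()}
    # GPS tags: latitude, longitude, altitude, if the device recorded them
    gps = {ExifTags.GPSTAGS.get(k, k): v
           for k, v in exif.get_ifd(GPS_IFD).items()}
    return tags, gps

def face_distances(reference_path, crowd_path):
    """Compare a reference face against every face found in a crowd photo."""
    # Assumes at least one face is detected in the reference image.
    reference = face_recognition.face_encodings(
        face_recognition.load_image_file(reference_path))[0]
    crowd = face_recognition.face_encodings(
        face_recognition.load_image_file(crowd_path))
    # Lower Euclidean distance = more similar; ~0.6 is a common cut-off.
    return face_recognition.face_distance(crowd, reference)

if __name__ == "__main__":
    tags, gps = read_photo_metadata("protest_photo.jpg")
    print("Taken:", tags.get("DateTime"), "GPS:", gps)
    for d in face_distances("reference_face.jpg", "protest_photo.jpg"):
        print(f"distance {d:.2f}:", "possible match" if d < 0.6 else "no match")
```

The point is conceptual rather than practical: location, time, and a machine-comparable faceprint all travel with, or can be derived from, a single image file, which is what gives combined data sets from heterogeneous sources their tracking power.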
These changes and characteristics have implications for how the police and public, private, and voluntary sector partners interact in policing strategies (see Spiller and L’Hoiry 2019; Trottier 2015). Furthermore, they entail both myriad potentials and possible risks. On the one hand, extensive and heterogeneous visual data and facial recognition tools might be beneficial in situations such as searching for terrorists or a missing child. In fact, they can open up significant opportunities for safeguarding public security and for supporting policing operations, as the case of the Boston Marathon bombing has shown (Mortensen 2015). On the other hand, ubiquitous visual technologies, the potentially broad reach of images or videos, and biometric analyses may also be considered fundamental threats to civil liberties and an intrusive shift in control capacities (Crawford 2019). Indeed, encompassing visual data can also contribute to exclusion, repression, and targeted control when pictures or videos published online are used to monitor and collect information about individuals or groups of people, their activities, interactions, and associations (see, e.g., Pearce, Vitak, and Barta 2018; Lane, Ramirez, and Pearce 2018; Uldam 2018; Dencik, Hintz, and Carey 2018).
How images were handled in the G20 investigations triggered controversial public and political debates. In these debates, the crucial steps of (visual) data management and of governing visibility (how to collect, how to analyze, and how to use and distribute data) moved to the center of public attention. The ways in which facial recognition was used even led to a precedent-setting lawsuit in Germany (Caspar 2019). This makes the 2017 G20 investigations a timely case study for examining discourses on visual data practices and how ethical and legal norms for handling visual data and for governing visibility are currently discussed. Visual data practices are understood here as practices of collecting, analyzing, and publishing visual data. Tracing these practices and the debates on visibility management and law enforcement is vital, as they provide insights into an urgent social concern (Flyverbom 2019) and are a key site for understanding the politics of datafied societies in general (Hintz, Dencik, and Wahl-Jorgensen 2018).
So far, insights into how different authorities and stakeholders in the political and public realm deal with the potentials, risks, and normative questions related to visibility and visual data are scarce. Based on a qualitative content analysis of newspaper articles, tweets, experts’ reports, and minutes of parliamentary debates and committee hearings, I seek to address this gap in two ways. First, I compile publicly available information about visual data practices. Second, I examine how different actors in mediated public and political debates legitimated and contested visual data practices.
I start by outlining the theoretical concepts of visibility and visibility management. I then conceptually discuss how changing visual practices shift visibility and surveillance constellations. The subsequent review of previous research and the empirical study focus on how different actors such as political decision-makers, journalists, or citizens make sense of these shifts and their implications. Based on the empirical findings, I discuss three topics that remained blind spots in the debates but merit further attention in discussions on norms for visual data management and for governing visibility: (1) collective responsibilities in visibility management, (2) trust in visual data and facial recognition technologies, and (3) social consequences of encompassing visual data collection and registered faceprints.