Showing posts with label Sousveillance.

18 August 2020

Bodycams

'Making the Body Electric: The Politics of Body-Worn Cameras and Facial Recognition in the United States' by Jacob Hood in (2020) 18(2) Surveillance & Society 157 comments 

This paper explores the rapid deployment of police body-worn cameras (BWCs) and the subsequent push for the integration of biometric technologies (i.e., facial recognition) into these devices. To understand the political dangers of these technologies, I outline the concept of “making the body electric” to provide a critical language for cultural practices of identifying, augmenting, and fixing the body through technological means. Further, I argue how these practices reinforce normative understandings of the body and its political functionality, specifically with BWCs and facial recognition. I then analyze the rise of BWCs in a cultural moment of high-profile police violence against unarmed people of color in the United States. In addition to examining the ethics of BWCs, I examine the politics of facial recognition and the dangers that this form of biometric surveillance poses for marginalized groups, arguing against the interface of these two technologies. The pairing of BWCs with facial recognition presents a number of sociopolitical dangers that reinforce the privilege of perspective granted to police in visual understandings of law enforcement activity. It is the goal of this paper to advance critical discussion of BWCs and biometric surveillance as mechanisms for leveraging political power and racial marginalization.

Hood states 

On December 15, 2017, an Austin Police Department (APD) officer opened fire on a stabbing suspect outside of a Central Austin apartment complex. The entire incident was recorded by the officer’s body-worn camera (BWC). Two days later, in an unrelated incident, another APD officer fired at a suspect who, while walking toward the officer, refused to drop a knife. This event was also recorded by the officer’s BWC. Almost immediately, APD began analyzing the BWC footage in order to garner an “objective” viewpoint. Interim Police Chief Brian Manley said that “it was fortunate that our officers that were involved had the body-worn camera because they really did provide a view that we would not have had otherwise,” because prior to the deployment of BWCs in Austin, Texas, he would have had to “just try to put together the best assessment of what had happened” (Wilson 2017). 
 
BWCs have become the technological norm for police departments across the United States, employed with the goal of obtaining a similar third-party perspective as in the incidents in Austin (Sousa, Miethe, and Sakiyama 2015). Police BWCs emerged amid a cultural panic over police violence towards people of color (largely unarmed black people), promising to reduce police misconduct and foster transparency. Yet the growth of interest in these devices has been met with worry regarding their privacy implications as well as overly optimistic hopes that they will reduce police misconduct and improve officer-community relations (Nielsen 2016; Phillips 2016; Thomas 2017; Fan 2018). Accompanying this are evolving efforts to integrate biometric technologies, such as facial recognition software, into existing BWC practices (Harwell 2018; van Schelle 2018). Beyond the legal concerns about these advancements are the normative concerns about using the body as the target of policing. If we consider the history of the physical body as a site of domination for marginalized groups, then practices of making these bodies more visible become all the more perilous. The central danger becomes the potential for this new model of policing to (re)define which people’s bodies are codified as authorized and unauthorized in terms of criminality. If the proposed duty of police is to investigate, solve, and prevent crime, then the target of policing practice is the “criminal” as defined by their socio-legal transgression(s). Policing becomes more dangerous when individuals are broken down and reinterpreted in terms of the information provided by their body, instead of as agential social beings. 
 
The first section of this article lays out the guiding theoretical framework of “making the body electric” to describe cultural practices of entangling technology with the body. Drawing upon ideas from Simone Browne (2010, 2015), Bryan Pfaffenberger (1992), and Michel Foucault (1975, 1976, 1978), I propose this theory to address the various ways that connecting the physical body with technology contorts social power, particularly around race. The second section of this article describes the development and employment of BWC devices and the subsequent push for integration of biometric capabilities. This section reviews the major literature on the devices’ ability to decrease police misconduct, foster department transparency, and arouse public support for their use. In the third section, I build my analysis of the sociopolitical consequences of BWC and biometric technologies, with special attention toward facial recognition analysis, using the lens of making the body electric. This paper merges empirical and theoretical work on BWCs with emerging conceptual, discursive, and technical work on facial recognition to outline the dangers of what may occur if these technologies collide.

30 March 2019

Heckling, Chilling and Evidence

'Recording as Heckling' by Scott Skinner-Thompson in (2019) 108 Georgetown Law Journal comments 
A growing body of authority recognizes that citizen recording of police officers and public space is protected by the First Amendment. But the judicial and scholarly momentum behind the emerging “right to record” fails to fully incorporate recording’s cost to another important right that also furthers First Amendment principles: the right to privacy. 
This Article helps fill that gap by comprehensively analyzing the First Amendment interests of both the right to record and the right to privacy in public, while highlighting the role of technology in altering the First Amendment landscape. Recording information can be critical to future speech and, as a form of confrontation to authority, is also a direct method of expression. Likewise, efforts to maintain privacy while navigating public space may create an incubator for thought and future speech, and also serve as direct expressive resistance to surveillance regimes. 
As this Article explains, once the First Amendment values of both the right to privacy and the right to record are systematically understood, existing doctrine—including the concept of the “heckler’s veto”—can help restore balance between these sometimes competing forms of “speech,” permitting citizen recording of police as well as allowing government regulation of certain recordings that breach the privacy shields of other citizens. 
Just as a heckler’s suppression of another’s free speech justifies government regulation of the heckler’s speech, so too when recording (a form of speech) infringes on and pierces reasonable efforts to maintain privacy (also a form of expression), then the government may limit the ability to record. The heckling framework underscores that liberated and vibrant public space is contingent on a balance between the ability to gather information and maintain privacy in public, while also providing a doctrinally-grounded path for adjudicating those interests.
In Australia the Canberra Times reports that the Australian Federal Police appear to be continuing the practice of chilling legitimate observation by the public through claims that recording is illegal or, more subtly, seizing recordings as 'evidence'. Such seizure should be unnecessary if there is systematic recording by officers and preservation of that evidence, for example through bodycams.

The CT states that an officer
adopted a heavy-handed approach to seizing evidence after an onlooker used his mobile phone to film a public arrest in the city on Thursday. Footage of the incident on Barry Drive shows a man attempting to evade police on his bicycle. He is caught by several officers, tackled to the ground, and handcuffed. 
It appears to be a textbook albeit clumsy arrest until one of the officers sees the person filming and tells him to stop filming and back away. The person filming complied and turned to leave, but the Traffic Operations officer then chased after him, seized the phone and, according to the onlooker's online comments, would not return it until he could record the person's details and have him send any recorded footage to police. 
Legal advice provided to The Canberra Times says that while police have the power to seize evidence, the more pressing issue in this case was whether the police "genuinely had grounds to suspect the footage could be important evidence in court". On the face of the limited public footage shown on social media, the evidence would appear to be to the contrary. 
The ACT Law Society goes on to suggest that the person was well within their rights to film the incident. The more likely reason the phone footage was seized was that "[the police] were more concerned about PR given it was a less than glamorous arrest". ACT Police media, via their media interactions, continually request that anyone holding CCTV or dash cam footage provide it to assist in their investigations. 
Police released a statement on Friday about the incident in which they identified the offender as in breach of his bail conditions. "The man was placed under arrest, and was subsequently charged with breach of bail, fail to appear, possess knife without reasonable excuse, and unlawful possession of stolen property," police said. "The video was provided to police. "As this matter is now before the courts, we are unable to comment further. However, in general terms police have the power to seize footage from members of the public as evidence."
One response is that having a power is not identical with an obligation to use that power.

My experience on campus several years ago, ironically as I was walking to give a privacy lecture, was being told by a uniformed AFP officer that it was illegal to film that officer - irrespective of circumstances - and that if I chose to do so I could be arrested. Possibly the thin blue line needs some more education.

The 'needed for evidence' seizure of devices poses challenges for advocates of sousveillance.

One perspective is provided in 'Context, visibility, and control: Police work and the contested objectivity of bystander video' by Bryce Clayton Newell in (2019) 21(1) New Media and Society, which
examines how police officers understand and perceive the impact of bystander video on their work. Drawing from primarily qualitative data collected within two police departments in the Pacific Northwest, I describe how officers’ concerns about objectivity, documentation, and transparency all manifest as parts of a broader politics of information within policing that has been amplified in recent years by the affordances of new media platforms and increasingly affordable surveillance-enabling technologies. Officers’ primary concerns stem from their perceived inability to control the context of what is recorded, edited, and disseminated to broad audiences online through popular platforms such as YouTube.com, as well as the unwanted visibility (and accountability) that such online dissemination generates. I argue that understanding the effects of this 'new visibility' on policing, and the role played by new media in this process, has become vitally important to our tasks of organizing, understanding, and overseeing the police.
'Points of View: Arrestees’ Perspectives on Police Body-Worn Cameras and their Perceived Impact on Police–Citizen Interactions' by Emmeline Taylor and Murray Lee in (2019) The British Journal of Criminology comments
Entirely absent from debates about the desirability and potential impacts of police body-worn cameras (BWCs) are the views of a significant group on the other side of the lens—individuals who have recently experienced arrest by a police officer. In a bid to redress this significant gap, this article reports findings from the first study to examine arrestee views and experiences of police BWCs. Data from interviews with 907 police detainees reveal that they are largely in favour of officers wearing cameras, believing that they can provide greater accountability and improve the behaviour of both law enforcement officers and members of the public. Importantly, however, this support is contingent on a number of operational and procedural policies regulating the use of BWCs.
The authors argue
 ‘Release the tapes. Release the tapes’ chants a throng of protesters in North Carolina, USA following the fatal shooting of Keith L. Scott by police in September 2016. Amid mounting pressure, the police released segments of two videos; one from a police dash-cam and the other from a police officer’s body-worn camera (BWC). Although neither recording provided conclusive evidence about the events that unfolded, or crucially whether Scott was indeed carrying a gun as had been claimed by the officer that shot him dead, the controversy highlights the degree to which audio-visual technologies have come to play a politically laden role in policing internationally, and importantly, symbolically represent notions of fairness, legitimacy, transparency and accountability. Despite such high-profile examples emphasizing their fallibility, recent years have seen billions of public monies invested in police BWCs internationally. A lack of evidence demonstrating effectiveness, or an understanding of how they operate in practice, has certainly not hampered their rapid adoption. Rather, an evidential desert has enabled police BWCs to be ascribed many ‘mythical properties’ (Palmer 2016). Elevated to ‘best practice’ from multiple sources including the American Civil Liberties Union (ACLU 2015) and the International Association of Chiefs of Police (IACP 2014), their costly adoption has proceeded on an exiguous evidence base. Although not unusual for police technologies to be heavily invested in without sufficient understanding of their effectiveness (Lum et al. 2019; Taylor 2010), the lack of awareness regarding how the public view and understand the police use of BWCs runs the risk of them inadvertently negatively impacting on perceptions of procedural justice and police legitimacy. 
Since the publication of a 2015 literature review that refrained ‘from drawing any definitive conclusions about BWC’ due to the scarcity of research (Lum et al. 2015: 11), Lum et al. (2019) report a five-fold increase in empirical studies. In addition to a modest catalogue of randomized control trials (RCTs) that typically use officer behaviour (e.g. use of force) and citizen behaviour (e.g. resisting arrest and citizen complaints) as proxy measures for assessing impact (see, e.g. Jennings et al. 2014; Ariel et al. 2016a; Braga et al. 2018), several studies have sought to gain insight into the views and experiences of police officers (Jennings et al. 2014; Katz et al. 2014; Roy 2014; Gaub et al. 2016; Goetschel and Peha 2017; Headley et al. 2017; Sandhu 2017); law enforcement leadership (Smykla et al. 2016; Sandhu 2017), and public attitudes toward police BWCs (Ellis et al. 2015; Maskaly et al. 2017; White, Gaub and Todak 2017). Yet, remarkably, entirely absent in debates about the desirability and potential impacts of BWC thus far are the views of an important group on the other side of the lens—i.e. arrestees. It is to this literature on the perceptions of police BWCs that this study contributes a vital and unique dataset. By understanding the views of arrestees, we can begin to see how they might animate their encounters with camera-wearing officers and influence their perceived understanding of any subsequent involvement with criminal justice procedures. 
The article is organized into six sections. First, an overview of developments in the use of audio-visual surveillance technologies in policing is provided before looking at the emergence of police BWCs specifically. The second section offers a précised overview of empirical research, focusing on the impact that BWCs have been found to have on the behaviour of police officers and citizens. Adding a vital international perspective, an overview of developments in Australia, the site of this study, is provided in the third section. This is followed by details of the methodology before the article turns, in the fifth section, to the findings. The study elicited a large amount of data and this article focuses specifically on four thematic domains not elsewhere reported: police use of force; arrestee aggression and violence; procedural justice; and, the operation of the cameras. By shifting the focus to those individuals on the other side of the lens, the analysis offers essential insights into the nuanced ways that police arrestees interpret and respond to police wearable cameras. This is of global significance if police legitimacy is to be maintained in the era of ‘new visibility’ (Goldsmith 2010). The sixth and final section discusses the implications for the ongoing operation of police BWCs and avenues for future research.
Another perspective is offered in 'Eyes and Apps on the Streets: From Surveillance to Sousveillance Using Smartphones' by Vania Ceccato in (2019) 44(1) Criminal Justice Review, which
explores the concept of surveillance by assessing the nature of data gathered by users of a smartphone-based tool (app) developed in Sweden to assist citizens in reporting incidents in public spaces. This article first illustrates spatial and temporal patterns of records gathered over 9 months in Stockholm County using Geographic Information Systems (GIS) to exemplify the process of sousveillance via app. Then, the experiences of user group members, collected using an app-based survey, are analyzed. Findings show that the incident reporting app is more often used to report an incident and less often to prevent it. Preexistent social networks in neighborhoods are fundamental for widespread adoption of the app, often used as a tool in Neighborhood Watch schemes in high-crime areas. Although the potentialities of using app data are open, these results call for more in-depth evaluations of smartphone data for safety interventions.
Ceccato comments
Since Jacobs’s seminal work, The Death and Life of Great American Cities in 1961, we have heard the powerful key concept of “eyes on the street” countless times. Jacobs (1961) wrote that in order for a street to be a safe place, “there must be eyes upon the street, eyes belonging to those we might call the natural proprietors of the street” (p. 35). But the era of smartphones and location-based services (LBS) has changed the way that individuals interact with a city. Now, “eyes” are complemented by “apps,” giving expression to new ways of depicting what happens in public space and perhaps redefining the role of guardians in surveillance. Compared with the traditional eyes on the street, the new exercise of social control invites a number of senses other than sight, such as touch and sound. An incident that happens on the street is still local (attached to a physical place with a pair of coordinates) but can now be seen by faraway eyes, literally by the whole world. Jacobs’ sense of “natural proprietors of the street” acquires a different meaning, as those who set a record on the (m)app are not only local residents but also visitors or transients, perhaps with no attachment to the area. With networks of smartphone app users, the process of sousveillance (Mann, 2004, p. 620), from French for “to watch from below,” seems to be more appropriate than surveillance (“to watch from above”). “Sousveillance describes the present state of modern technological societies where anybody may take photos or videos of any person or event, and then diffuse the information freely all over the world” (Ganascia, 2010, p. 489). This article calls for a reconceptualization of the term surveillance in the context of crowdsourced data (as sousveillance) gathered by LBS apps. 
The aim of this article is to explore the concept of surveillance and related terms by evaluating the nature of the data captured by users of an incident-reporting app, which was developed to support crime-prevention initiatives across Sweden. The aim is achieved by first characterizing this type of crowdsourced data as a result of the processes of sousveillance with an LBS app. Nine months of reports (app entries) in Stockholm County are assessed using geographic information systems (GIS) in relation to other indicators of safety and area characteristics. Also, the experiences of app users are analyzed via a survey. Then, by looking at the nature of the app-based data and the characteristics of the app users, we reflect upon some ideas that are taken for granted and traditionally characterize the process of surveillance. 
A reason to choose Stockholm, the capital of Sweden, as a case study is the availability of app-based data coming from smartphones (the app is an award-winning, free digital tool) that promote sousveillance through an online “Neighborhood Watch” scheme (NWS) and support local emergency services. Moreover, another reason for this choice is the degree of media penetration in the country, which is one of the highest in the world (Fox, 2013). According to The Internet Foundation in Sweden, as many as 77% of the population has a smartphone, 62% uses the Internet on their smartphone on a daily basis, and 57% navigates with help of a GPS in the smartphone. In 2015, over 95% in the 8–55 age-group were using the Internet, and this percentage is increasing within all age groups (Internetstiftelsen i Sverige, 2016). 
This article is structured as follows. It first reviews the literature in guardianship and surveillance and indicates how they may be affected by new technological developments, for example, LBS apps. We identify the current knowledge gaps in the international literature and use the Stockholm case study to contribute to filling some of these gaps. Note, however, that the Stockholm study presented here is based on a small sample data set, which means that some of the conclusions are driven by an exploratory analysis of the data rather than by rigorous, confirmatory hypotheses testing. Instead of claiming generality of the results, this analysis provides examples that are illustrative for the field. This article ends with a discussion of relevant topics to be pursued in future research and some of the technical, legal, and ethical challenges that lie ahead when using smartphone data.

05 November 2018

Sousveillance

'Environmental Sousveillance, Citizen Science and Smart Grids' by Bruce Baer Arnold in Matthew Rimmer (ed) Intellectual Property and Clean Energy (Springer, 2018) 375-398 comments
 Enhancing water, energy, transport and communication infrastructure through a distributed or centralised sentience—‘smart grids’—involves questions about power. Those questions are as much about data, knowledge and environmental activism as they are about technical protocols for internet refrigerators, congestion pricing of road networks and remote reading of domestic electricity meters. This chapter explores who gets to collect, access and use data from smart grids. It highlights emerging debate about privacy, including systemic surveillance by grid operators/partners, and security. It discusses scope for environmental mashups that inform public policymaking and environmental activism but conflict with legal frameworks for the ownership of data and quarantining of knowledge. It looks ahead to ask whether citizens can establish participatory environmental monitoring networks that are independent of grids operated by network providers such as power and water utilities.

14 June 2016

Privacy Infrastructure

'Building Privacy into the Infrastructure: Towards a New Identity Management Architecture' (University of Miami Legal Studies Research Paper No. 16-26) by Michael Froomkin argues
We are at risk of becoming digitally transparent to both government and the private sector. As it is increasingly obvious that US law is not going to prevent the destruction of personal privacy, we urgently need better privacy tools, baked into the way we do transactions. A partial, but significant, privacy enhancement would be a new Identity Management Architecture (IMA) enabling multiple privacy-protective transaction-empowered digital personae per user. Each persona (or ‘nym if you prefer) would have the ability to communicate, and at least a limited ability to transact, in a manner that would not be linkable, or at least would be very difficult to link, to the real identity of the user. By using a variety of personae for online transactions, reading, and communication, users would defeat — or at least vastly reduce the effectiveness of — commercial and perhaps also governmental profiling. 
The problem is that an IMA that enables privacy enhanced personae is most unlikely to reach wide acceptance unless it is designed in a manner that makes it easy to use. It will not receive US governmental acceptance unless it also reduces the extent to which the personae can be used to break laws and evade contractual obligations. This paper thus discusses the legal and political considerations that might inform a requirements document for such an IMA with special reference to US law and likely US government reaction. It includes a survey of laws that parties engaging in or enabling anonymous or pseudonymous transactions should consider, and concludes with discussion of several critical design decisions including transnational credentials, the possibility of identity escrow for transactional personae, and speculation as to how personae might fare in the marketplace. 
The timeliness of this proposal is demonstrated by David Chaum’s recent announcement of a new privacy protocol, PrivaTegrity, that contains most of the features needed to engineer a privacy-enhanced IMA that might be acceptable to law enforcement. The need for some action, whether based on PrivaTegrity or otherwise, is very great — so critical that it may be time to accept the previously unthinkable, and accept some form of identity escrow as part of the IMA.
'Privacy, Public Disclosure, Police Body Cameras: Policy Splits' by Mary Fan in (2016) 68 Alabama Law Review comments
When you call the police for help — or someone calls the police on you — do you bear the risk that your worst moments will be posted on YouTube for public viewing? Police officers enter some of the most intimate incidences of our lives — after an assault, when we are drunk and disorderly, when someone we love dies in an accident, when we are distraught, enraged, fighting, and more. As police officers around the nation begin wearing body cameras in response to calls for greater transparency, communities are wrestling with how to balance privacy with public disclosure. 
This article sheds light on the balances being struck in state laws and in the body camera policies of police departments serving the 100 largest cities in the nation. The evaluation illuminates two emerging areas of concern — the enactment of blanket or overbroad exemptions of body camera footage from public disclosure, and silence on victim and witness protection in many policies. The article offers two proposals to address the challenges. First, the article argues for legal safe harbors to foster the development of new redaction technologies to automate the removal of private details rather than exempting body camera video from disclosure. Blanket or broad exemptions from public disclosure destroy the incentive to use technological innovations to reconcile the important values of transparency and privacy and disable much of the promised benefits of the body camera revolution. Second, the article argues for giving victims and witnesses control over whether officers may record them, rather than putting the burden on victims and witnesses to request that recording cease. This approach better protects against the perverse unintended consequence of deterring victims from help-seeking and witnesses from coming forward, and reduces the risk of inflicting further privacy harms from justice-seeking.

05 May 2015

Counterveillance

'Mediating the Med. Surveillance and Counter-Surveillance at the Southern Borders of Europe' by Huub Dijstelbloem comments
Schiphol airport makes use of 'mystery guests'. These invited guests work for the airport, directly or indirectly, and their task is to test security measures. Schiphol also receives uninvited guests trying to test its level of security. One of the best known of these is SBS reporter Alberto Stegeman, who regularly tries to prove that security measures at Schiphol are inadequate; by successfully forging a KLM ID-card, for instance. Less well known perhaps is the case of the American artist Rozalinda Borcila. As part of her project Geography lessons, she aimed 'to intervene in apparently controlled spaces that are policed through technologies of visualization and information management' (Amoore 2009, 26). Unfortunately, she was deported after being caught making videos of Schiphol's airport security.
The undercover guests acting on behalf of Schiphol itself are mainly an internal business affair, but Stegeman's activities are part of the regular undercover media repertoire. Borcila's case, however, touches upon a different category of action. Hers is neither just a form of civil disobedience nor of artistic expression. Instead, her project relates to a type of political question, and to reflection on the public and private side of technologies and their role in the inclusion and exclusion of citizens and aliens in today's mobility circus. In contrast to the surveillance regime of the airport, she performs a certain kind of counter-surveillance.
As a category of all kinds of empirical examples, counter-surveillance concerns a broad spectrum of forms and meanings, varying from initiatives of so called 'inverse surveillance' or 'sousveillance' (Mann, Nolan and Wellman. 2003) and the development of apps to support migrants, to initiatives in radical geography concerned with mapping and counter-mapping. As a concept, counter-surveillance is related to both a culture of resistance and to a broader account of the role of protest and the control of state power in liberal democracies
An adequate definition to start with is Monahan's (2006, 516), which defined counter-surveillance as 'intentional, tactical uses, or disruptions of surveillance technologies to challenge institutional power asymmetries'. He explained that such activities can include 'disabling or destroying surveillance cameras, mapping paths of least surveillance and disseminating that information over the Internet, employing video cameras to monitor sanctioned surveillance systems and their personnel, or staging public plays to draw attention to the prevalence of surveillance in society' (Monohan, 515).
Monohan has investigated different kinds of interventions in the technical and the social faces of public surveillance. He has described initiatives of the Institute of Applied Autonomy (IAA), a collective of technicians, artists, and activists engaged in projects in 'productive disruption and collective empowerment' and of the group RTMark which advocates a more radical and direct approach - namely destroying cameras. In addition, he has analysed Steve Mann's Shooting Back project, which utilizes high-tech devices to take video footage of security personnel, and the Surveillance Camera Players (SCP), a New York based, ad hoc acting group. The analysis led him to the conclusion that 'current modes of activism tend to individualize surveillance problems and methods of resistance, leaving the institutions, policies, and cultural assumptions that support public surveillance relatively insulated from attack'.
Although Monahan's definition sharpens our sense of what counter-surveillance is about, his conclusion leaves room for some questions. To say that Borcila's project at Schiphol did not touch upon the 'institutions, policies, and cultural assumptions that support public surveillance' suggests that something important has not been taken into account. To clarify the kind of political space that was opened up by her project - and the kind of political realm that has been created by many other forms of counter-surveillance to which I will refer in this chapter - we also need to open up the concepts of both surveillance and counter-surveillance in order to better understand their meaning and their mutual interaction.
The concept of surveillance is usually applied to state activities and technologies that aim to register and control certain populations (e.g. Foucault). However, as Rosanvallon (2008) clarified, the concept of surveillance has historically referred to the initiatives of citizens to control state power as well. According to him, 'surveillance constitutes a hidden and protean aspect of modern politics', as does the inverse phenomenon, namely the surveillance of power by society (2008, 31-32). In this chapter, I will focus on this democratic dimension and situate 'counter-surveillance' under the umbrella of what Rosanvallon called 'counter-democracy', i.e. all forms of controlling governmental power. As such, 'counter-surveillance' is rooted in liberal democracies both historically and conceptually.
By opening up the notions of surveillance and counter-surveillance, more insight can be gained into what exactly is at stake in the confrontations between state initiatives to protect borders against unwelcome migrants and the actions of various groups to create more public awareness of today's border drama: to circulate information about it, to visualize it, to make it a public issue, to protest against it, to sabotage it, or to use it as an opportunity to mobilize public support for migrants and to take care of them. This chapter is structured in the following way. Section 2 will deal with three transformations of Europe's borders and the resulting surveillance regime. Section 3 will present some examples of counter-surveillance initiated by a number of European NGOs, activist groups and researchers which react to and reflect on these transformations and aim to 'mediate the Mediterranean', i.e. to visualize the Med as a place of contestation consisting not only of a humanitarian drama but also of a technologically mediated drama of conflicting representations. In addition, I will introduce an analytical framework based on Pierre Rosanvallon's account of counter-democracy in order to understand the conceptual, historical and political background of counter-surveillance. In section 4, I will use Rosanvallon's notion of 'powers of oversight' to evaluate current initiatives of counter-surveillance. Section 5 continues the search for the political dimension of public actions related to surveillance by elaborating on the writings of Hannah Arendt, specifically on the idea of a 'portable public realm' (Ring 1991) apparent in her work. Using Louise Amoore's (2009) notion of 'lines of sight', section 6 investigates the nature of the representations, such as images and maps, that both surveillance and counter-surveillance provide us with. The last section, section 7, presents the conclusions.

28 June 2013

Surveillance Harms

'Addressing the Harm of Total Surveillance: A Reply to Professor Neil Richards' by Danielle Citron and David Gray in (2013) 126 Harvard Law Review Forum 262 comments
In his insightful article [PDF], "The Dangers of Surveillance," 126 Harvard Law Review 1934 (2013), Neil Richards offers a framework for evaluating the implications of government surveillance programs that is centered on protecting "intellectual privacy." Although we share his interest in recognizing and protecting privacy as a condition of personal and intellectual development, we worry in this essay that, as an organizing principle for policy, "intellectual privacy" is too narrow and politically fraught. Drawing on other work, we therefore recommend that judges, legislators, and executives focus instead on limiting the potential of surveillance technologies to effect programs of broad and indiscriminate surveillance. ...
Although we live in a world of total surveillance, we need not accept its dangers — at least not without a fight. As Richards rightly warns, unconstrained surveillance can be profoundly harmful to intellectual privacy. It would be wrong, however, to conflate symptom and cure. What is most concerning for us is the rapid adoption of technologies that increasingly facilitate persistent, continuous, and indiscriminate monitoring of our daily lives. Although harms to intellectual privacy are certainly central to our understanding of the interests at stake, it is this specter of a surveillance state that we think ought to be the center of judicial, legislative, and administrative solutions, not the particular intellectual privacy interests of individuals.
The Richards article is noted here.

Citron and Gray state that
The ethos of our age is “the more data, the better.” In nearly every sector of our society, information technologies identify, track, analyze, and classify individuals by collecting and aggregating data. Law enforcement agencies, industry, employers, hospitals, transportation providers, Silicon Valley, and individuals are all engaged in the pervasive collection and analysis of data that ranges from the mundane to the deeply personal. Rather than being silos, these data gathering and surveillance systems are linked, shared, and integrated. Whether referred to as coveillance, sousveillance, bureaucratic surveillance, “surveillance-industrial complex,” “panvasive searches,” or business intelligence, total-information awareness is the objective. ...
The scope of surveillance capacities continues to grow. Fusion centers and projects like Virtual Alabama may already have access to broadband providers’ deep packet inspection (DPI) technologies, which store and examine consumers’ online activities and communications. This would provide government and private collaborators with a window into online activities, which could then be exploited using data-mining and statistical-analysis tools capable of revealing more about us and our lives than we are willing to share with even intimate family members. More unsettling still is the potential combination of surveillance technologies with neuroanalytics to reveal, predict, and manipulate instinctual behavioral patterns of which we are not even aware.
There can be no doubt that advanced surveillance technologies such as these raise serious privacy concerns. In his article, Professor Neil Richards offers a framework to “explain why and when surveillance is particularly dangerous and when it is not.” Richards contends that surveillance of intellectual activities is particularly harmful because it can undermine intellectual experimentation, which the First Amendment places at the heart of political freedom. Richards also raises concerns about governmental surveillance of benign activities because it gives undue power to governmental actors to unfairly classify, abuse, and manipulate those who are being watched, but it is clear that his driving concern is with intellectual privacy. We think that this focus is too narrow.
According to Richards, due to intellectual records’ relationship to First Amendment values, “surveillance of intellectual records — Internet search histories, email, web traffic, or telephone communications — is particularly harmful.” Richards argues that governmental surveillance seeking access to intellectual records should therefore be subjected to a high threshold of demonstrated need and suspicion before it is allowed by law. He argues also that individuals ought to be able to challenge in court “surveillance of intellectual activities.” Richards further proposes that “a reasonable fear of government surveillance that affects the subject’s intellectual activities (reading, thinking, and communicating) should be recognized as a harm sufficient to prove an injury in fact under standing doctrine.” ... Although Richards aptly captures the dangers to intellectual freedom posed by technologically enhanced surveillance, we fear his policy prescriptions are both too narrow and too broad because they focus on “intellectual activities” as a necessary trigger and metric for judicial scrutiny of surveillance technologies. Our concerns run parallel to arguments we have made elsewhere against the so-called “mosaic theory” of quantitative privacy advanced by the D.C. Circuit and four Justices of the Supreme Court in United States v. Jones. Our argument there supports our objection here: by focusing too much on what information is gathered rather than how it is gathered, efforts to protect reasonable expectations of privacy threatened by new and developing surveillance technologies will disserve the legitimate interests of both information aggregators and their subjects.
One reason we are troubled by Richards’s focus on “intellectual activities” as the primary trigger for regulating surveillance technology is that it dooms us to contests over which kinds of conduct, experiences, and spaces implicate intellectual engagement and which do not. Is someone’s participation in a message board devoted to video games sufficiently intellectual to warrant protection? What about a telephone company’s records showing that someone made twenty phone calls in ten minutes’ time to a particular number without anyone picking up? Would we consider the route someone took going to the library an intellectual activity? Is it the form of the activity or what is being accomplished that matters most?
Setting aside obvious practical concerns, the process of determining which things are intellectual necessarily raises the specter of oppression. Courts and legislators would be required to select among competing conceptions of the good life, marking some “intellectual” activities as worthy of protection, while denying that protection to other “non-intellectual” activities. Inevitable contests over the content and scope of “intellectual privacy” will be, by their nature, subject to the whims and emergencies of the hour. In the face of terrorist threats, decisionmakers will surely promote a narrow definition of “intellectual privacy,” one that is capable of licensing programs like Virtual Alabama and fusion centers. Historically, decisionmakers have limited civil liberties in times of crisis and reversed course in times of peace, but the post-9/11 period shows no sign of the pendulum’s swinging back. Given the nature of political and judicial decisionmaking in our state of perpetually heightened security, protection, even of “intellectual privacy,” is most likely to be denied to the very outsiders, fringe thinkers, and social experimenters whom Richards is most concerned with protecting.