
15 December 2020

Work Surveillance

Data subjects, digital surveillance, AI and the future of work by Phoebe V. Moore for the Panel for the Future of Science and Technology (STOA) and the Secretariat of the European Parliament is characterised as providing 

 an in-depth overview of the social, political and economic urgency in identifying what we call the ‘new surveillance workplace’. The report assesses the range of technologies that are being introduced to monitor, track and, ultimately, watch workers, and looks at the immense changes they imbue in several arenas. How are institutions responding to the widespread uptake of new tracking technologies in workplaces, from the office, to the contact centre, to the factory? What are the parameters to protect the privacy and other rights of workers, given the unprecedented and ever-pervasive functions of monitoring technologies? 

The report evidences how and where new technologies are being implemented; looks at the impact that surveillance workspaces are having on the employment relationship and on workers themselves at the psychosocial level; and outlines the social, legal and institutional frameworks within which this is occurring, across the EU and beyond, ultimately arguing that more worker representation is necessary to protect the data rights of workers. 

 Moore comments 

 Workplace surveillance is an age-old practice, but it has become easier and more common, as new technologies enable more varied, pervasive and widespread monitoring practices, and have increased employers’ ability to monitor what seems like every aspect of workers’ lives. New technological innovations have increased both the number of monitoring devices available to employers as well as the efficiency of these instruments to extract, process and store personal information. Digital transformation, work design experimentation and new technologies are, indeed, overwhelming methods with intensified potential to process personal data in the workplace. While much of the activity appears as an exciting and brave new world of possibility, workers’ personal experiences of being tracked and monitored must be taken into account. Now, issues are emerging having to do with ownership of data, power dynamics of work-related surveillance, usage of data, human resource practices and workplace pressures in ways that cut across all socio-economic classes. 

The first chapter of the present report ‘Surveillance and monitoring: The future of work in the digital era’, commissioned by the European Parliament’s Panel for the Future of Science and Technology (STOA), deals with surveillance studies, a field that originated in legal studies and criminology but is increasingly important in the sociology of work and digitalisation research. The first chapter outlines some of the technologies applied in workplaces. The second chapter looks at the employment relationship, examining how workers, managers and the pressures surrounding them transform when a third actor (the machine and/or computer) is introduced. This chapter also covers the ways that inter-collegial relations are affected, as well as issues around work/life integration.

The third chapter looks at data protection and privacy regulatory frameworks and other instruments as they have developed over time, leading up to today’s General Data Protection Regulation (GDPR). Various historical moments have shaped how data and privacy protection has evolved, and concepts surrounding this legal historical trajectory have emerged, with some ambivalence at points about which philosophical and ethical foundations are at stake. Some of the tensions in the legal concepts driving the debates on privacy and data protection for workers, and the paradoxical circumstances within which they are seated, are then dealt with in chapter four: where the possibility of deriving inferences from data can lead to discrimination and reputational damage; where the concept of worker ‘consent’ to data collection is problematic; and where the implications of data collection through wellness and wellbeing initiatives in the workplace are increasingly under the microscope.

The fifth chapter outlines a series of country case studies, where applied labour and data protection and privacy policy are revealed. Many countries are reviewing data, privacy and labour laws because of new requirements emerging with the GDPR, which is also extensively reviewed in the present report. Some legal cases have emerged whereby employers have been judged to breach data protection and privacy rules, such as Bărbulescu v Romania. The sixth chapter, called ‘Worker cameos’, provides a series of worker narratives based on field interviews about their experiences of monitoring and tracking. In particular, content moderators and what the author calls ‘AI trainers’ are the most heavily surveilled and under the most psychosocial strain. Taking all of these findings into account, the seventh chapter provides a series of the author’s suggestions for first principles and policy options, where worker representation and co-determination through social partnerships with unions, and more commitments to collective governance, are put forward.

Moore argues 

Workplace surveillance over time has occurred within a series of historical phases, where work design, labour markets, and industry trends have differed and business, social and labour processes have taken particular forms. Surveillance of workers can be both analogue and technological, and operates at a series of tangible and psychosocial levels. ... In the report on this nearly year-long project, we look at how insights in technological development have evolved within a sociological and business operations framework, and identify how technologies are being implemented to manage worker performance, productivity, wellness and other activities, in order to analyse and predict the social impact and future of work, within regulatory parameters. Workplace surveillance in the European Union is predominantly outlined, but as early, government-commissioned North American research into workplace surveillance was also very perceptive in foresight, some historical discussion of the United States’ early activity as well as some insights from Norway and Nigeria, are included. 

Workplace surveillance is not separate from the larger structures and systems of labour relations, management styles, workplace design, and legal and ethical social trends, and today it is explicitly part of the accelerating trends in digitalised surveillance in many spheres of everyday life. Therefore, we address all of these categories of analysis, alongside identifying where and how digitalised surveillance has occurred and is occurring in workplaces, or perhaps better said, workspaces, called as such because the concept of ‘place’ has to be interrogated and critiqued, precisely because work is carried out in increasingly virtual spaces globally and, with the onset of Covid-19 working conditions, increasingly in homes.

In 2017, a Motion for a European Parliament Resolution with recommendations to the Commission on Civil Law Rules on Robotics clearly stated that:

...assessments of economic shifts and the impact on employment as a result of robotics and machine learning need to be assessed; whereas, despite the undeniable advantages afforded by robotics, its implementation may entail a transformation of the labour market and a need to reflect on the future of education, employment, and social policies accordingly. (European Parliament 2017) 

This Recommendation predicted that the use of machine learning and robotics will not ‘automatically lead to job replacement’, but indicates that lower skilled jobs are going to be more vulnerable to automation. Furthermore, the Recommendation cautions about the likelihood of labour market transformations and changes to many spheres of life, including, as above, ‘education, employment and social policies’ (ibid.). In this light, the current report builds on some of these earlier recommendations to the European Parliament, because it is now more important than ever to address the lagging discussions on ethics, social responsibility, social justice and, importantly, the role of unions and worker representative groups in the continuous development and integration of technologies and digitalisation into workplaces. The report is written with a human rights focus, seeing data privacy and protection as a fundamental human right.

Automation, robotics and artificial intelligence (AI) are part and parcel of the discussion of monitoring and surveillance of work, where a binding feature of how these processes emerge is the collection, storage, processing and usage of large data sets gathered in more and more spheres of people’s everyday lives and, in particular, workplaces. A report prepared for the United Kingdom’s Information Commissioner’s Office (ICO) declared that ‘we live in a surveillance society. It is pointless to talk about surveillance in the future tense... everyday life is suffused with surveillance encounters, not merely from dawn to dusk but 24/7’ (Ball and Wood 2006). More than a decade later, this statement could not be more relevant. From cameras at self-operated grocery store check-outs in New York, to facial recognition sensors at a local gym in London, to recorded calls with banks’ call centre employees who may themselves be based in India or Bangladesh, surveillance is no longer an activity seemingly conducted only by law officers on the streets watching out for robbers wearing balaclavas. People are watched in almost every corner of society, and sometimes people are even asked to watch one another, in what Julie E. Cohen calls participatory surveillance (Cohen 2015). Gary T. Marx had earlier referred to forms of participatory surveillance as a kind of ‘new surveillance’ in 1988, just as computers were becoming integrated into everyday life and seemingly introducing a new type of soft surveillance (Marx 1988). In 1982, Craig Brod had already warned of the dangers of over-use of computers at work and talked about the hazards of technostress resulting from the uptake of new technologies in everyday lives (Brod 1982).

Cohen, an established figure in the research arena of surveillance, looked at the issues surrounding privacy and systems of surveillance. Cohen argues that privacy, as a concept informing practices, has a bad reputation, where it has been touted as an old-fashioned concept and a delay to progress. Cohen counters these systemic assumptions and says that privacy should be a form of protection for the liberal self (2013: 1905) and important for the democratic process. Indeed, trading privacy for supposed progress reduces the scope for self-making and informed, reflective citizenship and a range of other values that are foundational to consumer society.

So, effective privacy regulation must render both public and private systems of surveillance meaningfully transparent and accountable, where privacy is defined in the dynamic sense: ‘an interest in breathing room to engage in socially situated processes of boundary management’ (Cohen 2012: 149, cited in Cohen 2013: 1926-1927). Privacy incursions harm individuals, but not only individuals. Freedom from surveillance, Cohen argues, whether public or private, is foundational to the practice of informed and reflective citizenship. These ideas are important when looking at workers and their right to privacy. A reasonable expectation of privacy is likely when the actions of the employer suggest that privacy is a condition of work. Internationally there are variations in law and culture in terms of privacy, especially differences between the European Union and the USA. In Europe, privacy has tended to be seen as somewhat more fundamental, something that should not be forfeited, whilst in the US privacy can be viewed as a commodity (Smith and Tabak 2009).

There was a lot of discussion about business culture after Scientific Management, during the Human Relations and Systems Rationalism periods. Alder (2001) reviewed a range of organisational culture types, asking which ones are more or less amenable to electronic performance monitoring (EPM) integration. Right at the end of the latter period, in 1982, Deal and Kennedy outlined four cultural types, which they argue are oriented around risk-taking and frequency of feedback within the organisation: ‘1) tough-guy macho, 2) work hard/play hard, 3) bet your company, and 4) process’ (1982, cited in Alder 2001). Petrock, Alder notes, also outlines a typology of four organisational cultures: 1) clan culture, 2) adhocracy, 3) hierarchy, and 4) market cultures (1990), but these types are limiting because no associated ways to measure or identify them are offered. Alder believes these delineations are incomplete and not fit for purpose. Wallach, however, came up with the best typology, Alder states, noting the signifiers within three types: ‘bureaucratic, innovative and supportive’ (Wallach 1983, cited in Ibid.). A bureaucratic culture is identified with hierarchy, regulation and procedure and is the organisational culture type that is most responsive and accepting of technological tracking. Alder (2001) argues that workers will respond differently to EPM in different organisational cultures. Therefore, Alder indicates that a management body that wants to implement EPM must think about the culture of their organisation, to assess to what extent resistance to it will emerge and how to accommodate this. These days, however, the culture-based arguments are less and less relevant, as metrics and data appear to hold the promise of rendering qualitative differences irrelevant, and as data rights become increasingly mainstreamed across the consumer and worker spheres.

This report, overall, aims to highlight what kinds of technologies are being integrated to monitor, track and, therefore, surveil workers; to identify how technologies are being implemented; to understand how the introduction of new technologies into workplaces impacts the employment relationship; to throw light on the social, organisational, legal and institutional frameworks within which this is occurring; and to reveal the institutional, legal and union responses. Finally, the goal of this report is to provide a set of first principles and policy options to give EU member and associated states guidance on protecting the privacy and rights of worker data subjects.

Leading up to the General Data Protection Regulation (GDPR), the EU’s Data Protection Working Party (Art. 29 WP) stressed that ‘workers do not abandon their right to privacy and data protection every morning at the doors of the workplace’ (2002: 4). Some degree of gathering and processing of personal data is a normal and, in fact, a vital part of almost any employment relationship (Ball 2010). Some workers’ personal data is necessary to complete contracts, i.e. to pay workers and record absences, and much of it is both reasonable and justifiable for use by management. However, that is not to say that any and all forms of surveillance and data processing should be so considered.

Indeed, employers’ surveillance practices must often be reviewed in light of concerns for the privacy or simply for the human dignity of the worker (Jeffrey 2002; Levin 2009), and this report sets out to do just this. The present author has already situated this trend within contemporary global political economy pressures (Moore 2018c) and looked at the psychosocial violence and safety and health risks that workers face with the introduction of digitalised tracking and monitoring (Moore 2019, 2018b). Welfare state retrenchment and austerity policies, alongside these technological interventions, have coalesced into the ‘political economy of anxiety’ (Moore 2018a: 43) for workers, where self-quantification and wellness discourses and frames thrive, but structural economic change is not occurring fast enough with relevant protections and social partnerships with unions and other worker representative groups. Now, we set out the aims and intentions for the project which form each chapter.

The aim of the first chapter of the report is to review the concept of surveillance, where workplace electronic performance monitoring and privacy questions are increasingly important. The Taylorist employment model of mental vs manual work in a set hierarchy is increasingly a thing of the past, and while tools of measurement were used in Taylor’s workshops, the kinds of technology now available on the market have fed into a significantly different, new world of work. Parallel to this change, the pursuit of surveillance has entered more intimate and everyday spaces than before. The known categories of the ‘watched’ and the ‘watcher’ are transformed. ‘New surveillance workplaces’, or what we also refer to as ‘workspaces’, indeed feature these new characteristics. The first chapter therefore looks at a range of new technologies which are contributing to the recent trends in the uptake of electronic monitoring and tracking at work, backed with existing empirical evidence and primary and secondary literature.

In the second chapter, the report’s aim is to look at changes to a once presumed standard employment relationship, where managers and corporate and organisational hierarchies were explicit and clearly known. Now, management and operations processes are being digitalised, and workplaces are moving into a myriad of domains. As a result of these changes to the more standard type of employment relationship and work environment, uncertainty or other psychosocial discomfort can emerge: workers may feel their managers no longer trust them; workers may experience function creep, where data is used for purposes other than those for which it was first collected; or competition between workers may be intensified when performance data is viewable, such as on the walls of call centre workplaces or on shared dashboards in gamified wellness programmes. The second chapter outlines the observable and documented as well as probable changes to the employment relationship which new technologies imbue. The third chapter turns to the policy and regulatory frameworks and instruments surrounding privacy, data protection and technological tracking, starting with the Data Protection Directive.

The most important points in data protection and worker-related policy in the lead-up to the GDPR are covered. Interestingly, the International Labour Office’s Code of Practice on workers’ data, published as early as the 1990s, made interventions and recommendations similar to those now enforceable under the GDPR. This chapter outlines this process and picks up on some of the most important and insightful developments to provide a foundation for the first principles and policy options outlined in later chapters of the report. The aim of the fourth chapter is to identify some of the tensions in legal principles about which the present author has been concerned, where e.g. inviolability does not seem to cohere with the concept of power of command, or where inferences from data and the link to workers’ reputations must be problematised. This chapter also looks at the concept of ‘consent’, which tends to be de-prioritised in discussions of the employment relationship (where consent is normally discussed in relation to a consumer, in the context of the GDPR) due to its already existing unequal nature. We argue that there are nonetheless possibilities to rethink the definition of consent, and perhaps to look at a way to update the unidirectional conceptualisation of this type of relationship.

The fifth chapter then presents a series of country case studies contributed in part by legal scholars from across the EU, as well as Norway and Nigeria. Contributors have outlined which technologies are characteristic in specific countries; identified which legal mechanisms, including aspects of labour law, are being used to protect workers’ privacy and data; looked at the ways local cases are working to integrate the GDPR as per Art. 88; and begun to put the focus on the role of worker representatives who, we ultimately recommend, should be considered meaningful social partners in dialogue with employers and with co-determination rights (see policy options).

In the sixth chapter, we present a series of ‘Worker Cameos’ based on semi-structured interviews carried out with workers, to identify where EPM and tracking are occurring and to investigate workers’ experiences of this. Workers in many sectors and spheres, from dentists, to bankers, to content moderators, are being tracked. All workers interviewed feel that their work has intensified, expectations are higher, performance management is increasingly granular, and stress and anxiety are at an all-time high, as tracking and monitoring technologies become increasingly good at surveillance.

The report concludes with a set of first principles and policy options for European Parliament policymakers, prioritising the role of trade unions and other worker representative groups. These Principles and Options are designed to mitigate against the worst impacts of digitalised tracking, monitoring and surveillance in the world of work.

19 November 2020

Wearables

'Wearable Sensor Technology and Potential Uses Within Law Enforcement: Identifying High-Priority Needs to Improve Officer Safety, Health, and Wellness Using Wearable Sensor Technology' by Sean E. Goodison, Jeremy D. Barnum, Michael J. D. Vermeer, Dulani Woods, Siara I. Sitar, Shoshana R. Shelton and Brian A. Jackson (RAND, 2020) asks 

How do WSTs intersect with law enforcement interests, both for the individual officer and the agency? What are the specific challenges that WST presents for data privacy, ownership, and the public? What are the salient issues associated with WST, and what are specific ways to address them? Many wearable sensor technology (WST) devices on the market enable individuals and organizations to track and monitor personal health metrics in real time. These devices are worn by the user and contain sensors to capture various biomarkers. Although these technologies are not yet sufficiently developed for law enforcement purposes overall, WSTs continue to advance rapidly and offer the potential to equip law enforcement officers and agencies with data to improve officer safety, health, and wellness. 

The report reflects a workshop by RAND and the Police Executive Research Forum for the US National Institute of Justice on the current state of WST and how it might be applied by law enforcement organizations. Workshop participants discussed possible issues with acceptance of WST among members of law enforcement; new policies that will be necessary if and when WST is introduced in a law enforcement setting; and what data are gathered, how these data are collected, and how they are interpreted and used.

 RAND's key findings are that 

  • Current WSTs are not sufficiently developed for law enforcement purposes overall.
  • Commercial devices, although inexpensive and portable, lack the accuracy and precision needed to inform and support decisionmaking.
  • WSTs used in medical settings, although capable of excellent accuracy and precision with high-quality data, are cost-prohibitive for wide distribution and are not portable.
  • The short-term focus should be on preparing for a time when technology will be more applicable to law enforcement roles. The key is to obtain buy-in among law enforcement officers now — not for current technology, but for devices developed in the future and possible downstream effects on the field as WSTs are deployed to support officer safety and wellness, workforce retention, liability, and other issues.
  • The intersection between WST and law enforcement is currently defined by uncertainty. The applicability of WST to law enforcement will be proportionate to how well the technology can reliably inform decisions about an officer's daily activities.
  • Devices need to seamlessly integrate with the technology that law enforcement already carries, measures need to be valid and reliable, interpretation of the data needs to be clear, and policies need to be in place for managing and monitoring the data.
  • Now is the time for law enforcement to participate in the process of developing WSTs. Law enforcement specifications for WSTs might not match the commercial industry standard, so law enforcement needs to talk to — and be heard by — WST manufacturers.

The consequent recommendations are

  • Officers should be educated about the multiple uses and purposes of WST. 
  • Pilot testing should be conducted, and feedback should be collected on experiences. Outcome measures should be identified early in the process. 
  • Policies and processes for when and why data may be shared should be developed and implemented. 
  • A sequenced or phased approach should be developed for taking validated technology to the field for scaled evaluations. 
  • Individual baselines should be established to account for differences among individuals. 
  • The state of the research should be monitored, and law enforcement and public expectations should be managed. 
  • A set of best practices should be defined for consumer wearable devices. 
  • Data should be encrypted at each layer, and end-to-end encryption should be employed (a minimal sketch of this pattern follows the list).
  • Guidance and education about how to interpret data and metrics should be developed for WST users.
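
On the encryption recommendations, the essential end-to-end property is that intermediaries (vendor clouds, sync services, transport infrastructure) only ever handle ciphertext. The following is a minimal sketch of that pattern using the PyNaCl library; the keypairs, payload fields and key-distribution arrangements are illustrative assumptions, not details from the RAND report.

```python
# Minimal end-to-end encryption sketch for wearable readings (PyNaCl).
# Keys and the payload are invented for illustration; real deployments
# would need policies for key generation, distribution and rotation.
from nacl.public import PrivateKey, Box

device_key = PrivateKey.generate()   # long-term keypair on the wearable
agency_key = PrivateKey.generate()   # long-term keypair at the agency endpoint

# Encrypt on the device: only the agency's private key can open this,
# so every intermediate layer stores and forwards ciphertext only.
sender_box = Box(device_key, agency_key.public_key)
ciphertext = sender_box.encrypt(b'{"officer": "unit-12", "hr_bpm": 104}')

# Decrypt at the agency endpoint; the Box construction also authenticates
# that the message came from the holder of the device key.
receiver_box = Box(agency_key, device_key.public_key)
print(receiver_box.decrypt(ciphertext))
```

Layered encryption (TLS on the link, encrypted storage at rest) can then wrap this inner layer without any intermediary needing access to the plaintext readings.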

05 January 2019

Reidentification

'Feasibility of Reidentifying Individuals in Large National Physical Activity Data Sets From Which Protected Health Information Has Been Removed With Use of Machine Learning' by Liangyuan Na, Cong Yang, Chi-Cheng Lo, Fangyuan Zhao, Yoshimi Fukuoka and Anil Aswani in (2018) 1(8) JAMA Network Open e186040 comments
Policymakers have raised the possibility of identifying individuals or their actions based on activity data, whereas device manufacturers and exercise-focused social networks maintain that sharing deidentified data poses no privacy risks. Wearable device users are concerned with privacy issues, and ethical consequences have been discussed. There are also potential legal requirements under the Health Insurance Portability and Accountability Act (HIPAA) concerning the privacy of activity data. One key unresolved question is whether it is possible to reidentify activity data. A better understanding of the feasibility of such reidentification will provide guidance to researchers, health care providers (ie, hospitals and physicians), and policymakers on creating practical privacy regulations for activity data.
Reidentification of data is not just theoretical but has been demonstrated in several contexts. For instance, demographics in an anonymized data set can function as a quasi-identifier that is capable of being used to reidentify individuals. Reidentification is also possible using online search data, movie rating data, social network data, and genetic data. However, a key feature in these examples is a type of data sparsity, specifically, a large number of characteristics for each individual, which leads to a diversity of combinations in such a way that any particular combination of the data is identifying. For example, individuals’ movie ratings are highly revealing because of the many permutations of likes and dislikes. As another example, the particular genetic sequence combinations (and especially single-nucleotide polymorphisms) of a single individual are unique and capable of identifying that individual. 
In contrast, physical activity data do not feature the type of data sparsity found in the above examples because health data from a single individual often exhibit high variability. For example, for heart rate, variability is a constant and expected feature in healthy and unhealthy individuals. However, this variability does not protect against reidentification. A previous study found that high temporal resolution data from wearable devices transform this variability into repeated patterns that can be used for reidentification. In response, commercial organizations have argued that aggregated sets of wearable device data (without the high resolution) cannot be reidentified. It was recently reported that location information from activity trackers could be used to identify the location of military sites. Although this is not strictly an example of reidentifying specific individuals, it is nonetheless an example of the potential loss of privacy attributable to sharing of physical activity data. As a result, many location data are no longer being shared by commercial organizations; however, to our knowledge, reidentification excluding location data has not been studied or demonstrated. 
The primary aim of this study was to examine the feasibility of reidentifying activity data (collected from wearable devices) that have been partially aggregated. In this article, we specifically considered aggregations of an individual's activity data into walking intensity at the resolution of 20-minute intervals. This intensity represents a substantial level of aggregation compared with the raw digital accelerometer data that were used for reidentification in a previous study. We further studied other different levels of aggregation (from 15-minute intervals to 24-hour intervals) in the same manner. 
The scenario that we envisioned is summarized in Figure 1, and we gave one specific scenario to better describe the threat model considered in this article. This scenario involves an accountable care organization (ACO), such as the Kaiser Permanente network, that has stored their patients’ demographic data, complete health records, and physical activity data, which were collected as part of a weight loss intervention conducted by the ACO. This intervention involved recording physical activity data using a smartphone, activity tracker, or smartwatch. This scenario also involved an employer who has access to the names, demographic information, and physical activity data of their employees. The employer has access to physical activity data because they were collected by a smartphone, activity tracker, or smartwatch during the employees’ participation in a wellness program in exchange for a discount on health insurance premiums. There is a potential danger to privacy when the ACO shares deidentified data with the employer if the employer is able to reidentify the data using demographics and physical activity data. We evaluated the feasibility of this scenario by attempting to match a second data set of physical activity data and demographic information to a first data set of record numbers, physical activity data, and demographic information. From the standpoint of machine learning, matching record numbers is algorithmically and mathematically equivalent to matching names or other identifying information.
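
The linkage step at the heart of this threat model is simple to illustrate. The toy sketch below uses entirely synthetic numbers: activity aggregated into 20-minute intensity bins, a second noisy and shuffled release, and a nearest-neighbour match back to the first release. The study itself trained machine learning classifiers on real survey data, so this shows the shape of the attack rather than the authors' method.

```python
# Toy reidentification-by-matching sketch; all data are synthetic.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
n_people, bins = 500, 72  # 72 twenty-minute bins cover one day

# Release A: one aggregated walking-intensity profile per person.
release_a = rng.gamma(shape=2.0, scale=3.0, size=(n_people, bins))

# Release B: the same people re-measured with day-to-day noise, rows shuffled.
perm = rng.permutation(n_people)
release_b = release_a[perm] + rng.normal(0.0, 1.0, size=(n_people, bins))

# "Attack": match each B record to its closest A record.
guess = cdist(release_b, release_a).argmin(axis=1)
print("reidentification rate:", (guess == perm).mean())
```

Even this naive matcher succeeds on such toy data because a day of binned intensities forms a high-dimensional, nearly unique fingerprint, which is the intuition behind the study's findings.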

07 July 2016

Apps, Privacy and Regulatory Arbitrage

'Regulatory Disruption and Arbitrage in Healthcare Data Protection' by Nicolas Terry in (2016) 17 Yale Journal of Health Policy, Law, and Ethics argues 
Regulatory turbulence, disruption and arbitrage presuppose the juxtaposition of at least two regulatory domains. In the simplest case one domain would be highly regulated; the other unregulated. Turbulence and disruption exist on a continuum. Regulatory turbulence may be only transient or, in the scheme of things, relatively benign. Regulatory disruption has more permanent and serious implications. Regulatory arbitrage occurs when a business purposefully exploits disruption, making business choices on the basis of the differential between the two regulatory domains.
Policymakers’ persistent, systemic failure to safeguard healthcare data outside the HIPAA domain is now exemplified by the minimal, sub-HIPAA data protection afforded healthcare data either held by data brokers or created by mobile apps and wearables outside of the conventional health care space. The former, healthcare data held by data brokers, is an example of regulatory arbitrage. The latter, mobile health, is presenting with regulatory turbulence and disruption. This article explains how the structure of U.S. healthcare data protection (specifically its sectoral and downstream properties) has led to a chronically uneven policy environment for different types of healthcare data. It examines claims for healthcare data protection exceptionalism and competing demands such as data liquidity. In conclusion, the article takes the position that healthcare data exceptionalism remains a valid imperative and that even current concerns about data liquidity can be accommodated in an exceptional protective model. However, re-calibrating our protection of healthcare data residing outside of the traditional healthcare domain is challenging, currently even politically impossible. Notwithstanding, a hybrid model is envisioned, with the downstream HIPAA model remaining the dominant force within the healthcare domain, supplemented by targeted upstream and point-of-use protections applying to healthcare data in disrupted spaces.

08 May 2016

US BodyCams

'Justice Visualized: Courts and the Body Camera Revolution' by Mary D. Fan in (2016-17) 50 UC Davis Law Review (forthcoming) comments
 What really happened? For centuries, courts have been magisterially blind, cloistered far away from the contested events that they adjudicate, relying primarily on testimony to get the story – or competing stories. Whether oral or written, this testimony is profoundly human, with all the passions, partisanship and imperfections of human perception. Now a revolution is coming. Across the nation, police departments are deploying body cameras. Much of the current focus is on how body cameras will impact policing and public opinion. Yet there is another important audience for body camera footage – the courts that forge constitutional criminal procedure, the primary conduct rules for police. This article explores what the coming power to replay a wider array of police enforcement actions than ever before means for judicial review and criminal procedure law. The body camera revolution means an evidentiary revolution for courts, transforming the traditional reliance on reports and testimony and filling in gaps in a domain where defendants are often silent. 
The article envisions a future where much of the main staple events of criminal procedure law will be recorded. Analyzing body camera policies from departments across the nation reveals that this future is unfolding now. The article proposes rules of judicial review to cultivate regular use of the audiovisual record in criminal procedure cases and discourage gaps and omissions due to selective recording. The article also offers rules of restraint against the seductive power of video to seem to depict the unmediated truth. Camera perspective can subtly shape judgments. Personal worldviews impact image interpretation. And there is often a difference between the legally relevant truth and the depiction captured on video. Care must be taken therefore to apply the proper perceptual yardsticks and reserve interpretive questions for the appropriate fact-finders.

30 December 2014

BodyCams and other visibility

Three items on body cams and a defence by Microsoft apologists ...

'Moral Panics and Body Cameras' by Howard M. Wasserman in (2015) Washington University Law Review (Forthcoming) states that
This Commentary uses the lens of "moral panics" to evaluate public support for equipping law enforcement with body cameras as a response and solution to events in Ferguson, Missouri in August 2014. Body cameras are a generally good policy idea. But the rhetoric surrounding them erroneously treats them as the single guaranteed solution to the problem of excessive force and police-citizen conflicts, particularly by ignoring the limitations of video evidence and the difficult questions of implementing the body camera program. In overstating the case, the rhetoric of body cameras becomes indistinguishable from rhetoric surrounding responses to past moral panics.
'Visibility is a Trap: Body Cameras and the Panopticon of Police Power' by Eric Anthamatten claims that
One of the responses to the recent grand jury non-indictment in the death of Michael Brown was a call to equip police officers with cameras, the idea being that somehow this “third eye” will allow us to “see” the truth in a more objective way. If only we had a camera, we would know better what happened between Officer Darren Wilson and Michael Brown on that street. The human eye is not reliable, so we need a machine eye that is by its very nature disinterested and objective. The discussion of justice becomes not about systems and institutions of power, but about vision: whether or not it is legal to film the police, whether or not it is a violation of our rights to have the police film us. If we can just police the police, watch the watchers, perhaps the asymmetry of power would be balanced or negated, and justice would somehow obtain. ...
For Foucault, the Panopticon became a symbol for “disciplinary society,” one that “called for multiple separations, individualizing distributions, an organization in depth of surveillance and control, an intensification and a ramification of power.” Power did not operate (only) by repression and overt force, but through these more subtle (and now, not so subtle) fragmentations that tear apart and recreate subjectivity and personhood, shape this “collection of separated individualities,” atomize and vaporize, a power that makes those “elementary particles” more manageable and docile. Disciplined bodies become “the object of information, never a subject of communication.”
“The Panopticon is a privileged place for the experiments on men,” “a kind of laboratory of power.”
On December 1st, amidst the varying levels of response to the non-indictment of Officer Wilson, President Obama requested $236 million to invest in body cameras and police training in order to restore trust in policing (nevermind that “trust” involves not having to watch someone at every moment). Two days later, a Staten Island grand jury decided not to indict Officer Daniel Pantaleo in the death of Eric Garner, an event that was caught on camera. Many, both liberal and conservative, clearly “saw” an injustice and an abuse of power. Others saw Garner resist, which, in their minds, justified the response by Pantaleo. Immediately, the “solution” of increasing cameras became problematic, if not farcical — even the visual evidence was not enough to indict, belying an underlying systemic problem that shapes the way we “see.”
But it is not simply a question of interpretation and how one “judges” the events, something that inevitably occurs in and through the double-interpretation of perception via any medium (text, photograph, video). Such a solution is a fetishization of sight that evades the underlying problem, a problem that not only has to do with race and class, but also the very structures, technologies and deployments of power in modern society ...
While Foucault provides a compelling analysis of the relationship between surveillance, discipline, and the deployment of power, what he’s articulating is something that is experienced daily by people of color in the United States, namely the experience of constantly being watched while moving through public space, of being always marked by skin color, manner of dress, or physical comportment, what W.E.B. Du Bois calls a “double-consciousness.” It is the experience of not only being a “suspicious” body, but of being disciplined, controlled, and already indicted in and through those surveilling eyes. It is the experience of being incarcerated, unfree.
“The Panopticon ... must be understood as a generalizable model of functioning; a way of defining power relations in terms of the everyday life of men.” Yes, we all live in a Panopticon. But it is not only the Panopticon of Bentham or Orwell, a central tower from which the gaze operates. Rather, it is the Panopticon of Kafka, one that is everywhere precisely because there is no centralization, where we, the surveilled, are also the surveillers, we the watched are also the watchers. “Consequently, it does not matter who exercises power. Any individual, taken almost at random, can operate the machine: in the absence of the director, his family, his friends, his visitors, even his servants.”
Such surveillance has become normalized and distributed, into our own pockets, onto our own bodies. It is not a great leap to imagine the police outfitted with, alongside their pepper spray and pistols, glasses that record everything, or perhaps even cameras embedded into their own eyes. Is this the image of justice and freedom? Will this protect the citizenry and help to reduce racism, classism, and abuses of power?
Perhaps surveillance will help both police officers and citizens feel more secure because they feel they will be accountable to some disinterested third party or to the “court” of public judgment. There is some recent evidence that use of force declines when body cameras are present. But, as Foucault emphasizes, surveillance is yet another refinement of power and control, a technology, however well-intentioned, that continues to atomize our bodies in time and space as a way of examining, fragmenting, and controlling those bodies. There is no justice “behind” the way we realize it through our technologies and systems. These cameras, then, do not become the tool of justice, but a catalyst for surveillance, discipline, and punishment. The camera replaces the gun — the violence and control over a body is no less totalizing.
“Broken windows” leads to broken windows. The “riot” is, at some level, an expression of exclusion from property and meaningful participation and recognition in the life of society. Many see it as a breaking in, but it is in fact a breaking out of the “dungeon” of surveillance and control perpetuated by modern biopower. This is something that bodies that are not under siege do not and perhaps cannot understand. From the safety of their own “Panopticon,” behind the glass of the television, in the comfort of their living room chair, they watch these “animals” and only see “thugs,” “hoodlums,” “criminals,” a “prison riot,” not to mention other choice labels by which they “see” these bodies.
This is precisely the point: poor communities where most of the bodies are brown experience “public” and “free” space as surveilled space, controlled space, a space where their bodies are not their own but perpetually disciplined, fragmented, and examined by the various modes of power. Are more eyes the answer?
Visibility is a trap
'Are You Recording This?: Enforcement of Police Videotaping' by Martina Kitzmueller in (2014) 47(1) Connecticut Law Review comments
Increasing numbers of police departments equip officers with dashboard or body cameras. Advances in technology have made it easy for police to create and preserve videos of their citizen encounters. Videos can be important pieces of evidence; they may also serve to document police misconduct or protect officers from false allegations. Yet too often, videos are lost, destroyed, or never made, often depriving criminal defendants of the only objective evidence in a case. When this happens, there is not always a consequence to the prosecution. This Essay explores this problem of enforcement by examining how different states are compelling law enforcement to make and preserve videos through a combination of legislation and judicial intervention.
'Do-Not-Track and the Economics of Third-Party Advertising' (Boston University School of Management Research Paper No. 2505643) by Ceren Budak, Sharad Goel, Justin M. Rao and Georgios Zervas argues that
 Retailers regularly target users with online ads based on their web browsing activity, benefiting both the retailers, who can better reach potential customers, and content providers, who can increase ad revenue by displaying more effective ads. The effectiveness of such ads relies on third-party brokers that maintain detailed user information, prompting legislation such as do-not-track that would limit or ban the practice. We gauge the economic costs of such privacy policies by analyzing the anonymized web browsing histories of 14 million individuals. We find that only 3% of retail sessions are currently initiated by ads capable of incorporating third-party information, a number that holds across market segments, for online-only retailers, and under permissive click-attribution assumptions. Third-party capable advertising is shown by 12% of content providers, accounting for 32% of their page views; this reliance is concentrated in online publishing (e.g., news outlets) where the rate is 91%. We estimate that most of the top 10,000 content providers could generate comparable revenue by switching to a “freemium” model, in which loyal site visitors are charged $2 (or less) per month. We conclude that do-not-track legislation would impact, but not fundamentally fracture, the Internet economy.

03 December 2014

Participatory Sensing

'Participatory Sensing: Enabling interactive local governance through citizen engagement' [PDF] by Slaven Marusic, Jayavardhana Gubbi, Helen Sullivan, Yee Wei Law and M. Palaniswami of the Department of Electrical and Electronic Engineering at the University of Melbourne argues that
Local government (such as the City of Melbourne) is accountable and responsible for establishment, execution and oversight of strategic objectives and resource management in the metropolis. Faced with a rising population, Council has in place a number of strategic plans to ensure it is able to deliver services that maintain (and ideally improve) the quality of life for its citizens (including residents, workers and visitors). This work explores participatory sensing (PS) and issues associated with governance in the light of new information gathering capabilities that directly engage citizens in collecting data and providing contextual insight that has the potential to greatly enhance Council operations in managing these environments. 
Their paper examines:
  • Key hurdles affecting the viability and uptake of PS from different stakeholder perspectives 
  • The capacity of PS as a new and mutually beneficial communication link between citizens and government; the respective value propositions in participating, whilst simultaneously increasing engagement and enhancing City operations through co-production with citizens
  • Technological elements of PS and associated privacy impacts through the application lens of noise monitoring
  • Social impacts of emerging pervasive technologies, particularly the encroachment upon privacy, associated risks and implications, not only for the individual but also the impact in shared environments
  • Responsibilities and avenues for mitigation assigned to respective stakeholders; including user awareness factors, policy frameworks and design level strategies
  • The role of reputation and trust management between stakeholders in fostering productive links, along with the capacity for citizen empowerment
  • The balance of perceived competing objectives of privacy and transparency, ethical strategies to address this challenge
  • User perceptions of related issues taken from studies of internet and social media usage through computing and mobile platforms
  • A development platform to measure user awareness of privacy risks, behavioural responses to a spectrum of engagement options effectively returning to the user a level of control over their participation 
  • Essential requirements for responsible implementation of PS platforms, considering ethical issues, responsibilities, privacy, transparency and accessibility 
The key findings are -
Participatory sensing
  • The active role of the user is critical for the success of PS, requiring effective engagement, but also mitigation of disincentives, such as privacy concerns. As privacy risks increase in the context of multiplied information sources, despite available privacy preservation strategies, user awareness and control remain key elements.
  • Establishment and management of mutual trust is key to PS functioning as an effective medium for communication between stakeholders and for ensuring accountability.
  • Citizen empowerment is only achieved with the provision of information to assist individual decision-making, as well as the opportunity for responsibility and control over level of participation.
  • Incentivisation schemes need to recognise the value of the data/service being supplied by the user, the accessibility of the organiser provided service and ongoing value propositions ...
Privacy vs transparency
  • Privacy is one’s control over access and flow of their information. Legal protections are limited to specific circumstances, so ethical means for supplementary privacy protection offer transparency with respect to data accuracy and embed privacy in the design. System transparency and verifiable privacy measures can build trust between stakeholders.
  • Social implications warrant mechanisms for managing data history. Additionally, informed user consent needs to be the goal and supported by effective communication of risks. Accordingly, users will utilise various means for protecting privacy, according to their level of awareness and evaluation of risks.
Policy frameworks
  • Systems for protection of personal information are essential for maintaining/building trust between organisations and citizens. This includes discovery of breaches and recourse for compensation. Existing policies cover data collection and handling; citizen engagement strategies; and feedback management. They reflect privacy concerns; principles of open and responsive government; and value in citizen contributions to governance.
  • Limitations of privacy legislation demand supplementary principles/guidelines for system implementation, including industry self-regulation, privacy by design and privacy impact assessment (PIA). Existing and supplementary measures thus need to be utilised and adapted for effective PS.
Pilot study
  • The pilot study is based on a noise measurement app and a central server for data aggregation and display. The app provides a spectrum of privacy level options, to be selected by the user, that reflect the type and amount of data to be collected/shared (see the sketch after this list).
  • The privacy level selection interface serves to inform the user of data handling (and implies associated risk), while a refined interface can more explicitly convey this. This capability demonstrates a means for increasing user control over their level of participation.
  • A larger study can expand this capability; provide detailed assessment of public participation capacity; conduct a PIA; reach broader demographics; and further evaluate the issues raised throughout this paper.
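
As a rough illustration of the design pattern the pilot describes, the sketch below maps user-selected privacy levels to the fields a reading may leave the device with; the level names and fields are invented rather than taken from the actual app.

```python
# Hypothetical privacy-level filter for a noise-sensing app.
from dataclasses import dataclass

@dataclass
class Reading:
    noise_db: float
    lat: float
    lon: float
    timestamp: str
    user_id: str

# Each level whitelists the fields the participant agrees to share.
PRIVACY_LEVELS = {
    "anonymous":  ["noise_db"],
    "coarse":     ["noise_db", "timestamp"],
    "located":    ["noise_db", "timestamp", "lat", "lon"],
    "identified": ["noise_db", "timestamp", "lat", "lon", "user_id"],
}

def to_payload(reading: Reading, level: str) -> dict:
    """Keep only the fields permitted by the chosen privacy level."""
    return {f: getattr(reading, f) for f in PRIVACY_LEVELS[level]}

r = Reading(62.4, -37.8136, 144.9631, "2014-12-03T09:20:00", "participant-17")
print(to_payload(r, "coarse"))  # {'noise_db': 62.4, 'timestamp': '...'}
```

The point of such a design is that the selector doubles as disclosure: choosing a level shows the user exactly which data leave the device.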
Recommendations
  • Prior to embarking on PS implementations, organisations thoroughly analyse all stakeholder concerns and ensure necessary steps are taken to address these, as outlined here.
  • The City of Melbourne look for opportunities to test participatory sensing as a means of addressing specific community concerns in relation to noise nuisance.
  • Policy makers review existing policy frameworks to ensure that they offer the appropriate combination of incentives and safeguards to facilitate greater citizen involvement in addressing issues of community concern (co-production).
  • Policy makers review existing organisational structures and professional cultures to identify any additional barriers to effective citizen engagement.
In discussing privacy the authors comment
In the literature associated with PS, social media platforms and now the IoT, privacy concerns are listed as a significant issue. However, insufficient attention is given to the nature of these concerns, such as: the implications and risk factors of inadequate privacy protection measures; and the impact on technology utilisation of users’ actual understanding of existing risk factors and any safeguards that may be available. The solutions that are provided are often limited in scope. Indeed, there is a risk that designers will focus on development of new system capabilities, neglecting the necessary ethical dilemmas by framing these challenges as a matter of data utilisation and thus a task for someone else (Shilton, Participatory Sensing: Building Empowering Surveillance 2010). A survey of PS applications and associated challenges is provided in (Christin, Reinhardt, et al. 2011). Importantly, they acknowledge the lack of flexibility needed to reflect diverse viewpoints and levels of awareness of privacy risks, implications and available mitigation strategies.
The first privacy requirement is the provision of secure communication links between the user and platform (hosted by the service provider). Conventional data encryption methods available on mobile computing platforms (such as Secure Socket Layer (SSL)/Transport Layer Security (TLS)) intend to ensure that only the intended receiver of the data transmissions is able to access the contents (De Cristofaro and Soriente 2013).
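
For illustration, a TLS-protected upload from a sensing client might look like the following, using Python's standard library with certificate verification enabled; the hostname and endpoint are hypothetical.

```python
# Minimal sketch: submit a sensor reading over TLS so the contents are
# only readable by the intended server. Hostname/endpoint are invented.
import http.client
import json
import ssl

context = ssl.create_default_context()  # verifies the server certificate chain
conn = http.client.HTTPSConnection("sensing.example.org", context=context)

reading = {"sensor": "noise_db", "value": 62.4, "ts": "2014-12-03T09:20:00"}
conn.request(
    "POST", "/readings",
    body=json.dumps(reading),
    headers={"Content-Type": "application/json"},
)
response = conn.getresponse()
print(response.status, response.reason)
conn.close()
```

Note that TLS only protects the data in transit; it says nothing about what the service provider does with a reading once received, which is what the further measures discussed below address.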
One of the obvious risks associated with PS applications is the same as that of any data sharing application, where the information being shared may actually reveal more about the user than is being intended, or agreed upon. It is now widely accepted that users of social media platforms need to take care in the way personal information is shared or publicly displayed, particularly when utilising multiple platforms. It has been demonstrated that seemingly innocuous postings can reveal information about location, behaviour, routine, social networks and identity. This may be described as information leakage, where it crosses over to another domain or platform to reveal either something more detailed or completely new when combined with other data. Each of the sensing modalities available on smartphones present the risk of revealing private information, from: daily routine (based on time stamped location); identity (from photos, gait analysis, voice recognition); or associations (from photos and voice recognition).
Importantly, the nature of participatory sensing has the capacity to reveal information, not only about the user, but also about others in their vicinity. Therefore, in addition to personal risk of exposure, awareness also needs to be established of the secondary exposure introduced into a given environment. Providing adequate safeguards is thus a multidimensional problem (Christin, Reinhardt, et al. 2011). When the associated services are primarily location focussed, such information has real-world implications. Location can be established, with varying degrees of accuracy, from GPS signals, cell tower locations, as well as from Wi-Fi and Bluetooth links to associated infrastructure. For example, some traffic authorities have deployed infrastructure to detect Bluetooth devices of vehicle users in order to map vehicle paths and travel times. Whilst in such instances, the information is being utilised to improve traffic flow and road infrastructure services, it demonstrates the vulnerability of individuals in allowing unsecured access to their communications and the secondary information that can be extracted from doing so. These concepts have also been explored in the context of Intelligent Transportation Systems that utilise various means of vehicle identification from license plate recognition, electronic tags for tolling systems or GPS devices. The roles of different interested stakeholders are noted along with the potential for establishing personally identifiable location information and how existing US privacy law impacts such operations (Garry, Douma and Simon 2012). Similar challenges have been faced by location based services on mobile phone or computing platforms for some time (Anderson, Henderson and Kotz 2007) (He, Wu and Khosla 2004) (Shahabi and Khoshgozaran 2007).
Privacy preservation
There exists a suite of proposed solutions for preserving privacy in participatory sensing whilst still permitting a desired level of engagement in the program, with some taken directly from networked computing strategies.
The first option is to provide some degree of anonymity for the user. It must be determined then, from whom anonymity is required, or rather, for the meaningful operation of the PS platform, for whom is identification required/permitted. It is widely acknowledged that users demonstrate different degrees of willingness in sharing data, depending on the relationship with the other party (or parties) involved.
Degrees of identity revelation may be classed as follows:
• Completely anonymous
• System organisers/network operators
o Requires secure end-to-end communications regardless of the number of hops or network types utilised in the transmissions (e.g. Tor).
• Selected peers/participants
o A user/organiser-defined subgroup, established on the basis of certain criteria, such as predetermined trust or an existing community group.
• Other participants on the system
o Identifiable to other users with access rights to the system.
• Anyone able to eavesdrop on given communications links somewhere along the network
o Notionally hidden, but unsecured.
• Open
o Unrestricted publishing of identity linked with data contributions.
Providing anonymity is not without its own implications. If not carefully designed, a system permitting anonymous contributions can be easily compromised if there is no means of verifying the quality and validity of the data. In such cases, it may leave organisers with no recourse to identify or manage misbehaving or malicious users. In this respect, the same tools that protect the privacy of the innocent also hide the identity of the malicious or criminal.
The common argument posed in shifting the balance away from privacy towards transparency is that some privacy must be sacrificed in order to ensure security. If this is indeed true and unavoidable, then certain questions immediately follow: to what degree must privacy be sacrificed; to whom is privacy to be sacrificed; and what level of trust can be assigned to this authority? Indeed, is it even possible to apply a threshold in trying to answer these highly debatable questions? Consideration must be given to the risk and implications associated with any subsequent abuses of this trust. So whilst a goal may be set for balancing privacy and security according to some established value system, the viability of such thresholds exists only in so much as aggrieved parties have recourse to compensation for any breaches. Where this is implausible to guarantee, a more cautious approach would be to first question whether the emerging capability is actually increasing the vulnerability of users without adequate protection and for insufficient return/reward for their participation.
From a practical perspective, a number of measures can be implemented at the design level that provide users with varying levels of protection. Some of these include:
• Masking identity (utilising independent verification methods; allowing anonymous data contribution; simulating high participant density)
• Masking location (data perturbation or reduced granularity)
• Limited data release (sharing of aggregated or filtered rather than raw data)
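To make the location-masking option concrete, below is a minimal Python sketch contrasting reduced granularity, where the error margin is known and can be declared, with data perturbation, where the offset is hidden from the system; the coordinates and noise scale are illustrative assumptions.

import random

def reduce_granularity(lat, lon, decimals=2):
    """Coarsen a position; the resulting error margin is known and declarable."""
    return round(lat, decimals), round(lon, decimals)

def perturb(lat, lon, scale=0.01):
    """Offset a position by random noise; the true error is not declared."""
    return (lat + random.uniform(-scale, scale),
            lon + random.uniform(-scale, scale))

position = (-37.81361, 144.96312)       # illustrative GPS sample
print(reduce_granularity(*position))    # (-37.81, 144.96)
print(perturb(*position))               # nearby, but the offset is hidden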
Essentially, such methods rely on the existence or appearance of a large number of users within the system and within the specified location (k-anonymity), increasing the difficulty with which a single user (or their data) may be identified. In this way, the system can compensate for malicious users (e.g. those supplying misleading data); unreliable users (e.g. those supplying incorrect, erroneous or low-quality data); and collaborating users (e.g. those colluding to uncover the identity of, or other restricted information about, other participants, to bias overall measurements, or to manipulate user reputations).
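A minimal sketch of the k-anonymity idea, assuming location reports are first coarsened into cells and a cell is released only when at least k users occupy it; the data and the choice of k are illustrative.

from collections import Counter

def k_anonymous_cells(reports, k=3, decimals=3):
    """Return the coarsened location cells safe to release (>= k occupants)."""
    cells = Counter((round(lat, decimals), round(lon, decimals))
                    for lat, lon in reports)
    return {cell for cell, count in cells.items() if count >= k}

reports = [(-37.8136, 144.9631), (-37.8137, 144.9632),
           (-37.8139, 144.9634), (-37.9001, 145.0212)]  # last user isolated
print(k_anonymous_cells(reports))  # only the densely occupied cell survives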
Self-surveillance describes the personal information that is captured, stored and potentially transmitted through the complicity of individuals. To combat the increased vulnerability to privacy breaches arising from self-surveillance, and indeed the expected decrease in privacy, the concepts of Personal Data Vaults (PDV) and Personal Data Guardians (PDG) have been proposed as a means of giving users greater control over how their data is shared, whilst still making use of cloud infrastructure (Kang, et al. 2012). Self-surveillance applications relate more to the measurement of biological parameters, activity and mobility. As such, whilst obviously of interest to the individual, there is also substantial interest from third parties in being able to analyse and aggregate such data to deliver population-wide insights. There is a need, then, to balance the usefulness of such applications to the individual with the ability to share some aspects of this data (agreeable to the user) with outside entities. The PDG serves as a trusted entity with whom the user enters into a legal, fiduciary and confidential relationship (in a similar fashion to that with lawyers or doctors). With PDGs acting as trusted intermediaries, the flow of data is slowed, thereby preserving some degree of privacy.
Despite the advantages that PDVs seemingly provide, the privacy guarantee of a system that proposes secure storage in the cloud and transmission utilising many layers of communications infrastructure is difficult to ensure, particularly in light of many well-publicised hacking and spying episodes. Two issues remain unresolved: the first is public perception of the degree of security available, whilst the second concerns user behaviour in light of the available security and privacy options and concern for the associated risks. An investigation of user perceptions of cloud computing security and data vulnerability (Ion, et al. 2011) observed that a large degree of scepticism still exists, impacting what users will store. There is also general acceptance that existing means of presenting Terms of Service (TOS) are largely ineffective, in that they are difficult to understand or generic and thus often ignored. As many as 51% of users do not read online privacy policies, most perceiving them as too long or complex, and of those who do read them, only 37% are able to gain the information necessary to decide whether or not to use the site (OAIC, Community attitudes to privacy survey 2013). This results in mismatched expectations between users' understanding of their rights and the actual requirements of the service provider. Further complexity is introduced by variation across demographics and cultures in how privacy risk is perceived and trust in organisations assigned (Bélanger and Crossler 2011). Analysis of TOS for cloud services has also revealed a tendency towards more detail regarding user obligations than the provider's, with the authors recommending a legal framework (albeit in the US context) offering the user greater control and portability in the management of their data (Kesan, Hayes and Bashir 2013). Ultimately, any service guarantee is only as effective as one's ability to detect contravening behaviour and the scope for compensation.
It must be considered whether the majority of users interact with these services naively or well informed. The significance of such factors is often merely assumed and only interpreted in terms of the effect on the total number of users, following which a range of privacy preservation measures are proposed in order to mitigate any detrimental impact on participation rates. In (De Cristofaro and Soriente 2013), the provision of adequate privacy protection is considered the single most important factor affecting the willingness of users to contribute data. Consequently, the authors propose a cryptographic system, the Privacy-Enhanced Participatory Sensing Infrastructure (PEPSI), based on Identity-Based Encryption and third-party generation of decryption keys (a private key generator). They also acknowledge challenges in protecting query privacy from organisers, node privacy from network operators, and the system against collusion attacks.
Reputation and trust
Trust management systems may be categorised as either rule-based or reputation-based (Yang, Zhang and Roe 2012). A rule-based system applies credential matching, drawing on credentials (certification), chain discovery (with importance placed on associated storage locations) and trust negotiation (protecting credentials and avoiding unnecessary exposure). Reputation-based trust management characterises user behaviour by collecting, distributing and aggregating assessments of user contributions in a way that may identify malicious behaviour. The authors describe the stages of trust management as the establishment of initial trust, the observation of behaviour, and the evolution of reputation and trust. Initialisation is always problematic, as there is little on which to base an applied trust level. Approaches have been proposed that draw on community ratings of new participants; however, this does not preclude other participants from providing biased or malicious reports.
The more conservative approach allocates a low trust level to new users, with a period of time over which this reputation may be improved. The alternative is to assume trustworthiness and then downgrade the assigned trust level if warranted. In observing behaviour, anomaly detection methods may be applied to automatically detect potential misuse, by first classifying normal behaviour in a way that enables rapid detection of anomalous deviations from it.
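As a rough sketch of that detection step, assuming a user's normal behaviour can be summarised by the mean and standard deviation of their historical readings (a deliberate simplification), with a three-sigma threshold as an illustrative choice:

import statistics

def is_anomalous(history, reading, threshold=3.0):
    """Flag a reading deviating more than `threshold` standard deviations
    from the user's established behaviour."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return reading != mean
    return abs(reading - mean) / stdev > threshold

history = [21.2, 20.8, 21.5, 21.1, 20.9]  # e.g. past temperature reports
print(is_anomalous(history, 21.3))        # False: consistent with history
print(is_anomalous(history, 35.0))        # True: candidate misuse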
The impacts of these respective decisions need to be quantified and related to the number of users in the system in order to determine the effectiveness and suitability of the associated trust management framework. This is particularly important when considering the concept of sharing reputations across communities, on which other communities can then initialise their own trust levels for new users. This reaches back into questions of privacy, with identification being required across different domains. It also draws into question the reliability of trust management frameworks in one domain and their impacts on other domains. This may point to the need for a central trusted authority. Yet a distributed approach, applied with certain safeguards and agreed criteria, provides some protection against attacks on a single central repository of reputations.
Existing reputation models include: summation and average (aggregation of ratings to produce a single reputation score); discrete trust models (assigning labels to actions for ease of interpretability); and Bayesian systems (applying positive or negative ratings along with a probability distribution to determine reputation) (Reddy, Estrin and Srivastava 2010). The first is susceptible to bias if the number of ratings is heavily skewed towards one side, while discrete models (being non-mathematical) do not inherently support statistical inference of reputation confidence. Bayesian systems, however, offer the ability to establish a measure of confidence in the reputation score, by determining the probability that the expectation of a distribution (which determines the reputation) is within a certain error margin. They also allow for the weighting of new and old measurements, effectively applying a forgetting factor, so that reputation updates give preference to either more recent or historical behaviour. Another system evaluates a number of different attributes that are combined to determine reputation (Abdulmonem and Hunter 2010). These include: direct ratings between members; inferred trustworthiness; ratings of observations; the contributor's role/qualifications; the quality of past contributed data; completed training programs; the amount of contributed data; and the frequency of contributions.
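Returning to the Bayesian option, a minimal sketch assuming the common Beta-distribution formulation: positive and negative ratings update the distribution's parameters, its expectation serves as the reputation score, and a forgetting factor discounts older evidence. The parameter values are illustrative.

def update(alpha, beta, positive, forgetting=0.9):
    """Discount past evidence by the forgetting factor, then add one rating."""
    alpha, beta = forgetting * alpha, forgetting * beta
    if positive:
        alpha += 1.0
    else:
        beta += 1.0
    return alpha, beta

def reputation(alpha, beta):
    """Expectation of the Beta distribution: estimated probability of good behaviour."""
    return alpha / (alpha + beta)

alpha, beta = 1.0, 1.0                      # uninformative prior for a new user
for rating in [True, True, True, False, True]:
    alpha, beta = update(alpha, beta, rating)
print(round(reputation(alpha, beta), 2))    # ~0.72, weighted towards recent behaviour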
In a similar fashion, (Yang, Zhang and Roe 2012) proposed a combination of direct information (objective evaluation based on observable parameters), personal information (which confers a degree of accountability if accurately provided) and indirect reputation (subjective measures such as community and organiser trust). As mentioned earlier, the last criterion is problematic and thus needs to be weighted accordingly, while personal information can similarly be weighted according to the level of detail supplied and its verifiability. In fact, this challenge was reflected in their experiments, where a small number of participants were unwilling to supply personal details.
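A minimal sketch of such a weighted combination, with the component scores and weights as illustrative assumptions (the subjective, indirect component down-weighted as the text suggests):

def combined_trust(direct, personal, indirect, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of trust components, each a score in [0, 1];
    the weights must sum to 1."""
    w_d, w_p, w_i = weights
    return w_d * direct + w_p * personal + w_i * indirect

# A user with a strong objective history but unverified personal details
print(round(combined_trust(direct=0.9, personal=0.4, indirect=0.7), 2))  # 0.71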
In (Christin, Roßkopf, et al. 2012), the user reputation is cloaked utilising cryptographic primitives. Similarly based on k-anonymity, in (Huang and Kanhere 2012) a trusted reputation server is utilised in a manner that masks the reputation in transit so as to reduce prospects of linking user identity to reputation by external parties, whilst claiming greater flexibility in reputation assignment and accuracy. In (Wang, et al. 2013) anonymity and reputation challenges are balanced through the separation of the data reporting and reputation update processes. ...
4 Privacy vs Transparency
A commonly understood definition of privacy is the right to have one's personal environment (and the information contained therein) protected from intrusion. In this way, it is also interpreted as the right to be left alone (Brandeis and Warren 1890). Indeed, a focus of Brandeis and Warren's 1890 view was the provision of compensation for suffering resulting from privacy invasion, which drew attention to dignitary harms (such as damage to reputation) (Kesan, Hayes and Bashir 2013). Alternatively, in the modern context, privacy can be defined as control over personal information (Shilton and Estrin, Ethical Issues in Participatory Sensing 2012). This implicitly draws a contrast between what constitutes private space and public space, and the subsequent delineation of private information versus public information. It also draws into question the status of private activity that necessarily crosses over into public spaces. To appreciate the significance of privacy protection, consider the function of privacy in a societal context: it "protects situated practices of boundary management through which the capacity for self-determination develops," while "conditions of diminished privacy also impair the capacity to innovate" (Cohen 2013).
With specific reference to participatory sensing, privacy has been interpreted as "the guarantee that participants maintain control over the release of their sensitive information", including "protection of information that can be inferred from both the sensor readings themselves as well as from the interaction of the users with the participatory sensing system" (Christin, Reinhardt, et al. 2011). Formal definitions have been established with reference to what is protected by law. As such, privacy varies in degree according to the jurisdiction in which the law is being applied. According to the Victorian Government Privacy Commissioner, the common element is the ability to keep "your own actions, conversations, information and movements free from public knowledge and attention." This can be interpreted differently in the context of the home, the workplace or other environments. Importantly, the associated legislation only covers certain types of information and activities. It is important to recognise that protections tied explicitly to identity are inadequate, as enough consolidated information may be used to identify an individual's activities, locations and relationships, leaving them vulnerable to exploitation without the establishment of identity ever being required (Wright, et al. 2009). To appreciate the legal boundaries applied in privacy law, it is necessary to acknowledge that privacy is distinct from both confidentiality and secrecy. Confidentiality in legal terms relates to information given under the obligation that it not be shared further; such information is not usually publicly available or easily accessed. Secrecy relates to the prevention of information becoming known, where such action may assist in privacy protection or in serving the public interest.
Limitations of existing US legal frameworks to effect genuine protections through combinations of privacy, data protection and communications laws, have been explored in (Bast and Brown 2013). In light of the inability to singularly protect privacy in that context, a combined approach is recommended that employs education, empowerment and enforcement (where available) (Thierer 2013).
Ethics in Design
Ideally, privacy safeguards embedded in the system design should give users the necessary control over the collection and release of their data. However, the personal nature of PS, as well as its proximity to other persons not actively participating in the PS program, raises certain complications. In this respect, it is important to consider, alongside privacy, the notions of consent, equity and social forgetting (Shilton and Estrin, Ethical Issues in Participatory Sensing 2012).
Balancing the need for privacy against the proposed means of satisfying it raises further questions about the fundamental ethics being applied, and about the strength of those ethical principles when considered in the light of seemingly necessary compromise. When such principles are applied haphazardly at the design level, or not applied at all, the likely implications are, for the user, increased vulnerability from having shared revealing data and, for the organisers, exposure from basing released information on inaccurate and possibly malicious data.
Design objectives can easily be placed in opposition as a means of more easily arriving at a certain outcome. For example, privacy versus accuracy has been proposed as such a compromise, where in order to preserve some degree of privacy, it has been suggested that data supplied to the system be perturbed in order to mask the real data, which might reveal the actual location or the actual time at which samples were collected. Consideration must be given to how this is achieved. If the granularity of the data is merely reduced, it is more a reflection of the preparedness of the user to supply a certain level of detail. Therefore, it is not the accuracy of the data that is in question, but rather its quality. Transparency in the system ensures that the error margin or data granularity is known, such that subsequent operations on this data can factor in the associated data quality. This is different from intentionally supplying incorrect data, albeit with the same intention of masking activity or identity. In that instance, there is little recourse for separating legitimate users from malicious users, as both courses of action seek to bias the data pool.

Social forgetting is raised in (Shilton and Estrin, Ethical Issues in Participatory Sensing 2012) as another element of the system necessary to reflect the broader principles at stake. PS presents the possibility of a historical archive of activity. The social media generation are slowly becoming aware of the longer-term impacts of seemingly frivolous sharing of photos or posts, as what may be considered private moments suitable for perhaps a circle of friends is not seen in the same light by current or potential employers. As many as 17% of Australians regret something they have posted on a social networking site (increasing to 33% for young people) (OAIC, Community attitudes to privacy survey 2013). It has been suggested that such long-term recording and retention reduces the capacity for a fresh start, or the ability to recover from one's mistakes. In the US, this has been referred to as the "Eraser button" approach, whilst a similar EU version refers to a "Right to be forgotten" (Thierer 2013). As raised in the analysis of reputation management, to what extent does recent activity predict future behaviour compared with more distant activity? In that context the challenge was, rather, how to establish reputation quickly in the absence of historical data. Also important is the duration of data and activity retention, and the associated weighting applied in determining other factors such as reputation. In this instance, it becomes a factor for the user in establishing trust that they (and their data) will be treated fairly and justly (Shilton and Estrin, Ethical Issues in Participatory Sensing 2012). Mechanisms exist within law to address such issues, so measures must surely be applied to extend such a capacity to emerging technologies that in principle seek to reflect social systems.
System transparency also applies where user consent is concerned. It is essential that, where users are relied upon to actively participate in the sensing process, adequate consent be obtained for the utilisation of their data contributions. This is challenging in an environment that by its nature encroaches upon people's personal environments, and is further complicated by the degree of consent that may be obtained. Active and informed consent is essential if any sort of parity between users and organisers is to be established. As noted earlier regarding the challenge of overly complex Terms of Service agreements, it is difficult to infer the level of understanding reached by a user in order to obtain informed consent. The level of consent can nonetheless be qualified, ranging across:
• Passive consent – where utilisation of the platform implies consent given, and is so utilised by organisers to extract content at will (so-called 'soft surveillance' (Shilton and Estrin, Ethical Issues in Participatory Sensing 2012)).
• Active consent – requires some action by the user to acknowledge agreement (a standard TOS agreement), but does not necessarily indicate understanding; rather, it places responsibility upon the user.
• Qualified consent – contingent on circumstances specified by users or guarantees provided by organisers, with the system operation reflecting the selected preferences.
• Informed consent – effective communication of usage conditions and implications, with consent elicited in a manner that reflects the level of understanding and agreement.
The extent of the implications, and the subsequent agreement required, still needs to be explored in order to establish the level of risk that applies to each of the stakeholders. This can be extended to include a more thorough analysis of demographics within stakeholder groups, particularly where it concerns more vulnerable participants. Where the implications impact only the individual user, direct and active consent suffices. However, where sensing applications increasingly extend to environments shared with others (be they private or public, home or work), the same mechanism may no longer be adequate. The system may protect the privacy of the user, but what of bystanders, who may have no knowledge of the sensing taking place, no knowledge of its implications, and potentially different views on the conditions for qualified consent? Is the user in a position to take responsibility for the privacy of people within the vicinity of the sensing system? The significance of these flow-on effects is dependent on the type of sensing taking place. In any case, due consideration must be given to these impacts in the design, and in communicating any vulnerabilities to the user.
In order to establish trust, organisers need to be able to provide assurances of how the data is used, whilst system designers embed related functionality within the design. Where insufficient trust exists between users and organisers, users can still have the option of reducing the granularity of data shared or applying some other means of anonymisation (such that the supplied data may still contribute something meaningful).
In the context of citizen engagement and promoting interactive government, applying such procedures is a means of demonstrating transparency and growing trust. The functionality of the PS platform then enables users to better understand the significance of risk factors, as well as the measures applied by organisers (e.g. government) to mitigate such risks. It is ultimately the responsibility of the system operator to adequately inform users, particularly where risk mitigation is lacking and vulnerabilities remain exposed.
In determining the viability of such deployments, it must be established whether the introduction of such technology platforms is potentially increasing the vulnerability of particular demographics, or whether, through careful design and deployment, it acts to reduce such vulnerabilities. Whilst as many as 60% of Australian youth acknowledge the privacy risks associated with personal information and online services (OAIC, Community attitudes to privacy survey 2013), this still leaves too large a number unaware of the implications associated with what they perceive as ordinary online activity. If it is not possible to mitigate these risks at the development stage, then the platform may still serve this purpose by providing effective and explicit information about the data sharing risks.
User perceptions
The existing level of user knowledge of such issues, as well as the relationship between risk awareness and participation rates, requires further investigation.
Research by the Australian Communications and Media Authority (ACMA) has found increasing concern regarding security and privacy amongst mobile and online platform users. This is reflected in online behaviour, where users commonly employ different digital identities (transactional, social and personal) as a proactive means of restricting access to personal information. In these different scenarios, users were more or less willing to contribute detailed identity information, responding to information demands by going elsewhere for the same service, or even providing misleading information (defensive inaccuracy). In this way, some users could be found exercising their own balance of data integrity and pseudonymity (ACMA, Digital footprints and identities—Community attitudinal research 2013). At the same time, whilst nearly 40% of respondents were confident they could effect their desired privacy level through the available privacy setting options, another 40% were only hopeful that this was the case, with the remainder holding a negative view.
More than two-thirds were concerned about the level of information shared when using location-based services. Other important findings included: that increased usage does not translate into greater understanding; that the risks, and the protections available against them, are poorly understood; and that users desire more information to assist them in protecting personal data (ACMA, Here, there and everywhere—Consumer behaviour and location services 2012).
Substantial trust was found in government and established banks to secure transactions and use them only for legitimate purposes. Significant trust in, and responsibility for, managing and policing digital identity and breaches was still placed in government. The distinct roles of individual stakeholders have also been acknowledged: individuals have primary responsibility for protecting their personal information; service providers and industry operators are responsible for enabling a secure environment; and government provides information and education services, raises awareness and enforces safeguards (ACMA, Here, there and everywhere—Consumer behaviour and location services 2012). Indeed, high standards of transparency in data handling are universally expected of all organisations, along with notification of handling breaches and of protection and management practices (OAIC, Community attitudes to privacy survey 2013).
Similar studies of mobile user attitudes to privacy, conducted by other agencies across Australia and around the world, have also revealed concerns amongst users of mobile and online platforms. This includes apps that do not intrinsically incorporate data sharing.
GSMA studies of mobile user attitudes to privacy conducted across the UK, Spain and Singapore, with follow-up studies in Malaysia, Indonesia, Mexico, Colombia and Brazil, found that approximately half of respondents expressed concern about sharing personal data whilst using mobile online services or apps, with over three-quarters subsequently very selective about whom they shared such information with. As many as 81% of people wanted their permission to be requested before location information from their mobile phones was shared. It was also noted that most users took fewer security/privacy precautions when using mobile devices than when using a PC. Interestingly, 47% of users would change their usage behaviour if apps were found to use their personal information without consent, whilst 41% would limit their use (FutureSight 2011).
The Office of the Australian Information Commissioner (OAIC) survey on community attitudes to privacy, revealed some user awareness of privacy risks associated with online activity (OAIC, Community attitudes to privacy survey 2013). This naturally translates (in some degree) to mobile apps. Some of the key findings are noted here.
With respect to personal information, online services are viewed as the biggest privacy risks (including ID fraud and theft, followed by data security). Concerns about handling of personal information are evident in dissatisfaction with such data being sent offshore (with 90% expressing concern). This certainly raises questions about data ownership and the ability to guarantee associated protections according to how this data is handled (including communication and storage).
As many as 78% of Australians dislike having their activities covertly monitored on the internet. Some awareness of this activity exists, with around half of respondents of the view that most websites and smartphone apps collect user information. Of those aware of such risks, more are actively seeking to protect their information: 90% at times decline to provide personal information, 80% first check website security, 72% clear browser histories, 62% avoid certain smartphone apps because of related concerns, and around 30% provide false names or details.
There is seemingly a point at which user demands for privacy are relaxed or traded, with over a quarter of the population prepared to provide personal information in exchange for improved service or reduced prices.
How much, then, do breaches of trust by system operators or government affect user perceptions and ongoing behaviour? One third of those surveyed had experienced problems with the treatment of their personal information. But while there is a better understanding of ombudsman schemes, many are still not aware of reporting or complaint procedures. More trust is placed in government organisations than in private companies (with health and financial institutions the only exceptions). Due to concerns about the handling of personal information, 60% have decided not to interact with a private company. No figures were reported regarding continued engagement with given organisations following a data breach.

Efforts to embed privacy with associated guarantees, means of retribution or compensation for data breaches, and adequate transparency of data usage all surely contribute to building or rebuilding trust between different stakeholders. In this way, robust technological and policy frameworks for emerging ICT are essential. Participatory sensing is one such capability, occupying a unique space that can mutually benefit users and organisers if implemented accordingly. Implemented haphazardly, it simply reflects the existing flaws and lopsided exchange dynamics that often exist between users, commercial operators and governments. It goes one step further than straightforward app development principles. Likewise, it extends beyond activities of disengaged mass surveillance, principally because it actively engages the user to contribute data and collection resources. The respective parties thus enter into an agreement or contract for the exchange of information. The user is only able to provide informed consent if made adequately aware of the system's operation and their associated role and rights in interacting with the system.
Key points
• Privacy is one's control over the access to and flow of their information
• Legal protections are limited to specific circumstances
• Ethical means of privacy protection offer transparency with respect to data accuracy and embed privacy in the design
• System transparency and verifiable privacy measures can build trust
• Social implications warrant mechanisms for managing data history
• Informed user consent needs to be the goal, supported by effective communication of risks
• Users will utilise various means of protecting privacy, according to their level of awareness and evaluation of risks

23 August 2014

Wearables

'Google Glass While Driving' (William & Mary Law School Research Paper No. 09-280) by Adam M. Gershowitz comments
 Is it legal to use Google Glass while driving? Most states ban texting while driving and a large number also forbid drivers from being able to see television and video screens. But do these statutes apply to Google Glass? Google advises users to check their states’ law and to “Read up and follow the law!” Yet, laws designed for a tangible world are very difficult to apply to virtual screens projected by futuristic wearable technology. In short order, however, police and prosecutors across the country will be called upon to apply outdated distracted driving laws to Google Glass. 
This article describes how the plain language of most distracted driving statutes is not broad enough to reach Google Glass. Moreover, even statutes that arguably forbid drivers from “using” Glass are practically unenforceable because drivers could easily claim the devices were turned off or that they were being used for lawful functions – such as phone calls or GPS directions – that are allowed under texting while driving statutes. The lack of a clear prohibition on Google Glass while driving is troublesome. Social science evidence demonstrates that using hands-free devices while driving creates “cognitive tunnel vision” that drastically reduces drivers’ mental focus on the road. 
After analyzing the nation’s distracted driving laws and reviewing the social science evidence, this article proposes a statutory framework for effectively banning Google Glass while driving.

01 August 2014

Wearables

The UK Independent reports that, under a year-long English pilot scheme, offenders who "repeatedly commit alcohol-related crime will be forced to wear ankle tags that monitor if they are still drinking".
 The "sobriety tags" aim at enforcing abstinence by measuring a person's perspiration every half an hour and testing for traces of alcohol. If any trace is discovered, an alert will be sent to the offender's probation officer and they can then be recalled to court, where they could face sanctions such as a fine or even be re-sentenced. 
The 12-month scheme is being trialled in four London boroughs - Croydon, Lambeth, Southwark and Sutton - and is being backed by the Metropolitan Police and the Mayor of London, Boris Johnson. 
The tags register alcohol consumption, but do not keep track of people's movements or where they are. 
Up to 150 offenders could be fitted with the tags under the new scheme. They will be banned from drinking alcohol for up to 120 days, and the tag will test them to see if they flout the ban. 
Offenders will be screened before they are chosen to wear the ankle tag. People who are alcohol-dependent and need specialist support will not be a part of the scheme.
Other reporting indicates that the transdermal ankle tags measure the alcohol content of perspiration.

They are apparently an extension of the Secure Continuous Remote Alcohol Monitoring (SCRAM) bracelet.