14 August 2018

EU ID Cards and Biometrics

The European Data Protection Supervisor (EDPS) - an independent institution of the EU - has released EDPS Opinion 7/2018 on the Proposal for a Regulation strengthening the security of identity cards of Union citizens and other documents.

The Opinion states
This Opinion relates to the EDPS' mission to advise the EU institutions on the data protection implications of their policies and foster accountable policymaking - in line with Action 9 of the EDPS Strategy: 'Facilitating responsible and informed policymaking'. While the EDPS supports the objectives to enhance the security of ID cards and residence documents, thus contributing to a more secure Union overall, he considers that the Proposal should be improved in certain key aspects so as to ensure compliance with data protection principles.
This Opinion outlines the position of the EDPS on the Proposal for a Regulation of the European Parliament and of the Council on strengthening the security of identity cards of Union citizens and of residence documents issued to Union citizens and their family members exercising their right of free movement. 
In this context, the EDPS observes that the Commission has clearly chosen to prioritise the free movement aspects of the Proposal and to treat the security-related objective as a corollary. The EDPS remarks that this might have an impact on the analysis of the necessity and proportionality of the elements of the Proposal. 
The EDPS supports the objective of the European Commission to enhance the security standards applicable to identity cards and residence documents, thus contributing to security of the Union as a whole. At the same time, the EDPS considers that the Proposal does not sufficiently justify the need to process two types of biometric data (facial image and fingerprints) in this context, while the stated purposes could be achieved by a less intrusive approach. 
Under the EU legal framework, as well as within the framework of Modernised Convention 108, biometric data are considered sensitive data and are subject to special protection. The EDPS stresses that both facial images and fingerprints that would be processed pursuant to the Proposal would clearly fall within this sensitive data category. 
Furthermore, the EDPS considers that the Proposal would have a wide-ranging impact on up to 370 million EU citizens, potentially subjecting 85% of the EU population to a mandatory fingerprinting requirement. This wide scope, combined with the very sensitive data processed (facial images in combination with fingerprints), calls for close scrutiny according to a strict necessity test. 
In addition, the EDPS acknowledges that, given the differences between identity cards and passports, the introduction of security features that may be considered appropriate for passports to identity cards cannot be done automatically, but requires reflection and a thorough analysis. Moreover, the EDPS wishes to stress that Article 35(10) of the General Data Protection Regulation (hereinafter “GDPR”) would be applicable to the processing at hand. In this context, the EDPS observes that the Impact Assessment accompanying the Proposal does not appear to support the policy option chosen by the Commission, i.e. the mandatory inclusion of both facial images and (two) fingerprints in ID cards (and residence documents). Consequently, the Impact Assessment accompanying the Proposal cannot be considered sufficient for the purposes of compliance with Article 35(10) GDPR. Therefore, the EDPS recommends reassessing the necessity and proportionality of the processing of biometric data (facial image in combination with fingerprints) in this context. 
Furthermore, the Proposal should explicitly provide for safeguards against Member States establishing national dactyloscopic databases in the context of implementing the Proposal. A provision should be added to the Proposal stating explicitly that the biometric data processed in its context must be deleted immediately after their inclusion on the chip and may not be further processed for purposes other than those explicitly set out in the Proposal. 
The EDPS understands that using biometric data might be considered as a legitimate anti-fraud measure, but the Proposal does not justify the need to store two types of biometric data for the purposes foreseen in it. One option to consider could be to limit the biometrics used to one (e.g. facial image only). 
Moreover, the EDPS would like to underline that it understands that storing fingerprint images enhances interoperability, but at the same time it increases the amount of biometric data processed and the risk of impersonation in case of a personal data breach. Thus, the EDPS recommends limiting the fingerprint data stored on the document's chip to minutiae or patterns, a subset of the characteristics extracted from the fingerprint image. 
Finally, taking into account the wide range and potential impact of the Proposal outlined above, the EDPS recommends setting the age limit for collecting children's fingerprints under the Proposal at 14 years, in line with other instruments of EU law.

Instagram, originality and critical methods

'Regulating Social Media: Copyright's Regulation of Content-Generative User Behaviours on Instagram' by Corinne Tan in (2018) 2 Intellectual Property Quarterly 140 comments
 This article uses Richard Prince’s controversial use of Instagram images in his artworks, as well as other uses preceding or arising from his use as a case study, as a basis to analyse how copyright laws in the United States (US), the United Kingdom (UK) and Australia could apply. The article also considers the application of Instagram’s terms of use, before proceeding to identify the technological features on Instagram that encourage and constrain the content generative activities of Prince and other users. Users are observed to receive mixed signals as to the activities that are permissible onsite. Further, each of the terms of use and the technological features is found to deviate from the copyright regime. These deviations arguably foster incorrect expectations in users, and can compromise the effectiveness of copyright laws in regulating content generative behaviours on social media sites such as Instagram.
'IP and Critical Methods' by Margaret Chon in Irene Calboli and Lillà Montagnani (eds), Handbook on Intellectual Property Research (Edward Elgar, 2018) comments
The heyday of the critical legal studies movement has passed, yet its impact continues to be felt in many areas of legal scholarship including intellectual property (IP) law, not to mention numerous disciplines outside of legal studies where it is still very much alive and well. Key concepts incubated within this school of thought, such as intersectionality and micro-aggression, have migrated into our everyday political discourse and common vocabulary of describing social relationships of power. Philosophically, critical theory is indebted to the insights of Continental theory such as Foucault, Gramsci, Habermas, and Lévi-Strauss (as received and interpreted by US and other North American scholars), in addition to those philosophers typically associated with the Anglo-American-Commonwealth legal tradition such as Locke, Mills, Rawls, and Adam Smith.
This chapter provides a brief outline of the principal goals, tenets, and methods of critical theory. Describing how this theoretical turn appears in IP scholarship, it then illustrates with non-exhaustive examples (some of which are written by scholars not necessarily identified with critical legal studies). These examples expose the under-acknowledged influence of critical theory in scholarship about IP, including the big three (patent, copyright, and trademark) as well as plant genetic resources and traditional knowledge. It concludes with a brief speculation about the reason for this lack of credit as well as the value of future scholarship guided by this approach.

Amazon

'Amazon’s Antitrust Paradox' by Lina M. Khan in (2017) The Yale Law Journal comments
 Amazon is the titan of twenty-first century commerce. In addition to being a retailer, it is now a marketing platform, a delivery and logistics network, a payment service, a credit lender, an auction house, a major book publisher, a producer of television and films, a fashion designer, a hardware manufacturer, and a leading host of cloud server space. Although Amazon has clocked staggering growth, it generates meager profits, choosing to price below-cost and expand widely instead. Through this strategy, the company has positioned itself at the center of e-commerce and now serves as essential infrastructure for a host of other businesses that depend upon it. Elements of the firm’s structure and conduct pose anticompetitive concerns—yet it has escaped antitrust scrutiny. 
This Note argues that the current framework in antitrust—specifically its pegging competition to “consumer welfare,” defined as short-term price effects—is unequipped to capture the architecture of market power in the modern economy. We cannot cognize the potential harms to competition posed by Amazon’s dominance if we measure competition primarily through price and output. Specifically, current doctrine underappreciates the risk of predatory pricing and how integration across distinct business lines may prove anticompetitive. These concerns are heightened in the context of online platforms for two reasons. First, the economics of platform markets create incentives for a company to pursue growth over profits, a strategy that investors have rewarded. Under these conditions, predatory pricing becomes highly rational—even as existing doctrine treats it as irrational and therefore implausible. Second, because online platforms serve as critical intermediaries, integrating across business lines positions these platforms to control the essential infrastructure on which their rivals depend. This dual role also enables a platform to exploit information collected on companies using its services to undermine them as competitors. 
This Note maps out facets of Amazon’s dominance. Doing so enables us to make sense of its business strategy, illuminates anticompetitive aspects of Amazon’s structure and conduct, and underscores deficiencies in current doctrine. The Note closes by considering two potential regimes for addressing Amazon’s power: restoring traditional antitrust and competition policy principles or applying common carrier obligations and duties.

Personhood and Animal Welfare

'Edgy Animal Welfare' by Richard Cupp in (2018) Denver Law Review comments
Legal animal welfare proponents should not reject out-of-hand reforms that may be celebrated by some as steps toward a radical version of animal rights. Rather, animal welfare proponents should consider the costs, risks, and benefits of all potential reforms. Some potential reforms’ risks and costs outweigh their benefits. But, both to improve animals’ welfare and to avoid irrelevance in an evolving society, legal animal welfare advocates should be willing to tolerate some costs and risks. Walking on the edge of slippery slopes is in some situations better than avoiding the slopes altogether. Connecticut’s 2016 animal advocacy statute provides an illustration of legal reform that legal animal welfare proponents should embrace even though it presents some risks of being perceived as a step toward a radical legal personhood rights paradigm.

Smart Cities

'Citizens as Consumers in the Data Economy: The Case of Smart Cities' by Sofia Ranchordas in (2018) 4 Journal of European Consumer and Market Law comments
 This article offers a critical account of the concept of “citizen-consumer” in the context of smart cities. This hybrid concept emerged in the 1990s with the New Labour Movement in the setting of the liberalization and privatization of public infrastructures to refer to the consumption of public goods and services. The notion of “citizen-consumer” recently reappeared in the literature on smart cities as public bodies collaborate closely with private actors to offer more responsive, efficient, and data-driven public services to their residents and visitors. In this modern form of privatization, public bodies treat citizens as consumers of data-driven services. In this article, I argue that treating citizens as consumers can be problematic for four reasons: (i) citizenship and consumer protection have different political and economic foundations; (ii) it relies on the heavy collection of personal data by both public bodies and private companies; (iii) it assumes—often incorrectly—that citizen-consumers in cities have choices and can refuse to give their consent to the data collection underlying the provision of smart public services; (iv) it excludes citizens who are less tech-savvy, do not fit in the vision of smart cities or wish to remain offline. Drawing on an interdisciplinary analysis of the literature on the liberalization of public services, smart cities and consumer protection, this article aims to contribute to the legal literature by shedding new light on the concept of “citizen-consumer” and its implications for the inclusiveness of public services.

Algorithms

'Algorithm-assisted decision-making in the public sector: framing the issues using administrative law rules governing discretionary power' by Marion Oswald in (2018) Philosophical Transactions of the Royal Society A comments
This article considers some of the risks and challenges raised by the use of algorithm-assisted decision-making and predictive tools by the public sector. Alongside, it reviews a number of long-standing English administrative law rules designed to regulate the discretionary power of the state. The principles of administrative law are concerned with human decisions involved in the exercise of state power and discretion, thus offering a promising avenue for the regulation of the growing number of algorithm-assisted decisions within the public sector. This article attempts to re-frame key rules for the new algorithmic environment and argues that ‘old’ law – interpreted for a new context – can help guide lawyers, scientists and public sector practitioners alike when considering the development and deployment of new algorithmic tools. 
Introduction 
In 1735, in this very journal, one Reverend Barrow published a short piece, hardly a page in length, in which he surveyed births, deaths and overall population in the parish of Stoke-Damerell in Devon.  He notes that ‘the Number of Persons who died, is one more than half the Number of Children born; and that about 1 in 54 died’ in a year when the ‘General Fever’ infected almost all the inhabitants. He further points out that one of the persons buried was ‘a Foreigner brought from on board a Dutch Ship’ and two more were drowned from on board a Man of War ‘but that the Ships Companies are not included in the Number of Inhabitants.’ This data, together with ‘Experience and Observations, both of my self and better Judges’ leads him to ‘reckon the Parish of Stoke-Damerell as healthful an Air as any in England.’ Fifty-four years later, we find William Morgan (communicated by a Reverend Richard Price) promoting ‘the method of determining, from the real probabilities of life, the value of a contingent reversion in which three lives are involved in the survivorship.’ 
In an age when prospects in society – and lines of credit – might be dependent on one’s ‘great expectations’ of an inheritance, calculating the probability of achieving that inheritance (known to lawyers as contingent reversion) becomes of great interest. For instance, I might transfer a piece of land on the following basis: to my niece for her lifetime, remainder to my nephew and his heirs, but if my nephew dies in the lifetime of my niece, then the land reverts to me and my heirs; I have a ‘reversionary interest’ in the land. The question for my eighteenth-century nephew is how to value the sum that might be payable on the contingency that he will survive his sister. The method and calculations proposed by Morgan are set out at length and in considerable detail so as to enable a reader to test and critique them. To this author’s non-expert eye, two points are striking. First, that the calculations appear to be based on group data, i.e. on the number of persons living at the age of my nephew, and at the end of the first year, second year, third year and so on, from the age of my nephew. Secondly, the article goes on to criticise a rule proposed by a certain ‘Mr Simpson’ and points to its results as deviating ‘so widely from the truth as to be unfit for use’ [my emphasis], in some cases producing ‘absurd’ results. 
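Morgan's survivorship valuation can be restated in modern terms: from group data (the number of persons still living at each year from a life's current age), one takes the probability that the beneficiary is alive in year t, multiplies by the probability that the other life dies in that year, discounts, and sums. The sketch below uses invented life-table numbers and an assumed 4% interest rate purely for illustration; it is not Morgan's actual table or notation.

```python
# A modern sketch of the survivorship calculation underlying a contingent reversion.
# The life-table columns below are hypothetical group data (number of persons
# still living at each year from each life's current age), not Morgan's figures.

def contingent_reversion_value(l_beneficiary, l_other, payout, rate):
    """Expected present value of `payout`, paid at the end of the year in which
    the other life dies, provided the beneficiary is then still alive."""
    value = 0.0
    for t in range(1, min(len(l_beneficiary), len(l_other))):
        p_alive = l_beneficiary[t] / l_beneficiary[0]              # beneficiary survives to year t
        p_dies = (l_other[t - 1] - l_other[t]) / l_other[0]        # other life dies during year t
        value += payout * p_alive * p_dies / (1 + rate) ** t       # discounted expected payment
    return value

# Illustrative group data: persons living at each year from now.
nephew = [1000, 990, 978, 964, 948, 930]
niece  = [1000, 985, 965, 940, 910, 875]
print(round(contingent_reversion_value(nephew, niece, payout=100.0, rate=0.04), 2))
```

As in Morgan's article, the inputs are aggregate counts rather than anything individual to the nephew; the valuation is a property of the group he is assumed to resemble.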
A modern reader might be tempted to regard these articles as illustrations of a naïve age or of a context long past, or to highlight the lack of causal evidence for Reverend Barrow’s conclusion about the ‘healthful’ nature of his parish. Yet both articles tackle issues with which we remain concerned today: the healthiness (or otherwise) of a community, the reasons behind it and the life expectancy of an individual when compared to others. Risk forecasting and predictive techniques to aid decision-making have become commonplace in our society, not least within public services such as criminal justice, security, benefit fraud detection, health, child protection and social care. We should be better at it than our eighteenth-century clergymen. It has become almost unnecessary to say that we now inhabit an information society. Information technologies driven by the flow of digital data have become pervasive and everyday, often leading to the assumption that access to vast banks of (often individualised) digital data, combined with today’s networked computing power and complex algorithmic tools, will lead automatically to greater knowledge and insight, and so to better predictions. 
Knowledge, however, is not the same as information (as many before me have pointed out): Knowledge, Hassan argues, ‘emerges through the open and experiential and diverse (and often intuitive) working and interpreting of raw data and information.’ Reverend Barrow’s conclusion as to the healthfulness of his parish, for instance, was based, not only on the outcome of analysis of raw data, but on additional ‘experience and observations’ of himself and others. Some criticise such human ‘intrusion’ on the data as casting further doubt on the conclusion. Grove and Meehl, leading proponents of the use of statistical, algorithmic methods of data analysis over clinical methods, argued that ‘To use the less efficient of two prediction procedures in dealing with such matters is not only unscientific and irrational, it is unethical. To say that the clinical-statistical issue is of little importance is preposterous.’ It is this often-claimed superiority, together with the potential for more consistent application of relevant factors often taken from large datasets, that gives algorithmic tools their appeal in many public sector contexts. Although this article is written from a legal perspective, it draws attention to arguments made in the ongoing ‘algorithmic predictions versus purely human judgement’ debate and applies these to the legal principles discussed below. It is particularly concerned with algorithm-assisted decisions, whereby an algorithmic output, prediction or recommendation produced by a machine learning technique is incorporated into a decision-making process requiring a human to approve or apply it. ‘Machine learning involves presenting the machine with example inputs of the task that we wish it to accomplish. In this way, humans train the system by providing it with data from which it will be able to learn. 
The algorithm makes its own decision regarding the operation to be performed to accomplish the task in question.’  Machine learning algorithms are ‘probabilistic...their output is always changing depending on the learning basis they were given, which itself changes in step with their use.’ 
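The pattern just described, in which a model trained on example inputs produces a probabilistic score that a human decision-maker must then approve or reject, can be illustrated with a deliberately minimal sketch. The single feature, toy data and 0.5 threshold below are invented for illustration; no real public-sector tool is this simple.

```python
# A minimal illustration (toy data, hypothetical feature) of algorithm-assisted
# decision-making: a model learns from labelled examples, then emits a
# probabilistic score that a human reviews rather than applies automatically.
import math

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit a one-feature logistic model by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(w * x + b)))  # predicted probability
            w -= lr * (p - y) * x                 # gradient step on the weight
            b -= lr * (p - y)                     # gradient step on the bias
    return w, b

def score(w, b, x):
    return 1 / (1 + math.exp(-(w * x + b)))

# Labelled example inputs: the 'data from which it will be able to learn'.
xs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)

# The output is probabilistic: a score, not a decision. A human decision-maker
# must still approve or reject the recommendation it generates.
risk = score(w, b, 0.9)
recommendation = "flag for review" if risk > 0.5 else "no action"
print(recommendation)
```

Note that retraining on different examples shifts the fitted weights, and hence the scores: the output 'is always changing depending on the learning basis', which is precisely why the administrative law questions discussed below attach to how such scores are presented to the human decision-maker.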
Predictive algorithms and administrative law 
The growth in the use of intensive computational statistics, machine-learning and algorithmic methods by the UK public sector shows no sign of abating. What then should be the role of the human when these tools are planned and then deployed, particularly when the accuracy of an algorithmic prediction is claimed to be at least comparable to the accuracy of a human one? I consider this question by reference to a number of connected English administrative law rules, some of which (such as natural justice) date back to before the origins of this journal. I have done this because this body of law governs the exercise of discretionary powers and duties by state bodies, and thus the humans working within them; discretion must be exercised within boundaries or the public body is acting unlawfully. As Le Sueur explains, ‘The assumption made until comparatively recently is that the decision-maker using the executive power conferred by Parliament is a human being or an institution composed of humans and that there is a human who will be accountable and responsible for the decision.’ 
We see this today in witnesses called to give evidence to Parliamentary Select Committees. The introduction of an algorithm to replace, or even only to assist, the human decision-maker represents a challenge to this assumption and thus to the rule of law, and the power of Parliament to decide upon the legal basis of decision-making by public bodies. I argue below, however, that English administrative law – in particular the duty to give reasons, the rules around relevant and irrelevant considerations and around fettering discretion – is flexible enough to respond to many of the challenges raised by the use of predictive machine learning algorithms, and can signpost key principles for the deployment of algorithms within public sector settings. These principles, although derived from historic case-law, have already been applied and refined to modern government, to the development of the welfare state, privatisation, the development of executive agencies and so on. 
I then attempt to re-frame each of these rules in order to suggest how they could guide future algorithm-assisted decision-making by public bodies affecting the rights, expectations and interests of individuals. In doing so, I do not recommend any particular method of building or interpreting these systems – as to do so would require consideration of many different contexts and informational needs – but rather suggest principles to guide those engaged in future development work. I focus attention on the requirements of legitimate decision-making from the perspective of the public sector decision-maker, rather than from the perspective of the subject. Fair decision-making in accordance with administrative law rules by its very nature also protects the interests of the human subject of those decisions. I argue that carefully considering exactly what the algorithm is or is not predicting, and explaining this to the decision-maker at the point where results are displayed, is key to ensuring this fairness.

Consent

'Moving Beyond Consent For Citizen Science in Big Data Health and Medical Research' by Anne SY Cheung in (2018) 16 Northwestern Journal of Technology and Intellectual Property 15 comments
Consent has been the cornerstone of the personal data privacy regime. This notion is premised on the liberal tenets of individual autonomy, freedom of choice, and rationality. The above concern is particularly pertinent to citizen science in health and medical research, in which the nature of research is often data intensive with serious implications for individual privacy and other interests. Although there is no standard definition for citizen science, it generally includes the gathering and volunteering of data by non-professionals, the participation of non-experts in analysis and scientific experimentation, and public input into research and projects. Consent from citizen scientists determines the responsibility and accountability of data users. Yet with the advancement of data mining and big data technologies, the risks and harm of subsequent data use may not be known at the time of data collection. Progress of research often extends beyond the existing data. In other words, consent becomes problematic in citizen science in the big data era. The notion that one can fully specify the terms of participation through notice and consent has become a fallacy. 
Is consent still valid? Should it still be one of the critical criteria in citizen science health and medical research which is collaborative and contributory by nature? With a focus on the issue of consent and privacy protection, this study analyzes not only the traditional informed consent model but also the alternative models. Facing the challenges that big data and citizen science pose to personal data protection and privacy, this article explores the legal, social, and ethical concerns behind the concept of consent. It argues that we need to move beyond the consent paradigm and take into account the much broader context of harm and risk assessment, focusing on the values behind consent – autonomy, fairness and propriety in the name of research.