'Logged out: Ownership, exclusion and public value in the digital data and information commons' by Barbara Prainsack in (2019)
Big Data and Society comments
In recent years, critical scholarship has drawn attention to increasing power differentials between corporations that use data and people whose data is used. A growing number of scholars see digital data and information commons as a way to counteract this asymmetry. In this paper I raise two concerns with this argument: First, because digital data and information can be in more than one place at once, governance models for physical common-pool resources cannot be easily transposed to digital commons. Second, not all data and information commons are suitable to address power differentials. In order to create digital commons that effectively address power asymmetries we must pay more systematic attention to the issue of exclusion from digital data and information commons. Why and how digital data and information commons exclude, and what the consequences of such exclusion are, decide whether commons can change power asymmetries or whether they are more likely to perpetuate them.
In referring to 'The iLeviathan: Trading freedom for utility', Prainsack argues
As a concept, ‘Big Data’ started to become an object of attention and concern around the start of the new millennium. Enabled by new technological capabilities to create, store and analyse digital data at greater volume, velocity, variety and value, the phenomenon of Big Data fuelled the imagination of many. It was hoped to help tackle some of the most pressing societal challenges: Fight crime, prevent disease and offer novel insights into the ways in which we think and act in the world. With time, some of the less rosy sides of practices reliant on big datasets and Big Data epistemologies became apparent (e.g., Mittelstadt and Floridi, 2016): Data-driven crime prevention, for example, requires exposing large numbers of people to predictive policing (e.g., Perry, 2013), and ‘personalised’ disease prevention means that healthy people have to submit to extensive surveillance to create the datasets that allow personalisation in the first place (Prainsack, 2017a). In addition, it became apparent that those entities that already had large datasets of many people became so powerful that they could eliminate their own competition, and at the same time de facto set the rules for data use (e.g., Andrejevic, 2014; Pasquale, 2017; see also van Dijck, 2014; Zuboff, 2015). GAFA – an acronym combining the names of some of the largest consumer tech companies, Google, Apple, Facebook and Amazon – have become what I call the iLeviathan, the ruler of a new commonwealth where people trade freedom for utility. Unlike with Hobbes’ Leviathan, the freedom people trade is no longer their ‘natural freedom’ to do to others as they please, but it is the freedom to control what aspects of their bodies and lives are captured by digital data, how to use this data, and for what purposes and benefits. The utility that people obtain from the new Leviathan is no longer the protection of their life and their property, but the possibility to purchase or exchange services and goods faster and more conveniently, or to communicate with others across the globe in real time. Increasingly, the iLeviathan also demands that people trade privacy and freedom from surveillance for access to services provided by public authorities (Prainsack, 2019). The latter happens, for instance, when people are required to use services by Google, Facebook, or their likes in order to book a doctor's appointment or communicate with a school (see also Foer, 2017). For many of us, it also happens when access to a public service requires email.
This situation has garnered reactions from activists, scholars and policy makers. Reactions can be grouped into two main approaches, depending on where the focus of their concern lies: On the one side are those who want individual citizens to have more effective control over their own data. I call this the Individual Control approach (Table 1). It comprises (a) those who deem property rights to be the most, or even the only, effective way to protect personal data, as well as (b) some of those who see personal information as an inalienable possession of people. The latter group reject the idea that personal data should be protected by property rights and prefer it to be protected via human rights such as privacy, whereby privacy is understood to be an individual, rather than a collective right (see Table 1). Solutions put forward by scholars in the Individual Control group include the granting of individual property rights to personal data (see below), or the implementation of ever more granular ways of informing and consenting data subjects (e.g., Bunnik et al., 2013; Kaye et al., 2015). The spirit of the new European Union General Data Protection Regulation (GDPR) tacks mostly to an Individual Control approach in the sense that it gives data subjects more rights to control their personal data – to the extent that some might see it as granting quasi-property rights.
The second approach – which I call the Collective Control approach – comprises authors who emphasise that increasing individual-level control over personal data is a necessary but insufficient way to address the overarching power of multinational companies and other data capitalists. Scholars within the Collective Control group are diverse in their assessment of the benefits and dangers of increasing individual-level control. What they all have in common, however, is that they foreground the use of data for the public good. Many of them see the creation of digital data and information commons as the best way to do this, often because of the emphasis that commons place on collective ownership and control. Some authors also see the creation of commons explicitly as a way to resist ‘the prevailing capitalist economy’ (Birkinbine, 2018: 291; Hess, 2008; De Peuter and Dyer-Witheford, 2010; for overviews see Hess, 2008; Purtova, 2017a).
In the following section, I will scrutinise the claim made by some authors within the Collective Control group that digital data and information commons can help to address power asymmetries between data givers and data takers. Despite the frequent use of terms such as ‘digital commons’ and ‘data commons’ in the literature, I argue that the question of what kind of commons frameworks are applicable to digital data and information, if any, has not been answered with sufficient clarity. In the subsequent part of the paper I will discuss another aspect that has not received enough systematic attention in this context, namely the topic of exclusion. I argue that collective measures to address power asymmetries in our societies need to pay explicit and systematic attention to categories, practices and effects of exclusion. I end with an overview of what governance frameworks applicable to digital data and information commons need to consider if they seek to effectively tackle inequalities. If they fail to do this, they risk being most useful to those who are already privileged and powerful.
'How data protection fits with the algorithmic society via two intellectual property rights – a comparative analysis' by Cristiana Sappa in (2019) 14(5)
Journal of Intellectual Property Law and Practice 407–418 comments
Big Data, IoT and AI are the three interrelated elements of the algorithmic society, responsible for an unprecedented flourishing of innovation.
Companies working within such an algorithmic society need to protect the information created and stored for entrepreneurial purposes. Thus, their concerns relate to data protection, in particular with regard to trade secrets and the sui generis protection of databases.
This paper tries to answer two questions from an EU and US law perspective. First, it asks whether data generated and managed within the frameworks of Big Data, IoT and AI meet the essential requirements to enjoy trade secret protection and the database right, if any. The answer seems to be in the affirmative in most cases. Second, it studies whether trade secrets and the sui generis right are appropriate in a sharing-based paradigm, such as that of Big Data, IoT and AI. The focus on this upstream protection helps to understand the bottlenecks created at the downstream level, which challenge innovation and transparency, as well as consumer protection. In other words, when both exclusive rights (the sui generis protection for databases) and quasi-intellectual property rights (trade secrets) are present, innovation and the circulation of information are not necessarily promoted, and the presence of this double protection may be beneficial to big businesses only. On the other hand, the presence of mere trade secrets does not seem to entirely exclude the encouragement of innovation and the circulation of information, and is therefore more suitable for SMEs and for safeguarding the public interest.
According to a non-exhaustive notion, Big Data is the huge amount of digital data generated from transactions and communication processes, collected in datasets, in particular via apps, sensors and other (smart) devices, which regularly lead to predictive analyses via complex algorithms and processors. The Internet of Things (IoT) is a network of interconnected physical objects, each embedded with sensors that collect and upload data to the Internet for analysis or monitoring and control, such as smart-city traffic and waste-management systems. IoT generates and is built upon Big Data. Artificial Intelligence (AI) is created by the interaction between intelligent agents, ie devices perceiving inputs from their environment and being able to reproduce methods and achieve aims. In other words, intelligent agents are able to reproduce cognitive human functions, such as learning and problem solving. AI generates Big Data and its functionalities extend beyond IoT. Big Data, IoT and AI are the three interrelated elements of the algorithmic society, responsible for an unprecedented flourishing of innovation, which no longer seems to require the same outside incentives as the conventional world.
Companies working with Big Data, IoT or AI need to protect the information stored against the unfair practices of (former) employees, collaborators and other market operators. Data protection via intellectual property rights (IPRs), personal data rules and also contractual and technical measures ensuring confidentiality – as well as a competitive advantage on the market – has become a major concern of companies (and consumers).
This paper does not study all of the above-mentioned rights. Instead it focuses on two IPRs only. More precisely, it discusses how trade secrets and the sui generis right for databases fit with the algorithmic society from an EU and US law perspective. These forms of protection were introduced prior to the advent of Big Data, IoT and AI. In particular, trade secrets protection was first introduced (in different ways) at the national level within the geographical areas covered by this work; only at a later stage did both the US and EU adopt measures to ensure a more harmonized legal regime. On the other hand, the sui generis right for databases was introduced in the EU only, while the US has not implemented it and has consistently shown a clear reluctance towards such a form of protection.
This analysis tries to answer a two-fold question. The first one is whether data generated and managed within the frameworks of Big Data, IoT and AI meet the essential requirements to enjoy the above-mentioned protections of trade secrets and the sui generis database right, if any. The answer seems to be in the affirmative in most cases. The second question is whether trade secrets and the sui generis right are appropriate in a sharing-based paradigm, such as that provided by the computational innovation generated by the above-mentioned phenomena of Big Data, IoT and AI. In particular, the focus on this existing upstream protection helps to understand the bottlenecks that may be created at the downstream level, which challenge innovation and transparency as well as consumer protection. In other words, when both exclusive rights (such as the sui generis protection of databases) and quasi-IPRs (such as trade secrets) are present, innovation and the circulation of information are not necessarily promoted, and the existence of this double layer of protection seems to be beneficial to big businesses only. On the other hand, the mere presence of trade secrets as currently designed needs further study, but in principle it does not seem to exclude an encouragement to innovation and the circulation of information, and would therefore have more favorable results for SMEs and community interests as a whole.
In order to answer these questions, Section I will provide information on the current legal framework of trade secrets in EU and US law, and the sui generis right for databases in EU law. Section II will try to answer whether trade secrets and the sui generis right apply to the phenomenon of the algorithmic society. Finally, Section III will discuss whether exclusive rights and quasi-IPRs are suited to the newly emerging algorithmic society.
'Automated Decision-Making in the EU Member States Laws: The Right to Explanation and Other "Suitable Safeguards"' by Gianclaudio Malgieri
comments
The aim of this paper is to analyse the recently approved national Member State laws that have implemented the GDPR in the field of automated decision-making (prohibition, exceptions, safeguards). All national legislations have been analysed and, in particular, nine Member States' laws address automated decision-making by providing specific exemptions and relevant safeguards, as required by Article 22(2)(b) of the GDPR (Belgium, the Netherlands, France, Germany, Hungary, Slovenia, Austria, the United Kingdom and Ireland).
The approaches vary considerably: the scope of the provisions can be narrow (covering just automated decisions producing legal or similarly significant effects) or wide (covering any decision with a detrimental impact), and the specific safeguards proposed also differ widely.
Taking into account this overview, the article will also address the following questions: are Member States free to broaden the scope of automated decision-making regulation? Are ‘positive decisions’ allowed under Article 22, GDPR, as some Member States seem to affirm? Which safeguards can better guarantee the rights and freedoms of the data subject?
In particular, while most Member States mention just the three safeguards listed in Article 22(3) (i.e. the data subject's right to express one's point of view; the right to obtain human intervention; the right to contest the decision), three approaches seem very innovative: a) some States guarantee a right to legibility/explanation of algorithmic decisions (France and Hungary); b) other States (Ireland and the United Kingdom) regulate human intervention in algorithmic decisions through an effective accountability mechanism (e.g. notification, explanation of why such contestation has not been accepted, etc.); c) another State (Slovenia) requires an innovative form of human rights impact assessment on automated decision-making.