'When data is capital: Datafication, accumulation, and extraction' by Jathan Sadowski in (2019) Big Data & Society comments
The collection and circulation of data is now a central element of increasingly more sectors of contemporary capitalism. This article analyses data as a form of capital that is distinct from, but has its roots in, economic capital. Data collection is driven by the perpetual cycle of capital accumulation, which in turn drives capital to construct and rely upon a universe in which everything is made of data. The imperative to capture all data, from all sources, by any means possible influences many key decisions about business models, political governance, and technological development. This article argues that many common practices of data accumulation should actually be understood in terms of data extraction, wherein data is taken with little regard for consent and compensation. By understanding data as a form of capital, we can better analyse the meaning, practices, and implications of datafication as a political economic regime. ...
Data has become central and essential for increasingly more sectors of contemporary capitalism. Industries focused on technology, infrastructure, finance, manufacturing, insurance, and energy are now treating data as a form of capital. No longer is data just a concern of scientists or a by-product of other processes. Until recently, companies simply deleted data or chose not to collect it because paying for storage did not seem like a good investment (Oracle and MIT Technology Review Custom, 2016). Now, though, companies are clamouring to collect data – as much as they can, wherever they can. For the increasing number of companies participating in the ‘data economy’ or ‘digital economy,’ deleting data because of storage costs would be like burning piles of money or dumping barrels of oil down the drain because renting a warehouse was too much trouble. While data is not the same as profit, they share a similar logic. Just as we expect corporations to be profit-driven, we should now expect organisations to be data-driven; that is, the drive to accumulate data now propels new ways of doing business and governance. It is a key factor in major corporate decisions, such as Amazon’s acquisition of Whole Foods for $13.7 billion (Stevens and Haddon, 2017), and in government policies such as investment in urban sensor networks (Heinzmann, 2014). Indeed, as The Economist (2017b) has noted, ‘Industrial giants such as GE and Siemens now sell themselves as data firms.’ In short, data – and the accumulation of data – is a core component of political economy in the 21st century.
As a paradigm and logic, the idea of data-as-capital affects and transforms many spaces and sectors. Thanks to technologies like the Internet of Things, online platforms, and data analytics the list of things that now count as ‘digital products and services’ – and hence what counts as part of the digital economy – is growing at a rapid pace (Srnicek, 2016). This, in turn, means that data is a foundational form of capital for everything from the ‘smart home’ to the ‘smart city,’ finance to governance, production to distribution, consumer devices to enterprise systems, and much more (Kitchin, 2014). Without data, many of these technologies and organisations would not be able to operate, let alone be able to generate value.
This article contributes to the study of data within contemporary capitalism by analysing data as a form of capital. The existing literature on the social, political and economic dimensions of data treats data as a commodity. Whether implicitly or explicitly, analyses in both academic and media outlets typically take this analytical frame as a given. Yet, as this article makes clear, the distinction between capital and commodity is important and we cannot assume data is always a commodity. By understanding data as a form of capital, we can better analyse the nature and dynamics of digital capitalism. Rather than data collection being seen as simply a way of producing and obtaining commodities that are somehow converted into monetary value, datafication takes shape as a political economic regime driven by the logic of perpetual (data) capital accumulation and circulation. Framing data as a form of capital casts new light on the imperatives motivating contemporary organisations, the ways value can be derived from data, and the normative importance of data extraction.
'Regulating Big Tech expansionism? Sphere transgressions and the limits of Europe’s digital regulatory strategy' by Tamar Sharon and Raphaël Gellert in (2023) Information, Communication & Society comments
The increasing power of Big Tech is a growing concern for regulators globally. The European Union has positioned itself as a leader in the stride to contain this expansionism; first with the GDPR and recently with a series of proposals including the DMA, the DSA, the AI Act, and others. In this paper we analyse if these instruments sufficiently address the risks raised by Big Tech expansionism. We argue that when this phenomenon is understood in terms of ‘sphere transgressions’ – i.e., conversions of advantages based on digital expertise into advantages in other spheres of society – Europe’s digital regulatory strategy falls short. In particular, seen through the lens of sphere transgressions, Big Tech expansionism raises three risks in addition to well-known privacy and data protection risks, which this regulatory strategy does not properly address. These are: non-equitable returns to the public sector; the reshaping of sectors in line with the interests of technology firms; and new dependencies on technology firms for the provision of basic goods. Our analysis shows that this mismatch may be inherent to Europe’s digital strategy, insofar as it focusses on data protection – while data is not always at stake in sphere transgressions; on political and civil rights – while socio-economic rights may be more at risk; and on fair markets – while the sectors being transgressed into by Big Tech, such as health and education, are not markets that require fairer competition, but societal spheres which need protection from market (and digital) logics.
... The increasing power of Big Tech is a growing source of concern for governments and regulators globally. In the past decade, large technology corporations including Apple, Alphabet, Meta, Amazon, Microsoft, Palantir and others, have not only consolidated their dominance in their original spheres of activity, but have begun to expand into new areas (Lopez et al., 2022; Sharon, 2016; van Dijck et al., 2019), including health and medicine, education, public administration, humanitarian aid and welfare, science, agriculture, banking, transportation, and even space exploration (for an overview see Stevens et al., 2022 and other articles in this special issue). During this time, the European Union (EU) has sought to position itself as a global leader in the stride to regulate digital innovation and its potential harms (European Commission, 2020, p. 6); beginning with the General Data Protection Regulation (GDPR) and most recently with a series of ambitious new legislative proposals, including the AI Act, the Digital Services Act (DSA), the Digital Markets Act (DMA), the Data Governance Act (DGA), the Data Act (DA), and the European Health Data Space (EHDS). In this paper we ask if this ambitious digital regulatory strategy sufficiently addresses the risks raised by Big Tech expansionism. We argue that when Big Tech expansionism is understood in terms of ‘sphere transgressions’ (Sharon, 2021a, 2021b; Walzer, 1983) – i.e., conversions of advantages based on digital expertise into advantages and dominance in other spheres of society – this digital strategy falls short.
The paper is structured as follows: In Section 2 we briefly describe the ‘sphere transgressions’ framework as a means of understanding Big Tech expansionism into new areas of society, before discussing the novel risks that this analytic lens makes visible. In addition to the privacy and data protection risks that are typically associated with the practices of tech corporations, we identify three additional risks. These include non-equitable returns (i.e., exploitation of public data without fair compensation); a gradual reshaping of critical sectors in line with the interests and practices of tech actors; and the creation of new dependencies on tech corporations for the provision of basic goods. Most of our examples come from the health and medical sector in light of the focus of a research project carried out by one of us on Big Tech expansionism into health (Sharon, 2016, 2018). But we also draw on examples from other sectors, including education and agriculture. In Section 3 we first offer a brief description of the existing and proposed regulatory instruments that make up the EU’s digital regulatory strategy before discussing how each of these instruments can or cannot address the identified risks of Big Tech expansionism as sphere transgressions.
Our analysis shows that while these legal instruments are helpful for addressing numerous risks ensuing from digital developments, none of them properly address the risks we identify in Section 2. We argue that this can be explained in terms of the two-pronged approach underlying Europe’s digital regulatory strategy: on the one hand, a focus on fundamental rights and data protection as a means of protecting fundamental rights, and on the other, the development of fair (digital) markets. Concerning the first, data protection can only be a guarantor of fundamental rights when the collection and exchange of personal data is actually at stake. But, as we show, this is not necessarily the case in examples of sphere transgressions, which can be data-protection compliant and still raise other risks. Moreover, the catalogue of fundamental rights safeguarded through data protection (but to some extent also through the AI Act and the DSA) may be too narrow to encompass the broader socio-economic rights, such as health, education and welfare, which are at stake when tech corporations move into new sectors. Concerning the second focus, on fair markets, we argue that this promotes a view of sectors such as healthcare and education – which distribute basic social goods – as markets, the good governance of which requires no more than fair competition. We contend that this does little to protect the ‘publicness’ (Lopez et al., 2022) of public sectors susceptible to Big Tech expansionism, and may actually increase opportunities for transgressions rather than thwart them.
In light of this, we suggest several new directions for regulation which may be required to address the risks of Big Tech expansionism. These include: increased regulation for socio-economic rights; a decoupling of digitalisation and marketisation, thereby ensuring that the transformation of traditional social goods into computational goods nonetheless precludes them from being reconfigured as market goods; and developing regulation that seeks not just to protect the fundamental rights of individual (data) subjects and fair markets, but that also seeks to protect societal spheres.