04 July 2020

Algorithms and Democracy

'Are Algorithms a Threat to Democracy? The Rise of Intermediaries: A Challenge for Public Discourse' from AlgorithmWatch comments
A healthy democracy needs informed citizens. People are expected to be aware of important issues and public affairs in order to provide feedback on the political system. A diversity of viewpoints is therefore considered a core democratic value and a central public value in media law and policy. With the increasing importance of intermediaries, the question arises whether algorithmic news curation is a threat to democracy.
Fears that algorithmic personalization leads to filter bubbles and echo chambers are likely to be overstated, but the risk of societal fragmentation and polarization remains
  • The fear that large parts of the population are trapped in filter bubbles or echo chambers seems overstated. Empirical studies offer a much more nuanced view of how social media affects political polarization. Because our information repertoires are still broadly dispersed, we adapt our worldviews, remain open to opposing opinions, and are exposed to relevant societal issues. Several studies show that incidental exposure and network effects can even expand the diversity of information people encounter.
  • Nevertheless, there is evidence for possible polarization at the ends of the political spectrum. The current state of research suggests that echo chambers may arise under certain circumstances: when facilitated by homogeneous networks, highly emotionalized and controversial topics, and strong political predispositions. In particular, social media logics can reinforce affective polarization, because the features of social media platforms can lead to very stereotypical and negative evaluations of out-groups.
  • Moreover, social media may indirectly contribute to polarization by facilitating a distorted picture of the climate of opinion. Spiraling processes then begin because individuals overestimate the strength of their own opinion camp relative to other camps. The entire process leads to an overrepresentation of radical viewpoints and arguments in political discourse. At this point in the opinion-formation process, individuals are more vulnerable to being influenced by “fake news” on Facebook or Twitter. Strategic disinformation can therefore not only influence the media’s agenda through specific agenda-setting effects but also impact the public’s climate of opinion.
Social media are vulnerable to facilitating a rapid dissemination of disinformation, but exposure seems to be limited
  • There are orchestrated disinformation campaigns online, but data on the actual scope of and exposure to disinformation is scarce. 
  • The few available scientific studies suggest that the extent of the problem is likely to be overestimated since exposure to disinformation seems to be rather limited. 
  • Studies on the effects of disinformation on users show no persuasive effects but rather a confirmation bias: disinformation may therefore widen existing gaps between users with opposing worldviews, because it can confirm and strengthen pre-existing attitudes and (mostly right-wing) worldviews. In this context, political microtargeting poses a concern, as it can be used to disseminate content tailored to target groups particularly susceptible to disinformation.
  • More research on the scope of, interaction with, and individual and societal effects of disinformation is crucial to better assess the actual extent of the problem.
Social media contribute to the dissemination of incivility and hate speech
  • Incivility is widespread online and has real, negative effects on individual attitudes and the discourse climate.
  • A potentially serious problem lies in the indirect effects of incivility on recipients and journalists: incivility, including hate speech, in comment sections reduces the credibility of journalistic content, which can in the long term erode trust in journalism as an institution of social cohesion.
  • In addition, empirical evidence indicates that journalists react to incivility directed at them by shying away from controversial reporting or by trying to hide controversies. This is worrying because it hinders the free development of democratic discourse.
  • A related problem is that women in particular, having been victims of hate speech, stop participating in discussions. This in turn harms the free development of public discourse at the macro level if whole groups of the population are driven out.
  • Discourse moderation in comment sections in which journalists post sociable replies to comments seems to be an effective tool for containing and preventing incivility, including hate speech.
  • Measures inhibiting freedom of expression have to be carefully applied and can only be used to combat illegal content such as hate speech. 
Research agenda for platform governance 
It can be stated that fears of filter bubbles and echo chambers seem overstated. Echo chambers and polarization seem to emerge only at the fringes of the political spectrum. Worrisome, however, are indications that social media may indirectly contribute to polarization by facilitating a distorted picture of the climate of opinion. 
There are reasons for vigilance in the cases of disinformation and incivility. A serious assessment of the extent of the disinformation problem is hampered by the scarcity of available scientific data. The available studies suggest that an excessively alarmist political and societal debate should be avoided, but the actual scope of the issues remains unclear. Incivility and hate speech are prevalent phenomena that should be tackled with evidence-based policy measures. That means that (further) regulation, bans or deletion of content, which entail legal problems, are not necessarily the best solution. From the perspective of communication science, the main goal of any intervention should be to strengthen a reasonable, fruitful and free democratic discourse.
In this context, we emphasize that existing successful approaches (e.g., community management in the form of moderation that does not entail deleting content to contain and prevent incivility) should be extended and consistently applied. In addition, further scientific evidence is needed, in particular on disinformation, in order to investigate the extent of the phenomenon and its consequences for public discourse and society in more detail, so that evidence-based measures can be developed. From a communication science perspective, it is precisely at this point that regulation appears most useful, especially with respect to demystifying the inner workings of “black box” algorithms and providing relevant data for research purposes. Hence, without access to Google’s or Facebook’s internal data, it is hard to reach firm conclusions. Therefore, it is critical to continue monitoring the evolution of digital news markets and the ways in which users are exposed to news on social media platforms. In particular, structural changes in the news market require the attention of regulators and policy makers. Intermediaries establish economic power and create new dependencies.
The 69-page study concludes
Platform Transparency Deficits 
Information intermediaries are increasingly important actors in high-choice media environments. They change the structures and processes of how communication in digitalized societies proceeds—with potentially profound consequences for the functioning and stability of our democracies. In contrast to existing media organizations, intermediaries wield far broader power because they internalize markets: Users of intermediaries choose their sources within an environment whose logic is set by the platform itself. The open market—whether the selection of newspapers at a newsstand or the list of channels available on TV—is typically regulated to prevent anti-competitive behavior, and it guarantees some degree of transparency and a level playing field. Market participants can evaluate the behavior of competitors simply by “walking over and having a look”. Internal markets for news, however, are opaque to individual and institutional observers. Within them, users of intermediaries are presented with a personalized pre-selection of content, but neither other users nor content producers can easily identify what those selections are. This implies two transparency deficits:
(1) Individual users only see their own recommendations. They have no way of knowing what information was hidden from them, and 
(2) they cannot observe what information was presented to other users.
Outside actors (such as competitors, content suppliers, media authorities, and researchers) suffer from these limitations: They have no way of observing the treatment and behavior of individuals or groups of users. Information intermediaries, therefore, create new potential impacts (through personalization), along with detailed measurements thereof (creating what Webster, 2010 termed “user information regimes”)—but hide both within a proprietary product. As intermediaries’ importance to public opinion formation and political processes grows, societies will need to encourage effective transparency in order to safeguard a level playing field in information dissemination. As identified above, two elements are crucial for such transparency to work:
(1) Individual users should be empowered with regard to the recommendations presented to them—they should be able to download everything that was presented, along with a sensible description of the processes that produced this exact set. Doing so would help individuals understand whether they were the subject of a biased selection and enable them to seek legal recourse in that case. Such a model is unproblematic from the perspective of user privacy, as users could only receive information which they can view anyway, yet it would facilitate the act of “whistleblowing” in the case of perceived wrongdoing. 
(2) An arguably more difficult goal would be the creation of some form of transparency that encompasses not individual users but the overall impact of intermediaries. Such a measure is necessary for monitoring the societal influence that arises from the use of platforms, including endogenous algorithmic effects but also encompassing external factors, such as manipulation attempts from malevolent third parties (such as hackers and foreign government agencies). Transparency on this level could, for example, take the form of providing privileged parties (state offices, researchers or trusted NGOs) with accurate aggregate information on the demographic makeup of user populations, the prevalence of news usage and other key insights. It might also be worth considering making less detailed data publicly available, but such a decision would need to be weighed against the potential negative effect of facilitating targeted manipulation.
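These two deficits also suggest what a user-facing remedy could look like in practice. As a purely illustrative sketch of a remedy for deficit (1), a per-user transparency export might pair every recommendation shown with a plain-language account of why it was selected. All field names below are our own assumptions, not any platform's actual data model:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Set

# Hypothetical schema for the per-user transparency export described in
# point (1) above. Everything here is an illustrative assumption.

@dataclass
class RecommendationRecord:
    item_id: str                  # identifier of the recommended item
    source: str                   # publisher or account it came from
    shown_at: datetime            # when it appeared in the user's feed
    rank: int                     # position within the personalized selection
    selection_factors: List[str]  # plain-language reasons, e.g.
                                  # ["followed source", "popular among similar users"]

@dataclass
class TransparencyExport:
    user_id: str
    records: List[RecommendationRecord] = field(default_factory=list)

    def sources_seen(self) -> Set[str]:
        """Distinct sources the user was actually exposed to: a first,
        crude self-check of the diversity of one's own feed."""
        return {r.source for r in self.records}
```

Because such an export would contain only what the user could already see, it raises none of the privacy questions that the aggregate transparency in (2) must grapple with.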
 In general, transparency should help users make informed choices. But it is clear that “digital transparency—whether it is enacted through technological solutions or more classical administrative and organizational means — will not on its own provide an easy solution to the challenges posed by the growing role of platforms in political and public life” (Gorwa and Ash, 2020, p. 20). The ongoing debates on the implementation of the transparency rules in the German Media State Treaty clearly support this statement (Dogruel, Stark, Facciorusso, and Liesem, 2020).
Key Data Needs 
The combination of novel impacts and a lack of transparency creates three distinct classes of threat to societies. Protecting against each threat requires access to specific data—most crucially the individual personalized results presented to users, the actions taken by intermediaries to change the flow of information, and the overall impact on the whole user population—that is currently unavailable (Tucker et al., 2018). 
(1) As illustrated by existing anti-trust cases against technology companies, it is possible that (for whatever reason) intermediaries fail to adhere to internal or external guidelines, resulting in detrimental treatment of users, advertisers, content providers, or other actors. In contrast to past well-documented legal disputes (e.g., in competition regulation), affected parties will have difficulty monitoring for, detecting, and collecting evidence of unfair treatment because (due to personalization and highly individual behavior) they do not have access to the recommendations that intermediaries produce for their users. Assessing (and proving) unfair bias in intermediaries would require access to a representative set of recommendations, so that differences in consumption could be clearly attributed. Consider, for example, a hypothetical search engine that systematically alters access to politically cross-cutting information—displaying only conservative results to conservatives and only liberal results to liberals. Creating a legal case against such a platform would require access to a representative set of users (anecdotal evidence could always be discounted as spurious or chance findings). For each of those users, researchers would identify their political leaning and then record the search results they obtained. 
Such data collection is typically unfeasible in practice for two reasons: (a) Platforms offer no way of accurately recording the output of an intermediary for a single user. Neither third parties nor the users themselves have access to technical interfaces that would show a comprehensive dataset of personalized recommendations, such as the personal news feed on Facebook. Even though users can access the feed visually in their browser, considerable effort would be required to extract it in an automated fashion (i.e., through web scraping). Furthermore, there is currently no company that provides such data, and researchers’ capabilities to obtain them independently are increasingly limited by the locked nature of proprietary smartphones (Jürgens, Stark and Magin, 2019). The second factor that encumbers research is (b) the unavailability of a suitable sampling strategy. Intermediaries are, of course, cognizant of their entire population of users, along with key socio-demographic data (required, e.g., for selling personalized advertising space). Researchers, on the other hand, usually have no way of creating random samples from a platform’s population. Since the available information on the socio-demographic makeup of the userbase is typically drawn from moderately sized surveys (N in the thousands), attempts to create samples that are representative of the intermediary’s national users offer somewhat limited precision (Jürgens et al., 2019). 
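If a representative sample of users and their results could nevertheless be assembled, the audit analysis itself would be comparatively simple. A minimal sketch in Python, using simulated data as a stand-in for the platform access that is currently unavailable:

```python
import random

random.seed(1)

# Simulated stand-in for the audit data described above: each user has a
# known political leaning and a list of the result leanings they were shown.
# In a real audit, both would come from a representative sample of users.
users = [
    {"leaning": leaning,
     "results": [random.choice(["conservative", "liberal"]) for _ in range(10)]}
    for leaning in ["conservative"] * 500 + ["liberal"] * 500
]

def cross_cutting_share(user):
    """Fraction of results whose leaning differs from the user's own."""
    return sum(r != user["leaning"] for r in user["results"]) / len(user["results"])

for group in ("conservative", "liberal"):
    shares = [cross_cutting_share(u) for u in users if u["leaning"] == group]
    print(group, round(sum(shares) / len(shares), 3))

# A systematic gap between the groups' average cross-cutting shares (absent
# here by construction, since results are random) would be the kind of
# evidence the hypothetical legal case described above would need.
```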
(2) Outside actors (individual, institutional and state-sponsored) frequently attempt to manipulate intermediaries in order to gain influence over citizens—e.g., through misinformation, disinformation, and the manipulation of public opinion perception (Lazer et al., 2017; Magin et al., 2019). Although intermediaries spend considerable resources on the containment and removal of such attempts, two risks remain that are outside the reach of the companies themselves: (a) Without access to large-scale data containing purported manipulation attempts, external watchdogs cannot perform independent audits in order to identify overlooked external influences. (b) Problematic content is also routinely deleted, so that external watchdogs cannot scrutinize and understand those attacks (an exception is Twitter, which regularly publishes datasets on malevolent campaigns). Intermediaries should provide trustworthy actors with a way to perform their own, independent attempts at large-scale detection of manipulation, including data that were already removed by in-house systems. The simplest strategy that would enable such attempts is making the raw public flow of information accessible through a technical interface (an API), as Twitter has done: It not only offers a true random sample of tweets but also includes information about which of them are removed later on. Furthermore, the company offers a growing list of datasets containing all content produced by bot networks and state-sponsored manipulation campaigns. Platforms with more restrictive privacy contexts, such as Facebook (where much of the flow of information is not visible to the broader usership or public), could still allow automated analyses, for example by offering to run researchers’ models without providing access to the data itself. 
(3) In addition to institutional actors, harmful dynamics (such as radicalizing echo chambers, Stark et al., in press) may develop within intermediaries, even in the absence of external influence. Such dynamics need not have a clearly identifiable culprit; they could equally arise from the interaction of multiple individuals that leads to a mutual reinforcement of harmful tendencies. Detecting such structural phenomena is contingent on a complete picture of users’ interaction networks. Furthermore, singular snapshots do not provide much insight; instead, the development must be traced over time in order to assess the true impact as well as causes. Researchers should, therefore, gain access to representative, longitudinal, and detailed data on those (semi-public) parts of intermediaries that pertain to public debates. This includes first and foremost the discussions surrounding media, politics, and social issues. 
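To make threat (3) concrete: given the kind of longitudinal interaction data called for above, one crude indicator of echo-chamber structure is how strongly users interact with like-minded others. A minimal sketch, assuming a hypothetical interaction network annotated with users' political leanings (Python with networkx):

```python
import networkx as nx

# Hypothetical interaction network: nodes are users, edges are replies or
# shares between them. In a real study this would come from representative,
# longitudinal platform data of the kind argued for above.
G = nx.Graph()
G.add_nodes_from(["a", "b", "c", "d"], leaning="left")
G.add_nodes_from(["e", "f", "g", "h"], leaning="right")
G.add_edges_from([("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"),   # within-group
                  ("e", "f"), ("e", "g"), ("f", "h"), ("g", "h"),   # within-group
                  ("d", "e")])                                      # one cross-group tie

# Attribute assortativity ranges from -1 to 1; values near 1 mean users
# interact almost exclusively with like-minded others, one signature of an
# echo chamber. As the text notes, a single snapshot proves little: the
# coefficient would have to be tracked over time.
print(nx.attribute_assortativity_coefficient(G, "leaning"))
```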
New Models for Partnerships 
Some attempts have been made to increase transparency; while they address the issues outlined above, they have not yet achieved significant success. Following an initiative from King and Persily (2019), a consortium of scientists cooperated with Facebook in order to create an institution (Social Science One) and a process that would allow scholarly usage of a limited collection of pre-defined datasets. The project received much criticism from scientists, who warned that it would decrease transparency in research practices, lead to a dependence on Facebook that would encumber critical research, and create divisions between privileged “insiders” with access to data and the rest of the field (Bruns, 2019c). So far, Facebook has also failed to facilitate the agreed-upon access, frustrating even the existing scientific partners (Statement from the European Advisory Committee to Social Science One). Despite its pragmatic appeal, the cooperative model underpinning Social Science One has a fatal conceptual flaw: Even though it provides some access to some data, that access is pre-defined and limits researchers to a specific approach in tackling the above-mentioned threats. Participating teams are prevented from finding problems, answers, and solutions that intermediaries themselves did not identify. A cooperation between an intermediary and scientific partners can only succeed in generating trust if researchers are given the freedom to seek and find potential negative effects. Where such inquiries are prohibited ex ante, through pre-defined datasets or topical questions, both sides suffer from a lack of credibility. 
There is also a deeper issue with the proposed cooperative model: Just as independent scholarly work from different institutes is required for long-term trustworthy, rigorous, and reliable scientific insights, independence is a defining feature of work on intermediaries. Only when external observers are free to implement an autonomous process for data collection, analysis, and interpretation can they serve as the much-needed check and balance that society requires (and demands). Researchers’ ability to do so ultimately hinges on two key ingredients, both mentioned above: the capacity to obtain or create high-quality representative samples, and the availability of tools that record digital content, recommendations and user behavior within intermediaries. While the first is certainly possible (if perhaps expensive), the second remains under strong pressure from the progressive “lock-down” of platforms and mobile devices (Jürgens et al., 2019).
Diversity as Policy Goal 
From a normative point of view, diversity is the key term: “Media pluralism and diversity of media content are essential for the functioning of a democratic society,” as the Council of Europe (2007) put it, because functioning democracies require all members of society to be able to participate in public debate — and to be able to fully participate in this democratic debate, citizens need to encounter a diversity of high-quality information and opinions. Or, as the Council of Europe (2007) aptly phrased it, the right to freedom of expression “will be fully satisfied only if each person is given the possibility to form his or her own opinion from diverse sources of information.” The close link between media diversity and democratic participation may also explain the scope of the public debate about the rise of platforms and their growing influence in the information landscape. The advances of at least some of these platforms into the business of distributing and aggregating media content have fundamentally changed our news ecosystem. An important aspect in this context is the newly created economic dependencies: the better part of advertising revenue flows to the platform providers, Google and Facebook. This raises the normative question: to what extent should diversity matter in the context of social media platforms? 
Communications researchers (e.g., Moeller, Helberger, and Makhortykh, 2019) emphasize that, to preserve a news ecosystem that has come under considerable strain, the public needs to receive the curated news supply provided by traditional mass media. How long traditional mass media can still perform this function is uncertain. If, at some point, they no longer can, the underlying conditions of the news ecosystem, and consequently of opinion formation, will fundamentally change. If traditional mass media disappear, high-quality news products will no longer be available, and the impact of low-quality information on processes of opinion formation will rise. If only softened and highly personalized content is distributed, the news ecosystem will change dramatically, with potentially negative consequences for democratic societies. 
Against this background, the social dynamics of media diversity become apparent (Helberger, 2018, p. 156). Thus, “media diversity on social media platforms must be understood as a cooperative effort of the social media platform, media organizations, and users. The way users search for, engage with, like, shape their network, and so forth has important implications for the diversity of content, ideas and encounters that they are exposed to. Similarly, the way the traditional media collaborate with information intermediaries to distribute content and reach viewers impacts structural diversity” (Helberger, 2018, p. 171). Put differently, future diversity policies must therefore go beyond the traditional framework and generate a new conception of media diversity, which addresses the different actors (platforms, users, and media organizations) together. These future policies must, first and foremost, ensure that diversity reaches users. 
A potential way of increasing exposure diversity could be to employ a design that focuses on serendipity and/or on diversity as a principle (see considerations on “diversity by design”: Helberger, 2011). Such a design would, for example, focus less on search engine ranking and would encourage users to have a closer look at different teasers and click on more results. In addition, users should have the opportunity to choose between, or weight, different filtering and sorting criteria. Such changes could also create more diversity in Facebook’s news feed; e.g., Facebook could implement the ability for users to adopt a different point of view, exposing them to a feed with totally new perspectives (Ash, Gorwa, and Metaxa, 2019). 
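Read in engineering terms, “diversity by design” amounts to a re-ranking step that trades pure relevance off against exposure to viewpoints the user has not yet seen, with the weighting under the user's control, as suggested above. The following sketch is our own illustration of that idea, not a description of any existing recommender:

```python
def diversity_rerank(items, diversity_weight=0.5):
    """Greedy re-ranking: at each step, pick the item with the best blend of
    relevance and novelty relative to what has already been selected.

    `items` are dicts with a `relevance` score in [0, 1] and a `viewpoint`
    label; `diversity_weight` is the user-adjustable knob suggested in the
    text (0 = pure relevance, 1 = pure diversity).
    """
    remaining, ranked = list(items), []
    while remaining:
        seen = {it["viewpoint"] for it in ranked}

        def score(it):
            novelty = 0.0 if it["viewpoint"] in seen else 1.0
            return (1 - diversity_weight) * it["relevance"] + diversity_weight * novelty

        best = max(remaining, key=score)
        ranked.append(best)
        remaining.remove(best)
    return ranked

feed = [
    {"title": "A", "relevance": 0.9, "viewpoint": "mainstream"},
    {"title": "B", "relevance": 0.8, "viewpoint": "mainstream"},
    {"title": "C", "relevance": 0.5, "viewpoint": "local"},
    {"title": "D", "relevance": 0.4, "viewpoint": "opposing"},
]
# With the knob at 0.5, C and D are promoted above B because each introduces
# a viewpoint the user has not yet seen in this session.
print([it["title"] for it in diversity_rerank(feed, 0.5)])
```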
As the debate about the impact of algorithmic news recommenders on democracy is ongoing, diversity-sensitive design should be taken into account as part of a possible solution. For such solutions to work, it should be clear that different perspectives on the democratic role of news recommenders imply different design principles for recommendation systems (Helberger, 2019); that is, an explicit normative conception of their democratic potential is critical. It may also become clear that we need to work towards a coherent mix of appropriate government regulation, co-regulation, and platform-specific self-regulation in order to minimize the negative effects of the threats discussed.

Currencies

'After Libra, Digital Yuan and COVID-19: Central Bank Digital Currencies and the New World of Money and Payment Systems' (European Banking Institute Working Paper Series 65/2020) by Douglas W. Arner, Ross P. Buckley, Dirk A. Zetzsche and Anton Didenko comments
Technology, money and payment systems have been interlinked from the earliest days of human civilization. But of late technology has reshaped money and payment systems to an extent and at a speed never before seen. Milestones include the establishment of M-Pesa in Kenya in 2007 (creating mobile money systems), Bitcoin in 2009 (triggering in time the explosive growth in distributed ledger technology and blockchain), the announcement of Libra in 2019 (triggering a fundamental rethinking of the potential impact of technology on global monetary affairs), and the announcement of China’s central bank digital currency – the Digital Currency / Electronic Payment (DCEP), referred to herein as the Digital Yuan (marking the first launch by a major economy of a sovereign digital currency). 
The COVID-19 pandemic and crisis of 2020 has spurred electronic payments in ways never before seen. In this paper, we ask the question: In the context of the crisis and beyond, what role can technology play in improving the effectiveness of money and payment systems around the world? 
This paper analyses the impact of distributed ledger technologies and blockchain on monetary and payment systems. It particularly considers the policy issues and choices associated with cryptocurrencies, stablecoins and sovereign (central bank) digital currencies. We examine how the catalysts reshaping monetary and payment systems around the world – Bitcoin, Libra, China’s DCEP, COVID-19 – challenge regulators and give rise to different levels of disruption. While regulators could safely ignore the thousands of Bitcoin progenies, Facebook’s proposed Libra, a global stablecoin, brought an immediate and potent response from regulators globally. This proposal by the private sector to move into the traditional preserve of sovereigns – the minting of currency – was always likely to provoke a roll-out of sovereign digital currencies by central banks. China has moved first, among major economies, with its Digital Yuan – the initiative that may well trigger a chain reaction of central bank digital currency issuance across the globe. 
In contrast, in the COVID-19 crisis, we argue most central banks should focus not on rolling out novel forms of blockchain-based money but rather on transforming their payment systems: this is where the real benefits will lie, both in the crisis and beyond. Looking forward, neither the extreme private nor the extreme public model is likely to prevail. Rather, we expect the reshaping of domestic money and payment systems to involve public central banks cooperating with (new and old) private entities, which together will provide the potential to build better monetary and payment systems at the domestic and international level. Under this model, for the first time in history, technology will enable the merger of the monetary and payment systems.

Datafication

'Datafication and the Welfare State' by Lina Dencik and Anne Kaun in (2020) 1(1) Global Perspectives 12912 comments
 Both vehemently protected and attacked in equal measure, the welfare state as an idea and as a policy agenda remains as relevant as ever. It refers not only to a program of social welfare and the provision of social services, but also to a model of the state and the economy. According to Offe (1984), the welfare state in advanced capitalist economies is a formula that consists of the explicit obligation of the state apparatus to provide assistance and support to those citizens who suffer from specific needs and risks characteristic of the market society, and it is based on a recognition of the formal role of labor unions in both collective bargaining and the formation of public policy. Although actively dismantled in recent decades as globalization and neoliberalism have taken hold of much of the modern world-system, its future continues to be fought over. It serves as a model for society that is seen to privilege a commitment to decommodification, universal access, and social solidarity as a way to overcome the most prominent contradictions of capitalism. A product of the twinned global crises of the Great Depression and the Second World War, the modern welfare state therefore encapsulates a moment of political and economic settlement, a mechanism of stabilization that arguably could emerge only out of such crises. 
From the outset, technology, particularly information and communication technologies, has played a key role in the development of the welfare state (Hobsbawm 1994). It has been instrumental in the creation of bureaucracies and forms of population management that have long been central to the way the welfare state is administered. Gunnar and Alva Myrdal, for example, famously argued for social engineering based on statistics and the use of technology to solve the population crisis of Sweden in the 1930s and 1940s. Their suggestions are now considered central to the ideas and cornerstones of the Nordic welfare state model (Kananen 2014). The creation of databases and the monitoring of citizens were from early on a fundamental part of assessing population needs and determining the allocation of resources, a type of surveillance that has been the subject of much critique for creating categories of “deserving” and “undeserving” citizens (Offe 1984). At the same time, the advent of digitization has also been seen as a challenge to the welfare state and its ability to deliver on its promises, disrupting labor relations, undermining social security, and changing the parameters of state governance. With growing trends such as mass data collection, automation, and artificial intelligence, these tensions have only intensified, putting the welfare state into further question (Petropoulos et al. 2019). 
At the time of writing this introduction, the question of not only the future of the welfare state but also how technology intersects with it has gained new pertinence as we find ourselves in the midst of another global crisis. The global pandemic brought about by the rapid spread of COVID-19 has put social welfare questions and the role of the state at the top of the agenda once more. The crisis is seen to have prompted a return of the Leviathan state, a social contract with an absolute sovereign in which the state provides the ultimate insurance against an intolerable human condition (Mishra 2020), and it has provided renewed impetus for demands for universal health care, stable employment, and a basic income (Standing 2020). Certainly, initial responses to the pandemic and ongoing lockdowns across the world have aligned around state interventions in the economy not seen in a generation, with governments designing various packages of increased public spending, which has (re)invited a rhetoric of the importance of economic planning and strong social security. 
Technology is proving to be at the heart of this crisis and of how the welfare state might emerge from it. As “social distancing” speeds up the transition to social and economic life online, often presented as a seamless process, Big Tech has quickly (in partnership with governments) established itself as our (new) infrastructure for everything from health to education to work (Bharthur 2020). At the same time, Big Tech is also presented as a solution to the crisis through extensive data collection, contact tracing, and certification. At the time of writing, the big data analytics company Palantir is in talks with a number of governments, including those of the United Kingdom, Germany, and France, to provide data infrastructure for health services during the pandemic, and Google and Apple have announced a joint venture to develop infrastructure for contact-tracing apps that determine if an individual has been in close proximity to someone who has tested positive for COVID-19 (Fouquet and Torsoli 2020; Kelion 2020). Furthermore, the EU Commission has requested metadata from large mobile phone carriers, including Deutsche Telekom and Orange, to calculate mobility patterns and track the spread of the coronavirus across Europe (Scott, Cerulus, and Kayali 2020). It is claimed that only anonymized and aggregated data will be collected, and that the data will be used not to control or sanction lockdown measures but to predict where medical supplies will be needed most. 
These initiatives introduce new questions about the nature of surveillance in governance, the place of data protection frameworks such as the EU’s General Data Protection Regulation (GDPR), and the role of private companies in the delivery of public services, all of which form an important part of the contemporary debate on technology and the welfare state. As Baker (2020, para. 13) puts it, “for governments looking to monitor their citizens even more closely, and companies looking to get rich by doing the same, it would be hard to imagine a more perfect crisis than a global pandemic.” Moreover, the turn to data and the reliance on data-driven systems in governance introduces key epistemological and ontological assumptions about what constitutes relevant social knowledge for decision-making and how individuals and populations should be understood and managed. Data, on this premise, needs to be collected in as large a quantity as possible (total information capture) and processed through automation, with a view to calculating all possible outcomes—a knowing of all risks—so as to preempt them before they occur (Andrejevic 2019). While a global crisis like the one we are currently in might present itself as a state of exception in these terms, the trend of datafication across social life was already firmly in place. 
What does it mean to organize the welfare state around this trend of datafication? With this special issue, we take stock of this question and explore the multiple ways in which the practices, values, and logics that underpin the advancement of datafication intersect with the practices, values, and logics that form the basis of the public services that we commonly associate with the modern welfare state. The idea for this special issue emerged out of discussions in the Nordic research network Datafication, Data Inequalities and Data Justice, of which we are both members. It is perhaps no surprise that it is a Nordic context that spurred on the engagement with the welfare state as this has long been a central feature of Nordic societies, both as an idea and in practice. However, the question of how datafication impacts public services, particularly in relation to social welfare, is a global one and one that cannot be universalized, whether in terms of data-driven developments or their implications (Milan and Treré 2019). At the same time, the history of the modern welfare state is one that has most frequently been associated with Europe in what Judt (2007) has described as the “social-democratic moment” of the postwar period. This history is reflected in our contributions that predominantly engage with European and Western settings, while doing so in the context of globalization. Many of the issues discussed in our contributions are being raised elsewhere as technology infrastructures globalize and standardize practices (cf. Booth 2019).

Data Localisation

'“Data localization”: The internet in the balance' by Richard D. Taylor in (2020) 44(8) Telecommunications Policy comments
There is a steady global trend towards “Data Localization,” laws by which data is required to be maintained and processed within the geographic boundaries of its state of origin. This development has raised concerns about its possible adverse impacts on emerging data-intensive technologies such as Cloud services/E-commerce, Big Data, Artificial Intelligence and the Internet of Things (collectively, the Embedded Infosphere). The inability to reach an international agreement on rules for cross-border data flows may have significant adverse consequences for all future users of the Internet. 
The basis of Data Localization is grounded in two distinct but inter-related policy models: Data Sovereignty and Trans-Border Data Flows (TBDFs). These two concepts have different origins. “Data Sovereignty” is derived from the historic power of a state to exercise absolute and exclusive control within its geographic borders. Policies behind TBDFs arose in Europe following World War II, primarily motivated by Nazi use of early proto-computers to help round up Jews and others. As they have evolved, TBDF policies have been directed primarily at protecting personal data and privacy. 
This article first examines the issues of: 1) “Information Sovereignty” and 2) TBDFs. It then describes the arguments for and against “Data Localization,” offers some examples of strong localization policies (Russia, China), and summarizes contesting policy proposals. It then contextualizes TBDF with issues of human rights (free flow of information) and privacy. 
While the utility of an international agreement on TBDFs is clear, the differences in approaches are tenacious. For the free-market developed world (e.g., EU, OECD), the path forward seems to lead through policy convergence to compatible rules, with differentiated levels of data protection and accountability. It is far from clear whether these rules will address, in a mandatory way, issues of the “free flow” of information in the human rights sense. At the same time, there are countries (e.g., the BRICS), representing a majority of the world's population, in which political and cultural resistance will produce stringent Cyber Sovereignty and Data Localization policies with few if any human rights components. 
The article concludes that the more the Internet is “localized”, the more attenuated its benefits will become. The negative consequences of Data Localization will become increasingly obvious as new, data-intensive technologies become ubiquitous, creating a condition of “Data Dependence”. It is projected that in the future the nations with the least amount of Data Localization and the most open flow of information will be the most successful in benefiting from new data-intensive embedded, networked technologies. This will most likely be characterized by values adopted as policies and practices in the EU.

Apple and Competition Policy

'The Antitrust Case Against Apple' by Bapu Kotapati, Simon Mutungi, Melissa Newham, Jeff Schroeder, Shili Shao and Melody Wang comments
This article explores several potential antitrust claims against Apple - namely tying, essential facilities, refusal to deal and monopoly leveraging. We argue that the Apple ecosystem's large revenue share in terms of app transactions, lock-in effects and consumers' behavioral bias in online markets give the iPhone maker monopoly power as a mobile platform. Apple has exploited its market power to illegally tie the distribution of digital goods to its proprietary in-app purchase system to impose a 30% tax and extract supracompetitive profits, leading to higher app prices and reduced innovation. Moreover, Apple has excluded rivals and favored its own apps by downgrading competitors' discovery and promotions, blocking certain rivals entirely from the App Store, and limiting others' access to key APIs, in some cases right after copying their apps. In conjunction with the discriminatory application of the 30% tax, Apple's conduct towards major multi-homing apps such as Spotify reduces cross-platform competition with Android. These anticompetitive practices prolong and expand Apple's monopoly at the expense of competition.

02 July 2020

Spam

The Australian Communications and Media Authority (ACMA) has announced that dominant retailer Woolworths Group Limited has paid a $1,003,800 infringement notice and agreed to a court-enforceable undertaking in response to significant breaches of the Spam Act 2003 (Cth). The penalty is the largest issued by ACMA under that Act.

ACMA found over five million breaches of the Act by Woolworths in marketing emails sent to consumers between October 2018 and July 2019 after those people had unsubscribed from previous messages. 

ACMA states its investigation found Woolworths’ systems, processes and practices were inadequate to comply with spam rules, with ACMA Chair Nerida O’Loughlin commenting
The spam rules have been in place for seventeen years and Woolworths is a large and sophisticated organisation. The scale and prolonged nature of the non-compliance is inexcusable. Woolworths failed to act even after the ACMA had warned it of potential compliance issues after receiving consumer complaints. 
Consumers claimed that they had tried to unsubscribe on multiple occasions or for highly personal reasons, but their requests were not actioned by Woolworths because of its systems, processes, and practices.

 In its court-enforceable undertaking Woolworths has committed to appoint an independent consultant to review its systems, processes, and procedures, to implement improvements, and to report to ACMA. Woolworths has also committed to undertake training, and to report all non-compliance it identifies to ACMA for the term of the undertaking.

O'Loughlin comments
Our enforcement action, a substantial infringement notice and a comprehensive three-year court-enforceable undertaking, is commensurate with the nature of the conduct, the number of consumers impacted and the lack of early and effective action by Woolworths. The ACMA’s actions should serve as a reminder to others not to disregard customers’ wishes when it comes to unsubscribing from marketing material.

Competition and Digital Platforms

The Competition and Markets Authority (CMA), the UK’s primary competition and consumer authority ('an independent non-ministerial government department with responsibility for carrying out investigations into mergers, markets and the regulated industries and enforcing competition and consumer law'), has announced a Digital Markets Taskforce with the Information Commissioner's Office and Ofcom.

The Taskforce was originally commissioned by the government. It will build on the conclusions of the CMA's platforms market study (a counterpart to the ACCC's Digital Platforms Inquiry), as well as looking more widely across all platforms to consider the functions, processes and powers which may be needed to promote competition. It will advise government on how a new regulatory regime for digital markets should be designed. To inform its work, the CMA is publishing a call for information and writing to a number of platforms, seeking views and information. The Taskforce will deliver advice to government by the end of 2020.

 The CMA is calling on the UK government to 'introduce a new pro-competition regulatory regime to tackle Google and Facebook’s market power'.

Its media release states
The dynamic nature of digital advertising markets and the types of concerns identified by the Competition and Markets Authority (CMA) in its market study are such that existing laws are not suitable for effective regulation. It is therefore recommending a new pro-competition regulatory regime to govern the behaviour of major platforms funded by digital advertising, like Google and Facebook. 
This recommendation to government is the result of a year-long examination of the markets. The CMA used its statutory information gathering powers to lift the lid on how advertising revenue drives the business model of major platforms. 
The CMA’s concerns 
UK expenditure on digital advertising was around £14bn in 2019, equivalent to about £500 per household. About 80% of this is earned by just two companies: Google and Facebook. Google enjoys a more than 90% share of the £7.3 billion search advertising market in the UK, while Facebook has a share of over 50% of the £5.5 billion display advertising market. Google’s revenue per search has more than doubled since 2011, while Facebook’s average revenue per user has increased from less than £5 in 2011 to over £50 in 2019. 
The services provided by Facebook and Google are highly valued by consumers and help many small businesses to reach new customers. While both originally grew by offering better services than the main platforms in the market at the time, the CMA is concerned that they have developed such unassailable market positions that rivals can no longer compete on equal terms:
  • Their large user base is a source of market power – it means that Facebook is a “must-have” network for users to remain in contact with each other, and enables Google to train its search algorithms in ways that other search engines cannot.
  • Each has unmatchable access to user data, allowing them to target advertisements to individual consumers and tailor the services they provide.
  • Both use default settings to nudge people into using their services and giving up their data – for example Google paid around £1.2bn in 2019 to be the default search provider on mobile devices and browsers in the UK, while Facebook requires people to accept personalised advertising as a condition for using their service.
  • Their presence across many different markets, partially acquired through many acquisitions over the years, also makes it harder for rivals to compete.
Each of these factors individually presents a potential barrier to new competition, but together they work to reinforce each other and are extremely difficult to overcome. 
These issues matter to consumers. Weak competition in search and social media leads to reduced innovation and choice, as well as to consumers giving up more data than they would like. Further, if the £14bn spent in the UK last year on digital advertising is higher than it would be in a more competitive market, this will be felt in the prices for hotels, flights, consumer electronics, books, insurance and many other products that make heavy use of digital advertising. The CMA found that Google’s prices are around 30% to 40% higher than Bing’s when comparing like-for-like search terms on desktop and mobile. 
Google and Facebook’s market positions also have a profound impact on newspapers and other publishers. The CMA has found that newspapers are reliant on Google and Facebook for almost 40% of all visits to their sites. This dependency potentially squeezes their share of digital advertising revenues, undermining their ability to produce valuable content. 
The need for a new regime 
The scale and nature of these issues mean that a new pro-competition regulatory regime is needed so that users can continue to benefit from innovative new services, rival businesses can compete on a level playing field, and publishers do not find their revenues unduly squeezed. 
The CMA’s proposals are consistent with those made by Professor Jason Furman in his report for the government. 
The CMA has proposed that within the new regime a ‘Digital Markets Unit’ should have the ability to:
  • enforce a code of conduct to ensure that platforms with a position of market power, like Google and Facebook, do not engage in exploitative or exclusionary practices, or practices likely to reduce trust and transparency, and to impose fines if necessary. 
  • order Google to open up its click and query data to rival search engines to allow them to improve their algorithms so they can properly compete. This would be designed in a way that does not involve the transfer of personal data to avoid privacy concerns. 
  • order Facebook to increase its interoperability with competing social media platforms. Platforms would need to secure consumer consent for the use of any of their data. 
  • restrict Google’s ability to secure its place as the default search engine on mobile devices and browsers in order to introduce more choice for users. 
  • order Facebook to give consumers a choice over whether to receive personalised advertising. 
  • introduce a “fairness-by-design” duty on the platforms to ensure that they are making it as easy as possible for users to make meaningful choices. 
  • order the separation of platforms where necessary to ensure healthy competition. 
Whilst this recommendation is UK-focused, many of the problems that the CMA has identified are international in nature. It will therefore continue to take a leading role globally in relation to these issues as part of the CMA’s wider digital strategy. .... 
Our clear recommendation to government is that a new pro-competitive regulatory regime be established to address the concerns we have identified and regulate a sector which is central to all our lives. 
Privacy 
Safeguarding people’s control over their data is paramount to privacy as well as to the healthy operation of the market, so the CMA has worked with the Information Commissioner’s Office (ICO) to examine the impact of privacy regulation on the market. 
The General Data Protection Regulation is still in its early stages and the CMA is concerned that big platforms could be interpreting it in a way which favours their business models, instead of in a way which gives users control of their data. For example, big platforms might share user data freely across their own sizeable business ecosystem, while at the same time refusing to share data with reputable third parties – which could have a detrimental impact on smaller players. 
The CMA’s market study advocates a competitive-neutral approach to implementing privacy regulation, so that the big platforms are not able to exploit privacy regulation to their advantage. It will be working with the ICO and Ofcom further to address these issues through the Digital Regulation Cooperation Forum, the details of which were also published today.