A healthy democracy needs informed citizens. Citizens are expected to be aware of important issues and public affairs in order to provide feedback to the political system. A diversity of viewpoints is therefore considered a core democratic value and one of the central public values in media law and policy. With the increasing importance of intermediaries, the question arises whether algorithmic news curation is a threat to democracy.
Fears that algorithmic personalization leads to filter bubbles and echo chambers are likely to be overstated, but the risk of societal fragmentation and polarization remains
- The fear that large parts of the population are trapped in filter bubbles or echo chambers seems overstated. Empirical studies offer a much more nuanced view of how social media affect political polarization. Because our information repertoires are still broadly dispersed, we adapt our worldviews, remain open to opposing opinions, and are exposed to relevant societal issues. Several studies show that incidental exposure and network effects can even contribute to an expansion of information diversity.
- Nevertheless, there is evidence of possible polarization at the ends of the political spectrum. The current state of research suggests that echo chambers may arise under certain circumstances, namely when facilitated by homogeneous networks, highly emotionalized and controversial topics, and strong political predispositions. In particular, social media logics can reinforce affective polarization, because platform features can foster highly stereotypical and negative evaluations of out-groups.
- Moreover, social media may indirectly contribute to polarization by facilitating a distorted picture of the climate of opinion. As a result, spiral processes can set in, because the perceived strength of one’s own opinion camp relative to other camps is overstated. This process leads to an overrepresentation of radical viewpoints and arguments in political discourse. At this point in the opinion-formation process, individuals are more vulnerable to being influenced by “fake news” on Facebook or Twitter. Thus, strategic disinformation can not only influence the media’s agenda through specific agenda-setting effects but can also impact the public’s climate of opinion.
Social media are vulnerable to facilitating a rapid dissemination of disinformation, but exposure seems to be limited
- There are orchestrated disinformation campaigns online, but data on the actual scope of and exposure to disinformation is scarce.
- The few available scientific studies suggest that the extent of the problem is likely to be overestimated since exposure to disinformation seems to be rather limited.
- Studies on the effects of disinformation on users show no persuasive effects but rather a confirmation bias: disinformation may therefore widen existing gaps between users with opposing worldviews, because it can confirm and strengthen pre-existing attitudes and (mostly right-wing) worldviews. In this context, political microtargeting poses a concern, as it can be used to disseminate content tailored to target groups that are particularly susceptible to disinformation.
- More research on the scope of, interaction with, and individual and societal effects of disinformation is crucial to better assess the actual extent of the problem.
Social media contribute to the dissemination of incivility and hate speech
- Incivility appears to be widespread online and has real, negative effects on individual attitudes and the discourse climate.
- A potentially serious problem lies in the indirect effects of incivility on recipients and journalists: incivility, including hate speech, in comment sections reduces the credibility of journalistic content, which in the long term can erode trust in journalism as an institution of social cohesion.
- In addition, empirical evidence indicates that journalists react to incivility directed at them by shying away from controversial reporting or by trying to hide controversies. This is worrying because it hinders the free development of democratic discourse.
- A related problem is that women in particular, once they have been victims of hate speech, stop participating in discussions. This, in turn, harms the free development of public discourse at the macro level when whole groups of the population are driven out of the debate.
- Discourse moderation in comment sections that focuses on sociable replies by journalists to user comments seems to be an effective tool for containing and preventing incivility, including hate speech.
- Measures inhibiting freedom of expression have to be carefully applied and can only be used to combat illegal content such as hate speech.
Research agenda for platform governance
Overall, fears of filter bubbles and echo chambers seem overstated. Echo chambers and polarization appear to emerge only at the fringes of the political spectrum. Worrisome, however, are indications that social media may indirectly contribute to polarization by facilitating a distorted picture of the climate of opinion.
There are reasons for vigilance in the cases of disinformation and incivility. A serious assessment of the extent of the disinformation problem is hampered by the scarcity of available scientific data. The available studies suggest that an excessively alarmist political and societal debate should be avoided, but the actual scope of the issues remains unclear. Incivility and hate speech are prevalent phenomena that should be tackled with evidence-based policy measures. This means that (further) regulation, bans, or deletion of content, all of which entail legal problems, are not necessarily the best solution. From the perspective of communication science, the main goal of any intervention should be to strengthen a reasonable, fruitful, and free democratic discourse.
In this context, we emphasize that existing successful approaches (e.g., community management in the form of moderation that does not entail deleting content in order to contain and prevent incivility) should be extended and consistently applied. In addition, further scientific evidence is needed, in particular on disinformation, in order to investigate the extent of the phenomenon and its consequences for public discourse and society in more detail, so that evidence-based measures can be developed. From a communication science perspective, it is precisely at this point that regulation appears most useful, especially with respect to demystifying the inner workings of “black box” algorithms and providing relevant data for research purposes. Without access to Google’s or Facebook’s internal data, it is hard to reach firm conclusions. It is therefore critical to continue monitoring the evolution of digital news markets and the ways in which users are exposed to news on social media platforms. In particular, structural changes in the news market require the attention of regulators and policy makers: intermediaries establish economic power and create new dependencies. The 69-page study concludes:
Platform Transparency Deficits
Information intermediaries are increasingly important actors in high-choice media environments. They change the structures and processes of how communication in digitalized societies proceeds—with potentially profound consequences for the functioning and stability of our democracies. In contrast to existing media organizations, intermediaries wield far broader power because they internalize markets: Users of intermediaries choose their sources within an environment whose logic is set by the platform itself. The open market—whether the selection of newspapers at a newsstand or the list of channels available on TV—is typically regulated to prevent anti-competitive behavior, and it guarantees some degree of transparency and a level playing field. Market participants can evaluate the behavior of competitors simply by “walking over and having a look”. Internal markets for news, however, are opaque to individual and institutional observers. Within them, users of intermediaries are presented with a personalized pre-selection of content, but neither other users nor content producers can easily identify which items were selected. This implies two transparency deficits:
(1) Individual users only see their own recommendations. They have no way of knowing what information was hidden from them, and
(2) they cannot observe what information was presented to other users.
Outside actors (such as competitors, content suppliers, media authorities, and researchers) suffer from these limitations: They have no way of observing the treatment and behavior of individuals or groups of users. Information intermediaries, therefore, create new potential impacts (through personalization), along with detailed measurements thereof (creating what Webster, 2010 termed “user information regimes”)—but hide both within a proprietary product. As intermediaries’ importance to public opinion formation and political processes grows, societies will need to encourage effective transparency in order to safeguard a level playing field in information dissemination. As identified above, two elements are crucial for such transparency to work:
(1) Individual users should be empowered with regard to the recommendations presented to them—they should be able to download everything that was presented to them, along with a sensible description of the processes that produced this exact set (a minimal sketch of what such an export could contain follows below). Doing so would help individuals understand whether they were the subject of a biased selection and enable them to seek legal recourse in that case. Such a model is unproblematic from the perspective of user privacy, as users would only receive information which they can view anyway, yet it would facilitate “whistleblowing” in the case of perceived wrongdoing.
(2) An arguably more difficult goal would be the creation of some form of transparency that encompasses not individual users but the overall impact of intermediaries. Such a measure is necessary for monitoring the societal influence that arises from the use of platforms, including endogenous algorithmic effects but also encompassing external factors, such as manipulation attempts from malevolent third parties (such as hackers and foreign government agencies). Transparency on this level could, for example, take the form of providing privileged parties (state offices, researchers or trusted NGOs) with accurate aggregate information on the demographic makeup of user populations, the prevalence of news usage and other key insights. It might also be worth considering making less detailed data publicly available, but such a decision would need to be weighed against the potential negative effect of facilitating targeted manipulation.
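To make these two elements more concrete, the following is a minimal sketch in Python of what a per-user recommendation export (element 1) and a privileged aggregate report (element 2) could contain. The record format, the field names, and the `selection_rationale` field are illustrative assumptions and do not correspond to any existing platform interface.

```python
# Hypothetical sketch: neither the record format nor the aggregation reflects
# any existing platform API; all field names are illustrative assumptions.
from dataclasses import dataclass
from collections import Counter
from typing import List

@dataclass
class RecommendationRecord:
    """One item shown to one user, as it could appear in a personal export."""
    user_id: str              # pseudonymous identifier of the receiving user
    item_id: str              # identifier of the recommended item
    source: str               # publisher or account that produced the item
    rank: int                 # position in the personalized feed
    timestamp: str            # when the item was displayed (ISO 8601)
    selection_rationale: str  # human-readable description of why it was shown

def aggregate_report(records: List[RecommendationRecord]) -> dict:
    """Aggregate statistics a privileged party (regulator, researcher, NGO)
    might receive instead of raw individual-level data."""
    users = {r.user_id for r in records}
    source_counts = Counter(r.source for r in records)
    return {
        "n_users": len(users),
        "n_recommendations": len(records),
        "share_by_source": {
            s: c / len(records) for s, c in source_counts.most_common()
        },
    }
```

In such a scheme, the per-record export never contains anything the user could not already see in their own feed, while the aggregate function deliberately discards individual-level detail before it reaches outside observers.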
In general, transparency should help users make informed choices. But it is clear that “digital transparency—whether it is enacted through technological solutions or more classical administrative and organizational means — will not on its own provide an easy solution to the challenges posed by the growing role of platforms in political and public life” (Gorwa and Ash, 2020, p. 20). The ongoing debates on the implementation of the transparency rules in the German Media State Treaty clearly support this statement (Dogruel, Stark, Facciorusso, and Liesem, 2020).
Key Data Needs
The combination of novel impacts and a lack of transparency creates three distinct classes of threats to societies. Protecting against each threat requires access to specific data—most crucially the individual personalized results presented to users, the actions taken by intermediaries to change the flow of information, and the overall impact on the whole user population—that is currently unavailable (Tucker et al., 2018).
(1) As illustrated by existing anti-trust cases against technology companies, it is possible that (for whatever reason) intermediaries fail to adhere to internal or external guidelines, resulting in detrimental treatment of users, advertisers, content providers, or other actors. In contrast to past well-documented legal disputes (e.g., in competition regulation), affected parties will have difficulties monitoring for, detecting, and collecting evidence of unfair treatment because (due to personalization and highly individual behavior) they do not have access to the recommendations that intermediaries produce for their users. Assessing (and proving) unfair bias in intermediaries would require access to a representative set of recommendations, so that differences in consumption could be clearly attributed. Consider, for example, a hypothetical search engine that systematically alters access to cross-cutting political information—displaying only conservative results to conservatives and only liberal results to liberals. Creating a legal case against such a platform would require access to a representative set of users (anecdotal evidence could always be discounted as spurious or as chance findings). For each of those users, researchers would need to identify the political leaning and then record the search results they obtained (a minimal sketch of such an audit follows below).
Such data collection is typically unfeasible in practice for two reasons: (a) Platforms offer no way of accurately recording the output of an intermediary for a single user. Neither third parties nor the users themselves have access to technical interfaces that would expose a comprehensive dataset of personalized recommendations, such as the personal news feed on Facebook. Even though users can access the feed visually in their browser, considerable effort would be required to extract it in an automated fashion (i.e., through web scraping). Furthermore, there is currently no company that provides such data, and researchers’ capabilities to obtain them independently are increasingly limited by the locked nature of proprietary smartphones (Jürgens, Stark and Magin, 2019). The second factor that encumbers research is (b) the unavailability of a suitable sampling strategy. Intermediaries are, of course, cognizant of their entire population of users along with key socio-demographic data (which is required, e.g., for selling personalized advertisement spaces). Researchers, on the other hand, usually have no way of creating random samples from a platform’s population. Since the available information on the socio-demographic makeup of the user base is typically drawn from moderately sized surveys (i.e., N in the thousands), attempts to create samples that are representative of the intermediary’s national users offer only limited precision (Jürgens et al., 2019).
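As a purely illustrative sketch of the audit logic described in point (1): assuming researchers had somehow obtained a representative set of users with known political leaning together with the search results each of them received (which, as argued above, is currently not feasible), they could compare how strongly result sets overlap within and between political camps. All data in this sketch are hypothetical.

```python
# Illustrative only: assumes a (currently unobtainable) dataset of users with
# known political leaning and the ranked results each of them received.
from statistics import mean

def jaccard(a, b):
    """Overlap between two result sets (0 = disjoint, 1 = identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def mean_overlap(results_by_user, group_a, group_b):
    """Average result overlap across all distinct pairs drawn from two groups."""
    pairs = [(u, v) for u in group_a for v in group_b if u != v]
    return mean(jaccard(results_by_user[u], results_by_user[v]) for u, v in pairs)

# Hypothetical example data: results each user saw for the same query.
results_by_user = {
    "u1": ["a", "b", "c"], "u2": ["a", "b", "d"],   # self-identified conservatives
    "u3": ["e", "f", "c"], "u4": ["e", "f", "g"],   # self-identified liberals
}
conservatives, liberals = ["u1", "u2"], ["u3", "u4"]

within = mean([mean_overlap(results_by_user, conservatives, conservatives),
               mean_overlap(results_by_user, liberals, liberals)])
between = mean_overlap(results_by_user, conservatives, liberals)
# A much lower between-group than within-group overlap would be consistent
# with the hypothetical "only conservative results to conservatives" scenario.
print(f"within-group overlap: {within:.2f}, between-group overlap: {between:.2f}")
```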
(2) Outside actors (individual, institutional, and state-sponsored) frequently attempt to manipulate intermediaries in order to gain influence over citizens—e.g., through misinformation, disinformation, and the manipulation of public opinion perception (Lazer et al., 2017; Magin et al., 2019). Although intermediaries commendably devote resources to the containment and removal of such attempts, two risks remain that are outside the reach of the companies themselves: (a) Without access to large-scale data containing purported manipulation attempts, external watchdogs cannot perform independent audits in order to identify overlooked external influences. (b) Problematic content is also routinely deleted, so that external watchdogs cannot scrutinize and understand those attacks (an exception is Twitter, which regularly publishes datasets on malevolent campaigns). Intermediaries should provide trustworthy actors with a way to perform their own, independent attempts at large-scale detection of manipulation, including data that were already removed by in-house systems. The simplest strategy that would enable such attempts is making the raw public flow of information accessible through a technical interface (an API), as Twitter has done: it not only offers a true random sample of tweets but also includes information about which of them are removed later on. Furthermore, the company offers a growing list of datasets containing all content produced by bot networks and state-sponsored manipulation campaigns. Platforms with more restrictive privacy contexts, such as Facebook (where much of the flow of information is not visible to the broader usership or public), could still allow automated analyses, for example by offering to run researchers’ models without providing access to the data itself.
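As an illustration of what a watchdog-side consumer of such an interface could look like, here is a minimal Python sketch that tallies how many items in a public sample stream carry a takedown notice. The line-delimited JSON format and the `removed` field are hypothetical stand-ins, not the actual Twitter API or any other platform’s format.

```python
# Hypothetical sketch of a watchdog client for a platform's public sample
# stream; the event format and the "removed" field are illustrative only.
import json
from typing import Iterable

def tally_removals(stream_lines: Iterable[str]) -> dict:
    """Tally how many items in a sample stream carry a takedown notice.
    'stream_lines' can be a live network stream or a local dump of events."""
    seen = removed = 0
    for line in stream_lines:
        if not line.strip():
            continue  # skip keep-alive newlines
        event = json.loads(line)
        seen += 1
        if event.get("removed"):  # hypothetical takedown-notice field
            removed += 1
    return {"items_seen": seen, "items_removed": removed,
            "removal_rate": removed / seen if seen else 0.0}

# Usage with a hypothetical local dump of stream events:
sample = ['{"id": "1", "text": "..."}',
          '{"id": "2", "text": "...", "removed": true}']
print(tally_removals(sample))  # {'items_seen': 2, 'items_removed': 1, ...}
```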
(3) In addition to institutional actors, harmful dynamics (such as radicalizing echo chambers; Stark et al., in press) may develop within intermediaries even in the absence of external influence. Such dynamics need not have a clearly identifiable culprit; they could equally arise from the interaction of multiple individuals, leading to a mutual reinforcement of harmful tendencies. Detecting such structural phenomena is contingent on a complete picture of users’ interaction networks. Furthermore, single snapshots do not provide much insight; instead, the development must be traced over time in order to assess the true impact as well as the causes. Researchers should therefore gain access to representative, longitudinal, and detailed data on those (semi-public) parts of intermediaries that pertain to public debates. This includes, first and foremost, the discussions surrounding media, politics, and social issues.
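To illustrate why representative, longitudinal network data matter for detecting such dynamics, the following sketch (assuming the networkx library and hypothetical interaction snapshots labeled with users’ political leaning) tracks how strongly interactions concentrate within like-minded groups over time; a rising assortativity coefficient across snapshots would be one possible indicator of echo-chamber-like segregation.

```python
# Illustrative sketch: requires networkx; the snapshots and "leaning" labels
# are hypothetical stand-ins for representative platform interaction data.
import networkx as nx

def homophily_over_time(snapshots):
    """For each (timestamp, edge list, leaning dict) snapshot, compute the
    attribute assortativity of interactions by political leaning.
    Values near 1 mean users interact almost only within their own camp."""
    trend = []
    for timestamp, edges, leaning in snapshots:
        g = nx.Graph()
        g.add_edges_from(edges)
        nx.set_node_attributes(g, leaning, "leaning")
        trend.append((timestamp,
                      nx.attribute_assortativity_coefficient(g, "leaning")))
    return trend

# Hypothetical two-snapshot example: interactions become more segregated.
snapshots = [
    ("2020-01", [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")],
     {"a": "left", "b": "right", "c": "left", "d": "right"}),
    ("2020-06", [("a", "c"), ("b", "d")],
     {"a": "left", "b": "right", "c": "left", "d": "right"}),
]
print(homophily_over_time(snapshots))  # e.g., [('2020-01', 0.0), ('2020-06', 1.0)]
```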
New Models for Partnerships
Some attempts have been made to increase transparency; while they address the issues outlined above, they have not yet achieved any significant success. Following an initiative from King and Persily (2019), a consortium of scientists cooperated with Facebook in order to create an institution (Social Science One) and a process that would allow scholarly usage of a limited collection of pre-defined datasets. The project received much criticism from scientists, who warned that it would decrease transparency in research practices, lead to a dependence on Facebook that would encumber critical research, and create divisions between privileged “insiders” with access to data and the rest of the field (Bruns, 2019c). So far, Facebook has also failed to facilitate the agreed-upon access, frustrating even the existing scientific partners (Statement from the European Advisory Committee to Social Science One). Despite its pragmatic appeal, the cooperative model underpinning Social Science One has a fatal conceptual flaw: even though it provides some access to some data, that access is pre-defined and limits researchers to a specific approach in tackling the above-mentioned threats. Participating teams are prevented from finding problems, answers, and solutions that intermediaries themselves did not identify. A cooperation between an intermediary and scientific partners can only succeed in generating trust if researchers are given the freedom to seek and find potential negative effects. Where such inquiries are prohibited ex ante, through pre-defined datasets or topical questions, both sides suffer from a lack of credibility.
There is also a deeper issue with the proposed cooperative model: just as independent scholarly work from different institutes is required for long-term trustworthy, rigorous, and reliable scientific insights, independence is a defining feature of work on intermediaries. Only when external observers are free to implement an autonomous process for data collection, analysis, and interpretation can they serve as the much-needed check and balance that society requires (and demands). Researchers’ ability to do so ultimately hinges on two key ingredients, both mentioned above: the capacity to obtain or create high-quality representative samples, and the availability of tools that record digital content, recommendations, and user behavior within intermediaries. While the first is certainly possible (if perhaps expensive), the second remains under strong pressure from the progressive “lock-down” of platforms and mobile devices (Jürgens et al., 2019).
Diversity as Policy Goal
From a normative point of view, diversity is the key term: “Media pluralism and diversity of media content are essential for the functioning of a democratic society,” as the Council of Europe (2007) put it, because functioning democracies require all members of society to be able to participate in public debate — and to participate fully in this democratic debate, citizens need to encounter a diversity of high-quality information and opinions. Or, as the Council of Europe (2007) aptly phrased it, the right to freedom of expression “will be fully satisfied only if each person is given the possibility to form his or her own opinion from diverse sources of information.” The close link between media diversity and democratic participation may also explain the scope of the public debate about the rise of platforms and their growing influence in the information landscape. The advances of at least some of these platforms into the business of distributing and aggregating media content have fundamentally changed our news ecosystem. Particularly important in this context are the newly created economic dependencies: the better part of advertising revenue flows to the platform providers Google and Facebook. This raises the normative question: to what extent should diversity matter in the context of social media platforms?
Communications researchers (e.g., Moeller, Helberger, and Makhortykh, 2019) emphasize that, to preserve a news ecosystem that has come under considerable strain, the public needs to receive the curated news supply provided by traditional mass media. How long traditional mass media can still assume this function is uncertain. If, at some point, this is no longer the case, the underlying conditions of the news ecosystem, and consequently of opinion formation, will fundamentally change. If traditional mass media disappear, high-quality news products will no longer be available, and the impact of low-quality information on processes of opinion formation will rise. If only soft and highly personalized content is distributed, the news ecosystem will change dramatically, with potentially negative consequences for democratic societies.
Against this background, the social dynamics of media diversity become apparent (Helberger, 2018, p. 156). Thus, “media diversity on social media platforms must be understood as a cooperative effort of the social media platform, media organizations, and users. The way users search for, engage with, like, shape their network, and so forth has important implications for the diversity of content, ideas and encounters that they are exposed to. Similarly, the way the traditional media collaborate with information intermediaries to distribute content and reach viewers impacts structural diversity” (Helberger, 2018, p. 171). Put differently, future diversity policies must therefore go beyond the traditional framework and generate a new conception of media diversity, which addresses the different actors (platforms, users, and media organizations) together. These future policies must, first and foremost, ensure that diversity reaches users.
A potential way of increasing exposure diversity could be to employ a design that focuses on serendipity and/or on diversity as a principle (see considerations on “diversity by design”: Helberger, 2011). Such a design would, for example, rely less on search-engine ranking and would encourage users to take a closer look at different teasers and click on more results. In addition, users should have the opportunity to choose between, or weight, different filtering and sorting criteria. Such changes could also create more diversity in Facebook’s news feed; for example, Facebook could give users the ability to adopt a different point of view, exposing them to a feed with entirely new perspectives (Ash, Gorwa, and Metaxa, 2019).
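As a minimal, hypothetical sketch of what “diversity by design” could mean at the level of a ranking function, the following Python snippet greedily re-ranks a list of relevance-scored candidates while penalizing sources that are already over-represented in the selection. The scoring rule and the user-adjustable `diversity_weight` parameter are illustrative assumptions, not the logic of any actual platform.

```python
# Hypothetical sketch of a diversity-aware re-ranking step; the scoring rule
# and the "diversity_weight" knob are illustrative, not any platform's logic.
from collections import Counter

def rerank_with_diversity(candidates, k=5, diversity_weight=0.5):
    """candidates: list of (item_id, source, relevance) tuples.
    Greedily pick k items, trading off relevance against over-represented
    sources. diversity_weight=0 reproduces a pure relevance ranking."""
    remaining = list(candidates)
    picked, source_counts = [], Counter()
    while remaining and len(picked) < k:
        def score(item):
            _, source, relevance = item
            # Penalize sources that already appear in the selection.
            return relevance - diversity_weight * source_counts[source]
        best = max(remaining, key=score)
        picked.append(best)
        source_counts[best[1]] += 1
        remaining.remove(best)
    return picked

# Hypothetical feed candidates: (item_id, source, relevance score).
feed = [("n1", "outlet_a", 0.95), ("n2", "outlet_a", 0.94),
        ("n3", "outlet_a", 0.93), ("n4", "outlet_b", 0.80),
        ("n5", "outlet_c", 0.75)]
print(rerank_with_diversity(feed, k=3, diversity_weight=0.5))
# With the penalty, items from outlet_b and outlet_c displace near-duplicates
# from outlet_a; with diversity_weight=0 the top three are all from outlet_a.
```

Exposing a parameter such as `diversity_weight` to users would be one concrete way of letting them choose between, or weight, different sorting criteria, as suggested above.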
As the debate about the impact of algorithmic news recommenders on democracy is still ongoing, diversity-sensitive design should be taken into account as part of a possible solution. For such solutions to work, it should be clear that different perspectives on the democratic role of news recommenders imply different design principles for recommendation systems (Helberger, 2019); that is, an explicit normative conception of the democratic potential is critical. It may also become clear that we need to work towards a coherent mix of appropriate government regulation, co-regulation, and platform-specific self-regulation in order to minimize the negative effects of the discussed threats.