'Vulnerability in a tracked society: Combining tracking and survey data to understand who gets targeted with what content' by Nadine Bol, Joanna Strycharz, Natali Helberger, Bob van de Velde and Claes H de Vreese, in New Media and Society (2020), comments
While data-driven personalization strategies are permeating all areas of online communication, their impact on individuals and society as a whole is still not fully understood. Drawing on Facebook as a case study, we combine online tracking and self-reported survey data to assess who gets targeted with what content. We tested relationships between user characteristics (i.e. socio-demographic factors and individual perceptions) and exposure to branded content on Facebook. Findings suggest that social media platforms use sophisticated algorithms to target specific groups of users, especially in the context of gender-stereotyping and health. Health-related content was predominantly targeted at older users, women, those with higher levels of trust in online companies, and those in poorer health. This study provides a first indication of unfair targeting that reinforces stereotypes and creates inequalities, and suggests rethinking the impact of algorithmic targeting in creating new forms of individual and societal vulnerability.
The authors argue:
The continuous and ubiquitous tracking of our online activities has initiated public and scholarly debates about big data, privacy, and fairness. The opportunities of leveraging the large amounts of personal data we produce online are tremendous, for example, when large-scale, detailed, and highly integrated patient data are combined in electronic health records to deliver personalized care to patients (Dzau and Ginsburg, 2016), or when personalization algorithms are used to help users handle the abundance of information online and find the content that matters to them (Thurman et al., 2018). Users, while concerned about their privacy, also appreciate the advantages of more personally relevant advertising and branded content (Aguirre et al., 2016; Strycharz et al., 2018). Data analytics and personalization strategies, however, do not only decide who receives what kind of care, or gets to see particular selections of content, but also who is included in treatment, news, advertisements, and special deals, and who gets excluded from them. In doing so, tracking and targeting users can create not only new opportunities but potentially also new disparities and vulnerabilities, both in society and in individual users (Bol, Helberger, et al., 2018). To better identify the opportunities and the risks of targeting, for individuals and society, critical research is needed to understand who gets targeted with what content.
Ongoing debates about online targeting are often emotion-driven, based on assumptions and moral panic about what happens inside the “black box,” and about what algorithms might and could do in terms of targeting ill-informed, vulnerable users (Bodo et al., 2017). At the same time, research on the implications of algorithmic targeting is challenging, as the exact nature of such algorithms is more often than not hidden from public oversight (Bucher, 2012), and bringing them to light requires new research methods. As a result, empirical research substantiating these assumptions is scarce, yet direly needed to assess to what extent targeting occurs and which kinds of users it reaches. Scholars have repeatedly and urgently pointed out the need for new research designs to study the black box (Moore, 2016; Pasquale, 2015; Resnick et al., 2015). Put sharply, we need to think outside the box to look inside the box. In doing so, our article contributes to previous research in two distinct ways.
First, this study uses a unique design by combining tracking and survey data to assess who gets targeted with what content on online platforms. Although there is already research focusing on the sender side (i.e. the type of content on online platforms such as social media) or on the receiver side of targeted messages (i.e. the impact of type of content on user attitudes and self-reported behavior), research that has examined both sides simultaneously is scarce. Studies that have investigated online targeting so far typically rely on users’ online behavioral data such as website visits, search activity, advertising exposure, and online purchases (e.g. Lipsman et al., 2012), or conduct (automated) content analyses of social media content created by brands (e.g. Shen and Bissell, 2013). Drawing on Facebook as a case study, the current study moves a step further by integrating insights from both the sender and receiver sides, which offers a more profound understanding of the workings of algorithms used for targeting and their implications for user vulnerability online.
Second, we provide a theoretical framework that informs our understanding of the individual and societal impact of online targeting on vulnerabilities. So far, the ways in which algorithmic architectures are being used to target specific audiences remain unclear, and theoretical underpinnings explaining the potential opportunities and risks for users as well as for society have only begun to emerge. As a result, there is little understanding of why users are exposed to certain content online, why it matters, and how to evaluate the status of the tracked society from a normative point of view. The question arises whether groups typically said to be more vulnerable to persuasive attempts, such as less knowledgeable or older users (Duivenvoorde, 2013), are also the groups particularly vulnerable to targeting strategies, either because of greater perceptiveness or because targeting activities are concentrated on particular consumers. The current study indeed shows that vulnerability factors, such as age and health condition, are related to, for example, being targeted with health-related content. Informing consumers with a health condition about potentially helpful practices can be a useful thing to do. The point that we make in this article is that such a practice can amount to a (societal) problem if vulnerabilities resulting from a health condition are identified and (ab)used through data-driven advertising practices.
Beyond the implications for individual users, another urgent question concerns the societal implications of targeting, and thereby including, certain categories of users while excluding others, and whether this can result in new forms of (digital) inequality. The question of whether we accept a situation in which different market segments are treated differently is not only of academic relevance, but also of societal relevance. In fact, research into the tracked society and its individual and societal implications offers critical input for ongoing policy debates, as data-driven targeting strategies trigger the need for new policy measures to protect consumers’ autonomous decision-making. These are vital questions particularly in light of the pending review of the Unfair Commercial Practices Directive and the overhaul of the current consumer law acquis in Europe. Therefore, the current exploratory study aims to identify factors associated with targeting, to assess whether vulnerable groups are exposed to certain content more heavily, which would have both individual and societal consequences.