'Managed Sovereigns: How Inconsistent Accounts of the Human Rationalize Platform Advertising' by Jake Goldenfein and Lee McGuigan, (2023) 3(3) Journal of Law and Political Economy
Platform business models rest on an uneven foundation. Online behavioral advertising drives revenue for companies like Meta, Google, and Amazon, with privacy self-management governing the flows of personal data that help platforms dominate advertising markets. We argue that this area of platform capitalism is reinforced through a process whereby seemingly incompatible conceptions of human subjects are codified and enacted in law and industrial art. A rational liberal “consumer” agrees to the terms of data extraction and exploitation set by platforms. Inside the platform, however, algorithmic systems act upon a “user,” operationalized as fragmentary patterns, propensities, probabilities, and potential profits. Transitioning from consumers into users, individuals pass through a suite of legal and socio-technical regimes that each orient market formations around particular accounts of human rationality. This article shows how these accounts are highly productive for platform businesses, configuring subjects within a legitimizing framework of consumer sovereignty and market efficiency. ...
As you can tell, reasonable people approach behavioral marketing from very, very disparate perspectives.
—Federal Trade Commissioner Jon Leibowitz (FTC 2007)
Policymakers’ beliefs about human cognition and behavior shape how governance structures position people in relation to corporate power (Pappalardo 2012; Hoofnagle 2016). In this paper we ask: Who is the actor presumed and codified in the governance of commercial online platforms? What capacities and subjectivities are imputed to individuals interacting with digital surveillance technologies and algorithmic systems of classification and personalization? Focusing on platforms that facilitate online behavioral advertising (OBA), we argue that a contradictory set of rational capacities is assigned
to individuals as they act, respectively, as “consumers” who choose to engage with a platform and accept its terms of service, and as “users” who then actually engage with the platform service. Radically inconsistent definitions of the subject are tolerated, or potentially encouraged, to secure the ideological and legal integrity of using market mechanisms to govern personal data flows while simultaneously using those data to predict and manage user behavior.
When individuals submit to tracking, profiling, and personalization of their opportunities and environments, such as by agreeing to the privacy policies and binding terms of service offered by social media sites and mobile apps, or by granting consent to data processing through pop-up notifications, the presumption is that those individuals act deliberately to maximize self-interest based on a calculation of their options, benefits, and costs (Solove 2013; Susser 2019; Turow, Hennessy, and Draper 2015). Data privacy law in the US and elsewhere encodes a rational consumer, freely trading personal information for “relevant” advertisements and customized services (Baruh and Popescu 2017; Draper 2017; Hoofnagle and Urban 2014). The bargain is considered legitimate so long as (1) transparent disclosures of corporate data practices equip consumers to make reasoned privacy tradeoffs (White House 2012; FTC 2012), and (2) consumers are capable of giving meaningful consent. Marketing experts and policymakers have regarded personalization and the reigning notice-and-choice regime as exemplars of consumer empowerment and market efficiency (Darmody and Zwick 2020; see Thaler and Tucker 2013). As Robert Gehl (2014, 110–111) explains, the subject personified in digital advertising’s policy rhetoric—the “sovereign interactive consumer”—is the “foundational abstraction” of privacy self-management’s governance of social media.
Individuals are conceptualized very differently on the other side of the privacy-permissions contract, where the presentation of information and opportunities—including advertisements, product offers, and prices—responds dynamically to inferences and predictions about profiled people and populations (Andrejevic 2020; Barry 2020; Fourcade and Healy 2017; Moor and Lury 2018). In the OBA practices enabled by permissive privacy rules, strategic actors operate from the premises that human decision-making is susceptible to influence and that the reliability of that influence can be increased by discerning users’ habits or cognitive patterns (Bartholomew 2017; Calo 2014; Susser et al. 2019). The target of influence is not addressed as a self-determining liberal subject, exercising a stable endowment of preferences and capacities, but rather as a machine-readable collection of variable (sometimes only momentary) properties, correlations, probabilities, and profit opportunities represented by “anonymous” identifiers (Barocas and Nissenbaum 2014; Cheney-Lippold 2017; Fisher and Mehozay 2019). Digital platforms, and the marketers who use them, try to engineer the architectures in which people make choices to systematically increase the likelihood of preferred outcomes, guided by statistical analyses of massive and intimate data (Burrell and Fourcade 2021; Gandy 2021; Nemorin 2018; Yeung 2017). The idea is to steer or nudge individuals toward predicted behaviors through constant tweaking of information environments and opportunities.
By designing those behavioral pathways, platforms produce and sell probabilistic behavioral outcomes that can be differentiated according to their apparent or expected value (Zuboff 2015). Frequently, that value is determined by the data available about the subject whose attention is being monetized, and so access to personal information that ostensibly enables better valuations or strategic decisions becomes a source of power and advantage for platforms (Birch 2020; West 2019). These processes of prediction and influence may not be as effective as proponents suggest, and, in practice, many advertising professionals work with a mix of algorithmic identities and the demographic or lifestyle categories long used to define consumer targets (Beauvisage et al. 2023). Nevertheless, the business of platform-optimized advertising has become astronomically profitable. That profit is premised on a belief that comprehensive data collection furnishes new abilities to identify and exploit individuals’ susceptibilities, vulnerabilities, and, in certain accounts, irrationalities (Calo 2014), all justified by the pretense of giving sovereign consumers what they desire and bargained for.
How can we tolerate such a dramatic discrepancy in the conceptions of human rationality that guide high-level policies about the relationships between people and the platforms they use to participate in social, political, and cultural life? How can data collection be authorized by the presumption that rational consumers freely choose to exchange personal data for improved platform services, when the use of that data within the platform presumes that individual users are not rational and that their choices and behaviors can be managed and “optimized” via algorithmic personalization?
This article contributes to an emergent body of critical scholarship addressing the implications of this inconsistency between the assumptions of liberal subjectivity that frame policy discourse, and the markedly different assumptions about human subjectivity and rationality operative in commercial platforms’ modes of computational or algorithmic management (Barry 2020; Benthall and Goldenfein 2021; Cohen 2013; Goldenfein 2019). We demonstrate that a simple question—Who is the platform subject?—provides a lens for examining (1) the political maneuvers that maintain a system and market configuration premised upon incompatible answers, and (2) how, in the name of consumer sovereignty, privacy self-management and norms of market efficiency are installed and defended as the foundation of platform and data governance against other forms of regulatory intervention. Specifically, we argue that this articulation of OBA and privacy self-management is reinforced through an unlikely process whereby seemingly incompatible conceptions of human subjects are codified and enacted in law and industrial art.
As an individual transitions from the law’s “consumer” into the platforms’ “user,” they pass through a suite of legal and socio-technical regimes—notice and choice, data protection, consumer protection, and computational or algorithmic management—that each orient market formations around particular accounts of human rationality. These inconsistent accounts are highly productive for platform business practices and the regulatory activities that hold their shape. The vast advertising-oriented sector of platform capitalism is stabilized by a set of institutions that operationalize rationality in divergent yet complementary ways. Those institutions, including the advertising industry and consumer protection law, configure subjects within a framework that simultaneously upholds the ideals of consumer sovereignty and market efficiency while also legitimizing data extraction and its derivative behavioral arbitrage. This article does not argue that law should respond to this contradiction through a more empirically coherent or less stylized subject. The goal is to demonstrate how inconsistencies in legal and industrial accounts of human rationality are used to privilege market ordering for coordinating data flows and to shape those markets in ways that suit commercial stakeholders.
At this point, defenders of OBA might demand a caveat. They might disavow any manipulative designs, conceding instead that when individuals act as marketplace choosers—both of privacy preferences and of advertised goods and services—they exercise “bounded rationality.” These defenders might insist that the marketplace chooser imagined by designers and practitioners of data-driven behavioral marketing is a subject who strives for optimal decisions within material constraints, such as limited time, information, and information-processing power. Illegitimate subversion of individuals’ rational-aspiring choices, beyond these unavoidable constraints, will be met by the counteracting force of consumer protection law.
Suppose we accept all that and set aside the possibility that marketers exploit the cognitive biases cataloged by behavioral economists. Even so, the governance of commercial platforms still requires us to recognize that personalization and “choice architecting” are techniques for adjusting the boundedness of rationality—for setting or relaxing the material constraints on decision-making. Rationality is not a natural and persistent endowment, but a contextually situated capacity, shaped by the environments that structure decision-making; it is constituted through calculating devices and it is performative of markets and economization (Callon 1998). What we are suggesting, then, and what the defenders’ caveat does not resolve, is that digital platforms are designed and operated to cultivate specific and often asymmetrical rational capacities. Even at the terms-of-service threshold, where individuals make ostensibly reasoned choices about becoming users who are subject to the pleasures and perils of platform optimization, companies try to secure the continuous supplies of personal data they need by implementing consent-management interfaces that take advantage of human incapacities (Keller 2022; McNealy 2022). Admitting that consumers are “boundedly” rational, as opposed to predictably irrational, does not address the fact that platforms actively manipulate those boundaries, modulating information and design features that produce and delimit user experiences and market activity.
Further, and crucially for our argument, we suggest that consumer protection law’s evolving recognition of bounded rationality is doing important work for this sector of platform capitalism. Consumer protection in platform governance works to recuperate the political function of the liberal decision-maker as the legal subject necessary to stabilize existing consumer-rationality-market configurations, to justify market mechanisms over stronger regulatory constraints, and to maintain platform control over the data flows powering the OBA business model. The legal integration of bounded rationality as a remedy to rational choice theory’s well-known limits makes space for a range of ready-to-hand regulatory interventions framed through behavioral economics, with minor impact on platform businesses. By deploying a broader theory of behavior, consumer protection law maintains the human subject as a legal technology around which profitable market configurations continue to be instituted, while avoiding real constraints on how platforms and their advertising apparatuses profit from behavioral management. The accommodation of behavioral economics, while tackling a specific set of predatory market practices and contriving new categories of vulnerable subjects, ensures that the idealized conception of individuals as autonomous market actors and the normative goal of market efficiency persist, even as platforms undermine both.
The next sections survey how consumer rationality has been defined and constructed in advertising and marketing, on the one hand, and in law and privacy regulation, on the other. As we move through these analyses, it is important to note that we are not explaining exactly how these competing accounts of the human were transmitted across commercial, social-scientific, and legal domains. We are not suggesting, for example, that regulators have been duped or are involved in a knowing conspiracy. Rather, we demonstrate how the platform, as a site of collision between these contradictory accounts, translates legally and technically codified forms of human rationality (from privacy, data protection, consumer protection, and computational management) into specific market institutions and governance regimes that justify platform business models.