28 August 2024

Moral Economy

'The Moral Economy of High-Tech Modernism' by Henry Farrell and Marion Fourcade, (2023) 152(1) Daedalus 225–235

Algorithms, especially machine learning algorithms, have become major social institutions. To paraphrase anthropologist Mary Douglas, algorithms “do the classifying.” They assemble and they sort: people, events, things. They distribute material opportunities and social prestige. But do they, like all artifacts, have a particular politics? Technologists defend themselves against the very notion, but a lively literature in philosophy, computer science, and law belies this naive view. Arcane technical debates rage around the translation of concepts such as fairness and democracy into code. For some, it is a matter of legal exposure. For others, it is about designing regulatory rules and verifying compliance. For a third group, it is about crafting hopeful political futures.

The questions from the social sciences are often different: How do algorithms concretely govern? How do they compare to other modes of governance, like bureaucracy or the market? How does their mediation shape moral intuitions, cultural representations, and political action? In other words, the social sciences worry not only about specific algorithmic outcomes, but also about the broad, society-wide consequences of the deployment of algorithmic regimes: systems of decision-making that rely heavily on computational processes running on large databases. These consequences are not easy to study or apprehend. This is not just because, like bureaucracies, algorithms are simultaneously rule-bound and secretive. Nor is it because, like markets, they are simultaneously empowering and manipulative. It is because they are a bit of both. Algorithms extend both the logic of hierarchy and the logic of competition. They are machines for making categories and applying them, much like traditional bureaucracy. And they are self-adjusting allocative machines, much like canonical markets.

Understanding this helps highlight both similarities and differences between the historical regime that political scientist James Scott calls “high modernism” and what we dub high-tech modernism. We show that bureaucracy, the typical high modernist institution, and machine learning algorithms, the quintessential high-tech modernist one, share common roots as technologies of hierarchical classification and intervention. But whereas bureaucracy reinforces human sameness and tends toward large, monopolistic (and often state-based) organizations, algorithms encourage human competition, in a process spearheaded by large, near-monopolistic (and often market-based) organizations. High-tech modernism and high modernism are born from the same impulse to exert control, but are articulated in fundamentally different ways, with quite different consequences for the construction of the social and economic order. The contradictions between these two moral economies, and their supporting institutions, generate many of the key struggles of our times. 

Both bureaucracy and computation enable an important form of social power: the power to classify. Bureaucracy deploys filing cabinets and memorandums to organize the world and make it “legible,” in Scott's terminology. Legibility is, in the first instance, a matter of classification. Scott explains how “high modernist” bureaucracies crafted categories and standardized processes, turning rich but ambiguous social relationships into thin but tractable information. The bureaucratic capacity to categorize, organize, and exploit this information revolutionized the state's ability to get things done. It also led the state to reorder society in ways that reflected its categorizations and acted them out. Social, political, and even physical geographies were simplified to make them legible to public officials. Surnames were imposed to tax individuals; the streets of Paris were redesigned to facilitate control. 

Yet high modernism was not just about the state. Markets, too, were standardized, as concrete goods like grain, lumber, and meat were converted into abstract qualities to be traded at scale. The power to categorize made and shaped markets, allowing grain buyers, for example, to create categories that advantaged them at the expense of the farmers they bought from. Businesses created their own bureaucracies to order the world, deciding who could participate in markets and how goods ought to be categorized. 

We use the term high-tech modernism to refer to the body of classifying technologies based on quantitative techniques and digitized information that partly displaces, and partly is layered over, the analog processes used by high modernist organizations. Computational algorithms-especially machine learning algorithms-perform similar functions to the bureaucratic technologies that Scott describes. Both supervised machine learning (which classifies data using a labeled training set) and unsupervised machine learning (which organizes data into self-discovered clusters) make it easier to categorize unstructured data at scale. But unlike their paper-pushing predecessors in bureaucratic institutions, the humans of high-tech modernism disappear behind an algorithmic curtain. The workings of algorithms are much less visible, even though they penetrate deeper into the social fabric than the workings of bureaucracies. The development of smart environments and the Internet of Things has made the collection and processing of information about people too comprehensive, minutely geared, inescapable, and fast-growing for considered consent and resistance. 
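The parenthetical definitions of supervised and unsupervised machine learning above can be made concrete with a toy sketch. Everything in the snippet below is invented for illustration: the two-dimensional points, the "low"/"high" labels, and the nearest-centroid and k-means routines are minimal stand-ins for systems that, in practice, operate over millions of features.

```python
# Each person is reduced to two measured traits (a deliberately tiny example).
points = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]

def centroid(pts):
    """Mean point of a group."""
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def nearest(p, centers):
    """Index of the closest center (squared Euclidean distance)."""
    return min(range(len(centers)),
               key=lambda i: (p[0] - centers[i][0]) ** 2
                           + (p[1] - centers[i][1]) ** 2)

# Supervised: the categories are fixed in advance by a human-labeled
# training set; new data is slotted into those pre-made boxes.
labeled = {"low": [(0.0, 0.0), (0.2, 0.2)],
           "high": [(1.0, 1.0), (0.8, 0.8)]}
names = list(labeled)
label_centers = [centroid(labeled[n]) for n in names]
supervised = [names[nearest(p, label_centers)] for p in points]

# Unsupervised: k-means discovers its own clusters with no labels at all;
# the resulting groups have numbers, not humanly meaningful names.
def kmeans(pts, k=2, iters=10):
    centers = [pts[0], pts[-1]]  # toy initialization for k = 2
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in pts:
            groups[nearest(p, centers)].append(p)
        centers = [centroid(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return [nearest(p, centers) for p in pts]

unsupervised = kmeans(points)
print(supervised)    # human-made labels such as "low" and "high"
print(unsupervised)  # anonymous cluster ids such as 0 and 1
```

The contrast mirrors the essay's point: the supervised route inherits human-made categories, while the unsupervised route emits clusters that may not correspond to any socially comprehensible label.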

In a basic sense, machine learning does not strip away nearly as much information as traditional high modernism. It potentially fits people into categories (“classifiers”) that are narrower-even bespoke. The movie streaming platform Netflix will slot you into one of its two thousand-plus “microcommunities” and match you to a subset of its thousands of subgenres. Your movie choices alter your position in this scheme and might in principle even alter the classificatory grid itself, creating a new category of viewer reflecting your idiosyncratic viewing practices. Many of the crude, broad categories of nineteenth-century bureaucracies have been replaced by new, multidimensional classifications, powered by machine learning, that are often hard for human minds to grasp. People can find themselves grouped around particular behaviors or experiences, sometimes ephemeral, such as followers of a particular YouTuber, subprime borrowers, or fans of action movies with strong female characters. Unlike clunky high modernist categories, high-tech modernist ones can be emergent and technically dynamic, adapting to new behaviors and information as they come in. They incorporate tacit information in ways that are sometimes spookily right, and sometimes disturbing and misguided: music-producing algorithms that imitate a particular artist's style, language models that mimic social context, or empathic AI that supposedly grasps one's state of mind. Generative AI technologies can take a prompt and generate an original picture, video, poem, or essay that seems to casual observers as though it were produced by a human being. 

Taken together, these changes foster a new politics. Traditional high modernism did not just rely on standard-issue bureaucrats. It empowered a wide variety of experts to make decisions in the area of their particular specialist knowledge and authority. Now, many of these experts are embattled, as their authority is nibbled away by algorithms that, their advocates claim, are more accurate, more reliable, and less partial than their human predecessors.

One key difference between the moral economies of high modernism and high-tech modernism involves feedback. It is tempting to see high modernism as something imposed entirely from above. However, in his earlier book Weapons of the Weak, Scott suggests that those at the receiving end of categorical violence are not passive and powerless. They can sometimes throw sand into the gears of the great machinery. 

As philosopher Ian Hacking explains, certain kinds of classifications, typically those applying to human or social collectives, are “interactive”: when known by people or those around them, and put to work in institutions, “[they] change the ways in which individuals experience themselves, and may even lead people to evolve their feelings and behavior in part because they are so classified.”

People, in short, have agency. They are not submissive dupes of the categories that objectify them. They may respond to being put in a box by conforming to or growing into those descriptions. Or they may contest the definition of the category, its boundaries, or their assignment to it. This creates a feedback loop in which the authors of classifications (state officials, market actors, experts from the professions) may adjust the categories in response. Human society, then, is forever being destructured and restructured by the continuous interactions between classifying institutions and the people and groups they sort. 

But conscious agency is only possible when people know about the classifications: the politics of systems in which classifications are visible to the public, and hence potentially actionable, will differ from the politics of systems in which they are not. 

So how does the change from high modernism to high-tech modernism affect people's relationships with their classifications? At its worst, high modernism stripped out tacit knowledge, ignored public wishes and public complaints, and dislocated messy lived communities with sweeping reforms and grand categorizations, making people more visible and hence more readily acted on. The problem was not that the public did not notice the failures, but that their views were largely ignored. Authoritarian regimes constricted the range of ways in which people could respond to their classification: anything more than passive resistance was liable to meet brutal countermeasures. Democratic regimes were, at least theoretically, more open to feedback, but often ignored it when it was inconvenient and especially when it came from marginalized groups. 

The pathologies of computational algorithms are often more subtle. The shift to high-tech modernism allows the means of ensuring legibility to fade into the background of the ordinary patterns of our life. Information gathering is woven into the warp and woof of our existence, as entities gather ever finer data from our phones, computers, doorbell cameras, purchases, and cars. There is no need for a new Haussmann to transform cramped alleyways into open boulevards, exposing citizens to view. Urban architectures of visibility have been rendered nearly redundant by the invisible torrents of data that move through the air, conveying information about our movements, our tastes, and our actions to be sieved through racks of servers in anonymous, chilled industrial buildings. 

The feedback loops of high-tech modernism are also structurally different. Some kinds of human feedback are now much less common. Digital classification systems may group people in ways that are not always socially comprehensible (in contrast to traditional categories such as female, married, Irish, or Christian). Human feedback, therefore, typically requires the mediation of specialists with significant computing expertise, but even they are often mystified by the operation of systems they have themselves designed.