'Physiognomic Artificial Intelligence' by Luke Stark and Jevan Hutson in (2022) 32 Fordham Intellectual Property, Media & Entertainment Law Journal 922 comments
The reanimation of the pseudosciences of physiognomy and phrenology at scale through computer vision and machine learning is a matter of urgent concern. This Article, which contributes to critical data studies, consumer protection law, biometric privacy law, and anti-discrimination law, endeavors to conceptualize and problematize physiognomic artificial intelligence (AI) and offer policy recommendations for state and federal lawmakers to forestall its proliferation.
Physiognomic AI, we contend, is the practice of using computer software and related systems to infer or create hierarchies of an individual’s body composition, protected class status, perceived character, capabilities, and future social outcomes based on their physical or behavioral characteristics. Physiognomic and phrenological logics are intrinsic to the technical mechanism of computer vision applied to humans. In this Article, we observe how computer vision is a central vector for physiognomic AI technologies, unpacking how computer vision reanimates physiognomy in conception, form, and practice and the dangers this trend presents for civil liberties.
This Article thus argues for legislative action to forestall and roll back the proliferation of physiognomic AI. To that end, we consider a menu of potential safeguards and limitations to significantly curtail the deployment of physiognomic AI systems, which we hope can be used to strengthen local, state, and federal legislation. We foreground our policy discussion by proposing the abolition of physiognomic AI. From there, we posit regimes of U.S. consumer protection law, biometric privacy law, and civil rights law as vehicles for rejecting physiognomy’s digital renaissance in artificial intelligence. First, we argue that physiognomic AI should be categorically rejected as oppressive and unjust. Second, we argue that lawmakers should declare physiognomic AI to be unfair and deceptive per se. Third, we argue that lawmakers should enact or expand biometric privacy laws to prohibit physiognomic AI. Fourth, we argue that lawmakers should prohibit physiognomic AI in places of public accommodation. We also observe the paucity of procedural and managerial regimes of fairness, accountability, and transparency in addressing physiognomic AI and attend to potential counterarguments in support of physiognomic AI.
Stark's 'The emotive politics of digital mood tracking' in (2020) 22(11) New Media & Society 2039-2057 comments
A decade ago, deploying digital tools to track human emotion and mood was something of a novelty. In 2013, the Pew Research Center’s Internet & American Life Project released a report on the subject of “Tracking for Health,” exploring the growing contingent of Americans keeping count of themselves and their activities through technologies ranging from paper and pencil to digital smart phone apps (Fox and Duggan, 2013). These systems generate what Natasha Dow Schüll terms more broadly “data for life” (Schüll, 2016), traces of our everyday doings as recorded in bits and bytes. Mood tracking, one of the activities the survey asked about, received so few affirmative responses that it did not register at even 1% of positive answers.
Yet in the interim, emotion in the world of computational media has become big business (McStay, 2016, 2018; Stark, 2016, 2018b; Stark and Crawford, 2015). Using artificial intelligence (AI) techniques, social networks such as Twitter and Facebook have joined dedicated health-tracking applications in pioneering methods for the analysis of emotive and affective “data for life.” These mood-monitoring and affect-tracking technologies involve both active self-reporting by users (Korosec, 2014; Sundström et al., 2007) and the automated collection of behavioral data (Isomursu et al., 2007)—methods often collectively known as digital phenotyping (Jain et al., 2015), or the practice of measuring human behavior via smart phone sensors, keyboard interactions, and various other features of voice and speech (Insel, 2017). This continuum of technologies allows an analyst to extrapolate a range of information about a person's physiology, activity, behaviors, habits, and social interactions from their everyday digital emanations (Kerr and McGill, 2007).
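To make the mechanics of this kind of passive measurement concrete, the following is a minimal sketch in Python of what digital-phenotyping-style feature extraction can look like. The event log, field names, and derived features are hypothetical illustrations and are not drawn from Stark's article or from any particular product.

```python
# Illustrative sketch only: toy "digital phenotyping" feature extraction.
# The event log, field names, and features are hypothetical.
from datetime import datetime
from statistics import mean

# Hypothetical passively collected event log (timestamp, event type).
events = [
    ("2020-03-01T01:12:04", "screen_unlock"),
    ("2020-03-01T01:12:06", "keypress"),
    ("2020-03-01T01:12:07", "keypress"),
    ("2020-03-01T01:12:11", "keypress"),
    ("2020-03-01T09:30:00", "screen_unlock"),
    ("2020-03-01T23:58:40", "screen_unlock"),
]

def parse(ts: str) -> datetime:
    return datetime.fromisoformat(ts)

# Feature 1: mean gap between consecutive keypresses (typing latency),
# sometimes treated as a behavioral proxy in such systems.
key_times = [parse(t) for t, kind in events if kind == "keypress"]
gaps = [(b - a).total_seconds() for a, b in zip(key_times, key_times[1:])]
mean_keypress_gap = mean(gaps) if gaps else None

# Feature 2: number of late-night unlocks (23:00-05:00), sometimes read
# as a proxy for disturbed sleep.
night_unlocks = sum(
    1
    for t, kind in events
    if kind == "screen_unlock" and (parse(t).hour >= 23 or parse(t).hour < 5)
)

print({"mean_keypress_gap_s": mean_keypress_gap, "night_unlocks": night_unlocks})
```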
The past few years have also seen policymakers and the public becoming increasingly attuned to the political impacts of digital media technologies, including AI and machine learning (ML) systems (Barocas and Selbst, 2016; Crawford and Schultz, 2013; Diakopoulos, 2016). Citizens, activists, and elected politicians are eager to address the ways in which technical particularities of such systems influence social and political outcomes via design and deployment (Buolamwini and Gebru, 2018; Dourish, 2016; Johnson, 2018). Yet critical analyses and responses to these tools of what Zuboff (2019) terms “surveillance capitalism” must account for the role of human affect, emotion, and mood in surveillance capitalism’s extraction and contestation. As Raymond Williams observed, working toward an understanding of the barriers to economic and social justice means being first and foremost “concerned with meanings and values as they are actively lived and felt” (Williams, 1977: 132).
Here, I perform a close reading and comparative values in design (VID) analysis (Flanagan and Nissenbaum, 2014; Friedman et al., 2006) of MoodPanda and Moodscope, two popular consumer applications for tracking mood. Human emotions themselves arise from a tangled nexus of biological, cultural, and contextual factors (Boehner et al., 2005; Sengers et al., 2008). As such, I argue that the design choices in each service shape the particular dynamics of political economy, sociality, and self-fashioning available to their users, and that these design decisions are exemplary of the ties between the computational politics of surveillance capitalism (Tufekci, 2014; Zuboff, 2019), and the quantification and classification of human emotion via digital mechanisms (Stark, 2018a).
Drawing on Tufekci (2014, 2017), Papacharissi (2014), and others (Ahmed, 2004; Martin, 2007), I articulate how the affordances of mood-tracking services such as Moodscope and MoodPanda are indexical to a broader emerging emotive politics mediated and amplified by digital systems. The dynamics of emotive politics underpin many contemporary digitally mediated sociotechnical controversies, ranging from media manipulation by extremist actors, negative polarization, and “fake news,” to collective action problems around pressing global crises such as climate change. Human passions have always been understood as an element of political life, but the particular technical and social affordances of digital systems configure those passions in particular ways: emotive politics foreground contestations over how we as social actors should interact together in what Papacharissi (2014) terms “affective publics,” and over the weights and ways in which we as designers, participants, and citizens should treat human feeling as a dispositive feature of civic discourse. Mood tracking’s explicit engagement with human emotion as a mediated, embodied state points toward how emotive politics emerge out of designer expertise, technical features, and the social contexts and practices of everyday digital mediation (Dourish, 2004; Kuutti and Bannon, 2014).
In this analysis, I also seek to highlight the ways in which user interface and experience (UI/UX) design shapes political outcomes alongside the structures of algorithms and databases (Dourish, 2016; Montfort and Bogost, 2009: 145)—though the groundbreaking work of scholars such as Johanna Drucker (2014) and Lisa Nakamura (2009) means this insight should come as no surprise. “The same quantitative modulations and numerical valuations required by the new information worker,” Alexander R. Galloway likewise observes, come “in a dazzling array of new cultural phenomena…to live today is to know how to use menus” (Galloway, 2006). Analyses that take interface design into account as an aspect of broader conversations around the fairness, ethics, and accountability of digital systems, which I seek to model here, will bolster the interdisciplinary work of interrogating the impact of these automated systems on our collective political future.
'The “Criminality from Face” Illusion' by Kevin W. Bowyer, Michael C. King, Walter Scheirer, and Kushal Vangara comments
Criminal or not? Is it possible to create an algorithm that analyzes an image of a person’s face and accurately labels the person as Criminal or Non-Criminal? Recent research tackling this problem has reported accuracy as high as 97% [14] using convolutional neural networks (CNNs). In this paper, we explain why the concept of an algorithm to compute “criminality from face,” and the high accuracies reported in recent publications, are an illusion.
Facial analytics seek to infer something about an individual other than their identity. Facial analytics can predict, with some reasonable accuracy, things such as age [10], gender [6], race [9], facial expression / emotion [25], body mass index [5], and certain types of health conditions [29]. A few recent papers have attempted to extend facial analytics to infer criminality from face, where the task is to take a face image as input, and predict the status of the person as Criminal / Non-Criminal for output. This concept is illustrated in Figure 1.
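To make the formulation the authors critique concrete, below is a minimal sketch, assuming PyTorch, of a binary image classifier of the kind the cited papers describe: a face image in, a two-class (Criminal / Non-Criminal) score out. The architecture, image size, and labels are hypothetical; the point is only that nothing in this setup ties the learned decision to anything beyond whatever happens to separate the two sets of training images.

```python
# Sketch of the task formulation the critiqued papers describe: a face
# image in, a two-class score out. Hypothetical architecture and sizes;
# an illustration of the setup, not an endorsement of the task.
import torch
import torch.nn as nn

class TinyFaceClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                               # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                               # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# A batch of four hypothetical 64x64 RGB face crops (random stand-ins).
images = torch.randn(4, 3, 64, 64)
logits = TinyFaceClassifier()(images)   # shape: (4, 2)
print(logits.argmax(dim=1))             # predicted label per image
```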
One of these papers states that “As expected, the state-of-the-art CNN classifier performs the best, achieving 89.51% accuracy...These highly consistent results are evidences for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic” [40]. Another paper states that, “the test accuracy of 97%, achieved by CNN, exceeds our expectations and is a clear indicator of the possibility to differentiate between criminals and non-criminals using their facial images” [14]. (During the review period of this paper, we were informed by one of the authors of [14] that they had agreed with the journal to retract their paper.) A press release about another paper titled “A Deep Neural Network Model to Predict Criminality Using Image Processing” stated that “With 80 percent accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face. The software is intended to help law enforcement prevent crime.” The original press release generated so much controversy that it “was removed from the website at the request of the faculty involved” and replaced by a statement meant to defuse the situation: “The faculty are updating the paper to address concerns raised” [13].
Section II of this paper explains why the concept of an algorithm to compute criminality from face is an illusion. A useful solution to any general version of the problem is impossible. Sections III and IV explain how the impressive reported accuracy levels are readily accounted for by inadequate experimental design that has extraneous factors confounded with the Criminal / Non-Criminal labeling of images. Learning incidental properties of datasets rather than the intended concept is a well-known problem in computer vision. Section V explains how Psychology research on first impressions of a face image has been misinterpreted as suggesting that it is possible to accurately characterize true qualities of a person. Section VI briefly discusses the legacy of the Positivist School of criminology. Lastly, Section VII describes why the belief in the illusion of a criminality-from-face algorithm potentially has large, negative consequences for society.
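As a concrete illustration of the confounding argument developed in Sections III and IV, the sketch below fabricates two image sources whose only difference is an invented incidental property (a small brightness offset standing in for differences such as image source or facial expression). A plain linear classifier then "predicts criminality" on held-out data almost perfectly while learning nothing about any person, which is the pattern the authors argue accounts for the reported accuracies. All data and numbers here are made up for illustration.

```python
# Sketch of the confound the authors describe: when all "criminal" images
# come from one source and all "non-criminal" images from another, a model
# can score highly by learning the source, not any property of the person.
# The brightness offset is a made-up stand-in for such a confound.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical "criminal" set: ID-style photos, slightly darker on average.
darker = rng.normal(loc=0.45, scale=0.1, size=(n, 64 * 64))
# Hypothetical "non-criminal" set: web photos, slightly brighter on average.
brighter = rng.normal(loc=0.55, scale=0.1, size=(n, 64 * 64))

X = np.vstack([darker, brighter])
y = np.array([1] * n + [0] * n)   # 1 = "criminal" label, 0 = "non-criminal"

# Even with a held-out test set, a plain linear model on raw pixels scores
# near 100%, because mean brightness alone separates the two image sources.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.3f}")   # close to 1.0
```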