'Physiognomic Artificial Intelligence' by Luke Stark and Jevan Hutson in (2022) 32 Fordham Intellectual Property, Media and Entertainment Law Journal 922 comments
The reanimation of the pseudosciences of physiognomy and phrenology at scale through computer vision and machine learning is a matter of urgent concern. This Article—which contributes to critical data studies, consumer protection law, biometric privacy law, and antidiscrimination law—endeavors to conceptualize and problematize physiognomic artificial intelligence (“AI”) and offer policy recommendations for state and federal lawmakers to forestall its proliferation.
Physiognomic AI, as this Article contends, is the practice of using computer software and related systems to infer or create hierarchies of an individual’s body composition, protected class status, perceived character, capabilities, and future social outcomes based on their physical or behavioral characteristics. Physiognomic and phrenological logics are intrinsic to the technical mechanism of computer vision applied to humans. This Article observes how computer vision is a central vector for physiognomic AI technologies, unpacks how computer vision reanimates physiognomy in conception, form, and practice, and examines the dangers this trend presents for civil liberties.
This Article thus argues for legislative action to forestall and roll back the proliferation of physiognomic AI. To that end, it considers a potential menu of safeguards and limitations to significantly limit the deployment of physiognomic AI systems, which hopefully can be used to strengthen local, state, and federal legislation. This Article foregrounds its policy discussion by proposing the abolition of physiognomic AI. From there, it posits regimes of U.S. consumer protection law, biometric privacy law, and civil rights law as vehicles for rejecting physiognomy’s digital renaissance in AI. First, it contends that physiognomic AI should be categorically rejected as oppressive and unjust. Second, it argues that lawmakers should declare physiognomic AI unfair and deceptive per se. Third, it proposes that lawmakers should enact or expand biometric privacy laws to prohibit physiognomic AI. Fourth, it recommends that lawmakers should prohibit physiognomic AI in places of public accommodation. It also observes the paucity of procedural and managerial regimes of fairness, accountability, and transparency in addressing physiognomic AI and attends to potential counterarguments in support of physiognomic AI.
The robust and important 'Neurorights: The Land of Speculative Ethics and Alarming Claims?' by Frederic Gilbert and Ingrid Russo in (2024) 15(2) AJOB Neuroscience 113 comments
The intersection of AI and neurotechnology has resulted in an increasing number of medical and non-medical applications and has sparked debate over the need for new human rights, or “neurorights,” to better protect users. In his article, Bublitz critically examines the prospect of an international instrument regarding Neurotechnologies and Human Rights. In evaluating the feasibility of establishing new human rights, Bublitz argues in favor of advancing the law without introducing novel rights (Bublitz 2024). He acknowledges the criticality of protecting certain fundamental aspects of the mind—specifically, the unconditionally protected core of freedom of thought and opinion—alongside qualified rights to mental integrity and privacy, which protect against less severe neurotechnological interferences (Bublitz 2024). In this commentary, we build upon Bublitz’s position by examining the calls for new human rights based on assertions that the mind requires safeguarding from invasive ‘reading’ technologies (Bublitz 2024).
Let us look at the ‘reading’ terminology used in these assertions. First, we need to delve into the veracity of terms like “brain-reading” and “mind-reading” in the context of neurotechnological advancements to discern whether the claims are underpinned by evidence or hype. In recent years, there has been a surge in news media reports discussing the potential of AI applications to decode brain activity for mind-reading purposes. The portrayal of AI mind-reading capabilities is both remarkable and concerning. Recent headlines, such as “The brain is the final frontier of our privacy, and AI is about to breach it” (Yahoo News), “Mind-reading technologies have arrived” (VOX), “AI makes non-invasive mind-reading possible by turning thoughts into text” (The Guardian), “This ‘mind-reading’ AI system can recreate what your brain is seeing” (Euronews), and “AI-Powered ‘Thought Decoders’ Won’t Just Read Your Mind—They’ll Change It” (Wired), are so commonplace that one is left with the impression that AI’s ability to read the mind is already mainstream reality.
However, given that news media also often depict brain-computer interfaces (BCIs) in an unjustifiably positive and sensationalist tone, a degree of skepticism is warranted regarding the claims that AI can access and decrypt hidden aspects of the mind (Gilbert et al. 2019; Pham and Gilbert 2019).
Interestingly, the claims about AI-enabled mind-reading find resonance even within the most respected and influential institutions. For instance, the International Bioethics Committee of UNESCO’s report on ‘The Risks and Challenges of Neurotechnologies for Human Rights’ underscores the multifaceted impacts of combining AI and neurotechnologies capable of ‘reading’ and ‘writing’ brain activity. Furthermore, academic journals contribute to this discourse, with titles like “Mind-reading devices are revealing the brain’s secrets” (Nature) and “Artificial intelligence is learning to read your mind—and display what it sees” (Science).
We conducted a scoping review of 1017 academic articles to gain insights into the current state of the art and to examine assertions made by academics (Gilbert and Russo under review). Our analysis revealed that up to 91% of the examined articles suggest the possibility of mind-reading through brain-reading (Figure 1). Overall, we observed a year-on-year increase in the number of articles connecting brain-reading and mind-reading (Figure 2), along with discussion suggesting that mind-reading will be possible in the future (Figure 3). Frequently discussed ethical issues include mental privacy, mental freedom, and personhood.