'Schrödinger’s Robot: Privacy in Uncertain States' by the late great Ian Kerr, (2019) 20(1) Theoretical Inquiries in Law 123
Can robots or AIs operating independently of human intervention or oversight diminish our privacy? There are two equal and opposite reactions to this issue. On the robot side, machines are starting to outperform human experts in an increasing array of narrow tasks, including driving, surgery, and medical diagnostics. This is fueling a growing optimism that robots and AIs will exceed humans more generally and spectacularly; some think, to the point where we will have to consider their moral and legal status. On the privacy side, one sees the very opposite: robots and AIs are, in a legal sense, nothing. The received view is that since robots and AIs are neither sentient nor capable of human-level cognition, they are of no consequence to privacy law. This article argues that robots and AIs operating independently of human intervention can and, in some cases, already do diminish our privacy. Epistemic privacy offers a useful analytic framework for understanding the kind of cognizance that gives rise to diminished privacy. Because machines can actuate on the basis of the beliefs they form in ways that affect people’s life chances and opportunities, I argue that they demonstrate the kind of cognizance that definitively implicates privacy. Consequently, I conclude that legal theory and doctrine will have to expand their understanding of privacy relationships to include robots and AIs that meet these epistemic conditions. An increasing number of machines possess epistemic qualities that force us to rethink our understanding of privacy relationships with robots and AIs.
Kerr argues:
This Article responds to two equal and opposite reactions tugging at the intersection of robots and privacy.
On the robot side, too many technologists, government decision-makers, and captains of industry have become preoccupied with what they see as an inevitable shift from today’s artificial narrow intelligence (ANI) to tomorrow’s artificial general intelligence (AGI). The fact that machines are starting to outperform human experts in an increasing array of narrow tasks fuels a growing optimism that AIs will exceed humans more generally and spectacularly. Seduced by the rapture of singularity and superintelligence, many credible (and incredible) experts and governmental bodies are pressing us to look beyond the social implications of today’s AIs and robots. Instead of the focus being squarely on how human rights like equality and privacy are affected by rapid technological advance, undue attention is being paid to fantastical ideas. These include an all-out robot apocalypse and — less traumatic but still highly problematic — granting legal status to robots. Many policymakers seem fascinated by the idea of robot rights, or other protections and entitlements to incentivize and facilitate an increasing population of robots and AIs.
On the privacy side, one sees the opposite: on their own, robots and AIs are nothing. According to the received view, there can be no loss of privacy without human sentience or cognition. Since robots and AIs are neither sentient nor capable of human-level cognition, they are seen to be of no consequence to privacy law. Robots and AIs can collect, use, disclose, make decisions about and act upon exabytes of personal information but, from a doctrinal perspective, none of that matters, not one single bit, as long as no human has laid eyes on the data.
Without invoking the Copenhagen Interpretation, this Article offers a refutation of dead-or-alive, all-or-nothing accounts of robots and privacy. It is my contention that current robots and AIs can diminish our privacy without sentience, consciousness or cognition, and without human intervention, oversight, knowledge, or awareness. Building on an epistemic theory of privacy, I demonstrate that today’s robots and AIs are capable of truth-promoting belief formation processes, thereby allowing them to form reliable beliefs and observational knowledge about people without human intervention, oversight, knowledge, or awareness. Because machines can actuate on the basis of the beliefs they form, they can affect people’s life chances and opportunities in ways that definitively implicate privacy.
To be clear, the rather modest claim I am advancing in this short Article is that non-sentient robots and AIs can diminish our privacy. The Article is meant to say very little about how people perceive privacy violations by robots. It says even less about the normative elements of human-robot privacy relationships and the violations, infringements, wrongs, or harms that could be occasioned or avoided by robots. Although my argument is a necessary precondition for such discussions, my focus here is limited to the epistemological conditions giving rise to privacy, and my narrow claim is that some robots and AIs are already capable of epistemological states that can reduce our privacy. A proper account of the deeper normative elements would require a full-blown relational theory of robots, which this Article seeks to encourage, but does not strive to accomplish.
The Article proceeds as follows. In Part I, I argue that privacy is relational and briefly examine several key theories in order to establish privacy’s relational core, namely: a person loses privacy just in case some “other” gains some form of epistemic access to her. Part II offers a closer examination of the “other” in a privacy relationship — historically conceived as the person who comes to know personal facts about a data subject. I demonstrate how robots and AIs are replacing the human “other” and that the delegation of informational transactions to robots and AIs therefore puts the traditional privacy relationship in an uncertain state. The uncertainty rests on whether an AI has the epistemic qualities necessary to diminish privacy in cases where there is no human intervention, oversight, knowledge, or awareness. Part III responds to the doctrinal view that individuals whose information is exposed only to automated systems incur no cognizable loss of privacy. To do so, I borrow from epistemic privacy — a theory that understands a subject’s state of privacy as a function of another’s state of cognizance regarding the subject’s personal facts. The theory of epistemic privacy offers a useful analytic framework for understanding the kind of cognizance that implicates privacy. In Part IV, I apply the theory of epistemic privacy in order to determine whether artificial cognizers are truly ignorant in the way that legal doctrine suggests. To the contrary, I conclude that artificial cognizers can be said to form truth-promoting beliefs that are justified. In Part V, I examine how today’s navigational robots form beliefs and argue that the observational knowledge they acquire through reliable belief formation processes easily meets the epistemic conditions necessary for diminished privacy. I suggest that, because the beliefs generated by artificial cognizers can also be programmed to actuate automatically, not only can they diminish a person’s state of privacy, they also have the potential to violate it. Having shown that today’s robots are by no means ignorant, I propose in Part VI the need to develop a theory of relational privacy that counts robots and AIs as integral to the configuration of what Julie Cohen has called the “networked self,” not to mention what I am calling the “networked other.” In Part VII, I conclude with the observation that when we view epistemic privacy’s notion of “a duty of ignorance” through a relational lens, we are led toward a useful heuristic for our increasingly complex web of human-robot relationships: a presumption of ignorance.
This Article is meant to demonstrate how robots and AIs disturb the presumption of ignorance in epistemologically significant ways, undermining the presumption’s core aim of providing fair and equal treatment to all by setting boundaries around the kinds of assumptions and beliefs that can and cannot be made about people. Consequently, it is my contention that legal theory and doctrine will have to expand their understandings of privacy relationships to include robots and AIs that meet these epistemic conditions. An increasing number of machines possess epistemic qualities that force us to rethink our understanding of privacy relationships.