'Schrödinger's Robot: Privacy in Uncertain States' by Ian E Kerr, (2019) 20 Theoretical Inquiries in Law, asks:
Can robots or AIs operating independently of human intervention or oversight diminish our privacy? There are two equal and opposite reactions to this issue. On the robot side, machines are starting to outperform human experts in an increasing array of narrow tasks, including driving, surgery, and medical diagnostics. This is fueling a growing optimism that robots and AIs will exceed humans more generally and spectacularly; some think to the point where we will have to consider their moral and legal status. On the privacy side, one sees the very opposite: robots and AIs are, in a legal sense, nothing. Judge Posner, for example, has famously opined that they do not invade privacy because they are not sentient beings. Indeed, the received view is that since robots and AIs are neither sentient nor capable of human-level cognition, they are of no consequence to privacy law.
This article argues that robots and AIs operating independently of human intervention can and, in some cases, already do diminish our privacy. Rejecting the all-or-nothing account of robots and privacy described above, I seek to identify the conditions that give rise to diminished privacy in order to see whether robots and AIs can meet those conditions. To do so, I borrow from epistemic privacy, a theory that understands a subject's state of privacy as a function of another's state of cognizance regarding the subject's personal facts. Epistemic privacy offers a useful analytic framework for understanding the kind of cognizance that gives rise to diminished privacy.
I demonstrate that current robots and AIs are capable of developing truth-promoting beliefs and observational knowledge about people without any human intervention, oversight, knowledge, or awareness. Because machines can actuate on the basis of the beliefs they form in ways that affect people’s life chances and opportunities, I argue that they demonstrate the kind of cognizance that definitively implicates privacy. Consequently, I conclude that legal theory and doctrine will have to expand their understanding of privacy relationships to include robots and AIs that meet these epistemic conditions. An increasing number of machines possess epistemic qualities that force us to rethink our understanding of privacy relationships with robots and AIs.