Is Tricking a Robot Hacking? (University of Washington School of Law Research Paper No. 2018-05)
by Ryan Calo, Ivan Evtimov, Earlence Fernandes, Tadayoshi Kohno, and David O'Hair
The term “hacking” has come to signify breaking into a computer system. A number of local, national, and international laws seek to hold hackers accountable for breaking into computer systems to steal information or disrupt their operation. Other laws and standards incentivize private firms to use best practices in securing computers against attack.
A new set of techniques, aimed not at breaking into computers but at manipulating the increasingly intelligent machine learning models that control them, may force law and legal institutions to reevaluate the very nature of hacking. Three of the authors have shown, for example, that it is possible to use one’s knowledge of a system to fool a driverless car into perceiving a stop sign as a speed limit sign. Other techniques build secret blind spots into machine learning systems or seek to reconstruct the private data that went into their training.
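To make the idea of "tricking" concrete, below is a minimal, illustrative sketch of one well-known adversarial-example technique, the fast gradient sign method, written in Python with PyTorch. The model (a pretrained ResNet-18), the random input image, and the ImageNet class label are placeholder assumptions of ours; the stop-sign demonstration referenced above relied on a different, physically robust attack, so this is only a generic sketch, not the authors' method.

```python
# Minimal sketch of an adversarial perturbation via the fast gradient sign
# method (FGSM). The model, image, and label below are illustrative
# assumptions, not the stop-sign attack described in the paper.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Nudge each pixel a small step in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage with placeholder data: a (1, 3, 224, 224) image in [0, 1] and an
# assumed ImageNet class index (919 corresponds to "street sign").
image = torch.rand(1, 3, 224, 224)
label = torch.tensor([919])
adversarial_image = fgsm_perturb(image, label)
print(model(adversarial_image).argmax(dim=1))  # may no longer match `label`
```

The point of the sketch is that the attacker never breaks into the system: the perturbation is computed from the model's own gradients and applied entirely through ordinary inputs.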
The unfolding renaissance in artificial intelligence (AI), coupled with an almost parallel discovery of its vulnerabilities, requires a reexamination of what it means to “hack,” i.e., to compromise a computer system. The stakes are significant. Unless legal and societal frameworks adjust, the consequences of misalignment between law and practice include inadequate coverage of crime, missing or skewed security incentives, and the prospect of chilling critical security research. The last of these is particularly dangerous in light of the important role researchers can play in revealing the biases, safety limitations, and opportunities for mischief that the mainstreaming of artificial intelligence appears to present.
The authors of this essay represent an interdisciplinary team of experts in machine learning, computer security, and law. Our aim is to introduce the law and policy community within and beyond academia to the ways adversarial machine learning (ML) alters the nature of hacking and, with it, the cybersecurity landscape. Using the Computer Fraud and Abuse Act of 1986 — the paradigmatic federal anti-hacking law — as a case study, we mean to evidence the burgeoning disconnect between law and technical practice. And we hope to explain what is at stake should we fail to address the uncertainty that flows from the prospect that hacking now includes tricking.