26 April 2019

Machines as Actors

'Machine behaviour' by Iyad Rahwan, Manuel Cebrian, Nick Obradovich, Josh Bongard, Jean-François Bonnefon, Cynthia Breazeal, Jacob W. Crandall, Nicholas A. Christakis, Iain D. Couzin, Matthew O. Jackson, Nicholas R. Jennings, Ece Kamar, Isabel M. Kloumann, Hugo Larochelle, David Lazer, Richard McElreath, Alan Mislove, David C. Parkes, Alex ‘Sandy’ Pentland, Margaret E. Roberts, Azim Shariff, Joshua B. Tenenbaum and Michael Wellman in (2019) 568 Nature 477–486 comments
Machines powered by artificial intelligence increasingly mediate our social, cultural, economic and political interactions. Understanding the behaviour of artificial intelligence systems is essential to our ability to control their actions, reap their benefits and minimize their harms. Here we argue that this necessitates a broad scientific research agenda to study machine behaviour that incorporates and expands upon the discipline of computer science and includes insights from across the sciences. We first outline a set of questions that are fundamental to this emerging field and then explore the technical, legal and institutional constraints on the study of machine behaviour. 
The authors argue
In his landmark 1969 book The Sciences of the Artificial, Nobel Laureate Herbert Simon wrote: “Natural science is knowledge about natural objects and phenomena. We ask whether there cannot also be ‘artificial’ science—knowledge about artificial objects and phenomena.” In line with Simon’s vision, we describe the emergence of an interdisciplinary field of scientific study. This field is concerned with the scientific study of intelligent machines, not as engineering artefacts, but as a class of actors with particular behavioural patterns and ecology. This field overlaps with, but is distinct from, computer science and robotics. It treats machine behaviour empirically. This is akin to how ethology and behavioural ecology study animal behaviour by integrating physiology and biochemistry—intrinsic properties—with the study of ecology and evolution—properties shaped by the environment. Animal and human behaviours cannot be fully understood without the study of the contexts in which behaviours occur. Machine behaviour similarly cannot be fully understood without the integrated study of algorithms and the social environments in which algorithms operate.
At present, the scientists who study the behaviours of these virtual and embodied artificial intelligence (AI) agents are predominantly the same scientists who have created the agents themselves (throughout we use the term ‘AI agents’ liberally to refer to both complex and simple algorithms used to make decisions). As these scientists create agents to solve particular tasks, they often focus on ensuring the agents fulfil their intended function (although these respective fields are much broader than the specific examples listed here). For example, AI agents should meet a benchmark of accuracy in document classification, facial recognition or visual object detection. Autonomous cars must navigate successfully in a variety of weather conditions; game-playing agents must defeat a variety of human or machine opponents; and data-mining agents must learn which individuals to target in advertising campaigns on social media. 
These AI agents have the potential to augment human welfare and well-being in many ways. Indeed, that is typically the vision of their creators. But a broader consideration of the behaviour of AI agents is now critical. AI agents will increasingly integrate into our society and are already involved in a variety of activities, such as credit scoring, algorithmic trading, local policing, parole decisions, driving, online dating and drone warfare. Commentators and scholars from diverse fields—including, but not limited to, cognitive systems engineering, human–computer interaction, human factors, science, technology and society, and safety engineering—are raising the alarm about the broad, unintended consequences of AI agents that can exhibit behaviours and produce downstream societal effects—both positive and negative—that are unanticipated by their creators.
In addition to this lack of predictability surrounding the consequences of AI, there is a fear of the potential loss of human oversight over intelligent machines and of the potential harms that are associated with the increasing use of machines for tasks that were once performed directly by humans. At the same time, researchers describe the benefits that AI agents can offer society by supporting and augmenting human decision-making. Although discussions of these issues have led to many important insights in many separate fields of academic inquiry, with some highlighting safety challenges of autonomous systems and others studying the implications for fairness, accountability and transparency (for example, the ACM conference on fairness, accountability and transparency (https://fatconference.org/)), many questions remain.
This Review frames and surveys the emerging interdisciplinary field of machine behaviour: the scientific study of behaviour exhibited by intelligent machines. Here we outline the key research themes, questions and landmark research studies that exemplify this field. We start by providing background on the study of machine behaviour and the necessarily interdisciplinary nature of this science. We then provide a framework for the conceptualization of studies of machine behaviour. We close with a call for the scientific study of machine and human–machine ecologies and discuss some of the technical, legal and institutional barriers that are faced by researchers in this field.
'Machines as the New Oompa-Loompas: Trade Secrecy, the Cloud, Machine Learning, and Automation' by Jeanne C. Fromer in (2019) 94 New York University Law Review comments
In previous work, I wrote about how trade secrecy drives the plot of Roald Dahl’s novel Charlie and the Chocolate Factory, explaining how the Oompa-Loompas are the ideal solution to Willy Wonka’s competitive problems. Since publishing that piece, I have been struck by the proliferating Oompa-Loompas in contemporary life: computing machines filled with software and fed on data. These computers, software, and data might not look like Oompa-Loompas, but they function as Wonka’s tribe does: holding their secrets tightly and internally for the businesses for which these machines are deployed. 
Computing machines were not always such effective secret-keeping Oompa-Loompas. As this Article describes, at least three recent shifts in the computing industry — cloud computing, the increasing primacy of data and machine learning, and automation — have turned these machines into the new Oompa-Loompas. While new technologies enabled this shift, trade secret law has played an important role here as well. Like other intellectual property rights, trade secret law has a body of built-in limitations to ensure that the incentives offered by the law’s protection do not become so great that they harm follow-on innovation — new innovation that builds on existing innovation — and competition.
This Article argues that, in light of the technological shifts in computing, the incentives that trade secret law currently provides to develop these contemporary Oompa-Loompas are excessive in relation to their worrisome effects on follow-on innovation and competition by others. These technological shifts allow businesses to circumvent trade secret law’s central limitations, thereby overfortifying trade secrecy protection. The Article then addresses how trade secret law might be changed — by removing or diminishing its protection — to restore balance for the good of both competition and innovation.