16 March 2020

Robophobia

'Robophobia' by Andrew Keane Woods
Robots — machines, algorithms, artificial intelligence — play an increasingly important role in society, often supplementing or even replacing human judgment. Scholars have rightly become concerned with the fairness, accuracy, and humanity of these systems. Indeed, anxiety about machine bias is at a fever pitch. While these concerns are important, they nearly all run in one direction: we worry about robot bias against humans; we rarely worry about human bias against robots.
This is a mistake. Not because robots deserve, in some deontological sense, to be treated fairly — although that may be true — but because human bias against non-human deciders is bad for humans. For example, it would be a mistake to reject self-driving cars merely because they cause a single fatal accident. Yet this is what we do. We tolerate enormous risk from humans, but almost none from robots. A substantial literature — almost entirely ignored by legal scholars concerned with algorithmic bias — suggests that we routinely prefer worse-performing humans over better-performing robots. We do this on our roads, in our courthouses, in our military, and in our hospitals. Our bias against robots is costly, and it will only get more so as robots become more capable.
This paper catalogs the many different forms of anti-robot bias and suggests some reforms to curtail the harmful effects of that bias. The paper’s descriptive contribution is to develop a taxonomy of robophobia. Its normative contribution is to offer some reasons to be less biased against robots. The stakes could hardly be higher. We are entering an age when one of the most important policy questions will be how and where to deploy machine decision-makers. In doing so, we must be mindful of our own biases just as we must be aware of algorithmic biases.