17 September 2021

Algorithms

'The Flaws of Policies Requiring Human Oversight of Government Algorithms' by Ben Green (Berkman Klein Center for Internet & Society) comments:

Policymakers around the world are increasingly considering how to prevent government uses of algorithms from producing injustices. One mechanism that has become a centerpiece of global efforts to regulate government algorithms is to require human oversight of algorithmic decisions. However, the functional quality of this regulatory approach has not been thoroughly interrogated. In this article, I survey 40 policies that prescribe human oversight of government algorithms and find that they suffer from two significant flaws. First, evidence suggests that people are unable to perform the desired oversight functions. Second, because this oversight is ineffective, human oversight policies legitimize government use of flawed and controversial algorithms without addressing the fundamental issues with these tools. Thus, rather than protect against the potential harms of algorithmic decision-making in government, human oversight policies provide a false sense of security in adopting algorithms and enable vendors and agencies to shirk accountability for algorithmic harms. In light of these flaws, I propose a more rigorous approach for determining whether and how to incorporate algorithms into government decision-making. First, policymakers must critically consider whether it is appropriate to use an algorithm at all in a specific context. Second, before deploying an algorithm alongside human oversight, vendors or agencies must conduct preliminary evaluations of whether people can effectively oversee the algorithm.