12 September 2019

Algorithmics

'Algorithmic Transparency and Decision-Making Accountability: Thoughts for Buying Machine Learning Algorithms' by Jake Goldenfein in Office of the Victorian Information Commissioner (ed), Closer to the Machine: Technical, Social, and Legal Aspects of AI (Office of the Victorian Information Commissioner, 2019) comments
There has been a great deal of research on how to achieve algorithmic accountability and transparency in automated decision-making systems, especially those used in public governance. However, good accountability in the implementation and use of automated decision-making systems is far from simple. It involves multiple overlapping institutional, technical, and political considerations, and becomes all the more complex in the context of machine learning-based, rather than rule-based, decision systems. This chapter argues that relying on human oversight of automated systems, so-called ‘human-in-the-loop’ approaches, is entirely deficient, and suggests that addressing transparency and accountability during the procurement phase of machine learning systems - during their specification and parameterisation - is critical. In a machine learning-based automated decision system, the accountability typically associated with a public official making a decision has already been displaced into the actions and decisions of those creating the system - the bureaucrats and engineers involved in building the relevant models, curating the datasets, and implementing the system institutionally. But what should those system designers be thinking about and asking for when specifying such systems?
There are many accountability mechanisms available for system designers to consider, including new computational transparency mechanisms, ‘fairness’ and non-discrimination, and ‘explainability’ of decisions. If an official specifies that a system must be transparent, fair, or explainable, however, it is important that they understand the limitations of such a specification in the context of machine learning. Each of these approaches carries its own risks and limitations, and must contend with the challenging political economy of technology platforms in government. Without understanding the complexities and limitations of those accountability and transparency ideas, officials risk being disempowered in the face of private industry technology vendors, who use trade secrets and market power in deeply problematic ways, as well as producing deficient accountability outcomes. This chapter therefore outlines the risks associated with corporate co-option of those transparency and accountability mechanisms, and suggests that significant resources must be invested in developing the skills needed in the public sector for deciding whether a machine learning system is useful and desirable, and how it might be made as accountable and transparent as possible.