'Through a Glass, Darkly: Artificial Intelligence and the Problem of Opacity' by Simon Chesterman (2020)
American Journal of Comparative Law
As computer programs become more complex, the ability of non-specialists to understand how a given output has been reached diminishes. Opacity may also be built into programs to protect proprietary interests. Both types of system can be explained, either through recourse to experts or an order to produce information. Another class of system may be naturally opaque, however, using deep learning methods that are impossible to explain in a manner that humans can comprehend. An emerging literature describes these phenomena and the specific problems to which they give rise, notably the potential for bias against specific groups.
Drawing on examples from the United States, the European Union, and China, this article develops a novel typology of three discrete regulatory challenges posed by opacity. First, it may encourage — or fail to discourage — inferior decisions by removing the potential for oversight and accountability. Secondly, it may allow impermissible decisions, notably those that explicitly or implicitly rely on protected categories such as gender or race in making a determination. Thirdly, it may render decisions illegitimate where the process by which an answer is reached is as important as the answer itself. The means of addressing some or all of these concerns is routinely said to be transparency. Yet while proprietary opacity can be dealt with by court order and complex opacity through recourse to experts, naturally opaque systems may require novel forms of 'explanation' or an acceptance that some machine-made decisions cannot be explained — or, in the alternative, that some decisions should not be made by machine at all.