'Strengthening legal protection against discrimination by algorithms and artificial intelligence' by Frederik J. Zuiderveen Borgesius in the latest issue of
The International Journal of Human Rights comments
Algorithmic decision-making and other types of artificial intelligence (AI) can be used to predict who will commit crime, who will be a good employee, who will default on a loan, and so on. However, algorithmic decision-making can also threaten human rights, such as the right to non-discrimination. The paper evaluates current legal protection in Europe against discriminatory algorithmic decisions. It shows that non-discrimination law, in particular through the concept of indirect discrimination, prohibits many types of algorithmic discrimination, and that data protection law could also help to defend people against discrimination. Proper enforcement of both legal instruments could therefore protect people, but the paper shows that both have severe weaknesses when applied to artificial intelligence. The paper suggests how enforcement of current rules can be improved, and explores whether additional rules are needed. It argues for sector-specific, rather than general, rules, and outlines an approach to regulating algorithmic decision-making.
Borgesius argues
The use of algorithmic decision-making has become common practice across a wide range of sectors. We use algorithmic systems for spam filtering, traffic planning, logistics management, disease diagnosis, speech recognition, and much more. Although algorithmic decision-making can seem rational, neutral, and unbiased, it can also lead to unfair and illegal discrimination.
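To make this concrete, here is a minimal, hypothetical Python sketch (an editorial illustration, not taken from the paper): a decision rule that never sees a protected attribute can still reproduce historical bias through a correlated proxy, here an invented postcode variable. All data, names, and thresholds below are made up.

```python
import random

random.seed(0)

def make_applicant():
    """Generate one hypothetical applicant from biased historical data."""
    group = random.choice(["A", "B"])              # protected attribute
    postcode = "1000" if group == "A" else "2000"  # proxy: perfectly correlated here
    # Biased historical label: group B was approved far less often.
    approved = random.random() < (0.8 if group == "A" else 0.3)
    return group, postcode, approved

data = [make_applicant() for _ in range(10_000)]

# A facially neutral rule learned from history: approve an applicant if the
# historical approval rate in their postcode exceeds 50%. The protected
# attribute is never used as an input.
totals = {}
for _, postcode, approved in data:
    n, k = totals.get(postcode, (0, 0))
    totals[postcode] = (n + 1, k + approved)
approve = {pc: k / n > 0.5 for pc, (n, k) in totals.items()}

# Outcome: the "neutral" rule reproduces the historical bias.
for group in ("A", "B"):
    postcodes = [pc for g, pc, _ in data if g == group]
    rate = sum(approve[pc] for pc in postcodes) / len(postcodes)
    print(f"group {group}: approval rate {rate:.0%}")
```

The point of the sketch is that removing the protected attribute from a model's inputs achieves nothing when a proxy carries the same information; this is precisely the situation that the legal concept of indirect discrimination is meant to address.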
The two main questions of the paper are as follows. (i) What legal protection against algorithmic discrimination exists in Europe, and what are its limitations? (ii) How could that legal protection be improved? The first question is evaluative, as it assesses current law. The second can be characterised as a design question, as it asks whether, and if so how, new laws should be designed.
The paper focuses on the two most relevant legal instruments for defending people against algorithmic discrimination: non-discrimination law and data protection law. The paper speaks of ‘discrimination’ when referring to objectionable or illegal discrimination, for example on the basis of gender, sexual preference, or ethnic origin. The word ‘differentiation’ refers to discrimination, or making distinctions, in a neutral, unobjectionable sense.
The paper’s main contributions to scholarship are the following. First, there has not been much legal analysis of European non-discrimination law in the context of algorithmic decision-making. The few papers that discuss European non-discrimination law do so with a focus on EU law; this paper discusses the norms from the European Convention on Human Rights. Second, building on other literature, the paper assesses how data protection law can help to protect people against discrimination. Third, the paper proposes an approach to regulating algorithmic decision-making in a sector-specific way. The paper could be useful for scholars, practitioners, and policymakers who want to regulate algorithmic decision-making.
The paper focuses on the overarching rules in Europe (the region of the Council of Europe, with 47 member states); national rules are out of scope. Because of the focus on discrimination, questions relating to, for instance, privacy and freedom of expression are outside the scope of the paper. The paper is based on, and includes text from, a report by the author for the Anti-Discrimination Department of the Council of Europe.
The paper is structured as follows. Section 2 introduces algorithmic decision-making, artificial intelligence, and some related concepts. Section 3 shows that there is a problem, giving examples of algorithmic decision-making that leads, or could lead, to discrimination. Section 4 turns to law: it discusses current legal protection against algorithmic discrimination and flags the strengths and weaknesses of that protection. Section 5 suggests how enforcement of current non-discrimination norms can be improved, explores whether algorithmic decision-making necessitates amending those norms, and outlines an approach to adopting rules regarding algorithmic decision-making. Section 6 offers concluding thoughts.
'The Automated Administrative State: A Crisis of Legitimacy' by Danielle K. Citron and Ryan Calo
comments
The legitimacy of the administrative state is premised on our faith in agency expertise. Despite their extra-constitutional structure, administrative agencies have been on firm footing for a long time in deference to their critical role in governing a complex, evolving society. They are delegated enormous power because they respond expertly and nimbly to evolving conditions.
In recent decades, state and federal agencies have embraced a novel mode of operation: automation. Agencies rely more and more on software and algorithms in carrying out their delegated responsibilities. The automated administrative state, however, is demonstrably riddled with problems. Legal challenges regarding the denial of benefits and rights, from travel to disability, have revealed a pernicious pattern of bizarre and unintelligible outcomes.
Scholarship to date has explored the pitfalls of automation with a particular frame, asking how we might ensure that automation honors existing legal commitments such as due process. Missing from the conversation are broader, structural critiques of the legitimacy of agencies that automate. Automation throws away the expertise and nimbleness that justify the administrative state, undermining the very case for the existence and authority of agencies.
Yet the answer is not to deny agencies access to technology. This article points toward a positive vision of the administrative state that adopts tools only when they enhance, rather than undermine, the underpinnings of agency legitimacy.
'Artificially Intelligent Government: A Review and Agenda' by David Freeman Engstrom and Daniel E. Ho in Roland Vogl (ed)
Big Data Law (Forthcoming)
comments
While scores of commentators have opined about the need for governance of artificial intelligence (AI), fewer have examined the implications for government itself. This chapter offers a synthetic review of an emerging literature on the distinct governance challenges raised by public sector adoption of AI.
Section 2 begins by providing a sense of the landscape of government AI use. While existing work centers on a few use cases (e.g., criminal risk assessment scores), a new wave of AI technology is exhibiting early signs of transforming how government works. Such AI-based governance technologies cover the waterfront of government action, from securities enforcement and patent classification to social security disability benefits adjudication and environmental monitoring. We show how these new algorithmic tools differ from past rounds of public sector innovation and raise unique governance challenges. We highlight three such challenges emerging from the literature.
Section 3 reviews the legal challenges of reconciling public law’s commitment to reason-giving with the lack of explainability of certain algorithmic governance tools. Because existing work has fixated on a small set of uses, it reflects the tendency in the wider algorithmic accountability literature to focus on constitutional doctrine. But the diverse set of algorithmic governance tools coming online is more likely to be regulated under statutory administrative law, raising distinct questions about transparency and explainability.
Next, Section 4 reviews the challenges of building state capacity to adopt modern AI tools. We argue that a core component of state capacity is embedded technical expertise and data infrastructure. Standard frameworks fail to capture how capacity-building can be critical for (a) shrinking the public-private sector technology gap and (b) “internal” due process, which administrative law has increasingly recognized as key to accountability.
Finally, Section 5 turns to the challenges of gameability, distributive effects, and legitimacy as the new AI-based governance technologies move closer to performing core government functions. We highlight the potential for adversarial learning by regulated parties and for contractor conflicts of interest when algorithms are bought, not made. Gaming concerns point to the deeper political complexities of a newly digitized public sector.
Section 6 concludes by providing cautious support for adoption of AI by the public sector. Further progress in thinking about the new algorithmic governance will require more sustained attention to the legal and institutional realities and technological viability of use cases.