'[Un]Usual Suspects: Deservingness, Scarcity, and Disability Rights' by Doron Dorfman in (2019)
UC Irvine Law Review comments
People encounter disability in public spaces where accommodations are granted to those who fit into this protected legal class. Nondisabled people desire many of these accommodations—such as the use of reserved parking spots or the ability to avoid waiting in a queue—and perceive them as “special rights” prone to abuse. This apprehension about the exploitation of rights by those pretending to be disabled, which I refer to as “fear of the disability con,” erodes trust in disability law and affects people with disabilities on both an individual and a group level. Individuals with disabilities are often harassed or questioned about their identity when using their rights. As a group, disabled people are forced to navigate new defensive policies that seek to address widely held perceptions of fakery and abuse. This Article uses a series of survey experiments conducted with multiple nationally representative samples totaling more than 3,200 Americans, along with 47 qualitative in-depth interviews. It brings to light the psychological mechanism of suspicion and identifies factors that motivate fear of the disability con in public spaces. Findings counterintuitively suggest that the scarcity of the desired public resources has no effect on the level of suspicion against potential abusers. Rather, it is the sense of deservingness (or lack thereof) in the eyes of others that drives suspicion. Using these empirical findings, as well as analysis of relevant case law, this Article outlines the normative implications for the design and implementation of laws affecting millions of individuals. Furthermore, this research contributes to our understanding of how rights behave on the ground, both with regard to disability and to myriad distributive policies.
'Access to Algorithms' by Hannah Bloch-Wehba in (2019)
Fordham Law Review (Forthcoming)
comments
Federal, state, and local governments increasingly depend on automated systems — often procured from the private sector — to make key decisions about civil rights and civil liberties. When individuals affected by these decisions seek access to information about the algorithmic methodologies that produced them, governments frequently assert that this information is proprietary and cannot be disclosed.
Recognizing that opaque algorithmic governance poses a threat to civil rights and liberties, scholars have called for a renewed focus on transparency and accountability for automated decision making. But scholars have neglected a critical avenue for promoting public accountability and transparency for automated decision making: the law of access to government records and proceedings. This Article fills this gap in the literature, recognizing that the Freedom of Information Act, its state equivalents, and the First Amendment provide unappreciated legal support for algorithmic transparency.
The law of access performs three critical functions in promoting algorithmic accountability and transparency. First, by enabling any individual to challenge algorithmic opacity in government records and proceedings, the law of access can relieve some of the burden otherwise borne by parties who are often poor and under-resourced. Second, access law calls into question government’s procurement of algorithmic decision-making technologies from private vendors under contracts that include sweeping protections for trade secrets and intellectual property rights. Finally, the law of access can promote an urgently needed public debate on algorithmic governance in the public sector.
'Contesting Automated Decisions: A View of Transparency Implications' by Emre Bayamlioglu in (2019) 4
European Data Protection Law Review 433-446
comments
The paper aims to identify the essentials of a ‘transparency model’ that analyses automated decision-making systems not by the mechanisms of their operation but by the normativity embedded in their behaviour/action. First, transparency-related concerns and challenges inherent in machine learning (ML) are conceptualised as “informational asymmetries”. Under a threefold approach, this part explains and taxonomises how i) intransparencies and opacities, ii) epistemological flaws (spurious or weak causation), and iii) biased processes inherent in ML create cognitive obstacles for the data subject in contesting automated decisions. Concluding that the transparency needs of an effective contestation scheme go well beyond the disclosure of algorithms or other computational elements, the following part explains the essentials of a rule-based ‘transparency model’: i) the data as ‘decisional cues/input’; ii) the normativities contained at both the inference and decisional (rule-making) levels; iii) the context and further implications of the decision; and iv) the accountable actors. This is followed by the identification of certain impediments at the technical, economic, and legal levels to the implementation of the model. Finally, the paper provides theoretical guidance as the preliminaries of a ‘contestability scheme’ aimed at compliance with transparency obligations such as those provided under the EU data protection regime (the GDPR).
'Binary Governance: Lessons from the GDPR's Approach to Algorithmic Accountability' by Margot Kaminski in (2019) 92(6)
Southern California Law Review comments
Algorithms are used to make significant decisions about individuals, from credit determinations to hiring and firing. But they are largely unregulated under U.S. law. I identify three categories of concerns behind calls for regulating algorithmic decision-making: dignitary, justificatory, and instrumental. Dignitary concerns lead to proposals that we regulate algorithms to protect human dignity and autonomy; justificatory concerns caution that we must assess the legitimacy of algorithmic reasoning; and instrumental concerns lead to calls for regulation to prevent consequent problems such as error and bias. No one regulatory approach can effectively address all three.
I therefore propose a two-pronged approach to algorithmic governance: a system of individual due process rights, combined with collaborative governance (the use of private-public partnerships to govern). Only through this binary approach can we effectively address all three concerns raised by algorithmic, or AI, decision-making. The interplay between the two approaches will be complex; sometimes the two systems will be complementary, and at other times they will be in tension. I identify the EU’s General Data Protection Regulation (GDPR) as one such binary system. I explore the extensive collaborative governance aspects of the GDPR and how they interact with its individual rights regime. Understanding the GDPR in this way both illuminates its strengths and weaknesses and provides a model for how to construct a better binary governance regime for algorithmic, or AI, decision-making.