21 November 2024

Recruitment

'Screened Out: The Impact of Digitized Hiring Assessments on Disabled Workers' (CDT, 2024) by Michal Luria, Matt Scherer, Dhanaraj Thakur, Ariana Aboulafia, Henry Claypool, and Wilneida Negrón comments:

companies have incorporated hiring technologies, including AI-powered tools and other automated employment decision systems (AEDSs), into various stages of the hiring process across a wide range of industries. While proponents argue that these technologies can aid in identifying suitable candidates and reducing bias, researchers and advocates have identified multiple ethical and legal risks that these technologies present, including discriminatory impacts on members of marginalized groups. This study examines some of the impacts of modern computer-based assessments (“digitized assessments”) — the kinds of assessments commonly used by employers as part of their hiring processes — on disabled job applicants. The findings and insights in this report aim to inform employers, policymakers, advocates, and researchers about some of the validity and ethical considerations surrounding the use of digitized assessments, with a specific focus on impacts on people with disabilities. 
 
Methodology 
 
We utilized a human-centered qualitative approach to investigate and document the experiences and concerns of a diverse group of participants with disabilities. Participants were asked to complete a series of digitized assessments, including a personality test, cognitive tests, and an AI-scored video interview, and were interviewed about their experiences. Our study included participants who identified as people with low vision, people with brain injuries, autistic people, D/deaf and/or hard of hearing people, people with intellectual or developmental disabilities, and people with mobility differences. We also included participants with diverse demographic backgrounds in terms of age, race, and gender identity. 
 
The study focused on two distinct groups: (1) individuals who are currently working in, or intend to seek, hourly jobs, and (2) attorneys and law students who have sought or are likely to seek lawyer jobs. By studying these groups, we aimed to understand the potential impacts of digitized assessments on workers in roles that require different levels of education and experience. 
 
Findings 
 
Disabled workers felt discriminated against and encountered a variety of accessibility barriers in the assessments. Contrary to the claims made by developers and vendors of hiring technologies that these kinds of assessments can reduce bias, participants commonly expressed that the design and use of the assessments were discriminatory and perpetuated biases (“They’re consciously using these tests knowing that people with disabilities aren’t going to do well on them, and are going to get self-screened out”). Participants felt that the barriers they grappled with stemmed from designers’ assumptions, reflected in how assessments were presented, designed, or even accessed. Some viewed these design choices as potentially reflective of an intent to discriminate against disabled workers. One participant stated that it “felt like it was a test of, ‘how disabled are you?’” Beyond these barriers, participants generally viewed the assessments as ineffective for measuring job-relevant skills and abilities. 
 
Participants were split on whether these digitized assessments could be modified in ways that would make them fairer and more effective. Some participants believed the ability to engage in parts of the hiring process remotely and asynchronously could be useful during particular stages, if combined with human supervision and additional safeguards. Most, however, did not believe it would be possible to overcome the biases against individuals with disabilities inherent in how assessments are designed and used. As one participant put it, “We, as very flawed humans, are creating even more flawed tools and then trying to say that they are, in fact, reducing bias when they’re only confirming our own already held biases.” 
 
Given the findings of this study, employers and developers of digitized assessments need to re-evaluate the design and implementation of assessments in order to prevent the perpetuation of biases and discrimination against disabled workers. There is a clear need for an inclusive approach in the development of hiring technologies that accounts for the diverse needs of all potential candidates, including individuals with disabilities. 
 
Recommendations 
 
Below we highlight our main recommendations for developers and deployers of digitized assessments, based on participants’ observations and experiences. Given the harms these technologies may introduce, some of which may be intractable, the following recommendations aim to reduce harm rather than eliminate it altogether. 
 
Necessity of Assessments: Employers should first evaluate whether a digitized assessment is necessary, and whether there are alternative methods for measuring the desired skills with a lower risk of discrimination. If employers choose to use digitized assessments, they should ensure that the assessments are fair and effective: that they measure skills or abilities directly relevant to the specific job, and that they do so accurately. 
 
Accessibility: Employers must ensure that assessments adhere to existing accessibility guidelines, like the Web Content Accessibility Guidelines (WCAG) or the resources of the Partnership on Employment & Accessible Technology (PEAT), and that the selected assessments accommodate and correctly assess the skills of workers with various disabilities. (A minimal example of one automatable WCAG check appears after these recommendations.) 
 
Implementation: Even when assessments are effective, fair, and accessible, employers can take additional steps to reduce bias: implementing meaningful human oversight in all assessment processes, using assessments to supplement, not replace, comprehensive candidate evaluations, and being transparent about when and how assessments are used.
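
While full WCAG conformance spans many success criteria, some of them can be checked programmatically. Below is a minimal sketch in Python of one such check: the contrast-ratio requirement of WCAG 2.1 Success Criterion 1.4.3, which requires at least 4.5:1 for normal-size text. The luminance and ratio formulas follow the WCAG definitions; the function names and color values are our own for illustration, and a passing ratio is of course no guarantee of overall accessibility.

```python
# Minimal sketch: the WCAG 2.1 SC 1.4.3 contrast-ratio check (>= 4.5:1
# for normal-size text). Formulas follow the WCAG definitions of
# relative luminance and contrast ratio; the colors below are invented.

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance of an 8-bit sRGB color, per WCAG."""
    def linearize(c: int) -> float:
        s = c / 255
        return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """Contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Example: #777777 text on a white background comes out near 4.48:1,
# just under the 4.5:1 threshold for WCAG AA normal-size text.
ratio = contrast_ratio((0x77, 0x77, 0x77), (0xFF, 0xFF, 0xFF))
print(f"{ratio:.2f}:1 -> {'passes' if ratio >= 4.5 else 'fails'} AA")
```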

The Impact of Digitized Hiring Assessments on Disabled Workers  

“It was soul crushing [...] AI is great and all, but these are people’s lives.” – Reaction of an interviewee after completing a series of hiring assessments 

Artificial intelligence (AI) and other automated technologies have recently proliferated across many companies’ business practices and workflows, including recruitment and hiring processes. Successfully hiring employees is a challenging and time-consuming task, and many employers have turned to technology to automate parts of the process. Proponents argue that such automation can help with collecting, screening, and recommending job candidates. However, some candidates may be marginalized by this automation, including people with disabilities. In this report we focus on the impacts of computer-based assessments – specifically AI-scored video interviews, gamified personality evaluations, and cognitive tests – on disabled people. 

The Integration of Technology in Hiring  

Technology has been incorporated into nearly every stage of the hiring process (Rieke & Bogen, 2018), from targeting advertisements for jobs, to collecting and screening applications and conducting interviews. While AI-powered assessments and other automated employment decision systems (AEDSs) have drawn attention from both the media and advocates, modern hiring technologies can take many forms and can be used in many different ways. They include gamified tests, assessments that rely on facial recognition and analysis, and computerized or algorithmic versions of assessments that have long been used by employers in hiring processes (like personality tests, cognitive tests, and more) (Mimbela & Akselrod, 2024). 

While the lack of transparency regarding companies’ use of technology in hiring and the lack of regulation of such technologies make it hard to precisely quantify the prevalence of these hiring technologies, various surveys and studies indicate that their use is widespread. For example, the chair of the Equal Employment Opportunity Commission suggested that “some 83% of employers, including 99% of Fortune 500 companies, now use some form of automated tool as part of their hiring process” (Hsu, 2023). Another survey noted that 76% of companies with more than 100 employees use personality tests, and that employers are turning to algorithms to administer and analyze the tests at a larger scale (Brown et al., 2020).

The developers and vendors of AI-integrated hiring technologies claim that their tools can help employers identify the applicants that are the best fit for a given job, help sort and organize candidates (ACLU, 2024), and potentially even reduce bias in the hiring process (Savage & Bales, 2016; Raghavan et al., 2020). In contrast to these claims, research shows that the use of modern hiring technologies can introduce a range of ethical and legal risks (Rieke & Bogen, 2018), including privacy risks (Kim & Bodie, 2020) and a high risk of enabling employment discrimination based on race (Gershgorn, 2018), gender (Dastin, 2018), disability (Brown et al., 2020; Glazko et al., 2024), and other characteristics, including through perpetuating implicit bias (Persson, 2016). 

In addition to discrimination concerns, the use of these technologies raises questions about effectiveness. Many modern computer-based employment assessments evaluate skills or traits that are not necessary for some jobs (Akselrod & Venzke, 2023). For example, personality assessments have measured general traits like positivity, emotional awareness, and liveliness (ACLU, 2024). Such characteristics are not clearly linked to most job functions, and they risk screening out workers with autism or with mental health conditions like depression and anxiety. 

Further, these kinds of assessments may not be able to meaningfully measure or predict the skills and qualities they purport to assess in the first place (Stark et al., 2021; Birhane, 2022). For example, recent research shows the validity of cognitive ability tests for predicting future job performance ratings has been substantially overestimated for several decades (Sackett et al., 2022). It has also long been established that cognitive ability tests often have adverse impacts based on race (Outtz & Newman, 2009; Cottrell et al., 2015). 
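
To make the adverse-impact concept concrete: in U.S. selection practice it is commonly screened for with the four-fifths rule from the EEOC's Uniform Guidelines, under which a group's selection rate below 80% of the highest group's rate is generally treated as evidence of adverse impact. The Python sketch below shows the arithmetic; the group labels and applicant counts are invented and not drawn from the studies cited above.

```python
# Illustrative arithmetic for the EEOC "four-fifths rule" used to flag
# adverse impact in selection procedures. All counts are invented.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who pass the assessment."""
    return selected / applicants

rates = {
    "group_a": selection_rate(selected=60, applicants=100),  # 0.60
    "group_b": selection_rate(selected=24, applicants=100),  # 0.24
}

highest = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / highest  # compare against the top group
    flag = "potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={impact_ratio:.2f} ({flag})")
```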

Job seeking has long been a process riddled with barriers for people with disabilities, for reasons including ableist norms about the desired qualities of a worker, the choices disabled people have to make about whether to disclose their disabilities, and approaches to evaluating and communicating with applicants that do not account for their disabilities (Fruchterman & Mellea, 2018; Bonaccio et al., 2020). Despite significant gains since the passage of the Americans with Disabilities Act in 1990, the labor force participation rate of disabled individuals is approximately half that of non-disabled individuals, and the unemployment rate for disabled workers is roughly double that of non-disabled workers (National Trends in Disability Employment, 2024; Bureau of Labor Statistics, 2024). 

The use of modern hiring technologies, including those that use AI as part of the assessment or scoring process, may create even more barriers. AI systems often fail to account for the needs, experiences, and perspectives of disabled people (Brown et al., 2020; Williams, 2024). For example, a recent study found that résumé-ranking tools built on OpenAI’s GPT-4 ranked résumés lower when they contained activities or awards suggesting the candidate had a disability (Glazko et al., 2024). These and other issues can lead to a variety of negative consequences for disabled workers (Fruchterman & Mellea, 2018; Bonaccio et al., 2020). 

Some of these barriers stem from the fact that disabled workers’ needs are often overlooked in the design and evaluation of selection procedures, both those that leverage automation or AI and those that do not (Papinchock et al., 2023). This is especially likely to happen when hiring procedures and technologies are designed without input from disabled workers and disability experts, and thus do not consider the full range of people who may use a new technology or feature (Brown et al., 2020). Examples include systems that rely on facial recognition or automated analysis of interactions with a computer (Rieke & Bogen, 2018), or automated systems designed to recognize and analyze speech. These kinds of systems are commonly used to power video interviewing systems in hiring, and have been shown to perform worse for speakers with a variety of disabilities (Tu et al., 2016; Glasser et al., 2017; Hidalgo Lopez et al., 2023). 
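
"Performing worse" for a group of speakers is typically quantified with word error rate (WER): the substitutions, deletions, and insertions a recognizer makes, divided by the number of words in the reference transcript, so a higher WER means less accurate transcription. The Python sketch below computes WER via word-level edit distance; the example transcripts are invented for illustration.

```python
# Word error rate (WER), the standard accuracy metric for speech
# recognizers: (substitutions + deletions + insertions) / reference
# words, computed here with a word-level edit-distance table.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return d[len(ref)][len(hyp)] / len(ref)

reference = "i led the accessibility audit for our department"
perfect = "i led the accessibility audit for our department"
degraded = "i let the accessible audit or department"
print(word_error_rate(reference, perfect))   # 0.0
print(word_error_rate(reference, degraded))  # 0.5: 3 subs + 1 deletion
```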

While there is clear evidence of the harms certain hiring technologies can cause, including to disabled workers, more research is needed on the extent and nature of these technologies’ impacts on jobseekers with disabilities. In particular, we identified a gap in research regarding the multi-faceted experiences of disabled workers engaging with modern hiring technology, as well as in understanding how hiring tools may have different impacts on job seekers with different kinds of disabilities. This report contributes to addressing that gap by examining disabled people’s experiences with computer-based hiring assessments, including personality tests, cognitive tests, and an AI-scored video interview (hereafter referred to collectively as “digitized assessments”).