Imagine that, today or in the not-so-distant future, a company seeks to take full advantage of developments in artificial intelligence by delegating all of its hiring decisions to a computer. It gives the computer only one instruction: “Pick good employees.” Taking “Big Data” to its logical extreme, the computer is also given all of the employer’s available data and empowered to find whatever additional data it considers relevant on the web.
Thought experiments such as this one can be useful not only for exploring new concepts but also for bringing fresh perspectives to bear on old problems. “People analytics,” which may someday culminate in the use of artificial intelligence to select and manage employees, offers an opportunity to do both.
One disturbing conclusion from analyzing this scenario is that the current disparate treatment paradigm does not seem to reach even the explicit use of race, sex, or other “protected class” characteristics as selection criteria when they are deployed by artificial intelligence. That result sheds interesting light on the limitations of current law, entirely apart from actual developments in AI.
Equally important, applying disparate impact theory to artificial intelligence’s use of correlations between any number of variables and various measures of job performance poses challenges for long-standing understandings of the job relatedness/business necessity defenses, which an employer may raise once a particular employment practice is shown to have a disparate impact.