Alysia Blackham, ‘Setting the Framework for Accountability for Algorithmic Discrimination at Work’ (2023) 47(1) Melbourne University Law Review, comments:
Digital inequalities at work are pervasive yet difficult to challenge. Employers are increasingly using algorithmic tools in recruitment, work allocation, performance management, employee monitoring and dismissal. According to a survey conducted by the Society for Human Resource Management, nearly one in four companies in the United States (‘US’) uses artificial intelligence (‘AI’) in some form for human resource management. Of the organisations surveyed that do not use automation for these processes, one in five plans to adopt, or expand its use of, such AI tools for performance management over the next five years.
The elimination of discrimination in employment and occupation is a fundamental obligation of International Labour Organization (‘ILO’) members, enshrined in the ILO Declaration on Fundamental Principles and Rights at Work. This obligation necessarily extends to the digital sphere. It is critical, then, to create a meaningful framework for accountability for these algorithmic tools. At present, though, it is unclear who is responsible for monitoring the risks of algorithmic decision-making at work: is it the technology companies that develop and market these algorithmic products? The employers that deploy them? The individual workers who might experience inequality as a result of algorithmic decision-making? Or, indeed, all three?
This article considers how we might create a framework for accountability for digital inequality, specifically concerning the use of algorithmic tools in the workplace that disadvantage groups of workers. In Part II, I consider how algorithms and algorithmic management might be deployed in the workplace, and the ways in which these tools might address or exacerbate inequality at work. I argue that the automated application of machine learning (‘ML’) algorithms in the workplace presents five critical challenges to equality law, relating to: the scale of the data they use; the speed and scale of their application; their lack of transparency; the growth in employer control they enable; and the complex supply chains associated with digital technologies. In Part III, I consider principles that emerge from privacy and data protection law, third-party and accessorial liability, and collective solutions to reframe the operation of equality law to respond to these challenges. Focusing on transparency, third-party and accessorial liability, and supply chain regulation, I draw on comparative doctrinal examples from the European Union (‘EU’) General Data Protection Regulation (‘GDPR’), the Australian Privacy Principles (‘APPs’) and the Fair Work Act 2009 (Cth) (‘Fair Work Act’), and on collectively negotiated solutions, to identify possible paths forward for equality law. This analysis adopts comparative doctrinal methods, reflecting what Örücü describes as a ‘problem-solving’ or sociological approach to comparative law, which examines how different legal systems have responded to similar problems in contrasting ways. The fact that these jurisdictions face a similar problem warrants the comparison; differences in national context increase the potential for mutual learning. The GDPR is widely seen as setting the benchmark for global data protection regulation: it is therefore considered here as an important comparator to the Australian provisions.
Drawing on these principles, I argue that there is a need to develop a meaningful accountability framework for discrimination effected by algorithms and automated processing, with differentiated responsibilities for algorithm developers, data processors and employers. While discrimination law, whether via claims of direct or indirect discrimination, might be adequately framed to accommodate algorithmic discrimination, I argue that equality law should be reframed around proactive positive equality duties that better respond to the risks of algorithmic management. This represents a critical and innovative contribution to Australian legal scholarship, which has rarely considered the implications of technological and algorithmic tools for equality law. Given the significant differences between Australian, US and EU equality law, there is a clear need for jurisdiction-specific consideration of these issues.