The Facial recognition technology: Towards a model law report from the University of Technology Sydney (UTS) states:
There is growing community concern about the rise of facial recognition technology (FRT). As in other jurisdictions around the world, Australian law does not provide the legal guardrails necessary to ensure that FRT is developed and deployed in ways that uphold basic human rights.
This report proposes reform. It provides an outline of a model law for FRT (the Model Law). The Model Law aims to foster innovation and enable the responsible use of FRT, while protecting against the risks posed to human rights.
The Model Law is intended to apply to any individual or organisation that develops, distributes or deploys FRT in Australia. It covers use of FRT by both government and private sector organisations.
The precise human rights impact of FRT turns on how the technology is developed, deployed and regulated. Therefore, the Model Law focuses on how FRT is used in practice, adopting a risk-based approach grounded in international human rights law. While the report has been written primarily by reference to Australian law, the reform principles it sets out are applicable to other, comparable jurisdictions.
This report recognises that FRT can be used consistently with international human rights law, and indeed in ways that achieve public and other benefits. However, FRT necessarily also engages, and often limits or restricts, a range of human rights. As a result, the use of FRT can – and has been proven to – cause harm.
Why is reform needed?
There is rapid, almost exponential, growth in the development and deployment of FRT and other remote biometric technologies. These technologies can identify and extract a wealth of sensitive personal information about an individual, often without the individual’s knowledge, let alone consent.
Australian law, like the laws of most jurisdictions around the world, was not developed with the prospect of widespread use of FRT in mind. In particular, it was not drafted to address the challenges posed by FRT to human rights such as the right to privacy, freedom of assembly and association, and freedom of expression and movement. In Australia and other similar jurisdictions, several existing laws apply to the development and use of FRT. For example, Australian privacy law includes several provisions dealing with the handling of biometric information. Yet, on the whole, these existing laws are inadequate in addressing many of the risks associated with FRT.
Some jurisdictions have responded to the rise of FRT by prohibiting certain uses of the technology. Most famously, in 2019 the city of San Francisco enacted an ordinance prohibiting many uses of FRT by city agencies, including the San Francisco Police Department. While such a prohibition may be useful in addressing a very specific risk, it is a limited and blunt instrument that can leave many uses of FRT unregulated. In addition, if a moratorium were introduced to prohibit all development and use of FRT (something no major jurisdiction has done), it would preclude uses of the technology that have a demonstrable public benefit.
Against this backdrop, a small but growing number of jurisdictions have begun to explore a more nuanced approach to regulating FRT. Especially in the United States and Europe, risk-based laws have been proposed to enable beneficial forms or applications of FRT, while restricting or prohibiting harmful uses of FRT. This report has been drafted to apply the lessons from those reform processes to create a nuanced, risk-based, FRT-focused Model Law.
Many civil society organisations, governmental and inter-governmental bodies, and independent experts have sounded the alarm about the dangers associated with current and predicted uses of FRT – including the inadequacy of existing law to protect communities and individuals from having their human rights restricted. Several leading transnational technology companies have expressed concern that existing laws do not protect against harmful use of FRT. This has prompted a number of companies to voluntarily limit their own use of FRT, including in the products and services they sell. However, many other companies have not tempered their use of FRT.
What is facial recognition technology?
Facial recognition technology is defined in this report as any computer system or device with embedded functionality that uses data drawn from human faces to verify an individual’s identity, identify an individual and/or analyse characteristics about an individual.
This report focuses on FRT, which is a specific form of biometric technology that has some unusual, if not unique, characteristics. In considering broader reform in this area, the authors urge that the reform principles set out in this report be adapted to apply also to other forms of remote biometric technology, including those based on an individual’s voice, gait, ear, iris, body odour and other biometric data.
How does the Model Law work?
The Model Law sets out a risk-based approach to FRT, grounded in human rights. Under the Model Law, anyone who develops or deploys an FRT Application must first assess the level of human rights risk posed by their particular FRT Application. In assessing this risk, it will be necessary to consider a range of factors, including:
- how the FRT Application functions
- where and how it is deployed (for example, the spatial context)
- whether affected individuals can provide free and informed consent
- the performance or accuracy of the FRT Application, and
- the effect of any decisions made in reliance on the FRT Application’s outputs.
Drawing on these factors, the Model Law provides for a structured way of assessing the human rights risk of each specific FRT Application through a ‘Facial Recognition Impact Assessment’ (FRIA). FRT Developers and Deployers must complete this FRIA process, and assign a risk rating to the relevant FRT Application: base-level, elevated or high risk. That assessment can be challenged by members of the public and the regulator.
To address this human rights risk, the Model Law contains a cumulative set of legal requirements, limitations and prohibitions that apply according to the assessed risk rating. As the level of risk for a particular FRT Application increases, the Model Law imposes progressively stricter legal constraints and prohibitions.
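To make the cumulative structure concrete, the sketch below models the three risk ratings and how obligations stack as the rating rises. It is purely illustrative and not part of the Model Law: the tier names come from the report, but the RiskRating and obligations_for names and the obligation lists are hypothetical simplifications.

```python
# Illustrative sketch only: not part of the Model Law. It models the
# cumulative, risk-tiered structure described in the report; the obligation
# text below is a hypothetical simplification.
from enum import IntEnum


class RiskRating(IntEnum):
    """Risk ratings assigned through a Facial Recognition Impact Assessment."""
    BASE_LEVEL = 1
    ELEVATED = 2
    HIGH = 3


# Obligations accumulate: each tier adds to everything below it.
OBLIGATIONS = {
    RiskRating.BASE_LEVEL: [
        "Complete a FRIA, register it with the regulator and publish it",
        "Comply with privacy law obligations as applied and extended to FRT",
    ],
    RiskRating.ELEVATED: [
        "Meet additional requirements, such as the FRT technical standard",
    ],
    RiskRating.HIGH: [
        "Prohibited, unless specifically authorised by the regulator, "
        "conducted as genuine research, or covered by the law enforcement "
        "and national security rules (including the 'face warrant' scheme)",
    ],
}


def obligations_for(rating: RiskRating) -> list[str]:
    """Return the cumulative obligations for an FRT Application's rating."""
    return [
        obligation
        for tier in RiskRating
        if tier <= rating
        for obligation in OBLIGATIONS[tier]
    ]


if __name__ == "__main__":
    # Example: an elevated-risk Application carries both base-level and
    # elevated-tier obligations.
    for item in obligations_for(RiskRating.ELEVATED):
        print("-", item)
```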
Some of the Model Law’s requirements are procedural – for example, FRIAs must be registered with the regulator and made publicly available to ensure transparency of operation and use. Other requirements are substantive – for example, the Model Law applies and extends existing privacy law obligations to FRT Applications. In addition, the Model Law provides for the creation of a new FRT technical standard that would have the force of law.
The Model Law prohibits the development and use of high-risk FRT Applications, subject to three exceptions: where the regulator provides specific authorisation; in genuine research; and in the context of law enforcement and national security agencies, where the Model Law provides for specific legal rules, including a ‘face warrant’ scheme.
Finally, the report recommends that a suitable regulator be legally empowered and resourced to oversee the development and use of FRT in Australia. The Office of the Australian Information Commissioner (OAIC) would be the most obvious candidate to regulate the development and use of FRT in the federal jurisdiction, with a harmonised approach in respect of the state and territory jurisdictions.
Next steps for urgent reform
There is an emerging consensus across diverse stakeholder groups that reform in this area is both urgent and important. This report calls on Australia’s Federal Attorney-General to lead the reform process by taking four key steps:
1. The Attorney-General should introduce a bill into the Australian Parliament, based on the FRT Model Law set out in this report. This bill would apply to FRT within the regulatory purview of the Australian Government.
2. The Attorney-General should assign regulatory responsibility to the Office of the Australian Information Commissioner, or another suitable regulator, empowering that body to take a central role in the creation of an FRT technical standard, and in providing advice for FRT Developers, Deployers and affected individuals. The Australian Government should provide appropriate resourcing to the FRT regulator to fulfil these new functions.
3. The Attorney-General should initiate a process with their state and territory counterparts to ensure that the law on FRT is harmonised across all Australian jurisdictions. This process should ensure the law is consistent and easy to understand for FRT Developers, Deployers and affected individuals, regardless of where they are located in Australia.
4. The Attorney-General should work with other relevant federal ministers to establish an Australian Government taskforce on FRT. The taskforce would have two functions. First, it would work with all relevant Federal Government departments and agencies, such as the Australian Federal Police, to ensure their development and use of FRT accords with legal and ethical standards. Second, it would lead Australia’s international engagement on FRT, so that Australia can have a positive influence on the development of international standards and other assurance mechanisms for FRT, and to ensure that Australia’s legal approach to FRT is consistent with international law and international best practice.