08 December 2020

Facial Recognition in NZ

Facial Recognition Technology in New Zealand: Towards a Legal and Ethical Framework, by Nessa Lynch, Liz Campbell, Joe Purshouse and Marcin Betkier, comments:

‘The algorithms of the law must keep pace with new and emerging technologies’: R (on the application of Bridges) v Chief Constable of South Wales Police [2019] EWHC 2341 (Admin).

This technology allows remote, contactless data processing, even without a person’s knowledge. In the current digital environment, where people’s faces are available across multiple databases and captured by numerous cameras, facial recognition has the potential to become a particularly ubiquitous and intrusive tool. The increased surveillance enabled by this technology may ultimately reduce the level of anonymity afforded to citizens in the public space. 

1 FRT AND ITS USE 

The use of automated facial recognition technology (FRT) is becoming commonplace globally and in New Zealand. FRT involves the use of an algorithm to match a facial image to one already stored in a system. It is used in automated passport control and other border control measures, as a biometric identifier in the banking, security and access contexts, and on social media platforms and various other consent-based applications. 
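To make that matching step concrete, the sketch below (our illustration, not drawn from the report) shows the general principle in Python: a face image is reduced to a numeric template (an ‘embedding’), and identification amounts to comparing that template against stored templates using a similarity score and a decision threshold. The function names, vector size and threshold are hypothetical placeholders; a real system would derive the embeddings from a trained face-recognition model.

    # Minimal illustrative sketch of template matching, the step at the core of FRT.
    # All names and values here are hypothetical; real embeddings come from a trained model.
    import numpy as np

    def cosine_similarity(a, b):
        # Similarity in [-1, 1]; higher means the two face templates are more alike.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify(probe, gallery, threshold=0.8):
        # Compare a probe template against every stored template and keep the best match.
        scores = {identity: cosine_similarity(probe, template)
                  for identity, template in gallery.items()}
        best = max(scores, key=scores.get)
        # Only report a match if it clears the decision threshold; otherwise report no identity.
        return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

    # Hypothetical usage with random placeholder vectors standing in for real embeddings.
    rng = np.random.default_rng(0)
    gallery = {"passport_0123": rng.normal(size=128), "passport_0456": rng.normal(size=128)}
    probe = gallery["passport_0123"] + rng.normal(scale=0.05, size=128)  # a noisy new capture
    print(identify(probe, gallery))

The threshold in such a system trades false matches against missed matches, which is where the accuracy and bias concerns discussed below arise.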

2 VALUE AND RISKS OF FRT 

FRT offers accuracy, speed and convenience in identity management in the commerce, travel, immigration, border control and security contexts. 

The ability to identify and intercept an individual through automated cross-checking of images could be of immense value in the investigation of crime, counter-terrorism, and immigration. However, there are critical implications for the right to privacy and the right to be free from discrimination, and its use can compound existing biases. FRT is unlike other biometrics such as DNA and fingerprints in that facial images can be collected at a distance, and their collection, use and storage are not specifically covered by legislation in New Zealand. 

3 CONTRIBUTION OF THIS REPORT 

This report contributes to the understanding of how and when this rapidly emerging technology should be used and how it should be regulated. It is centred on what has been described as the ‘second wave’ of algorithmic accountability:

While the first wave of algorithmic accountability focuses on improving existing systems, a second wave of research has asked whether they should be used at all—and, if so, who gets to govern them.

This project seeks to address the regulation gap by ascertaining how FRT can and should be regulated in New Zealand. While the benefits that FRT surveillance might offer are increasingly observable, its effect on civil liberties is subtler but certainly pernicious. Given the potential for FRT to be used as a key identity and access management tool in the future, there are pertinent questions around how images are being collected and stored now by the private sector. Where are these images being stored? Who has access to this data? What else might the images be used for? 

Without a detailed appraisal of the benefits of state FRT surveillance, and an understanding of the ethical issues raised by its use, any framework for the regulation of this activity cannot hope to engender public confidence that its use is fair and lawful. 

4 METHODOLOGY AND APPROACH 

We are a project team with extensive expertise in the theory and practice of biometrics, data privacy and state surveillance, with established collaborative relationships and a track record of impactful co-authored publications. Our experience extends to bridging the gap between academic scholarship and policy and practice, including comparative insight into ethical issues, governance and regulation in this space. 

The project combined literature review, legal reasoning, analysis of theoretical frameworks, and stakeholder consultation and interviews to produce an accessible but insightful analysis of the use of FRT in New Zealand, the risks and benefits of the technology, and the options for regulation, governance and oversight. The principal phases of the project were:

Phase 1 – Literature review and scoping: This phase involved surveying the literature and stocktaking uses of FRT nationally and internationally. 

Phase 2 – Issues paper: This phase involved the writing of an issues paper which outlined the key questions and scoped some preliminary recommendations. 

Phase 3 – Workshop and panel discussion: A workshop was held in Wellington in October 2019. Attendees were drawn from New Zealand Police, MBIE, the Privacy Commissioner, the Office of the Prime Minister’s Chief Science Advisor, the Law Commission, the Artificial Intelligence Forum of NZ, the Department of Internal Affairs, the Department of the Prime Minister and Cabinet, the private sector and academic colleagues. Two international experts, Clare Garvie of the Centre for Privacy and Technology at Georgetown University in Washington DC and Rachel Dixon, the Privacy and Data Protection Deputy Commissioner for the State of Victoria, attended and participated in the workshop, as did all members of the research team. A public panel discussion was held at Victoria University of Wellington on 17 October 2019. 

Phase 4 – Report writing: 2020 was an exceptional year, and Covid-19 affected our work in many ways. Like academic colleagues around the world, we saw our research disrupted by lockdowns, increased teaching duties and the cancellation of conferences, seminars and research trips. Government operations in New Zealand were also significantly affected as civil servants were redeployed to the Covid-19 response. Our thanks to our funders for permitting an extension to the time available for drawing down the funding. 

Phase 5 – Peer review and publication: Several colleagues from the academic and public sectors generously gave their time to peer review our recommendations and other sections. Any errors are of course our own.

5 OUTLINE OF THE REPORT 

Section 1 – stocktakes the use of FRT across New Zealand and comparable jurisdictions.

Section 2 – discusses the content and application of the human rights framework.

Section 3 – discusses ethical standards for the use of technologies such as FRT, public attitudes and social licence.

Section 4 – considers the threats that FRT may pose to human rights.

Section 5 – analyses the application of existing laws and regulation in New Zealand.

Section 6 – considers models of regulation from comparable jurisdictions.

Section 7 – draws together general and specific recommendations.