The Office of the Australian Information Commissioner has, somewhat belatedly and with uncertain effect, issued a Determination under the Privacy Act 1988 (Cth) regarding the Australian Federal Police's use of Clearview.
The Determination states
110. I find that the Respondent interfered with the privacy of individuals whose images it uploaded to the Facial Recognition Tool, by failing to take reasonable steps under APP 1.2 to implement practices, procedures and systems relating to its functions or activities that would ensure that it complied with Clause 12 of the Code.
Disquietingly but unsurprisingly, the Determination states
116. While these appear to be constructive developments, on the evidence before me, I cannot be satisfied that steps the Respondent has taken to date will ensure that the breaches of clause 12 of the Code and APP 1.2 are not repeated or continued.
It notes
There is a disagreement between the OAIC and the Respondent about whether an interference with privacy has occurred, and this determination allows this question to be resolved. There is a public interest in making declarations setting out my reasons for finding that an interference with privacy has occurred and the appropriate response by the Respondent.
Further it states
According to a media article dated 21 January 2020, a spokesperson for the Respondent [ie the AFP] had advised at that time that it did not use the Facial Recognition Tool (see paragraph 13). The Respondent refused 3 FOI requests on 13 February 2020 on the basis that no information relating to the third party service provider had been identified, notwithstanding that the Facial Recognition Tool had been used by several Trial participants (see paragraph 15). There were limited records of how this novel technology was used. For example, the dates of registration for Trial participants are unknown; the Respondent did not have logs recording details of access and/or use of the Facial Recognition Tool; and for many uploaded images, the Respondent had no record of the particular image that had been uploaded.
Regrettably, the OAIC did not use the Determination as an opportunity to encourage best practice in FOI responses.
The OAIC states
96. I am not satisfied that during the Trial Period, the Respondent had appropriate systems in place to identify, track and accurately record its use of new investigative technologies to handle personal information.
97. I consider that the Respondent should have instituted a more centralised approach to identifying and assessing new and emerging investigative techniques or technologies that handle personal information. This would have assisted the Respondent to identify new high privacy risk projects within its organisation and take a consistent approach to risk assessment. It would also have supported the Respondent’s compliance with APP 1.2 in future, by enabling it to explain why a new or changed way of handling personal information did not have the potential to be high privacy risk (noting that it is the responsibility of each agency to be able to demonstrate whether a new or changed way of handling personal information was a high privacy risk project).
98. In addition, the Respondent’s policies should have specifically addressed the use of free trials and other freely available online search applications, for investigative purposes. The privacy risks of using such applications (such as those outlined in 69 to 73), were foreseeable given that search tools and applications are easily accessible on the internet, and noting the ACCCE’s commitment to exploring ‘new and innovative solutions’ to meet challenges posed by offenders evolving their operating methods to avoid detection.
95. The policies should have explained how attendant privacy risks should be assessed to enable compliance with Clause 12 of the Code, and the controls and approval processes in place to support such privacy risk assessments. ...
Privacy training
99. Under the Respondent’s written policies that applied during the Trial Period, functional areas were responsible for ensuring that PIAs were undertaken for all high privacy risk projects. The policies clearly stated that personnel could contact the Privacy Officer for assistance in determining whether a PIA is required, and included their contact details.
100. Notwithstanding this, none of the 10 members of the ACCCE who registered for trial accounts conducted a threshold assessment or a PIA (see paragraph 64). Given this omission, I have considered the steps the Respondent took to implement its written policies about privacy risk assessments, including through staff training and other communications about requirements under the Code.
101. While I recognise that the Respondent’s written policies contained some information about requirements to undertake a PIA under clause 12 of the Code, the Respondent’s online training module:
- did not include sufficient information to enable staff to identify whether a planned project may involve high privacy risk, such as factors indicating that a project may be high privacy risk, information about the process of conducting threshold assessments and PIAs, or relevant operational examples
- did not set out clear pathways and triggers for functional areas to consult with appropriate legal and technical experts, before engaging in new or changed personal information handling practices
- did not clearly identify who was responsible for undertaking threshold assessments and PIAs, and for keeping relevant records
- did not include information about the potential privacy risks of novel high privacy-impact technologies, or the risks to individuals of uploading personal information held by the agency to a third party service provider in the absence of a Commonwealth contract (as discussed in paragraph 71).
102. The Respondent’s submissions also indicate that at least 3 of the Trial participants had not received privacy training in the 12 months leading up to the Trial Period.
103. Based on the Respondent’s submissions and documentation provided, I cannot be satisfied that adequate training was provided to functional areas about how to undertake such an assessment, when to do so, and when to involve the Privacy Officer or other privacy experts.
PIA
104. In addition to being a discrete obligation under the Code, an example of the practices, procedures and systems that an APP entity should consider implementing to comply with APP 1.2, is a commitment to conducting a PIA for new projects in which personal information will be handled or when a change is proposed to information handling practices. A PIA can assist in identifying the practices, procedures or systems that will be reasonable to ensure that new projects are compliant with the APPs.
105. I have concluded at paragraph 76 above that the Respondent breached clause 12 of the Code by failing to undertake a PIA for a high privacy risk project.
What additional steps were reasonable in the circumstances?
106. The requirement in APP 1.2 is to take ‘reasonable steps’ to implement practices, procedures and systems to ensure compliance with the APPs and the Code.
107. I have considered the seriousness of decisions that may flow from use of the Facial Recognition Tool (see paragraph 71), the fact that the personal information of victims (including children and other vulnerable individuals) was searched, and the likelihood that the Trial involved the handling of sensitive biometric information for identification purposes. I would expect the Respondent to take steps commensurate with this level of risk under APP 1.2, to ensure any privacy risks in using technologies like the Facial Recognition Tool are carefully identified, considered and mitigated against. In some circumstances, the privacy impacts of a high privacy risk project, may be so significant that the project should not proceed.
108. I consider that having regard to these heightened risks and the deficiencies outlined above, the Respondent should have at least taken the following additional steps before the Trial Period:
- The Respondent should have implemented a centralised system to identify, track and accurately record its use of new investigative technologies to handle personal information.
- The Respondent’s written policies should have specifically identified the privacy risks of using new technologies to handle personal information as part of its investigative functions (including on a trial basis and when a service is available free of charge) and included controls and approval processes to address these risks.
- The Respondent should have ensured that staff who were responsible for assessing privacy risk received appropriate privacy training on a regular basis, which covered at least the matters outlined at paragraph 101.
- The Respondent should have conducted a PIA in relation to the Trial.
109. I have taken into account the relevant circumstances, including the Respondent’s role as a federal law enforcement agency, its use of the Facial Recognition Tool to search for victims, suspects and persons of interest for investigative purposes, the sensitive nature of the biometric information collected and used by the Facial Recognition Tool, and the time and costs of implementing appropriate policies, procedures, and training. Having regard to these circumstances, I am satisfied that the Respondent did not take such steps as were reasonable in the circumstances to implement practices, procedures and systems relating to its functions or activities that would ensure that it complied with clause 12 of the Code, as required under APP 1.2. ...
Remedies
111. There are a range of regulatory options that I may take following an investigation commenced on my own initiative. In determining what form of regulatory action to take, I have considered the factors outlined in the OAIC’s Privacy Regulatory Action Policy and the OAIC’s Guide to Privacy Regulatory Action.
112. I am satisfied that the following factors weigh in favour of making a determination that finds that the Respondent has engaged in conduct constituting an interference with the privacy of an individual and must not repeat or continue such conduct:
- The objects in s 2A of the Act include promoting the protection of the privacy of individuals, and promoting responsible and transparent handling of personal information by entities.
Specified steps
113. Under s 52(1A)(b) I may declare that the Respondent must take specified steps within a specified period to ensure that an act or practice investigated under s 40(2) is not repeated or continued.
114. I recognise that the Respondent is proactively working to build the maturity of its privacy governance framework and embed a culture of privacy compliance across the agency. I particularly acknowledge the Respondent’s commitment since the Trial Period, to reviewing and strengthening parts of its privacy governance framework. This includes reviewing and updating its privacy management plan (1 July 2021 to 1 July 2022), which identifies specific, measurable privacy goals and targets and sets out how the agency will meet its compliance obligations under APP 1.2.
115. In addition, the Respondent submitted during the investigation that it:
- had appointed a dedicated position within the ACCCE, who would be responsible for undertaking software evaluations of similar kinds of applications in future
- was undertaking a review of existing internal governance processes and documents to specifically address the use of free trials in the online environment
- had commissioned a broader review of the Respondent’s privacy governance with the assistance of an external legal services provider
- was reviewing its training module to ensure operational relevance to all staff by including sufficient context and explanation.
116. While these appear to be constructive developments, on the evidence before me, I cannot be satisfied that steps the Respondent has taken to date will ensure that the breaches of clause 12 of the Code and APP 1.2 are not repeated or continued.
117. The Respondent has not provided the OAIC with specific information about how any steps it has taken or is taking, will prevent similar breaches occurring again in the future, by addressing the deficiencies in paragraphs 95 to 105 above. For example, during this investigation, the OAIC was not provided with details of how the Respondent’s policies, decision making processes, and approval processes in relation to the use of new technologies have changed since January 2020. In addition, while the OAIC’s preliminary view contained findings about additional steps that should have been taken to train staff about privacy impact assessments, the Respondent did not provide any updated information about changes to its training program.
118. Without a more coordinated approach to identifying high privacy risk projects and improvements to staff privacy training, there is a risk of similar contraventions of the Privacy Act occurring in the future. This is particularly the case given the increasing accessibility and capabilities of facial recognition service providers and other new and emerging high privacy impact technologies that could support investigations.
119. For these reasons, I consider that it is reasonable, proportionate and appropriate to make the declarations in paragraph 2(c) of this determination, under s 52(1A)(b) of the Privacy Act, requiring an independent review of the changes made to the Respondent’s relevant practices, procedures, systems (including training) since the Trial Period. The declarations will provide the OAIC with ongoing oversight of updates to the Respondent’s privacy governance framework. The independent review may also provide additional assurance to Australians that the deficiencies identified in this determination have been addressed. These specified steps will help the Respondent to prevent similar contraventions, and ensure any privacy risks in using high privacy impact technologies are carefully identified, considered and mitigated against.
In October, in a separate Determination, the OAIC addressed data collection by Clearview, subsequently commenting
Clearview AI, Inc. breached Australians’ privacy by scraping their biometric information from the web and disclosing it through a facial recognition tool.
The determination follows a joint investigation by the Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO).
Commissioner Falk found that Clearview AI breached the Australian Privacy Act 1988 by:
- collecting Australians’ sensitive information without consent
- collecting personal information by unfair means
- not taking reasonable steps to notify individuals of the collection of personal information
- not taking reasonable steps to ensure that personal information it disclosed was accurate, having regard to the purpose of disclosure
- not taking reasonable steps to implement practices, procedures and systems to ensure compliance with the Australian Privacy Principles.
The determination orders Clearview AI to cease collecting facial images and biometric templates from individuals in Australia, and to destroy existing images and templates collected from Australia.
Clearview AI’s facial recognition tool includes a database of more than three billion images taken from social media platforms and other publicly available websites. The tool allows users to upload a photo of an individual’s face and find other facial images of that person collected from the internet. It then links to where the photos appeared for identification purposes.
The OAIC determination highlights the lack of transparency around Clearview AI’s collection practices, the monetisation of individuals’ data for a purpose entirely outside reasonable expectations, and the risk of adversity to people whose images are included in their database.
“The covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” Commissioner Falk said.
“It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database.
“By its nature, this biometric identity information cannot be reissued or cancelled and may also be replicated and used for identity theft. Individuals featured in the database may also be at risk of misidentification.
“These practices fall well short of Australians’ expectations for the protection of their personal information.”
Commissioner Falk found the privacy impacts of Clearview AI’s biometric system were not necessary, legitimate and proportionate, having regard to any public interest benefits. “When Australians use social media or professional networking sites, they don’t expect their facial images to be collected without their consent by a commercial entity to create biometric templates for completely unrelated identification purposes,” she said.
“The indiscriminate scraping of people’s facial images, only a fraction of whom would ever be connected with law enforcement investigations, may adversely impact the personal freedoms of all Australians who perceive themselves to be under surveillance.”
Between October 2019 and March 2020, Clearview AI provided trials of the facial recognition tool to some Australian police forces which conducted searches using facial images of individuals located in Australia. ...
Clearview AI argued that the information it handled was not personal information and that, as a company based in the US, it was not within the Privacy Act’s jurisdiction. Clearview also claimed it stopped offering its services to Australian law enforcement shortly after the OAIC’s investigation began.
However, Commissioner Falk said she was satisfied Clearview AI was required to comply with Australian privacy law and that the information it handled was personal information covered by the Privacy Act.