A global trend of growing regulatory scrutiny
Determinations issued by the Office of the Australian Information Commissioner (OAIC) following two recent investigations, into 7-Eleven and Clearview AI, reinforce a global trend of growing regulatory scrutiny of the use of facial recognition technology by private sector organisations, and follow Facebook's recent decision to shut down its facial recognition system.
While the facts underpinning the two investigations were quite different, both determinations:
- provide further guidance about what constitutes 'biometric' information under the Privacy Act 1988 (Cth) (Privacy Act);
- reinforce statements made in previous determinations around the requirements for consent; and
- provide insight into how the OAIC will assess whether the collection of personal information is 'reasonably necessary' for an organisation's functions or activities.
- The OAIC has affirmed that it considers the Privacy Act to have a very broad extraterritorial application, finding that an overseas organisation will be an APP entity bound by the Privacy Act even if it has no physical presence in Australia and does not sell products in the Australian market, so long as it systematically collects information from individuals in Australia via a website hosted outside of Australia.
- A facial image alone (even where blurred) is generally sufficient to establish a link back to a person, and so will constitute personal information. Similarly, faceprints (being the algorithmic representations of facial images and the basis of a biometric verification system) will also generally constitute personal information as, by their nature, these verification systems will permit faceprints to be matched. Whether a person is reasonably identifiable is an objective test, and includes where images could be identified by a machine or algorithm.
- Even where an organisation relies on a largely end-to-end service provider to collect personal information, and cannot practically, independently access that information, the organisation may still be treated as having collected that personal information for the purposes of the Privacy Act if it has contractual control over the information, and the purpose of collection is directly tied to the organisation's purposes.
- The bar is set reasonably high for whether collection of biometric information is reasonably necessary for a particular function or activity. Organisations should expect to provide evidence (ideally, by way of a privacy impact assessment) of the particular risks which would be addressed by the collection (including the prevalence of the risk), why that risk outweighs the potential harm to individuals, and why the collection is in fact appropriate and adapted, rather than simply convenient or helpful.
From 15 June 2020 until 24 August 2021, 7-Eleven deployed facial recognition technology (provided by a third party) to obtain customer feedback in 700 stores across Australia. Tablets at the stores allowed customers to complete a voluntary survey about their experience, and took facial images both when the customer first engaged with the survey, and after completion for the purposes of matching survey results to understand customer demographics.
The underlying facial images were deleted from the tablet after they were uploaded to a third party service provider's system, and deleted from the service provider's system within 7 days.
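In engineering terms, the matching purpose the determination describes (flagging multiple survey responses from the same individual within a period of time) amounts to a windowed deduplication over faceprint matches. The sketch below is a minimal, hypothetical illustration of that idea only: it assumes faceprints have already been resolved to stable match identifiers, and the 24-hour window and identifiers are invented for the example (the determination does not specify the actual window or mechanism).

```python
from datetime import datetime, timedelta

# Invented window; the determination does not specify one.
REPEAT_WINDOW = timedelta(hours=24)

def flag_repeats(responses, window=REPEAT_WINDOW):
    """Given (faceprint_id, timestamp) survey responses, flag those whose
    faceprint was already seen within the window (likely repeat responders)."""
    last_seen = {}
    flags = []
    for faceprint_id, ts in sorted(responses, key=lambda r: r[1]):
        prev = last_seen.get(faceprint_id)
        flags.append(prev is not None and ts - prev <= window)
        last_seen[faceprint_id] = ts
    return flags

responses = [
    ("fp-1", datetime(2021, 8, 1, 9, 0)),
    ("fp-2", datetime(2021, 8, 1, 10, 0)),
    ("fp-1", datetime(2021, 8, 1, 11, 30)),  # same faceprint again, inside the window
    ("fp-1", datetime(2021, 8, 3, 9, 0)),    # same faceprint, but outside the window
]
print(flag_repeats(responses))  # [False, False, True, False]
```

The point of the sketch is simply that detecting repeat responders requires faceprints to be matched against one another, which is central to the OAIC's finding that the individuals they represent are reasonably identifiable.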
7-Eleven claimed the facial images and faceprints were not personal information as they were not used to identify, monitor or track individuals, and only a limited number of service provider personnel could view non-blurred versions of these images for the purposes of identifying errors. It also claimed the service provider's system was independent of 7-Eleven's and that none of the information collected by the facial recognition tool was matched with any other personal information or customer data.
The OAIC found that 7-Eleven interfered with the privacy of individuals by collecting facial images and faceprints through its customer feedback mechanism, breaching APPs 3 and 5.
Clearview AI provides a facial recognition search tool for its users. The tool works in the following way:
- An automated image scraper collects images of individuals’ faces (Scraped Image), URLs and metadata from publicly available sources on the internet (eg social media and other publicly accessible web pages) and stores them in a database. AI then generates a Scraped Image Vector, which is a mathematical representation of the Scraped Image.
- A registered user can upload an individual’s image (Probe Image), which is then also converted to a vector (Probe Image Vector).
- The tool compares the Probe Image Vector against all Scraped Image Vectors and shows the user any Scraped Image matches with an option to be redirected to the original web page for additional information. The respondent’s database has more than 3 billion Scraped Images.
- Clearview offers its service to government customers for law enforcement and national security purposes. From October 2019 to March 2020, the respondent offered free trials to Australian police agencies (Trial Period). No formal customers were onboarded, and Clearview now refuses users from Australia.
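The comparison step described above is, in engineering terms, a nearest-neighbour search over face embeddings. The sketch below is a minimal, hypothetical illustration of that idea, using cosine similarity over toy vectors; the vector values, URLs and threshold are all invented for the example. A production system of the scale described would use learned embeddings and approximate-nearest-neighbour indexes rather than a linear scan.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(probe_vector, database, threshold=0.9):
    """Return (source_url, score) for each scraped vector similar to the probe,
    best match first."""
    scored = [
        (record["url"], cosine_similarity(probe_vector, record["vector"]))
        for record in database
    ]
    return sorted(
        [(url, score) for url, score in scored if score >= threshold],
        key=lambda m: m[1],
        reverse=True,
    )

# Toy database of scraped-image vectors with their source URLs.
database = [
    {"url": "https://example.com/a", "vector": [0.9, 0.1, 0.3]},
    {"url": "https://example.com/b", "vector": [0.1, 0.9, 0.2]},
]

print(search([0.88, 0.12, 0.28], database))
```

Because the search links a probe image back to the web pages the matching images came from, the vectors serve to connect a face to further information about the individual, which underpins the OAIC's conclusion that they are personal (and sensitive) information.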
The OAIC found that Clearview breached APPs 1.2, 3.3, 3.5, 5 and 10.2.
Importantly, Clearview AI's defence relied on an argument that it was not an APP entity, and therefore was not bound by the Privacy Act. As such, it did not specifically defend a number of the alleged breaches. Clearview AI has already indicated its intent to challenge the determination with the Administrative Appeals Tribunal (AAT).
Clearview asserted it neither carried on a business in Australia nor collected personal information in Australia and was therefore not bound by the Privacy Act. Its argument was that:
- it conducts its business in the US without interaction with Australian individuals (and stores all images in servers in the US) and no person has authority to use its product in Australia;
- it does not confirm the presence or absence of location data (Australian or otherwise), or otherwise have regard to geography or source when collecting images; and
- to the extent that any image did originate from Australia, as the image was published on the open web, it was published in the US and therefore collected by Clearview in the US rather than Australia.
The Commissioner rejected these positions and found:
- Clearview carried on a business in Australia by providing services to Australian police agencies. The trials were commercial in nature and had the purpose of converting customers, as evidenced by marketing materials (including communications to trial users encouraging them to conduct unlimited searches and sign up for paid accounts).
- Clearview otherwise carried on a business through its automated, repetitious collection of sensitive information (facial images and vectors) from Australians, via the internet for profit, on the basis that:
- Clearview collected a very significant number of images without determining their location, so it must have collected Australians’ facial images;
- scraping images was not mere solicitation of business transactions on the internet, but instead an integral component of Clearview's business and a feature it used to distinguish itself from its competitors (ie a basis on which it monetised its database and improved its service); and
- as at the date of the Determination, the website remained accessible by Australian IP addresses.
- Clearview collects personal information in Australia.
- The Scraped Images constituted personal information and were collected by Clearview. Further, during the Trial Period, Clearview provided Australians with an ability to search for themselves and to opt out (which suggests personal information of Australians must have been collected).
- The Commissioner also found that Clearview collected a large number of images without identifying the location or nationality of depicted individuals (and therefore did not exclude images of people located in Australia).
- The Commissioner's view was that the Explanatory Memorandum to the Privacy Act (EM) provides that collection 'in Australia' includes the collection of personal information from an individual who is physically within the borders of Australia by an overseas organisation. However, the EM also states this would arise in circumstances where the organisation has an online presence in Australia, and where information is collected from the individual through a website. This suggests a more proactive provision of personal information by an individual and a clearer relationship between the offshore entity and the individual than existed in this circumstance.
The OAIC found that facial images and faceprints are personal and sensitive information:
- Facial images are personal information, as they are about a person and used for the purposes of biometric identification. Faceprints are digital representations of a particular individual's facial features and are therefore 'about' an individual. As faceprints can be used to distinguish a particular respondent or person from other faceprints held in the server (whether to identify repeat responses to a survey, or to identify potential responses to an image search), the individual represented by the faceprint must be reasonably identifiable.
- The fact that only a small number of people can view non-blurred versions of these images will not render them no longer personal information. Similarly, the OAIC rejected an argument that a vector or faceprint was not personal information as it could not be used to identify individuals but only distinguish images from one another, finding that vectors or faceprints constitute personal information even if an individual cannot necessarily be identified from that faceprint without other available information.
- Where facial images are collected and used as inputs into facial recognition technology, they will constitute sensitive information as they show persistent and unique biometric information to be used for the purposes of biometric identification. Faceprints themselves are also sensitive information as biometric templates.
7-Eleven itself collected the facial images and faceprints
The Commissioner found that the collection or generation of the facial images and faceprints by the service provider in an end-to-end solution constituted collection by 7-Eleven. This was because the collection was in accordance with its contractual arrangement with 7-Eleven, at 7-Eleven's request and for its purposes, and 7-Eleven had contractual rights to use the tablets and control the data held on the servers that processed faceprints for internal business purposes. This finding was notwithstanding the fact that 7-Eleven could not directly access, change or retrieve the images processed by the service provider, and the fact that the service provider could remove or reconfigure services without needing to engage third parties.
Individuals did not provide express or implied consent
Consent cannot be implied if individuals are not adequately informed, as they will not be able to understand the implications of giving or withholding consent. Consent also cannot be implied if there is ambiguity and reasonable doubt about an individual's intention.
In both determinations, the Commissioner found that the organisations' privacy policies (and, in the case of 7-Eleven, the notices at the entrance to stores) did not provide clear information about the handling of biometric information and vectors; and bundled together different uses and disclosures of personal information (meaning individuals could not choose which collections they agreed to and which they did not).
The Commissioner rejected an argument that consent could be implied from the fact that individuals did not make an express opt-out request around collection of information from public websites, particularly given social media companies' policies generally discourage scraping and individuals can upload photos of third persons. The Commissioner held that the onus cannot be on an individual to proactively identify that a particular collection is occurring (particularly where the organisation has not provided an appropriate APP 5 collection notice).
Facial recognition in public places other than for security
The relevant collection notice should clearly explain how the biometric personal information is collected (to ensure individuals do not assume that the organisation is simply using facial recognition CCTV cameras as part of general surveillance).
The collection of sensitive information for the organisations' functions was neither proportionate nor reasonably appropriate and adapted to the relevant benefit to the organisation, given the significant risk of potential harm to customers
While implementing systems to understand and improve customer experience is a legitimate function or activity of 7-Eleven, individuals are generally not comfortable with the collection of their biometric information and there were other ways 7-Eleven could have identified non-genuine responses to its survey or collected customer demographic information (eg additional survey questions). Further, 7-Eleven had not undertaken a privacy impact assessment, which could have demonstrated whether the collection of the biometric information was in fact reasonably necessary.
Similarly, collection by Clearview via scraping was neither necessary nor proportionate given the significant risk of harm in connection with potential misidentification and identity fraud.
Given the sensitivity of the information, the OAIC stated it would have expected 7-Eleven to include a more detailed description of the fact, circumstances and purposes of collection (of both facial images and faceprints) in the vicinity of the tablet screen. Specifically, 7-Eleven should have advised individuals, before the start of the survey (and collection of the first facial image), that it:
- collects facial images of individuals who complete the feedback survey on tablets in front of cashiers in the respondent's stores; and
- analyses those facial images using facial recognition technology to generate and collect faceprints of those individuals,
in order to identify if an individual is leaving multiple survey responses within a period of time, and to assist 7-Eleven with demographic profiling.
In the Human Rights and Technology Report, the Australian Human Rights Commission (AHRC) expressed its concern about the growth in the high risk uses of facial recognition and other biometric technology, such as in policing and law enforcement, and the lack of targeted legislation to prevent and address harm associated with its use. Until appropriate legislation is in effect, the AHRC has recommended a limited or partial moratorium on the use of facial recognition and other biometric technology in high risk areas of decision making.
Against this backdrop, the Australian Government has launched a comprehensive review of the Privacy Act, along with exposure draft legislation to deal with online privacy (see our in-depth analysis here and here). Speaking at the IAPP ANZ Summit, the Commissioner specifically welcomed the proposed amendments to the Privacy Act in relation to the definition of 'personal information' (being to specify it is information that relates to a reasonably identifiable individual, rather than information about such an individual) and to the broadening of the extraterritorial application of the Privacy Act (by removing the requirement that an organisation has to collect or hold information in Australia for the Privacy Act to apply to it). The Commissioner said that challenges to both of these issues had been a common thread in recent enforcement actions, such as those against Clearview, 7-Eleven and Facebook. Both amendments, if adopted, would therefore support the conclusions made by the OAIC in these determinations.
Looking further abroad, the challenges associated with facial recognition technology have recently attracted the attention of international regulators and governments. For instance, the UK ICO issued an Opinion in July 2021, setting out detailed requirements on the use of live facial recognition technology in public places. The Council of Europe, in its guidelines for legislators and decision-makers published in January 2021, also called for strict rules to avoid the significant risks to privacy and data protection posed by the increasing use of facial recognition technology, and recommended that certain applications of such technology should be banned altogether to avoid discrimination.