String Of Numbers Or Identifier: The Ninth Circuit Weighs In On BIPA’s Application To Non-Users
In June, the U.S. Court of Appeals for the Ninth Circuit affirmed a social media company’s summary judgment win on BIPA claims, in a sophisticated ruling that provides a plausible path forward for technology companies and others offering facial matching services. The case involved a social media platform’s “tag suggestion” feature, which sought to identify other users of the platform who appear in an uploaded photo. According to the lawsuit, the feature worked by comparing the faces in uploaded photos against known photos of the platform’s users.
As a reminder, BIPA is an Illinois law regulating when and how organizations collect and use biometric information about Illinois residents. Texas and Washington each have a similar law in place, and newer state privacy laws at times include biometric information in their definitions of covered data. Privacy lawyers therefore look to BIPA and courts’ rulings under it as a bellwether for the regulation of biometric information in the U.S.
Why this matters: This is a significant win for technology companies regarding what constitutes a biometric identifier under BIPA. BIPA cases continue to be filed at a rapid pace and present a significant amount of legal risk. As connected cameras and microphones proliferate and the technology supporting those devices evolves rapidly (including with the availability of AI), companies offering those products and services or any matching technologies, such as doorbell- and alarm-camera providers, need clear guidance to manage their BIPA and biometric data risks. This decision should provide some. It also illustrates why a partnership between legal risk and technology development teams is critical to managing a company’s privacy risk.
The plaintiff, a non-user of the platform, sued claiming that the matching feature involved a biometric identifier and violated his rights under BIPA because the social media company did not secure his written consent before performing this comparison. BIPA includes a consent mechanism that has received significant attention from the courts over the years.
To the Ninth Circuit panel, however, the plaintiff’s BIPA claim did not hold up. Rather than approaching the question from a unitary perspective -- which could suggest that the platform’s comparison of facial images required a biometric identifier to be created -- the court broke down the technology into its operational steps. One step involved the creation of a facial signature, “a string of numbers that represents a particular image of a face” that “do not -- and cannot -- reveal information about a face's geometric information” and, importantly to the court, is not retained after the feature completes its operation. The string of numbers is then compared against facial templates (or other strings of numbers) created from photos of platform users who opted into the “tag suggestion” feature; if there is no match, the facial signature is deleted.
The question for the court was whether the facial signature, that string of numbers, was a biometric identifier under BIPA. If it was, the court theorized that the plaintiff might have a claim even if the social media company itself was unable to identify the plaintiff. But if it was not, then there could be no BIPA violation. The district court thought there was a genuine factual dispute about whether the facial signatures could identify an individual. The Ninth Circuit panel disagreed: “[W]e conclude, contrary to the district court, that there is no material dispute of fact about whether face signatures can identify a person.”
Reading between the lines, the court appears to have found two points to be persuasive. First, that the facial signature cannot be reverse engineered to serve as an identifier in the future. On this point, the court approvingly quoted from a declaration submitted by the social media company:
Because the numbers that constitute a face signature cannot be reverse engineered, [the declarant] explained that “faces of non-users . . . that appear in photos are anonymous to [the platform]” and that “it is not possible to identify” non-users from their face signatures. The creation of face signatures “do[es] not create or store any other data from the detected faces of non-users . . . that could be used to recognize or identify them through the use of face recognition.”
Second, the court looked to the imperfect nature of the steps leading to the creation of the facial signature. The platform could detect a face in a photo, yes (in order to standardize it and then create the facial signature); but that does not mean the platform can identify the person in the photo. Similarly, even though the facial signature could be used to predict age or gender, those predictions were inaccurate, and in any event predictions of age and gender are insufficient to identify any specific person.
Although the panel affirmed the summary judgment win for the social media company, the panel rejected the district court’s practical-impossibility rationale for granting summary judgment in the first place. “The district court's decision turned on the practical impossibility of [the social media company’s] complying with BIPA if it had to obtain consent from everyone whose photo was uploaded to [the platform] before it could employ Tag Suggestions. Because the plain text applies to everyone whose biometric identifiers or information is held by [the company], this conclusion was wrong.”