Biometric privacy statute does not violate First Amendment

Biometric identifiers extracted from a photo are not public in the same way the photo itself is

 

Plaintiffs filed a class action lawsuit against a facial recognition technology company and related individual defendants, asserting violations of the Illinois Biometric Information Privacy Act (“BIPA”). Plaintiffs alleged that defendants covertly scraped over three billion photographs of faces from the internet and then used artificial intelligence algorithms to scan the face geometry of each individual depicted to harvest the individuals’ unique biometric identifiers and corresponding biometric information. One of the defendants then created a searchable database containing this biometric information and data that enabled users of its proprietary platform to identify unknown individuals by uploading a photograph to the database. Accordingly, plaintiffs alleged that defendants collected, captured, or otherwise obtained their biometric data without notice and consent, and thereafter, sold or otherwise profited from their biometric information, all in violation of BIPA.

Unconstitutional restriction on public information?

Defendants moved to dismiss the BIPA claim on a number of grounds, including an argument that BIPA violated defendants’ First Amendment rights. More specifically, defendants maintained that the capture and analysis of faceprints from public images was protected speech, and thus, BIPA was unconstitutional because it inhibited the ability to collect and analyze public information. Plaintiffs, however, asserted that capturing faceprints and extracting private biometric identifiers from them was unprotected conduct. The court sided with plaintiffs and rejected defendants’ argument.

The court held that defendants’ argument oversimplified plaintiffs’ allegations. Although defendants captured public photographs from the internet, they then harvested each individual’s unique biometric identifiers and information – which are not public information – without the individual’s consent. Put differently, plaintiffs asserted that defendants’ business model was not based simply on collecting public photographs from the internet, running some source code, and republishing information via a search engine, but rather on the additional conduct of harvesting nonpublic, personal biometric data. And, as plaintiffs further alleged, unlike fingerprints, facial biometrics are readily observable, presenting a grave and immediate danger to privacy, individual autonomy, and liberty.

An intermediate approach to biometric privacy

Accordingly, the court viewed defendants’ conduct as involving both speech and nonspeech elements. Looking to the test set out in the Supreme Court case of United States v. O’Brien, 391 U.S. 367 (1968), the court observed that when such “elements are combined in the same course of conduct, a sufficiently important governmental interest in regulating the nonspeech element can justify incidental limitations on First Amendment freedoms.” The court applied the intermediate scrutiny standard set out in O’Brien, under which a regulation does not violate the First Amendment if (1) it is within the power of the government to enact, (2) it furthers an important government interest, (3) the governmental interest is unrelated to the suppression of free expression, and (4) any incidental restriction on speech is no greater than is necessary to further the government interest.

The first element was easy to dispense with because the parties did not dispute that the Illinois General Assembly had the power to enact BIPA. On the second element, the court found that the General Assembly enacted BIPA to protect Illinois residents’ highly sensitive biometric information from unauthorized collection and disclosure. Regarding the third element, the court noted that BIPA, including its exceptions, does not restrict a particular viewpoint, nor does it target public discussion of an entire topic. And on the fourth O’Brien element, the court found BIPA to be narrowly tailored: it legitimately protects Illinois residents’ highly sensitive biometric information and data, yet allows residents to share their biometric information through its consent provision. Nor is BIPA overbroad, in the court’s view, because it does not prohibit a substantial amount of protected speech.

In re Clearview AI, Inc., Consumer Privacy Litigation, 2022 WL 444135 (N.D. Ill. Feb. 14, 2022)

Why Portland has not demonstrated a long-term commitment to its ban on facial recognition technologies


Portland, Oregon yesterday passed a ban on facial recognition technology. Officials cited two primary reasons for the ban. First, current facial recognition technologies less accurately identify people who are not young, white and/or male. Second, everyone should have some sense of anonymity and privacy when in public places.

Should the facial recognition ban focus on disparate impact?

Do Portland’s efforts to “improve people’s lives, with a specific focus on communities of color and communities with disabilities” demonstrate an effective long-term commitment to keeping invasive facial recognition technology at bay? Such a focus implies that when facial recognition technologies get better and less biased, they should then be deployed full scale, because then everyone will be harmed equally.

That’s one of the problems with looking to ban a technology based on its nascent state and accompanying imperfect implementation. Given the choice between arguing (1) that a technology is being harmfully implemented now, and (2) that the technology, no matter how perfect it is, infringes some fundamental human right, I’d go with number (2) every time.

We will find ourselves halfway down the slippery slope

We know the accuracy of this technology will increase with the development of better cameras, smarter algorithms and more data. When that happens, if you are still seeking to argue against its harmful effects on fundamental rights such as anonymity and privacy, you will already have slid halfway down the slope. With your previous “best” argument made moot, your argument now – an appeal to fundamental rights – will have less impact.

So maybe we should focus on the real issues – the fundamental rights of anonymity and privacy for everyone – rather than leading with a social justice argument. Otherwise, once that primary argument has been made and later becomes moot, the rationale for the ban will be a liability.

About the author

Evan Brown is a technology and intellectual property attorney in Chicago. Follow him on Twitter at @internetcases. Phone: (630) 362-7237. Email: ebrown@internetcases.com.

See also

Police not required to publicly disclose how they monitor social media accounts in investigations
