ACLU v. Clearview AI: The First Amendment Fight over Scrutiny and Governmental Interests in the Digital Privacy Space
In August, an Illinois court denied Clearview AI’s motion to dismiss ACLU v. Clearview AI, a lawsuit brought against the company under the Illinois Biometric Information Privacy Act (BIPA). The court’s ruling that BIPA did not violate Clearview’s First Amendment rights is a significant judicial foray into the tug-of-war between free speech and privacy against the backdrop of technological innovation.
BIPA requires companies collecting the digital biometric identifiers of Illinois citizens, such as images of an Illinoisan’s face, to provide notice and obtain a release to use such information before it is collected. These requirements would cripple businesses like Clearview, which scrape publicly available information for their databases without any knowledge of whom the images represent. Two free speech questions are implicated: 1) Does BIPA regulate behavior that merits First Amendment protection? 2) If so, is it properly tailored, under the correct level of scrutiny, towards furthering the government’s stated purpose?
As to the first question, the court recognized, without the need to spill much ink, that Supreme Court precedent has held that the First Amendment “protects not just expression, but [its] necessary predicates.” Thus, rather than applying the First Amendment only when society colloquially deems an action “speech” worth protecting, as some suggest, the Illinois court avoided the policy determination that inevitably follows that approach and applied the First Amendment to Clearview’s data collection just as if it had occurred for academic or artistic reasons (e.g., using AI to construct images of people or art that do not exist).
The court’s certainty that the First Amendment applied suggests the meat of the fight lies in determining the applicable level of scrutiny and thus whether the law’s means and ends are properly tailored. The court determined BIPA was content-neutral and applied intermediate scrutiny using the analysis of amici law professors who distinguished between the source of the information and the content it conveys: by barring certain sources (i.e., images of Illinoisans), the law was not regulating more harshly one topic over another. In distinguishing between images of humans, which are covered, and images of cats, which are not, the court explained, BIPA was not discriminating between subjects the images could convey, but rather between the sources of the biometric identifiers that could be depicted in the images.
This argument has the benefit of simplicity, but there is no guarantee it will withstand further review. In Barr v. American Association of Political Consultants, the Supreme Court rejected a similar argument: that permitting robocalls by authorized debt collectors but not by campaigns was a distinction based on the call’s source. The robocall distinction, the Court held, turned on “whether the caller [wa]s speaking about a particular topic,” just as BIPA’s applicability depends on whether a company uses imagery to convey protected expression about the subject of human facial features rather than the subject of animal facial features. This tension between the Clearview court’s analysis (emphasizing the source) and the Supreme Court’s Barr decision (emphasizing content) is something courts will continue to confront in this area. Evaluating which aspect of protected speech is targeted (content, speaker, manner) is complicated by the difficulty of recognizing which facet of the behavior is protected in the first place.
After ruling that intermediate scrutiny applied, the court found BIPA satisfied the O’Brien test: the government had a substantial interest in protecting privacy and the law did not burden speech more than was necessary, because, as the law’s supporters noted, BIPA’s opt-in rights only applied to images implicating the privacy of Illinoisans. At first blush, this argument seems to hold water. But further analysis of the chilling effect on the collection of images not covered by BIPA appears to tip the scale towards Clearview; as the company explained, there is no existing mechanism to identify which images implicate BIPA.
The court acknowledged this concern but concluded, analogizing to the child pornography context, that Clearview, as the data expert, should bear the burden of determining how to exclude images of Illinoisans. Though the court cogently notes that child pornography laws survive despite incidentally burdening speech, the analogy is imperfect: distinct physical features make it possible to identify minors in images, but no facial feature reveals a person’s state citizenship. This factual distinction raises concerns that BIPA may be unconstitutionally overbroad (akin to a regulation banning political protests across the board simply to prevent threats) rather than properly tailored. Given that the issue admits no easy answer, with real concerns on both sides of the coin (including the Fourth Amendment implications of law enforcement’s use of biometric data), this close-fit question is bound to remain front and center.
What happens to Clearview when the lawsuit runs its course remains to be seen. But more broadly, this lawsuit highlights the heart of the First Amendment dispute between free speech and privacy. Though contemporary media look nothing like the newspapers of old, laws burdening speech by regulating these modern vehicles pose the same concerns. Thus, to ensure the 1791 promise that all may speak freely remains in force—especially using disfavored means of expression—governments will have to justify their laws with proper consideration of the doctrinal issues at play here: the appropriate level of scrutiny and whether the law is properly tailored to its ends.
Note from the Editor: The Federalist Society takes no positions on particular legal and public policy matters. Any expressions of opinion are those of the author. We welcome responses to the views presented here. To join the debate, please email us at [email protected].