Note from the Editor: The Federalist Society takes no positions on particular legal and public policy matters. Any expressions of opinion are those of the author. We welcome responses to the views presented here. To join the debate, please email us at [email protected].
Information technology revolutions inevitably raise hard questions about the scope of the First Amendment’s free speech protections. From the invention of motion pictures to the advent of hyper-realistic violent video games to the rise of global social media platforms, the Supreme Court has consistently mapped First Amendment principles onto novel applications. The ongoing artificial intelligence revolution poses a similar challenge. This latest technological wave arrives as competing concerns about election integrity, misinformation, and global censorship threaten to submerge protected speech—including political satire and parody.
These trends converged on July 26, 2024, when an X account run by a man named Christopher Kohls with the handle “@MrReaganUSA” posted a satirical video in which an AI-generated voice mimicking Kamala Harris’s purports to announce her candidacy while sarcastically mocking it. The video quickly went viral after Elon Musk reposted it on his X account.
The video immediately sparked a debate about the impact on political campaigns of so-called “deepfakes”—fake or modified images, videos, or audio recordings that use AI to appear authentic. On July 28, 2024, California Governor Gavin Newsom responded to the video by posting on his X account that “[m]anipulating a voice in an ‘ad’ like this should be illegal,” and claimed that he would “be signing a bill in a matter of weeks to make sure it is.” Shortly afterward, on September 17, 2024, Newsom signed two bills restricting certain AI-generated content relating to elections.
The first bill signed into law, AB 2839, took effect immediately. It declares that “California is entering its first-ever [AI] election, in which disinformation powered by generative AI will pollute our information ecosystems like never before.” The law addresses this perceived threat by creating a window of up to 180 days before and after an election during which individuals may not disseminate certain “materially deceptive content.” The law describes materially deceptive content as “audio or visual media that is intentionally digitally created or modified, which includes, but is not limited to, deepfakes, such that the content would falsely appear to a reasonable person to be an authentic record of the content depicted in the media.”
Among the specific types of content prohibited are AI-generated depictions of a candidate “portrayed as doing or saying something that the candidate did not do or say if the content is reasonably likely to harm the reputation or electoral prospects of a candidate.” The law also bars content portraying elections officials, elected officials, or election equipment in a “materially false way” that is “likely to undermine confidence” in the outcome of an election. In addition, the law requires that otherwise covered content constituting “satire or parody” contain an obtrusive disclaimer, “no smaller than the largest font size of other text appearing in the visual media” and displayed for the entirety of the video, stating “[t]his [content] has been manipulated for purposes of satire or parody.”
The second bill signed into law, AB 2655, takes effect in early 2025 and prohibits large online platforms from hosting content prohibited by AB 2839. The law requires platforms to create mechanisms for users to report prohibited content; to deploy “state-of-the-art techniques to identify and remove” prohibited content; and to use similar techniques to label prohibited content outside the 180-day election window. It excludes “satire and parody,” but such content must still be labeled as required by AB 2839.
Kohls filed suit against Newsom challenging AB 2839 and AB 2655 (the “deepfake laws”) the same day Newsom signed them and requested a preliminary injunction against AB 2839. The next day, the popular satire site the Babylon Bee posted an article containing AI-generated images depicting (among other things) Newsom attending a Trump rally and “losing badly to Donald Trump” in a game of Catan. The Babylon Bee and a lawyer and blogger named Kelly Chang Rickert then sued Newsom on September 30 challenging the deepfake laws, and they filed a similar motion for preliminary injunction on October 1.
Both suits allege that the deepfake laws violate the First and Fourteenth Amendments on their face and as applied. The lawsuits claim the laws are content- and viewpoint-based because they single out and prohibit specific types of election-related speech to which the state of California objects. The suits also allege that the laws both target and compel speech by requiring that obtrusive disclaimers be affixed to works of satire and parody—a requirement likely to torpedo the effectiveness of such material. In addition, the suits allege that the laws deploy “vague” standards that would empower enforcers to engage in arbitrary activity that would chill constitutionally protected speech.
On October 2, 2024, the U.S. District Court for the Eastern District of California issued a preliminary injunction in favor of Kohls barring enforcement of AB 2839 against visual and audio-visual depictions. The court found that the law targeted political speech and was neither narrowly tailored nor the least restrictive means of “protecting the integrity and reliability” of the electoral process. The court rejected California’s argument that AB 2839 was akin to a “restriction on defamatory statements,” noting that the law does not require a showing of actual harm and reaches false statements that could harm the “reputation” or the “electoral prospects” of political candidates.
Moreover, the law by definition sweeps in large swaths of constitutionally protected political satire and parody created using generative AI, because such content invariably trades in deliberate exaggerations and falsehoods, using irony to make an argument and, by design, to impact the “electoral prospects” of the targeted candidate. Of course, even false speech generally enjoys constitutional protection. And the court observed that political satire and parody fall within the First Amendment’s heartland, far from the narrow band of false speech the government may regulate. In the court’s words, “Supreme Court precedent illuminates that while a well-founded fear of a digitally manipulated media landscape may be justified, this fear does not give legislators unbridled license to bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment.”
The court also found the law’s labeling requirement for satire and parody to be unduly burdensome as, in the case of Kohls’s parody, the “requirement renders his video almost unviewable, obstructing the entirety of the frame.” Moreover, the court found that the labeling requirement “forces parodists and satirists to ‘speak a particular message’ that they would not otherwise speak, which constitutes compelled speech that dilutes their message.”
The court concluded by recognizing that while AI-generated deepfakes could be problematic, AB 2839 constitutes a “blunt tool that hinders humorous expression and unconstitutionally stifles free speech and unfettered exchange of ideas which is so vital to American democratic debate.” Counterspeech, not censorship, is the best answer to satire and parody, particularly in the “political or electoral” context.
The court’s ruling should come as no surprise to the California legislature. As the Babylon Bee’s complaint recounted, both the Assembly and Senate Judiciary Committees issued reports anticipating that the laws would be challenged and acknowledging that they regulated core political speech. Moreover, as the Babylon Bee’s complaint illustrates, those concerns were well founded: the breadth and sweep of the laws would effectively neuter sites like the Babylon Bee that use generative AI to engage in obvious political satire at the time such satire matters most—in the lead-up to contested elections.
Like other revolutionary technologies before it, generative AI comes with both great promise and significant peril to our public discourse. Moreover, the complexity of generative AI, coupled with the unprecedented hype—and even predictions of societal doom—surrounding the technology, is fertile soil for the sort of demagoguery that seems to have contributed to the passage of the deepfake laws. As the district court’s opinion and the challenges brought by the Babylon Bee, Kelly Rickert, and Chris Kohls point out, uncertainty about generative AI and overhyped concerns about “misinformation” must not be allowed to trump core First Amendment values. Indeed, the alternative would be the death of satire—and that would be no laughing matter.