Many Americans have heard of Section 230 of the Communications Decency Act. Not many know that its key provision, Section 230(c)(1), is just one sentence long. “No provider or user of an interactive computer service,” it says, “shall be treated as the publisher or speaker of any information provided by another information content provider.” This one sentence ensures that websites can generally host others’ speech—your speech—without fear of civil liability. Enacted in 1996, Section 230 fostered the creation of the internet we know today: a place where the most prominent services—Google, Facebook, YouTube, Yelp, Wikipedia—are filled with user-generated content.
Last month, the Supreme Court heard oral argument in its first Section 230 case, Gonzalez v. Google. The petitioners are the family of a woman killed by ISIS in a 2015 terrorist attack. They want to hold Google, owner of YouTube, liable for the attack under the Anti-Terrorism Act (ATA). Seeking to get around Section 230, they argue on appeal that Google is at fault not for letting ISIS videos appear on YouTube at all, but for presenting them to users in an “up next” video feed. The Justices are being asked to decide whether Section 230 protects these “targeted recommendations”—though, as we shall see, the true issue is whether such “recommendations” are distinct, in some legally practicable sense, from other ways a publisher might organize and present third-party content to an audience.
How will the Court resolve this important case? Who knows? But here are some of the elements at play.
Can You DIG It?
The Court also granted review in a companion case, Twitter v. Taamneh. Like the plaintiffs in Gonzalez, the plaintiffs in Taamneh seek to hold social media platforms responsible, under the ATA, for aiding and abetting a terrorist attack. In Taamneh, though, the court of appeals skipped over Section 230 and addressed the underlying question of ATA liability.
The Court does not need to decide both cases. It could rule that Section 230 protects the platforms, and decline to reach the merits of the ATA claim. Alternatively, it could find that the ATA claim fails as a matter of law, and decline to rule on the scope of Section 230.
Gonzalez and Taamneh were argued on consecutive days. Following the mess that was the Gonzalez argument—more on that in a moment—some speculated that the Justices might address the ATA and duck Section 230. The speculation was premised, in large part, on the notion that the ATA dispute is straightforward. In neither Gonzalez nor Taamneh do the plaintiffs allege a direct link between ISIS videos on social media and the specific attacks in question. The aim in each case is to pin liability on platforms for providing a generally available service, some of whose users were terrorists. This is a stretch.
In the event, the Taamneh argument was trickier for the platforms than expected, leading some to suggest that the Court might tackle Section 230 but not the ATA. Yet it is hard to say that the ground has really shifted. Although the Justices spent much time testing the ATA’s boundaries, the plaintiffs’ attorney continued to concede what should be the core point. Asked whether a party can rest an ATA claim on acts or omissions that have “nothing to do” with assisting a “particular attack,” counsel said: “That is precisely our position.”
The Gonzalez petitioners have added a twist. In their petition for certiorari, they asked the Court to decide whether Section 230(c)(1) covers “targeted recommendations.” In their opening brief, they changed the question presented, urging the Court to decide “under what circumstances” Section 230(c)(1) protects “recommendations,” in general. Then, in their reply brief, they announced that “the Court should not undertake to fashion a special legal rule about recommendations as such.”
The petitioners have provided the Justices an easy way, should they want one, to avoid making their first ruling on Section 230. All they would have to do is cite the petitioners’ careless presentation of the appeal and dismiss the case as improvidently granted (DIG).
Go Big or Go Home
An oddity about Gonzalez is that the Court granted review without a circuit split. Two circuits have ruled that Section 230 protects algorithmically targeted recommendations. No circuit has gone the other way. More generally, many decisions hold that Section 230 confers broad protection. To rule for the petitioners, therefore, the Court would have to break new ground. It would have to disrupt the settled understanding of Section 230 in the lower courts.
Were it to do that, how far should it go? What new line should it draw? That was the paramount question at oral argument. That was obviously going to be the paramount question at oral argument. Yet the petitioners’ counsel had no answer. He did not have the semblance of an answer. Repeatedly asked for a governing principle, he offered a variety of irrelevant responses, ranging from digressions about thumbnails (“they shouldn’t use thumbnails!?” Justice Alito asked incredulously) to assertions about there “hav[ing] to be” an underlying “cause of action” (a truism that holds regardless of Section 230’s scope).
That might sound a little harsh. But look at what the Justices themselves had to say. Justice Sotomayor: “How do I draw the line?” Justice Alito: “I don’t know where you’re drawing the line.” Justice Thomas: “Give us a clearer example of what your point is.” Justice Kagan: “Does your position” entail “that 230 really can’t mean anything at all?” Justice Jackson: “I’m thoroughly confused.” Justice Thomas: “I don’t understand.” Justice Alito: “I’m completely confused by whatever argument you’re making at the present time.” Chief Justice Roberts mused that, in response to one question, counsel might just declare, “I give up, Your Honor.”
To fault YouTube for serving users certain “up next” videos, the petitioners contend, is not to treat it “as the publisher” of those videos under Section 230(c)(1). Such recommendations, they reason, contain an implicit message created by the platform. But under this theory, Google’s lawyer, Lisa Blatt, observed, plaintiffs could “always plead around [230](c)(1),” because “all publishing requires organization and inherently conveys that same implicit message.” The Justices seemed to agree that they were being presented with such an all-or-nothing divide. Whenever “there is content,” Justice Kagan remarked, “there’s also a choice about presentation and prioritization.” The petitioners’ position, she continued, would all but nullify Section 230 and “creat[e] a world of lawsuits.”
Because the petitioners offered no middle choice—no way to curtail Section 230 incrementally—the Justices are likely to stick with something very like the status quo. “Congress drafted a broad text,” noted Justice Kavanaugh, “and that text has been unanimously read by the courts of appeals . . . to provide protection in this sort of situation.” Why “challenge that consensus”? A narrow ruling is in order.
Other questions are floating about. Must an algorithm be “neutral” to enjoy Section 230 protection? (What is a “neutral” algorithm?) Does advertising warrant special treatment? Should racial discrimination be carved out somehow? Will artificial intelligence upend everything? These subjects came up at argument. Some of them might appear in a concurrence or a dissent. But none of them stands between the Justices and the resolution of this case. The actual decision (if we get one) is likely to affirm the judgment, and do little else.
Thomas and Jackson: Section 230 BFFs?
Which Justice takes the narrowest view of Section 230? Before the oral argument in Gonzalez, the answer had to be Justice Thomas. He was the only Justice who had, to that point, said much of anything about Section 230—all of it skeptical. In a series of separate statements regarding denials of review, he complained about the accumulation, in the lower courts, of “questionable precedent” setting forth an “expansive understanding of publisher immunity.”
At the argument, however, we discovered another Justice curious about rolling back Section 230. “Isn’t it true,” Justice Jackson asked, that Section 230 “had a more narrow scope of immunity” than “courts have . . . ultimately interpreted it to have?” She repeatedly suggested that the statute might protect platforms only against “strict” publisher liability—liability simply for having “offensive” (her word) third-party content somewhere on a website.
Jackson started from the premise that “what the people who were crafting this statute were worried about was filth on the Internet.” Was she thinking of people like Sen. J. James Exon? He sponsored the Communications Decency Amendment, a set of anti-porn provisions ultimately struck down in Reno v. ACLU (1997). Exon was quite concerned about “filth on the Internet.” No doubt. The same cannot be said of Rep. Chris Cox and Rep. (now-Sen.) Ron Wyden, authors of a bill called the Internet Freedom and Family Empowerment Act. (Note what came first—“Internet Freedom.”) That bill is what became Section 230. It was clumsily boxed with Exon’s anti-porn measure; it even got stuck with the name of Exon’s original bill (everything was rolled together as the Communications Decency Act); but it is very much its own animal.
At any rate, Jackson took it “as a given” that Section 230 is “about offensive material,” and that it aims to protect only platforms “that are doing things to try to clean up the Internet.” These assumptions led her to propose that Section 230(c)(1) works in tandem with Section 230(c)(2)(A)—which protects platforms when they “restrict access” to “objectionable” content—to encourage content removal. Section 230(c)(2)(A) protects websites when they search for and remove offensive material, the argument runs, and Section 230(c)(1) protects them when, in the course of doing so, they mistakenly leave some offensive material up. It’s all a package, Jackson proposed, to shield platforms trying “in good faith” to “block[] and screen[] offensive material.” On this view, Section 230(c)(1) should not apply when, in Jackson’s words, a “separate algorithm . . . pushes” offensive material “to the front,” so that more people see it.
Again, the historical assumptions underlying this conclusion are incorrect. In response to Jackson’s theory, Lisa Blatt explained that Section 230 is in fact “about diversity of viewpoints, jump-starting an industry, having information flourishing on the Internet, and free speech.” The point here, however, is not to show that the theory is wrong, but to underscore who is interested in it. We already knew that Justice Thomas thinks this reading of the statute has legs. He said as much in one of his separate statements, and he has advanced the idea that Section 230 should protect platforms only when they “unknowingly” fail to “remove third-party content.” Now we’ve learned that Justice Jackson might be on board, too.
We’ve all heard of the horseshoe theory of politics. Is there a horseshoe theory of jurisprudence? How many times, while they sit together on the bench, will Clarence Thomas and Ketanji Brown Jackson join forces against the rest of the Court? Time will tell. But Gonzalez might be one such instance.
The Hairy Hidden Henderson Issue
Section 230(c)(1) states (to repeat) that a platform “shall not be treated as the publisher” of the third-party content it disseminates. The seminal decision on Section 230, Zeran v. America Online (4th Cir. 1997), construes “publisher,” as used in that phrase, to cover “a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone, or alter content.”
The Gonzalez petitioners promote a narrower construction. In using the word “publisher,” they assert, Section 230(c)(1) points to an element of the common law tort of defamation. To defame someone, you must publish—i.e., convey—your defamatory statement to a third party. In the petitioners’ view, this means that Section 230(c)(1) protects the simple hosting of content—putting it online, where others will see it—but not a whit more. By this route do they hope to cut “recommendations” out of Section 230.
This reading’s biggest flaw is that it’s unworkable. The petitioners haven’t provided a justiciable line between algorithmically recommending content, on the one hand, and “simply” hosting it—organizing and presenting it—on the other. And setting the line-drawing problem to one side, there is no basis for concluding that “publisher” as used in Section 230 has a technical meaning, or even that, if it did, it would make much difference.
The petitioners’ best authority is Henderson v. The Source for Public Data (4th Cir. 2022). Because “the scope” of the term “publisher” is “guided by the common law” of defamation, Henderson claims, Section 230(c)(1) applies only when liability is premised “on the content of the speech published.”
In its brief, Google argued that the word “publisher” can and should be given its “ordinary meaning.” But it was untroubled by the prospect of defining the term in line with common law principles. “Under defamation law,” it maintained, “publication” includes the act of editing, and websites are thus “‘publishers’ when organizing content.” (As Zeran puts it, “publication does not only describe the choice by an author to include certain information.”) At argument, pressed for her view of Henderson specifically, Blatt called it “like 96 percent correct.” “I got a little lost when they were going down the common law on publication,” she went on, “but the result was great.” In her mind, Henderson, like other Section 230 decisions, “look[s] to the harm” alleged. If that harm is “independent of the third-party information,” Section 230(c)(1) does not apply.
So the introduction of common law technicalities might add nothing to the discussion. Yet it seems clear that the judge who authored Henderson wanted them to matter. He cited one of Justice Thomas’s solo Section 230 opinions three times, seeming almost to treat it as on par with binding authorities such as Zeran.
In embracing Henderson, has Google given a hostage to fortune? If the Supreme Court decides that, to understand Section 230, it must dust off its common law treatises, its investigation could take it in unpredictable directions. And that, in turn, could cause headaches for platforms in future cases.
Note from the Editor: The Federalist Society takes no positions on particular legal and public policy matters. Any expressions of opinion are those of the author. We welcome responses to the views presented here. To join the debate, please email us at [email protected].