Let the Algorithm Speak?: Third Circuit Indicates in Anderson v. TikTok That the First Amendment and Section 230 Are Inversely Related
Social media platforms sift user-generated content through a variety of algorithms, some of which collect content and channel it to users in certain demographics. That algorithmic mediation, according to a recent decision by the Third Circuit, changes the affected content from a user’s speech into the platform’s own “first-party” speech. The consequence: the liability shield in Section 230 of the Communications Decency Act is withdrawn, and private litigants may hold social media platforms liable for the content that they moderate. The decision in Anderson v. TikTok is another vignette in the ongoing judicial drama over whether social media companies can exempt themselves from all interference by political authority.
A few months ago, during oral arguments in Murthy v. Missouri, Justice Ketanji Brown Jackson posed a hypothetical—a social media challenge that “involved teens jumping out of windows at increasing elevations.” She then asked respondents’ counsel whether the government could permissibly tell platforms to remove that content. That ludicrous-sounding scenario seemed like a dodge to avoid addressing doubts the respondents had raised over the propriety of the federal government’s suppression of dissent from its COVID-19 policy. But her hypothetical, it turns out, was tame when compared with the actual content that social media platforms funnel to minors. Consider the facts of Anderson v. TikTok.
In 2021, ten-year-old Nylah Anderson accidentally killed herself attempting something called the “blackout challenge,” “which encourages viewers to record themselves engaging in acts of self-asphyxiation” using such commonplace items as “belts [or] purse strings.” Nylah discovered this “challenge” through the “For You Page” on her TikTok account. Nylah’s mother, Taiwanna Anderson, sued TikTok, pleading various causes of action under Pennsylvania law. She alleged that not only were videos depicting the blackout challenge “widely circulated on TikTok,” but that TikTok had “determined that the Blackout Challenge was ‘tailored’ and ‘likely to be of interest’ to Nylah.” Thus, TikTok’s algorithms ensured the child saw these videos. According to the complaint, Nylah was not the first child to die performing the challenge after seeing it on TikTok.
In 2022, Judge Paul Diamond of the Eastern District of Pennsylvania determined that all of Anderson’s claims were an effort to impose on TikTok the publisher liability precluded by Section 230 of the Communications Decency Act. Judge Diamond adopted the view held by other courts that algorithms are not “content” but merely “tools . . . well within the range of publisher functions covered by Section 230.” Distancing himself from the consequences of that position, Diamond opined that the “wisdom of conferring such immunity is something properly taken up with Congress, not the courts,” and he dismissed Anderson’s complaint.
What changed at the Third Circuit? Not the algorithms. Ironically, the intervening change was something that looked at first like a major victory for social media platforms like TikTok: the Supreme Court’s July 2024 ruling in Moody v. NetChoice. That decision encompassed two cases in which tech companies brought facial First Amendment challenges to Texas and Florida laws that attempted to constrain the platforms’ ability to remove or demote certain types of content or users. Although the Court unanimously agreed that the lower courts had more work to do disentangling the laws’ probable applications, five Justices, led by Justice Elena Kagan, saw an opportunity to comment on the interplay between the First Amendment and algorithms.
“The record,” Kagan acknowledged, was “incomplete even as to the major social-media platforms’ main feeds, much less the other applications that must now be considered.” And the novel methods of communication at issue strained the usefulness of analogies to traditional media. Nevertheless, sans discussion of the algorithms themselves, Kagan and four colleagues declared that “whatever the challenges of applying the Constitution to ever-advancing technology, the basic principles of the First Amendment do not vary.” Underwriting this confident pronouncement is a presumption that the platforms engage in what their appellate counsel Paul Clement labeled “the expressive business”—and if their algorithmic curation of content is not exactly “speech,” then it is at least an “expressive product.”
In his concurrence, Justice Samuel Alito observed that what the majority styled as a discussion of the “relevant constitutional principles” was, in fact, mere “dicta,” but that its inclusion in the majority opinion all but ensures that platforms like TikTok will prevail in using the First Amendment to shield their most lucrative features from state regulation. But the Anderson case demonstrates how the free-speech logic of Moody can boomerang on the platforms. It also demonstrates how a minority view can emerge out of dissents and concurrences to take hold of a debate that mainline thinking has failed to resolve.
Begin with the two judges in the Third Circuit panel’s majority. The lesson they drew from Moody was that the platforms’ algorithmic collection and representation of user content is an “expressive product,” which is a kind of speech. And because Section 230 does not immunize the platforms from liability for their own speech, algorithmically compiled content, at least in certain instances, can expose platforms to liability under state law. Although the majority cites Kagan’s opinion, its own holding effectuates some of the logic of Alito’s Moody concurrence, in which he observed that “an entity that exercises ‘editorial discretion’ accepts reputational and legal responsibility for the content it edits.”
That reasoning complicates the simple division between content and editorial tools that drives Judge Diamond’s opinion. In the revised understanding, an algorithm that enables curation and recommendation is both a kind of knowing and a kind of speaking, supplying the platforms with a more unified mind-like concept. “TikTok’s algorithm,” the Third Circuit said, “is not based solely on a user’s online inputs. Rather, the algorithm curates and recommends a tailored compilation of videos for a user’s FYP [For You Page] based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata.” Yes, TikTok does not make the blackout-challenge videos, but the fact remains that “TikTok makes choices about the content recommended and promoted to specific users” and “decides on the third-party speech that will be included in or excluded from a compilation—and then organizes and presents the included items.” Therefore, when TikTok sent Nylah videos showing the blackout challenge because of her demographics, TikTok was engaged in first-party speech. The panel left open the legal ramifications of content that platforms promoted in response to user queries.
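The panel’s description of the For You Page can be made concrete with a deliberately simplified sketch. Everything in the snippet below is hypothetical: the feature names, weights, and scoring rule are invented for illustration and do not describe TikTok’s actual system. The sketch captures only the structural point the court emphasized, namely that the feed is assembled from attributes of the user rather than from anything the user asked to see.

```python
# Hypothetical toy "For You"-style ranker. All features, weights, and the
# scoring rule are invented for illustration; no real platform is described.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    topic: str
    engagement_rate: float  # historical engagement, 0.0 to 1.0

@dataclass
class UserProfile:
    age: int
    watched_topics: dict[str, float]  # topic -> share of recent watch time

def score(video: Video, user: UserProfile) -> float:
    """Rank a video using the user's own attributes, not a search query."""
    topic_affinity = user.watched_topics.get(video.topic, 0.0)
    # Hypothetical demographic weighting of the kind the complaint alleged.
    demographic_boost = 1.2 if user.age < 18 and video.topic == "challenge" else 1.0
    return video.engagement_rate * (1.0 + topic_affinity) * demographic_boost

def build_feed(videos: list[Video], user: UserProfile, k: int = 3) -> list[Video]:
    # The platform decides what is "included in or excluded from a
    # compilation" and how the included items are ordered.
    return sorted(videos, key=lambda v: score(v, user), reverse=True)[:k]
```

On this toy model, two users who typed no query at all receive different feeds because the ranking is computed from who each user is; that act of selection and ordering is what the Third Circuit treated as the platform’s own expressive choice.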
Moody was not the panel’s only citation for the first-party-speech holding. The panel also made use of a dissent from denial of certiorari by Justice Clarence Thomas in which he wrote that, “[i]n the platforms’ world, they are fully responsible for their websites when it results in constitutional protections, but the moment that responsibility could lead to liability, they can disclaim any obligations and enjoy greater protections from suit than nearly any other industry.” His critique is clear enough to be persuasive, but of course, a dissent from denial of cert. is not a citation of first resort. Why quote this when the Moody majority already provides the necessary reasons and authority? Perhaps because Thomas’s argument provides a partial answer to Judge Diamond’s judicial quietism—the view that it is for Congress, not courts, to consider the wisdom of the social media industry’s blanket immunity. True enough, balancing the goods and ills of speech would ordinarily be a policy matter outside a judge’s remit. But Thomas suggests that much of the regime of maximum protection that social media companies presently enjoy was judicially, not congressionally, created. In other words, policy choices had long since crept into the judiciary’s interpretive approach to Section 230, and judges bear some responsibility for correcting judicial errors.
That segues to the separate opinion from the Third Circuit panel by Judge Paul Matey, concurring in part and dissenting in part. Matey agreed that Section 230 is no bar to Anderson’s claims, but he would not rely on Moody. For him, the question is whether statutory originalism applied to Section 230 should prevail over a precedential drift towards free-speech maximalism. The history of Section 230’s enactment, Matey explained, indicates that Congress intended to absolve platforms of liability only for the content which they passively host, not for the content which they actively distribute. Because TikTok tried to keep Nylah’s attention by showing her the blackout challenge, it engaged in conduct and left the protected zone of hosting.
While social media platforms see themselves as the offspring of free speech and free markets, Matey takes a more jaundiced view, commenting that they “smuggle[] constitutional conceptions of a ‘free trade in ideas’ into a digital ‘cauldron of illicit loves.’” In Moody, the platforms styled themselves as purveyors of an “expressive product” in the “expressive business” to situate their operations far beyond any political control. But the result of this logic is as stark an illustration of the divorce of rights from duties as one can find in modern America. As Matey observed, the “marketplace of ideas, such as it now is, may reward TikTok’s pursuit of profit above all other values,” but the courts’ persistent overextension of Section 230 immunity exacerbates the bad incentives by providing platforms with license “to ignore the ordinary obligation that most businesses have to take reasonable steps to prevent their services from causing devastating harm.”
Behind the argument for a narrower but more faithful reading of Section 230 is a view of the First Amendment that leaves more to democratic determination than the current jurisprudence allows. Matey quoted the work of Professor Jud Campbell for the proposition that the First Amendment “was not designed or originally understood to provide a font of judicially crafted doctrines protecting expressive freedom.” Rather, it provided recognition to “natural rights that were expansive in scope but weak in their legal effect, allowing for restrictions of expression to promote the public good.” Campbell’s work was cited to similar effect by Judge Andy Oldham in the Fifth Circuit opinion, NetChoice v. Paxton, which was vacated by Moody.
The interest in recovering an understanding of the First Amendment more faithful to the Founding has been prompted in part by the tech industry’s success in converting that text (and Section 230’s text) into sources of economic license, legal privilege, and quasi-governmental power. By persisting in the use of an attenuated analogy between social media platforms and newspaper editors, the Moody majority forestalled reexamination of the premises undergirding the industry’s enviable array of privileges and immunities. But well-reasoned opinions such as the majority and partial dissent in Anderson v. TikTok and Oldham’s opinion in NetChoice v. Paxton indicate that the Supreme Court may be unable to tamp down lower courts’ doubts that the guiding assumptions in this area of the law are wrong.
One questionable aspect of the courts’ social-media jurisprudence is the belief in algorithms as approximations of human thinking. Maintaining that view entails at least partial adoption of the computational theory of mind, which holds that the mind’s performance is essentially a form of computing that can be replicated tolerably well in binary code. When adopted by judges as a predicate to their holdings, that assumption resolves without analysis some core points of dispute. If social media platforms are more like newspaper editors than telephone companies, as the platforms insist, shouldn’t there be some inquiry into how similar the algorithms are to human judgment?
As Alito explained in his Moody concurrence, many platforms now “use AI algorithms to help them moderate content. And when AI algorithms make a decision, even the researchers and programmers creating them don’t really understand why the models they have built make the decisions they make.” The calculations behind these algorithms are notoriously “inscrutable,” a “relentless churning of endless multiplications” that more closely mimics a “digital version of the telephone game.” Perhaps there are underappreciated similarities between that process and human thought, but the resemblance is not self-evident. As philosopher D.C. Schindler notes, the comparison of thought to computation also relies on the presumed but dubious equivalence of knowledge held in the conscious mind and information encoded in binary form. Thus, the easy comparison between an algorithm and a human editor begins to sound like science fiction that is only accepted when judges find it expedient.
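Alito’s “endless multiplications” can be illustrated with a minimal, wholly hypothetical sketch: a tiny neural network whose intermediate values are just numbers carrying no articulable rationale, which is the inscrutability the concurrence describes. The model size, weights, and features below are invented and represent no real platform’s system.

```python
# Minimal sketch of why a learned ranking model resists explanation.
# The network and its weights are invented; nothing here describes TikTok.
import numpy as np

rng = np.random.default_rng(0)

# Random weights stand in for parameters learned from engagement data.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=16)

def predict_engagement(features: np.ndarray) -> float:
    """One forward pass: two rounds of the 'endless multiplications.'"""
    hidden = np.tanh(features @ W1)  # 16 intermediate numbers with no fixed meaning
    return float(hidden @ W2)        # a single score used to rank a video

user_and_video_features = rng.normal(size=8)  # stand-in for age, watch history, metadata
print(predict_engagement(user_and_video_features))
```

A human editor can be asked why an item ran on the front page; asking the model why a score came out high points only to arithmetic of this kind, repeated at vastly greater scale.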
If algorithms are not thought-like, then using them is not necessarily expressive. In some cases, this may accomplish nothing more legally than taking the platforms’ conduct out from under the First Amendment heading only to place it back under the protective Section 230 heading. But if Matey’s view is correct, and the scope of Section 230 is narrower than many lower court judges have held, then reconceptualizing the application of algorithms as a form of conduct opens the platforms to private regulation—that is, lawsuits for the harm platforms inflict on private parties.