After avoiding Section 230 for more than a quarter century, the Supreme Court will finally hear oral arguments in Gonzalez v. Google next month. The case presents the question whether YouTube’s targeted recommendations are the platform’s own speech or user speech; Section 230(c)(1) protects platforms like YouTube from liability for user speech only. The Gonzalez plaintiffs are families of victims of the Paris, Istanbul, and San Bernardino terrorist attacks. They claim that YouTube’s targeted recommendations assisted or aided in the terrorism that led to their loved ones’ murders.

The Gonzalez lawyers are concerned about insufficient content moderation, not just of terrorist activity, but also of constitutionally protected hate speech. This leads them to advance inconsistent positions. In their cert petition and merits brief, they argue for limiting Section 230(c)(1) with respect to targeted recommendations. But in an amicus brief submitted in the constitutional challenge to Florida’s social media law, which limits platforms’ ability to censor content in a viewpoint-discriminatory manner, the same legal team argues for expanding Section 230(c)(1) to protect platform censorship. The parties in Gonzalez v. Google may not give the Court the opportunity to fully consider Section 230, as both petitioners and respondent seem, in certain respects, to be on Big Tech’s side.

By way of background, Section 230(c)(1) of the 1996 Communications Decency Act relieves internet platforms of liability for their users’ posts. Section 230(c)(1) is short: it states that platforms such as Facebook or Twitter cannot be treated as “the publisher or speaker of any information provided by another” user or entity on the internet. If you libel your friend on Facebook, Section 230(c)(1) protects Facebook, limiting your friend’s legal recourse to suing you. Section 230(c)(1) mirrors traditional legal rules for telephone and telegraph companies, which generally also bear no legal liability for their users’ unlawful messages.

Lower courts have expanded Section 230(c)(1) beyond any conceivable textual interpretation to protect platforms not simply from liability arising from users’ and other third parties’ exercise of editorial discretion, but also from liability arising from the platforms’ own editorial decisions. In other words, courts have misinterpreted Section 230 to mean that if you libel your friend on Facebook, and Facebook then explicitly and maliciously decides to promote the libelous post, putting it at the top of the newsfeed of everyone in your community, your friend is still limited to suing you.

For instance, lower courts have read Section 230(c)(1) to protect platforms when they make fraudulent representations about their content moderation policies or violate contractual obligations with users. The platforms’ lawyers recently represented to a federal court of appeals that Section 230(c)(1) protects their decision to censor speech in favor of gays; in state court, they have argued that the provision allows them to kick off women and religious minorities—in contravention of civil rights law—without liability. But, however wrong as a matter of textual interpretation, their position is consistent with that of numerous courts that have held that Section 230 blocks civil rights claims.

To get to these results, lower courts have taken two interpretive wrong turns. First, by selectively quoting an early Fourth Circuit opinion, Zeran v. AOL (from which the Fourth Circuit recently and dramatically retreated in Henderson), courts have claimed that Section 230 protects platforms’ “traditional editorial functions.” But Zeran was referring to Section 230’s protection of platforms from liability arising from their users’ traditional editorial functions. Courts have misinterpreted this dictum.

Second, the platforms have relied upon the so-called three-prong interpretation of Section 230. As Google states the test in its Gonzalez briefs, for Section 230(c)(1)’s liability protection to apply:

First, the defendant must use or operate “an interactive computer service,” i.e., a service that “provides or enables computer access by multiple users to a computer server,” . . . Second, the plaintiff’s claim must “treat[]” the defendant as “the publisher or speaker” of the content the plaintiff is suing over. Third, the actionable content must come from a third party—“another information content provider.”

At first glance, this test appears to be a faithful restatement of Section 230(c)(1)’s requirements.

But Google’s statement of the second prong omits a key requirement of Section 230(c)(1)’s text: the plaintiff’s claim must involve publishing or speaking content “provided by another”—not by the plaintiff himself. Omitting this key element, courts have applied Section 230(c)(1) to internet platforms’ own speech or to their decisions that merely concern third-party content or information—content that third parties do not in fact speak and that is more accurately attributed to the platforms themselves. Google’s third prong does not cure this omission because it does not specify that “another information content provider” must be someone other than the plaintiff.

Google’s three-prong test thus extends Section 230’s protection from suits alleging unlawful content in platform users’ posts to suits by users themselves alleging platform wrongdoing that somehow involves user content. Courts employing a version of this test have used Section 230(c)(1) to block users’ suits over platforms’ fraudulent statements about their content moderation policies, breaches of contractual obligations to publish content, and civil rights claims for wrongful account termination.

The Supreme Court must closely analyze the three-prong test because the parties have converged on it in a rather strange way. Gonzalez’s petition for certiorari sought review of the “traditional editorial function” interpretation of Section 230(c)(1) that lower courts have drawn from misreading Zeran—and that was the question on which the Supreme Court granted review. Then, the Gonzalez plaintiffs submitted an amicus brief in connection with the State of Florida’s certiorari petition appealing the Eleventh Circuit’s ruling striking down parts of the Florida social media law, S.B. 7072. In that amicus brief, the Gonzalez plaintiffs urged that the traditional editorial function interpretation preempts the Florida social media law—an argument Big Tech has made often and vociferously. It is difficult to see how such an argument helps the Gonzalez plaintiffs at all.

Then, in their merits brief, the Gonzalez plaintiffs abandoned the traditional editorial function test, offered the Court a new question presented, and switched to the three-prong test. Google, in its response, chastised the switch but agreed with Gonzalez on the three-prong test. Thanks to this inexplicable last-minute change, Gonzalez has united with Google in urging a flawed test upon the Court.

As for the merits of the Gonzalez case, even Google has abandoned the traditional editorial function test in its briefing. Instead, it argues that YouTube’s targeted recommendations are simply the transmission of others’ speech and thus protected under Section 230. Gonzalez argues that algorithmically amplified targeted recommendations are YouTube’s speech and fall outside of Section 230.

As I have argued elsewhere, targeted recommendations are actions performed on other people’s speech. If the targeted recommendation is expressive under Supreme Court precedent, then it is platform speech under Section 230 and the First Amendment. To determine whether targeted recommendations are expressive, a court must ask whether platforms intend to convey a particularized message that the intended audience is likely to understand. Alternatively, as I argued in the same paper, when a social media company goes beyond merely transmitting messages to curating and controlling content, it becomes a developer of internet content under Section 230(f) and therefore moves outside Section 230(c)(1)’s liability protection.

Sometimes targeted recommendations are expressive; other times they are not. At some point, a social media company ceases to be a telephone company blindly transmitting messages and becomes a developer of content through its control and editing of others’ speech. But the Court cannot draw these distinctions in Gonzalez given that the factual record is devoid of any discovery into how platforms use their algorithms and other techniques to make targeted recommendations and otherwise control content. Interestingly, the parties’ only factual submission in the entire record concerning how targeting algorithms work is a screenshot of the results of a YouTube search that the plaintiffs reproduced in their complaint. A remand to the district court would be necessary to determine how exactly these algorithms work.

But what the Court must not do is ratify the parties’ shared version of the three-prong test simply for want of a better alternative. Although the test ignores the statutory text and manifest congressional intent, the parties have, for some inexplicable reason, united on this flawed interpretation. After more than a quarter century of avoiding the statute, the Court must get it right.

Note from the Editor: The Federalist Society takes no positions on particular legal and public policy matters. Any expressions of opinion are those of the author. We welcome responses to the views presented here. To join the debate, please email us at [email protected].