For more than two decades, technology firms of all sizes have provided tools for users to upload content online, much of it unedited. While most of these posts are benign, others may be defamatory or otherwise tortious. Individuals are liable for any tortious content in their own posts. But courts have long held that the technology companies providing the means to upload such content are shielded from those tort claims by Section 230 of the Communications Decency Act.

Passed in 1996, Section 230 defines an “interactive computer service” under subsection (f); limits its civil liability under subsection (c); states its “obligations” under subsection (d); and makes clear that the section does not constrain many other forms of law under subsection (e).

From reading the popular press, one might believe that Section 230 provides an unbounded safe harbor for internet platforms to edit, censor, and curate user-uploaded content however they wish. But that is not what Section 230’s text states, nor is it what courts have found.

Section 230 was enacted as part of the Communications Decency Act, a law whose very title reflects the deep cultural divide that persists today about whether and how to balance the positive and negative aspects of information available online. When entering into an agreement to offer service to a customer, an interactive computer service is obligated under subsection (d) to “notify such customer that parental control protections (such as computer hardware, software, or filtering services) are commercially available that may assist the customer in limiting access to material that is harmful to minors.” But few internet platforms provide such a notification. The courts have not addressed whether, absent such a notification, an online platform qualifies as an “interactive computer service” under Section 230.

Additionally, in recent months several internet platforms have banned individual speakers deemed offensive, rather than merely blocking offensive content. The courts have not addressed this issue, but the language of Section 230(c) makes no specific mention of protection from civil liability for blocking speakers; it refers only to content.

Nor does Section 230(c) immunize internet platforms from civil liability for their own editorial comments about user content. This is yet another trend among today’s major internet platforms: “labeling” or otherwise characterizing users’ posts. Section 230(c) provides protection for blocking third-party content, but it does not protect an internet platform from its own potentially tortious comments about third-party content, or any content, for that matter. Courts have not yet addressed this concern.

Perceptions of Section 230 divide America. Some observers claim the section provides an effectively unbounded shield against liability that would otherwise destroy the internet. Others see that same unbounded shield as enabling harmful conduct by internet platforms. Neither of these conflicting views is accurate. The language of Section 230 is far more nuanced, and it is up to the courts to explore and explain its boundaries more clearly.

 

Note from the Editor: The Federalist Society takes no positions on particular legal and public policy matters. Any expressions of opinion are those of the author. To join the debate, please email us at [email protected].