Section 230 has been understood to shield internet platforms from liability for content posted by users, and also to protect the platforms’ discretion in removing “objectionable” content.
But policymakers have recently taken a stronger interest in influencing tech companies’ moderation policies. Some argue the policies are too restrictive and unduly limit the scope of legitimate public debate in what has become something of a high-tech public square. Others argue the platforms need to target “hate speech,” online harassment, and other forms of objectionable content more aggressively. Against that background, states are adopting and considering legislation that would limit the scope of permissible content moderation in order to preclude viewpoint discrimination.
Some have suggested that Section 230 protection, in combination with political pressure, creates First Amendment state action problems for content moderation. Others argue that state efforts to protect the expressive interests of social media users would raise First Amendment concerns of their own, by effectively compelling speech by social media and tech platforms.
What are the First Amendment limits on federal and state efforts to influence platform decisions on excluding or moderating content?
Eugene T. Volokh, Gary T. Schwartz Distinguished Professor of Law, UCLA School of Law
Jed Rubenfeld, formerly Assistant United States Attorney, U.S. Representative at the Council of Europe, and professor at the Yale Law School
Mary Anne Franks, Professor of Law and Dean's Distinguished Scholar, University of Miami School of Law
Moderator: Hon. Gregory G. Katsas, Judge, United States Court of Appeals, District of Columbia Circuit