Textual Challenges of Section 230

Freedom of Thought Six-Part Zoom Webinar Series: Part 2

Event Video

Listen & Download

This panel addressed the textual questions of §230: is the statute correctly understood to permit discretionary content moderation on the part of social media platforms and other supporting tech entities, or does the text provide for a more limited range of moderation policies? Although several circuit courts have adopted a more expansive interpretation of the statutory protections, Justice Thomas has recently questioned whether the prevailing application is consistent with the text. Does viewpoint discrimination fall within the scope of §230 protection? Are decisions to ban individuals from participating on a platform covered by the statutory protections? To what extent does the statute preclude state regulatory initiatives to protect speech by platform users?

Featuring: 

  • Philip A. Hamburger, Maurice and Hilda Friedman Professor of Law, Columbia Law School; President, New Civil Liberties Alliance
  • Eugene Volokh, Gary T. Schwartz Distinguished Professor of Law, UCLA School of Law
  • Mary Anne Franks, Professor of Law and Dean's Distinguished Scholar, University of Miami School of Law
  • Moderator: Hon. Gregory G. Katsas, Judge, United States Court of Appeals, District of Columbia Circuit

* * * * * 

As always, the Federalist Society takes no position on particular legal or public policy issues; all expressions of opinion are those of the speaker. 

Event Transcript

[Music]

 

Dean Reuter:  Welcome to Teleforum, a podcast of The Federalist Society's Practice Groups. I’m Dean Reuter, Vice President, General Counsel, and Director of Practice Groups at The Federalist Society. For exclusive access to live recordings of practice group teleforum calls, become a Federalist Society member today at fedsoc.org.

 

 

Alida Cass:  Hello. Welcome to today's Federalist Society event in our discussion series. My name is Alida Cass, and I'm the Vice President for Strategic Initiatives at The Federalist Society and the Director of the Freedom of Thought Project. I want to welcome you all to the second event in our showcase discussion series focusing on free speech and social media. This afternoon, June 16, we will be discussing the textual questions and challenges of Section 230. This series is part of The Federalist Society's new initiative to address emerging challenges to freedom of thought, conscience, and expression in a variety of key sectors, including our law schools, our law firms, corporate America, and the tech sector.

 

      We are very pleased to have Judge Katsas moderate the inaugural Freedom of Thought discussion series. Judge Katsas was appointed to the D.C. Circuit in December 2017. After graduating from Harvard Law School, he served as a law clerk to Judge Edward Becker on the Third Circuit and to Justice Clarence Thomas on the Supreme Court. For 16 years, he practiced at Jones Day where he specialized in appellate and complex civil litigation. He has also served as Assistant Attorney General for the Civil Division of the Justice Department, as Acting Associate Attorney General, and as Deputy Counsel to the President.

 

      I'm going to turn this over to Judge Katsas to introduce today's panel. After our speakers give their opening remarks, we will turn to questions. Judge Katsas will direct that discussion. If you would like to suggest questions, please post them in the Q&A.

 

      With that, thank you all for being with us today. Judge Katsas, the floor is yours.

 

Hon. Gregory Katsas:  Welcome, everyone, to the second of this six-part series on free speech and social media. These panels are part of the Freedom of Thought project, which is exploring what may be a kind of new McCarthyism: the possible emergence of a culture in which individuals who hold or express unpopular views are subjected not only to sharp criticism in the marketplace of ideas, but also to various other, much more serious consequences, like being hounded out of positions in universities, losing their jobs, perhaps even losing access to goods and services, including, say, a Twitter platform.

 

      This series is focused on social media. We had a great opening panel last Friday on the First Amendment. We're going to focus today on Section 230 of the Communications Decency Act. And we have four more panels planned over the next month or so on other topics like common carrier law and antitrust law, which will be announced soon, if they're not already on the website.

 

      The free speech issues in this area are challenging because they involve possible conflicts between the interests of two different sets of speakers. First, of course, are the social media platforms like Twitter and Facebook, which claim to be like newspapers and to have a strong degree of discretion to decide what speech they want to sponsor and present in their forum. But we also have the interests of individual speakers who may want to take positions that Facebook and Twitter don't like and who may have no alternative way to speak in virtual fora if the dominant platforms kick them off.

 

      We're going to focus today on Section 230, which, in general terms, favors the interests of the platforms as against the interests of individual speakers on the platform. And we're going to drill down a little bit more into what 230 exactly does or doesn't do. Heeding Justice Kagan's famous remark that we're all textualists now, I'll just begin by laying out the two key provisions of 230 which together protect the platform decisions whether or not to block content from individual speakers that the platforms may find objectionable.

 

      The first is Section 230(c)(1), which immunizes the platform decisions not to block third-party content. So it says that no provider of an interactive computer service, that's Google and Facebook and Twitter, etc., shall be treated as a publisher or speaker of information provided by a third party. That provision seems fairly straightforward. It says that if a third party posts something that is possibly objectionable, and if Facebook or Twitter chooses not to block it, then the target of the harmful speech can sue the poster but can't sue the platform.

     

      Section 230(c)(2) protects the opposite decision, namely the decision to block speech that the platform finds objectionable. So it states—and I'll paraphrase a little bit, and I'll try to quote some of the key phrases here—but it states that no platform shall be held liable for blocking done in good faith of material that the platform considers obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not the speech is protected by the First Amendment.

 

      Our panelists will talk about the breadth of (c)(2). Does it actually permit the platforms to engage in viewpoint-based discrimination against speakers based on their political speech? They'll also address the relationship between (c)(1) and (c)(2): which of the two is more concerning, and does it make sense for the platforms to have both protections? We'll probably also circle back to First Amendment issues, since obviously how we construe 230 will drive questions about where the constitutional problems are. That's actually great news, since our First Amendment discussion last Friday was very lively and very much ongoing when we ran out of time.

 

      We've got another great panel, all First Amendment experts and all folks who have done important work on the specific question of 230. Let me just introduce them for you now.

 

      Philip Hamburger is the Maurice and Hilda Friedman Professor of Law at Columbia Law School and President of the New Civil Liberties Alliance. His scholarship has focused on freedom of speech, religious liberty, and administrative law. He has written five books, which have won too many awards to mention here, and his sixth book is due out later this year. He is a member of the American Academy of Arts and Sciences, and he has served on the Board of Directors of the American Society for Legal History.

 

      Mary Anne Franks is a Professor of Law and the Michael R. Klein Distinguished Scholar Chair at the University of Miami School of Law. She is a graduate of Loyola University, Oxford University, and Harvard Law School. She teaches on the First Amendment and on law and technology, among other subjects. She too has written an acclaimed book on the First Amendment, and she is a nationally recognized expert on technology and civil rights. She is the President of the Cyber Civil Rights Initiative, which is an organization dedicated to combatting online abuse and discrimination.

 

      Eugene Volokh is the Gary T. Schwartz Distinguished Professor of Law at the UCLA School of Law. After graduating from UCLA's law school, he served as a law clerk for Judge Alex Kozinski on the Ninth Circuit and for Justice Sandra Day O'Connor on the Supreme Court. He teaches First Amendment law, runs a First Amendment amicus brief clinic, and has written a First Amendment textbook, among many other publications.

 

      Since many of you have already heard from Professor Franks and Professor Volokh, we're going to kick things off today with Professor Hamburger. Philip, the floor is yours.

 

Prof. Philip Hamburger:  Well, thank you. It's a great pleasure to be here with you, Judge, and these distinguished fellow scholars. I'm going to give just a crude initial account. They, no doubt, will make it more subtle. I should begin by saying that what I'm going to present is very much in line with a piece on 230 that I published in The Wall Street Journal on January 29th of this year. I'm going to begin with statutory interpretation, and I might fall back on a little bit of constitutional interpretation, if only to give context as to how we should interpret the statute.

 

      I'd like to focus on 230(c)(2). That's the section that has caused, I think, the most controversy and interest these days. It's widely assumed to provide the large tech companies with sweeping immunity for almost any speech restrictions, but the text does not really bear this out. In fact, it protects only against damages actions for restricting content, and not at all against viewpoint discrimination.

 

      Let's just consider first what Section 230(c)(2) means by "held liable." It protects these companies from being held liable for various speech restrictions, but what does that mean? It is well known that the word liable has at least two meanings. It can mean that a defendant has violated a legal duty or right and that there's thus a cause of action. But it can also simply refer to the penalty of damages.

 

      So 230's text actually makes clear that held liable means only protection from damages. How does it do that? Well, 230(e)(3) recites, "No cause of action may be brought, and no liability may be imposed, under any state or local law that is inconsistent with this Section." Now, this text distinguishes liability from a cause of action, and it thereby clarifies that liability refers to damages. So when 230(c) protects tech companies from being held liable, it doesn't offer immunity from causes of action generally, only those that lead to damages.

 

      Now, let's consider another statutory question. What material can be restricted without liability? Free speech doctrine, as we all know, distinguishes content and viewpoint discrimination. And that distinction is significant here, because Section 230 focuses on content: 230(c)(2) offers a list of content. It refers to material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable. 230 thus apparently immunizes content discrimination but not viewpoint discrimination.

 

      Note that "otherwise objectionable" could include a host of sins. But because it follows a list of content, an ordinary approach to interpretation would read it as referring to objectionable content, not objectionable viewpoint. And that's just on the statutory interpretation side. That's before we even get to the constitutional precedents.

 

      Then, let's turn, third, to our final statutory question. What actions restricting material are protected? 230(c)(2) offers protection for, and I quote, "any action voluntarily taken in good faith to restrict access to or availability of --", and then there's a list of enumerated material. This matters because barring persons or entire websites goes much further than restricting material.

 

      And by protecting only actions restricting material, 230 apparently offers no protection for actions barring persons altogether or unnecessarily taking down websites. If the only way to take down material were to take down the whole website, that would be one thing. But if you can take down less than the website and just take down the bad material, that's different. And this conclusion is reinforced, of course, by the good faith requirement. The text, actually read rather than merely assumed, thus cautions against an expansive reading of 230(c)(2).

 

      Now, at this stage, I think I have a few minutes left. Is that correct?

 

Hon. Gregory Katsas:  Yes.

 

Prof. Philip Hamburger:  So at this stage, what I'd like to do is point out that we have a choice of arguments here. One could argue, as I did in my Wall Street Journal piece, that without a narrow interpretation, 230(c)(2) would be unconstitutional -- another reason for narrowing it, in case any ambiguities were not clarified by the text itself. But today, I actually would like to go a step further. It's not altogether clear that a narrow interpretation can save 230(c) from its constitutional problems. In other words, this is a less modest argument than the one in The Wall Street Journal.

 

      So let me just touch upon the constitutional difficulties that we'd have to resolve if we were going to defend even a narrower understanding of 230. The central problem -- and there are a whole host of these -- but the central problem is the privatization of censorship. The point is not that private companies are violating the First Amendment, but rather that Congress violates the First Amendment by privatizing what it could not do itself. Notice that even if 230(c)(2) only immunizes content discrimination, government content discrimination is itself unconstitutional. Notice that phrases like "otherwise objectionable" are both overbroad and profoundly vague. Notice that "otherwise objectionable" comes very near to targeting merely offensive speech. We're in difficult territory here, aren't we?

 

      Now, Congress itself could not impose these restrictions, but what it has done is immunize powerful private parties in the hope that they will voluntarily carry out Congress's unconstitutional censorship agenda, the one embodied in that list of materials. So Congress is privileging companies from the ordinary recourse of law so that they can do what it cannot.

 

      Now, you might protest that the First Amendment only bars government censorship. That's right, but it thereby also bars the government from privatizing censorship. If the Secretary of the Interior paid me to drive a bulldozer into somebody else's house, destroying it, there would be a taking, even though it was done through an intermediary, me.

 

      And in fact, the history of the First Amendment is very helpful on this. The primary example of what the First Amendment prohibited was 17th century English censorship done under the Star Chamber and Parliament. That censorship was imposed not mostly by the government directly but through private entities, the universities and the Stationers' Company. Sometimes they were required to do the censorship, sometimes they weren't, sometimes they were told they may do it. The point is that privatized censorship is the central example of what the First Amendment prohibits.

 

      Okay, I could say a lot more. For example, one could talk about the problems of 230(c)(1), that it's speaker discrimination: it doesn't protect newspapers or individuals' meetings, and this has implications for viewpoint discrimination. But I think I'll stop here. Suffice it to say, 230(c)(2) is actually much narrower than commonly supposed, and the constitutional objections may be unavoidable, even with a narrow interpretation. Thank you.

 

Hon. Gregory Katsas:  Mary Anne?

 

Prof. Mary Anne Franks:  Thanks very much. I want to start out by saying that I probably take a different view of what is controversial about Section 230. Very broadly speaking, it is my view that (c)(2) is the provision that is quite uncontroversial and quite traditional, and that (c)(1) is actually where most of the problems come from.

 

      But I do want to put this in the context of the larger conversation that was set up for us here, and that larger conversation is concern about McCarthyism. It's concern about censorship. It's concern about the silencing of speech. And I just want to make a couple of observations about whose speech we worry about being censored and when certain objections to private or public repression or suppression of speech arise.

 

      That is to say, if one were just to throw open the conversation to what are we worried about in terms of unpopular speech being suppressed, one could say, well, we should be worried about the fact that a very recent president said that people who disrespect the flag should go to jail. We should worry about the fact that this was a president who targeted private individuals for their speech and said they should be fired if they protested silently during the national anthem.

 

      We should be worried about organizations that keep things called professor watch lists to try to highlight so-called leftist indoctrination and encourage people to report their professors, especially if we're worried about McCarthyism. We should worry about student organizations that are so sensitive about satire that they would try to keep a student from graduating from law school because he's made a joke about their organization. We might worry about the recent spate of hysteria over so-called critical race theory, a term that is often not defined and certainly not very well understood.

 

      We might worry about the fact that the government, quite honestly, is doing quite a lot. There have been attempts in many state legislatures to crack down on protests, to make it harder for people to express themselves, to impose obligations on private parties, to compel speech. So those are all things that we could be worried about, and I want to suggest that it is very revealing, in most cases, what it is that we get worried about and when we get worried.

 

      That is, Section 230 has been around, obviously, since 1996. The frenzy over (c)(2), that is to say, the "let platforms take down whatever they want" provision, is all of very recent vintage. There have been many people who have objected to (c)(1). And just to be very simplistic, (c)(1) is largely the "leave up whatever you want and you can escape liability" provision, and (c)(2) is sort of the "take down whatever you want and that'll be fine, too" provision.

 

      There has been very little controversy over (c)(2) until very, very recently, and it seems to dovetail almost entirely with the high-profile actions taken against former President Trump and others for misinformation or for incitement or for other things the platforms found objectionable. That follows four years or longer of these platforms doing nothing in the face of this kind of speech, and that seemed to be fine with most people on the conservative end of the spectrum. So things have shifted a lot, and I think it's worth noting how things have shifted, how selective one could say the concerns are, and how selective the focus on (c)(2) is.

 

      Now, I would say that the reason (c)(2) is for many people an uncontroversial provision is that it effectively simply underscores the First Amendment. What it effectively says is: you as a private actor have the right to not speak. You have the right to exclude. You have the right to not publish.

 

      So The Wall Street Journal doesn't have to take my op-ed, however much it might hurt my feelings when it doesn't. Fox News doesn't need to give me a show. A restaurant doesn't need to let me in if I come in and start screaming obscenities at someone. A nightclub that is holding an open mic night doesn't need to keep inviting me back if I get up and start spewing racial slurs. These are just the long-understood First Amendment privileges of private actors: as private actors, we have a First Amendment right not to speak, we have a right not to be compelled to speak, and we have a right not to associate with those we don't want to associate with.

 

      And we also have a very long tradition in this country of respecting private property and the decisions that private property owners make. So as the Supreme Court expressed fairly recently in an opinion called Halleck, this is a good thing. When we say that the First Amendment is restricted to state action, that is, that we only really get worried when it's the government behind it, that is in part because we want to protect the sphere of individual liberty. We want people to make choices. We want people to decide for themselves if they want to publicize certain speech, if they want to give it a platform, if they want to leave it out.

 

      And that is generally thought of as a good thing, and it has for a long time, at least, been amenable, I would say, to politically conservative groups, who would say that the laissez-faire approach of the First Amendment lets private actors do what they want: decide not to decorate a cake, not to let an LGBT group march in a parade. That has often been celebrated as, well, you know, you may not like it, and it may hurt your feelings, but as private actors, people have the right to exclude in that sense.

 

      So the argument about Section 230 and the internet may be something like, "But the internet is different because it's so powerful, and the companies that control it are so few, and they have almost monopolistic power." So maybe that's the argument for suddenly changing our minds about what we think about private property and about the First Amendment. And I'd suggest that that's not a very good reason to do so.

 

      Of course, the internet is unique in many ways, but it's not wholly unique. And the idea that being excluded from any particular social media company means that you can no longer speak, or that you can no longer speak over the internet, simply doesn't work as a descriptive matter. These are individual decisions made within individual platforms. It doesn't stop anyone from having their own blog. It doesn't stop people from emailing. It doesn't stop them from going on Fox News. It doesn't stop them from leafleting. It doesn't stop them from getting lots of money in speaker fees when they speak at universities.

 

      There are plenty of options for speech. So again, I think something very intriguing about the entire debate, such as it is, over the concerns about (c)(2) is how highly charged those concerns are in a political sense, and whether this is a matter of truly wanting free access, or a matter of saying: up until this point, conservatives have felt that the marketplace works in their favor, and so we've been happy to support a laissez-faire approach to it; now that we sometimes feel we're not dominating that entire field, we're really concerned that things should change. And I would caution very much against society changing the rules so that you can win.

 

      So I think that's where I would start with this conversation about Section 230, and just gesture toward the idea that the problems really do tend to stem from (c)(1) of Section 230, which, unlike (c)(2), is quite radical, because it says that we're essentially releasing platforms, private actors, from liability when they are knowingly causing harm.

 

      That is something we really should grapple with. That should be where the entire conversation and the concern over Section 230 lies, because that is a departure from our law, not just First Amendment law, but from premises liability, from dram shop laws, from Title VII and Title IX obligations: the idea that the host of an environment bears no responsibility when there is foreseeable harm. So my suggestion is that the scrutiny of (c)(2) is highly misplaced and highly politicized in a negative sense, and it's attention to (c)(1) that would actually reflect a genuine concern about speech silencing and harm.

 

Hon. Gregory Katsas:  All right. Eugene?

 

Prof. Eugene Volokh:  Thank you. So I totally agree with Professor Franks that we should care about all sorts of speech restrictions, for example, about President Trump calling for flag burning bans. I sharply criticized that on my blog, and lots of others on my side of the aisle did the same. Likewise, Professor Franks was alluding to an incident in which one or two students at Stanford who were involved in The Federalist Society chapter there complained about an obviously satirical criticism by another student. I criticized that on my blog as well. I believe a co-blogger of mine did too.

 

      So I'm certainly all in favor of criticizing speech suppression, whether it comes from the left or the right. I agree with that. I just don't see why that's somehow a telling argument against an argument such as the one Professor Hamburger is making here.

 

      Now, it is true that the worry about platform power is indeed of relatively recent vintage, because the platforms' quite radical assertions of their power are also of pretty recent vintage. And by radical, I mean simply by the standards of before: it used to be that Facebook and Twitter were seen as the new public square. President Trump spoke there, other people spoke there, and generally speaking, the platforms took a relatively viewpoint neutral attitude. Not entirely, of course. A few things like advocacy of terrorism have long been restricted, in fact, in league with the government, so that does raise interesting state action issues.

 

      But I think people became concerned in part because, indeed, you saw Twitter and Facebook blocking President Trump, then still president. Twitter would block various articles by prominent newspapers and such. Now, I should say, part of the concern quite likely arose on the part of conservatives because they saw conservatives being blocked. It's human nature. We focus more on things that are happening to us and our friends. That's not the way First Amendment law should work, but that's certainly the way that many people do respond.

 

      At the same time, many liberals -- Erwin Chemerinsky, and it's hard to find a more credentialed liberal than that, Genevieve Lakier, Nelson Tebbe, and various others -- also spoke out against those platforms' decisions, because they saw that these platforms were beginning to use their immense economic corporate power to influence the shape of public debate and quite likely, especially if you play the game out a few years forward, the shape of elections. And that's something that people understandably are concerned about.

 

      Now, it's true it's not a question of whether you can no longer speak, to quote Professor Franks. It's true it doesn't stop people from leafletting and doing other things, or having a blog and the like. But people's concern with regard to platforms is that, in a highly competitive environment, both electoral and more broadly in public debate, if a platform can influence people's speech to the point that it sways even a few percent of the vote, that could decide the next election.

 

      Now, maybe that's wonderful. Maybe that's just the proper exercise of private property rights. But at the same time, when corporations worth many, many billions of dollars -- I think Facebook is something like the fifth wealthiest corporation in the United States -- have the power to influence, potentially, elections this way, it is something we should be concerned about.

 

      Now, I should say I share the concern about private property rights, and I'm not at all a full-throated supporter of regulating platforms, in part because I am concerned about restricting property rights. But of course, conservatives have long recognized that, while property rights should usually be protected, there are some situations, possibly involving very large corporations that in certain contexts operate as monopolies, at least in a particular niche, where regulation is appropriate.

 

      Incidentally, liberals have long argued that corporate power, especially over political debate, is something that needs to be carefully watched and often restricted. At the same time, liberals have often recognized that some such power ought to be protected as well.

 

      So rather than demanding a certain intellectual purity from conservatives, you must be for private property rights, or from liberals, you must be for restrictions on corporate power, I think the interesting question is, regardless of whose ox happens to be gored now, what should we be doing about these kinds of platforms? At least, that's the interesting policy question. And so I think that's something that liberals and conservatives ought to both be talking about.

 

      Among other things, and I think this may in fact be driving some of the liberal commentators, and it perfectly well should, there's no particular reason to think that platforms are going to be consistently on the liberal side of things. They are on the liberal side of certain kinds of public policy issues today, just the way things happen to come out. But there's nothing inherent about large, multi-billion dollar business corporations that suggests they're always going to take, let's say, the progressive perspective, to use another political label.

 

      It's certainly quite possible that at some point, they're going to say, "Well, you know, these people are spreading all this fake news about regulating big tech and using antitrust law and such. This is really bad for the country," even though really what it is bad for is their bottom line. If they have the power to block President Trump, they have the power to block other people as well, including critics of the corporation. So I think that's why this is an important issue to talk about.

 

      Now, of course, if the platforms have a First Amendment right to decide who goes on the platform and who doesn't, then in that case, the First Amendment trumps. At the same time, I'm not really sure that they do have such a right. In fact, my suspicion is that, at least as to certain things, they don't.

 

      Let me just put up again a quote from someone who is not a conservative, one I actually mentioned last time, so at the risk of repeating myself. This is from Justice Breyer: "Requiring someone to host another person's speech is often a perfectly legitimate thing for the government to do." He was joined in this by Justices Ginsburg and Sotomayor, also not conservatives. To be sure, he was in the dissent. But the majority, which was conservatives, did not disagree with him on that.

 

      And of course, there are precedents, which we talked about last time and I won't go into right now, recognizing that, for example, shopping malls -- in a sense, they were seen as the new public square back in the '80s -- could be required by state law to allow speakers that they don't want to be there. Cable companies could be so required. Universities could be so required. Perhaps platforms can too.

 

      Now, let me just turn to the statutory question, though. This is the text of 230(c)(2), and this is what we're talking about. There are two ways of reading 230(c)(2). One is to say, look, when it says that providers of interactive computer services -- read Facebook, Google in its capacity as running YouTube, Twitter, and the like -- shall not be held liable on account of action taken to restrict access to material that they consider to be obscene, lewd, lascivious, etc., or otherwise objectionable, that really means they are free to block anything they want, so long as they consider it objectionable.

 

      And certainly, people often consider political speech to be objectionable, and objectionable could include objectionable based on viewpoint. Under that model, basically, this does reaffirm traditional common law property rights but sets them in federal stone. It says, "States, you can't create new public accommodations laws that apply to platforms. States, you can't create common carrier laws to apply to platforms because we're going to say that platforms have a right under federal law to block anything objectionable."

 

      And that was the view I had until recently. But I've had conversations with Adam Candeub from Michigan State University that actually shifted me toward supporting the second. The second view follows the interpretive canon of ejusdem generis, which is to say that when you have a phrase like "otherwise objectionable" following a list of more specific terms, you should be looking for things that are objectionable in a similar sense. So if a statute refers to railroad workers, motor carrier workers, or other workers in interstate or foreign commerce, "other workers in interstate or foreign commerce" should mean transportation workers, not anybody who works in a business that engages in interstate or foreign commerce. That's from one case.

 

      Now, of course, in order for that to work, you have to ask what the common bond is among the preceding terms, because if there is no common bond, then it sounds like "otherwise objectionable" would cover anything objectionable, since the list would just be all these disparate ways in which things could be objectionable.

 

      But it turns out there is actually a very clear common bond, because Section 230 was not a standalone statute. It was part of the Communications Decency Act of 1996, which was itself Title V of the Telecommunications Act of 1996. And let's look -- just for space reasons, I'm only giving you a few relevant sections -- let's look at the table of contents of Title V, Subtitles A and B. Section 509 is what became Section 230.

 

      A few sections before that talked about obscene and harassing communications. In one of those provisions, the text talked about obscene, lewd, lascivious, and filthy speech. Section 551 talked about violent programming. So basically, every one of the adjectives in Section 230 refers to something that was in the accompanying sections of the same act.

 

      And one thing the listed categories had in common is that they were things Congress at the time viewed as more regulable when communicated through telecommunications technology. It's true that 230 says excessively violent, but that too was a term pretty commonly used by the FCC to refer to what Section 551 talked about simply as violence, so, again, things that are violent and communicated through communications technology.

 

      Incidentally, Section 551 orders the FCC to try to protect parents' control over their children by essentially setting up identification and ratings for video programming including sexual, violent, or other indecent material. And that's very much like one of the rationales given for Section 230, which is: we're trying to empower parents by providing this kind of screening technology and the like. But Section 551 made clear that the attempt to regulate sexual and violent material there -- on video programming, to be sure, but something that Congress viewed as related and put in the same act -- was not supposed to be construed to authorize any rating of video programming on the basis of its political or religious content.

 

      So I've come to the view that when Section 230(c)(2) says otherwise objectionable, it means objectionable in the ways described in the act. For example, another provision in that same act talks about luring children into sexual contact. That may be another way material is otherwise objectionable.

 

      It doesn't mean objectionable in any possible way, and it certainly doesn't mean objectionable on the basis of its political or religious content, which in Section 551 Congress actually deliberately distinguished from sexual and violent content. So that's why I'm inclined to think that a properly crafted state statute protecting speech -- or, essentially, banning ideological discrimination by platforms -- would indeed be consistent with Section 230.

 

Hon. Gregory Katsas:  Thank you all. All right, we are pretty much on time, so at this point, I will invite the panelists to take two or three minutes to respond to anything you heard from any of your colleagues. Philip, you're up.

 

Prof. Philip Hamburger:  Thank you. I have three thoughts, other than generally to express my appreciation, especially for Eugene's comments, which I think are very apt. For Mary Anne's comments, I actually have some gentle disagreement. I understand your point of view. It makes some sense, but I'm not sure it's actually entirely applicable here. Just a personal note, and forgive me for talking about myself briefly: you expressed concern that 230 has only been of interest to conservatives, although I think we come under many labels and are more attached to the law than to any -ism, and that conservatives have only recently become agitated about 230 as a response to Trumpism and its resonances.

 

      That's simply not true. I'll just take my own case. I had no interest in this area of law, but I was asked at a religious meeting in California four years ago to undertake a project on 230 involving litigation to protect the interests of churches, especially minority churches. It had nothing to do with politics. I got into this because I hang out with people, actually, of a different religion than my own, and I feel deeply a commitment to protect people who feel oppressed. So I just want to put that out there. We don't all fall under one rubric.

 

      Second, and I should say Eugene defends free speech of all persons all the time, as you can tell from his blog and his litigation. So second, I want to get to the substance. You say this is merely private action, that these are private parties handling their own property. It's nice to hear you defend property rights and private speech and action. But I must say, however relevant that may be, it's entirely nonresponsive to my argument, at least, and I think to Eugene's too, which is that what we're talking about here is the unconstitutionality of the statute, which is an act of Congress, not an act of private parties. The point is about the act of Congress, not about private action.

 

      Now, what's wrong with this act of Congress? It is privatized government censorship. And this is actually a very common problem in our legal system, so it's a good opportunity, I think, to draw attention to it. It happens, actually, in all sorts of spheres. My first engagement with it was with IRBs, institutional review boards. HHS hands off to universities and their institutional review boards the task of prior review of scientific inquiry and publication, which actually has a death toll attached to it, because when you censor publications about medicine and prevent research into the needs of minorities, you end up with a very, very high death toll, massive, actually.

 

      So this is privatization of censorship, and it's common across our legal system these days. And far from being an oddity, or something that remains really private, this is what the First Amendment most centrally addresses. The primary example the First Amendment addressed was 17th century censorship, which was conducted almost entirely -- almost, not entirely -- through private entities: through universities, as still today with IRBs, and through the Stationers' Company. Sometimes this was required of them, sometimes it wasn't, but the point is that it's privatized censorship.

 

      It's protecting powerful entities in silencing others. And that is highly problematic, not because it's private action, or because some large company's action is state action, but because it's an act of Congress. Congress cannot do this. And in fact, the First Amendment begins, "Congress shall make no law…"

 

      I'd like to make one final point, third. I didn't hear you, Mary Anne, say anything about the statute or its text and interpretation, and that's probably because you didn't have time for it. I just want to say I would welcome hearing whether there is another interpretation of the statute, other than that offered by Eugene and myself and others, that we should be focusing on, because ultimately, if this is about interpretation, we should be looking at the words. Maybe we've misunderstood them. So I look forward to hearing another actual interpretation of the statute because I think that's key here. So thank you.

 

Hon. Gregory Katsas:  So that's a nice segue to hand the floor over from Philip to Mary Anne.

 

Prof. Mary Anne Franks:  Thanks. And I just want to say that when I point out these background issues and I point out certain political currents, this is not to suggest that anyone who is involved is doing it solely because they are part of a certain political group. Certainly, there are exceptions to any generalization, and I'm making a generalization here.

 

      But I think it's an important one, because I have worked in this space for over a decade and have been concerned with the harms that flow from online behavior and with the absolute silence in many quarters about those questions. Something that may not have come out, I think, in some of this discussion is that I am highly concerned about social media platforms and their influence. It's just that I see the problem as being very different from the one that is mostly being focused on here, and in this series more broadly.

 

      That is, there's this tendency, I think, and I'm very much out of step with this tendency, to think of inaction as being non-problematic and action as being problematic. Many of us who work in this space care deeply about the harms that flow from online behavior: being doxed, being threatened, having your private information exposed to the world, being harassed, intimidated, objectified, excluded. All of those things have been on many people's minds for a long time.

 

      And what most of the people who actually work in that space have said and have noted is that the problem is not what is taken down. The problem is what is left up. Those are decisions. They are not neutral. The decision to leave things up, to let people spin out misinformation and disinformation and harassment and expose private information, those are all choices, and they have very serious consequences. And that, I think, is the reason to be very concerned about social media platforms and their influence. So I'm very much of the mindset that we should be holding them to a higher standard, that we need to rethink Section 230.

 

      But again, I don't see another interpretation of (c)(2) other than to say it is an articulation of what the First Amendment already says, and it gives you a procedural shortcut, and that's all there is to it. Whether we need that or not, I'm not sure that we do. I'm not particularly attached to (c)(2). I think the First Amendment does a lot of work. It's got a lot of power already. It is really (c)(1), the idea that you're going to carve out or create an immunity for causing harm, for profiting from harm, for benefiting from harm, that seems like the real problem.

 

      But I do want to say a couple of other things that might make my position on this a little bit clearer because Eugene referenced the idea that liberals shouldn't be so relaxed about where things are now. I don't consider myself a liberal, as such. I consider myself to be just a grumpy person. But I will say that I would never characterize current social media as progressive or liberal. I think we just have a very different view of the world.

 

      Again, if you look at the top-ranking engagements on Facebook or most social media, it's always far-right conservatives dominating those spaces. This idea that social media is somehow slanted towards the left is becoming a really persistent form of misinformation. If what people mean is that we can look at the political affiliations of people at the top of those organizations, I find that meaningless. It doesn't tell me anything about the actual platform. If you look at what's actually happening on the platforms, there's no one who works in the online abuse space who will say, oh yes, it's definitely tilting liberal and progressive. It's quite the opposite.

 

      And so I do think that a big divide here is the question of what we are worried about when we think about social media. The camp I consider myself to be part of is the one worried about actual harm. We are worried about the fact that when no major company has any incentive to restrict certain speech or to punish it or to exclude it, and you allow people to silence each other, you allow them to threaten each other, you allow them to destroy each other's lives, you are going to end up with a certain kind of echo chamber. You are going to end up with just an extension of already existing power. And anyone who cares about free speech should care about that very much.

 

      When you take a hands-off approach to what people will do to each other, you are effectively siding with the powerful. You are effectively siding with the people who will exploit others. So social media platforms very recently have made these incredibly modest -- extremely modest, I will say -- attempts to tamp down on some of this. And I do think it's important to make a distinction between blocking someone's access, that is, kicking someone off the platform, and labeling someone's tweets, which is another part of this conversation and which, in my view of the First Amendment, is classic counterspeech. When Twitter wants to take one of Trump's false tweets and actually say that it's false, it has a right to do that. That is literally counterspeech.

 

      So to the extent that social media companies have made tiny, little steps in the direction of addressing some of their harms, that, I think, is to be celebrated. I would never even question that they have the right to do it, because, again, the First Amendment. I simply think that the real focus should be the problems that the panelists have at least in part made reference to, the actual harms we can point to involving misinformation or political dysfunction or government influence. Those are the problems that are best addressed by looking really carefully at (c)(1) and possibly having to reform it.

 

Hon. Gregory Katsas:  Eugene, I don't want to cut you off. We're running a little short on time. If we could keep it to two or three minutes, we'll have --

 

Prof. Eugene Volokh:  -- Sure, absolutely. I just wanted to say there is one thing I do agree with Professor Franks on. Certainly, platforms have very broad free speech rights, including the right to label tweets that they think are false with their own statements. They are perfectly entitled to do that, I think. It would be unconstitutional to limit them, because that would be a limitation of their speech.

 

      The question is, do they also have a First Amendment right to just refuse to host people's speech? Are they more like a newspaper, which indubitably does have this right, or a parade, which does have this right? Or are they more like a shopping center, or a university when it comes to space in its offices and rooms for recruiters, or a cable system when it comes to picking and choosing what channels it carries? Those are three examples where the Court said no, there is no First Amendment right to refuse to host things.

 

      So I'm inclined to say that the platforms are more like the examples that I gave, and the key line on this is from the Supreme Court; it's at the bottom of the slide: on a cable system, the programming offered on various channels consists of individual unrelated segments that happen to be transmitted together for individual selection by members of the audience.

 

      I think that also describes well pages on Facebook or feeds on Twitter. If I go to Twitter.com @realDonaldTrump, back when it was there, I would see this individual segment that was transmitted for my individual selection, even though I could also go to Nancy Pelosi's feed and look at that if I wanted to. So I do think that platforms have very broad First Amendment rights. I'm just not at all sure that they have, and in fact I suspect that they don't have, the First Amendment right to block people's speech, to refuse to allow their speech on the platform, as opposed to criticizing their speech, which I do think they have the right to do.

 

Hon. Gregory Katsas:  All right. We have time for a couple of questions, so let me -- I've got a couple of questions here from the audience, so let's just get started.

 

      Philip, on the idea that the government is enlisting private parties to engage in censorship and that's a First Amendment problem: the traditional view draws a line between state action and private action. When the government does it, it's constitutionally problematic, but when private parties do it, it's constitutionally protected, and that's what Justice Kavanaugh said in Halleck. That's a longstanding traditional view.

 

      So how far can we push this idea that delegation to private parties to restrict speech becomes a First Amendment violation? Is this really some narrow principle because of monopoly power, or what are the limiting principles that prevent the complete collapse of the distinction between private and public?

 

Prof. Philip Hamburger:  I would heartily endorse the distinction, and I'm actually making no attempt to collapse it. The point is not that Facebook, when it acts with the protection of 230, is violating the Constitution or the First Amendment and so forth. That's not what this is about. It's that the act of Congress that licenses one party to silence another is attempting to secure a sort of censorship by privileging private parties to do something for the government.

 

      Remember, I used the takings example of the bulldozer. The government cannot simply protect itself from the Constitution by hiring private contractors to accomplish what it otherwise could not. And the history of the First Amendment, of course, supports this, because the primary example was of this use of private agents, as it were, to accomplish the government's ends. The point is not that private parties themselves violate the First Amendment. It's really a much more modest argument. It's about the government action.

 

      And notice a Southern sheriff in 1950, giving a little hint to one of his neighbors: "If you go out and beat up somebody, I just won't see it. I'll just be looking the other direction." It's not the individual who actually beat somebody up that has violated the Constitution, but the sheriff has. And so, when asked how to distinguish private from public and where privatization fits, I say the government cannot protect itself from the Constitution by simply hiring private actors. And I think that's true across the board. That's not just the First Amendment.

 

Hon. Gregory Katsas:  All right. Mary Anne, on (c)(1), one of your articles has a very compelling and horrifying list of things that now happen on Facebook, ranging from murders to suicides to rapes, all sorts of really, really horrible stuff. All of that is obviously constitutionally proscribable.

 

      What would you think of a line on the (c)(1) side which says that the platform can be held liable if and only if the underlying speech that it's sponsoring is constitutionally proscribable? Does that go too far, because it's very hard for Facebook to monitor trillions of posts, or does it not go far enough, or how would you come out on that?

 

Prof. Mary Anne Franks:  I think it's an intriguing suggestion, and it gets at one of the things I have suggested as a concrete fix for (c)(1), and actually for the entire operative clause of Section 230, which is to restrict it explicitly to speech. One of the things that is very concerning about what is happening with the immunity online is that it is being invoked for all kinds of things that are debatably speech, or in many cases even clearly not speech, for instance, the facilitation of firearms sales. That possibly is speech. It might not be. It's the kind of thing that we would, I think, hope that courts would work out.

 

      So what I think I would say about this, though, is, as you were hinting, that the problem with the second part, limiting liability to what is constitutionally proscribable, is that that is not a black and white issue. That's the kind of thing that does get fought out in the courts, and it should be. The reason why (c)(1) is so troubling is that it takes that out of the courts. It short-circuits the entire discussion about what is speech versus conduct, and what is protected speech versus unprotected speech. Those are contentious questions. They always have been. They're made even more contentious by the advent of technology.

 

      And we need to be doing more creative work to actually figure out what companies can be responsible for or not. When we give them blanket immunity and let them just get out of jail free for anything they do, we are stopping that body of law from developing, and we're not giving them any incentive to think about how they could design their platforms differently or act differently. And so the way to get them to figure out the best practices that will actually mitigate harm and actually ensure more free speech for more people would be to make them absorb the costs of their activities. It would be to get them to internalize the consequences of their choices to leave things up.

 

Hon. Gregory Katsas:  All right. We're pretty much out of time. I'm going to ask one last question for all of you about the structure of 230, and any final thoughts you might have.

 

      And it's this. Last time, we talked a little bit about whether these platforms are more like a newspaper, which has a lot of editorial discretion, or more like the phone company, which is just a conduit for others' speech. To the extent you think that the companies are more like the phone company, it seems pretty natural that they would get the (c)(1) protection. They are under an obligation to carry all the speech, and therefore, the government says we won't hold you liable when you do that. To the extent they're more like a newspaper, you'd think it would seem natural that they get the (c)(2) protection. They get to make editorial judgments.

 

      To what extent is there a problem here that they get both? They get one protection that seems well-suited to newspapers but not common carriers, and a seemingly opposite one that seems well-suited to common carriers but not newspapers. And whoever wants to answer.

 

Prof. Eugene Volokh:  I can just briefly speak to that. I think you've captured exactly what's going on, and also the choice that Congress made. So (c)(1) basically provides immunity of the form that phone companies would normally have, and (c)(2) provides editorial discretion of the form that newspapers would normally have. Congress deliberately chose that. At the time that Section 230 was enacted, there were a couple of decisions that basically ended up more or less taking the view that if a platform takes a totally hands-off approach, then it would be immune from liability for libelous material or other material. But if a platform decides to exercise its editorial discretion to allow or disallow certain things, then it would be treated more like a newspaper and not be immune.

 

      And Congress's worry was that, as a consequence, platforms would take this hands-off approach, because that's the cheaper thing for them to do and it diminishes the risk of liability, and as a result, there would be all of this indecency. Remember, this is part of the Communications Decency Act. There would be all of this vulgarity and pornography and violence and such on them. And so Congress wanted to encourage -- and here I think Philip is exactly right that this was their desire, whether one thinks that makes it unconstitutional or not -- Congress wanted to encourage private platforms to engage in a kind of editing and censorship and constraint of material posted on them without requiring them to do so. So that was a deliberate congressional choice.

 

      And I think you're quite right. One question is does it make sense in retrospect? Should we be saying to platforms, look, choose. Either you're going to be kind of like the phone company, have the immunities of a phone company, but also have viewpoint neutrality obligations of a phone company, or you can be like a newspaper or a bookstore, perhaps, which has the power to decide we're going to carry certain things and not other things, but then at least once you're notified that certain material on your site is potentially tortious, then you're going to be liable for it. So that's the question. Congress deliberately decided to give this kind of best of both worlds to platforms, and query whether that makes sense now.

 

Hon. Gregory Katsas:  Mary Anne, want to either take a crack at that or give us some final thoughts?

 

Prof. Mary Anne Franks:  Just the final thought that reflection on the role social media plays in our lives is really important, but we have to be careful not to fall into the trap that social media companies themselves have created for us: that the internet is all of life, that social media is all media, that speech is only what happens online. If we are really aiming at a deeper structural analysis of why free speech is under attack, or if we think that it is, and how we can get more people to speak, we need to not fall into that ideology that tells us the internet is everything, and instead think more broadly about the dynamics and structures in our society that are leading to speech being held quite tightly in the hands of the powerful.

 

Hon. Gregory Katsas:  Philip, final thoughts?

 

Prof. Philip Hamburger:  Well, thank you. This has been a great pleasure. The broader disputes here, about how we should view our lives in relation to the internet and about the constitutionality of 230, are very interesting.

 

      I'd like to come back down to the mundane question we began with, namely textual interpretation. How should we interpret the statute? And I must say I found it very gratifying that we actually did not have any dispute about that. There was no dispute about the actual text and its interpretation, but rather about the surrounding questions, which are undoubtedly very important. But should this ever come before the courts, I think it's interesting that there is not really much dispute about the relatively narrow text and meaning of 230, at least of subsection (c)(2). And that's a gratifying starting point, because it reduces the range of controversy that we'll have to deal with at the constitutional level. So some good news, I think.

 

Hon. Gregory Katsas:  Once again, I want to thank the panelists for a terrific discussion. Alida, we've run a little bit over again, but I hope it was worth it.

 

Alida Cass:  Absolutely worth it. Thank you very much, and thanks very much to our panelists. On behalf of The Federalist Society, I want to thank our experts for the benefit of their time and expertise today, and I want to thank our audience for participating. We welcome listener feedback by email at [email protected]. Please join us next week for a discussion focusing on the common carrier regulatory question, Friday, June 25 at 1:00 p.m. Eastern. Thank you all for joining us today. We are adjourned. 

 

[Music]

 

Dean Reuter:  Thank you for listening to this episode of Teleforum, a podcast of The Federalist Society’s Practice Groups. For more information about The Federalist Society, the practice groups, and to become a Federalist Society member, please visit our website at fedsoc.org.