Section 230 has been understood to shield internet platforms from liability for content posted by users, and also to protect the platforms’ discretion in removing “objectionable” content.
But policy makers have recently taken a stronger interest in attempting to influence tech companies’ moderation policies. Some have argued the policies are too restrictive and unduly limit the scope of legitimate public debate in what has become something of a high-tech public square. Other policy makers have argued the platforms need to more aggressively target “hate speech,” online harassment, and other forms of objectionable content. And against that background, states are adopting and considering legislation to limit the scope of permissible content moderation to preclude viewpoint discrimination.
Some have suggested that §230 protection, in combination with political pressure, creates First Amendment state action problems for content moderation. Others argue that state efforts to protect the expressive interests of social media users would raise First Amendment concerns of their own, by effectively compelling speech by social media and tech platforms.
What are the First Amendment limits on federal and state efforts to influence platform decisions on excluding or moderating content?
- Eugene Volokh, Gary T. Schwartz Distinguished Professor of Law, UCLA School of Law
- Jed Rubenfeld, formerly Assistant United States Attorney, U.S. Representative at the Council of Europe, and professor at the Yale Law School
- Mary Anne Franks, Professor of Law and Dean's Distinguished Scholar, University of Miami School of Law
- Moderator: Hon. Gregory G. Katsas, Judge, United States Court of Appeals, District of Columbia Circuit
* * * * *
As always, the Federalist Society takes no position on particular legal or public policy issues; all expressions of opinion are those of the speaker.
Dean Reuter: Welcome to Teleforum, a podcast of The Federalist Society's practice groups. I’m Dean Reuter, Vice President, General Counsel, and Director of Practice Groups at The Federalist Society. For exclusive access to live recordings of practice group Teleforum calls, become a Federalist Society member today at fedsoc.org.
Evelyn Hildebrand: Welcome to The Federalist Society's virtual event. This afternoon, June 11th, the topic for discussion is Free Speech and Compelled Speech: First Amendment Challenges to a Marketplace of Ideas. My name is Evelyn Hildebrand, and I’m an Associate Director of Practice Groups at The Federalist Society.
As always, please note that all expressions of opinion are those of the experts on today's call.
A few technical notes before we begin. If you would like to ask a question of our panelists this afternoon, please enter your question into the chat or the Q&A feature at the bottom of your screen. You can enter those questions at any time.
I'll now introduce Alida Kass. Alida recently joined The Federalist Society as Vice President and Director of Strategic Initiatives. She's also the Director of The Federalist Society's newly launched Freedom of Thought Project. Alida will introduce this series and the Freedom of Thought Project itself.
Alida, the floor is yours.
Alida Kass: Thanks, Evelyn. And thanks, everybody, for joining us. I want to welcome you all to this showcase discussion series on free speech and social media. This is part of The Federalist Society's new initiative to address some of these emerging challenges to freedom of thought, conscience, and expression. We will be offering a variety of programming to consider these questions, but to get started, we're very pleased to have Judge Katsas moderate our inaugural Freedom of Thought discussion series.
Judge Katsas was appointed to the DC Circuit in December 2017. He graduated from Harvard Law School and served as a law clerk to Judge Edward Becker on the Third Circuit and to Justice Clarence Thomas on the Supreme Court. For 16 years, he practiced at Jones Day, where he specialized in appellate and complex civil litigation. He has also served as Assistant Attorney General for the Civil Division of the Justice Department, as Acting Associate Attorney General, and as Deputy Counsel to the President. Before joining the bench, Judge Katsas argued more than 75 appeals, including three cases in the U.S. Supreme Court. I’m now going to turn this over to Judge Katsas to frame our coming discussions and introduce our panelists. Judge Katsas.
Gregory Katsas: Thank you. So as Alida said, this panel is something of a rollout for the Freedom of Thought Project, which will explore what seems to be a growing trend to restrict and indeed punish not only controversial speech, but controversial speakers. Traditionally, speakers who expressed unpopular points of view could expect to have those views vigorously criticized, consistent with the conventional liberal view that the remedy for bad speech is good speech. But in today's legal and political culture, it seems that unpopular speakers can expect to face a lot worse. They may lose their jobs, whether in the public or private sector. They may be hounded out of universities, which traditionally have been bastions of tolerance. They may even lose access to essential services like banking and insurance.
This project will address that phenomenon in many different contexts. The Society is planning programs in areas ranging from corporate governance to legal ethics to donor privacy. But maybe nowhere is this concern more acute or more timely than in the area of social media.
Today's panel is the first of six showcase panels that will run virtually from today through mid-July. Not all of them are announced on the website yet, so please be on the lookout. Future panels will explore specific statutory and common law subjects, such as common carrier law, antitrust, and §230 of the Communications Decency Act. We're going to begin today a little more broadly, where one might expect any discussion of free speech issues to begin, and that is with a discussion of the First Amendment itself.
I'm by no means a First Amendment expert, but I thought I would just begin by noting one very interesting feature of the doctrine in this area that struck me as I was doing a little bit of background reading and that I hope might help frame some of the questions that the panelists will discuss in more detail, and that's this. Most First Amendment cases involve a conflict between one private speaker and one government regulator. Here, there are two private speakers, and sometimes there are two government regulators.
So, who are the speakers? Well, one group of speakers are the social media platforms themselves: Facebook, Twitter, Google, etc. They claim to be just like newspapers, which have a very strong degree of First Amendment protection when they exercise editorial judgment in deciding what speech to present in the physical or electronic platform that they control. We also have individual speakers, flesh and blood people who might want to express positions and viewpoints that Facebook and Twitter don't like. And they might say that they are left with no alternative place to speak in what one might think of as a virtual public forum if Facebook and Twitter have free rein to prohibit not only the unpopular speech that they want to promulgate but, in some cases, even the unpopular speaker entirely.
On the government's side, of course, we have the federal government and the states, and it seems like those levels of government may soon be tugging in opposite directions. At the federal level, §230 seems to provide very strong protection for the platforms: it says that the platform, the Facebook or Google, can't be held liable for blocking the private speech of third parties, even if the First Amendment would prohibit the government from restricting that very same speech. Now, we have a panel next week that's going to explore much more carefully exactly what §230 does or doesn't do, but for purposes of today, I'm going to assume and hope that what I just said is a good enough approximation, at least to tee up the First Amendment issues that our speakers are going to discuss.
So at the state level, we may be seeing a move in the opposite direction, to protect not the platforms but the individual speakers. Just three days ago, to take an example, the Ohio attorney general filed a lawsuit seeking to have Google declared a public utility that would be required to accept all comers on reasonable and non-discriminatory terms. If we have states like Ohio seeking to protect individual speakers, and §230, which is a federal statute, preempting that protection, that tees up the question whether the federal law, §230, instead of protecting the rights of the platforms, might better be thought of as violating the rights of the individual speakers.
We have a great panel to explore these issues. I'm going to do a traditional introduction of each of them, but I'll just tell you right now the takeaway point is that they are all nationally renowned First Amendment experts, and they have all done groundbreaking work in the specific area of how the First Amendment applies in the social media context. So on to the panelists.
Eugene Volokh is the Gary T. Schwartz Distinguished Professor of Law at the UCLA School of Law. After graduating from UCLA and UCLA Law School, he served as a law clerk for Judge Alex Kozinski on the U.S. Court of Appeals for the Ninth Circuit and for Justice Sandra Day O'Connor on the United States Supreme Court. He teaches First Amendment law, runs a First Amendment amicus brief clinic, and has written a First Amendment textbook, among his many other publications. And of course, he is the founder of the Volokh Conspiracy, which is a leading legal blog.
Jed Rubenfeld has written two books and many articles on constitutional law, and he is a noted expert on the First Amendment and on privacy law. After graduating from Princeton University and Harvard Law School, he clerked for Judge Joseph Sneed on the Ninth Circuit, and since then he has served as an Associate at Wachtell Lipton, as an Assistant U.S. Attorney in the Southern District of New York, as the United States Representative at the Council of Europe, and as a Professor at the Yale Law School.
Mary Anne Franks is a Professor of Law and the Michael R. Klein Distinguished Scholar Chair at the University of Miami School of Law. She is a graduate of Loyola University, Oxford University where she studied as a Rhodes Scholar, and Harvard Law School. She teaches on the First Amendment and on law and technology among other subjects. She, too, has written an acclaimed book on the First Amendment, and she is a nationally recognized expert on technology and civil rights. She's also the President of the Cyber Civil Rights Initiative, which is an organization dedicated to combating online abuse and discrimination.
We'll begin with opening statements from our panelists of 8 to 10 minutes. So, without further ado, Eugene, you're up.
Eugene Volokh: Thank you. Thank you so much. I very much appreciate being on this panel, especially with such illustrious fellow panelists and moderator. I'm going to briefly talk about, I think, the big picture of the question whether social media should be treated more like common carriers; if so, which aspects of their operation can be treated this way; and relatedly, whether there are any First Amendment barriers, as our moderator alluded to, to imposing these kinds of requirements on them.
So, if there's one analogy that you can take away from this, it's the phone company, right. The phone company, whether it's a landline phone company or the famously competitive cell phone companies, can't say, "Oh, we don't like your speech. We don't want to carry you on our system." And that's true even without regard to privacy questions. Obviously, they shouldn't be listening in on your conversations, but let's say it's well known that you're using some phone number on their system in order to promote your ideas. It's on your web page: call us to join the group. It could be a far left group, a far right group, a far anything else group. Call this number. And let's say the phone company says, "We just don't want evil speech like this on our system," anti-American speech, racist speech, Antifa speech, whatever else.
The answer is too bad. You're a common carrier. You're a private company. You're not a state actor. It's not the First Amendment that requires you to accept, to carry all conversations. It's the federal law, or it could be, in principle, state law. But the First Amendment doesn't give you any right to exemption from that. The First Amendment doesn't allow you to say, "Oh, we refuse to carry these speakers," end of story. That's not something the First Amendment protects. So that's the key idea. I'm going to elaborate some on it. I don't want to claim that this is by any means a perfect analogy. It's an analogy which means it's not identity. But I think it is a pretty powerful analogy as to certain features.
I also want to start with a quote from Justice Stevens in Citizens United. He was in the dissent in Citizens United, and actually I agree with the majority that corporations, including of course social media platforms, should have free speech rights to talk about candidates, to talk about issues. But while I’m not persuaded by Justice Stevens' argument as to corporate speech, remember he was arguing that corporations should not be allowed to speak or at least don't have a First Amendment right to speak, I do think his argument works well with regard to corporate attempts to restrict people's speech.
So, let's look a little more closely at the argument. It's a long argument, but I tried to come up with a relatively short quote. "A legislature might conclude" -- and it sounds like he's endorsing that conclusion -- "that unregulated general treasury expenditures will give corporations unfair influence in the electoral process and distort public debate. Because of the speech of corporations, the opinions of real people may be marginalized, and corporate expenditure restrictions are meant to ensure that the competition in the political arena is truly competition of ideas. Corporate domination of electioneering can also undermine confidence in democracy, and it can threaten politicians into changing their views, or at least muting certain views, for fear that corporations" -- and this is the Justice's argument -- "will speak out against them and spend a lot of money and get them defeated."
I think, as I said, this applies even more to corporate restrictions on individual speech. Justice Stevens' basic concern is that economic power can, if unregulated, be leveraged into political power, and that that's something the legal system might reasonably want to constrain. Of course, it tries to constrain that through bans on bribery, for example; that's straight corruption. But even setting aside outright corruption, there may be reasons why we want to make sure that the mega corporations don't have undue influence over public debate. And again, I think that they should be able to have what influence they can through their speech. But when they're trying to influence public debate by blocking the speech of others -- not of course completely outlawing them, they don't have the power to do that, but blocking access to an extremely important medium of communication -- in an environment which is highly competitive, so that swaying one percent or certainly five percent of the votes could make a big difference, that's something that we should rightly worry about. That's Justice Stevens' position. And I do think that, as to social media platforms, it's a very serious concern, especially since their power has gotten so vast in this particular media area.
If you look at the actual spending by corporations, as best we can tell, maybe 5 to 10 percent of all election spending is by corporations, and a comparable amount by [inaudible 00:17:42]. So that's something -- Justice Stevens might say that's enough -- to create concern about distortion of public debate. But it's relatively modest in the big picture. Whereas when it comes to social media platforms, a few companies, basically Twitter, but especially Facebook, have extraordinary market share and extraordinary power.
Now, I think it's important to realize, when we're talking about platforms, there's a spectrum. At one end are newspapers or magazines. There we expect them to select what's included and what's excluded. In fact, they'd be useless to us if they didn't. Imagine a newspaper that felt obligated to publish literally everything. Let's say, for example, it had unlimited space on its website, so it published everything submitted to it, in no particular order: silly stories, unfounded stories, pointless stories, everything. That would be useless. And newspapers have a First Amendment right to pick and choose to make sure that they provide things that, in their judgment, are useful.
Bookstores are comparable, in some respects. They are a tool for fighting information overload. There are many more books out there than people can effectively sort through. Bookstores help, and that's why we have them. [Inaudible 00:19:03] ideological bookstores. We have feminist bookstores. We have Christian bookstores. We have free-market bookstores. They have the right to pick and choose.
Likewise, I'd say the same -- and I have said the same actually in my capacity as a lawyer, but I endorse it as an academic -- about Google as a provider of search. You wouldn't want to have content-neutral search results. You probably wouldn't even want to have viewpoint-neutral ones. Likewise, I do think that social media platforms have to have broad First Amendment rights to choose what pages they might recommend. So, if there's a "pages you might like" feature, then they would have the right to select what they think you might like or should like.
An interesting question is what about Facebook, YouTube, and Twitter managing conversations, that is, comments by outsiders on users' pages or Tweets. That's an interesting question. I'm not really sure what the right answer is. Maybe we'll have some time to talk about that.
But then, let's get to Facebook, YouTube, Twitter as providers of hosting for users to reach willing viewers. This is what I call their hosting function, like my having a Twitter feed; when people subscribe to my Twitter feed, they're seeing what's on my Twitter feed. That begins to me to look a lot more like phone companies, or like UPS and FedEx, which are also common carriers and which also can't say, "We refuse to deliver these books," like communist books. Again, they're common carriers. And the postal service likewise has to carry all viewpoints. Now, the postal service of course is a government actor; it's covered by the First Amendment. But I think it's not an accident that we treat phone companies and the postal service's competitors as comparable to the postal service in that respect, not under the First Amendment but, again, under federal law. So, my suggestion is that this hosting function of the social media platforms is soundly assimilated to the way we treat phone companies and shippers like UPS and FedEx.
Now what about the First Amendment? Isn't there a First Amendment right of platforms to refuse to [inaudible 00:20:56]? Well, here's another quote. This is from Justice Breyer: "Requiring someone to host another person's speech is often a perfectly legitimate thing for the government to do" notwithstanding First Amendment rights. This is from Justice Breyer, joined by Justices Ginsburg and Sotomayor in dissent in a recent case, USAID v. AOSI, the second [inaudible 00:21:14] of that case. But the majority didn't disagree with them on that. In fact, the majority took an even narrower view of the First Amendment compelled speech right than he did. And he cited a couple of cases which I think are key precedents: FAIR v. Rumsfeld, which held that the government may require law schools to host speech from military recruiters, even if they sharply dislike it, and PruneYard Shopping Center v. Robins, which held that the government may require the owner of a private shopping mall to host speech from politically-minded pamphleteers. This is Justice Breyer's summary of these cases, but I think it's quite accurate. Both, by the way, were unanimous, at least as to the bottom line.
So, if you look at the cases, again, they recognize that certain kinds of platforms, like newspapers, they have to have editorial discretion because they're producing this coherent speech product that is a way of fighting in considerable measure information overload and provides the views of the newspaper, even if it's selected from a bunch of different views.
Likewise, Hurley said the same about parade organizers. On the other hand, PruneYard and Rumsfeld say that when you're talking about a private property owner, they can be required to allow speech on their property, at least in certain situations. And Turner Broadcasting v. FCC said the same thing about a cable system. So, it seems to me that in this kind of situation, where, analogous to Turner, the programming offered on various pages on Twitter and Facebook consists of individual unrelated segments that happen to be delivered together for individual selection by members of the audience, there's no real First Amendment problem here.
Let me just close with one last thing. Virtually every problem can be made worse by unsound government intervention. It may very well be that, even though there may be a problem here with media platform power, the cure will be worse than the disease. I'm not at all certain that this kind of approach would be sound as a matter of policy, but I do think it's something we need to consider, and something that's constitutional.
Gregory Katsas: Thank you. Professor Rubenfeld.
Jed Rubenfeld: Thank you, Judge. And thanks to The Federalist Society for hosting this and inviting me. It's really a privilege and a pleasure to be here. I mean that. So, I'm very happy to be talking to everybody today.
I thought Eugene's presentation was great. I'm very interested in the common carrier thought as well, and I thought he did a good job of focusing on some of the worries or dangers that exist today as a result of the concentration of power over public discourse in the hands of a few huge, behemoth, big tech companies.
It's worth pausing for a moment and realizing just how unprecedented that is. Facebook and Twitter exercise a power over the content of public discourse unprecedented in American history, greater, perhaps, than any entity in the world, public or private, with the possible exception of the government of China. This is worth worrying about from a free speech point of view. And the assumption of many people is that it's not immediately or directly a constitutional question because Facebook and Twitter and Google, they're not state actors. They're private companies. That was, I think, an assumption in Eugene's great presentation as well.
I want to challenge that assumption. So, I want to spend my few minutes talking about the possibility that these big tech, social media platforms are already state actors for constitutional purposes. Now, here's what I'm not saying. I'm not saying that they're state actors because they perform a public function or because they are public forums. I think that argument's mistaken, and it's been pretty decisively rejected by the Supreme Court. So that's not what I'm saying.
What am I saying? Well, let me give you a hypothetical. Imagine that a state legislature wanted to shut down all the abortion clinics in the state, and there's nothing specific to the abortion example here. Imagine another state legislature that wants to take everybody's guns away from their homes. What do these state legislatures do? They know they can't do these things directly because it would be unconstitutional. So they have a clever idea. "Oh, I know what we'll do," the first state legislature says. "We'll pass the reproductive decency act, which will immunize anybody who goes in front of abortion clinics and barricades them and doesn't let anybody in or out. So that'll stop everybody from getting an abortion at these abortion clinics, but it's private actors doing it, so no problem, right? We'll immunize them." And the other state legislature passes the firearms decency act, and it immunizes anybody who goes into people's homes and takes their guns. And by immunizes, what I mean is it makes those people immune from liability. It prevents anybody from suing them, and they can't be prosecuted either, under state trespass laws or whatever. So they're immune.
Now, does anybody think a legislature can circumvent constitutional rights through that very simple device of getting private parties to do the work for them by immunizing them? I think that's a serious problem. And I think we haven't really thought about that problem as deeply as we should because, as I'm sure everybody knows, the Communications Decency Act passed by Congress does just that. It immunizes internet companies if they suppress "objectionable" speech, even if that speech is constitutionally protected. And again, I’m quoting from the statute. It was the intent and purpose of that statute to encourage internet companies to suppress constitutionally protected speech, and to do so by offering them immunity. That's a very serious problem that has not been thought through nearly as well as it should be.
If folks in my first example started barricading the clinics, is there anybody who really thinks that the courts wouldn't find state action in that case? Those people are now immune under a state statute. If people started breaking into folks' homes and taking their guns in the second case, is there anybody who thinks that there wouldn't be a state action problem there, if they're doing it under this immunity statute? Well, if there's anybody who's not sure about that, let me add another factor. The legislature starts hauling big companies into hearings over and over for years and says, "We passed this immunity statute. You could be doing this. You're not doing as much of it as we want. We demand that you do it, and not only do we demand that you do it, but if you don't do it, we are going to start seriously thinking about taking away some of your privileges. You know you have these tax breaks. Well, we're going to have to consider taking those away from you." And let's say they're really, really big tax breaks that could, you know, bankrupt the company if you take them away. "And also, we might have to start thinking about whether you're engaging in antitrust violations." And they say all this explicitly, pressuring these private companies to do more of the conduct that was immunized. And then the companies start doing it. Is there anybody who thinks that that wouldn't be a big state action problem? I think it would be.
And that, of course, is true as well in the current situation. That is, Congress has hauled in the CEOs of Facebook and Twitter and Google over and over, for hearing after hearing, demanding that they do more to censor objectionable speech, whether it's hate speech or extremist speech or so-called election misinformation or so-called COVID misinformation. I think we have a very serious problem of private parties being induced to do what the government can't itself do, constitutionally. And I'll just note to everybody that the Supreme Court has been very clear about this. It is axiomatic, the Supreme Court said, gosh, almost 50 years ago now in the Norwood case, that the government may not induce, encourage, or promote private parties to accomplish what the government itself cannot accomplish. There have been cases holding that when the government violates that axiomatic rule, state action is present in the private party's conduct. And it's just like we all forgot about this principle and this doctrine, and we should not forget. We should be very, very concerned about this.
And let me add one more thing to these two factors. We've got the immunity, and we've got pressure, almost coercive pressure, coming from Congress to suppress constitutionally protected speech online. But there's yet another factor. What if the White House or federal agencies reached out to social media companies and said, "You know what? We can help you suppress speech that's really bad, like COVID misinformation. We're going to tell you what information is misinformation, and we want you to take that down." And what if the social media companies said, "Great, we can help with that. We're happy to help with that. You tell us what's false. We'll take it down." At that point, you don't have just immunity and coercive pressure, you have joint activity. You have what the Supreme Court calls willful participation in joint activity, which has long been well-established as another test for state action. If you have willful participation in joint activity, that's state action.
So what I'm saying is that under existing doctrine, there is already -- oh, I should add, by the way, that the White House and the CDC have done just what I was referring to a moment ago. That wasn't hypothetical at all. By their own public statements, both the White House and the CDC have been reaching out and partnering -- that's the word partnering -- or directly engaging with social media partners to tell them what is disinformation with respect to COVID and what therefore should be taken down. So that wasn't hypothetical. That's really what's happening.
So if you put these three things together -- you have government collaboration or joint activity; you have an immunity statute; and you have coercive pressure -- it's very hard for me to understand how courts could not hold state action to exist in the presence of those three factors. I ask anybody to imagine if those three factors were present with respect to any other constitutional right, abortion as I was saying, or guns -- say the government wants to get into people's emails and expose them, but it can't because that'd be unconstitutional. So, it immunizes hackers who break in, and then it starts putting pressure on them and threatening them with adverse legal consequences if they don't break in. And then it says, "You know what, we'll help you. We'll tell you which email accounts are worth breaking into." If you put those three factors together, it's so difficult for me to think that there's a court in this country that wouldn't find state action in those circumstances.
So full disclosure, I am representing folks who are making this argument in court, but also, full disclosure in fairness to myself, I'd been making the argument before I started representing people who are making it. I just wanted to make sure that's known. But yes, I think that we have forgotten our state action doctrine. And if courts are vigilant in applying existing state action doctrine to this very, very serious problem of private, concentrated power over the content of public discourse, I think courts should be thinking very seriously about, and indeed finding, that state action already exists. This argument is limited to the major big tech players. It doesn't apply to mom and pop websites. But I’m going to stop there for now, and that's all from me for now. Thanks.
Gregory Katsas: Professor Franks.
Mary Anne Franks: Thanks very much. One of the things that is most interesting about contemporary conversations on free speech and social media is that there are so many people, across a fairly wide spectrum, who think they agree that there is something wrong -- that whatever we may have thought was true before about free speech, whatever some people may have liked about social media, maybe even §230, now seems like a good time to revisit. But what's really important to recognize is that the critiques here are coming from very different places. And I want to articulate a different story than the one that I think is fairly dominant here.
This is a story about what is happening to free speech and social media that is not about Twitter's choices to block certain things, but rather about the choices of these major social media companies -- choices that are fully insulated from most consequences -- to prioritize engagement and to take a strangely First Amendment-lite kind of approach. This is the idea of saying: we are places where people can speak freely, and so we have some kind of free speech notion that is not necessarily a doctrinal one, and that many people, I think, believed quite strongly was true. That is, social media is where people really get to go speak. It is the unfettered access. It is the agora. It is the ideal. And so much of our cyber-idealistic rhetoric was exactly about this, about how we were going to realize these notions of actual free speech.
What has been part of the story all that time, though less talked about, was the story coming from others who said: actually, this notion that free speech exists offline, the notion that we have ever truly had a free speech ideal in any kind of sense -- that is, an equality sense -- is a fantasy. And the concern is that the way social media and the internet are operating, we're going to move even farther away from that ideal. The people who already struggle to speak are going to have to struggle even harder to be heard because of the way engagement works online, especially if that engagement is subject only to the proclivities of private corporations.
And let me be more specific about what those harms look like, because this is a moment where people talk about harms and suppression and silencing and censorship, and it becomes quite clear that people mean very different things by those terms. So I will be very concrete. The kind of concern here starts from a premise that free speech, if it's valuable in any sense, is free speech especially for those who are not powerful. It is especially for those who dissent. It is especially for those who have -- I wouldn't say unpopular ideas, but ideas that are not in power. That is what is most important. If the people in a society who are the most disadvantaged, the most exploited, the most burdened can't speak freely about those things, then we do not in fact have a free speech culture.
So let me be more specific. If people have to worry that the expressions of ideas that they wish to communicate will result in them being threatened with death or with sexual assault or with publication of their private information, then those people can't speak freely. If people wish to speak and they know that they will be met with some kind of violence, they cannot speak freely. If people have no leisure time to be able to spend to enter a conversation because they are working three jobs and they don't have a moment to spend on developing their ideas in the public forum, there's no real free speech there either.
When we talk about problems with social media, and we look at how much more some of these problems have started to loom over those who have [background noise 00:37:57] in speaking—apologies for this—we look at how those problems have gotten worse. People who want to speak out will be threatened. People who want to make observations about unpopular political ideas may be subjected to violent threats. And I'm speaking here of things like non-consensual pornography. I'm thinking about harassment. I'm thinking about propaganda. I'm thinking about insurrectionist rhetoric. I'm thinking about people organizing on social media platforms to encourage each other to take extremist positions and to respond to dissent -- to these people expressing themselves -- with violence and objectification.
That problem calls for a very different solution than what I think is very much in vogue in certain circles. That problem, if we want to speak in §230 terms, is not one that requires attacking (c)(2), the part that says you have the right to take certain things down. It is a problem with (c)(1) of §230, which says you have basic immunity for everything you leave up. Once you take away any type of responsibility or accountability on the part of these platforms to actually care about the violence and the harassment taking place on their platforms, you really do exacerbate what was already a very perilous free speech situation -- you make it worse. We have to have a more sophisticated sense of free speech that understands that when some people speak more, other people speak less. When platforms have no obligations except their own business interests to guide those decisions, that is a real problem.
So that kind of problem is one that could be remedied, or at least addressed, by taking stock of where §230 has led us, by taking stock of where social media companies have decided to draw their incentives, by taking stock of the fact that when you give a corporation the opportunity to manage its own problems in any way it sees fit, it's not going to manage them in any way that actually speaks to the exploited and the disadvantaged and the unheard. The recommendation would be to say that the problem we have in social media is not that we used to have a golden age of free speech and it's been overturned. It's that there never was a golden age of free speech. The people who can't really speak -- and I will be clear about who I think some of those categories are: women, minorities, religious minorities, and others -- those are the ones who have never really gotten to participate in First Amendment privileges the way everyone else has.
Now how much does that interact with the actual doctrine of the First Amendment? Well, there's an interesting overlap, right, that for such a long time, culturally speaking, the response to the articulations of those groups, disadvantaged groups, exploited groups who are saying we don't actually get to speak freely, was suck it up. All the First Amendment gives you is a negative right for the government not to put you in prison for something that you say. The fact that you don't feel comfortable speaking in your classroom or at your workplace or on social media, well, that's just something that you're going to have to deal with because it's not a matter for the government.
What's been interesting to observe, as a cultural matter, is that, largely speaking, that kind of response was very much coming from people who would identify as conservative or maybe libertarian: hey, you don't need anybody's help here. You're not being thrown in prison for what you say. The fact that your view is unpopular or the fact that you feel afraid is really not our issue. What has been really interesting is that the conservative, more libertarian side has now said: actually, we do think we want more intervention. We want a thicker sense of state action. We want more government intervention if people don't feel free to speak. And here the freedom to speak is not a matter of being worried about threats -- we're not worried about our lives being destroyed in a very concrete sense -- but rather we're worried about opprobrium. We're worried about criticism. I think that's a really telling misunderstanding of what free speech is, at least in the doctrinal sense. There is no affirmative right to speak and be welcomed, right. There is an obligation of the government to restrain itself when it doesn't like what you say, but there is no affirmative sense of: you get to say what you want to say wherever you want to say it, and people can't get mad at you about it.
So what seems so ironic at this moment is that not only is that idea actually getting real traction here, it's not getting traction on the basis of what the actual objective harms are, but rather in the sense of: we are dominant -- we, as in a certain ideology, are actually quite dominant on social media. We're quite dominant in the regular media, but we're not fully dominant, and we would actually like to ensure that the government makes us dominant all the time in all places, so let's change the doctrine so that we can do that.
I'll just take, as a maybe too simplistic example, a snapshot of where we are: the top trending topics or the most engaged topics or figures on Facebook -- there's a very convenient account that tracks these things for us -- will tell you that 9 out of the 10 are almost always conservative, very far-right voices. So it's so interesting to see that it's those same voices that are saying: but we don't get to speak. We're being silenced. We're being censored. We're being oppressed. We're going to need to change doctrine. We're going to need to change §230 and make sure that it's not 9 out of 10, it's 10 out of 10.
I think there's a lot to unpack there, and a lot going on in terms of the motivations for why we're now reevaluating what we mean by free speech and §230 and social media. I think it's very productive to have a conversation where we take stock of those concepts that have been held dear for so long. I would just suggest that the conversation should be, and could be, happening in a far different direction and with a far greater sense of what could be imagined differently than it currently is.
Gregory Katsas: All right. Thank you all and thank you for keeping us more or less on time. That gives me a chance to invite all of you to take maybe two minutes or two or three minutes each to respond to what you heard from your fellow panelists. Professor -- go ahead, Eugene.
Eugene Volokh: Yes. I appreciate much of what Mary Anne is saying, and I do think, for example, threats of violence are very serious interferences with people's ability to participate in public debate and should be prosecuted. I'm not sure that platforms blocking is going to be that helpful, but at the very least, that speech is not protected.
What interested me was how quickly things segued from threats of violence or non-consensual porn—which incidentally I've also argued is constitutionally unprotected—to propaganda, insurrectionist rhetoric, encouraging extremist positions, encouraging objectification. Now we're turning to calls not just to ban unprotected speech, or to have social media block speech that falls within a First Amendment exception. We're turning not just to viewpoint-neutral calls to make sure that social media doesn't block certain accounts based on viewpoint -- calls that, as Mary Anne points out, come from both the left and the right. Now we're talking about the desire to have social media -- presumably, if the law changes in this way, to encourage social media; that's my understanding of Mary Anne's argument -- also go after this propaganda, insurrectionist rhetoric, whatever that might mean, extremist positions being encouraged, and the like. So I wonder how that would play out.
Tangibly, what kind of speech would platforms, under Mary Anne's proposal, be required or pressured to ban on the grounds that it's necessary in order to protect others? I'll give you one very concrete example. Right now, I'm fighting in the Ohio supreme court—I actually moved to intervene myself as an individual—a court order ordering defendants not to post the name of a police officer, because the police officer was accused of allegedly racist conduct. I'm not sure that the accusation was right. But the police officer persuaded the judge that in the current climate he might be targeted for violence if his name is publicized. And the judge actually issued an order forbidding the publication of that name. Now, I think that's not right, even if there is a concern about violence. And I suppose some might say, well, we should be protecting women, minorities, religious minorities, racial minorities -- those are the lists that were given -- but we shouldn't be protecting police officers because they're dominant. They're powerful. I doubt First Amendment law will end up reaching that result.
So one question is just what exactly would be banned, or what would platforms be legally required, on pain of liability, to prohibit by way, again, of this insurrectionist rhetoric, propaganda, extremist positions, and the like? So that's my question. I also had a question for Jed, but I've used up my two minutes, so I'm going to [crosstalk 00:47:12].
Gregory Katsas: Professor Rubenfeld.
Jed Rubenfeld: Oh, I'm not going to jump into that. I'd love to hear Professor Franks' answer. I thought her presentation was great. I have questions myself, but I'd love to hear her answer to Eugene Volokh's question, so I'll just sit back.
Eugene Volokh: My apologies. I thought we were on a first-name basis. If not, I apologize for the undue familiarity, but we know each other, so maybe we can call each other by our first names.
Gregory Katsas: All right, in that spirit, Mary Anne.
Mary Anne Franks: Thanks very much, and thank you for that question, Eugene. And my answer is: we don't know, and I think we should find out. In other words, what I'm advocating for here is not for the government to have some sort of regime that says these are the kinds of things you have to take down. What I'm actually arguing against is what I have elsewhere expressed as a concern about the §230(c)(1) provision basically becoming standard ground for the internet -- in other words, giving platforms a kind of immunity that is blanket and also incredibly preemptive. That is what should go away. We need to let these questions play out. We need answers to them, and platforms should not be granted immunity for their indifference to certain types of content.
What that means, of course, is that arguing that there shouldn't be immunity here is not the same thing as arguing for liability here. It is a question of whether or not that should play out in the courts. One of the most frustrating things, I think, from a free speech and First Amendment perspective about what §230 does is that it has short-circuited these really interesting conversations. Prior to the internet, you would have real conversations in the courts about whether or not something even qualified as speech, right. We may think it's obvious now that wearing a black armband is speech, but that's not something the court really knew was speech until it had to wrestle with that question. So many questions like that have become prominent in the age of the internet, because so much of what happens online isn't just speech. So that's a conversation that should be happening.
Working out what kind of liability any particular platform has with regard to any particular harm that develops -- that is an interesting and open question. And I don't know what would happen in most cases if that question were allowed to play out, but that's what I think we should be figuring out. Instead, we've gotten stuck in the analogue era of the First Amendment, when in fact we could have been having much more interesting conversations: what does it mean, for instance, to analogize premises liability to the online arena; what does it mean to say Title VII and Title IX obligations have to play out in a certain way when it comes to the social media environment. Those are the kinds of questions it's quite frustrating we aren't able to ask, because §230 prevents discussion of them altogether.
Gregory Katsas: Anybody else?
Jed Rubenfeld: The one thing I'd say is -- and I'm not sure Mary Anne said this in her talk, so this is not really a criticism -- I'm against any group-based lens through which to theorize First Amendment rights. And while protecting disempowered and marginalized voices is hugely important to the First Amendment, I think it would be hopelessly and utterly in violation of deep First Amendment values and principles if we carved that out into a special set of rights for disempowered groups and took the view that groups deemed to be more in power have lesser First Amendment rights. I did not hear Mary Anne say that, but some of her remarks might be taken in that direction, and I would be against that more or less completely.
Eugene Volokh: Can I actually ask -- if I have a moment, I wanted to ask Jed a question. I found your argument about state action very powerful. But one possible distinction that at least comes to my mind—I'm not sure this is right—is that the examples you gave, of a law authorizing people to block abortion clinic entrances or break into people's homes in order to take their guns, were powerful in large part because they were authorizing people to do things that normally, throughout American legal history, would have been clearly illegal. So they're authorizing people to go way beyond the baseline of what people normally do. What §230(c)(2) does, rightly or wrongly, is authorize platforms to do something that property owners have traditionally long done, which is control what is allowed and not allowed on their property.
Now, to be sure, I think there are good arguments that, as a matter of federal statute, there should be some requirements that platforms act as common carriers. But you're saying it's a constitutional matter, and I wonder whether you think this makes a difference: if all §230(c)(2) does is essentially strengthen platforms' powers over things they have historically long had power over, does that make a difference? Because that does seem a little different from the examples you gave.
Jed Rubenfeld: That's a great question, Eugene. Thank you. But no, it doesn't make a difference to me. You may be minimizing the extent to which platforms would be in big trouble without §230(c)(1), the immunity provision. You make it sound like they could restrict and take down content without any problems or liability even without §230(c)(2), as if they wouldn't necessarily be violating any laws. That's not true. The whole reason the basic immunity provision, §230(c)(1), was enacted, as you know, Eugene, is that there was case law out there, available for folks to make the argument that by taking some stuff down, platforms became liable for the stuff that was left up. If they are considered publishers or distributors, they could easily end up liable for what they leave up when they take stuff down. That's exactly what a court had held. That's why §230(c)(1) was enacted.
So they could run into huge liability for taking stuff down. That was the whole reason they enacted §230(c)(2). They are doing something unlawful, or very arguably unlawful, when they take stuff down, and §230(c)(2) says no, they're not. They can take it down freely -- with abandon, with impunity, with immunity. To me, it's essentially the same situation as in the hypotheticals I posed.
Gregory Katsas: Let me jump in here. One striking feature of this panel is that there doesn't seem to be a huge amount of sympathy for the First Amendment rights of the platforms. So Eugene, let me just press you a little bit on your very helpful framing of the spectrum of platforms. You've got newspapers clearly at one extreme, one edge of the spectrum, where editorial discretion prevails. Phone companies and telegraphs are at the opposite end of the spectrum. They're common carriers. They have to take all comers. One intermediate point that might be analogous to the platforms is the cable companies in Turner. Now, they lost on the bottom line, but for purposes of your spectrum, they were classified like the newspapers, not like the phone company. And so the government won only because the must-carry requirements survived heightened scrutiny, which typically is a heavy lift for the government. So why shouldn't we think of these platforms the same way?
Eugene Volokh: Well, it seems to me that they are pretty similar to the cable companies in some respects, but at the same time, I think the language and reasoning of the Court in Turner, as well as its bottom line, were pretty favorable to certain kinds of regulations. One important point -- and this is the quote I gave at the end, from Turner -- is that the Court said, look, a parade or a newspaper is a coherent speech product. It's something that people can watch beginning to end or read cover to cover, and as a result, that's what gives those kinds of platforms, the parade organizers and the newspapers, the First Amendment right to say: we get to pick and choose what's included.
But cable systems consist of individual, unrelated segments -- the channels -- that happen to be transmitted together for individual selection by members of the audience. The audience chooses: oh, I want to watch channel 17. So I think, again, that's very similar to the audience choosing to say: I'm going to go to or follow realDonaldTrump. Now, to be sure, the Court did apply heightened scrutiny there, but it was intermediate scrutiny, which is a relatively forgiving level of scrutiny. It recognized there was an important government interest in promoting a diversity of views; that was the rationale for the restriction. And likewise, I think there is a similar interest here. It recognized that the cable systems had something of a bottleneck power. Now, there, to be sure, that was in part because of government monopolies, but still, network effects are pretty similar in many respects to those kinds of monopoly effects.
Also remember that in Turner Broadcasting, the regulation wasn't just a speech mandate -- no, I shouldn't say speech mandate -- a hosting mandate; it was also a speech restriction. When cable systems were told you have to reserve up to a third of your channels for broadcasters, that meant they couldn't use those channels for the speech they preferred. That, I think, is the chief reason the Court actually applied heightened scrutiny there, and the Court concluded that the law passed heightened scrutiny as to both the hosting mandate and the speech restriction. So I think Turner's my friend here.
My article on the subject, which is going to be coming out probably within about a month in this new Journal of Free Speech Law that some colleagues of mine and I have started up, says essentially that if you look at PruneYard, if you look at Turner, if you look at Rumsfeld, they support treating social media as common carriers.
Eugene Volokh: Again, on state action. There's a more aggressive and a less aggressive version of your theory, I think. The less aggressive version is: let's say some state -- say Ohio -- enacts a common carrier statute. An individual speaker sues under state law to gain access. The platform interposes §230 as a preemption defense, and then the plaintiff says, well, no, §230 violates the First Amendment -- so the state action is the statute itself. It seems pretty clear that there's state action there. But that theory doesn't make the First Amendment the sword for that claimant to win.
There may be a more aggressive version—I couldn't quite tell if you were going this far—which is to say that by virtue of the support conveyed by the immunity, a platform actually becomes the government, and the speaker can use the First Amendment to seek injunctive relief, let's say, against the platform. Are you willing to go that far?
Jed Rubenfeld: Yeah, not only am I willing to go that far, it's what I meant to be arguing, and I'm very sorry that I was not clear about that. In my hypotheticals from before -- like the barricading of the abortion clinics -- it is clear to me that that act of barricading, though done by a private individual under the immunity statute, is state action by the barricader. So the abortion right is being violated. It's not just that the statute's unconstitutional; there's a violation of constitutional rights when people barricade the abortion clinic, or take your guns out of your house, when they're doing it at the instance, at the behest, and under the inducement of governmental action.
That's the holding of the cases that have looked at this question. When private parties act at the inducement or compulsion of the government, can they be sued themselves, or can the person who suffered the violation get an injunction? Yes, they can. Can they be sued if it's intentional cooperation? Yes, they can. So I very much meant to be making exactly that point.
I would just add, on the First Amendment rights of the platforms -- which is, you know, a very interesting question because of their potential common carrier aspect -- that on the argument I was making, they are state actors when they censor speech in certain circumstances. Governmental actors do not have First Amendment rights. So if it's true, as I'm saying, that they are acting pursuant to the inducement of, and the coercive pressure from, the government, and have become state actors when they censor, then they don't have First Amendment rights. The courts don't have to worry about that anymore, because under longstanding precedent, governmental entities have the right to speak -- that's government speech doctrine -- but they don't have rights that are protected by the First Amendment.
Gregory Katsas: Mary Anne, one for you. You've made a very interesting point about -- and now I need to drill down a little bit into §230 to tee this up -- (c)(2) versus (c)(1). So for those of you who don't know this, like me 48 hours ago, (c)(1) is basically immunity for what you leave up. If you leave something objectionable on the platform that's defamatory, the plaintiff can sue the producer of the content for defamation but can't sue the platform. And you said that was a big problem, because of all of the points you made about vulnerable speakers and such. I'm wondering how your concern about vulnerable speakers plays out in the context of (c)(2). That's the provision that says you're immune not when you leave something up but when you take something down. Is it a fair extrapolation to say -- trying to skip past the point about any right/left valence -- that, in general, individual speakers who want to talk through Facebook will have a lot less power than Facebook? And so don't a lot of your concerns about disadvantaged speakers have valence with respect to (c)(2) as well as (c)(1)?
Mary Anne Franks: That is an interesting way of putting the question, and the way I would answer it is by saying that (c)(2) -- I think it's helpful to keep in mind -- is really nothing more than a procedural underscoring of the First Amendment. And we can feel bad about that. We can talk about whether or not we think it's a good thing, but the state action doctrine, as recently reiterated in Halleck by Justice Kavanaugh, really does mean -- it's not just a sort of afterthought to say, well, we only really care about the state punishing you and we leave everything else out. He says, no, the state action doctrine means everyone else gets this robust sphere of individual liberty. If you are not the government, it is affirmatively good, and part of our essential understanding of the First Amendment and of freedom generally, that you get to make decisions about whether or not you want to associate yourself with someone else's speech, or whether you want to exclude someone from your private property should you see fit.
The thing that is so head-turning about this is that the very same people who used to find that kind of reasoning very powerful -- that is, private actors have individual freedom; this is liberty; this is the freedom to say you don't get to come in here, you just don't. If, and this is an example taken from Justice Kavanaugh, you are a nightclub owner and you're having open mic nights, and someone gets up there and starts saying things that you find to be offensive or pornographic, you have every right to say you don't get to come here anymore. No shoes, no shirt, no service. That is part of the notion that, if we are not government actors, we get to make decisions about whether we want to be associated with speech or not.
When you lose at that -- when you feel like you have lost -- that is otherwise known as the marketplace of ideas, right. You're not welcome here; you're not welcome there. Try harder, do better, have better ideas. Is there unfairness there? Does that mean that sometimes you're going to feel unwelcome? Yes. But, again, I would have thought that it would be the conservative and libertarian cultural groups that would say, yeah, that's just tough, right. There is great irony in having developed an entire platform around criticizing things like safe spaces and then calling for social media to become a safe space for conservative ideas.
And I know the question was raised, maybe somewhere in the chat, about whether I'm singling out a certain group and saying, well, they uniquely are not vulnerable and they uniquely don't have any right to say they're being harmed. That's not what I mean at all. All I am saying is that we shouldn't let people self-identify their harms. It is not enough to say, I am feeling censored. It should never be enough for someone to say, I have this subjective feeling that I'm being silenced. There should have to be some kind of objective evaluation of what exactly it is that is keeping you from speaking. And if what's keeping you from speaking is the fact that you think people won't like you, or that they're going to call you a racist, or that they might be mean to you -- again, I'm not sure why we should be concerned about that, and conservatives generally have not been concerned about that until the game came out slightly differently.
So in other words, I am very much of the mind that when we make claims about actual chilling effects, actual censorship, actual denials of speech, we should be very specific about what that means. Is your life being threatened? Is your private information being exposed? Is someone threatening your children? Those things are real. Those things can be objectively evaluated. Saying someone isn't going to like my ideas is not something that I think should be talked about in the same conversation. And the idea that we get to exclude people who have ideas we don't like -- that, I thought, was part of what it meant to have some kind of faith in individual liberty.
Eugene Volokh: I'm sorry. Just to make clear, I'm totally on board with the notion that nobody should be entitled to block other people from calling them racist or un-American or a communist or a White supremacist or whatever else. That's itself an exercise of free speech. What people are objecting to is being tangibly excluded from certain kinds of important platforms. It's not just a subjective perception. It is indubitable exclusion. Now you might say, what's the big deal? So you can't speak on Facebook -- speak somewhere else. Just like in PruneYard, you could have said, and many states do: what's the big deal? You can't speak at the PruneYard Shopping Mall? Go out and speak on some sidewalk or in some other place.
You might also say, as PruneYard did: oh, we have a First Amendment right to exclude people from our property. That's freedom. We want to exercise our freedom. And libertarians are quite on board with that, but the Court in PruneYard -- not just a majority, a unanimous Court -- was not on board with that. Likewise with the unanimous Court in Rumsfeld, which said, look, you universities want to exclude military recruiters -- well, you're free to criticize them, but that doesn't give you a First Amendment right to exclude them.
So whatever it is that some people, both conservatives and liberals -- look, Erwin Chemerinsky has been arguing for some limitations here. Genevieve Lakier and Nelson Tebbe have been arguing for some limitations on social media platforms. Whether they're conservative or liberal, they're talking about very tangible exclusions from these kinds of social media platforms. And again, you might say it's good that social media platforms get to decide who's on and who's not. But I just don't think that that's something that is mere subjective perception on objectors' parts, nor do I think it is something that the First Amendment secures to social media platforms, this categorical right as a property owner. Again, libertarians are all in favor of that, but First Amendment law in this respect is not that kind of property-rights libertarian doctrine.
Gregory Katsas: Anybody else?
Mary Anne Franks: Just on that front, that's again -- it is self-described in the sense that if you want to look at statistics about who is actually being excluded, first of all, there is a real lack of any evidence that suggests that people are being excluded from social media platforms for ideological reasons to begin with. So there's a descriptive distinction that I think we should be making, because overwhelmingly it is the case that so much of what is considered to be right-wing, far-right content actually does extraordinarily well on social media. There is plenty of that content that is there. So in terms of finding out descriptively whether it is even true that we can highlight that there is a bias, that seems to actually be working in the opposite direction.
But again, it's not like a sixth grade birthday party. If someone doesn't want to have you -- or maybe it is exactly like a sixth grade birthday party. I'm sorry you didn't get invited, but that doesn't mean that there is a constitutional harm that has been done. And it is interesting to take the posture that that kind of harm is magnified in a way that is constitutionally relevant or one that requires us to actually rethink doctrine. I'm very much on board with those First Amendment scholars who say look, First Amendment doctrine is troubling and much less of a good doctrine than we think it is. I'm one of those scholars who says that we should be rethinking a lot of things about the First Amendment. But it is most interesting, I think, for these purposes to notice who is criticizing First Amendment doctrine suddenly today and why, and who also seems to want to take with it the whole notion of private actors being able to engage in their own decision making.
Eugene Volokh: This is not a -- it's not a question of group rights, of saying conservatives are entitled to protection because conservative speech is being disproportionately excluded. Maybe it is. Maybe it isn't. There certainly are particular examples of conservative speech being excluded because of its ideology, but there are also examples of left-wing speech being excluded based on its ideology. And it may very well be that on balance, conservatives are doing well there. The claim here is that this should be an individual right, just like our right to go to a shopping mall. Nobody knows who most benefits from the ability in California and a few other states to go onto private shopping malls and hand out leaflets. We don't know, but the theory behind that rule is that it protects people's individual right, not a First Amendment right, but a right of speech.
To give another example, there are lots of states that have laws that ban people from being fired from their jobs for their political activity. Is that predominantly helping liberals or conservatives? Don't know. In a sense, it might be a liberal rule because it limits employment at will, which traditionally conservatives have endorsed. But the bottom line is even conservatives have accepted some limits on private property, generally speaking, and it's a right that protects the rights of individual speakers. So for my purposes, it doesn't really much matter whether it's disproportionately one side or another that's being restricted here, because it's not about the rights of conservatism versus liberalism or some such.
The question is should people, individual speakers, have the right to be free to speak on these platforms without viewpoint discrimination. Maybe the answer is they shouldn't. But that, it seems to me, is the really important question.
Gregory Katsas: Jed, you've been relatively quiet. Any final thought?
Jed Rubenfeld: Well, I would just say that we are living in a completely uncharted world here. The internet is sort of the modern public square, as the Supreme Court put it in Packingham. We have never dealt with a situation where control over the public square is concentrated in the hands of two or three or four or five private companies. The idea that our old ways of thinking about these problems are going to be completely satisfactory in a world in which the danger of private censorship has grown to the extent it has should -- it justifies folks in thinking about how traditional doctrine needs to be rethought. And I think that we're all doing that. I think that Mary Anne's doing that. I think Eugene's doing that with common carrier.
I'm worried about instances where the state is using its power and authority to induce these gatekeepers to suppress constitutionally protected speech that it doesn't want to see online, which to me is just clearly unconstitutional in terms of the private conduct and the state action. But we're all grappling with this new world, and the younger folks and judges are going to have to grapple with it too. So that's all from me.
Gregory Katsas: Well, thank you all very much. In the finest tradition of the DC Circuit, we've gone well over our allotted time, so we're going to have to bring this to a close. This has been great. FedSoc aspires to sponsor interesting, vigorous, respectful debate on topics of the day, and I think we have easily met that standard here. I see that the Q&A and chat functions have been abuzz. Some of these might even be questions, but we're provoking some strong reactions here, which is great. I'm sorry we won't be able to get to all of that, but the good news is, as I said, this is just the first of six installments, and we will have part two, focused a little bit more on §230 itself, I believe, Wednesday at 12:30 p.m. So thank you all for joining us. If you enjoyed today's panel, please try to make it to the next one. Thank you.
Jed Rubenfeld: Thank you.
Mary Anne Franks: Thanks.
Evelyn Hildebrand: And on behalf of The Federalist Society, I want to thank our experts for the benefit of their valuable time and expertise today. I want to thank our audience for participating. As Judge Katsas said, our next panel will be next Wednesday at 12:30 p.m., and it will feature Judge Katsas as the moderator and Professor Mary Anne Franks, Professor Volokh, and Professor Philip Hamburger. We welcome listener feedback by email at email@example.com. As always, keep an eye on our website and your emails for announcements about upcoming events. Thank you all for joining us today. We are adjourned.
Dean Reuter: Thank you for listening to this episode of Teleforum, a podcast of The Federalist Society’s Practice Groups. For more information about The Federalist Society, the practice groups, and to become a Federalist Society member, please visit our website at www.fedsoc.org.