Section 230 of the Communications Decency Act

A Capitol Hill Chapter Teleforum

Listen & Download

Section 230 of the Communications Decency Act protects platforms from liability for the content produced by their users. As social media platforms have evolved, concerns about free speech and platform liability have sparked debates among legislators regarding the best way to regulate social media companies. Some have questioned whether Section 230 is the best solution and have proposed congressional enforcement of ‘platform neutrality.’ Others have argued that Section 230 is the best way to protect free speech. During this program, our experts will debate and discuss Section 230 and how Congress should approach regulation of social media companies.

Featuring:

  • Hon. Ronald A. Cass, Dean Emeritus, Boston University School of Law; President, Cass & Associates, PC
  • Neil Chilson, Senior Research Fellow for Technology and Innovation, Charles Koch Institute
  • Josh Divine, Deputy Counsel, U.S. Senator Josh Hawley

 

Teleforum calls are open to all dues-paying members of the Federalist Society. To become a member, sign up on our website. As a member, you should receive email announcements of upcoming Teleforum calls, which contain the conference call phone number. If you are not receiving those email announcements, please contact us at 202-822-8138.

Event Transcript

[Music]

 

Dean Reuter:  Welcome to Teleforum, a podcast of The Federalist Society's Practice Groups. I’m Dean Reuter, Vice President, General Counsel, and Director of Practice Groups at The Federalist Society. For exclusive access to live recordings of practice group teleforum calls, become a Federalist Society member today at www.fedsoc.org.

 

 

Greg Walsh:  Welcome to The Federalist Society's teleforum conference call. This afternoon's topic is titled “Section 230 of the Communications Decency Act.” My name is Greg Walsh, and I am Assistant Director of Practice Groups at The Federalist Society. As always, please note that all expressions of opinion are those of the experts on today's call.

 

      Today, we are fortunate to have with us the Honorable Ronald A. Cass, Dean Emeritus of Boston University School of Law and President of Cass & Associates; Mr. Neil Chilson, Senior Research Fellow for Technology and Innovation at Stand Together; and finally, Josh Divine, Deputy Counsel for U.S. Senator Josh Hawley. After our speakers give their opening remarks, we will go to audience Q&A. Thank you all for sharing with us today. Mr. Cass, the floor is yours.

 

Hon. Ronald Cass:  Thank you very much. It’s a delight to be here. The usual way that libel law is approached is that anyone who says something that is defamatory of another is subject to liability for that, and that includes people who simply repeat things said by others. It also includes publishers who put out things that are written by others.

 

      In the law that was passed in 1996 as Section 230 of the Communications Decency Act, one of the things Congress sought to do was to protect internet providers of opportunities for others to speak, those providing an interactive communications service over the internet, against the same sort of liability that a publisher might have. Instead, the providers of those services would be treated as if they were simply providing a transmission mechanism, a platform, or doing what someone might do in a library: giving others the opportunity to come in and get access to the information and content provided by other people.

 

      The act has been a little bit controversial because it actually combines two different ideas. The first idea is that the people providing these services are essentially not doing the editing or curating function that someone might do who is actively selecting communications. The second idea is that the providers either can or must perform some minor editing function: they are allowed to screen out material that the provider considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, and they are required to perform certain functions that will keep off the internet activity and content that is criminal or that may be subject to criminal punishment under some conditions.

     

      Having put these two things together, there are a variety of questions that come up regarding the act. I’m going to divide them into three categories and do a very quick hit on each. The three categories are congressional power, constitutionality, and policy. On congressional power, it is fairly clear that Congress has the power under its regulation of interstate commerce to try to facilitate the development and functioning of the internet. So the basic idea behind the law and its inclusion in the Telecommunications Act of 1996 is within the scope of Congress’s power.

 

      On constitutionality, it’s a more difficult question. Generally, the First Amendment is only worried about what the government does in terms of making choices among messages or putting liability on messages; there are certain things government can’t do. Those obligations on government don’t generally translate to private actors. Ever since the 1960s, the Supreme Court has constitutionalized parts of libel law that previously were thought to be outside the scope of the First Amendment.

 

      But by and large, the sorts of things that are given over to the internet providers here, the things they are either required or permitted to edit without incurring liability, are matters of style. They are matters of the way people say things rather than the actual message that people are providing. So by and large, the law doesn’t raise the constitutional question. The constitutional question arises if it’s found that this sort of freedom for internet providers is systematically being used in a way that screens out some types of messages, perhaps from one side of the political spectrum, and that the government is essentially cooperating in and facilitating the sort of message discrimination that it could not engage in directly.

 

      The policy question is the hardest of the three, and that’s whether this is a good law, whether it’s actually serving its intended purpose, or whether mixing the two sorts of activity, the freedom for people simply to provide a platform and the ability to make certain types of editorial judgment, mixes two models of speech in a way that makes it hard for anyone to see which role the internet content provider is playing and whether it’s playing that role in a manner that fits good public policy.

 

      Those are the sort of questions that are more in the news these days. Those are the sort of questions that are prompting people to raise the issue of whether there should be a revision of the law. And I’m sure that those are the sort of questions we will delve into much more deeply over the course of the hour that we’re on this call. And I’ll turn it over now to Neil.

 

Neil Chilson:  Thank you very much, and thanks to The Federalist Society for inviting me to participate in this discussion. It’s always great to participate in a Federalist Society teleforum, but this format is particularly relevant in this time of social distancing. Of course, you guys are doing remote conferences on the regular, long before anybody had even heard of Zoom, so that’s great.

 

      Speaking of Zoom, the U.S. tech sector is the best in the world at giving people powerful tools to collaborate with each other. We’re all experiencing the importance of such tools in a particularly intense way right now, and a big part of the reason that the U.S. leads the world in this area is Section 230 of the Communications Decency Act. So to push back, perhaps, a little bit on Mr. Cass, I don’t think the policy question is particularly close here, actually, and I’ll get into that a little bit.

 

      But first, Section 230 essentially embodies a clear and what I would call a conservative principle of individual responsibility. In the simplest terms, it says that bad actors are responsible for their actions online, not the tools that they use. If this sounds pretty obvious, it’s because it is the normal way we do things in the U.S., and I know Mr. Cass had a few examples. We don’t hold newsstands liable for the articles in the newspapers they sell. We don’t require bookstores to fact-check every book that they sell. That’s the author’s job. And so, too, we hold social media users, not services, responsible for their words online.

 

      Intermediary liability is a legal concept that applies in some places, but it’s rare, and for good reason. Blaming one person for what somebody else did can be unfair and unjust. Now, of course, even under Section 230, internet services remain completely responsible for the content that they themselves create. Section 230 fixed a real liability problem known as the moderator’s dilemma.

 

      And here is the dilemma that came up in case law in the early ’90s. Websites that offered their users the ability to post content online faced two terrible options if they didn’t want to be sued out of existence by trial lawyers. Option one was essentially to take an anything-goes approach to moderation, just letting users post any and all things that they wanted. I think we can all imagine what an internet like that might look like: one dominated by trolls, spam, fraud, and porn.

 

      The second option was to hire a ton of lawyers to check every single piece of content that users submitted, essentially to make sure that the website wouldn’t and couldn’t be sued for something that a user said or did. This would be a very sterile, very risk-averse internet. Imagine what Twitter might look like if Jack, the CEO, thought he might get sued for anything a user might say on his platform.

 

      Congress recognized that both of these were pretty bad options, and they wouldn’t enable the promise of the internet to connect normal people to each other. Section 230 solved that moderator’s dilemma by providing both a sword and a shield: the ability of companies to defend against lawsuits where people were suing them for the bad acts of their users, but also the power to moderate, to actively remove content so they could build the type of platform they wanted for their users.

 

      The underlying cause of that dilemma has not disappeared, even though the law was passed in ’96. In fact, it has gotten much more intense as the internet has become massively more popular and international and people share many more different types of content. Without Section 230, websites would be right back in the moderator’s dilemma, worse than ever. And although trial lawyers and trolls might really love it, the internet would be a place much less useful to normal people like you and me.

 

      So what exactly is at stake here? And this gets to the policy question. As I mentioned earlier, one of the big reasons the U.S. is the leading tech country is Section 230. And it’s pretty easy to compare the U.S., which has Section 230, with a place like the European Union, which does not.

 

      And the disparity here is actually really staggering. Total VC investment in the early ’90s was basically at parity between those two regions, and then in ’98, ’99, the U.S. starts being multiple times bigger. I think the tech sector in the U.S. shows the radical success of the policy choices we made in that era: two to three times greater investment in tech overall compared to the E.U. under its regime, the e-Commerce Directive, and five to six times the E.U.’s investment in interactive platforms.

 

      This affects companies. It affects big companies, obviously. It also affects lots of small companies who are seeking investment. And it affects a lot of small companies who aren’t even really internet companies except that they use the tools of these platforms to sell their products to consumers and to advertise to them. Amazon and Walmart both operate massive third-party seller forums. Section 230 enables those platforms to exist. Billions of dollars of commerce from small vendors flow through those sites every month.

 

      So Section 230 protects investment. It has spurred investment in the U.S. It helps small companies. But the part that strikes me most personally is how platforms enabled by Section 230 allow everyday people to innovate. I’ve talked about the companies, but there are a bunch of community-building connections that happen online that don’t fit into any of these commercial categories.

 

      My favorite story about this is a very personal one. My wife and I have a nine-month-old, and while she was still in utero, she was diagnosed with clubfoot, which is a birth defect. Thanks to the miracles of modern science, it’s very correctable, but it requires a challenging process that spans many months, and it’s really scary. As new parents, we had a ton of concerns. Our doctors are great, but we couldn’t always ask them every question, and some things didn’t always make sense to ask them.

 

      But you know who was always available? The 5,000 plus people in the clubfoot support group on Facebook. Any time, day or night, we could hear from people we had never met but who really understood our struggle. Now that we’re through the hardest part of that struggle, we can share that back with people in that community. There’s no business plan that could really build that type of community in the absence of something like Facebook groups, and Facebook groups wouldn’t exist without Section 230 because Facebook’s lawyers would have to vet every single post to the clubfoot group to make sure that they weren’t going to be sued.

 

      So to me, that’s what’s most at risk from some of the proposals that are in Congress today. I assume we’ll hear a little bit about Senator Hawley’s Ending Support for Internet Censorship Act, and I have a lot of concerns about that bill because I think it ignores the major benefits that Section 230 has brought to the U.S. and to our communities. On top of that, it combines -- sorry, you might be hearing my nine-month-old in the background right now.

 

      So on top of that, this bill combines the core elements of the fairness doctrine and net neutrality, two things that Republicans really haven’t been in favor of in the past. In fact, Reagan got rid of the fairness doctrine. And it really injects FTC bureaucrats into millions of decisions about internet content. We can talk a little bit about the details of it, although I’ll leave most of that to Josh, and then we can talk back and forth.

 

      But big picture, it would set up a sort of partisan blood match at the Federal Trade Commission every other year, when companies had to go reseek their certification from the FTC in order to get the protections of Section 230. I spent some time at the FTC. It’s a very bipartisan place. But if we had proceedings every other year where people were coming in to tell about the one time that they were bothered on the internet by somebody and trying to get that company’s license removed, it’s just ripe for abuse.

     

      So just to tie up, the text of Section 230 isn’t sacrosanct. It wasn’t handed down on tablets from Mount Sinai, but its purposes are important, and it has been successful at achieving those. And as we evaluate Section 230, the questions we should be asking are whether and when it makes sense to hold one party responsible for the bad acts of another, and what the negative consequences might be if we do that.

 

      That’s not really the question being asked right now, though. The question being asked right now seems to be: this law seems really important to a bunch of companies, and I want to see what I can get if I threaten to take it away. I think they’re right in seeing that the law is important to innovators and business people. I think they might be missing that the law is also critical to everyday people who want to connect with others online.

 

      So as Congress steps into this and thinks about how to evaluate Section 230, it should remember that the law’s principle of individual responsibility has shown the power of connecting everyday individuals, and that if we change it, we risk shutting down the voices of everyday people and solidifying the position of already powerful speakers and gatekeepers.

 

      There’s a set of seven principles out there that were issued by 28 institutions across the political spectrum. I’d be happy to go through those, but most of them have been covered in my remarks. And I look forward to the discussion. Thanks.

 

Josh Divine:  All right, thanks. This is Josh Divine. I’d like to echo what others were saying and thank The Federalist Society for putting this program together. And I’m especially grateful to have the opportunity to join Ron and Neil on this panel. I think they have an incredible wealth of experience to share on this issue.

 

      I want to start off by responding to something that Neil just said. Neil gave a bunch of examples of the good that Section 230 does. And I sympathize with you, Neil, about the situation you were going through with your family. But one thing Neil said was that Facebook groups wouldn’t exist without Section 230. In fact, Facebook groups do exist in other countries, like in Europe, where Section 230 doesn’t exist.

 

      I think one of the things we need to be considering here is that we don’t have to maintain the status quo to keep the good things that we like about Section 230. I think the ultimate question here is whether Section 230 in its current form is the best we can do. And I think the answer to that clearly is no.

 

      I want to make three basic points that I think we should consider when reevaluating this law, especially with respect to social media. First, the courts have interpreted 230 immunity much more broadly than the text justifies. Two years ago, one of the original drafters of this provision expressed surprise and said it’s not even text-based anymore. He called it judge-made law.

 

      Second, Section 230 was designed in 1996 for a very different internet with very different kinds of companies, and it doesn’t really make sense in its current form for some of today’s companies.

 

      And third, 230 is a government subsidy. It’s a tremendous gift to certain companies. And I think we should consider whether society should receive something in return for providing that subsidy.

 

      So I’ll give a little bit more background on each of these points. First, 230 immunity is extraordinarily broad. A lot of people try to justify this broad immunity by saying that it’s necessary because the threats that internet companies faced in 1996, and face today, are unprecedented. But that’s not really true. The basic tort problem, as Neil was just describing, is that these companies distribute a lot more third-party content than they can reasonably vet. And I think that’s true. But other industries have long faced the same problem. Telephones, telegrams, bookstores, nearly every retail establishment sells third-party products.

 

      And before Section 230, the common law had developed a three-part solution to this longstanding problem, so I disagree with Neil’s statement that we didn’t impose liability on things like newsstands. Where speech was concerned, if you were a publisher, you could be held strictly liable. If you were a distributor like a bookstore or a newsstand, you could be held liable only if you knew what you were distributing was illegal. So you could be held liable if you were a newsstand. And if you were just providing the mechanism for speech, like a copy machine or a microphone, then you couldn’t be held liable at all.

 

      Now, there’s a famous case called Prodigy where a court basically messed the standard up. It was looking at a defendant that was editing some content, so it was a publisher of some content, but it was just distributing other content without editing. And the court basically said, “Well, if you’re going to be a publisher for some of the content, we’re going to assume you’re a publisher for all the content.”

 

      So Congress passes Section 230. It’s a pretty simple, straightforward statute that says, look, if you help develop the content, then you’re going to be held liable. But if you’re just distributing content that’s created entirely by third parties, then you’re not a publisher, so courts shouldn’t treat you like one. I think it’s a pretty simple statute, but it gets distorted almost immediately. Section 230 says that internet companies can’t be treated as publishers, but it never says companies can’t be treated as distributors, this original common law distinction. So plaintiffs started trying to hold companies responsible for content that the companies knew was illegal.

     

      In an early important case, the Fourth Circuit rejects this argument and says, well, distribution is really just a subset of publication, so Congress implicitly meant to eliminate distributor liability as well. In my view, the court made what Justice Scalia used to call the most common interpretive error: it considered only one section out of the whole statute. There are a bunch of other sections in the Communications Decency Act, and one of those sections expressly imposes distributor liability.

 

      And when you consider the whole text, I think it’s really tough to conclude that Section 230 eliminates distributor liability, and does so implicitly, when the same statute elsewhere recognizes and imposes that very same kind of liability. The court’s ruling ends up creating an extraordinary expansion of immunity. Some internet companies are no longer subject to the same traditional limits as distributors. Instead, 230 now treats those companies more like photocopiers and microphones.

 

      And that expansion enables companies like Backpage, which took advantage of this broader immunity to design its website specifically to monetize illegal content. Backpage made millions of dollars by intentionally creating a hub for human trafficking. And for years, it used Section 230 to evade liability. Backpage said it was immune because the ads for human trafficking were created by third parties, even though it knew about those ads and, in fact, intentionally designed its platform to encourage them.

 

      So let me give one more example. Snapchat has a product called the speed filter. It basically allows you to take a picture, and part of the picture displays your current speed. Now, most people recognize that this kind of tool is primarily attractive to reckless drivers and indeed encourages reckless driving. Yet under current doctrine, it’s entirely protected.

 

      Now, the problem with this is that a plaintiff who sues somebody over something like the speed filter isn’t complaining about specific speech. They’re complaining about a reckless platform design decision. So when Neil says we should hold people accountable for their own actions, here plaintiffs are trying to hold the company accountable for its poor design. But Section 230 has been interpreted to cover those claims anyway. These are not the kinds of immunities that the text of Section 230 contemplates.

 

      And a second point I want to make is that 230 was designed for a very different internet. In 1996, most companies basically distributed content from point A to point B, especially compared to what they do today. And we still have companies like that, ISPs, content delivery networks, and for these companies, 230 immunity makes a lot of sense.

 

      But companies like Facebook and Google are very different. They’re nothing like what we had in 1996, but they’re receiving the same kind of immunity. In fact, they’re receiving better immunity than distributors. They basically have total immunity from third-party content, just like microphones and copy machines, even though they substantially manipulate and alter the presentation of that content.

 

      Now, I’m not saying these companies should be fully liable for everything. But I am questioning why, when it comes to immunity, we treat Facebook and Google like they’re copy machines when they so clearly are not. And I think immunity protection should recognize distinctions between different kinds of companies.

 

      And the third point I want to make is that 230 is a subsidy. It’s a government handout that frees certain companies from tort obligations that everybody else has to follow. It’s like the dispensations that the Crown in England used to give to favored companies: you no longer have to comply with the law; everybody else does. And from a conservative or libertarian perspective, the fact that government is permanently subsidizing some of the most powerful, wealthy companies in the world is justification enough to at least begin thinking about reform. And I think we should be thinking about what these companies should do if they’re going to continue to receive this subsidy.

 

      So number one, we might consider transparency and accountability requirements. The vast majority of speech today is controlled by three people, the heads of Facebook and Google. These companies amplify some content, suppress other content, and the decisions they make are almost entirely in a black box. Now, if the government had the same kind of control over speech that Mark Zuckerberg does, I think a lot of people would be willing to stage a revolution.

 

      And today we have a lot of debate about whether there’s sufficient evidence of systemic bias at Facebook and at a lot of these different kinds of companies. But I think it’s odd that we even have to have that debate. The burden of proof ought to be on the companies who are trying to sell us their services. The problem is that they’re dominant companies, and they know that we have nowhere else to go, so there’s no competitive incentive for them to provide things like transparency and accountability.

 

      Another thing I’d like to see is companies adopting more data portability and interoperability policies. These used to be very common. You don’t have to have a Gmail account to send an email to somebody who does have one. That’s because email is interoperable, and you can import your data from one service to another. Similarly, I think you shouldn’t need to have a Facebook account to contact somebody on that network, and if you don’t trust Facebook to make unbiased filtering decisions, you should be allowed to select a third-party filter.

 

      There are a lot of different options we can think about. In 1996, Congress created a market distortion and gave a bunch of companies an exemption from tort law that everybody else has to follow. And if they’re going to continue to keep that exemption, I think we ought to think about what they should do in return.

     

      So I’ll turn back to Ron and Neil for any comments you might have about all of this.

 

Hon. Ronald Cass:  This is Ron Cass jumping in just on a couple small points here. When Neil was speaking, he painted a picture that was really sort of an either/or. Either you had an all open internet where you would be overrun by material you didn’t want to see, or you would have an internet that was burdened by having to have lawyers look over everything. I will say, as someone who was a law school dean, I never object to overlawyering things, but I understand that not everyone has the same sensibility as I do on that one.

 

      And I think that the either/or approach really doesn’t quite capture what we have. There are a lot of different sorts of judgments that can be made, and they can be made without either overburdening the internet or requiring us to be in court all the time. With the sort of ability that is in the act under its Good Samaritan provision, you can imagine some types of editing that seem fairly easy to defend, say, when you talk about the liability of a library or a bookstore.

 

      Imagine you have a bookstore that isn’t just a store for any book that happens to meet a criterion like published within the last decade, on the top 50 seller list, or demanded by 50 people who come to the store, but instead something thematic. So you have a children’s bookstore that tries to screen out anything that’s not appropriate for children. This is a safe place for children to come and look around, and for adults who are looking for children’s books.

 

      On the other hand, consider another type of bookstore, a bookstore that’s the anti-Obama bookstore, or the anti-Trump bookstore where the books that have been gathered in the store all have a similar type of messaging. Obviously, the role that’s being played by the editing that’s done by the bookstore owner is very different. And we have very different sorts of sensitivities to the degree to which the bookstore owner is becoming like a publisher, like an editor, instead of like a transmission service. And I think that’s the sort of question that has to be debated when we’re talking about what should happen with Section 230 moving forward, whether it’s serving an ideal function or whether it can be improved by changes in the law.

 

Neil Chilson:  Yeah, this is Neil. Thanks for that point. I was trying to get to that point towards the end, but that really is the important question, right? The question should be when does it make sense to impose intermediary liability and when does it not?

 

      It’s interesting because this law has been around for 24 years, and there’s been tons of litigation about it. So when I hear about constitutional questions around it or about what the words mean, these are not in a vacuum. Courts have looked at this over and over and made judgments in exactly those types of scenarios that you’re talking about, Mr. Cass, where they’re looking at individual facts and saying, “How does this law apply in this situation?” So I do think that the law takes into account lots of different situations, with courts trying to figure out what the right principles are to apply in each case.

 

      So one of the interesting things about that, I think, is the mention of the sort of unique legal environment of the U.S. and the comparison to the E.U. around Section 230. You’re right that the E.U. does not have Section 230 and that it also has Facebook groups, but the E.U. doesn’t have the trial bar that the U.S. has, either. And maybe that’s to Cass’s students’ benefit. But it doesn’t have the trial bar, and it doesn’t have the litigious nature that the U.S. has, and that was the entire purpose of Section 230: to protect companies from the risk of frivolous, meritless lawsuits when the bad actors were the people who should have had cases brought against them.

 

      As for the mention of Backpage, it’s hard to take that one seriously, given that Backpage was brought down by the FBI while Section 230 existed. And so I’m not quite sure how Section 230 stopped DOJ and the FBI from bringing down Backpage.

 

      And as far as a subsidy goes, one of the things that was interesting is that, over and over, there were lots of analogies being thrown out for the internet. Is it a copier, or is it a publisher? Part of the problem is that the internet is not analogous to any of these things. It’s something different. And a basic principle of the rule of law is that you treat similarly situated things similarly. The law that applies to them might be different from the law that applies to things that are very different, but that makes sense. That’s just. And so newspapers are protected by Section 230 just as much as Facebook is if they have a comment section online where people can comment.

 

      And so saying that this is a subsidy to one particular industry doesn’t make any sense to me. It was created long before these big platforms existed, it enabled a lot of innovation among big platforms and small platforms, and it applies equally to anybody who is similarly situated to those platforms, which is many, many social media platforms, over 200 last time I checked on Wikipedia.

 

      And so I’ll just stop there. I’m interested in the conversation.

 

Josh Divine:  Yeah, and so Ron’s point about the sort of continuum between publisher and distributor I think is really a key point. And this is kind of the issue I’m having with how Section 230 applies. There are publishers, there are distributors, and there are a lot of companies in between. I agree with Neil that we should be treating similarly situated companies alike. What I disagree with is the idea that all these tech companies are similarly situated. Facebook and Google are doing something totally different from what an ISP is doing, and I don’t think Section 230 should treat them the same way.

 

      Now, I thought Neil’s comments about the trial bar here versus in Europe were right. And I think a lot of what’s motivating support for Section 230 is that people don’t like our tort system. But if that’s the case, then we should be talking about tort reform. I don’t think we should be requiring everybody else to comply with tort law while giving certain companies an exemption. Neil pointed out that venture capital funding is a lot higher here than in Europe, and I don’t think that’s a surprise: if you give certain companies an exemption from the regulations and rules that everybody else has to follow, of course they’re going to attract more funding and be more profitable. But that’s a classic market distortion.

 

      So I think all this is very interesting, and I’m interested to hear what the questions are.

 

Greg Walsh:  Okay, let’s go to audience questions.

 

Annie St. Hilaire:  Hi, my name’s Annie St. Hilaire, and I wanted to ask this question specifically of Josh. And the reason this is a topic I can relate to is that I recently came out as a Republican, and from there, I began making some very political statement videos and posting them on Facebook. And within hours, I just got so many responses, so many supporters.

 

      But I’ve noticed in recent weeks and months that I’ve been censored by Facebook. And I believe that they’re doing this essentially because voices like my own, being a woman and being African American, are necessary in the social battle that we’re currently going through. And I guess what I wanted to ask is how can Section 230 be used in a way to hold companies like Facebook, like Instagram, accountable for censoring conservatives?

 

Josh Divine:  Well, first of all, congratulations on becoming a Republican. I think the issue you’re running into is one that’s really happening a lot. And these companies have their standards, and often these standards are extremely vague, and we don’t really know how they’re enforcing them. We don’t really know if they’re enforcing the standards evenly or not. Oftentimes, there will just be an enforcement decision, and there’ll be no explanation. And all of this is occurring in a black box.

 

      And I think if you had more competition in this area, this sort of thing wouldn’t happen. If you look back at the days where Myspace existed, for example, Facebook was really a pro-privacy organization because it had to compete on those grounds. But then as soon as Myspace went away, as soon as Facebook got a large enough network, suddenly it became one of the worst privacy organizations. And so this is a situation where these companies don’t necessarily have an incentive to be transparent about what they’re doing to show that they are, in fact, enforcing their terms of service neutrally and evenly.

 

      And Section 230 can be one avenue for trying to encourage that kind of cooperation: you pair the immunity with a responsibility provision on transparency and accountability. Maybe the company has to go through a third-party audit. Maybe they have to take some other kind of transparency measures in exchange for retaining Section 230 immunity.

 

Hon. Ronald Cass:  This is Ron Cass. One of the problems with the way Section 230 is drafted is that, like a lot of other laws, when the drafters were trying to figure out what sort of editorial judgment they wanted to facilitate, they were certain that whatever they wrote down would leave something out. So they tacked on at the end that the provider was able, without any liability, to censor material it found otherwise objectionable. And that’s the sort of editorial judgment that is so broad and so open-ended, it potentially sweeps within it the sort of judgments you don’t want people to be making, or certainly not making surreptitiously.

 

      And I think that’s part of the problem you’re running into: you don’t know whether those sorts of judgments are being made. You suspect they are. There isn’t any straightforward statement of them, but there is a concern that the way Section 230 is drafted, it authorizes freedom from liability for judgments of a sort that should not be outside the scope of ordinary liability rules.

 

Neil Chilson:  I’d like to also congratulate the caller on her success in being able to get her ideas out there, even if there are people who don’t want to hear them. One of the great things about the platforms we have today is -- I mean, imagine trying to do that when your options were essentially to go down to the local broadcast station or write an op-ed for the newspaper. If you had an idea that the editor of that newspaper didn’t like, or if you didn’t have the connections to get on a broadcast channel, that would have been impossible. These platforms give people with ideas the ability to connect without having to go through gatekeepers.

 

      We live in an age where there are far fewer gatekeepers than there ever were, and Section 230 has been part of enabling those platforms to exist. And when you look at the way that Republicans in particular have used online platforms to their advantage -- if you look at Facebook, often seven of the ten most shared pieces of political content are Republican -- it’s very common to be able to get access to those ideas even when the mainstream media in many cases wasn’t talking about them. So I think Section 230 has been part of the success of Republicans in being able to connect with each other online.

 

Greg Walsh:  Okay, let’s go to our next caller.

 

John Meyer (sp):  Hi, this is John Meyer. And first, I think that anything that gives these platforms more tort liability will give them more excuse to sneak in even more censorship than they’re already doing. I think it is a big problem because they’re regretting the fact that being an open forum has allowed conservative ideas to be heard.

 

      I really think that consideration should be given to taking monopoly groups like this and putting them under a modified First Amendment rule where you can’t censor for viewpoint. And then there’d be some exceptions to that, like clear and present danger, or even something a little more liberal than that. But I don’t think we’re going to solve the problem without something fairly drastic.

 

Neil Chilson:  One thing I’ll add is that we often hear about the Republican complaints, or I should say the conservative complaints, on these issues. But there is a raft of complaints about moderation across the spectrum. And that’s because moderation at that scale is very, very difficult. We’re talking billions and billions of pieces of content that are people’s individual communications with somebody else, and these platforms are trying to figure out a way to keep their environment the kind of place that keeps people happy, using their platforms and coming back. And in that process, they have to make decisions about this.

 

      And I think transparency could be very helpful here, but I just want to say that there are stories from every part of the political spectrum about these issues. It doesn’t mean that there’s a bias. It means that moderation is very difficult and challenging to deal with.

 

      And I really did like the caller’s point that if you increase tort liability here, you increase the incentive to shut down speech. These companies would face very little penalty from the individual voice that they squash, but they could face a giant fine from the government, or a private lawsuit that might crush them or just be very expensive, if they got this wrong. And so I think that removing Section 230 protections would be just as likely to increase content moderation as to decrease it.

 

Greg Walsh:  Okay, let’s go to our next caller.

 

Jay Patel:  Hi. Good afternoon, everyone. This is Jay Patel, and I appreciate the opportunity. Someone earlier made the point about trying to put guardrails around provisions like Section 230. And while all of us are libertarians, I think that there’s consensus around putting guardrails in. There’s Senator Lee and there’s another senator — I’m drawing a blank — but the two of them came up with some context for how to put guardrails around censorship and also around access to private data, like location data, from mobile devices and apps. And there was some discussion about all that.

 

      I attended a session on this on the Hill last year, before COVID, obviously. So I believe there is consensus about how to go about doing it; I think it’s a matter of the way they move forward. That’s the tough part. And the two senators, by the way: one of them is a Republican, and the other one’s a Democrat, obviously. They came together to work together and cosponsored some provisions around -- not 230, but regarding geo access and things like that.

 

      I do think that censorship to a point would help, especially related to youth, in terms of the content and the nature of that content. But the internet wasn’t really designed for that, I think, given the evolution of all the platforms and the technology that’s out there. It’s really a difficult thing because, like someone made the point, you can publish something and share it globally, and it’s out there, whether someone agrees with you or not, without having to go put an ad in the paper, in traditional media. But the flip side is that you can’t take it back, either, once you put something out there. Anyway, I appreciate the conversation, and thanks for the opportunity to listen.

 

Hon. Ronald Cass:  This is Ron. I just wanted to jump in. I think it’s pretty important that we keep in mind, as the last caller was saying, that we don’t want to limit First Amendment protections for speech. We don’t want the government regulating who can say what, and we don’t even want the government using things that look like style restraints in ways that can substitute for message restraints.

 

      The question we’re really engaged in at this point is whether there are changes we can make to Section 230 or other parts of the law that don’t get the government into that posture but that also allow people to protect their own reputations, and to protect themselves against censorship that is message based or content based from enterprises that don’t have the ordinary rules applying to them.

 

Josh Divine:  Ron, this is Josh. I think a lot of the things you’re discussing can be addressed by addressing the issue of concentration in digital markets. A lot of the things that people complain about these days are downstream of just the massive concentration that you have. Facebook has three billion people on its platform. Nobody else comes close. So if you want that network size, and that’s the most valuable thing about Facebook, you go to Facebook. And this is where I think data portability, interoperability, some of these operations that can decentralize a lot of the decision making that’s going on, those sorts of things can have very positive effects.

 

      Take my earlier example about third-party moderators filtering your Facebook feed. If you’re able to choose not just to rely on Facebook’s algorithm and Facebook’s moderation but to choose whoever else you want, suddenly these third-party moderators are going to have an incentive to compete. They’re going to have incentives around transparency. They’re going to have incentives around neutrality. They’re going to have all kinds of good pro-competition market incentives that are currently lacking because the barriers to entry in this area are just enormous.

 

Hon. Ronald Cass:  This is Ron again, just to say that the question of concentration, and what sort of response is appropriate to concentration in different markets, needs a separate antitrust phone call. So I’ll reserve engaging on that for another day.

 

Neil Chilson:  Ron, I think The Federalist Society has done several, actually, on this, and I’ve participated in some of them. So suffice it to say I have a somewhat different view of the facts than Josh, but I agree with you, it’s a separate issue. And I’m a big fan of markets. I think markets have been part of what has brought us these powerful tools, where we have a network that is super useful and powerful and we can multihome on many different platforms at the same time. And Section 230 has been part of facilitating that powerful technology and those powerful tools.

 

Greg Walsh:  Okay, let’s go to our next caller.

 

Sean Callahan (sp):  Hi, this is Sean Callahan. I have a practical, nuts-and-bolts kind of question. I’m sitting here in my office, and I’ve got a coding workbench up. I’m coding a website. It doesn’t take in a whole lot of user-supplied data, but let’s say it did. Let’s say I’m trying to make the next Twitter but with a conservative slant, and I start getting thousands of pieces of user content a day. And when it’s still just me — let’s make it the bedroom so it’s more dramatic — when I’m just in my bedroom or my dorm room and it’s still just me, what happens when I get sued, when I get served? Is this good social policy, and how many innovations are going to be snuffed out before anybody even knows about them?

 

      Relatedly, I’m going to scale it up. Twitter has 6,000 posts a second. So obviously, in some sense, it’s exactly like a copy machine. It’s an electronic system. Now, I would think that when you want to impose distribution liability on it, you’re going to talk about moderation. So the nuts-and-bolts question is: whose burden is it to plead or prove that a defamatory or otherwise actionable statement had been moderated?

 

      If we get rid of Section 230 but we say a plaintiff can sue Twitter or sue my garage business, and they have to plead that I saw the objectionable statement and manually allowed it, manually moderated it, well, plaintiffs aren’t going to be able to plead that, so we’re not going to have many lawsuits. If, by contrast, it’s the defendant’s burden to prove that he didn’t moderate, that might work. I’d be interested to hear your comments. But it would seem odd to say that 6,000 tweets a second are all distributed by Twitter just because Twitter has algorithms that sometimes do things with those tweets.

 

Neil Chilson:  Yeah, I think Section 230 — this is Neil — is helpful in those situations to startup companies who do have a flood of content immediately. It is possible in this day and age to go from zero to a hundred with a website or a web service and far outstrip your capacity to review content, even when you’re running the app out of your bedroom. And those can be very innovative apps. They can be very popular.

 

      And Section 230 means that the first time a bad piece of content is shared on that site that your whole investment and innovation is not at risk because somebody wants to bring a frivolous lawsuit. Even if you would eventually win that lawsuit, what Section 230 does is it lets you move past that litigation very quickly and get back to the innovation that you were trying to build.

 

Josh Divine:  This is Josh. One of the things that we haven’t really discussed is that there are a tremendous number of options for how you could reform Section 230. One is imposing a size-based standard. Neil and I have discussed this idea that like cases should be treated alike. I don’t think Facebook should be treated the same as the startup you’re describing in your bedroom. And you can have a Section 230 that is more favorable to smaller companies than it is to dominant players like Facebook and Google.

 

      I also want to point out that in our tort system, we talk about the costs and concerns of frivolous lawsuits, etc., and those are real issues. But there are economic constraints on the plaintiff’s side as well. It costs money to sue somebody, and you don’t typically do that if the person you want to sue has no pockets.

 

      We have newspapers, etc., who haven’t benefitted from 230 in their traditional business functions. They’re subject to the tort system, and they survive just fine. So there are serious tort problems, but I think it is also very easy to overstate the significance of those problems and to forget that we have a lot of flexibility in the ways we could reform Section 230 in the future.

 

Greg Walsh:  Okay, we’re nearly at the end of our time now. Unfortunately, we aren’t going to be able to get to all the questions, but we will try to fit in one more before we close. So here’s the last caller.

 

Brad Meisel:  Oh, hi. My name is Brad Meisel, and I just have a question about botnets which have been used to disseminate a lot of fake news, both for and against politicians in both parties as well as potentially businesses where a person can threaten to bombard you with negative Yelp reviews if you don’t pay a ransom.

 

      Considering that Section 230 immunizes platforms for anything that’s posted by a content creator, meaning a person or an entity, do you think Section 230 would apply the same way to content posted by a bot rather than by a person, corporation, partnership, or other legal entity, or do you think courts could look at it differently, especially given that a bot has an incredible capability to disseminate information on a massive scale in a way no one could have imagined in 1996?

 

Neil Chilson:  Well, this is Neil. I’ll just quickly say that Section 230 doesn’t protect the actor itself, so the bot, or the person who’s responsible for that bot posting content, is not in any way protected by Section 230. They are liable for the content that they post. What Section 230 does allow is for the platform to take down that content, even in a systematic way, without thereby becoming liable for other content on that website. And so I think Section 230 is a useful tool in the fight against spam on internet platforms.

 

Hon. Ronald Cass:  This is Ron. Unfortunately, in a lot of the conversations we’re having here, you have both a good side and a bad side. As Neil said, the internet provider is able to take down the content. On the other hand, this also enables the internet provider to take down content for other reasons, for other objections. And the question isn’t whether you would, in an ordinary case, have an ability to take it down that you don’t have now, but what the parameters are of the insulation that you have today against any liability for doing that.

 

Neil Chilson:  Well, I wouldn’t be on a Federalist Society call if I didn’t say that, generally, the default is that people are allowed to do what they want with the property that they own. And we should think about the free speech principles that are at stake here. Not only do platforms enable the free speech of individuals in many ways, as we’ve heard, including from some callers who have been able to reach big audiences quickly, but the platforms themselves also have speech rights.

 

      Now, we think at Stand Together that more speech is better and that the solution to bad speech is good speech. And so we encourage all these platforms to allow a diverse set of voices and enable that. But the Constitution does put limits on what government can do in telling companies what content they are or are not allowed to take down.

 

Josh Divine:  This is Josh. I ultimately agree with a lot of what Ron and Neil are saying. I think we’re just, especially Neil and I, applying things differently. So again, I agree with Neil’s general position that we should let the chips fall where they may: individuals should be responsible for their individual actions.

 

      But I think that if you make a bad website, if you create something like the speed filter knowing the extraordinary risk that it’s going to be abused, then that’s your individual bad act, and you ought to be held to account for it. But Section 230, the way it operates, gives these companies freedom from the responsibilities they ordinarily would have to abide by.

 

Greg Walsh:  On behalf of The Federalist Society, I want to thank our experts for the benefit of their valuable time and expertise today. We welcome listener feedback by email at [email protected]. Thank you all for joining us. We are adjourned.

 

[Music]

 

Dean Reuter:  Thank you for listening to this episode of Teleforum, a podcast of The Federalist Society’s Practice Groups. For more information about The Federalist Society, the practice groups, and to become a Federalist Society member, please visit our website at www.fedsoc.org.