Social media platforms have emerged as the new “town square” and a key forum for public debate, but some have questioned whether that debate is as open and robust as it should be. On the other hand, some worry that efforts to regulate social media platforms may themselves crimp debate. At the heart of the discussion is Section 230 of the Communications Decency Act. A panel of experts discussed what Section 230 permits and doesn’t permit—a question now before a number of courts, including the U.S. Supreme Court in Gonzalez v. Google.
- Ashkhen Kazaryan, Senior Fellow, Free Speech & Peace, Stand Together
- Randolph May, President, The Free State Foundation
- Joel Thayer, President, Digital Progress Institute
- Moderator: Boyd Garriott, Associate, Wiley Rein LLP
As always, the Federalist Society takes no position on particular legal or public policy issues; all expressions of opinion are those of the speaker.
Jack Derwin: Hello, and welcome to this Federalist Society virtual event. My name is Jack Derwin, and I’m Associate Director of the Practice Groups at The Federalist Society. Today, we’re excited to host a panel discussion titled “Section 230 Goes to Court: Gonzalez v. Google and the Future of the Electronic Town Square.”
Joining us for this discussion is an impressive panel of experts who bring a range of -- excuse me -- of views to the topic at hand. In the interest of time, we’ll keep intros brief at the outset here, but you can view our speakers’ full bios at fedsoc.org.
Our moderator today is Boyd Garriott, who is an Associate at Wiley Rein, where he litigates and provides regulatory advice for a wide variety of telecommunications and technology clients. Prior to Wiley, Boyd clerked on the United States District Court for the District of Columbia and attended Harvard Law School.
After the discussion between our panelists, we’ll go to audience Q&A if we have time remaining. So please enter any questions for our speakers into the Q&A function at the bottom right of your window. Finally, I’ll note that, as always, all expressions of opinion on today’s program are those of our guest speakers.
With that, I will pass it over to you, Boyd.
Boyd Garriott: Thanks so much, Jack, and yeah. I just want to reiterate. These panelists are a real all-star bunch—people who have thought a lot about Section 230 and Gonzalez v. Google. So I’m really excited to have a discussion with everyone today.
So we have with us today Ash Kazaryan. She is a Senior Fellow of Free Speech & Peace at Stand Together. She’s a tech policy expert and has previously worked at Meta and TechFreedom, focusing on Section 230 and a variety of other tech issues. Ash is regularly featured as an expert commentator in a variety of different media outlets, including CNBC, the BBC, and Politico.
We’ve also got with us Joel Thayer, the President of the Digital Progress Institute. Joel has previously worked on communications and technology issues as an associate at Phillips Lytle and before that at ACT | The App Association. Joel’s work has been featured in numerous publications, including The Wall Street Journal, Newsweek, and The Hill.
And last but not least, Randy May is the Founder and President of The Free State Foundation. He has been in the weeds on communications, administrative, and regulatory law issues in both think tanks and at major national law firms for over four decades. Randy has authored or edited eight books and published more than 200 articles and essays in leading national legal periodicals, including Legal Times and The National Law Journal.
Before I hand the reins over to our panel of experts, I just want to give a 10,000-foot overview of Gonzalez v. Google and Section 230 to just do some table setting for our discussion today.
So starting with Section 230, the origin of this law can really be traced back to 1995, when a New York state court in a case called Stratton Oakmont held that a website could be held liable as a publisher of defamatory statements posted by third parties on its online bulletin board. The next year, in 1996, Congress enacted the Communications Decency Act, seeking to promote free expression on the internet and to remove disincentives for platforms to block or filter harmful content.
Relevant to today’s discussion, Section 230(c)(1) says that providers of interactive computer services are not to be “treated as the publisher or speaker of any information provided by another information content provider.” Taking this language, the courts of appeals have, for many years, interpreted this provision quite broadly.
But in 2020, Justice Thomas—in a statement respecting a denial of cert—called for the Supreme Court to review Section 230 in a future case. Justice Thomas opined that the lower courts had afforded more immunity to platforms than the text of Section 230 could bear. It’s against this background that the Supreme Court decided in October to hear Gonzalez v. Google—a case which concerns what counts as being a “publisher or speaker” of information under Section 230(c)(1).
The facts of this case stem from a 2015 terrorist attack in Paris, in which ISIS adherents killed a young woman named Nohemi Gonzalez. Gonzalez’s family sued Google under the theory that its platform, YouTube, is secondarily liable for aiding and abetting ISIS by allegedly hosting some of that organization’s videos and using algorithms to recommend those videos to users.
The Ninth Circuit held that Section 230 barred these claims because YouTube’s recommendation algorithms are neutral tools that present content provided by third parties—placing those tools squarely within the grant of publisher immunity afforded by Section 230(c)(1). But Judges Berzon and Gould wrote separately to argue that these recommendation algorithms—by amplifying and directing content—went beyond merely publishing third-party content.
So now, the question for the Supreme Court is, “Who has the better of the argument?” and whether Section 230 immunizes platforms from liability when they use these recommendation algorithms.
So now, I’m going to hand it over to the panelists to talk about this, and they’ll give some opening statements. As they do, I will just remind everyone to feel free to use the Q&A function if you have questions that you’re interested in hearing our panelists discuss. So with that, we’ll go ahead and start with Ash.
Ashkhen Kazaryan: Thank you so much. I’m not going to do much of a statement. I want to provide a little more color, and then I think it would be best for everyone to jump into the discussion. There are different levels of how acquainted our audience is with Section 230, so I do want to add a few more touches on where it comes from, because I think we’re going to talk about the congressional history a little more later in the panel.
So that case you mentioned—also for some color—I’m sure many people have seen The Wolf of Wall Street. That was actually that firm, Stratton Oakmont, suing Prodigy over its website. Prodigy was hosting forums, and on some of those forums, people had alleged that Stratton Oakmont—the firm of Jordan Belfort, Leo DiCaprio’s character in The Wolf of Wall Street—might have been engaging in some not-so-legal activity. And Stratton Oakmont sued, and they won.
And what the court took into account was that Prodigy, as a website, was moderating. They were moderating. They were trying to create somewhat of a good environment for their users. And the other case—sorry, I need more coffee—the other case was CompuServe, and in CompuServe, it was the opposite. The platform was found not to be liable because the court said they didn’t know what was going on on the platform because they were not moderating at all.
Those two cases were what Christopher Cox—then a Republican representative from California—read on his way back to D.C. before meeting with Ron Wyden—a representative back then, now a Democratic senator from Oregon. And they agreed that this created a very wrong set of incentives: either horrible things being hosted on the internet under a very hands-off approach, or over-moderation on those 1995-era platforms.
And that’s when they wrote what became Section 230. It was a separate bill that was then, in the legislative process, merged with the Communications Decency Act and passed in 1996. I believe a year later, in Reno v. ACLU, the Supreme Court found the Communications Decency Act—which dealt with indecency online—to be unconstitutional, except for the part that is Section 230. And since then, it went pretty much untouched until SESTA/FOSTA in 2018, but that’s not what we’re talking about here.
And a lot of people do credit Section 230 with the creation of the internet as we know it. It allowed platforms not to be liable for third-party user-generated content, and I think it created a very diverse online ecosystem that keeps developing as we go. We had Myspace. We had AOL. Now, we have Google and Facebook and Instagram and Twitter, and TikTok has emerged onto the scene. And all of them are using Section 230 as their liability shield in a very litigious system—that is, the United States.
Now, let’s talk about Gonzalez v. Google. Aside from Reno v. ACLU, Section 230 hasn’t really been to the Supreme Court. And I would say it was a very surprising case when it was granted cert. Even the smartest legal scholars were not expecting it. There’s no circuit split so far on the cases holding that algorithmic recommendations are protected by Section 230. The circuit courts have said yes—the Second and the Ninth Circuits being the most recent ones, I believe.
The thing about the Ninth Circuit and the Gonzalez case is that there was a separate opinion that said, in effect, “If we were not bound by a previous precedent, we would have ruled otherwise.” And that was used by the petitioners as an argument that there is a circuit split, or there could have been a circuit split. A very stretched legal argument, I would say, but that’s not that important here. The cert was granted.
I would say the way the petition was written—clearly with Justice Thomas in mind—definitely corresponded to what Justice Thomas has said in his statements about Section 230 previously. And that might have been what triggered the interest.
Now, the thing about algorithmic recommendations is that basically everything we see on social media platforms is algorithmically recommended one way or another. You can argue even that a chronological feed is algorithmically recommended because Twitter has decided that “Here’s an option. You want to have a chronological feed? You press on it. We have created an algorithm that will line it up for you.” That’s also a recommendation.
And then you go from that to targeted recommendations. There’s a lot to be said about targeted recommendations and how they work. Every single platform uses a variety of them, and you wouldn’t know exactly—I don’t think even the platforms know—how many algorithms are at work at the same time creating a specific user experience.
But for the prime real estate—that is, the newsfeed or whatever version of a newsfeed the platform has—the platforms are looking at who you are as a user and what other users with similar profiles are interested in. That’s one. The second is, “What is a trending topic right now in general? What are issues that people are interested in?” And that’s how algorithmic recommendations are created.
Now, if the Court grants what petitioners are asking for—if the Court says, “Yes, Section 230 does not protect targeted algorithmic recommendations”—I don’t see a lot of the current social media platforms, and the way they operate, surviving—Google, for example, all on its own. That’s how broad that question is, and that’s why I started from a little far back. That’s how surprising it is that we have gotten here this fast.
I would also mention that there’s a variety of options that the Court can go with as they limit it. But as we are talking about Section 230, as we’re going to talk about (c)(1)—because I know Joel and Randy have a lot to say about it—it is important to remember what is at stake here, and it is the future of the internet as we know it. And I’m not trying to be dramatic.
But there’s a big difference between how the U.S. legal system operates and how things operate everywhere else in the world. And that’s why we do have Section 230, and also, that’s why we have the leading tech companies in the world. And with that, I’m going to pass it over to Joel.
Joel Thayer: All right. Well, first, thanks, Ash, and thank you to The Federalist Society for hosting this important discussion. I think whether you are on the side of Section 230 reform, like my friend Randy, or a Section 230 purist, like my friend Ash, we can all agree that this case will set the tone for broader conversations on internet regulation.
But today, I think it’s best to have a more focused discussion, especially one that articulates what this case is about and what it’s not. For example, this case does not address the issue of online censorship. That would be questions better suited under Section 230(c)(2), which is not being discussed in this case. But fear not, culture warriors. This case could fix serious issues with previous Court interpretations of Section 230.
Frankly, Court precedent on Section 230 has made it virtually impossible to hold these tech companies accountable when they harm consumers. The reason? Case law on this issue is divorced from the tenets of statutory construction.
Now, the formal question presented to the Court at bar is, “Under what circumstances does the defense—created by Section 230(c)(1)—apply to recommendations of third-party content?” Fundamentally, this case asks a relatively simple question: “What does the text of Section 230(c)(1) actually say and do?”
So let’s start with the text of the section at issue. Section 230(c)(1) says—and I quote—“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” You’ll notice there’s no mention of immunity or from what an interactive computer service would be immune. All the statute says is that we cannot treat interactive computer service providers or users—in this case, Google’s YouTube—as the publisher or speaker of a third-party post, such as a YouTube video. That is all.
Warped interpretations from courts—starting with Zeran v. AOL—have drastically moved away from the text of the statute to find that Section 230(c)(1) provides broad immunity from civil actions. Such civil actions include basic torts and liability under civil statutes. And yes, the courts have even construed Section 230 to overcome breach of contract claims—case in point, Murphy v. Twitter.
These immunities aren’t written in the statute. They are Court created. And I have yet to hear a textual argument otherwise. This case, I hope, can rectify that. Boyd touched on this a bit, but I think it’s important to take a deeper dive into the facts of this case.
This case comes to us from the Ninth Circuit. The original plaintiffs are family members of Nohemi Gonzalez, who was killed in a terrorist attack in Paris committed by affiliates of ISIS. The plaintiffs originally claimed that Google violated the Anti-Terrorism Act by knowingly providing material support to the group through its platform. The Ninth Circuit held that because Google was acting more in a publisher capacity, it was entitled to Section 230 immunity.
But frankly, after reading the Ninth Circuit’s opinion several times over, I must admit I was a little confused. The Ninth Circuit, in this case here, waffles on Google being a neutral platform and, at the same time, a sort of publisher—only to ignore all of that waffling to grant immunity to Google. Worse, there’s no conversation as to why the text supports providing Google with this level of immunity. Immunity, mind you, that would not be afforded to The Wall Street Journal had it written about these videos, or a movie theater that had played these videos for you, or even you if you recommended them to a friend.
Google gets immunity because, hey, they’re Google. This highlights the problems with how courts have applied Section 230. The courts follow a standard formula: “Because we’re not sure what tech companies do, if it’s a website or app and the case involves third-party content, we’ll just grant immunity.”
Chief Judge Katzmann of the Second Circuit admitted as much in his separate opinion critiquing the majority in Force v. Facebook, a case with a similar fact pattern and causes of action under the ATA.
Frankly, we need a better test to determine when Section 230(c)(1) applies. I argue that the text should be our guide. One option—advanced also by several senators—is that tech companies should only be protected from causes of action that target a speaker or publisher, such as defamation. Another option would be to shield companies from liability for hosting and displaying content but hold them responsible when they take actions beyond those of a publisher.
Yet another possibility—going back to the original sin of Zeran—would be to allow actions to proceed only under a distributor liability theory. Neither the text nor the structure of Section 230 suggests that the Court cannot apply a traditional distributor liability test to Google in this instance. In that case, the Court should remand the case for the parties to argue whether Google knew or should have known that ISIS was using its platform as a recruiting tool.
But in any case, the statute does not support the current reading of Section 230(c)(1) that shields tech companies from liability under practically all causes of action when third-party content is involved. That is just an absurd result, and it’s also detached from basic statutory construction.
But one last point. Courts need to stop being mystified whenever the word “algorithm” is invoked. An algorithm is just a way of making a decision—nothing more. Humans program those algorithms. They aren’t handed down from on high. And given the revelations of the recent Twitter and Facebook files, it’s clear that algorithms aren’t the only thing running the show at these tech companies. Manual intervention is a regular practice when it comes to content moderation.
And so, whether it’s this case or another, I hope courts will start treating tech companies like everyone else. If we are truly all equal under the law and our courts are in fact the great levelers, then justice demands no less. Thank you, everyone, for your time, and I’m interested in the further discussion.
Randolph J. May: Okay. Boyd, I think it’s over to me now. First of all, thanks to The Federalist Society for hosting this; Boyd, to you for moderating; and to my fellow panelists, Ash and Joel. You have actually, I think, set this up nicely and provided a good base for an ongoing discussion.
Before I dive in, I have to say, when Boyd introduced me and said that I’ve been in the weeds of telecom for over four decades—it has been that long. But I feel like there are still a lot of weeds out there, so I’m not sure how much I’ve accomplished at pulling some of them out.
And then Ash said something toward the end of her presentation that made me think for a moment—something to the effect that, if Section 230 is altered in any meaningful way, that could be the end of the internet as we know it.
And that called to mind what we heard back a few years ago when we were having that discussion about that other favorite topic, net neutrality—when we thought it might be the end of the internet as we know it if net neutrality were repealed. But I guess we’re still using the internet today, so that didn’t happen.
I want to take this discussion in a little bit of a different direction, because part of this program—as we conceived it and billed it—is to think about what happens if, regardless of what the Court does, Congress decides to reform Section 230—conceivably, even repeal it. President Biden has said he would like to repeal Section 230, and his predecessor did as well.
So I think we can think about what would be a sound replacement, a proper replacement, for Section 230 if it were modified in some way—or conceivably even repealed. And to put it in blunt terms, what should be the extent of immunity that’s granted to the platforms in that regard? When I think about it—when we all think about it—one question is whether there’s a difference we should acknowledge between an absolute bar to liability on the one hand and a broad but not absolute bar on the other, because what we have now—I think we could all agree, at least as I understand the cases—is an absolute bar to liability, or at least a near-absolute one.
I think another thing that we ought to think about that’s pretty fundamental but it gets lost sometimes and you have to, I think, have it in mind is whether there’s a difference between exercising editorial control over content on the one hand and engaging in conduct relating to the distribution of content on the other hand, and whether you can acknowledge that difference, and, if so, how you would treat those differently in terms of establishing liability.
To put a point on it, in my mind, if you’re talking about editorial control of content, I understand the First Amendment. I’ve been a strong First Amendment defender, and I understand the interest—First Amendment interest—involved when you talk about what I consider to be editorial control over content. It doesn’t always trump everything. But under the First Amendment, it’s pretty darn important, right?
So when I think about fashioning what I would consider to be a “proper regime” to deal with this question we’re discussing, I like to think about a law and economics approach—we could call it the Chicago law and economics approach—when you look at cost benefit, the various interests, and that type of way.
And when I do that, it leads me to think that you could fashion a regime which would properly take into account the interest of the platforms in being preserved. I don’t want them to go away. I recognize the value that they contribute to society but also the harms that can result from their existence, over which they don’t always have, of course, control.
And it leads me to think that you could have a regime that would rely on a reasonable duty of care, that would be workable, and that would allow platforms to perform the function that, I think, they really ought to be able to perform. And if you do that—if you’re fashioning a regime that’s based on a reasonable duty of care—and when I say that, this could be Congress fashioning the regime, right? Congress could come up with a new statute.
In the absence of Congress doing it, I think our common law would do it. Joel referred to the equality of how we treat entities and ourselves under our civil justice system. And we have to think about whether this problem we’re dealing with is so unique that we need to take it out of the ordinary principles of our civil justice system, right?
But if you’re doing it that way, I think that would allow you to consider things, like the scale of the platform, the number of users, the size of the user base of a platform, the cost of implementing procedures and practices that would enable the platform to comply with its terms of service. Presumably, its terms of service wouldn’t be to host content that explicitly urges the commission of terrorist acts. I assume that would probably contravene the terms of service. So you would take into account those types of things.
And I want to acknowledge, really, here because it’s worth reading, and I want to also give credit. Geoff Manne and Kristian Stout and Ben Sperry put out a paper about a year and a half ago that I think is well worth reading in terms of a very serious analysis of the pros and cons of the type of regime I’m talking about. That paper is called Who Moderates the Moderators? and you could find it at the International Center for Law & Economics, and I give them credit for that.
But basically, they say—and then I’ll end with this so we can continue the discussion—but in that paper, they say, “Counting the cost of defending meritorious lawsuits as an avoidable and unfortunate expense is tantamount to wishing away our civil justice system.” And so, I think, in order to get to what I would consider to be a proper resolution of this issue we’re talking about—that balances the need for the robust discussion that we want to have as in our public square with the need not to escape all liability for what I consider to be conduct that they ought to be responsible for—we don’t want to completely forget about the availability of our civil justice system to deal with this issue.
Boyd, I think with that, I’ll stop so we can keep moving along.
Boyd Garriott: Thanks so much, Randy, and to Ash and Joel for really interesting thoughts. Before we jump into Q&A, I just wanted to give everyone a chance to go around the horn one more time and give a brief statement or thoughts reacting to what your fellow panelists have said. So, I guess, we’ll start again with Ash.
Ashkhen Kazaryan: I tried to write down all the things I wanted to react to, and then I gave up because there were too many. But I wanted to say a few things. Joel said that platforms are not like The Wall Street Journal. But if you think about it, the way platforms recommend content is like the way The Wall Street Journal decides to place articles—“highlight this one; put this one on the first page; put this one in the back.”
Also, Joel, you kept talking about the statutory text, and I think that’s a great thing to talk about. Both of the authors of Section 230—Ron Wyden and Chris Cox—have filed an amicus brief in this case. And internet policy scholars—including Professor Eugene Volokh, among others—have also filed in this case. And I count both the people who wrote the law—and who are very much in great health and mind right now—and the people who have studied this since before I was even born as my guiding lights on this question.
And what they both seem to say is that, actually, yes, they did mean to cover recommendations. The first argument comes from Section 230 itself: the definitions in subsection (f) contemplate that interactive computer services would pick and choose content—amplify, organize, filter, screen, and reorganize third-party content. That pretty much sounds to me like algorithms.
And the other thing I think we need to really step back and think about is that when the language “publisher or speaker” was put into the statute, it wasn’t in any way meant to carve out distributor liability as something different. As for distributor liability basics, I think a good example would be the Smith v. California case. The book was called Sweeter than Life, and it was a bookstore that was selling it. It was about, I believe, a lesbian businesswoman. And California went after Smith—and even put him in jail for a little bit—for selling that book.
The case went up to the Supreme Court, and it found him not liable because he didn’t know the book was even there—he didn’t know what was in the book. And that’s the kind of secondary, distributor-of-information liability standard that is often applied: from the moment you have knowledge, you are liable if that content is still up. And I think, if that were the standard here, again, these systems would crumble.
From the moment I email YouTube at some customer service address and say, “Hey, actually, you have content that’s in some way or form illegal or harmful or objectionable”—whatever it is—they’re supposed to take it down. The difference between the internet and any other kind of forum we have had before is its scale. And it is that scale that makes it great, and it is that scale that makes it hard to manage.
But, once again, the internet policy scholars’ brief—people who have taught me and have taught many generations of First Amendment scholars—does argue that the statutory language takes the term “publisher or speaker” from the common law and puts it in there to distinguish it from distributor liability—that’s why it’s there.
And that is why I was very surprised that he went down that route—because I would say Justice Thomas is more of a textualist. But we do have the congressional history and all this information, and the people who wrote Section 230 saying this over and over. And I will tweet links to these because I don’t want to recite them—that’s not a good use of any of our time. But I would encourage everyone who is interested to read them.
And there was another thing—oh. Joel said that courts treat algorithms as mystified creatures, and I would agree to some degree. At the same time, in the decade that I’ve worked in this space, I’ve seen a lot of misunderstanding about what algorithms are and what they do. And they also overlap a little bit with questions of privacy and how we regulate privacy, because a lot of what algorithms do is based on data about the users themselves.
But, at the same time—and I didn’t say this at the outset—I work now at Stand Together, but before that, I was at a tech company. I do not represent any tech company right now, nor does my experience speak for one. But there is a myriad of algorithms constantly at play that create all of this. And, on that, I’m going to stop talking and hand it over to Joel.
Joel Thayer: Yeah. I only had a few comments, really. For one, I’m in agreement with Randy on a lot of the points, so there’s not really much to cover there. But there were a couple of points that Ash brought up that I thought were interesting.
One is the notion that there was no circuit split and, therefore, no problem. She didn’t say exactly that; I’m paraphrasing. But for one, you don’t need a circuit split to get Supreme Court review. The Supreme Court has a lot of discretion to take whatever case it wants. In fact, it’s kind of their thing to not take cases.
The other thing I would say is that I’m glad Ash and I agree on The Wall Street Journal being somewhat akin to what Google does in certain instances, which is the point I was making—I’m sorry if that wasn’t clear. The point I was making was that Google, under the prevailing interpretation of Section 230(c)(1), gets afforded these so-called immunities—a lot of which aren’t actually indicated in the statute at all. And even the legislative history is not as clear-cut as Ash represented.
What I’m trying to articulate here is: why is Google so special that, when it does something akin to what a newspaper does, it gets certain immunity? If it’s in the statute—okay, I’m okay with that, if we have been able to articulate exactly that: “Hey, they are publishers.” Okay.
And I think even under the legislative history, there’s nothing to suggest that these companies should get broad immunity from every single civil statute or every civil action. I think a narrower reading probably makes more sense given that Section 230—and this provision in particular—was essentially written in response to Stratton Oakmont, which was a defamation case.
So clearly, Congress was reacting to those types of civil liabilities. It’s not even remotely clear to me, reading the legislative history throughout, that anything broader was intended. It wasn’t until Zeran that we got this absurd result where you get broad immunity for almost everything. And so I think that’s probably where I see the most conflict between Ash and me.
Also, the one thing I’ll say to tie all this up is that I was waiting for the First Amendment to pop up, and it did. And I don’t see the First Amendment as a relevant part of this conversation. For one, it’s not an issue in this case. And secondly, I’m not aware of any case law that says that the First Amendment guarantees any publisher or provider a statutory immunity from a civil suit.
So I’m not entirely sure where that point was going. But those were the only major things that I would like to respond to. But yeah. For better or worse, I’ll send it over to Randy.
Randolph J. May: Okay. Well, I hope it’s for better, but thanks, Joel. A couple things. There’s been a lot of discussion about legislative history here—and even what was said on a plane ride, I think, maybe back from California.
But I think one thing we can be sure of is that this present Court is probably not going to rely on the legislative history; it’s really going to look at the text of the statute. I’m not saying that the legislative history is not interesting or even important to some people, but it’s probably not going to be important here.
One thing we haven’t mentioned—just for those in the audience who are going to delve further or want to know more—is that the Justice Department filed a brief in the case as well, the government’s brief. And, if I understood it correctly—maybe someone will correct me—I believe that their position is that there is perhaps a distinction.
They ultimately recommended that the case be sent back—I think, for further proceedings below—but that there’s a distinction between the recommendations that took place in this case and decisions about the content being placed on the platform, and that the algorithms might not be covered by the immunity.
I think The New York Times was brought up, and then Joel mentioned the First Amendment. And I agree. It doesn’t, on its face, appear to be really an issue in this case. It’s an issue of text—interpreting the statute.
But if we go back to thinking about when Section 230 was put in the statute, I don’t remember at all a serious argument being made—that I can recall, and I was involved back at that time—that the First Amendment compelled Congress to adopt Section 230 as a matter of First Amendment jurisprudence.
And, by the same token, I think, sometimes, it’s argued that any tampering at all with 230 to reduce in any way the extent of the immunity would constitute a First Amendment violation, and I don’t think that’s true either. New York Times v. Sullivan, though, I think is somewhat relevant and instructive. That’s a newspaper case, and there was no statute, as Joel pointed out.
But nevertheless, there is broad protection for certain types of publications—for defamation that takes place—under different standards that have been developed under the law. And it happens that, with the First Amendment underpinning, The New York Times—for certain types of defamation claims—gets pretty broad immunity from liability, and that’s so for other publishers as well. But none of that took place because Congress thought that there had to be a Section 230 type of statute.
And then finally, I would just say, of course, Ash is right that this is a different scale. It’s a different medium, and I appreciate that. I understand. So it’s not equivalent to other types of mediums precisely. But again, it doesn’t mean that the civil justice system can’t adapt and can’t end up in a place that might be more appropriate in terms of balancing the interests at stake and maintaining a robust discussion in the public square but also protecting the public from certain types of conduct that probably all of us think are pretty egregious types of conduct—at least some of them. Thank you, Boyd.
Boyd Garriott: All right. Thanks, Randy, and thanks Ash and Joel. So we’ll now go into questions. I’ll just remind everyone. Feel free to use the Q&A function and ask questions. I’ll be monitoring those.
But, I guess, I’ll start things off. I have a question for Joel. So, Joel, is there something about recommendation algorithms that’s different than traditional tools employed by publishers that allow users to find content? So I’m thinking of a table of contents or even a website directory. What makes recommendation algorithms different from those? Or is it different? Would you say that those are covered by 230?
Joel Thayer: No, no problem. I still think there’s an efficiency difference. I mean, there’s no doubt that algorithmic mechanisms provide you information faster. I think the issue I see here is that, whenever courts talk about algorithms, they treat them as almost a monolith—as if there’s no interaction between the inputs from humans on the algorithm, or the makeup of the algorithm, and the result.
So I pointed to the Twitter Files as an example of that. For a long while, both Facebook and Twitter said, “Well, we’re not censoring X or doing Z because these are all our algorithms. It’s not really us doing it.” But it turns out that there was more of a human element in the curation process.
So my point really isn’t to further distinguish. It’s just to say that I don’t think simply saying, “Oh, an algorithm did it,” is enough, especially if you’re looking at statutes, like the ATA, which doesn’t really require a publishing mandate in order for it to be in effect. This is where it goes back to what Justice Thomas was saying in -- I’m forgetting. I forget the case.
Ashkhen Kazaryan: Malwarebytes?
Joel Thayer: Malwarebytes, thank you. His concurrence in Malwarebytes, where his big issue with Section 230(c)(1) was that, given the relevant text, there doesn’t seem to be any real bar to looking at this through distributor liability. So even if we say, “Algorithms are different, and they are more of a neutral-platform sort of mechanism,” that doesn’t mean that’s where the analysis stops. Now let’s look at distributor liability and see how that works.
But again, throughout this entire conversation—and I think Randy did a very good job of this—I haven’t seen a textual argument, either here or anywhere else, that says there should be immunity for these things just because an algorithm exists.
Randolph J. May: Boyd? I’m sorry. Boyd, I think maybe another way of thinking about it—or at least a question that I have in my mind—is this: here, we’re talking specifically about videos that, I think, everyone agrees were urging others to commit terrorist acts—something that, presumably, all of us here, and hopefully the audience, would think is a bad thing.
And if that’s the case—and let’s also assume that the algorithms were promoting those videos or making them more available than they otherwise would be under some other type of algorithm, or no algorithm at all—and assuming we agree that videos promoting terrorist acts are not a good thing and that even Google would not necessarily want to see that happen, wouldn’t we want to find out more about how the algorithms work and whether or not it would be reasonable for Google to take some actions so that that doesn’t happen?
And maybe those actions would be too costly in some way that I wouldn’t necessarily understand. But maybe they wouldn’t be, and that would be the type of analysis that I would like -- or the type of consideration that I think would be relevant under the type of regime that I would like to see in place.
Ashkhen Kazaryan: I would like to tag onto two things Joel and Randy said. Joel, you talked about the Anti-Terrorism Act, and I think an important piece of this that we didn’t mention is the Taamneh case. There’s actually a second case in which the Supreme Court also granted cert, Twitter v. Taamneh, and in it, the Anti-Terrorism Act is the statute in question.
And the question in front of the Court is whether Twitter aided and abetted terrorism by hosting terrorist content that led to the 2017 Istanbul bombings; the family of the victim there is arguing that it did.
Now, Randy, you were talking about terrorist videos. And the thing I wanted to mention is that there’s no direct link in either of these two cases between the videos and the terrorists and the horrible, horrific things that happened, right? It’s just very high level that the videos exist. So that’s, I think, important to mention in these cases.
Another interesting bit that I think viewers would find fascinating is that Justice Kagan was Solicitor General during the Force v. Facebook case that you, Joel, mentioned. And so she might have—I mean, we’re all speculating—but maybe she was one of the four votes that got one of these two cases granted cert. She might have an interest and want to address this issue.
But the thing about the platforms and horrible illegal content is that none of the platforms want to host it. None of the platforms want to be even associated with it—even the platforms that are more hands-off when it comes to their moderation, like Twitter. And I think, when these cases took place, which was almost 10 years ago, they just had less developed systems of finding and addressing the content. And with every year, they get better and better.
And in Europe, the way free speech works is different. They’re also not as litigious as the United States. But they have a very big chilling of free speech because platforms over-moderate content. Even reporting from regions like Syria gets taken down. And there’s huge over-moderation of the Israeli-Palestinian conflict because of, again, the regulations that are in place. And, obviously, Europe doesn’t have anywhere close to the First Amendment protections that we have here. But that’s important to remember.
But, at the same time, if the State Department here designates a group as a terrorist organization, name a platform that doesn’t take their content down when they find it.
Boyd Garriott: Thank you all. Just I know we’re approaching the top of the hour, so I’m just going to move on and ask a question from the audience.
Joel Thayer: Boyd, I’m sorry. I really just want to respond to a couple of things because I think that this might resolve some of the issues here. So on the Twitter case—on the ATA case—that question has to do with whether or not the ATA is a criminal statute, which would allow the case to be heard on the merits.
I think those things are important mainly because Section 230 has been read so broadly that we now have to find interesting, innovative ways just to hear the cases on the merits. Now, whether or not there’s a causal link between the terrorist activity and what actually happened is, at this point, irrelevant.
The question is, “Can we hear the case on the merits?”—not whether Section 230 actually provides a bar to those claims. That’s what this case is about. So instead of getting bogged down in whether algorithms are good, bad, or ugly, let’s first get the case to the merits and establish what happened, and then we can determine whether or not Google was liable or should have done something. Sorry, that’s all I’d say.
Boyd Garriott: Thanks, Joel. So this next question is for you, Ash, and it comes from the audience. Your statutory argument seems to depend on legislative history and legislative intent. Do you have an argument that’s independent of legislative history—just the text—for why recommendation algorithms should be considered publishing?
Ashkhen Kazaryan: So yes. The whole point is that it shouldn’t matter whether it’s a publisher or a speaker. That’s the text. They took it from common law and put it in Section 230 to avoid exactly this—it’s a fallacy that keeps reappearing. And there are a few fallacies around Section 230 that keep coming up. But the whole point is that it doesn’t matter if it’s a publisher. It doesn’t matter, and that’s why it was written the way it was.
And I guess I’m citing legislative history and the authors of Section 230 just to prove my point, because they’re saying the same thing: it shouldn’t matter. And that’s the difference also with The Wall Street Journal. It shouldn’t matter whether platforms are a publisher, and there is a big difference there. But I’ll stop there. There’s also another question for me, but I’ll let you moderate the questions.
Boyd Garriott: Great. Thanks. So, Randy, I’ve got a question for you. So on your reasonable duty of care kind of thinking and your law and economics thinking about how Section 230 should work, is that something that you think Congress would have to do, or is there a bona fide reading of the statute that could support that interpretation?
Randolph J. May: Yeah. No, I think probably Congress would have to do it. Or, as I said, I guess if the statute were just repealed, I think, under the common law, it could evolve. Justice Thomas in the Malwarebytes case, I think, at the end of that opinion, suggested even state liability laws could evolve to do that. But the answer is I don’t think it’s going to be done under the current statute as it stands right now.
Boyd Garriott: Got it. Thanks, Randy. And since we’re close to the top of the hour here, I guess I’ll just ask one last question for the group just to give some closing thoughts here. How will the impact -- or how will Gonzalez v. Google impact both Section 230 and the future of the internet? Yeah, I’m just interested to hear your thoughts on that. We’ll start with you, Ash.
Ashkhen Kazaryan: I’ll try to be very short. I think that it absolutely will. No matter how the Court rules, it is going to impact the way legislatures think about this. I also think this conversation is shifting the Overton window on how we think about social media platforms, and we already see proposed regulation and legislation at the state and federal level that addresses algorithms in many different ways and forms.
So I think this is just the first harbinger. And there is the Taamneh case. I think we should get a prize because we didn’t mention Texas or Florida once in this panel. But there are two cases that are probably going to be heard sometime later next year—but definitely are teed up—that are going to actually talk about (c)(2), which we also didn’t mention. So “yay” to us. And they’re going to reshape speech as we know it. And I’m going to pass it on to Joel.
Joel Thayer: Yeah. I’ll close up really quick. I think that, again, textualism will play a big role here. And from a textualist perspective, I think the courts can adopt the late Justice Scalia’s viewpoint on reading a text: “Congress doesn’t hide elephants in mouseholes.” So if there is that broad immunity, you have to demonstrate it in the text. You can’t just read it in.
And in terms of policy moving forward, if it goes in favor of Gonzalez, then I think you’re going to see a greater impetus for a lot of these tech companies to come to Congress and try to remedy it. But, at the very least, I think we’re going to have a broader conversation, as I said at the top of my comments.
I truly think that, no matter how this goes, there’s always another opportunity. With Section 230, there’s always going to be another bite at the apple. But if it goes the way of Gonzalez, we’re probably going to see more action on the Hill and probably get more of a compromise because, from the tech companies’ perspective, it would be bad case law. But I’ll pass it on to Randy.
Randolph J. May: Well, I’ll go out on a limb and make a prediction that I think, in some way, that this is not going to be a clean victory for Google and the other platforms and the amici that are arguing for it. I think it’ll be sent back, at least for some tweaking. I don’t think the Court -- Joel, again, referred to broad immunity.
But, I mean, my understanding of this case law is it’s almost, really, what I would call “absolute immunity.” And you could end up in a place where there’s broad immunity that’s not absolute, which might be a place to end up in the future. One way or another, I think the conversation, obviously, is going to continue, as Ash pointed out. And I’ve been really, I hope, studious in restraining myself from talking about some of the broader issues because of my own predisposition.
I’m concerned about too much censorship of what I would consider consequential issues, but we haven’t -- that’s not this discussion, as Joel reminded me at the beginning. But what I would say is—let’s be honest—this statute was adopted over a quarter century ago. And I said I don’t think the First Amendment compelled it, but I could certainly be sympathetic to the notion that it was a good thing at that time, that it was an appropriate thing, and that maybe it was necessary for the internet to get where it is today as we know it, at some point.
But that doesn’t necessarily mean that, in light of the environment we have now, the way the platforms have emerged, and some of the differences between the really large ones—I don’t even want to say big tech; I’ll just say the really large ones—and all the hundreds of thousands of other ones, we shouldn’t revisit Section 230 now, aside from what the Court does, and probably determine that there’s a way of fashioning a regime that doesn’t provide the absolute immunity, or the near-absolute immunity, that Section 230 presently does. So that’s my closing thought.
Boyd Garriott: Thank you so much, Randy, Joel, and Ash, all of you. I really appreciated hearing your thoughts today. Great discussion. And I guess I’ll pass it back to Jack now to close us out.
Jack Derwin: Thanks so much, Boyd. A sign of a great moderator, of course, is ending right on the hour. And thank you all for joining us today, and thank you to our audience for tuning into today’s event. You can check out our website at fedsoc.org or follow us on all the major social media platforms at FedSoc. Stay up to date. Thank you very much.