On February 21, 2023, the U.S. Supreme Court will hear oral argument in Gonzalez v. Google.
After U.S. citizen Nohemi Gonzalez was killed in a terrorist attack in Paris, France, in 2015, Gonzalez's father filed an action against Google, Twitter, and Facebook. Mr. Gonzalez claimed that Google aided and abetted international terrorism by allowing ISIS to use YouTube for recruiting and promulgating its message. At issue is the platform's use of algorithms that suggest additional content based on users' viewing history. Additionally, Gonzalez claims the tech companies failed to take meaningful action to counteract ISIS's efforts on their platforms.
The district court granted Google's motion to dismiss the claim based on Section 230(c)(1) of the Communications Decency Act, and the U.S. Court of Appeals for the Ninth Circuit affirmed. The question now facing the Supreme Court is whether Section 230 immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limits their liability when they engage in traditional editorial functions (such as deciding whether to display or withdraw content) with regard to such information.
Join us as Erik Jaffe breaks down Tuesday’s oral argument.
Erik S. Jaffe, Partner, Schaerr | Jaffe LLP
As always, the Federalist Society takes no position on particular legal or public policy issues; all expressions of opinion are those of the speaker.
Sam Fendler: Hello, and welcome to this Federalist Society virtual event. My name is Sam Fendler, and I'm an Assistant Director of Practice Groups with The Federalist Society. Today, we're excited to host "Courthouse Steps Oral Argument: Gonzalez v. Google" featuring Erik Jaffe.
Erik is a partner at Schaerr Jaffe LLP. Erik has extensive experience in appeals and has been involved in over 120 Supreme Court matters from filing cert petitions and amicus briefs to representing parties on the merits. Before starting his law practice, Erik clerked for Supreme Court Justice Clarence Thomas. If you'd like to learn more about Erik, his full bio is available on our website, fedsoc.org.
After Erik gives his opening remarks, we will turn to you, the audience, for questions. If you have a question, please enter it into the Q&A function at the bottom of your Zoom window, and we'll do our best to answer as many questions as we can.
Finally, I'll note that, as always, all expressions of opinion today are those of our guest speaker, not The Federalist Society.
Erik, thank you very much for joining us today, sir. And the floor is yours.
Erik Jaffe: Thank you very much. And thank you for having me. Always happy to come chat and talk about new cases that are before the Court, particularly ones that I follow and I'm interested in.
It's interesting. This case will be interesting. So for those of you joining up, I assume you all are roughly familiar with what the issues in the case are. But at the end of the day, the primary question is whether or not making targeted recommendations—via algorithm or otherwise—of content somehow loses immunity under Section 230 of the Communications Decency Act. And Section 230 of the relevant part provides that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." So the question is whether making a recommendation of somebody else's—in this case—video somehow loses you immunity under 230 for a claim that says, "Your recommendations meant people joined ISIS and killed my relatives."
And so, does recommending third-party content somehow trigger liability that is different than if you had republished or spoke the -- just posted the third-party content without a recommendation? And one might ask, "Why is this something under the Free Speech & Election Law Practice Group?" because there's no First Amendment issue squarely presented. And that is true. This is a statutory question about the scope of statutory protections. It doesn't ask whether the Constitution covers this. It doesn't ask what would happen if Section 230 immunity didn't exist. But I think there are a whole bunch of lurking free-speech issues underneath these questions that make it interesting. And I'll get to some of those at the back end. But let me sort of summarize what happened at the argument today since this is, after all, a “Courthouse Steps,” so let's talk about what happened at the courthouse.
So the argument order at the courthouse was: first, counsel for Mr. Gonzalez, who argued that 230 does not apply to a suit alleging material support for ISIS by having put their videos in the "Up Next" category as a thumbnail. He was followed by counsel for the United States, who argued that he didn't think 230 necessarily applied here, though he also thought there was no decent claim—at the end of the day, he was somewhere in the middle but generally hostile to the notion that 230 would necessarily cover this. And then, finally, counsel for Google, who, of course, argued that 230 does cover this.
And the questioning started out for counsel for Gonzalez, Mr. Schnapper, with Justice Thomas, who I think many had been concerned was skeptical of Section 230 and might look for ways to narrow it. And, in fact, his questions suggested almost the opposite—that he was quite skeptical of the claims that merely popping a video up in the "Up Next" category based on an algorithm that said, "You seem to like ISIS videos. Here's an ISIS video," or "You searched for an ISIS video, here's another one --" he seemed extremely skeptical that that could lose 230 protections for—in this case—YouTube, whose parent is Google.
You know, it was just interesting. His questioning focused on whether or not the algorithm was neutral as to the content—i.e., whether it treated cat videos in much the same way that it treated ISIS videos, or cooking videos, or videos about Turkish kebabs and Turkish rice pilaf (that, I think, was his example). And I think that's a fair way to look at it. He sort of says, "Look, if the answer is, 'You seem to like cat videos or rice pilaf videos. Here's another one,' why is it different when it's ISIS? You're not recommending it; you're just acknowledging what the user of YouTube seems to want." And that seemed to be a recurring theme of his questions: he just didn't see how giving people stuff that was responsive to their past behavior or their requests could somehow make you liable for the fact that they were watching these things. And so that was quite encouraging from my perspective.
And, fair warning, I had a brief in this case in support of Google. I think Section 230 would protect the particular behavior in this case. My firm had another brief in the case on behalf of a group called Protect the First and on behalf of former Senator Santorum, again arguing that Section 230 applied and generally would protect stuff like this—again, a pro-speech kind of approach. But that being said, I was quite pleased to see Justice Thomas surprise some observers by coming out, seemingly, in favor of a reasonably robust application of Section 230.
Other justices -- I think on balance, all the justices, in my view, came out fairly pro-230 in this instance and were more skeptical of Gonzalez's arguments than they were of Google's with a few notable exceptions. In no particular order other than as I wrote down the questions and my observations—so not in order of seniority. I thought the next set of interesting questions came from Justice Amy Coney Barrett, who I think was a little more in the middle on what she thought the scope of 230 was. At least, certainly at the beginning, she seemed a little more sympathetic to Gonzalez's arguments and was trying to find a line between "Are you simply displaying it through some sort of ordinary criteria for explaining things? You have to make a choice," or "Are you encouraging some particular content?"
At the end of the day, you know, it struck me that she was perhaps driving towards this notion of endorsement as opposed to display. So it's one thing to say, "I need to pop something up top, so I'm going to use a bunch of criteria. Does it meet your previous searches? Is it something I think you're interested in? Is it something people like you seem to be interested in? Is it the most popular thing on the web these days, so I'm going to pop that up to the top of your scroll of possible videos?" versus "Am I sending it to you because I think I agree with it?"
And I thought, at the end of the day, after a lot of questions from her, there were some hints that that may be the line she's drawing. And that's not a crazy line at all. It's actually a line that, to some degree, I think counsel for Google—and even my amicus brief—adopted, which said, "You've got to ask: are you being held liable for your content or someone else's content?" And if the answer is, "Yeah, ISIS made the video, but I said, 'Hey, this is a great video, and they got it exactly right,'" then I've sort of adopted their content as my own, and I'm being held liable for my own speech and my own information content.
And I think she might have been pushing a little bit on that line but didn't quite get there. But again, I thought she was skeptical. And one of the indicia of her skepticism about this—or her, at least, uncertainty about how to properly resolve the case—was she kept asking, "Well, what happens if, tomorrow, I decide there's no cause of action here?"
So just for those keeping track, tomorrow there's another case involving similar issues involving Twitter where the question is, "Does this anti-support for terrorists statute --" I forget what it's called. JASTA is the acronym, but I forget the exact title. But there's a statute under which this lawsuit was brought that says you're materially supporting ISIS. And the question is, "Does hosting their videos constitute aiding and abetting ISIS?"
And so, Justice Barrett said, "Well, what if I decide that the aiding and abetting doesn't reach that far? Do I need to reach the 230 issue, or can I just dismiss it as improvidently granted or remand it with instructions to kick the case out because there's no cause of action regardless of what Section 230 says?" And to me, that signaled that she finds the 230 issue sufficiently difficult or problematic as to want to find another way to kick the case out. She kept asking the attorneys this question: "Can I throw it away if I decide tomorrow's case against the plaintiffs?" And pretty much no one said yes; the answer was uniformly no. Invariably, they would say something to the effect of, "Well, you've got to give us a chance to replead," or "There wasn't a motion to dismiss, but you need to decide the 230 question first." So I don't think that's going to happen. I don't think they're going to kick it out because of how they decide tomorrow's case, but it certainly was an indication that she's on the fence on this stuff.
The next justice of interest—or at least in order of how I wrote things down—was Justice Sotomayor, who I think, once again, was quite skeptical of the attempt to restrict Section 230 to not cover this kind of thing. And again, she was especially concerned with the interaction between the immunity and the liability aspects, whether aiding and abetting could be found in what was otherwise protected under 230, but ultimately, separating out the, "Is there a cause of action?" versus "Is there immunity?" I think she, at the end of the day, was very skeptical that any sort of recommendation can suddenly make you liable for the content being recommended, regardless of the basis of your recommendation.
So if the basis of your recommendation was, "This seems to be the most popular answer in the world, so I'm going to give it to you first. And that's my recommendation," or "That's my prioritization," she seemed to be skeptical that that somehow meant you had done something worthy of losing immunity and asked a lot of questions to that effect, again, over and over, sort of pressing upon counsel for Gonzalez in a way that I thought suggested she wasn't very sympathetic to counsel's arguments.
The next one was the Chief Justice, again, also skeptical. Lots of questions along the lines of "Every website seems to work this way. Are you saying that every time we put something first that's a recommendation?" He also, I thought quite interestingly, worried about the economic consequences of this: if these sorts of organizational principles—"We're going to feed you those things that seem most relevant to you"—if we suddenly are going to hold people liable for those organizing choices, that would apply to everyone everywhere, every search engine, every website, every moderator, and would have massive economic effects and would be massively disruptive to the internet. And he felt like the Court should not be the one choosing to have such a disruptive effect—that that was a choice Congress had to make, balancing disruption versus greater or lesser protection, etc.
So, again, quite encouraging. Very, very in keeping, by the way, with Chief Justice Roberts's approach to the world, which is don't blow up the system if there's another way out. And here, we have a pretty consistent set of decisions by lower courts that say, "This is protected," and if Congress doesn't like that, Congress can change it. And they seem to be on the ball about noticing this, so let them do it. Don't have the Court step in and make such a massive and disruptive change. And I thought that was quite in keeping with his general modus operandi of "Don't have the Court become the agent of disruption if it doesn't have to." Sometimes it has to; sometimes it doesn't. But here, you could see the more natural conservative tendencies showing through.
Justice Kagan, similarly, seemed a little skeptical, though I thought she explored some interesting questions about "What is the nature of the algorithm? What is the nature of the recommendation? Is the recommendation one that simply says, 'You seem to like cats. Here's more cats. You seem to like ISIS. Here's more ISIS,' or was it, 'Golly, gee. We really like ISIS, and we think you should watch ISIS because we like ISIS'?"—that is, a viewpoint-driven or content-driven recommendation that has an implied message. The message being, "You should watch more ISIS because ISIS is good," versus "You should watch more ISIS because you seem to like ISIS." One is one's own speech. The other is merely an editing function, or at best, a statement that says, "We want you to watch more videos on our site. We don't care what they are, so we're just going to show you videos that you seem to like." So again, I thought she was pretty good.
Justice Kavanaugh, again, extremely cautious; again, in the line of Chief Justice Roberts. Again, concerned about economic disruption and all the disruptive effects of this, leaving it to Congress. I thought he, again, was quite strong in terms of having a broadish reading of 230 that wouldn't allow liability simply because you suggested the content that somebody else created, particularly where the harm from that suggestion is that people watch the content, and the content made them do bad stuff. So again, I thought he was quite good.
Justice Alito, similarly, quite good on the pro-230 side of the fence. Again, quite skeptical. Quite skeptical of where you would draw the line. Counsel for Gonzalez was having a very, very hard time making a coherent distinction about where you would draw the line. And I'll get to that in a few minutes, but he tried to draw a distinction between, "Well, you showed them a thumbnail, which you created, and so you can be liable for that. But if you had just popped up the video directly, that's okay," versus "Well, they asked for the video, so you showed them the video. So that's not on you. But if they asked for the video, and you show them that video plus one other that's like it, that is on you." He was having an incredibly difficult time drawing a very thin line that, ultimately, very few of the justices thought made any sense. And even his erstwhile allies in the United States didn't think his lines made much sense. And I confess, I agree with them.
So then we get to the last two justices, Justice Gorsuch and Justice Ketanji Brown Jackson, and I think there, we finally get a little more skepticism on the 230 side of the fence. Justice Gorsuch was skeptical only in the sense that he was looking for a way to get a remand without necessarily having to give the final answer. But I thought the way he approached Section 230 was interesting, and I sort of agree with him. To him, the question was, "Where is the line between being responsible for your own conduct or your own information content versus being held responsible for a third party's information content?" And he looked to some parts, some definitions, in the statute—again, definitions that I fully agree with and that we pointed out in our amicus brief—where he said, "Look, information service providers, they pick, they choose, they digest, they analyze, they prioritize—they do all of these things to other people's content, and that doesn't make those things their own content. Otherwise, the statute would be largely meaningless." And I agree with him on that—those things seem to be the functions of service providers that they're supposed to be allowed to do rather than be held responsible and liable for. That's what triggers the option of immunity but doesn't detract from it.
But he thought that the real question then is, "Do YouTube's algorithmic recommendations somehow convert the information content provided by ISIS—or by any third party—into YouTube's own information content?" And to my mind, what he was driving at is, "Do you endorse the content? Did you adopt the content? Did you say, 'Yes, this content is right, and you should watch it?'" And I think that's the kind of line he was probably drawing, though we didn't get that far down the chain because he was participating by telephone, it sounded like, so it was a little more difficult for him to follow up on things, I thought. But he kept on looking for, "Can I just remand to the Ninth Circuit and tell them, 'Focus on content,' rather than focus on"—what he described as—"the neutral tools." And I think that's interesting.
I'm not sure he's completely accurate in what he thinks the Ninth Circuit did below in their reference to "neutral tools," but the Ninth Circuit had a couple passing references to, "Well, if the choices are being made by neutral tools, then you can't say that they've adopted it as their own." And I think that's right, but I think there was a little bit of confusion about what "neutral tools" means in this context. Obviously, an AI or a computer algorithm theoretically is neutral, though that really sort of depends on how you've programmed it. I suppose if you've programmed it to be viewpoint discriminatory or content discriminatory, then it's not that neutral. It's either pro-content or anti-content or pro-viewpoint or anti-viewpoint, and those algorithmic elements may help decide whether you're adopting the republished content as your own. Or those algorithmic elements may be neutral as to the content but non-neutral as to other things like, "Yes, we want people to watch us more, so we're going to give them more that they seem to like." That's not neutral. It's not random or arbitrary. It has an intent, and the intent is simply to give you more of what you want, but it doesn't care what you want. So it's neutral as to the viewpoint, but it's not neutral necessarily as to, "Yes, you should keep using my service."
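Since so much of the questioning turned on what a "neutral tool" actually does, here is a deliberately toy sketch of the idea. To be clear, this is entirely hypothetical illustration, not YouTube's actual system (whose internals are not in the record): a ranker that scores candidate videos purely by overlap with what the user already watched, treating every tag as an interchangeable symbol and never asking what any tag means.

```python
from collections import Counter

def rank_up_next(watch_history, candidates):
    """Order candidate videos by how much their tags overlap with the
    user's watch history. The scoring never inspects what a tag means:
    'cats' and 'cooking' are interchangeable symbols. The tool is thus
    neutral as to content, but non-neutral as to engagement -- it
    always pushes more of whatever the user already watches."""
    seen = Counter(tag for video in watch_history for tag in video["tags"])
    # Counter returns 0 for unseen tags, so novel topics sink to the bottom.
    return sorted(candidates,
                  key=lambda v: sum(seen[t] for t in v["tags"]),
                  reverse=True)

# Justice Thomas's examples: a user who watches cat and rice-pilaf videos.
history = [
    {"title": "Kitten compilation", "tags": ["cats"]},
    {"title": "More kittens",       "tags": ["cats"]},
    {"title": "Rice pilaf recipe",  "tags": ["cooking"]},
]
candidates = [
    {"title": "Turkish kebabs",  "tags": ["cooking"]},
    {"title": "Cat fails",       "tags": ["cats"]},
    {"title": "Chess openings",  "tags": ["chess"]},
]

up_next = rank_up_next(history, candidates)
# "Cat fails" ranks first (two prior cat videos), "Chess openings" last.
```

The legal point of the sketch is that nothing in the ranking logic changes if a tag happened to read "ISIS" instead of "cats"—which is the sense in which the "neutral tools" framing, and Justice Thomas's questioning, treat the recommendation as reflecting the user's inputs rather than the platform's endorsement.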
Finally, Justice Jackson was absolutely the most skeptical of the Google position here, basically taking the position that Section 230 was adopted for the very narrow purpose of letting people remove trash from their websites and not be held liable for the trash they missed. So she looks at the interaction between (c)(1) and (c)(2) of Section 230 and says, "(c)(1) says you can't be held liable for other people's content. (c)(2) says we're not going to hold you liable for removing unpleasant content—libel, slander, offensive material. You get to have some editorial discretion on taking junk down so that the internet is not a complete sewer, if you're inclined to help make it not a sewer—or at least so that your web service is not a sewer. We don't want you to be liable for taking some stuff down but missing other stuff." And she had this very narrow view of the original purpose of 230 that got a lot of pushback but would probably not protect Google in this case.
So of all the justices, she was the one that seemed most sympathetic to Gonzalez's position. I thought Justice Barrett and Justice Gorsuch were not quite sympathetic to Gonzalez's position but less willing to get on board with Google, perhaps looking for a remand or a DIG, and so that was sort of interesting. At least by my sort of intuitive Spidey sense, I see six votes that sort of say the theory of the case presented by Gonzalez here—that popping up a thumbnail that says, "You seem to like ISIS. Here's more ISIS"—is not enough to lose 230 immunity. That's my gut sensation. I could easily see that being 7-2 or 8-1. I can see Justice Barrett and Justice Gorsuch writing a concurrence or two that sort of says, "I think we should DIG this, but at the end of the day, whatever. This is okay," and maybe concurring in the judgment. Whether Justice Jackson writes a dissent or not, I don't know. She might write a concurrence for other reasons but have a narrower interpretation of 230 that lets her deal with other cases in the future in a slightly different way. So we'll see. So that's where I think the justices are at.
Let's talk a little bit about where I thought the lawyers were at. So again, going back to counsel for Gonzalez, I thought they chose a line that was incredibly difficult to defend. The line they chose, it seems to me—well, there was a theoretical okayness to it, but it just didn't apply to their case very well. They chose to say that when you create a thumbnail—which is basically a picture with a link in it—of a video that somebody else has uploaded to your site, that thumbnail is somehow your content, and you are now responsible for that content.
So if you recall, when I read Section 230 to you all a few minutes ago—or more than a few minutes ago—it said you can't be held liable as the publisher of information content provided by another. And the way he's trying to get out of this is by saying, "Well, the thumbnail isn't information content provided by another. It's information content that you created, or at least co-created, and you can be held liable for that." And pardon my bluntness, but the distinction between your information content and their information content is fine. The distinction between "I put a thumbnail up," versus "I just played the video," versus "I had a static screenshot of their video that didn't have a hyperlink in it"—that distinction is idiotic. And look, he's a good lawyer; I'm sure he had to go back and forth and had a lot of help at moot courts in figuring out this line, but that line is indefensible. The notion that there is a difference between a hyperlink and a screenshot and a thumbnail is incoherent within the context of 230, just incoherent.
And so that really got him in a bunch of trouble. And tons of people were skeptical about that, and it's going to go nowhere. It's going to go absolutely nowhere. The real question will be "What was the basis for the algorithm?" not "What was the format in which the results were displayed, whether they were displayed as a video that you asked for, whether they were displayed as a thumbnail, or whether they were displayed as a static screenshot." That's not going to be the line.
The other thing he said that was interesting: he said, "Look, the injury"—to his clients—"was not from the ISIS content. It was from the fact that you put the thumbnails up there, and then people watched it because of the thumbnails." And I don't know about that. Again, there's a thin line there which says, "Well, they were injured because people watched the underlying video and then liked ISIS, gave money to ISIS, supported ISIS in some way. But the injury was really only from your recommendation, not from the underlying video. Or somehow your recommendation marginally changed the injury of them having watched the video themselves merely because it got more people to watch it." Again, I think that line is largely indefensible and is not going to be something that people rely on because, at the end of the day, if your algorithm simply said, "Pop up the most commonly requested answer," and it happened to be the ISIS video, yes, more people would watch it. But you're still trying to hold them liable based on the content of the video. If the content of their video was cats—even if it said ISIS on the cover, but the content was cats—then there would be no harm here. Right? Nothing would have happened.
He said a few other things that I thought were interesting. He tried to make the argument that YouTube sort of fed you these videos before you even asked for them—a distinction between reactive versus proactive feeding of content. I'm not sure that distinction holds up, but I thought it was at least more interesting. He said, "Look, if they sent you an email that said, 'By the way, there's some ISIS content we think you might like,'" that that might be somehow different than if you got on there and said, "Oh, I'd like some ISIS videos," and they showed you one and said, "Here's a bunch of others"—that that would somehow be different. And again, I think it's a bit of a thin distinction. It didn't quite work for me, but it was less incoherent at least. There was at least a line of affirmative versus responsive feeding of information as suggesting whose content it really was that you're being held liable for.
The final two things that he said that were interesting were, one, he tried to limit the damage here because the statute that holds you responsible for aiding and abetting terrorism includes—he says—recommending terroristic content, and so this wouldn't be true of most other torts, like defamation. I don't know if that's true or not, but it was an interesting attempt to narrow this to a very specific scenario that likely would not happen elsewhere. I thought the other advocates helped to make clear that that's probably not the case—that there are lots of negligence theories that could just sort of say, "Well, you negligently let them see more videos about anorexia or cutting or suicide or any of the things that people say you shouldn't let people watch." And so, I'm not sure he's right that JASTA, the statute, is that unique vis-à-vis other sorts of torts out there.
And then the last thing he said -- there was a discussion about whether -- if the definition of an information content -- or an information service provider—so the triggering entities that are covered by 230—if that definition includes folks who engage in selecting, choosing, analyzing, and digesting the content of others, then there must be some difference between doing those things—analyzing, digesting, selecting, displaying the content of others—and creating your own information content, which is what YouTube could get in trouble for. Right? They can get in trouble for their own information content. If YouTube creates their own video and posts their own video that says, "Yay, ISIS. Donate to ISIS," well, then YouTube is in trouble, and they're toast.
But if all they do is digest, analyze, and display somebody else's video to that effect, then the argument would go, "They’re protected." And he said that distinction between digesting and analyzing versus creating content only applies to this tiny little piece of the definition of an information content provider. I don't think anyone was buying that. It's a hyper-technical question, but for those Section 230 geeks out there, it was a debate about whether subsection (f)(4) applied to all of 230 or part of 230 because you provided access software. And access software providers are a subset of information content providers. I'm happy to answer questions about that. I just want to flag it for those folks out there that follow that kind of stuff. It was an interesting discussion. I think the counsel for Gonzalez was wrong about the distinction he's drawing, but there you have it.
So the next advocate was Malcolm Stewart for the United States. I actually thought he did a pretty good job. Even though I don't agree with his outcome, I thought he did -- he was at least talking about the right parts of the statute, which was comforting. He drew this distinction between whether or not you're being held liable for the content -- information content of others—third-party information content—versus being held liable for your own choices or for your own content. And I thought that was useful and interesting. It's just, in this particular case, I don't think it applies. But I think it was a useful line to draw, and it's a good way of thinking about 230 going forward.
So I fully agree that 230 does not immunize you for information content that you, the service provider, provide. So if YouTube makes their own video, they're screwed. Personally, I think if YouTube adopts and endorses your video—affirmatively says, "Yes, that's true. We like that, and we agree. And you should watch this because that's true."—well, that's like adopting a libel. It's one thing to say, "I'm not going to hold them responsible just because they reposted your libel." It's another thing to say, "I'm holding them liable for endorsing it." Just because you repeat somebody's libel and then endorse it doesn't mean you're immunized. And I think that's about right.
Where I don't agree with them is that here, YouTube's conduct is deciding what to display and what to prioritize, but the prioritization decision is not based on endorsement. It's based on unrelated criteria like "You seem to like ISIS," or "You seem to like cats," or "You seem to watch a lot of these, and we'd like to keep you on our channel longer, so here's another one." All of those things are not endorsement, even though they're choices. But those choices are not the kind of choices that amount to the harm that they're talking about—material support, for example, or aiding and abetting. So that's where I sort of separate from the government's position. But I'm happy that the government is indeed focusing on the key words: "Are you being held liable for the information content of someone else?"
Here, of course, the harm being alleged arises precisely because somebody watched an ISIS video. They're mad because someone watched an ISIS video, agreed with ISIS, and therefore supported them. That is exactly what they're trying to hold YouTube liable for: that YouTube made it possible for more people to watch ISIS videos, and those ISIS videos led to bad results. And the whole point of 230 is to make it possible to watch other people's videos without being held liable for other people's videos.
So I don't think the government has the result right, but I think at least the analysis they were engaging in was correct. And I thought the good example that they and others gave was, "Look, suppose you have an algorithm that discriminates on the basis of race—you write your algorithm to say, 'Don't let any black people see housing listings in this neighborhood, because that's a white neighborhood.' Well, you're being held liable for your discriminatory algorithm, not for the content of the housing post. The housing post isn't what's offensive. It's your choice not to let them see it. That, independently of the underlying content, is discriminatory: you are, in fact, discriminating on the basis of race because you made an affirmative decision that you didn't want black people in the neighborhood. It has nothing to do with the underlying post other than that it happened to be about housing. It's not like the post is what caused them harm. It's your choices." So I thought that was an excellent distinction between being held liable for third-party content versus being held liable for your own behavior. And I think it's a distinction that most people agree with; the justices questioning it all seemed to agree with it, and I thought even Lisa Blatt for Google agreed with it.
So turning to Lisa Blatt -- I thought she did a fine job. She got a little pushback, but not anywhere near as much as counsel for Gonzalez did. And I thought her opening basically said it all: "Are you being held liable for something where the harms flow from underlying content that wasn't yours?" And here, the answer is yes. The harms flow from watching someone else's video. They're trying to hold YouTube liable for that, and they shouldn't be able to. And she sort of agrees that there's a continuum between merely posting someone else's video without any intervention at all (just first come, first served: whatever video is posted next goes up on the screen) and endorsing and recommending something because you say, "Everyone should watch ISIS because ISIS is the bomb." Bad pun, obviously. But "Because ISIS is the best group ever, and we're going to have you watch them." There's a big continuum, and I think she agrees that that last one—if you say, "ISIS is great. Watch an ISIS video"—is endorsement. You're adopting them, and so you can, indeed, be held liable because now you're being held liable for your own content. It's not merely third-party content anymore. You've adopted it, endorsed it, and made it your own. Versus all you really did is say, "I'm just feeding you more of what you want. I don't give a shit what the underlying content is. I'm not endorsing it. I'm not criticizing it. I'm just giving you what seems to fit for you." And I like that way of thinking about it. And here, of course, she said that the prioritization in YouTube's algorithm has nothing to do with whether YouTube agrees or disagrees with ISIS videos. It has everything to do with who you, the user, are and whether, predictively, they think this is something that you were asking for, something that you would want, something that you would like. So that was ultimately good.
Again, she got a little bit of pushback from Justice Jackson wondering whether this goes beyond the original intended scope of Section 230. I think she did a nice job of pointing out that that view of the original scope is incorrect. The statute was certainly a response to that scenario, but it was never intended to be limited to it. And she pointed out that the purposes of 230, as described in the findings and policy sections of the statute, make clear that it was broader than that.
So there you have it. I liked the argument. I thought it was interesting. I was encouraged that the justices, the United States, and Lisa Blatt for Google were all focusing on the parts of the statute that I, at least personally, think are most important.
So the last question, the last interesting thing for free speech folks before we turn to questions from the audience is, "So First Amendment, free speech, well, where did that come in here?" And the place I see the underlying lurking issues again turns on this notion of content of others versus content of yours. And this issue will come up again once we get to the Texas and Florida statutes trying to stop viewpoint discrimination, for example, and use that as a way of getting you out of 230. And I sort of think that, look, the most protected First Amendment behavior you could imagine, literally making your own video and posting it on YouTube—so say YouTube, as an editorial board, says, "We're going to post a video that says, 'Go, Joe Biden' or 'Go, Ron DeSantis', or 'Yay, ISIS', or whatever they want to say, or 'Boo, ISIS' for that matter"—that is all very self-evidently First Amendment protected speech. But, of course, it's not protected by Section 230 because Section 230 only protects you from liability for the content of other people, not for your own content.
So on the one extreme, the highest order of First Amendment behavior is not covered by 230. On the other extreme, suppose you did nothing but run a public access channel with no prioritization, no editorial discretion, nothing. It was simply: you post it, it goes up. "We don't help people screen for beans. We don't help you with anything." That clearly gets Section 230 protection because that's entirely the content of others. You're not recommending anything. No one imagines that you could be held liable for that.
So what do you do about the middle, where you are engaging in some sort of editorial function, whether it's editing for what we think is best suited to our readers, editing for what you like the most, editing for what you think is most interesting that day, or even editing for viewpoint? What if I run "Conservatives Only YouTube," and so I screen out all Democratic videos and only run conservative videos, or vice versa? That's plainly viewpoint discrimination, yet it's editorial. And I'm not necessarily adopting and endorsing the views being stated. I'm just limiting the content—almost like a limited public forum. And for me, that's the interesting category, where it is protected both by the First Amendment and by 230. 230 protects more than simply passive behavior. It protects editorial choice that would also be protected by the First Amendment, even while it does not protect your own personal speech, which is protected by the First Amendment.
So for those of you here because you like the First Amendment, those are the interesting questions that underlie this case: where to draw the line and how to think about not just this case but the cases that are inevitably coming over the Florida and Texas statutes, both of which are up on cert petitions now.
So with that, I will open it up to questions so that the rest of you can hear what you want to hear rather than what I wanted to tell you, and we'll go from there.
Sam Fendler: Well, Erik, thanks very much for that insightful review. I think you did a great job of going top to bottom of what just went on this morning, and of course, into the afternoon with such a marathon session.
I wanted to start with a very high-level question. I think going into the argument, there was much discussion about the algorithm. Right? It's the recommendation of additional content that perhaps is what produced the liability. And then we get into the argument, and the petitioner talks a lot about these thumbnails, and the argument seemed to be that the liability is a result of creating a catalog. So whether it's the thumbnail, or, yes, you searched for this first ISIS video, but YouTube is liable for giving you more—for giving you this catalog, the organization of the information. That seemed to be a big part—if not the main part—of petitioner's argument. And I'm wondering what you think there about not only that argument, but the kind of difference between the expectations going in of the algorithm conversation and what we got with this thumbnail catalog discussion.
Erik Jaffe: Sure. So the original version of "You recommended it. You're responsible for it,"—which I think was going the way that people assumed it was going to go—had potential but didn't apply to this case because it's not actually the algorithm YouTube uses. So what you're getting at with the recommendation is you're almost getting at an endorsement position. Right? "You recommended it. You said this is good. You wanted people to watch it. You're implicitly endorsing the underlying contents, and thus, you can be responsible for it in some sense because you've made it your own by endorsing it." And it turns out that's incoherent, given the nature of the algorithm. I'm not saying that other algorithms couldn't do that. They very clearly could. You could write an algorithm, which is just a rule, which said, "Every time you see ISIS, promote the shit out of it because ISIS is glorious." Well, that would be endorsing -- that algorithm would functionally be endorsing ISIS.
The algorithm that YouTube uses is nothing like that. The algorithm YouTube uses is largely responsive to the user. "Are you somebody who searched ISIS before? Where do you live? Do you seem to watch a lot of ISIS videos? Do you speak in a certain language? Therefore, I should send you those language videos." It has nothing to do with endorsement, and it has everything to do with "I want my user to spend more time here." And so, the only thing that that algorithm sort of says is, "You seem to like ISIS." And that's not something that gets liability as opposed to an algorithm that said, "ISIS is really good. Have another ISIS video." That might get you liability because that's your speech, effectively, your content.
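For readers who think in code, the distinction being drawn here can be sketched as a toy: a ranking rule whose only input is the user's own behavior versus a ranking rule that encodes the provider's own preference about content. This is a minimal illustrative sketch, not YouTube's actual system; the function names and the "topic" field are invented for the example.

```python
# Illustrative toy only -- NOT YouTube's actual algorithm. It contrasts the
# two kinds of ranking rule discussed above: one driven entirely by the
# user's own viewing history (neutral as to the content's viewpoint), and
# one that pushes a chosen topic regardless of the user (effectively the
# provider's own message).

def neutral_rank(videos, user_history):
    """Surface videos whose topic the user already watches."""
    watched = {v["topic"] for v in user_history}
    # Python's sort is stable, so videos outside the user's interests
    # keep their original order at the bottom of the list.
    return sorted(videos, key=lambda v: v["topic"] in watched, reverse=True)

def endorsing_rank(videos, promoted_topic):
    """Push a chosen topic to the top no matter who the user is."""
    return sorted(videos, key=lambda v: v["topic"] == promoted_topic,
                  reverse=True)

catalog = [{"id": 1, "topic": "news"}, {"id": 2, "topic": "cats"}]
history = [{"topic": "cats"}]

# The neutral ranker surfaces cat videos only because this user watches
# cats; a different user with a different history gets a different order.
print(neutral_rank(catalog, history)[0]["id"])    # 2
print(endorsing_rank(catalog, "news")[0]["id"])   # 1
```

The legal distinction tracks the inputs: the first function consults only the user, while the second bakes the provider's own preference about content into the ranking itself.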
So he had to go to something else because I think he realized that his main line just didn't really work well on the facts of this case, even if it might work in somebody else's case. So he went for "It's your content because you helped create it, even though you didn't endorse it." And there's a little piece of the definitional section of 230 that says, "Well, you're an information content provider if you, in whole or in part, created or developed the content." And so, he's trying to work on the "in part." "Yes, there was an underlying video, but you did more than just republish it. You tweaked it a little bit, and you put a hyperlink on it, and you made the thumbnail," and I think that's incoherent. I think it's literally incoherent, and it's especially incoherent when you look at the definitions of what it means to be an interactive computer service or an access software provider, which say, "You can provide software that cuts, pastes, digests, analyzes, prioritizes." The statute didn't treat all of those actions as if they were content provision. It treated them as if they were content manipulation, which is something different, and that's why it ultimately broke down, I think.
But I thought he had to go there, perhaps because his other argument didn't fly very well in moot courts or didn't apply to Google, and he realized that while it's a perfectly good theoretical line, it just doesn't mean he wins. It's lovely to have a good theory, but if you don't win on your own theory, that's not very helpful.
Sam Fendler: Yeah, no question about that. To continue pulling on the thread you just left off with, we heard from several of the justices that they were sort of confused about what the argument actually was. Justice Alito even said—and I wrote this down, so I think it's close to verbatim—"I'm afraid I'm completely confused by whatever argument you're making at the present time." And there seemed to be that muddling. And I'm wondering—you know, you talked about it a little bit, sort of the disconnect between a moot court and what actually happened this morning. What do you think got confused there?
Erik Jaffe: Well, I think what got confused is he was trying to draw a line between -- well, if I go onto YouTube and say, "Show me an ISIS video," and it pops up an ISIS video that just starts playing, his theory was, you can't be held responsible for that. But if it pops up a thumbnail that requires me to click it before it plays, you can be held liable for that because you co-created the thumbnail. And that's just the stupidest distinction I've ever heard, which is why I think a bunch of people were confused. "Are you telling me that it was the little mini clip that was the problem, or is it the hyperlink that's the problem—so that when you Google something, and Google gives you a bunch of answers, each of which is a hyperlink, the fact that they added the hyperlink suddenly makes it their content rather than the underlying content?" All of which makes no sense at all.
And so, he tried to say, "Well, by collecting them, by analyzing them, you're engaging in your own speech," forgetting what speech that was. Yes, there may be some speech when I say, "Here's the answer to your question." I am engaging in speech. But the speech I'm engaging in is not the speech of the answer. The speech I'm engaging in is "Here's your answer," or "Here's something you might like," or "Here's something that seems responsive," or "Here's something that a million other people seem to like that maybe you will too." I'm saying all of those things, none of which creates liability, or even plausibly creates liability. What they want is to say, "Once I've said that, I'm also saying the underlying stuff," and that's ridiculous. That's why it was so confusing: he was muddling the information content with the organizational -- the implied message of the organization. And the implied message of the organization was very little. And he was trying to mix them together by having this thumbnail argument that nobody was buying, literally nobody.
Sam Fendler: And you know, one of the other things is the justices were trying to make that logical connection between the recommendation of content. And it seemed to me—I think it seems to you as well, and you can correct me if I'm wrong—that there was pretty broad agreement, not only amongst the justices but with the advocates as well, that the algorithm is, in fact, neutral, so we can just start there. It's a neutral algorithm. And then the question is, "How do we get to aiding and abetting?"
And my question for you comes in here, which is, he mentioned a couple times, the petitioner, that the facts of tomorrow's case, the Twitter case, will somehow help us understand how we get to aiding and abetting. I didn't see a preview of what, necessarily, he thinks will arise in the Twitter case, but I'm wondering if you have any idea of what may occur in the Twitter case that will help establish the aiding and abetting.
Erik Jaffe: So I have not followed the briefs in the Twitter case, but his argument suggested that the particular statute includes, as one form of aiding and abetting, recommending—recommending videos from someone else. He thinks the statute would indeed cover simple recommendations regardless of whether they amounted to endorsement. And we'll have to see how that statute is worded. I am doubtful that aiding and abetting could be read from neutral recommendations even if you knew that, therefore, it was going to people who liked ISIS, that somehow merely knowing that somehow made you an aider and abettor. But he seems to think that the wording of the statute will support such a broad view of aiding and abetting to include even content -- viewpoint-neutral recommendations.
Now, what I think is interesting -- part of your question, you sort of suggested that there was an agreement that this was all neutral recommendations. There wasn't total agreement on that. There was some debate about what it means to be neutral. And I think Lisa Blatt, at the end of the day—and to some extent, Malcolm Stewart—pointed out that it's not neutral in some abstract sense, but it is neutral as to the content being recommended. It doesn't care if you say, "I love Trump. I hate Trump. I love Biden. I hate Biden. I love ISIS. I hate ISIS." It's neutral as to that. It may not be neutral as to "Is this a popular video? Cause I want to encourage popular videos." So it's not completely neutral, but it is not creating new information content that somehow endorses the preexisting information content.
The only information content such a recommendation or a tool would create is telling you, "This seems to be pretty popular." So yes, there's some information content there. The statement that lots of people have watched this is information content. It's just not information content that would even plausibly get you sued. All you've said is "You seem to like cat videos." Okay. That's true. I seem to like cat videos. Go ahead and sue me for having noticed that you seem to like cat videos. That's ridiculous. So that's the debate. It's neutral as to the underlying content, while it may not be neutral in the abstract.
And Justice Gorsuch gets to this and points out that algorithms don't have to be neutral. It's just that this one seems to be, but other algorithms might not be neutral, and if they're not neutral, that could pose a different question. And with this, I agree with him. If the algorithm somehow is aggressive enough to create new information content, or adopt old information content as its own, then you might be liable for that. But once again, it's important, as Lisa Blatt said, to draw a line. You can have a viewpoint-based website that says, "Conservative views only" without having to say, "I adopt and endorse everything every one of these people says." No. You just—
Again, the analogy I like is a limited public forum, where we say, "This public forum is just to talk about COVID." And so you start talking about the war in Ukraine, and they say, "No, no, no. This is a COVID forum. Shut up. Go talk somewhere else." Or you say, "This is CPAC. We're going to talk about conservative stuff," or "This is the DNC. We're going to talk about DNC stuff." Right? Those are limited forums. They have content restrictions, but that doesn't mean that the organizers agree with or endorse everything said. And that would be protected, in my view, whereas if somebody stood up and said, "We're only going to have pro-Trump stuff or pro-Biden stuff," you would implicitly be endorsing a viewpoint, not merely limiting the content.
Sam Fendler: Erik, I want to switch gears here. Get a question from the audience. And you left off in your initial remarks talking about the states—maybe the way forward in the states. And we have an audience member who is asking what your thoughts are on the impact of this case and the argument on attempts by states. So for example, Florida and Texas and their efforts to regulate Big Tech.
Erik Jaffe: Sure. So I think it’s interesting. It's that First Amendment interaction problem. So at some points, Big Tech says -- let's hypothesize that Big Tech says, "Sorry. No crazy conservatives on our sites, only crazy liberals. No crazy conservatives allowed." That's certainly a viewpoint-based discrimination that anybody who hosts a chat room could do. Anybody who hosts a website could do. Lots of people do it all the time. Does that mean they lose Section 230 protection? And my answer would be no. That doesn't mean they lose 230 protection unless they endorse the particular views of other people on their sites.
Now, let's say Big Tech instead said, "I’m only going to run -- I'm going to prioritize and promote videos that say, 'Masking is good. Masking is good. Masking is good. Masking is good.'" Well, at that point, they've endorsed the message that masking is good, and one could say that their own algorithm is conveying the message that masking is good. So then I get to sue them, if it's actionable, for saying masking is good if it turns out masking kills me. And so, I'm going to sue them for having told me to mask when it kills me. Sure, I think that's actionable. I don't think 230 protects them because, at some level, they've adopted the content they are recommending because that's the nature of their algorithm.
Now, a separate question—can I force content providers to be content-neutral just because they get Section 230 protection? And the answer to that is a First Amendment problem, not a 230 problem. 230 covers some First Amendment-protected speech but not others. It covers editorial decisions, but it doesn't cover content creation by the provider. Right? That's the clear line 230 draws, which overlaps with but is not the same as First Amendment protection.
So my personal view is the Florida and Texas statutes violate the heck out of the First Amendment, but that's utterly independent of the scope of 230. 230 does not assume a perfectly neutral pipe. 230 assumes plenty of content and viewpoint discrimination, but the more viewpoint and content discrimination kicks in, the stronger the First Amendment interest, and the stronger the First Amendment protection. And so, I think it's not that 230 necessarily trumps those statutes, but I certainly think the First Amendment does—the Second Amendment, too, I suppose, if they used it right or wrong.
Sam Fendler: I think we have time for one more question here. It was more than one of the justices—but I'm thinking of Justice Kavanaugh in particular—wondering if the Court is the right body to settle this dispute. There was some discussion about whether the judicial branch, at any level—whether the Supreme Court or even a lower court—is who should be settling this, or whether we should kick it back to the legislature. And I'm wondering your thoughts on that and the broader discussion.
Erik Jaffe: Well, on that big question -- I think it's undoubtedly correct that the choices being made here are policy-balancing choices with pros and cons on both sides. Section 230 certainly didn't create blanket immunity for everything. It drew a bunch of lines that are admittedly, in some instances, debatable, but those are quintessential policy choices, and they're policy choices that are well within Congress's knowledge and consciousness. Right now, there are dozens of bills up on the Hill about these things. And so, yes, I think the Court should be reluctant to weigh in on something where Congress is there and is thinking about it and is debating the pros and cons and how to properly craft the compromise, let's say.
However, I thought Justice Jackson made a fine point, which is to say, "But where did we -- where did the courts go wrong? Did the courts overexpand 230 because they thought more protection was needed, and hence, go beyond the original intent of Section 230, at which point, the courts have already put their thumb on the scale in a way that isn't up to Congress?"
One of the -- I think it was counsel for Gonzalez who said, "Look, there have been all kinds of things that have happened since 230, and if you think that those things also should be protected in the same way 230 protected other behavior, well, then Congress can certainly do that. But it didn't. It didn't then, and just because there are things like that, things that aren't covered by the literal language of the statute, doesn't mean that you, the Court, can go out and cover them yourselves."
So I don't know which one is the activist court. Right? Which is the court that's stepping in beyond Congress, and which is the court that's reining things in when Congress didn't mean to go that far? So it's hard to tell sometimes which choice is activist. To me, the answer -- I know what thumb I'd put on the scale. And the thumb I would put on the scale is the thumb that Congress, itself, put on the scale in its findings. Congress's findings, I think, go a long way toward negating Justice Ketanji Brown Jackson's view of Section 230. 230 was there to encourage the internet, to make it expansive, to reduce the risk of liability crushing small and big companies alike, to basically let innovation play its part.
And so, if you're asked, "Do I go broad, or do I go narrow when I'm uncertain?" the answer is that protection should be broader because that's what Congress wanted. So if you're ever in doubt, look at Congress's enacted intent—not what you think their intent was, not what you infer their intent was—but what they actually put in the statute. And that, to me, makes the difference. So I would certainly let Congress pare it back if it wants to, but Congress seemed to want 230 to be broad when it enacted it, and I'd stick with that as the Court.
Sam Fendler: Erik, any departing thoughts, final thoughts here?
Erik Jaffe: I'm just glancing at this last question that came in. The only thing I see here is that Justice Jackson and Ted Cruz agree with each other, which is notable all by itself, but I guess my short answer is that I think she has too narrow a reading of Section 230. Yes, Section 230 was designed to give folks the choice to remove stuff, but it's actually worded much more broadly than that. That is not its only purpose. It's one of its purposes, but its other big purposes, as enacted by Congress, were to let the internet grow. Let it grow without the risk of liability crushing things. And if Congress wants to slow that growth down, increase liability, or correct for some of the harms of the Wild Wild West, Congress knows how to do that. It's very interested in it, and in a bipartisan way, quite frankly. So I don't see any reason for the Court to stick its nose in where Congress is perfectly suited—better suited, in fact—for making those kinds of close calls.
Sam Fendler: Excellent. Well, Erik, unfortunately, we're out of time. I really appreciate your analysis here. And on behalf of The Federalist Society, I want to thank you for sharing both your time and your expertise with us today.
I want to thank our audience as well for joining us. We greatly appreciate your participation. Please, check out our website, fedsoc.org, or you can follow us on all major social media platforms at FedSoc to stay up to date with announcements and upcoming webinars.
Thank you all again for tuning in, and we are adjourned.