Courthouse Steps Oral Argument: NetChoice Cases

Event Video

Listen & Download

Two cases involving NetChoice, a trade association that represents social media giants like Facebook, Twitter, Google, and TikTok, will be heard at the Supreme Court this term. Both cases concern issues of free speech and social media platforms.

In Moody v. NetChoice, LLC, NetChoice challenges Florida law S.B. 7072, arguing that it violates the social media companies’ right to free speech and is preempted by federal law. The district court found that the law did not stand up to strict scrutiny and did not serve a legitimate state interest. The U.S. Court of Appeals for the Eleventh Circuit affirmed that ruling.

In NetChoice, LLC v. Paxton, NetChoice challenges the constitutionality of two sections of Texas law HB 20 (Sections 7 and 2) that aim to regulate the content-moderation practices of large social media platforms. The district court found the sections unconstitutional and enjoined their enforcement. The Fifth Circuit reversed, ruling that HB 20 doesn’t regulate the speech of the platforms but instead protects the speech of users and regulates the platforms' conduct.

Both cases are set to be heard at the Supreme Court on February 26, 2024. Join us as we break down and analyze how oral argument went that same day.

Featuring:

  • Allison R. Hayward, Independent Analyst

*******

As always, the Federalist Society takes no position on particular legal or public policy issues; all expressions of opinion are those of the speaker.

Event Transcript

[Music]

 

Chayila Kleist:  Hello, and welcome to this Fedsoc Forum webinar call. Today, February 26, 2024, we are delighted to host a post-oral argument Courthouse Steps on two cases: NetChoice LLC v. Paxton and Moody v. NetChoice LLC, two cases concerning free speech and social media platforms. My name is Chayila Kleist, and I'm an Assistant Director of Practice Groups here at The Federalist Society. As always, please note that all expressions of opinion are those of the expert on today's program, as The Federalist Society takes no position on particular legal or public policy issues.

 

      In the interest of time, I'll keep my introduction of our guest today brief. But if you'd like to know more, you can access her impressive full bio at fedsoc.org. Today, we are fortunate to have with us Allison Hayward, who currently works as an independent analyst. Professor Hayward most recently served as head of case selection at the Oversight Board. Previously, she was also a commissioner at the California Fair Political Practices Commission, a board member at the Office of Congressional Ethics, and Assistant Professor of Law at George Mason University School of Law.

 

      She has also previously worked as chief of staff and counsel in the office of Federal Election Commission Commissioner Bradley Smith, and has practiced election law in California and Washington, D.C. She is a member of the State Bar of California and the District of Columbia Bar. And I will leave it there.

 

One last note: throughout the panel, if you have any questions, please submit them via the Q&A feature so they'll be accessible when we get to that portion of today's webinar. We're hoping to have plenty of time for questions today, so please go ahead and submit those. With that, however, I'll stop talking and get off your screens. Ms. Hayward, the floor is yours for some opening remarks.

 

Allison R. Hayward:  Well, thank you. Thank you, I'm really pleased to be here. As noted, I've had a career as a constitutional lawyer and commentator, and then, actually, some hands-on experience doing quasi-content moderation at the Oversight Board. I'm hoping that my observations will be useful to you and maybe slightly different from what you have heard from other people. So, what I'd like to do first is give you a little background of these cases.

 

They've been fairly well-publicized, and so I have the feeling that many of the folks watching already know some of this. But I just want to make sure we're all starting from the same point. Then, what I would like to do is, I'd like to go through how the argument went today, including observations about certain parts of the argument. And then, at the end of it, I'll give you sort of my gist of the lay of the land, and we can go to questions. So, with that, what we have here is about as on-the-nose a circuit split as you can get. I think if you wrote this as a hypothetical in a civil procedure case, people would be like, "How would that happen?"

 

But, in 2021, both the states of Texas and Florida enacted very similar laws about the activities of content moderation on large internet platforms. This was responding to events where social media had very prominently struggled with content such as content related to the January 6 attack on the Capitol and content where particular news stories were silenced on platforms because of -- I think the rationale at the time was because -- and then, we're talking about the Hunter Biden laptop story, in case you weren't aware.

 

I think the rationale the platforms had was that they have a policy against hosting speech that looks like it was taken from material that was either hacked or stolen, and not wanting to give people who are malevolent actors a platform to spread other people's information that they had somehow hacked or stolen. I know, on reflection, that seems really kind of beside the point. But, as I'm hoping to make you appreciate, a lot of content moderation is very dense and tone-deaf. And it's kind of the nature of the beast.

 

But, nevertheless, reacting to this censorship on the part of large platforms, and what was perceived to be an anti-conservative bias, both states enacted standards related to what platforms could moderate, how they would moderate, notice requirements for people who had their material taken down, various sorts of, kind of, due process-y looking things. So, NetChoice — the trade association for a lot of internet platforms — sued, pre-enforcement, bringing a facial challenge arguing that these laws violated the free speech rights of these private platforms.

 

What happened? Well, Florida's law was enjoined by the Eleventh Circuit. Texas' law was enjoined at the district court level. But then the Fifth Circuit, having the benefit of the Eleventh Circuit's analysis, said, "No, we don't think so. We think that this is the regular sort of conduct. We think that this is fine." And that Fifth Circuit decision was then vacated by the U.S. Supreme Court in anticipation of these cases coming up together.

 

      So what we're looking at is a preliminary injunction posture, pre-enforcement facial challenge, based on the First Amendment violations that NetChoice has identified. And if I said "Netscape" earlier, by accident, just laugh at me because I'm old. So, at the Supreme Court level, what questions are being presented? Well, there's two: whether the content moderation restrictions comply with the First Amendment, and whether these individualized explanation requirements — this sort of due-process-y stuff I was mentioning — comply with the First Amendment.

 

      Well, lo and behold, they didn't get to the second question today. So, the argument revolved around content moderation restrictions, whether private platforms can edit what they have on the platform. So, now we'll move on to today's argument. So, for Florida, Henry Whitaker, who is the Florida Solicitor General, argued for the state. And right out of the box, the justices asked about the procedural posture and what the correct standard of scrutiny is to apply to a facial challenge with these First Amendment claims, where the laws haven't been narrowed by any sort of as-applied interpretation and no state court has even tried to define some of the terms that are in these laws.

 

So, the first questions were about procedural posture and how broadly the law applies. The state responded that what NetChoice has to show is that there is not a legitimate sweep to these laws. Under First Amendment interpretation, you don't need to show that it's unconstitutional for virtually all of its applications. You're looking more at whether there's a legitimate sweep.

 

And there was lots of back-and-forth about different kinds of platforms and different ways that these platforms regulate their content and how that could be measured, in terms of the sweep. And some of it got a little confusing. And I'll get to that in a minute. Then, obviously, the elephant in the room is, well, you're talking about private platforms, First Amendment, the government. The state is talking about redressing censorship of the private platforms. Are they really censoring in a way that we think of censorship as being illegitimate?

 

Because, typically, that involves a governmental entity. And, to that, the state responds that, well, they're common carriers. And, as common carriers, their conduct can be regulated. And what they're doing in content moderation is not really expressive as much as managing the volume of material on the platform. And, secondly, the State has a First Amendment interest in the dissemination of information. So, to the extent that these platforms are set up to disseminate information, and they're not doing what they should be doing, the State can step in.

 

The question was asked, "Well, can you ever imagine a platform that isn't, under your analysis, susceptible to being regulated as sort of this neutral common carrier?" To which the State answered, "Sure, if they're really overly biased." Which, if you think about it, is sort of an interesting place to take the argument. Nonetheless, the back-and-forth with the State was really a conversation involving the First Amendment precedents that most people who have practiced or gone to law school understand.

 

The people who host the parade do not have to invite everyone to participate in the parade. They can exclude some people. Must-carry laws applied to large media corporations are suspect. You can't make private entities say things they don't otherwise want to say. Buckley v. Valeo is largely about not -- we don't have a governmental interest in leveling the playing field, that the solution to bad speech is more speech, not enforced silence, all that.

 

And that was a fairly rich colloquy back and forth that — if you're interested in this stuff — I recommend you read in the actual transcript or listen to in the audio. Because I'm not going to go into any more detail on that. But that's where it was left: a real concern over the posture of the case, a concern that they weren't quite sure they understood the scope of what was being regulated, and a concern that this is actually flying in the face of the law and First Amendment precedent that should apply here.

 

So, next, for NetChoice, we have Paul Clement arguing. And if you saw the webinar we did a week or two ago, you will know that I am a Paul Clement fan. I think he is incredibly talented. Today was no exception. It was a hot bench. It was a challenging argument to make, in many respects. And I thought he did very well, with one exception that I'll maybe get to. He came out and said, "Look, all these laws do is restrict speech. It's the government restricting speech. It's not actually that hard."

 

And then Justice Thomas said, as you would expect, "Well, what about Section 230? Section 230 is there and is justified on social media platforms being basically sort of neutral conduits. And so, when they start messing around with censorship, that's not legitimate." To which Paul Clement replied, "But that's why we have Section 230. It's because if they were simply the common carrier, neutral conduit, they wouldn't be susceptible to First Amendment challenges. That's not why they need protection."

 

They wouldn't need protection because they're not doing communicative stuff. But, in fact, platforms are doing message and communicative stuff. And we want to protect them from the consequences that might arise if they're held liable for user-generated content. Because you'd have some bozo out there saying something really illegitimate that is libel or slander or an intellectual property violation, whatever, and the platform can't handle it immediately, or doesn't understand what it is. They'll be held liable for something.

 

And when you've got hundreds of thousands of pieces of content every minute coming through your servers, that's just not going to be sustainable. There were more colloquies for Paul Clement on this issue of whether there is a legitimate sweep to this law, which led to a tangent that I think was kind of a mistake — just saying — where the justices were asking, "So, does this law apply to Uber? Can Uber discriminate on who they pick up? Does this law apply to Gmail? Can Gmail discriminate and say, 'Hey, I don't like your political views, so I'm going to take away your email account'?"

 

And Paul Clement answered that these are platforms that would be covered by the law. But what got lost in that answer is the fact that it's the user-generated content on those platforms, things like reviews, that is susceptible to content moderation in the first instance and to the application of this law in the second. And that got kind of lost. And so, it sounded as if NetChoice was arguing, "Yes, sure, and Uber can discriminate." And that's not what was happening.

 

But there was enough crosstalk that I thought this conversation was one of the few times I've seen Paul Clement not be crystal clear and simple and just devastating from the get-go, which I think was too bad. I don't think it's mission critical. I just thought it was unfortunate. So, the Court asks him, "Well, okay, if we vacate the injunction so the law goes into effect, what happens?" To which he replies, "Well, they'll change business models, and they'll probably be blunter and more censorious," if you want to put it that way, "because they'll be afraid of liability." And so, you go from a platform that has a vibrant amount of conversation and argument to something that's, as he put it, "just puppies."

 

I like puppies. But I can see the point. And then, finally, arguing for the United States was the Solicitor General of the United States, Elizabeth Prelogar. Why is the United States in this argument? Well, they filed an amicus brief where they said things that were similar to what NetChoice said, but not identical. And they petitioned for time, and they got it. And so, they're there. Justice Thomas, one of the first things he says is, "I'm kind of used to hearing you guys argue on behalf of regulations." And that got kind of a chuckle.

 

But, anyway, the U.S. government argued that platforms are engaged in expressive conduct, that they're not pure conduits, and that the Court can address that issue and take this case narrowly. But it shouldn't go beyond what's been argued below and, in essence, shouldn't start to imagine the parade of other applications that these laws might have, apart from the platforms they were really intended for and aimed at, notwithstanding the fact that, technically, under the definitions of these laws, other internet services could very well be considered part of the regulated community.

 

So, overall, lots of disquiet about the sparse record. Again, they're taking up a preliminary injunction that was applied after a limited inquiry. It's not like there are boxes of discovery to go through to figure out what so-and-so thinks about the application of this law to Uber, for example. And one Justice mentioned concern about land mines, that is, unintended consequences. I think it's great when courts consider that there might be unintended consequences to a decision that they make.

 

Meanwhile, if the law goes into effect, individuals can collect fines and it will be very difficult for these same big platforms to operate the way they've been accustomed to operating. So, can the Court address the question of whether or not the free speech precedents that we all know — from Tornillo and Hurley and Buckley and others — apply to social media? Are social media platforms engaged in that kind of expressive speech, and protected?

 

And so, I didn't hear a lot of support for the notion that these platforms are common carriers. I thought that was interesting, because I really think that that is the thing that the states really need to have ruled in their favor. But there was a lot of concern about, again, inadequate record below, moving too quickly into an area they don't fully understand. I think that's great. I think someone, at some point, should mention that the state's argument that they have a First Amendment interest in regulating platforms to improve access to users is a little bit of an emanations and penumbras kind of thing.

 

And I kind of stand with the members of the bar who don't think that that's a very good way of doing constitutional law. But, nonetheless, that's what I heard. And that's what I think. If I had to make a prediction, I think they will vacate. Actually, I think they will vacate based on the sparse record. But if they do reach the First Amendment questions, I think NetChoice will succeed. And that's what I have to say.

 

Chayila Kleist:  Got it. Well, thank you so much for that summary of the cases, the facts associated with it, as well as oral argument. It looks like we already have a question from our audience. So I'll just give a brief plug to our audience that if you do have questions, please feel free to submit those now, as we transition into that time.

 

Our first audience question asks, "Texas mentioned the ‘affected with a public interest’ idea from Munn v. Illinois. Is this the proper question? Is there any hope for the recovery of this idea, especially as applied to cases like Charles Wolff Packing from 1923, which argues that some sort of market flaw, like scarcity, is required for the regulation of wages? Or how might the lenient approach of Nebbia v. New York assist?"

 

Allison R. Hayward:  That's an interesting question. I'm not sure. It's a terrible answer. No, I don't know. What do you think, she says, doing the law professor thing of turning the question around to the questioner?

 

Chayila Kleist:  Interesting. Well, I mean, interesting question. It will be interesting to see if there are answers that come forward. Moving to another question, you mentioned that one of two questions got addressed in oral argument. You've touched on this a little bit, but were there arguments you were surprised to see at oral argument? And, conversely, arguments you were surprised not to see raised?

 

Allison R. Hayward:  Yeah. So, I think that it is incumbent on the states to really explain their position on platforms as common carriers carefully without getting over their skis, as it were. So, I do continue to see it asserted — and it was asserted in amicus briefs, and it was asserted today — that these platforms do no analysis or filtering of content before it goes on the platform, that this is all sort of a post-hoc thing. And, in fact, that is not true. 

 

There's a lot of stuff that happens between the moment you type that post and the little ball goes around, and the post comes up on the platform. You'd be amazed. And there is a process at that point to keep stuff that really shouldn't be on the platform off the platform. So, the notion that there's just sort of this conduit, and then, later on, they get picky and go back and take things down isn't the way that works. So, I think that it needs to be a careful argument to be consistent with the facts. 

 

The common carrier argument needs to be a careful argument to be consistent with the facts and also consistent with why, under common law principles, states get more latitude to regulate common carriers, and it oftentimes has to do with things that these platforms just don't manifest. Questions that weren't asked: I really was disappointed that they didn't get to the second argument on the individualized explanation. Because there were assertions made at the Fifth Circuit that were adopted by the Court, again, that, "Well, but these platforms already have these avenues for appeal. And so, we're not really asking them to do anything differently from what they're already doing." And the answer is, well, actually, yes, you are.

 

      There is nothing nearly as sophisticated as the individualized explanation requirement available today to a platform user. And that's because it would be a vastly different business model to generate that kind of information. Just to give you an example from my experience with the Oversight Board, the board members were interested in knowing why a particular action was taken. Why did you allow this particular piece of content to stay up, or why did you take it down? We'd get some of the history of it from the tool we used when we were looking at the content. But what happened? Why was this mistake made, for example?

 

And to get a root cause analysis that fully explained what had happened in the platform to lead to that final result took a couple of weeks. Now, obviously, if the law said they had to do that for everyone, they would set up a different system, and it wouldn't take a couple of weeks. They would do it very quickly. They would do it in a very sort of pro-forma way. And they would redesign their business model to give you what the law required. But I don't think that would be, actually, at the end of the day, very useful to a user.

 

      So, I was disappointed they didn't get into that. Because I think there is a lot of sympathy for a sort of a due-process kind of argument. And a lot of users complain about how they've been abused by platforms. They've been taken down. They've lost all their photos. They've been accused of hate speech, and they don't like that. And there's nobody to talk to. And so, there's a lot of appeal, I think, to regulating platforms in that vein. But you need to understand that the platforms don't really collect that kind of information. And it's difficult to reverse-engineer a content decision that's been made that way.

 

Chayila Kleist:  Got it. Thank you. Next audience question relates to the topic of how the justices seemed to interact with this. "Did you note how the justices might break out on the issue, what the dividing lines were?" And then, I'll add to that: what seemed to be the key issues that are splitting them, if it's not going to be 9-0?

 

Allison R. Hayward:  I think you see Roberts and Kavanaugh being much more sympathetic to what I would consider your sort of conventional First Amendment analysis: these are private entities, Tornillo, all that stuff, Buckley. I see Thomas, Gorsuch for sure, much less sympathetic — and Alito probably too — much less sympathetic to that and looking for some way to bring organizations they see as large powerful biased entities into the civic space in a way that's less biased.

 

      And, again, the common carrier argument gives a state more leeway, obviously, to regulate this business than they would otherwise have. I'll be candid, it doesn't hold a lot of appeal to me. Then, Justice Jackson asked some very good questions. I'm not sure I can put her on a side. I thought she was pretty -- she questioned both sides, I thought, with pretty much equal vigor. Justice Sotomayor, I think, betrayed, at one point, that she came into the argument today thinking that she would prefer to vacate and remand.

 

And she was still grappling with the scope of the law and how it applied to other sorts of platforms, and the various sorts of things that the solicitor general of the United States can tell you, "Please don't worry about that." But you're a Supreme Court justice, and if you're concerned that you're going to write an opinion that has tremendous unintended consequences in places that no one has explained to you, you're going to be really reluctant to go out on a limb.

 

      And so, that's what I heard. There is a disadvantage when you're listening to the audio if you think you know that Justice Kagan sounds a certain way or Justice Sotomayor sounds a certain way, and you get it wrong. So that's why I've been reluctant to mention particular justices. I can recognize Justice Alito's voice. I can recognize Justice Thomas' voice really reliably. I think I can recognize the Chief Justice's voice fairly reliably. But I'm not in the press pool. I'm just sitting at home listening to it, so I don't have that advantage. 

 

Chayila Kleist:  Fair enough. Well, thank you. Next audience question relates to Section 230. "Mr. Clement argued Congress devised Section 230 immunity to encourage websites to take down controversial content." And then the questioner says, "But sites don't need immunity if they take down content, only if they leave it up. Isn't 230 just a higher level of New York Times v. Sullivan, designed to foster a robust exchange of ideas by promising sites they won't be responsible for others' content?"

 

Allison R. Hayward:  Yes. And I think that's where some of the debate seems particularly interesting to me, is this whole notion that because there's Section 230, you've already admitted that you aren't managing your site to have any sort of point of view. And, first of all, the First Amendment protects people with inarticulate points of view, just as it protects people with really profound and concise points of view. So, it's like, "What, do I have to be articulate to be protected?" No. Good thing.

 

      And then there's this whole, because Congress passed this protection that, somehow, you've admitted defeat here, which I don't get. Now, you're right. And I think that's a good catch. They're not liable for things they take down. And one of the things that people are particularly concerned about is that with increased state regulation it will just encourage platforms to take down more content and actually be less surveying of their users' First Amendment interest, whatever that is. And that irony there, I think, could be explored a little bit more.

 

Chayila Kleist:  Next question addresses the common carrier argument, which I know you touched on a couple of times, and the nature of social media platforms theoretically being a sort of new public square. Is there anything, either in the argument, or in your own thoughts on it, that would be worth commenting on, in that space?

 

Allison R. Hayward:  Yeah. I think it's kind of sad that platforms are our new public square, because that sort of suggests that the old public square isn't there anymore. And that could lead us into a conversation about the way municipalities regulate common spaces and, potentially, the crisis of homelessness, and blah, blah, blah. Anyway, I don't really want to go there. Also, COVID has a lot to do with it. A lot of people withdrew from other sorts of contact with people because they were told they should. And having that all ramp back up again has been slower than I think people realized.

 

      But then, also, if you're in the public square, I guess the analogy is that the platform is the square and the users are the people talking. But I think that public square, if you're going to acknowledge that, then you have to acknowledge that there's also several dimensions of other public squares sitting there with other people talking. Because it's not just Facebook and Instagram. It's X, which is very different; Discord, which is very different from the last three; Reddit, which is very different; all the commenters on Amazon, because that's user-generated content.

 

And you just have this sort of infinite public square of different squares that you can jump into, depending on what your preferences are, which is not really a public square anymore. It's all these different little pods. Anyway, if it's not obvious to you, I find that analogy kind of wanting, although, like I say, it kind of makes me sad that we don't have the ordinary public square, and we have to talk about this in some sort of analogy.

 

Chayila Kleist:  Fair enough. That's helpful. Next audience question relates to -- I'll just read it.  It will be easier than me trying to sum it up.

 

Allison R. Hayward:  Okay.

 

Chayila Kleist:  "Is it not government action when a government agency actually interferes or threatens government action, should a social media outlet not censor a comment? Was this question addressed in oral argument?"

 

Allison R. Hayward:  No. That's the jawboning question, where the government starts leaning on large platforms to do content moderation in a particular way and sends some nasty emails when they don't. And all I can say is I cannot believe there's emails that say what has been shown in that particular case. That is coming up. And forgive me, I don't remember the exact name of it right now, because I know the name has changed.

 

Chayila Kleist:  It's Murthy v. Missouri?

 

Allison R. Hayward:  But Biden is one of the -- hm?

 

Chayila Kleist:  Murthy v. Missouri, question mark?

 

Allison R. Hayward:  Yes, yes, yes, yes.

 

Chayila Kleist:  Perfect.

 

Allison R. Hayward:  Yeah. Thank you. That's why we have expert professional staff here at The Federalist Society. And that's going to be really interesting because, I think, there, you've got profoundly colorful and bad facts that might lead to a feeling that platforms not only shouldn't be jawboned, but really shouldn't be working with the government at all.

 

And it would be difficult for the large platforms to do the job they want to do, in terms of everything from trying to find pedophiles to trying to keep terrorists from organizing, without some input from the government. Now, the government should not be the tail that wags the dog. The government should be a partner, and, one would hope, a modest partner. But still, I think the fact that that particular case has the facts it has, really, it's pretty bad.

 

Chayila Kleist:  It will be interesting to see, because I know that oral argument isn't for a couple of weeks. It will be interesting to see what arguments are similar, what arguments are different and how that one shakes out.

 

Allison R. Hayward:  But the question was, did that come up in this argument? And, not really. There was a little bit of — I think it was Justice Thomas, in particular — pushing the advocates to talk about at what point does the regulation of a platform turn that platform's speech into government speech? But that didn't really, that wasn't getting traction.

 

Chayila Kleist:  Got it. Next question turns a little bit to the potential outcomes of these cases. You've mentioned your call and then how you think this might go. But I'd love to post some hypotheticals, if it's possible, and sort of think through what the potential ramifications may be.

 

Allison R. Hayward:  Okay.

 

Chayila Kleist:  Assuming that the Court rules in favor of NetChoice, what are the potential responses we might see from states who still want to guide, shape, regulate how social media companies moderate their platforms?

 

Allison R. Hayward:  Yeah. I think what that would do is it would, I hope, encourage states to look at other manners of regulating platform governance that would have salutary effects on users. So, let's say, for example, the complaint when people are bounced off a platform is "I've lost all my stuff. I relied on Facebook to have all my pictures and have all this business information, and this and that and the other thing, I teach piano lessons on Facebook," whatever it is, "and I don't have them anymore. And you haven't explained why. And I've truly lost something."

 

I think there's all sorts of opportunities for consumer protection laws to come in and say "You can't do that. You have to give people some notice, or you have to provide them the opportunity to take their stuff back. Maybe they can't play on your platform anymore. But at least give them their stuff back." Of course, Facebook's view is that it's not your stuff: if you put it on their platform, it's their stuff. Let's play with that one a little bit. And you would find, I think, that, again, it's sort of a due process-y kind of solution.

 

But I think there's some other ways that platforms conduct business that can be regulated that, I think, would find a sympathetic ear with a lot of people, and solve some of the complaints that people have that I think are legitimate. Because they do rely on the platform as sort of an adjunct to their hard drive, or the cloud, in that example. Oh, yeah, and we haven't talked about other places in the stack where you can censor but nobody's talking about them. That's maybe a topic for another day.

 

Chayila Kleist:  Well, it will be interesting when we get there. I'll pose the reverse hypothetical.

 

Allison R. Hayward:  Okay.

 

Chayila Kleist:  What are the implications of this if the Court rules against NetChoice? You touched a little on how there may be more content moderation/censorship, insert correct term here.  Will social media companies be limited in the way that they can do that? What could the ramifications be for smaller social media platforms that aren't directly affected by these statutes? I'd love to know some of the potential outcomes.

 

Allison R. Hayward:  I think the social media platforms that are easy to identify will get blunter and less nuanced — it's not like content moderation is super-nuanced as it is — but it will just get even blunter. So, if you say, "Well, you can't discriminate based on, like, COVID, based on your feelings about vaccines or your feelings about COVID restrictions, or whatever it is," they could just do a search-and-delete for anything that mentions COVID. It doesn't really quite work that way. But you know what I'm saying.

 

You can be even more blunt and be less susceptible to litigation. And, as Paul Clement said, "It will be just puppies." And, again, puppies aren't bad. It's just, not everyone wants puppies all the time. Some people want to talk about COVID. Some people want to talk about Israel and all the hot-button issues that we have. And if this is sort of a new public square, then where else are you going to talk about this stuff?

 

And there are, by the way, platforms that are much, much, more difficult for outside enforcers to penetrate. There are platforms that do not have search functions, so you know where you're going because you really only want the people that you want to talk to to find you. And that's where some of the really crazy stuff happens. And most ordinary people are just not aware of it.

 

Chayila Kleist:  Fascinating. Next audience question, "If the states succeed in persuading the Court that social media companies are susceptible to common carrier regulation, would the states lose all regulatory authority that they're claiming? Could it be that states are preempted by federal law from regulating interstate common carriers?"

 

Allison R. Hayward:  That's interesting. And, you know, that has not come up. And that's interesting. Well, then, also, the whole common carrier thing kind of drives me a little nuts — if it hasn't been apparent — partly because common carriers still have terms of service. They still are able to regulate conduct on the carrier. Meanwhile, we're talking about a common carrier that is free. And oftentimes — not most of the time, but oftentimes — part of common carrier regulation involves regulating price. And the price is your time and your eyeballs. The price isn't money. Anyway, yeah, I think that's a really interesting point. And I don't think it's been fully explored.

 

Chayila Kleist:  It will be interesting to see the literature if it does pop up. Next question, "If NetChoice loses, do you think the Court could leave open the possibility of an as-applied challenge in the future?"

 

Allison R. Hayward:  Absolutely. I think that would be the next step. I think they would lose this preliminary injunction. Things would start up again at the state level with as-applied -- they might enforce the law and you'd have an as-applied challenge. Even if they didn't, you might have challenges based on the First Amendment that were differently argued, or involved different sorts of problems that the platforms have. No, this is not the end.

 

Chayila Kleist:  Fair enough. I don't have any more audience questions. And I think you've answered all the ones I would usually pose at this point.

 

Allison R. Hayward:  Okay.

 

Chayila Kleist:  So, I'll hand it off and we'll get a couple minutes back. Is there any final thought — here is what we need to take away, here is what's worth watching as these decisions will drop in the next couple of months — that you would leave with our audience?

 

Allison R. Hayward:  Yeah. I think the one thing I would sort of emphasize — this is me on my little soapbox — is the management. It is true that these platforms are managed by people with conventional Silicon Valley values. So, if you think of a reasonably young, very smart Stanford graduate, that kind of politics, they really think they're trying to do the right thing. They don't think they're biased. And I really would like the argument to be framed as trying to encourage platforms to do a better job of working with messages they may not fully understand or be sympathetic to.

 

The bias really isn't in the content moderation. If you look at your community standards, they're pretty anodyne. And the way the classifiers are trained to apply those is very neutral. I don't know how else to say it. Algorithms don't have viewpoints. They're trained to look for a particular kind of content adjacent to a particular kind of content, with maybe some context, and find it. They're dumb. And so, I don't want to understate the fact that I think that a lot of social media decision-makers have a particular point of view that isn't necessarily the only one that should be entertained.

 

But when content moderation is doing its job, that viewpoint doesn't really seep into the management of the platform. When that viewpoint does seep into the management of the platform is in these exceptional cases where you have to go to the high management to decide if Donald Trump is going to be taken off of Facebook. And that's not the content moderation process that you or I experience. That's a very different thing. Those are escalations.

 

I'm sure the New York Post Hunter Biden story was an escalation. So that's where the individual judgment comes in and those biases are made manifest. It's not the everyday content moderation stuff. And if people acknowledge that and start thinking about how to get at that -- and it might not be through law and regulation. It might be through just politics and even talking about it. Because there's more than one way to get people to change their behavior. Law is one of them, and it's pretty effective. But there's other ways too. I don't know. That's my soapbox.

 

Chayila Kleist:  Well, thank you. I appreciate that summary of the cases and the oral argument. It ought to be fascinating to see how these all shake out. With that, we can wrap it there. Ms. Hayward, thank you so much for joining us today. I really appreciate you taking time out of your afternoon --

 

Allison R. Hayward:  Well, thank you. This was fun.

 

Chayila Kleist:  -- and lending us your expertise. I know I really enjoyed listening. Thank you, also, to our audience for joining and participating. We welcome listener feedback by email at [email protected]. And, as always, keep an eye on our website and your emails for announcements about other upcoming virtual events. With that, thank you all for joining us today. We are adjourned.