In 2021, both Florida and Texas enacted legislation restricting how social media platforms could limit what users post. The Texas law, challenged in NetChoice v. Paxton, found a sympathetic audience in the Fifth Circuit, but the Eleventh Circuit was much more skeptical of the Florida law’s constitutionality in NetChoice v. Moody. In January 2023, the Supreme Court asked the Biden Administration to weigh in on the constitutionality of these laws. The NetChoice duo is likely to be on the calendar for the Court in this next term, with a decision in 2024.
This webinar will gather a panel of experts to discuss the appeals courts’ vivid differences in approach to issues arising from social media content moderation. The panel will also consider changes in the legal landscape since these petitions were filed, including whether recent Court decisions related to Section 230 of the Communications Decency Act have bearing on the issues raised by NetChoice.
Ryan Baasch, Chief of the Consumer Protection Division, Texas Office of the Attorney General
Allison R. Hayward, Independent Analyst
Jess Miers, Legal Advocacy Counsel, Chamber of Progress
Moderator: Casey Mattox, Vice President for Legal and Judicial Strategy, Americans for Prosperity
As always, the Federalist Society takes no position on particular legal or public policy issues; all expressions of opinion are those of the speaker.
Emily Manning: Hello, everyone, and welcome to this Federalist Society virtual event. My name is Emily Manning, and I’m an associate director of practice groups with The Federalist Society. Today, we’re excited to host a discussion titled “Net Choices: Social Media, Content Moderation, and the First Amendment.” We’re joined today by Ryan Baasch, Allison R. Hayward, Jess Miers, and our moderator today is Casey Mattox, Vice President for Legal and Judicial Strategy at Americans for Prosperity.
If you’d like to learn more about today’s speakers, their full bios can be viewed on our website, fedsoc.org. After our speakers give their opening remarks, we will turn to you, the audience, for questions. If you have a question, please enter it into the Q&A function at the bottom of your Zoom window, and we will do our best to answer as many as we can. Finally, I’ll note that, as always, all expressions of opinion today are those of our guest speakers, not The Federalist Society. With that, thank you for joining us today, and, Casey, the floor is yours.
Casey Mattox: Thank you, Emily, and thank you to The Federalist Society for allowing us to be able to have this panel conversation. So as many of you -- I’m sure everyone who has decided to actually join this call is aware, there are two cases that are pending right now in the Fifth and the Eleventh Circuit where in both cases the Supreme Court has been asked to take those two cases and will be hearing those in its long conference at the end of September. And that’s NetChoice v. Moody and NetChoice v. Paxton, colloquially referred to as the Florida and Texas cases dealing with social media platforms. These deal with two different types of regulations of social media from Texas and Florida. And we’ll discuss the details of those cases as we go, but we have an excellent panel here to have this conversation.
First of all, I’ll start with Ryan Baasch. I’m going to give you all three of them. We’ll start with kind of opening statements from all three. And then I expect and would certainly encourage a lot of really good questions so we can keep this conversation as informative as it possibly can be. Ryan Baasch is the Chief of the Consumer Protection Division in the Texas Attorney General’s Office. He previously served as an assistant solicitor general where he worked on Texas’ most significant cases defending the constitutionality of state statutes and challenging federal regulatory programs.
He was counsel of record at the Fifth Circuit in NetChoice v. Paxton where the court concluded that Texas could constitutionally require dominant social media platforms not to discriminate against users based on viewpoint. Ryan earned his law degree from the University of Virginia, wahoo wah, where he was an articles editor of the Virginia Law Review. After law school, he clerked for Judge Karen Henderson of the D.C. Circuit and also practiced law at the D.C. and New York offices of Latham & Watkins where he litigated constitutional challenges to state statutes and administrative law challenges to various federal regulatory programs.
Jess Miers is Legal Advocacy Counsel at the Chamber of Progress. As a lawyer and technologist, Jess is primarily focused on the intersection of law and the internet. She has written, spoken, and taught extensively about topics such as speech and Section 230, content moderation, intellectual property, and cybercrime issues. Before joining the Chamber of Progress, she was a senior government affairs and public policy analyst at Google where she oversaw state and federal content policy portfolios. She also worked while there with Google’s litigation teams on key online speech issues including the ones that we’ll be talking about today.
Allison Hayward most recently served as the head of case selection at the Oversight Board. Previously, she was a commissioner at the California Fair Political Practices Commission, a board member at the Office of Congressional Ethics, and an assistant professor of law at George Mason University School of Law. She also previously worked as chief of staff and counsel to Federal Election Commission Commissioner Brad Smith and practiced election law in California and in Washington, D.C. She previously served as a law clerk for Judge Danny Boggs of the Sixth Circuit. And with that, I’m going to turn it over to Ryan to get this conversation going.
Ryan Baasch: Thanks very much, Casey. So I’ll start with a caveat that although I was counsel of record in NetChoice v. Paxton, I’m speaking here only in my own personal capacity right now. I find it very helpful when talking about NetChoice to -- when talking about both of the cases to -- and both Texas and Florida’s laws to start with what we’re really dealing with here, why these laws were passed. And I think NetChoice and its allies on that side of these cases like to say that these laws are about hate speech. They prevent the social media platforms from being able to limit hate speech on their platforms.
But I think it’s very important to remember that these laws were really about truth. I don’t think it’s disputed anymore that the social media platforms frequently censor truthful speech on their platforms. And there’s a number of examples that are teed up in Texas’ brief on this question. But I think one is particularly amusing, and I like to start with it, which is that the platforms in 2020 at the height of the COVID pandemic were censoring American users who would claim that the pandemic originated in a Chinese laboratory. By contrast, the social media platforms were not censoring the Chinese Communist Party when it made claims on the platforms that COVID originated in America. And so I think that’s a poignant example of the stakes at issue here. This is about truth. It’s not really about hate speech as NetChoice and its allies like to claim.
So I’ll talk a little bit about what Texas’ law is, what it does. Texas’ law, the primary function of the law HB20 is to limit the social media platforms’ ability to censor users based on the users’ viewpoint. And I think that there are a couple of important nuances to that restriction that are frequently lost in the public conversation about this law that are important to tease out. Like I said, the core restriction on the social media platforms under Texas’ law is no censorship based on user viewpoint.
What does that mean that the platforms can still do? They can still do a number of things. For one, Texas’ law does not restrict their ability to speak in their own right in any respect. They can add disclaimers to their users’ posts. That’s the platforms’ own speech. Texas’ law does not regulate the platforms’ own speech. The platforms can also censor based on content, categories of content. So while they cannot censor users based on the users’ viewpoints, they can say this entire category of content is off limits. You can imagine the platforms might say pornography as a category of content is off limits altogether. We’re going to censor all of that type of content. It doesn’t matter what your viewpoint is.
The law also explicitly recognizes that the platforms can censor illegal content even if the platforms would be engaged in viewpoint censorship in doing so. A really important carve out in the Texas law allows the platforms to censor content even based on viewpoint for specific users if the users assent to that form of moderation. So you can imagine the platforms can have some kind of opt-in feature where the platforms say we’re going to continue censoring content as we’ve been doing the past several years for you specific user if you opt into this particular form of our platform. Obviously, I don’t think the platforms have that kind of an optionality right now, but the law does not prevent them from providing that kind of opt-in.
So the NetChoice opinion, the Fifth Circuit’s opinion goes to a number of reasons why Texas’ law is constitutional. I hope to kind of address some of those reasons in more detail in response to Q&A. I’ll highlight just three of them here really briefly. The first is that the platforms’ argument really has no historic basis. The oldest Supreme Court cases, the oldest authorities that the platforms cited in their briefing really began in the 1970s as applied to requiring newspapers to host unaffiliated persons’ speech. By contrast, Texas’ law is supported by a lengthy history of common carriage regulation which we can get into a little later perhaps in the webinar.
The second is that the precedents that the platforms cited do not apply here. For a number of reasons those precedents have been applied by the Supreme Court to technologies that are simply inapposite to the social media platforms’ technology. And the last reason I’ll highlight in the Fifth Circuit’s opinion for why the law was upheld is because the platforms’ First Amendment position here that they should be allowed to censor users’ speech on their platform under the First Amendment is really irreconcilable with how they use Section 230 in other litigation. The use of Section 230 is premised on the idea that the users’ speech is in no way the platforms’ and the platforms should not be held responsible for it. It’s really hard to reconcile that position with their First Amendment argument here which is that they’ve got some kind of First Amendment right to control that speech. So that’s it. I’ll save the rest of my remarks for the Q&As.
Casey Mattox: Excellent. Thanks, Ryan. And Jess, I believe you’re up next.
Jess Miers: Thanks so much for having us and apologies in advance. I’m recovering from COVID. So I will try to speak up here. So I’ve kind of bucketed my remarks in a couple categories. I wanted to actually start with Section 230 since Ryan ended with that.
So just right off the bat here for the Texas and Florida cases the key question for the Supreme Court to decide in both cases is whether social media companies make editorial judgments protected by the First Amendment. That’s the key summary here. Now, while Section 230 is implicated in this discussion, it’s irrelevant to the cases at hand in both NetChoice v. Moody and in NetChoice v. Paxton. Section 230 is a procedural law as many of us know.
I’m getting an echo on that side. Yeah. There we go. Oh, I’m still having an echo. Okay.
Section 230 is a procedural law. It ensures a speedy conclusion of what are inherently First Amendment-based questions. So while Section 230 definitively shields companies and users from their editorial decisions, the law itself doesn’t provide or guarantee any additional rights. That is why the discussion today and in front of our Court should remain central to the First Amendment.
Now, moving to the common carriage question, key to the First Amendment question is whether social media companies can be considered common carriers entitled to a lesser form of protection when it comes to their publication decisions. Now, keep in mind by the way common carriage, number one, as the Eleventh Circuit said you can’t just declare in law that a private entity is a common carrier. But even if they did just declare that, that does not mean that common carriers don’t have First Amendment rights off the bat. So that premise is already flawed in both of the laws as is.
Instead, as we know and as has been held by longstanding precedent, social media companies are private online entities. They are private online publishers. My co-panelist Ryan, he used the word censorship throughout. It’s important to note at least for legal purposes here censorship applies to the government. When we’re talking about censorship, we’re talking about the government’s ability to abridge or infringe on our First Amendment rights. But when we’re talking about private entities, private publishers, just like our offline traditional media and our online traditional media, there is not a censorship component because they’ve always had the First Amendment right to enact their editorial discretion.
Seeing as how the Supreme Court has also ruled in previous cases such as the Manhattan v. Halleck case, which to Ryan’s point noted that a lot of the cases that the petitioners rely on mostly just have to do with old 1970s law, that’s actually not correct. The Manhattan v. Halleck case was decided in 2019 along with several other very relevant precedents right now like the 303 Creative case that was decided just this past summer. In both of those cases the Supreme Court found that these private entities, whether it’s a cable company or a private website, indeed are not common carriers and have First Amendment rights to decide what content they choose to carry. We saw that come up a lot in the United States’ recent -- the Solicitor General’s recent brief where they cited throughout their brief several points from 303 Creative, again, establishing that these websites are private entities subject to protections under the First Amendment.
Moving to the transparency discussion, which I think is really going to be the heart of these cases, as we saw, this was sort of where the Solicitor General disagreed. This is where the Eleventh Circuit disagreed. Here what we’re going to see is sort of a distinction between the general disclosure provisions and the specific disclosure provisions in both of the laws, the Texas and Florida laws. General meaning applying to the service having to give over their biannual transparency reports, having to give over their information about their algorithms, etc., versus the more specialized disclosures which have to do with responding to the user and explaining each and every one of the services’ content moderation decisions.
The Eleventh Circuit failed to -- or rather the Eleventh Circuit rejected any First Amendment compelled speech arguments when it came to the general disclosures. The Eleventh Circuit did, however, hold that the specific disclosures were overly burdensome. The Solicitor General’s brief also reached a similar conclusion, so they also created sort of this generalized versus specialized distinction.
And as NetChoice argued in their recently filed supplemental brief, the distinction is arbitrary as both special and general disclosures carry significant operational burdens. And they ultimately lead to the chilling of expression, which is where we get into sort of the compelled speech doctrine. Despite the United States’ guidance to the contrary, the Supreme Court should grant cert on the transparency questions in both laws, especially as it tees up an important circuit split between the Fifth, the Eleventh, and the Fourth Circuit in light of the Fourth Circuit’s Washington Post v. McManus decision.
And then my last points here, I’m just going to kind of conclude on the impact of these cases. Regardless of where we all fall on our opinions here as co-panelists, I think we can all agree these cases if granted will have a monumental impact on the future of internet law and the existence of user-generated content. As we’ve seen in other states, other states are currently in this sort of race to the bottom to enact similar legislation that just flies in the face of First Amendment jurisprudence. And the Supreme Court has an opportunity to clearly affirm the rights of private online publishers, again, when it comes to the content that they choose to carry, curate, and promote to their users. Additionally, all the transparency questions will have significant implications for the debate surrounding AI, especially when it comes to regulating disclosures regarding training data and biases.
Casey Mattox: Thank you, Jess. And Allison, you’re up. If you are just joining us, please, you can start putting some questions in the Q&A function. And after Allison wraps up, I’ll ask a couple questions, and then we’ll open it up to those questions in the Q&A. So Allison.
Allison R. Hayward: Thank you, Casey. So notwithstanding my previous hats that I’ve worn as a constitutional law scholar, I’m not playing one today. What I’m going to do instead is give you a little bit of the benefit of my experience for the three years I was doing essentially content moderation review at the Meta Oversight Board as a head of that team.
So because Jess just talked about the transparency and in particular the requirements in these various laws related to how platforms talk to their users about what the platform has done with their content, I want to give you a little bit of my perspective on that, which is that although the Fifth Circuit seems to think that this is a relatively straightforward thing to do, something the platforms already do, it is not. So let me give you an anecdote by way of coloring this up a little bit. At the Oversight Board, we would find in our appeals queue content that we thought was sufficiently difficult and controversial and important that the Oversight Board might want to see it. These appeals had been brought by users.
Now, this is an appeals queue that happens after the internal Meta appeals queue, and I’m only talking about how Facebook and Instagram operate because I haven’t worked with Reddit or Spotify or any of the other places that might be part of this. So we would send over some candidates, and Meta would come back with, you know, now that we look at it, we think that we would’ve come to a different decision on, say, these three or these four appeals.
We would call those internally enforcement errors. And we would then -- our board members would be like, can you get an explanation for why the error was made? And we would go back to the Meta staff, and the Meta staff would say we can do a root cause analysis. It will involve six teams and take four weeks for each piece of content. Now, you might say, yes, but we all know Meta’s notorious for having lots and lots of staff in lots of lots of meetings taking lots and lots of time. And I’m going to confess that I kind of observed that myself.
But it is not as straightforward a matter to look at a piece of content and understand where in the various classifiers that piece of content was actioned on and why. The design is not meant to have that kind of transparency in it. It is meant to take an enormous number of pieces of content and filter through them very quickly to make sure that what’s on the platform does not at least in the automatic classification world violate their community standards. And if you’ve bothered to read the community standards of any of the large platforms you will know that they are intricate. This is not Joseph Epstein’s Simple Rules for a Complex World. It’s just not -- I mean Richard Epstein. Excuse me. Joseph Epstein, that was different.
This is not Simple Rules for a Complex World. This is not common law adjudication. This is very, very precisely written statute type, civil law type rules. And so the notion that a platform can say to users whose content has been actioned in a negative way and taken off the platform -- because that seems to be what we’re talking about. We’re not talking so much about stuff that’s on the platform that actually violates the community standards but nobody saw it. We’re talking about the stuff that has been taken off and somebody’s angry because they want to talk about the Chinese virus and Chinese virus is taken down because it happens to match a formula for hate speech for example.
That’s just an example. It’s very difficult to tell any user for any piece of actioned content exactly what happened. So more generally what they say is we found you violated our community standard. Well, the user’s not going to be satisfied with that. They’re going to be like where and when and why did I see it taken down and then bounce back up and then taken down three weeks later?
These large platforms operate that way. They were not built to be transparent. They were built to be quick. They were built to be pretty accurate. Though, complete precision is not achievable. So when states come in and say you need to do this thing and the platform is not built to do it, that makes the requirement kind of impossible. And I think impossible legal requirements are bad policy.
What you end up with is something that is not enforced well or enforced only against certain kinds of things that people get excited about, which isn’t really compatible with the rule of law as I understand it. Or you break the platforms. And there are some people out there who I know who were like yeah, let’s break the platform. They’re bad for society. They’re bad for democracy. They lead to mental health problems with children. Yep. All that may be true. But millions of people find Facebook and Instagram, for example, very useful in keeping up with family, in keeping up with friends, in running a business. And you’ll lose a lot when you break a platform that you don’t recognize because that’s the part that’s actually working pretty well and nobody’s complaining about it.
I don’t carry any water really for Meta. It was an interesting three years working on the Oversight Board. I have lots of views and opinions about how they do things that I will not share with you because I’m not supposed to. But having said all that, I really do wonder if elected officials are taking a moment to take advantage of what they perceive to be a spirit of hostility towards platforms not recognizing there’s a much greater population out there that really finds these platforms useful and would be really poorly served if the platforms could not provide the, by the way, free services that they provide.
And then one last tiny thing, I’ve been talking about my experience was just with the Meta platforms, and I can only really speak from experience with those. But I will observe that more than once I’ve seen people comment on in particular the Texas law, but I think it would also apply to the Florida law -- that these user generated content platforms, that’s a much greater group of platforms than the ones you kind of think of. You think of Facebook. You think of Instagram. You think of X or Twitter -- formerly known as Twitter. Maybe you think of Reddit.
Do you think of Teams? Do you think of Slack? Do you think of Spotify? Do you think of -- I don’t know how many users are putting user generated content on the Airbnb site. But I think the universe of potentially regulated platforms is rather diverse and broader than most of the people who are debating about this stuff want to acknowledge. And so I think there is plenty of opportunity for unintended consequences that would, again, not serve the people who enjoy these services very well at all. And with that, I will stop talking.
Casey Mattox: Thank you, Allison. I’m going to go ahead and kick off with just a few questions. One I’ll open up for anybody that wants to talk about it but I think just to make sure that we have framed the issues well for those who are watching. So I think we talked about the specifics of the Texas law. We talked about the transparency pieces in Florida, but I don’t think we actually got to the key part of the Florida law that was actually struck down by the Eleventh Circuit. And I’ll open this up to whoever wants to talk about that specific part of the Eleventh Circuit’s decision.
Jess Miers: I think I spoke to the Eleventh Circuit’s decision on the transparency piece just a little bit in my opening remarks. That’s that general versus specific. So to sort of reiterate here, the general disclosure requirements basically require internet services to publish I believe it’s a biannual transparency report, and that will include information about their algorithms, their content moderation practices and procedures. It’s considered more of a general disclosure or general requirement because it doesn’t include the specific responses to each and every individual user. And the United States SG and the state of Florida as well seemed to agree with that notion. The Eleventh Circuit seems to agree with that holding as well, too.
When we get to the specific provisions, we’re talking more about for every content moderation decision that a service takes, they owe a user an explanation. They have to say why this specific piece of content, and they have to sort of explain their editorial decision to the user. The argument here is that, well, a more specific disclosure is going to be a lot more onerous when it comes to the operational burdens, a lot of what Allison was just talking about. And it’s more likely to be considered compelled speech under the First Amendment than a general disclosure.
In reality, though, that’s not accurate. From a legal -- if we’re talking just from a precedential perspective here, we have the Washington Post v. McManus case in the Fourth Circuit, the case that the Eleventh and Fifth are now split with. That really goes to show that it doesn’t matter. When we’re talking about a product and the product’s entire point is to deliver speech, which is what these social media companies are doing -- they’re publishing speech. Any sort of mandatory transparency disclosure requirement is going to in one way or another interfere with the service’s editorial discretion and editorial decision making.
The idea in McManus, again, was that the state was sort of asking for these editorial guidelines from the Washington Post to later get to the ability to inquire about certain decisions that the Post took down the road. And again, it’s that inquiry into the decision making that private entities make when they are exhibiting their editorial discretion that the First Amendment prevents the state from frustrating in the first place. So they’re really -- and you see this come up in the NetChoice supplemental brief as well in response to the SG’s brief. There really is no clear difference between the two. And in fact, trying to create an arbitrary distinction would really frustrate the current law that we have with compelled disclosures.
So that’s what the Supreme Court’s going to have to decide, whether they’re really going to take up this discussion or not. The SG’s office is saying that the Court shouldn’t even look into the general disclosures because there’s not enough information. The parties haven’t briefed it enough. Whereas NetChoice has said no, you can’t really do one over the other, especially as all these other states are starting to come up with their own transparency laws like California’s AB 587 that X is currently suing over.
Casey Mattox: If I’m correct, so let me know if I’m not, the Eleventh Circuit also struck down -- left those provisions, the transparency provisions in place. It struck down provisions of the Florida law that would have regulated the platforms’ ability to remove users who were certain public officials. And if folks would like to speak to that part of the law -- Ryan, I see you’ve turned on your microphone there. If you want to speak to that as well.
Ryan Baasch: Yeah. So I think that for our purposes there’s just one core difference between Texas and Florida. Texas’ law like I said protects all users from viewpoint-based discrimination. And that way it can be analogized to public accommodation laws, common carriage laws. It protects every user who wants to use the platforms.
Florida’s law by contrast prevents censorship decisions against journalists and political candidates, not all users across the board. There may be some nuances to the law that Florida might characterize as providing some protections to users across the board, but the key difference is that Florida’s protecting journalists and political candidates. Texas’ law is protecting all users from this relatively narrow form of censorship, just viewpoint-based censorship.
Casey Mattox: All right. I’m going to throw out a pretty broad question here and let people pick and choose which one they want to go with. There are a number of other state laws that have been enacted that sort of weigh on some of these questions. You have the California law that -- transparency provisions of the California law that the Babylon Bee and now X or Twitter, because I’m going to refuse to call it X and continue to call it Twitter. But basically you have that platform, the platform formerly known as Twitter. They’re challenging this California transparency law. You’ve got a New York law that also would’ve required certain actions by platforms. It was challenged, and New York lost in federal court. So I’ll start with those. So there’s kind of two there. If you’ve got other examples you want to point to, some of the other cases and how they compare or contrast to Florida and Texas’ laws here.
Jess Miers: I’ll go ahead and start. I would actually argue that the New York hate speech law which is currently under challenge via the Volokh v. James case, California’s AB 587, I’ll even throw in California’s Age-Appropriate Design Code AB 2273 which was I believe enacted a year ago at this point also under fire with NetChoice v. Bonta. They’re all very similar to the Texas and Florida transparency laws, and we’ve seen similar -- we’ve seen states try to propose similar sorts of mandatory transparency regulations, whether that be about the algorithms or the design of the service. I believe the AADC requires services to do a data protection assessment, which is in actuality very similar to the reporting mechanism that -- or the biannual transparency reports that are required in the Florida law, again, having to do with the services having to articulate, put into writing, explain to the AG upon request why they made the certain editorial decisions that they make, how they make those decisions, and what they are doing to make essentially different designs or different features that would lead to different decisions.
I think it’s really interesting to note, by the way, Governor Newsom when he was pushing back on the 587 lawsuit that Elon Musk and X Twitter brought, he even said in his response it’s a shame that these companies are pushing back when we’re trying to cure hate speech in this state. Well, isn’t the argument, though, that folks have been pushing who are in favor of these mandatory transparency regulations that there is no -- it’s not about enforcing content moderation decisions? It’s just about disclosures, which then raises the question, then, why is the California AG --why do they need 587 to regulate hate speech when 587 is supposedly just a transparency regulation?
And the way that that works, the way that we get there to regulating the content moderation decision is it starts with a mandatory disclosure that the AG can request at any time from the companies. So let’s say the company makes a decision about a piece of hate speech on their service that the AG or the state doesn’t like. They can investigate. They can ask questions. They can say under 587 we’re just asking you how you got to that result.
And then from there what stops them from bringing a 17200 claim, like an unfair business practices, unfair trade practices claim that says, look, your editorial policy under 587 that we require you to have says you take down certain categories of hate speech? It looks like here you didn’t do that, and you use 587, you use these transparency disclosures as the way to get to the ultimate goal of the states here, regulating private entities’ speech.
But they’re all the same. It’s all in the same vein. The New York hate speech law, for example, folks will often say that one is innocuous because it doesn’t have an enforcement mechanism. It just says that they have to have a hate speech policy. But it doesn’t tell the services that they have to take down certain pieces of hate speech or certain pieces of content.
But then when you actually read it, it does have an enforcement mechanism. It says if you don’t adopt the hate speech policy that New York has outlined for these companies, then you are subject to fines. Now, the hate speech policy that New York outlines could actually go beyond what these services already have as their own house rules for hate speech, which means that in essence it’s forcing these companies to over-remove. They have to have stricter regulations about content that the services would rather just keep up themselves. And again, that’s another way that we’re seeing these pure transparency disclosures actually start to interfere with editorial discretion.
Ryan Baasch: I wasn’t expecting to agree with Jess on too much. I do agree with most of what she just said, though, namely that the New York and California laws are, in my view, not in the same ballpark as Texas’ and Florida’s. Texas and Florida’s laws are designed to protect users, among other ways, by preventing users from being censored.
New York and California’s laws by contrast are dressed up as disclosure requirements, but really what they’re trying to do through disclosure requirements is coerce the platforms to censor forms of speech that California and New York don’t like. And I think California and New York recognize that they can’t directly tell the platforms you’re going to censor these categories of speech, but these disclosure requirements are a soft method of getting to the same place. So I don’t think that they’re in the same category as Texas’ and Florida’s laws.
Allison R. Hayward: Can I just -- having spent most of my life as a campaign finance constitutional scholar, I’ve seen this mission creep before. But more to the point that we’re discussing today, do you ever wonder, when a state says, well, we want disclosure of your algorithms and your rules, what they really expect to get? Because the terms of service -- and, in the case of Meta, the community standards -- are already there. You can read them.
Is that not enough? If that’s not enough, do we need to see the internal implementation standards, which are very granular, from these platforms? Do we need you to tell us what your known issues are, i.e., the places where you know things aren’t working very well and you’re trying to fix them? Give us an algorithm. Really? Do they really want somebody to spit out code on a little flash drive and say here, you make sense of it?
I think a lot of the people demanding disclosure have mastered certain buzzwords, but they don’t actually know what the implications are when they say them. And “algorithm” I think is one of those. And again, between what states and politicians are asking for, what they really want to do, and what platforms can deliver, there’s a lot of sunlight. And so it’s probably not going to work, and it will end up in tears.
Jess Miers: If I could build on that, one more point. I promise it will be short. Allison’s point is really key here because to Ryan’s point he says it’s for protecting consumers. It’s about consumer protection. I do think that’s a valid and important goal.
But then you have to ask, again to Allison’s point, who’s vetting the transparency reports? Do they really help consumers in the long run? Because when you think about it, all of these services are still going to be writing a transparency report that fits their own narrative, that makes the service look its best. It may not be as granular as it would need to be to be helpful, especially if these services can be held liable under unfair trade practice type laws as well. There’s nothing that really prevents the services from saying, look, our terms of service say we can remove anything that we want, and just being really general, which in turn doesn’t help consumers.
Casey Mattox: I want to ask -- we’ve got -- and I’ll go ahead and combine two questions here. You have two recent decisions, one from the Supreme Court and one from the Fifth Circuit, both obviously particularly important since they came out after the decision in NetChoice v. Paxton. So you have the Supreme Court’s decision in 303 Creative, which we discussed, which I would describe as the Supreme Court saying that the state of Colorado could not, under a nondiscrimination rule, force a creator of online content to create content that she didn’t want to create and express that content.
So it seems like it’s at least relevant to the question. And then the Missouri v. Biden decision which dealt with coercion of social media platforms, at least the way that the circuit decided it in that case, coercion of social media platforms by government officials. And I’m curious -- sort of this is your opportunity to file your motion with the public court here. New cases have come to light. Ryan, how do you see those cases bearing on NetChoice v. Paxton?
Ryan Baasch: Yeah. I’ll start with 303 Creative. I think it’s different for a number of reasons. I think the two core ones, though -- the core of the decision is that in 303 Creative the law would’ve forced somebody to create a form of expressive content. It was stipulated by the parties the content was expressive. So two key distinctions from what Texas’ law does, one, it forces someone to do something. They forced the website designer to create something.
And two -- I should say it forces someone to create something. And two, it forced them to create something expressive. By contrast, Texas’ law just prevents something. It prevents the platforms from engaging in censorship. And the censorship itself I think is very hard to characterize as expressive. So a number of reasons why the Texas law should not rise or fall based on anything in 303 Creative.
Missouri v. Biden presents what I think is a much more interesting theme to focus on. I don’t know how legally relevant anyone thinks Missouri v. Biden is to the Texas and Florida laws, but there’s something thematic about what’s happening in Missouri v. Biden that’s very relevant. Namely, in Missouri v. Biden I think it’s pretty clear that it’s very hard for the court to tailor an appropriate injunction that prevents the federal government from engaging in illegal activity with the platforms but not preventing them from engaging in lawful activity, just flagging things that maybe shouldn’t be on the platforms but not coercing the platforms to censor that speech.
Where the rubber meets the road, though, is that if the Texas law is upheld, it largely solves the problem you have in Missouri v. Biden. The platforms will be under a legal obligation not to censor based on viewpoint. It won’t matter that the feds are coming to them and coercing them and saying you need to censor this content because they’ll be able -- in fact, the Texas law gives them a benefit because they can raise the Texas law in that circumstance and say, look, we can’t do that without getting in trouble. Right now when the FBI comes to the platforms and says we need you to censor this content; why won’t you do it, the platforms don’t have anything to say. There’s no legal obligation, at least while Texas’ law is frozen. There’s no legal obligation that they’d be violating if they censor the content the feds ask them to censor. And so I think that Texas’ law offers a beautiful solution to the problem that Missouri v. Biden presents.
Jess Miers: I’ll offer the alternative perspective here. So let’s start with the Missouri v. Biden case. Essentially the Missouri v. Biden case no matter where you come out on it or where your opinion lands on how it should have come out, the point of that case is to say that these government actors cannot what we call jawbone private entities like the online services into doing their bidding. And the irony of that, I would actually argue that it contradicts the Texas law because I disagree that the Texas law does nothing.
The Texas law forces these services to carry content that they would not otherwise choose to carry. Let’s take that COVID disinformation for example. A lot of these services, they have to cater to their advertisers. That’s how they make money. And these services do not want their content next to COVID disinformation. They don’t want it next to hateful speech that may come from somebody who is categorized as a political candidate. So in forcing those services to not take that content down it is forcing those services to carry a message that they do not want to agree with.
And that is exactly what happened in the 303 Creative case. The wedding invitation -- the owner of the wedding invitation website did not want to carry the message that she supports homosexual relationships. She felt that if she were to create that invitation, it would then carry the message that she as a Christian does not support that -- that she supports something that is outside of her faith that she regularly disagrees with.
Again, the parallels are very clear here. If a website wanted to take down something that’s COVID misinformation related that’s coming from a political candidate, which is defined very broadly by the way, and the service is prevented by Texas, a state authority that is telling the service, hey, service, you cannot enact your editorial discretion here; you cannot engage in content moderation here; you have to keep that content up against your will, then what that is essentially saying is hey service, you have to carry this content. And that is exactly the type of issue that the Missouri v. Biden case was getting at was that it’s not just about content that the service needs to take down. It’s in any situation when a state actor is telling a private entity, a private publisher what they can or cannot do with content moderation, whether it’s to keep that content up or to take that content down.
And that’s where I think the Missouri v. Biden case is going to be really fascinating, because if Texas and Florida get their way in the Supreme Court, then these services are going to hit a conflict: if the government pressures them to take content down, the state is illegally, impermissibly jawboning the service. But if they do take it down, then they’re also violating the Texas and Florida laws.
Ryan Baasch: Casey, if I --
Casey Mattox: Go ahead.
Ryan Baasch: -- if I may, I think Jess said something that’s actually quite important to the legal argument here. It really gets to the heart of the matter. I think Jess mentioned that the platforms have to cater to their advertisers and that the advertisers might not like it if certain types of speech remain on the platforms. I think that that’s probably right. The platforms do this because if they don’t do it, they will lose money.
But there’s something really legally critical about that. The editorial discretion First Amendment argument that the platforms have been making is premised on the idea that their editorial discretion is somehow expressive. It’s their expression. But if you recognize, as I think Jess does, as I think a lot of people do, that really what’s happening is not expressive at all -- it’s the platforms saying we want to make as much money as possible; if our advertisers want us to do it, we’re going to do it -- then nothing expressive is happening. Nothing First Amendment protected is happening.
Jess Miers: That’s actually an inaccurate framing of the First Amendment law. We’ve seen very clear precedent that it doesn’t matter if the service is making money. That comes into play when we’re talking about intellectual property perhaps, but we have seen plenty of precedent that says that it doesn’t matter if the service makes money. Newspapers make money. Parade organizers make money. There’s always going to be a monetary aspect when it comes to media. So there has been no court that has said because you make money you have less of a First Amendment protection for editorial discretion.
Casey Mattox: Let’s keep this conversation going because I think this is an important question about whether government is compelling speech here and whose speech it’s compelling. Chris Terry (sp) asks -- so let’s go ahead and jump into the questions. He asked this question in the chat: How do you get around the simple issue that both state laws are state action that compels speech? And I think that maybe pushed the conversation a little bit. In 303 Creative the Court pointed to Hurley -- it had a lot of discussion about the Hurley case -- because this question of exactly whose speech is being compelled was central to the 303 Creative case as well.
So Ryan and Allison, I see you had your -- were turning on your microphone as well. But I think I’m curious about that, applying whose speech is being compelled in these cases and how do you distinguish 303 Creative and Hurley as the Court interpreted it in 303 Creative.
Ryan Baasch: Yeah. So as far as compelling speech goes, I’m not sure if this question is getting at the disclosure requirements, which I haven’t really said too much about, or at the content moderation restrictions. To the extent we’re talking about the content moderation restrictions, though, I don’t think it’s fair to characterize the Texas law as compelling speech. Do state and federal laws compel speech when they tell the telephone companies that they have to treat their users equally regardless of various things, historically including viewpoint? I think not. No one says that AT&T or Verizon are being compelled to speak when those entities are prevented from shutting off subscription services for their customers.
And it’s really the same thing happening with the social media platforms here. Social media platforms are the modern-day method for people to communicate with one another. In the 21st century they are what the telegraph and telephone were in the 19th century. And states beginning in the mid-1800s were saying you cannot kick off users for various reasons. No one thought that created a First Amendment problem for the telegraph and telephone companies. No one thought it was compelling speech on the telephone and telegraph companies’ behalf. And the same logic applies here. The fact that we’re talking about new technology should not change the legal answer.
Casey Mattox: Allison?
Allison R. Hayward: I’m not playing constitutional lawyer here, so I’m going to leave the common carrier stuff to Jess. What I do want to share is something that I don’t think people realize or talk about very much, which is that in most business situations historically you’ve had some sort of idea where your user or customer, whatever you want to call them, is territorially. Platforms do not have that.
They have some idea where you are. If you are like the vast majority of users on Facebook or Instagram, you’re probably not using a VPN. But where the problems come in is that the particular edge users -- or edgy users -- oftentimes do use VPNs. So if you’re counting on a state’s law being territorially applied to the people in that state, you’ve run into a problem right away: you don’t know who those people are on the platform.
You can’t say for sure that any particular person is in Texas or is in Florida or is in the Virgin Islands or is in California because the problem children online know how to use VPNs. And they’re going to be the ones that I think are going to be raising some of the questions that are most interesting, and they’re going to be pretty hard to tie down.
Casey Mattox: Sorry, Jess. I see you. I want to go ahead and ask, because I think this really gets to the crux of the question, at least with the Texas law: can these platforms be treated as public accommodations -- I’m sorry, not as public accommodations, as common carriers? Because that’s obviously one potential difference between Lorie Smith in 303 Creative being required to create a website or post a website that she doesn’t want to have up and someone subject to the Texas law being required to leave tweets or Facebook posts up.
So let’s have that conversation. I think I’m curious about what the historical basis and argument from Texas’ perspective is for extending common carrier regulation to the platforms and sort of how one extends common carrier regulations. Is this like Michael Scott’s “I declare bankruptcy”? Like, we declare you a common carrier and so therefore it is true? What is the test, and how does that test get applied here?
Ryan Baasch: Yes. Thanks, Casey. I think this is an important question. The declaration -- speaking strictly for myself, I think the declaration that they’re common carriers is irrelevant. There are historically based tests to determine whether a company can be subject to common carrier requirements.
One of the key characteristics of a common carrier historically has been that the entity be engaged in the transportation or communication industries. I think it’s quite clear that the social media platforms are the 21st century communications providers. They’re not identical to telephone companies, but I don’t think the common carrier doctrine died when we moved past telephone companies. The platforms have never really answered this legal point, which is: if we can do this to telephone companies, why can’t it be done to you too?
Another one of the criteria to determine if a company can fairly be subject to common carriage requirements is whether it is open to the public at large on essentially the same terms. And the social media companies are open to the public at large on essentially the same terms. They say as long as you comply with our acceptable use policy, our community standards, whatever that is, you’re good to go. We let everybody in. In fact, a lot of people can access large parts of the social media platforms without even creating an account. You can view a lot of things on Facebook, Twitter, Instagram without even creating an account.
These platforms are open to the public at large in the most classic sense, just like telephone companies were, just like telegraph companies were starting in the 1800s. And the telegraph and telephone companies did make these kinds of arguments in the 1800s that you can’t apply what was then -- you couldn’t apply this common carriage doctrine to them because they were new technologies. And the courts rejected those arguments for the same reason they should reject them here. We’re talking about 21st century communication providers. This is how people communicate with one another. The telegraph and telephone companies tried to perform similar forms of censorship.
In the 1880s -- this is in Judge Oldham’s opinion for the Fifth Circuit -- they tried to censor users for many reasons, including for political reasons. And the states said no, you can’t do that. You’re going to be subject to these requirements just like mail carriers were before them, just like many other types of industries were before them dating back hundreds of years. I don’t see any principled reason why they can’t be extended to the social media platforms now.
Jess Miers: Yeah. So to push back a bit, we’re starting from the premise that social media companies are common carriers like the telephone companies, like the telegraph companies. But that’s a flawed premise as we’ve seen from the cases like Turner Broadcasting, Denver Area. The major test here to decide whether a platform or whether a service should be considered a public utility, should be considered a common carrier has to do with how much control the service can exhibit over the message and the expression that the service is delivering.
In the case of a telephone company, in the case of a public cable company, the telephone company has no control over the message that’s delivered between people using the actual service. What they can do is turn the service on and turn the service off. Same goes for mail carriers: mail carriers can’t interfere with the actual messages that are being expressed. And because of that, by the way, we don’t associate a conversation over a phone call or the telegraph or whatever -- we don’t associate those messages with the actual telephone company.
We don’t say that AT&T is speaking when its users are speaking to each other. We don’t say that the U.S. Postal Service shares the message of what it delivers in the mail. These are very clearly core public utilities that don’t have an expressive component when it comes to the entity that’s delivering the message.
This is not the case for social media. And not only is it not the case when it comes to the Texas and Florida cases, but we have seen 60-plus cases in the lower courts across the different circuits that have reached this conclusion themselves, because social media companies do have control over the messages that they deliver. They do have editorial discretion. When Elon Musk decides that he wants to have anti-Semitic content posted on Twitter and he wants to leave that up, that does say that Elon Musk and X approve of, carry -- align with anti-Semitic messages, because they do and they can exhibit control over those messages by removing the specific pieces of content.
We’re not talking about the one to one communication that happens over a telephone. We are talking about an online publisher in the same vein as Fox News, in the same vein as The New York Times. Fox News gets to decide who they put up there, who they call as an expert, and the types of messages Fox News wants to carry. And by making those decisions, they also make decisions about who they don’t want to have up there and the messages they don’t want to carry. And the same thing with The New York Times.
The same thing happens with these internet services. When they make a decision to take content down, that is the internet service expressing a message that says we do not agree with this kind of content and we do not want this kind of content on our private services. That is entirely the reason why these services cannot be considered common carriers because they are functionally publishers under the First Amendment.
Allison R. Hayward: Can I just get in here? I’m not sure the intent and desire of the platform translates as straightforwardly into content moderation as Jess has just implied. But I also want to suggest that although I don’t think social media platforms make sense as common carriers, they’re part of a stack of internet operators that give you the service you enjoy when you get onto the internet. So I think there’s a possibility that -- I read Philip Hamburger’s brief, and I think his arguments make more sense if you’re talking about hosting companies or other sorts of providers of transmissions or plugins, I don’t know.
But there are various aspects to the stack of technology that delivers you your Facebook content that you don’t see and that users don’t remember to think about, where censorship could occur. So for example, Cloudflare is a company that provides various kinds of security services to users and in particular protects sites from denial-of-service attacks. Cloudflare decided it did not want to support Kiwi Farms anymore because Kiwi Farms is frankly a disgusting, awful site in my view, in my opinion. It’s not libel if it’s my opinion. Anyhow, so they stopped working for Kiwi Farms.
And I don’t know if Kiwi Farms is still an extant thing. I think it might be. But there’s other players besides just the social media platforms that enter into this question of common carrier regulation and what companies can do to signal that they don’t like opinions that other people might find legitimate. And so I would entertain that when we talk about the common carrier legal category, we remember that there’s other aspects of the stack where individuated decisions about content aren’t possible but wholesale I won’t work with you; I will work with you can happen. And how that plays into the common carrier analysis I think is really interesting.
Ryan Baasch: Okay. Casey, if I may, just a quick second to respond to both Jess and Allison. I think there’s some slippage between what Jess’ legal position is here and what Allison describes as the factual realities of how the platforms actually censor. The core legal question, getting back to basics here, is whether the platforms are engaged in something expressive when they censor content. That’s the core First Amendment argument. If it’s not expressive, they don’t have a First Amendment protection, I think.
But the way that Allison describes platforms’ operations I think reveals that there is nothing expressive going on here when you’re talking about censorship. When she opened, she mentioned that oftentimes Facebook can’t even tell you why they censored someone. They say we need to convene six meetings that are going to take four weeks to tell you why you got censored.
When a newspaper doesn’t want to run someone’s column, when Fox News doesn’t want to host somebody, they’ll tell you right off the bat here’s why we’re not hosting you. That’s not what’s happening on the platforms. If the platforms cannot even articulate why they’re doing something, I don’t think you can characterize it as First Amendment protected. We would not accept this in any other industry. We wouldn’t accept people being kicked out of a restaurant and the restaurant owner saying I’ll get back to you in four weeks as to why I kicked you out. I don’t even know why I kicked you out now. We wouldn’t accept it. It shouldn’t be acceptable here.
Jess Miers: Except that restaurants aren’t speech providers in that regard. Again, we see a lot of comparisons to non-speech providers, and obviously the First Amendment has a stricter scrutiny for providers of speech, which again restaurants are not.
Casey Mattox: (inaudible 00:58:59) because we are coming up on the end of our time. We could keep this conversation going for a while. There are a lot of topics to explore here, but I’m going to wrap with a quick run-through. We’ve got these two cases coming up to the Court in the long conference. The Solicitor General, at least, has encouraged the Court to take at least portions of those cases. What do you think is going to happen? Will the Court take the cases? And if you want, offer a quick prediction of what you think is going to happen. Jess, we can start with you and go backwards.
Jess Miers: Absolutely. I think it’s likely that the Supreme Court’s going to grant cert in both cases. What I’m interested to see is whether the Court will combine them. Ryan listed some good distinctions between the laws, and I agree with that. However, the core question is First Amendment-based, and on some of the stuff we discussed today I think there would be very much an overlap between the two cases. So I’m curious to see if they’ll be joined.
Based on what we saw in Gonzalez v. Google, 303 Creative, even Twitter v. Taamneh, Manhattan v. Halleck, I can -- I hate to make predictions. But I would not be surprised if we got a very strong First Amendment conclusion here based on the makeup of the bench as well that internet services, they are indeed private publishers entitled to editorial discretion. What I’m not as certain about are the transparency provisions. I think that’s got a lot more room for argument. I think the points that Ryan raised are fair. And we’ll have to see where the Court comes out in that. Either way, though, it’s going to have a ton of impact when it comes to the future of the way that these internet services operate.
Casey Mattox: Allison.
Allison R. Hayward: I predict they will grant cert. When cases are joined, it sometimes can get messy with the advocates. I would prefer to see them grant cert in one and then reverse and remand the other depending on the holding of the first case. And that’s just me being an old fogey. I think it’s cleaner to have one case, one record, one set of arguments. And yeah, I think they will reverse the Fifth and uphold, for the most part, the Eleventh. But yeah, the devil’s in the details, and so we’ll see.
Casey Mattox: Ryan, I’m waiting with bated breath to find out whether you think -- what the Court’s going to do with NetChoice v. Paxton.
Ryan Baasch: It’s hard to make Supreme Court predictions. Just really quick, I know we’re running out of time. I think more likely than not they grant. And certainly a ruling against Florida does not control the Texas case because of various differences in the laws. So I think they’re likely to combine the two if they do grant.
How they’re going to come out I’m not going to offer a prediction on, but I will say one more thing, which is that I think there’s a good reason for the Supreme Court not to grant, namely because these were pre-enforcement facial challenges to the laws. Judge Oldham spent some time in his opinion discussing why there are various problems with pre-enforcement First Amendment challenges and why it would be better for this to come back in an as-applied posture, maybe in the context of a specific censorship decision. I think that’d be a better context for the Court to take these cases. So I think there’s at least a reason for them not to take them now.
Casey Mattox: Great. Thank you. And back over to Emily.
Emily Manning: So on behalf of The Federalist Society, thank you all for joining us for this great discussion today and thank you also to our audience for joining us. We greatly appreciate your participation. Check out our website, fedsoc.org, or follow us on all major social media platforms @fedsoc to stay up to date with announcements and upcoming webinars. Thank you once more for tuning in and we are adjourned.