Litigation Update: Volokh et al v. James

Event Video

Listen & Download

In early December, Prof. Eugene Volokh, Rumble Canada, and Locals Technology filed a complaint in federal court against New York Attorney General Letitia James, seeking to stop a new New York state law from taking effect. The suit challenges a recently enacted section of New York’s General Business Law, “Social media networks; hateful conduct prohibited.” 

The new law originated, in part, as a response to a 2022 mass shooting in Buffalo that left 10 dead as the result of what is alleged to be a racially motivated crime. The law aims to restrict "hateful" speech on social media platforms and requires social media networks to “provide and maintain a clear and easily accessible mechanism for individual users to report incidents of hateful conduct.” Additionally, the networks must establish and publish a policy outlining how they will “respond [sic] and address the reports of incidents of hateful conduct on their platform.” 

The State of New York has asserted that hate speech moderation of this sort can be a useful tool in preventing hate crimes. Plaintiffs are represented by the Foundation for Individual Rights and Expression (FIRE) and argue that the law violates the First Amendment and forces social media networks to police their users. They further submit that enforcement of the law requires a subjective determination of what constitutes “hateful conduct,” thereby instituting viewpoint discrimination and chilling constitutionally protected speech. 

Please join us for this Litigation Update from Prof. Eugene Volokh.

Featuring:

Eugene Volokh, Gary T. Schwartz Distinguished Professor of Law, UCLA School of Law; Founder, The Volokh Conspiracy

---


 

*******

As always, the Federalist Society takes no position on particular legal or public policy issues; all expressions of opinion are those of the speaker.

Event Transcript

[Music]

 

Dean Reuter:  Welcome to Teleforum, a podcast of The Federalist Society's Practice Groups. I’m Dean Reuter, Vice President, General Counsel, and Director of Practice Groups at The Federalist Society. For exclusive access to live recordings of Practice Group Teleforum calls, become a Federalist Society member today at fedsoc.org.

 

 

Sam Fendler:  Hello and welcome to this Federalist Society virtual event. My name is Sam Fendler, and I'm an Assistant Director of Practice Groups at The Federalist Society. Today, we're excited to host a "Litigation Update: Volokh et al v. James," with Professor Eugene Volokh.

 

Professor Volokh is the Gary T. Schwartz Distinguished Professor of Law at UCLA School of Law and Founder of The Volokh Conspiracy. He is a prominent constitutional scholar, particularly in the area of the First Amendment, and has written and taught extensively on the subject. If you'd like to learn more about Professor Volokh, you can find his full bio at our website, fedsoc.org.

 

After the Professor gives his opening remarks, we will turn to you, the audience, for questions. If you have a question, please enter it into the Q&A function at the bottom of your Zoom window, and we'll do our best to answer as many as we can. Finally, I'll note that, as always, all expressions of opinion today are those of our expert, not The Federalist Society.

 

With that, Professor, thank you for joining us, and the floor is yours.

 

Professor Eugene Volokh:  Thanks. Thanks very much for having me, especially for something about my own case. I've litigated quite a few cases as a lawyer. This is a rare case, not the only case, but a rare case where I'm actually the lead plaintiff. So I very much appreciate this opportunity.

 

So this is a case in which the Foundation for Individual Rights and Expression is representing me, the video platform Rumble, and I think the cousin platform Locals, cousin to Rumble, in a lawsuit against the State of New York, against the New York Attorney General, with regard to a newly enacted New York law that deals with supposed "hateful conduct." So this is New York General Business Law Section 394-ccc, and its title is "Social media networks; hateful conduct prohibited."

 

      If you look at the text of the law, it doesn't really deliver on the promise of the title. I don't want to say that the law itself prohibits hateful conduct—we'll see in a moment what it means by that—but it is deliberately an attempt to target certain viewpoints that some loosely label hate speech on the internet and to try to pressure social media platforms into restricting such viewpoints. We're challenging that on First Amendment grounds, and there will be a preliminary injunction hearing on Monday on this very question in federal court in New York.

 

      So the law defines hateful conduct as meaning "the use of a social media network to vilify, humiliate, or incite violence against a group or class of persons on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity, or gender expression." So they label it conduct, perhaps, to fuzz over a little bit the fact that there's a free speech clause that might govern this kind of behavior, but this is one definition of what some people have at times labeled hate speech.

 

      Obviously, this speech is generally protected by the First Amendment. It's possible that certain kinds of incitement of violence, incitement that is limited to advocacy that is intended to promote imminent unlawful conduct and is likely to do so, might be punishable, or that category would be punishable if it were just a general prohibition on incitement. This kind of targeted restriction on incitement based on certain categories is itself probably unconstitutional under the R.A.V. v. City of St. Paul decision. But certainly speech that vilifies and humiliates, whatever that means, is constitutionally protected. Look at cases like Snyder v. Phelps and Hustler v. Falwell and the like.

 

      So the law begins by defining this category of speech, which includes a great deal of constitutionally protected speech, and then also defines social media networks to mean service providers which, for profit-making purposes, operate internet platforms that are designed to enable users to share any content with other users or to make such content available to the public. So that does include networks like Twitter and Facebook, but it also includes our blog, The Volokh Conspiracy, which has a comments section. That comments section is designed to enable users to share content with other users and make content available to the public. We are operating for profit-making purposes. It's a very, very small profit that we make. We make a little bit, why not? And so we are covered by it, as are Rumble and Locals as well.

 

      So those are the definitions. Then what are the requirements? Well, the law says that a social media network that conducts business in the state, and we certainly know we have lots of readers in New York and a couple of our co-bloggers are in New York, shall provide and maintain a clear and easily accessible mechanism for individual users to report incidents of hateful conduct. So we have to have a mechanism for people to report such things. And then it goes on to say each social media network shall have a clear and concise policy on its website which includes how such social media network will respond and address the reports of incidents of hateful conduct.

 

      So we have to provide a mechanism for people to complain. We have to have a policy for how we deal with these complaints, and we have to publish it. So that's a speech compulsion. We're compelled to create and publish a policy. And, at least as I read it, it says the policy shall include how such social media network will respond and address reports of incidents of hateful conduct. So it sounds like there's a second speech compulsion which is that we have to respond to those incidents.

 

      Now, it's an interesting question whether these kinds of speech compulsions would be permissible if they're imposed generally, say, a law saying every platform has to have a policy describing what it does by way of moderating comments of any sort, whether hate speech or anti-government speech or anti-police speech or libelous speech or blasphemous speech, whatever else. It's an interesting question whether that kind of content-neutral policy -- excuse me, content-neutral requirement that the platform have a policy and provide responses, whatever those responses under the policy might be, is constitutional.

 

      But the one thing that's quite clear about this is that this is a viewpoint-based law. It is a law that requires a policy for dealing with hateful conduct, which is to say "hate speech," quotation marks literally from the statute for hateful conduct, scare quotes for hate speech, not a label that I endorse, but it's obviously a label that people often use. So that's a set of viewpoints. We have to have a policy for dealing with those viewpoints. We don't have to have a policy for dealing with the opposite viewpoints or for dealing with other viewpoints, as I said, like anti-government, anti-police viewpoints, viewpoints that target and vilify people based on political affiliation or social class or whatever else.

 

      And my view as an academic, as a blogger, and as a client, is that that violates the First Amendment, that the government can't mandate policies for dealing with particular kinds of viewpoints, can't mandate responses to complaints with regard to particular kinds of viewpoints, that the First Amendment prohibits that kind of viewpoint discrimination. Again, I should stress, on its face, despite the title which says "hateful conduct prohibited," this law does not actually prohibit these kinds of comments, does not actually require us to prohibit them, but it does require us, based on the viewpoint of certain kinds of speech, to take certain kinds of action. Generally, it compels us to put up a policy and compels us, at least as I read the law, to respond to reader complaints.

 

      So I have to acknowledge, this is not as big a burden as an outright prohibition, but viewpoint-based restrictions are impermissible even if they impose very modest burdens. And also, my sense is that this is part of a broader agenda, on the part of New York and on the part of others as well, to try to restrict this sort of speech even more. And I think this is best nipped in the bud if possible. Indeed, the State has pointed out that this law was enacted in response to, among other things, the Buffalo mass shooting, which was targeted at blacks. And, I think, the shooter put up a streaming video of the shooting, which was up briefly, and apparently -- people say that he was radicalized by things he saw online. Maybe, maybe not. But in any event, it's quite clear that this law would not, by itself, do anything to prevent such shootings in the future.

 

      My sense is, again, that the New York government is seeing what it can do to try to suppress those kinds of viewpoints. Of course, by the way, much of what would be labeled hate speech consists of viewpoints I entirely disagree with. But I don't like the definition; take, for example, statements that vilify certain religious perspectives. Certain religious perspectives deserve to be vilified. Westboro Baptist Church is a religious group. Those are the people that hold up the "God hates fags" signs on occasion at military funerals. Vilify means to sharply condemn. I certainly endorse that. But certainly, a lot of the attempts to humiliate, vilify, and such based on those categories, I think, are wrong. I certainly wouldn't engage in them myself. It's just that I think the First Amendment precludes the government from taking coercive action, even very modest coercive action, to try to suppress those kinds of views, at least the way that it's doing here. And the way that, again, I see coming down the pike, if modest steps like this one are upheld, which we hope they won't be.

 

      Now, we filed this motion just, I think, a couple of weeks ago basically. And the Court, to its credit, set up an accelerated briefing schedule. So our reply brief is due today. The government's response to our motion for preliminary injunction was filed two days ago. It was due and filed two days ago. So we happen to know what the government's positions are. So I’m going to try to articulate them. Obviously, I'm not an impartial observer here. I'm literally a party to the case, but I'm going to try to do them justice.

 

      So the government says that the law actually doesn't even require responses. As I read their position, with regard to our claim that this compels us to respond to people, it says no, no, no. You don't have to respond to anybody. You could have a policy that says we don't respond to anybody. That is an acceptable policy under the law.

 

I'm not sure how that can be reconciled with the text of the law, which says that the policy must "include how such social media network will [emphasis mine] respond and address the reports of incidents of hateful conduct on their platform." Sounds like we have a duty to respond, even though a response may be we won't take things down. But the State's position is actually we don't have to. We don't have to respond at all. So it suggests that the law, rather than just being kind of a modest step, is almost an empty step.

 

      Now, what about the requirement of the policy? Well, the State says that's a content-neutral requirement because we could have any policy we want. It says, in fact, we could have basically no policy at all so long as we affirmatively state that we have no policy at all. So the State's perspective is we could have a policy that says complain however you want about anything you want, whether it's hateful conduct or otherwise, we don't care. We're not going to take things down, or we'll take things down just in our own discretion, which is, more or less, the way I currently deal with things. You can email me if you want to, and maybe I'll respond, maybe I won't. Usually, I do, but I don't feel obligated to. And almost always, I will not take things down, but occasionally -- actually, I do indeed take down certain things, usually because they're kind of personal insults, especially vulgar ones that seem to me to poison the conversation.

 

      So the government says look, you can have that kind of policy, so therefore our compulsion, such as it is, if it is at all a compulsion, is content neutral. But that can't be right. After all, the law does not require platforms to have a policy to handle all complaints. It requires them to have a policy that handles users reporting incidents of hateful conduct. So if we had a policy that says only if you have a complaint about hateful conduct, email us here, that would be permissible under the law. If, on the other hand, we had a policy that says if you have a complaint about pro-equality speech, email us here, otherwise we don't want to hear from you, that would not be a policy that, I think, provides a mechanism for users to report incidents of hateful conduct. So that would not be allowed under the law.

 

      Likewise, if all that we said was here's our policy for complaining about libelous content or threatening content or anti-American or anti-police content, let's say, not that we have such a policy, but if we did, that, too, I think, wouldn't comply with the law because that would not include how such social media network will respond and address the reports of incidents of hateful conduct. They require policies as to these viewpoints, but they don't require policies as to other viewpoints. That's a textbook example of a viewpoint-based law. It's right there on the face of the statute.

 

      Actually, in Reed v. Town of Gilbert, the Court made clear that even a facially content-neutral law could be content-based, and therefore in some situations viewpoint-based, if it is intentionally targeted at particular content or viewpoints. But here, it's not facially content neutral. It describes certain viewpoints right there on its face. So that's my response, at least, to the claim that it's content neutral.

 

      They also say well, this is just a commercial speech restriction because, after all, it only applies to for-profit enterprises. So it's just like a disclaimer requirement or some kind of disclosure requirement as to commercial products. But of course, even though our website does make something of a profit (from advertising; we don't charge subscription fees by any means), that doesn't make it commercial speech any more than The New York Times is commercial speech. The New York Times is a commercial enterprise. It seeks to make a profit, but the Court has made clear that that's not commercial speech. That's fully protected speech.

 

      Likewise, whatever latitude there is for extra disclaimers in advertising or with regard to the sale of non-speech products, I don't think it would apply here. And in any event, even as to commercial speech, attempts to single out particular kinds of political viewpoints, the Supreme Court has said are unconstitutional.

     

      So I think that's the heart of the State's argument, but I'm sorry, I neglected to mention one other thing. One of the things that the State argues is that this is just a disclosure requirement that, I'm quoting here, "supports the State's interest in preventing confusion among consumers about what happens, if anything, after a report is submitted to social media networks." So it's a consumer protection measure, just to make sure that people understand what they're getting from the network. And you can imagine a situation where somebody says I carefully crafted these comments, I invested all of this effort, I saw the ads that one needs to see in order to access the page, and then the comments were deleted, and that's kind of unfair to me as a consumer. You can imagine that kind of argument. But while that argument might be used to support a law that requires disclosure of policies as to all comments, nothing about the consumer protection argument justifies the limitation in this law to particular viewpoints, right?

 

      The government can't have a constitutionally permissible interest in protecting consumers from certain viewpoints but not other viewpoints or protecting consumers' interest in making sure -- knowing whether their comments will be deleted based on certain viewpoints in those comments but not other viewpoints. So the consumer protection interest would extend as much again to all of the other viewpoints that comments might include. But the law targets only a particular set of viewpoints. It's hard for me to see how the law therefore can be justified as a consumer protection measure.

 

      So that was my argument, or rather, that was kind of my attempt to summarize the State's argument and my response. That's my oral reply brief, such as it is; my lawyers are working on a written reply brief, which is going to be much more detailed.

 

So timetable? As I said, on Monday, there's going to be a hearing before the district court. It's a preliminary injunction hearing. The Court could issue a ruling right there from the bench with an opinion, perhaps, to follow, or, I think probably a little bit more likely, the Court could take the matter under advisement and publish an opinion, presumably soon given that it set an accelerated briefing schedule. I mean, this is lightning fast by the standards of the civil litigation system. We filed our complaint on December 1. We filed our motion for preliminary injunction on December 6. The other side was given seven days to respond. We were given two days for the reply. And then basically less than two weeks after the motion for preliminary injunction was filed, the hearing's taking place. I'm assuming the judge is interested in rendering a decision pretty quickly.

 

      Of course, if it comes out against us, we'll appeal. If it comes out against the State, I assume the State will appeal. You know, the New York Attorney General's Office is presumably interested in defending this law. Sometimes, when one wins at the preliminary injunction stage, they may see the writing on the wall and say it's not worth appealing further. Of course, the appeal would go up to the Second Circuit, and then we'd end up briefing. And we're hoping that there will be an accelerated appeal schedule as well.

 

      But in any event, the case is going forward. I have high hopes. I think we're in the right. Well, of course I would say that. Again, I'm the litigant. But still, still, I think that this kind of facially, clearly viewpoint-based law is unconstitutional even though, indeed, again, contrary to its title, it doesn't actually prohibit "hateful conduct," which is to say the expression of certain viewpoints.

 

      So that's my story, and I'd love to hear what questions people have and what reactions they have. If you have a great argument, we still have time to include it in our reply brief.

 

Sam Fendler:  That's excellent. Thank you so much, Professor, for giving us an update on this case. And we're now going to turn to our audience questions. Again, if you have a question, please enter it into the Q&A function at the bottom of your screen.

 

      Professor, I want to start off by asking you a broad question. The issue of hate speech has been prevalent over the last several years. Of course, there's a lot of misunderstanding. Some people don't know that however you define hate speech, that is still in fact protected speech. And you have others that think, or perhaps they know, that it's protected speech but they think that it shouldn't be that way.

 

      In your complaint, you took some time to discuss why America's First Amendment jurisprudence has unfolded the way it has, why it has rejected classifying hate speech or some definition of hate speech as unprotected speech. And I'm wondering if you could speak more about this evolution, the underlying theory and why you think hate speech, whatever that is, does indeed require First Amendment protection.

 

Professor Eugene Volokh:  Sure. So the Supreme Court's current view, which I think is quite correct, is that the First Amendment protects viewpoints generally because Americans are entitled to decide for themselves which viewpoints are good, which are bad, which are right, which are wrong. In Gertz v. Robert Welch, where the Court actually upheld certain restrictions on libel, which is to say false statements of fact that damage a person's reputation (there are exceptions to the First Amendment), the Court made clear that under the First Amendment, there's no such thing as a false idea.

 

      Of course, there are, I think, false ideas. But, again, each of us must decide that for ourselves rather than having it be forced on us by the government. And there are many reasons for that. One is that in a democracy, people have to be free to talk about whatever legislative proposals there are out there. So with regard to, just to take an example, gender identity, right? There is a hot political debate about whether transgender athletes should be allowed on women's sports teams. At this point, it's pretty lopsidedly against allowing them. That is to say, the polls suggest that. Maybe it'll change, maybe it'll continue, maybe it'll swing back and forth.

 

But that's what a democracy is. It's a place where people can decide, can vote on these kinds of policies, and that includes -- it's pointless to vote on something if people aren't free to explain why they think no, why they think that transgender status is a mental illness, or that even if it's not a mental illness, it's just not fair to allow transgender athletes on women's sports teams because they're not real women, let's say. That's a position that many people take. They're entitled to say that.

 

      Of course, there are other positions that are quite marginal, thankfully, such as the view that we should re-establish race-based segregation or slavery or whatever else. But even there, slavery was abolished by constitutional amendment. That constitutional amendment was facilitated by war. But among other things, the North's willingness to fight this war stemmed in part from speech, and the hostility to slavery stemmed from anti-slavery speech. And that democratic process can't then be stopped by the government saying no, no, no, these views are so bad, they can't even be talked about. So that's the kind of democratic argument for free speech.

 

There's another, flipside argument, which is that while we believe in democracy, we don't really trust, fully trust, democratically elected governments. One reason we believe in democracy is precisely because there's the opportunity to throw the bums out. But if the government had the power to ban certain kinds of viewpoints, it's very likely that it would abuse that power. And even if you like the Biden administration, imagine what a Trump II administration might do. If you like the Trump administration, imagine what a Kamala Harris administration might do, or something along those lines. And, of course, it isn't just at the federal level. There are 50 states out there, and some of them are deep blue. Some of them are deep red. Some of them are a mix. Some of them may have odd movements in them that create all sorts of things that might lead to some sort of administration that wants to censor certain kinds of views.

 

      The First Amendment, by protecting all views, makes sure that the views that we support are protected too. It stops us from suppressing contrary views, but it protects our views. So one question is would you trust the government to decide which views are so evil that they ought to be banned or so harmful that they ought to be banned? And I'm not inclined to trust them especially, again, because even though there may be some governments that I may agree with, some administrations, inevitably, during my lifetime, there have been, of course, and there will be ones that I sharply disagree with. And I think that should be true for all of us. So there are a variety of reasons why we protect free speech, but these are, in a sense, the key ones.

 

      Let's focus more specifically on this hate speech category. One thing to keep in mind is the protection of this -- there have been a few false starts here and there, but it has really been around as long as the Court's modern First Amendment jurisprudence has been around. The Supreme Court's very first case recognizing freedom of expression rights actually involved pretty hateful ideologies, basically revolutionary communism/anarchism/socialism, whatever else, an ideology that led to tens of millions of deaths during the last century. This was in 1931, so it's before the really large number of deaths happened. But it was pretty clear that it was a hateful ideology, and yet the Court said even then, it's protected. And this was in the case called Stromberg v. California, which struck down a state law banning the display of a red flag as a symbol of opposition to government.

 

      But the very next case the same year, just a few weeks later, Near v. Minnesota, involved anti-Semitic speech. There was an injunction against this anti-Semitic newspaper under a state statute that banned the publication of scandalous and defamatory newspapers. And the Court recognized that that statute could equally be used against any viewpoint and struck it down, notwithstanding the particular ideology of the publishers of the newspaper. I think it should be unsurprising, although some might find it surprising, that the case was 5-4, and one of the people in the majority, the necessary vote for the majority, was Louis Brandeis, the first Jewish U.S. Supreme Court Justice, who had actually long been an advocate on the Court for free speech. He recognized the importance of protecting even speech that was hate speech directed at his own group, Jews.

 

      So this has been -- we've lived with this for a long time. Again, there have been some ebbs and flows on this, but on balance, this has been the rule. And I think it's been a rule that has served the country well. And I think that's particularly true given what we have seen, which, again, should surprise no one, which is that once somebody does try to establish hate speech as a separate category, not even as a matter of law but as a matter of social commentary, unsurprisingly, it ends up swallowing more and more, just like in the 1950s attempts to punish communist advocacy led people to label lots of people they disliked as communists or communist sympathizers or closet communists or whatever else.

 

      So for example, it's routine nowadays to hear arguments against affirmative action, let's say race-based affirmative action, or arguments against illegal immigration and such denounced as racist. Likewise, the very term racist, or racism, has now in the minds of some extended to so-called structural racism, which includes, really, pretty much any kind of policy that has racially disparate effects, which is very, very many policies indeed. So if indeed the government is allowed to ban or even otherwise suppress supposedly racist advocacy, unsurprisingly, that has already been used and will be used to try to suppress other kinds of speech that I am quite confident is not racist but that some people view as that.

 

      Just to give one example, there wasn't suppression through law because, well, the law doesn't allow that, but there was an incident at a private university where anti-Chinese-government speech was labeled racist and harassing and such and was suppressed by the administration. If I recall correctly, these were stickers with a picture of the yellow hammer and sickle on a background of red, which is, of course, the, I think, Chinese Communist Party flag, with the caption "China kinda sus." The "kinda sus" comes from a video game where you're trying to identify enemy agents who are infiltrating your spaceship. I believe the game is called Among Us.

 

      So this was clearly an attempt to condemn the Chinese government. They used the flag of the Chinese Communist Party, but the administration at the university interpreted it as oh, this is an attempt to condemn people who are ethnically Chinese, even though, of course, many ethnic Chinese in America are here because they fled that government. Like, I was born in the Soviet Union. My family came here because we didn't like the Soviet Union. But these are just a few examples. All of us know of a whole lot of others.

 

There was an incident reported in the Wall Street Journal just a couple of weeks ago where a semi-retired partner at a law firm was basically fired for speaking out against abortion in a company-wide meeting where people were mostly for abortion rights following Dobbs, saying that, well, the problem with abortion is that it has led to a black genocide, the theory being that a disproportionate share of aborted babies are black. And the firm basically condemned that as racism and fired her for it.

 

      So if we were to have a hate speech exception, it seems to me it would almost certainly, both as a matter of logic and as a matter of experience, be extended to a vast range of ideas that people need to be free to talk about.

 

Sam Fendler:  Certainly, and Professor, I think there's a line of thought running through your comments that it must be uncontroversial that at the end of the day, somebody will have to decide what hate speech is and it could be somebody that you like, it could be somebody that you don't like, but either way, that decision will be made. 

 

      I want to move the questions to the case at hand. We have several attendees, and you touched on it in your opening remarks, but they're curious about answering the law with some type of compliance code, a hate speech policy that maybe is a bit toothless, which is to say our policy is that we're not going to do anything at all, have a nice day, thanks for sharing your opinion. Could you touch a little bit more on that, whether you think that's a feasible compliance method, and what you think about that?

 

Professor Eugene Volokh:  So New York's position is that yes, at least as best I can tell from their response, that would be a permissible policy. And it's true, to its credit, the law does not require that the policy actually call for the removal of these kinds of posts. It just requires that there be a policy. Again, as I read the law, the policy would have to provide for a way in which we will respond, so if people complain, we then have to respond, even if our response is no, we're not going to remove any posts.

 

      So I do think we, under this law, would be allowed to have that kind of policy. That's still a speech compulsion. And it's still a speech compulsion that the law itself specifically imposes with regard to particular viewpoints. So we could have such a policy, but we'd have to have such a policy in response to a law that requires us to have policies that respond to hate speech, not a law that requires us to have policies that respond to other things. So if ultimately we lose, I don't think it'll happen, but if we lose, then I would comply with the law, probably. Well, I shouldn't say would. I'm not sure, but probably. Certainly, I might comply with the law, and perhaps with a policy like that, but it'd still be an impermissible speech compulsion.

 

Sam Fendler:  Sure. I want to ask you about one piece of the law in particular. One of the things that caught my eye is in one of the sections, and I'll read, "Nothing in this section shall be construed as an obligation imposed on a social media network that adversely affects the rights or freedoms of any persons such as exercising the right of free speech pursuant to the First Amendment to the United States Constitution or to add or to increase liability of a social media network for anything other than a failure to provide a mechanism for a user to report to the social media network any incidents of hateful conduct on their platform and to receive a response on such report."

 

      And, again, in your complaint, this is a piece of the law that you mention, that it seems as though it's almost attempting to get ahead of a free speech issue. And I'm wondering if you could speak more about that and what you thought about that piece of the law.

 

Professor Eugene Volokh:  Right. So let's start with the second part, (b), that nothing in this section shall be construed to add to or increase liability of a social media network for anything other than failure to provide a mechanism to report and to receive a response on such report. Well, so, yeah, that does say that people can't then sue saying oh, you violated this law by failing to take down some comment. We could have to pay penalties, I think of $1,000 per day, for failing to have a policy. But we can't be sued under this law for failing to remove comments.

 

      So that's good in the sense that the law isn't as bad as it could be. It still requires us to speak in a way that I think violates our First Amendment rights. So then, what about 4(a), nothing in the section shall be construed as an obligation imposed on a social media network that adversely affects the rights or freedoms of any persons, such as the exercising of the right of free speech pursuant to the First Amendment? Well, I'm not sure what to make of this. Obviously, no law can require me to do things that violate my First Amendment rights, right? Because the Constitution trumps any contrary law, so maybe it's just conveying some sort of tautology that, well, we acknowledge that the First Amendment constrains us here.

 

      But at the same time, obviously, the New York AG's position is that the requirement of having a policy is constitutionally permissible. The New York AG doesn't think that the requirement of the policy adversely affects my First Amendment rights. So we want the Court to say it does affect our First Amendment rights. And if the Court then says, as a result, we're just going to conclude that Volokh is protected not just by the First Amendment but by Section 4(a) of this law, well, all right, I suppose. It's just that Section 4(a) of this law doesn't really provide any substantive protection. It's the First Amendment that's doing the work, which is why we're suing claiming the First Amendment protects my rights here. 

 

Sam Fendler:  Certainly. And, Professor, one of our attendees is wondering if there is anything of an objective standard in the law as to what qualifies as vilify, humiliate, incite violence, these terms. I think the law does attempt to define them, but did you see anything in the law that lets you know or makes you believe that there is some objectivity at play?

 

Professor Eugene Volokh:  Well, so the law defines hateful conduct to mean use of a social media network to vilify, humiliate, or incite violence against a group or a class of persons on the basis of those various categories. It does not define vilify. It does not define humiliate. And it doesn't define incite violence, so it's not clear if it's incitement of violence within the terms of Brandenburg v. Ohio, which is, again, intentionally inciting imminent lawless conduct in a way that's likely to happen. That is a First Amendment term of art but, again, I'm not sure that incite violence really captures that.

 

      And so I don't think that there is sufficient clarity here. And beyond that, look at the government's argument where it says, well, humiliate and vilify are clear. Well, what do they say it means? Let's have a look at this. This is in their opposition to our preliminary injunction motion. They cite some dictionaries. So they say, for example, a common definition of vilify is to utter slanderous and abusive statements against, or defame. Okay, but it can't just mean to defame, because if it meant to defame, why didn't they say defame, which is a well-known legal term of art?

 

      Defamation of groups, as opposed to defamation of individuals, is these days, I think, generally accepted as constitutionally protected, but defining it that way would at least make the law a little bit clearer. But that can't, again, be what they mean. Likewise, humiliate is to reduce someone to a lower position in one's own eyes or others' eyes, or to make someone ashamed or embarrassed. So there, it's far from clear to me that humiliate includes all embarrassment. But that suggests that if, for example, somebody posts something that says look, Scientology, here are all the reasons that I think it's a scam and that people who follow it are just being really foolish, would that be trying to humiliate? Well, according to the government, it sounds like it would, because it would be making people embarrassed to believe in this religion. So maybe, although I suppose -- I would guess that the AG would say no, no, no, no, no, no, no. We don't mean to deal with that because that is a perfectly legitimate kind of argument that I think even the AG would agree is legitimate. But then, how would they explain how that's consistent with the definition of humiliate they offer themselves in the brief?

 

      Likewise, if somebody were to say look, I think that some religion, it could be conservative Christianity or jihadist Islam or extremist Judaism or even mainstream Catholicism, is bad for women, it's anti-gay, it maybe advocates in some situations for violence, that's really bad, and people who belong to this religion are complicit in that. Well, would that be humiliating? I don't think that necessarily is, but would it make someone embarrassed to belong to a particular religion? Well, very hard to tell.

 

      So it's not an accident, I think, that vilify and humiliate have not been, up until now, legal terms of art, as opposed to defame and maybe incite, if it's defined consistently with Brandenburg, though it isn't clear that it is. So any law that talks about vilifying or humiliating, absent some more precise definition, I think is not sufficiently clear.

 

Sam Fendler:  And Professor, following up on that, from our audience, are you arguing, or do you intend to argue, that this is unconstitutional because of that vagueness?

 

Professor Eugene Volokh:  Yeah. Part of our argument is that the law is vague. Part of it is that to the extent it's clear, it's clearly viewpoint based. Some provisions are vague, and some provisions are clear but in a bad way.

 

Sam Fendler:  Certainly. And to change gears slightly, one question from our audience is wondering about how this law may serve as a pretext for investigations, which is to say, if this law is in place, the New York Attorney General may or may not have the power to investigate websites that the office does not like. Do you think that that is a risk that may be presented if this law stands and is found constitutional?

 

Professor Eugene Volokh:  Yeah, no. I think that's an excellent point. Again, if you look at the text of the statute, it says that the social media network shall have a clear and concise policy which includes how such social media network will respond and address the reports of incidents of hateful conduct on their platform. Again, the AG says no, you could actually have a policy that says we won't respond. That doesn't seem quite consistent with the text.

 

      So if it is read according to its text, then presumably one would be liable, or at the very least one would be violating this law, by not having the policy, and presumably by not responding or not responding in compliance with the policy. What's more, presumably, if people do create a policy, that will just be a basis, potentially, for the government to say oh, you're violating your own policy. That itself is consumer fraud because you're telling them one thing, and you're doing another.

 

      So then, let's look at all of the complaints you've gotten from people. Let's look at all the internal communications about those complaints. Let's look at your files about how you did or did not respond to them. I do think that's a potential problem, and we've actually seen that in other situations as well, where the government was using overbroad statutes, sometimes criminal libel statutes that are broader than the law allows, sometimes so-called harassment statutes, basically as a means for trying to, let's say, identify critics or to otherwise harass critics, even if it was pretty clear that no enforcement action would ultimately be forthcoming. 

 

Sam Fendler:  Right. I think that leads nicely into another question from our audience, and this is about your strategy in the case. Could you talk about your decision to go on the offensive here to bring a lawsuit as opposed to doing nothing and perhaps making the State bring a case against you to kind of flesh out the idea of whether this is compelled speech or not?

 

Professor Eugene Volokh:  Right. Well, I think there are a couple components to it. One is I like to follow the law. I try to be a pretty law-abiding guy, generally speaking. And I don't want to comply with unconstitutional laws, and I don't think I have any obligation to comply with unconstitutional laws. Under our constitutional system, an unconstitutional law is no law at all. But maybe I'm wrong. Maybe the courts will tell me nope, this law is perfectly constitutional.

 

I'd like to know upfront. I'd like to know this upfront so that I can figure out what my legal obligations are. I think that we ought -- it's better for us to know our legal obligations than to be left guessing and making mistakes either way, either being over-constrained by things that really shouldn't constrain us because they're unconstitutional, or thinking that the law doesn't apply to us because it's invalid but ultimately it's found to be valid.

 

      So I prefer to have that happen but, of course, not everybody has the luxury of being able to sue over this and having, basically, the Foundation for Individual Rights and Expression represent me for free in this. So I appreciate that many people can't do this.

 

      And the second reason is even if I, as a First Amendment scholar, could say look at this law, I'm pretty sure it's unconstitutional, I'm not going to comply, other people might feel pressured into it because they are worried about liability. They don't know First Amendment law like I do, and they think maybe this would be upheld. So to the extent that we are helping protect others against this kind of law, I'm pleased to be able to do that. Although, of course, it's the Foundation for Individual Rights and Expression that is pulling the laboring oar and that really gets the credit for this.

 

      And then the third point is it would be nice to set a precedent that says nope, these laws that aim at these kinds of viewpoints are just as unconstitutional as other viewpoint-based laws. Maybe that'll be a signal to the New York Legislature, to other legislatures and city councils and university administrators and others that courts are taking the First Amendment seriously here. And to their credit, the courts have actually taken the First Amendment quite seriously in a lot of these cases.

 

      The Supreme Court in that recent case Matal v. Tam reaffirmed that there's no hate speech exception to the First Amendment, but it's always good to have further precedents as to particular applications of that principle. And to the extent we can set that precedent, I'd be delighted to play a role in that.

 

Sam Fendler:  Right. And right at the end there, you started talking about precedent and we have a question from an attendee who was wondering if you are aware of similar cases right now or perhaps other state laws that are trying to do the same thing as this New York law, maybe they're in the court system. Are there differing circuit interpretations of what you think is at hand here? Do you see anything across the legal landscape across the country that makes you think that maybe this could go to the Supreme Court? What's your understanding about the landscape?

 

Professor Eugene Volokh:  Yeah. Generally speaking, the federal circuits are in agreement on this: when these kinds of challenges to these sorts of viewpoint-based restrictions on supposed hate speech have been filed, some of them may have been dropped on procedural grounds, or not dropped but thrown out on procedural grounds, but once the courts reach the merits, they follow the Supreme Court precedent faithfully, and they say these kinds of restrictions are unconstitutional.

 

      So I don't think there's real disagreement among the federal circuits on this. I don't know of any laws that are quite like this one. Again, there have been university speech codes at public universities that have been struck down on First Amendment grounds. Unfortunately, there are still others that remain not yet challenged, but I don't know of any laws that are quite like this one. It would be nice if there were a precedent that keeps there from being laws that are quite like this one.

 

Sam Fendler:  Right. Another attendee is asking -- so we've talked a couple times about New York's position that you could have a policy that says well, we're not going to do anything, thanks for bringing this to our attention. But the attendee wants to know how does that policy of responding to hate speech pursue the end that the State is claiming they have in curtailing hate speech and whatever secondary effects may come from there?

 

Professor Eugene Volokh:  Right. I take it that New York's argument is we realize there are limits to how broadly we can pursue any goal of trying to deter hate speech, so we're taking a very, very, very modest step. And we realize it's not going to accomplish the prohibition on hate speech and such because that's not accomplishable so we do what we can. I think it's quite right that it makes it hard to explain how this law could pass strict scrutiny or intermediate scrutiny or any meaningful kind of scrutiny given that it really accomplishes virtually nothing, well, at least according to New York's position.

 

      I think New York's position is well, it doesn't have to pass any heightened scrutiny because it's just regulation of commercial conduct or something like that. I think, again, I think it's a speech restriction, and I think it's impermissible chiefly because it's viewpoint based but also, perhaps, indeed because whatever it's trying to do, it's not really going to accomplish it. 

 

Sam Fendler:  Something else you wrote about in your complaint is that this law passed rather quickly. It was debated on and passed in, I think, a matter of weeks. Do you see any implications of that?

 

Professor Eugene Volokh:  I think that that's useful background. I don't think that it necessarily plugs into any particular First Amendment doctrine in any official sense. I think the complaint, like many complaints these days, tries to tell a story about what's going on and includes some degree of detail. And perhaps that might explain why the title says hateful conduct prohibited while the body doesn't seem to. Maybe there are just some things that were done in a hurry and done in a slightly slipshod way. But the main doctrinal points are it's viewpoint based, it's vague in certain ways, it's a speech compulsion. That's what's really doing the work, I think, in our argument. 

 

Sam Fendler:  Sure. And I do want to ask you about the compliance issue. You wrote that -- and what you said is maybe you will comply, maybe you won't comply, but even if somebody put forward a good faith effort at complying with this law, in the complaint you seem to be arguing that that would be hard to do anyway. And I'm wondering if you could speak more about that.

 

Professor Eugene Volokh:  Well, I think somebody could comply with the law if they're willing to be chilled; like, if they want to be super safe, they will take down things when there are complaints. And, at the very least, they would respond to every complaint and have a policy that actually mentions hateful conduct. And I don't want to do that. I don't want to do that partly because it's a compulsion and partly because our blog is a shoestring operation. We don't have staff. We just have the co-bloggers. And the complaints almost invariably come just to me. So if, indeed, I have a duty to respond, then I'm going to have to keep responding.

 

      And what's more, right now I actually get some complaints occasionally, but the whole point of the law is to make it easier for people to file complaints. So presumably, if the law is upheld, or if the law hadn't been challenged, and if I were to have a policy, people would say oh, let's follow this policy. Maybe let's even have an organized campaign. Oh, there's some commenter here, we want to get this stuff deleted, so we're just going to keep sending complaints to Volokh about it. And I'm going to then have to either respond to each one, which will take a lot of time and effort, or say no, I don't have to respond. And, again, it sounds like the New York AG in these filings is saying that's fine, but I think that the statute says that we will respond, so that suggests that we must respond.

 

Sam Fendler:  Understood. Well, Professor, we have about three minutes left. I'm wondering if you have anything that you'd like to leave the audience with, some final thoughts?

 

Professor Eugene Volokh:  First, I just want to say I so appreciate that the Foundation for Individual Rights and Expression is out there. It used to be the Foundation for Individual Rights in Education until just, I think, a couple of months ago, when it broadened its scope to free expression more broadly, quite wisely, I think. I'm very glad that they are representing me on this, and I much appreciate their hard work on it.

 

      And again, I have to acknowledge I have this luxury. I know it's a First Amendment problem. I know people who will represent me. The main problem with these kinds of laws is that they often end up applying to people who don't have that luxury. And I'm hoping that if those laws are promptly challenged and go up to court, and the court, either the district court or circuit court, rarely the Supreme Court but occasionally, strikes them down, that'll send a message to state legislatures that of course you have to fight violence. I think violence, whether it's hate-based or racist or prejudiced or not, violence in general needs to be fought. Of course, you have to do that, and you can, of course, speak out against certain kinds of advocacy as well in many situations. But you can't do it by either trying to ban speech or setting up mechanisms that are first steps toward banning it, even if at this point they just create pressure and seek to create internal bureaucracies and the like for handling these kinds of matters.

 

Sam Fendler:  Great. Well, Professor Volokh, on behalf of The Federalist Society, I want to thank you for sharing your time and your expertise with us today. I want to thank also the audience for joining us. We greatly appreciate your participation. Please, you can check out our website, fedsoc.org or follow us on all major social media platforms @fedsoc to stay up to date with announcements and upcoming webinars. Thank you all once more for joining us, and we are adjourned. Have a great day.

 

[Music]

 

 

Dean Reuter:  Thank you for listening to this episode of Teleforum, a podcast of The Federalist Society’s practice groups. For more information about The Federalist Society, the practice groups, and to become a Federalist Society member, please visit our website at fedsoc.org.