Courthouse Steps Decision: NetChoice Cases

Event Video
Two cases involving NetChoice, a trade association that represents social media giants like Facebook, Twitter, Google, and TikTok, were heard and decided by the Supreme Court this term. Both cases concern issues of free speech and social media platforms.
In Moody v. NetChoice, LLC, NetChoice challenged Florida law S.B. 7072, arguing that it violates the social media companies' right to free speech and that it is preempted by federal law. In NetChoice, LLC v. Paxton, NetChoice challenged the constitutionality of two sections of Texas law HB 20 (Sections 7 and 2) that aim to regulate the content restrictions of large social media platforms. While the U.S. Court of Appeals for the Eleventh Circuit ruled against Florida, the Fifth Circuit ruled in favor of Texas, creating a circuit split. In light of that split, the Supreme Court granted cert and heard oral argument in both cases on February 26, 2024. On July 1, 2024, a 9-0 Court released its decision vacating both judgments for lack of a "proper analysis of the facial First Amendment challenges" and remanding them for reconsideration.
Join us for a Courthouse Steps Decision program, where we will analyze this decision and its possible ramifications.
Featuring:
- Allison R. Hayward, Independent Analyst
*******
As always, the Federalist Society takes no position on particular legal or public policy issues; all expressions of opinion are those of the speaker.
Event Transcript
Chayila Kleist: Hello and welcome to the FedSoc Forum webinar call. Today, July 8th, 2024, we're delighted to host the Courthouse Steps Decision program on two cases, NetChoice, LLC v. Paxton and Moody v. NetChoice, LLC, two cases concerning free speech and social media platforms, which were decided in a consolidated opinion recently by the Supreme Court. My name is Chayila Kleist and I'm the Associate Director of Practice Groups here at the Federalist Society. As always, please note that all expressions of opinion are those of the expert on today's call, as the Federalist Society takes no position on particular legal or public policy issues. In the interest of time, I'll keep my introduction of our guest brief, but if you'd like to know more, you can access her impressive full bio at fedsoc.org. Today we are fortunate to have with us Allison Hayward, who currently works as an independent analyst. Ms. Hayward most recently served as the head of case selection at the Oversight Board.
Previously she was also a Commissioner at the California Fair Political Practices Commission, a board member at the Office of Congressional Ethics and an assistant professor of law at George Mason University School of Law. She also previously worked as Chief of Staff and Counsel in the Office of Federal Election Commission Commissioner Bradley Smith and practiced election law in California and in Washington DC. She's a member of the State Bar of California and the District of Columbia Bar, and I'll leave it there. One last note. Throughout the program, if you have any questions, please do submit those via the question and answer feature so they'll be accessible when we get to that portion of today's webinar as we're hoping to have plenty of time for Q&A in this program. With that, however, I'll hand it over. Ms. Hayward, the floor is yours.
Allison Hayward: Thank you, and thank you everyone for joining us today. So, a final day of decisions being released from the Supreme Court, and people are excited, and people learn that the executive is in some contexts immune from prosecution. This decision seemed to be a little swept away by the controversy surrounding immunity, which I'm not quite sure I understand, because it seems to me from a separation of powers perspective that's reasonable, but anyway, it doesn't matter. We're not talking about that today. We're not talking about the Trump decision. We're talking about NetChoice. As you'll recall, in NetChoice, NetChoice and the Computer & Communications Industry Association challenged state laws of Texas and Florida that had set restrictions on content moderation by certain platforms, as well as disclosure requirements when content had been moderated and other must-carry kinds of laws. The Fifth Circuit rejected an injunction that the district court had entered and said that this was in fact not the regulation of speech at all and was perfectly reasonable for the states to engage in, while the Eleventh Circuit sustained an injunction from the district court.
So we had a pretty on-point circuit split come before the Supreme Court, and what we got from the Court was a nine-zero decision that is complicated in the particulars, and that's what I want to talk about today. Just remember that this was the question presented: whether the state laws' content moderation restrictions comply with the First Amendment, and whether the laws' individualized-explanation requirements comply with the First Amendment. Now, as you recall, after argument we talked about the fact that they never even got to question two. So the argument really focused on whether these content moderation restrictions violate the First Amendment rights of these private social media platforms, and everyone agreed that they were going to disagree on that because of the procedural posture we were in.
As Justice Kagan said in the opinion for the majority, "The First Amendment does not go on leave when social media are concerned," but on the other hand, she notes, "NetChoice chose to litigate this case as a facial challenge and that decision comes at a cost." Already you know that by the end of this, NetChoice will not get what NetChoice has asked for. A facial challenge, what does that mean? Well, in the First Amendment context, it means that you need to look at the scope of the law in all its applications and see whether or not a substantial amount of what that law covers is unconstitutional. It's almost like a little fraction or a ratio: you've got everything the law does, you've got the stuff the law does that is potentially unconstitutional, and you decide whether that's substantial enough that the facial challenge can succeed.
So there's math involved. Math and lawyers are never a good mix. There's some kind of at least ideal ratio that we're supposed to be thinking about. This comes from the Bonta and Hansen cases, and I'm sure many of you will recognize that. So what is the First Amendment law here? The Kagan opinion goes on to talk about how the First Amendment does not require private entities to provide a forum if those private entities are engaged in expression themselves. This is how the majority opinion shoehorns, in my opinion, the Pruneyard decision into the larger mass of First Amendment law that we have. I have to admit, I'm no big fan of the Pruneyard decision. I think it's an anomaly. It was, after all, a discussion about whether the California constitution, as interpreted by the California Supreme Court to require a shopping center to allow petitioners, violated the First Amendment. So you've got a little bit of a comity kind of thing going on here, a little bit of a federalism kind of thing going on here. And hey, it's California, not the greatest property rights decision in the world, to say the least. I think its First Amendment value is a little dubious too, but here we are, and the Supreme Court has now provided, and this isn't the first time, a way of thinking about Pruneyard on a larger scale: whether the state can require private entities to either host speech that they don't want to host or take down speech that they do want to host.
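The "little fraction or a ratio" she describes can be sketched informally. This is only an illustration of her analogy, not a formula from the opinion, and "substantial" is of course a legal judgment, not a numeric threshold:

```latex
\text{A facial challenge succeeds when}\quad
\frac{\text{unconstitutional applications of the law}}
     {\text{the law's plainly legitimate sweep}}
\;\text{is ``substantial.''}
```

The math look is, as she says later, probably illusory: the numerator and denominator are contested sets of hypothetical applications, not countable quantities.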
So six justices at the end of the day say that the approach the Fifth Circuit took was incorrect and the approach the Eleventh Circuit took was correct, because in her concurrence, Justice Jackson joins that view expressed by the majority, and I think that makes all the sense in the world. I think this decision needed to set down some ground rules, largely because the Fifth Circuit was providing such a different perspective than what one would expect from First Amendment precedent. Look, I'm going to put my cards on the table. As many of you know, I have a background as a constitutional scholar, usually in campaign finance and government ethics, and I am a full-throated supporter of the analysis that comes out of Buckley v. Valeo, Massachusetts Citizens for Life, and the Citizens United decision. I think the First Amendment has no tolerance for the government coming in and saying that it has a role in balancing private speech, whether that's speech about politics, as in those cases, or speech about anything, really.
I also think that, as this decision should imply, the notion that private parties can exercise something called censorship really needs to be pulled back quite a bit, and I think you see that in the majority here. They are treating private entities, even social media platforms, as having valuable First Amendment rights, and they are not persuaded that they're anything less than expressive entities. Now, the other concurrences, you almost want to say dissents, but the other concurrences, led by Justice Alito, really are acknowledging that the facial challenge makes this an impossible case to decide at this point. And just as a sideline, if you listened to the oral argument, so much of it was confusion over the scope of these laws and what was actually required, and it was clear that there was not going to be a legitimate basis to say anything specific that would help, facial challenge or not. The record was just very messy. You had lots of amici participating, and the amici were saying all kinds of things and talking past each other. This was absolutely the right way to approach this problem, this case, at this juncture: vacate and remand back to the courts of appeals, from where it will ultimately end up in district court, where probably NetChoice, but maybe an actual specific platform, will challenge the laws in a more as-applied way.
But I think the thing that struck me most about the concurrence led by Justice Alito is the skepticism it has about social media platforms, or, I mean, they're not merely skeptical; they imply that platforms are a malign force in society. They use lots of scare quotes around terms like content moderation and censorship, and I'm not quite sure I understand where this is coming from. I mean, I understand generally speaking that there have been specific things that have happened on social media platforms that have led people, oftentimes conservatives, to think that the platforms don't play fair.
As I think I've said before, I think most people who haven't engaged in content moderation don't understand the limits on what a platform can do intentionally, because of the volume of content and because of the rapid rate at which content decisions need to be made. So I think that what the Alito concurrence reflects is an assumption of intentionality that is really not there. An example, more recent, that has nothing to do with this case, but one that people may be familiar with: there was a climate expert who had a piece of content on Facebook that was skeptical of the modern climate change narrative, but it was grounded in science and there were citations to scientific information, and it was taken down. When other people tried to repost it, those reposts were taken down too, and people were understandably angry because they felt like this was censorship of a valid point of view. The issue with that, though, is that when you dug into it a little bit, the posts had been taken down as spam. It wasn't anything about a content moderation policy that directs itself at climate skepticism. It was spam.
Meanwhile, across Facebook, the platform was hearing lots of complaints about all kinds of content being taken down as spam, local news articles being taken down as spam, all kinds of things that just aren't spam. Clearly something had gone wrong, and I'm not persuaded it had anything to do with what leadership at Facebook thinks of climate skeptics. It was just a thing that went sideways, in my view. Now, I don't have any inside knowledge. I can just tell you from my experience doing content moderation that I don't think there was anything communicative or directed toward climate skeptics, but certainly if you're the person whose content has been taken down, that's how you feel. And I think likewise, a lot of the information animating the Scalia, excuse me (laughs), listen to me, the Alito-led concurrence is that they know about the Hunter Biden laptop story that was squelched. They know about other anecdotes that look like platform animus toward conservative points of view.
Oftentimes they're not as they appear, which is not to say they never are. I think if you saw the call after the argument, I talked a little bit about how platforms will escalate certain content, and then you do have individuals high up in these companies bringing their own points of view to bear on whether or not a piece of content stays or goes. So, for example, the decision whether or not Trump could persist on Facebook was escalated, but that has nothing to do with ordinary content moderation, really. Those are the little cases that escalate. They get publicity, they get noticed, people get excited, people get concerned, and senior executives have to make a decision. But getting back to whether or not that's an expressive decision of the platform: absolutely. Isn't a human being making it? I think Mark Zuckerberg is a human being.
What I'm trying to say is that the specific cases that animate this concern about platform power and bias really are communicative. The state laws, however, were more about day-to-day content moderation, which is a legitimate communicative exercise of the platform, but is not really what the concurrence is bothered by. They're not bothered by all the pieces of content that use profanity getting taken down. Nobody's looking at that and going, oh my god, censorship. What they're worried about are these very, very high-profile cases, and those are a different animal that needs to be addressed in a different way than the approach the state laws took. Okay, that's my soapbox. That was my soapbox from the last call. I'm going to leave it now because I want to mention one other aspect of this case that I thought was interesting. Both the Alito concurrence and Justice Barrett talk about human beings and the communicative decisions of human beings. Now, Justice Barrett then goes on to recognize that corporations, which are after all made up of human beings, have constitutional rights as well. Citizens United being cited, thank you, and fine. But Alito gets more specific about whether humans are making these decisions or, drum roll, algorithms.
Where do the algorithms come from? I don't think there are high-tech dolphins writing algorithms. People are writing them; the platforms are writing them. They invest a lot in getting software engineers and other sorts of smart people to figure out how to simplify and speed up the analysis of content coming onto the platform. The algorithms didn't fall from the sky; they didn't come to us from the dolphins or Martians or anybody else. Human beings did all that. Algorithms are a tool. They're not some separate thing apart from humanity. They're a tool that human beings have developed to try to manage this enormous scale of content. So I wanted to call that out. I would like to think we can get away from thinking about algorithms as some, I mean, yes, they're mysterious, they're very difficult to understand, they malfunction, as with the spam story I just told you, but they are in fact tools that human beings develop to do a thing that human beings want to do for business reasons, because they own platforms.
Finally, there's the common carrier argument that the Alito concurrence was disappointed hadn't been fully ventilated in this case. I have great skepticism that once that question is fully ventilated it will lead to the conclusion that social media platforms are common carriers. Common carriers tend to be businesses involved in some kind of activity where scarcity or a bottleneck is an issue. If you've taken regulated industries in law school, you start out with the ferry owner who has the one ferry on the river. Yes, he's given the right to run that ferry, by the king or whoever, I don't remember the case exactly, but the community in turn gets to be protected from exorbitant charges by him. So the bargain is he gets to run his ferry without competition and without inefficient interference, but he can't charge whatever he wants, because of a sort of natural monopoly thing.
And that's typically how the transportation cases go. I mean, railroads: do we really want every railroad company putting down its own tracks? No, it would be duplicative and hugely expensive and inefficient and lead to bad results. So we have a natural monopoly there, in the tracks and in the provision of railroad services. By analogy, communications back in the day involved these wires that you still see occasionally in neighborhoods. All of that is to say that while social media platforms are popular, interesting, and ubiquitous in some ways, there's nothing naturally monopolizing about any of them. Did Facebook buy up competitors in the 2000s and 2010s because it felt like it had a natural monopoly? No, quite the opposite.
So I really don't think this works. There may be other aspects of the stack of businesses and entities that provide you with your internet service that are more easily analogized to common carriers, but I don't think that applies to the publishers at the top of the stack, the platforms you're actually using day to day. And with that, I will stop rambling. I fully expect that we will be talking about this again in future years. I fully expect that the legislatures of Florida and Texas will continue to believe this is a way to go, and maybe we will find a specific platform with a specific set of practices that the courts can evaluate to determine whether or not these regulations are unconstitutional in a substantial number of their applications. For one thing, I'll be entertained to see what the word "substantial" means in the future. But anyhow, I'm done for now.
Chayila Kleist: Thank you so much for that. That was a really helpful summary of the decision, how we got here, and some of the major issues at play. As we move to audience Q&A here, I'll issue a reminder to our audience: if you do have questions, you can submit those via the Q&A feature and we will do our best to get to them as we move forward with the program. We already have an audience question. Justice Kagan cites both Sowell and Buckley against creating an "equal playing field." Does it matter more that those cases involved creating less speech rather than more?
Allison Hayward: Well, I am going to push back a little bit on whether the laws at issue here create more speech, because I think what they do in practice is encourage content moderation to be broader, to show no favoritism at all. So, for example, suppose you've got a content moderation rule that says references to "Zionist" must be taken down if they are within proximity to something else. Oftentimes these content moderation policies in practice work almost like those old searches we used to do on Westlaw, like "Zionist" within five words of some bad thing. It doesn't exactly work that way, but that's a useful analogy to keep in mind. And so maybe the platform will say, well, to be compliant with this new law, we will just not allow people to ever use the word "Zionist" or "Chinese" or whatever it is. I don't know. I think the legislators think that this will provide more speech. I'm not sure it will, and this is something that can be argued about and potentially tested in an as-applied challenge. It's just the sort of thing that I think is missing from this record that is really important.
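Her Westlaw "within five" analogy can be sketched in a few lines of code. This is a toy illustration of the proximity-rule idea, not any platform's actual system; the function names, terms, and window size are all hypothetical:

```python
# Toy sketch of a "term within N words of another term" proximity rule,
# in the spirit of a Westlaw "w/5" search, contrasted with the blanket
# ban a platform might adopt to stay safely compliant with a new law.

def violates_proximity_rule(text, target, bad_terms, window=5):
    """Flag text where `target` appears within `window` words of a bad term."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    target_pos = [i for i, w in enumerate(words) if w == target]
    bad_pos = [i for i, w in enumerate(words) if w in bad_terms]
    return any(abs(t - b) <= window for t in target_pos for b in bad_pos)

def violates_blanket_rule(text, target):
    """The over-broad response she worries about: ban the term outright."""
    return target in text.lower()
```

The contrast is the point she is making: the proximity rule leaves benign uses of the term alone, while the blanket rule, which is the cheaper way for a platform to guarantee compliance, removes them all, producing less speech rather than more.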
Chayila Kleist: Got it. Thank you. The next audience question, and I'll do my best to paraphrase here, asks about the overlap between these cases and Twitter, Inc. v. Taamneh, and the conversations on algorithms and how they make decisions: whether they're agnostic as to content, or whether it's a First Amendment-protected choice.
Allison Hayward: Yeah, well, as I said before, I think algorithms are tools developed by companies to run their businesses. They're not exotic. I mean, the high-tech dolphin thing was something I gave you to get you thinking about where algorithms come from. Well, they come from people, people deciding to invest lots of money in having them built, and I don't think that changes if you're talking about an algorithm that can use artificial intelligence to teach itself to make better decisions; I think that's just a better tool. I don't know that I'm getting at what you're asking, but I don't think the fact that a decision was made by an algorithm, or was made on escalation, really matters, though it seems like there are justices who do think it matters, and I want to push back on that. One of the things in campaign finance that has been very valuable is the understanding that tools used to speak are part of your protected speech. If you use money to do political speech, that expenditure is protected. Now, if it's money you're giving directly to a candidate, it's obviously different, but it flows from that same larger logic: if you're using something inanimate to aid in your expression, that activity is also protected. Otherwise, we'd all be standing by ourselves on our front porches yelling at each other.
Chayila Kleist: Building on that conversation: obviously Justice Barrett raises this point concerning the effect that having content moderation choices made by an algorithm might have on the viability of First Amendment claims. Do we have precedent for how algorithmic choices are covered or not concerning constitutional protections of rights, even beyond the First Amendment area? And what might be the other areas where this could have an impact? Is there an impact on AI and IP law? Is that conflating too many issues? What might be the implications?
Allison Hayward: I can't speak to what implications it has in consumer protection or intellectual property or unfair trade practices or any of the other areas you might think of. Again, once you have a specific aspect of a social media platform's conduct, you can start picking this apart, where the NetChoice facial challenge just didn't allow for that. There were so many unanswered questions. Again, I think the precedent question is: can individuals, corporations, churches, unions, whatever, develop tools to aid in their expression, and in so doing, is the use of those tools protected? I mean, I'm repeating myself, but the analogy that makes most sense to me involves campaign finance, the use of money, and the whole "money is speech" debate. No, money allows you to speak in certain ways. We live in a capitalist society; that's how it works. There are probably other analogies in other areas of the law, but obviously I've got my particular set of blinders, and so that's how I see it. I think that would be a useful precedent. I guess there are likely others.
Chayila Kleist: Got it. Thank you. I'm looking for the correct wording for this question, so apologies if it's long. Is there a difference within the Court as to what would suffice for a challenge arguing a law is facially unconstitutional to succeed? I ask because it seemed like there was at least a difference in articulation: Justice Alito and Justice Thomas, in their concurring opinions, argue that for a law to be facially unconstitutional, it has to be constitutional in none of its applications, while the majority argued that the test is whether the law is unconstitutional in a substantial number of its applications. Are these two different ways of saying the same thing, or is there a difference that didn't matter in this case because they agreed, but might matter in the future?
Allison Hayward: Yeah, no, there is a difference. There's the facial constitutional challenge in areas outside the First Amendment; basically it's the Salerno test, and it basically says that the law has to be unconstitutional in virtually all its applications. Well, that's been modified by the Court to be slightly, not much, but slightly more accommodating, where it's the sort of substantial-number-of-applications ratio thing I was talking about. The math thing, I'm going to say, looks precise, but that look is probably illusory. I have the feeling that people of goodwill can differ on what substantial is and how you weigh certain things, but yes, they are two different standards. I would note, by the way, that Justice Thomas, in a very articulate part of his concurrence, talks about how facial challenges violate Article III of the Constitution, which on the one hand I find interesting and perhaps even persuasive. On the other hand, I think that ship has sailed. I could be wrong. So yes, Justice Thomas doesn't think that facial challenges are a legitimate way to go; you need to have a case or controversy involving a particular person. Facial challenges really bring into court the rights of people who aren't there and evaluate those rights in their absence, and I think there's some appeal to that critique.
Chayila Kleist: Got it. Continuing on your comment on Justice Thomas's opinion, how recent is this category of facial challenges as a type of challenge, particularly in the First Amendment context?
Allison Hayward: Oh, I'm not entirely sure. The modification from total to substantial is, I think, by historic standards relatively new. I do remember, working in campaign finance when I was first starting out, that we were still very focused on whether a challenge was facial or as-applied. That is not helpful, I know, but it is fairly recent, which leads one to think that it's not the kind of longstanding precedent that's hard to poke back at, and that maybe someone would find some value in doing that.
Chayila Kleist: Got it. Thanks so much. We have a couple of audience questions relating to the nature of free expression for social media platforms and their Section 230 immunity as content providers that are treated as non-speakers and non-publishers. Can you speak to the interaction of those two concepts?
Allison Hayward: I think they're two parts of the entirety that social media companies worry about. To provide the service they provide, they want to compile and moderate and affect the user-provided data that comes in and control it, but that doesn't mean they will always and everywhere know that a piece of content is libelous or fraudulent. Things get missed. I mean, even if you designed a content moderation scheme that was all about "nothing libelous will show up on this platform; that is the community standard; you cannot libel," it's too hard to know the outer context of a piece of user-provided content. So that's what Section 230 protects: the stuff they miss, to the extent they're even looking for it, as they put together a platform they think will appeal to users, so some users will come on and be engaged, so they will look at the ads, so the advertisers will be happy and give them money. I mean, that's why you don't pay to be on Facebook. You're not really the customer; you're the product, as people say. It's like free television: you still have to watch the commercials.
Chayila Kleist: Got it. Another question, you've addressed this some in your conversation I think on common carriers, but an audience member asks, in broadcast history there was a fairness doctrine. Are the Florida and Texas laws likely to give states similar opportunities if the laws stand? Opportunities to bring challenges to broadcasters if they don't embrace the fairness doctrine?
Allison Hayward: That's interesting because there's a lot of preemption arguments that come into telecommunications regulations and what states can do that I think analogously could come into how they regulate social media platforms too, but those arguments haven't been aired, no pun intended.
Chayila Kleist: Fair enough. Well, you mentioned this in your breakdown of the opinion and the questions presented to the Court, but looking ahead, in light of the way the cases were argued, as well as Justice Alito's opinion and his separation of the content moderation provisions from the individual disclosure provisions: does it look like there are one or two questions heading back down to the lower courts? Could you have facially unconstitutional content moderation provisions while the individual disclosure provisions are fine, and vice versa, or do they have to be decided together?
Allison Hayward: Oh, I think they could be broken apart. Somebody with tremendous familiarity with NetChoice might tell me, oh no, we're committed to having them rise or fall together, but I wouldn't think so. They present different kinds of restrictions on a private party's behavior. Typically, disclosure requirements of the kind we think of that way are not expressive, and so then you start looking at whether they're too burdensome, because your annual report showing how many pieces of hate speech you took down, or your report back to a user about why they were taken down for hate speech, is not expressive in the same way as designing the platform to be a profitable business is, which is, in my view, what content moderation is largely about. And because the second question, about the individual disclosures, was never really argued in this oral argument, it may be that a lawyer would decide to leave that fight for another day, or to separate out who brings the content moderation versus the disclosure as-applied challenges to make them more likely to succeed. That's part of litigation strategy I don't have any insight into, but I can imagine they would be talking about it.
Chayila Kleist: Got it. Thanks. Other than the impact on the parties in the case itself, what, if anything, could be the immediate impact of this decision? Are there other cases that might be affected by this ruling, or other potential laws? What might be the immediate impact?
Allison Hayward: There probably are incipient cases out there where parties were trying to argue around what I think of as a very neoclassical First Amendment doctrine that is reflected in the majority's opinion here, and they'll be discouraged from doing that. They will have to contend with your sort of Hansen substantial-number-of-applications type argument, because we know there are five or six Supreme Court justices who think that's the way to go. So I think it'll have some effect there, and I would like to think it also has an effect on legislators, but they have a different calculus, and they need to get elected.
Chayila Kleist: Fair enough. Are there downstream effects? Obviously this is just being remanded to be tried, that is not the correct word, reconsidered, there we go, under a proper facial challenge analysis. So we don't have a resolution in either of these cases yet, but are there downstream effects you can guess at, industries or areas of law that should be paying attention to the way this case was decided and the possible ways these cases could come out?
Allison Hayward: Oh, there probably are tons. I think I mentioned in the last call we did that there are other kinds of laws that states could resort to, laws more typically part of a state's police power, that might get you some of the things you want when you are looking at how a platform operates and you've got constituents who are angry about the way the platform has hurt them. For example, I think I've mentioned this example before, but I'll reiterate: when an account is taken down on Facebook, you lose all your pictures, and maybe you lose a video of you teaching a lesson that is an important part of your business, and you are injured, you lost the stuff. Maybe Facebook can argue that, well, under the community standards you deserved to lose it, but maybe there's a place for a state to step in and say, our consumer protection laws want to give people in our state notice to save their stuff before they lose their account.
And then the platforms might come back and say, well, we can't just do it for you. We'll have to do it for everyone. Maybe there's a supremacy kind of - then you get into federalism, blah, blah, blah, blah, blah, fine. But at least that's an approach that doesn't implicate the communication capacity of a social media platform to present itself the way it chooses to because that's how it wants to run its business, and it is more about looking at the users and saying, what's happening to you in this relationship? Is there a legitimate place for state power to come in and make your experience fairer or better in some way?
Chayila Kleist: Got it. I guess a last question, following off of that: are there questions that remain unanswered, if any, now that we have this decision, other than, I guess, what the actual outcome will be and whether these laws are facially unconstitutional? Other than that, are there outstanding questions in these cases?
Allison Hayward: Well, there are, because at argument, I think the fact that this facial challenge was unwieldy and unlikely to be able to be decided was really apparent, and the argument didn't get into the individualized-disclosure and content moderation aspects of these laws. We have Zauderer continuing to be cited as the test, but if a similar kind of epiphany would have come about with a more fulsome oral argument, it didn't happen. So we don't have the benefit of the five- or six-justice group reflecting on that out of this particular case. And I think if they had reached it, we might have another similar opinion that maybe reiterates Zauderer, or maybe talks about how it would be modified because of X, Y, or Z, but that's not here. And so I think there's less guidance for the courts to follow in the future on that point.
Chayila Kleist: Got it. Following up on that, I guess, how does the nature of this being nine-zero and yet a very fractured nine-zero affect the way that this could be treated as precedent moving forward?
Allison Hayward: Oh, I think in large measure it's really a six-three decision, and so I don't know. I would side with the six; I think most people would.
Chayila Kleist: Fair enough. Well, that's the rest of the questions I have. Are there any final comments or thoughts?
Allison Hayward: No, I think we covered it. I am disappointed this hasn't gotten a little bit more media attention, but I think you can understand in context why a vacate-and-remand seems like not really news. I think there's a lot here to pick apart, though. And I just also want to say that oftentimes I don't agree with Elena Kagan, Justice Kagan, I should probably call her, but I think her opinions are really solid and written to be useful, and I really appreciate that.
Chayila Kleist: Got it. Well, thank you so much for joining us and sharing your expertise and insight on this really interesting set of cases, and thanks to our audience for joining us for this conversation. Thank you for participating. We welcome listener feedback by email at fedsocforums@fedsoc.org, and as always, keep an eye on our website and your emails for announcements about other upcoming virtual events. With that, thank you all for joining us today. We are adjourned.