New Voices in Administrative Law: The Emerging Debates on AI Regulation

Event Video

Listen & Download

Artificial intelligence is a remarkable, disruptive force. AI services like ChatGPT already perform tasks once thought impossible for computers to complete. And AI's capabilities are growing exponentially. Although AI promises many benefits, it also carries risks and potential for abuse, which has led some commentators across the ideological spectrum to call on the government to regulate AI. What is the government's role, if any, in the AI revolution? Is the government capable of regulating AI without creating excessive externalities? Join three new voices in administrative law for a framing of the key debates emerging around AI regulation. 

Featuring: 

  • Eli Nachmany, Law Clerk to Hon. Steven J. Menashi, U.S. Court of Appeals for the Second Circuit
  • Laura Stanley, Law Clerk to Hon. Stephen Schwartz, U.S. Court of Federal Claims
  • Seanhenry VanDyke, Law Clerk to Hon. Gregory Katsas, U.S. Court of Appeals for the D.C. Circuit
  • [Moderator] Prof. Aram Gavoor, Associate Dean for Academic Affairs; Professorial Lecturer in Law, The George Washington University Law School

 

*******

As always, the Federalist Society takes no position on particular legal or public policy issues; all expressions of opinion are those of the speaker.

Event Transcript

[Music]

 

Chayila Kleist:  Hello, and welcome to The Federalist Society's webinar call. Today, June 13, 2023, we host a discussion amongst new voices in Administrative Law on the emerging debates on AI regulation. My name is Chayila Kleist, and I'm an assistant director of practice groups here at The Federalist Society. As always, please note that all expressions of opinion are those of the experts on today's call, as The Federalist Society takes no position on particular legal or public policy issues.

 

In the interest of time, we'll keep the introductions brief, but if you'd like to know more about any of our panelists, you can access their impressive and full bios at fedsoc.org. Today, we are fortunate to have with us our moderator, Professor Aram Gavoor. Professor Gavoor is the Associate Dean for Academic Affairs and a professorial lecturer in law at the George Washington University Law School. His co-authored work was cited in a case decided by the Supreme Court of the United States, Department of Commerce v. New York, in 2019. Professor Gavoor has also served as an advocate in the Civil Division of the U.S. Department of Justice and in private practice, having briefed and argued over a dozen cases before a majority of the United States Courts of Appeals, and having litigated in nearly a third of the 94 United States District Courts. Professor Gavoor's scholarship has earned placement in the Florida Law Review, Indiana Law Journal, Ohio State Law Journal, Administrative Law Review, and other law journals. And I'll leave it to him to introduce our panel.

 

One last note, throughout the panel, if you have any questions, please submit them via the question-and-answer feature which can likely be found at the bottom of your Zoom screen, so that our speakers will have access to them when we get to that portion of today's webinar. With that, thank you all for being with us today. Professor Gavoor, the floor is yours.

 

Prof. Aram Gavoor:  Thanks so much, Chayila, and thanks so much to The Federalist Society and its Administrative Law and Regulation Practice Group for hosting this first-of-its-kind Teleforum that is especially designed to platform new voices, new blood, high-potential folks in the space of administrative law on important policy matters and legal issues. With the deployment of generative AI, especially the mass deployment of ChatGPT from OpenAI, the American public, policymakers, and investors have trained their attention acutely on the value, risks, and societal impacts of such technology and whether it can endanger our free and capitalist society or advance it.

 

Presidents Bush, Obama, Trump, and Biden have each advanced the national conversation with their own policy preferences through various executive orders, statements, and taskings. There has been a diversity of views as of late within the tech industry and among policymakers as to whether and how the technology should be regulated. Another key factor is to differentiate between the federal government's use of AI as opposed to that of the private sector. What's most important right now is that we are in what appears to be a bipartisan moment where the American public and legislators from across the aisle are in an information-gathering phase. And it's really a critical point in the conversation as to whether and how and to what degree AI should be regulated.

 

So with us, our three panelists, in alphabetical order, are Eli Nachmany, who's a law clerk to Judge Steven J. Menashi of the U.S. Court of Appeals for the Second Circuit. He graduated magna cum laude from Harvard Law School, where he was the editor-in-chief of the Harvard Journal of Law and Public Policy. Prior to law school, he worked for two years in the Executive Branch, serving as a speechwriter to the U.S. Secretary of the Interior and as a domestic policy aide in the White House Office of American Innovation.

 

Next, we have Laura Stanley, who is a law clerk to Judge Stephen Schwartz of the U.S. Court of Federal Claims. She earned her JD from the George Washington University Law School. Previously, she was a senior policy analyst at the GW Regulatory Studies Center and an economist at the U.S. Environmental Protection Agency. She began her career researching the empirical effects of regulation at the Mercatus Center at George Mason University.

 

Last, but not least, is Seanhenry VanDyke. He is a law clerk to Judge Gregory Katsas of the D.C. Circuit Court of Appeals. He earned a JD from Harvard Law School in 2021. And during law school, he served as a supervising editor of the Harvard Law Review and senior articles editor of the Harvard Journal of Law and Public Policy. He also served on the board of the Harvard Federalist Society and worked as a summer associate at Cooper & Kirk and Latham & Watkins. So, to kick off our conversation, let's start with Laura.

 

Laura Stanley:  All right. Thank you so much for that introduction, Aram. It's really a pleasure to join all of you here today to talk about this very important topic. So, as Aram mentioned, I am a law clerk on the Court of Federal Claims, but I have to mention that I am here in my personal capacity, and I am not making comments on behalf of my employer. So targeted AI has been part of our lives for many years now; every time you use Google Maps or Siri, you're using a targeted AI application. But broad-based AI models like ChatGPT are now developing at a rapid pace.

 

So I think on one side of the debate, we see a lot of academics who think the AI revolution is going to improve our standard of living, improve our health, avoid accidents, and so on, but only if we don't thwart development with burdensome regulations. On the other side, we see politicians and academics from all sides of the political aisle, expressing fears that a hands-off regulatory approach to AI will lead to various problems, from the invasion of privacy all the way to the destabilization of democracy. 

 

So if you're tuning in to this webinar, you're probably very well aware that the regulatory proposals for AI are increasing exponentially. And you're probably aware that there are a lot of calls for a new agency to regulate AI. So on May 16, the Senate Judiciary Committee held a hearing that included testimony from OpenAI CEO Sam Altman and IBM Chief Privacy and Trust Officer Christina Montgomery. And as Aram mentioned, it was a bit unusual in the sense that it was quite bipartisan. Politicians from both sides seem to agree that there is a need to regulate AI. And then roughly a week after the Senate hearing, Microsoft released a white paper and OpenAI published a blog post that put forward their vision for a holistic approach to regulating AI. They called for regulation over the entire AI production process, including the models, the applications, and the data centers. And they really envision a licensing regime that requires preapproval of large training runs, risk assessments, prerelease testing, and then on the back end, post-release monitoring.

 

There's been a lot of public debate about whether our existing federal agencies can effectively regulate AI. Can they handle a licensing regime like this, or do we need a new agency altogether? So I think there are quite strong arguments that our existing legal remedies and existing statutory authorities can be used to manage AI risks. But I also think the push for a new agency is not going away. I think it's important to start talking about the structural characteristics that an agency should have. It's important to start asking now, what level of independence should a standalone AI agency have? What level of political control is appropriate? 

 

And I think an agency with relatively less independence may have practical and legal advantages, particularly given the unique task of regulating algorithms. So for one, it would ensure the agency falls under the purview of the Office of Information and Regulatory Affairs, or OIRA. So for those of you who are not administrative lawyers, OIRA is a small office in the Office of Management and Budget, and agencies are required to submit their regulations to OIRA for review before they are issued. And President Obama aptly characterized this review as a dispassionate analytical second opinion. However, independent regulatory agencies are currently exempt from OIRA review. And importantly, for the purposes of this discussion, OIRA also coordinates review across all the different agencies, and it has staff with expertise in risk assessment.

 

So the regulation of AI raises important questions. For one, which experts should we trust to regulate AI? AI is a general-purpose form of computation that touches every sector, from financial services to health care to agriculture. And the people with expertise on the broad implications of AI are not necessarily going to be the experts in AI and computer science. OIRA, however, does have expertise in analyzing how AI systems are going to interact with each other and in thinking about the appropriate checks and balances. And OIRA can lean on the expertise across all the different agencies that are already thinking about how to regulate AI.

 

Another benefit of OIRA review is that it will help combat interest group influence and regulatory capture. At the May 16 Judiciary hearing, we heard senators expressing how great it was that Sam Altman was calling for regulation of his own industry. But I think people who have studied regulation for a long time were probably not surprised by this. Businesses have long advocated for regulation, and the motivation is sometimes to gain a competitive advantage and restrain competition. OIRA review, however, forces policy decisions to be justified in the language of cost-benefit analysis and, as I mentioned, subjects them to review by various agencies, and this helps to combat that type of capture.

 

A lot of people have written about this, but in particular, Richard Revesz, who is the current OIRA administrator, and Michael Livermore talk about this in a book called Preventing Regulatory Capture. And then finally, OIRA review can help avoid death by a thousand cuts. There are going to be countless agencies that want to regulate AI, and OIRA staff are trained to look for redundancies in regulations across the entire Executive Branch. And this can help avoid slowing development with superfluous regulations. And I know that might sound a bit trite, but I think it's true. I worked at EPA for many years. That's where I spent most of my career so far. And you might be surprised to learn that EPA is already looking at cryptocurrency and its effects on emissions and energy usage, and AI is going to raise similar concerns. I don't think there's going to be a single agency that is not implicated by AI.

 

So in addition to OIRA review, another practical consideration is that an independent agency would represent its own views in litigation without the need to follow DOJ policies. This could leave courts dealing with inconsistent positions taken by different parts of the federal government. And of course, this is an issue with existing independent agencies. But the concern is going to be magnified with AI since, as I mentioned, it's going to implicate virtually every agency. And then in addition to the practical benefits, a non-independent agency may have legal advantages as well. If you're interested in administrative law, you're probably well aware that in recent years, the Supreme Court has been taking a hard look at offices and officers that are insulated from political control in cases like Seila Law v. CFPB and Arthrex. So an agency where the decision makers are appointed by and easily removable by the president will mitigate some of those separation of powers challenges in federal courts.

 

And then finally, I want to make one last quick point. Most of the public discussion has focused on how an agency could help mitigate the risks of AI. But there's also a potential for an AI agency to be involved in regulatory reforms to maximize AI's potential and mitigate its negative impacts in unique ways. So, for example, FDA's review process for new drugs and medical devices moves at a really slow pace. It can take months to years, and the agency is already unable to keep up with technological progress. And now AI is moving at a breakneck speed in this area. So, for example, AI was recently used to discover a new type of antibiotic that works against drug resistant bacteria, and the discovery only took 90 minutes. There's a new AI tool. It's called Sybil, and it can detect early signs of lung cancer. 

 

We need an FDA review process that can keep pace with AI, but we also need unique reforms to mitigate negative impacts. So, for example, an area ripe for reform is our Social Security infrastructure. Unemployment insurance is going to be a safety net that helps people adjust to any coming employment shocks from AI. But our current infrastructure is incredibly broken. It's so broken that Congress had to give recipients an extra $600 per week during COVID because state unemployment offices were so overrun by applications that they couldn't be processed quickly enough.

So an agency might have a role, in collaboration with an office like OIRA, in identifying regulations that hinder AI development, but it could also help encourage the use of AI to, ironically, mitigate problems caused by AI, such as improving the states' unemployment review processes. So those were the main points I wanted to make. So I'll hand it over to Seanhenry or back to Aram.

 

Prof. Aram Gavoor:  Thanks, Laura. Seanhenry.

 

Seanhenry VanDyke:  Okay. I want to start by thanking The Federalist Society for the chance to participate in this discussion and thank Aram for moderating. As Aram mentioned, judging from recent congressional hearings, there seems to be a bipartisan consensus growing that Congress should do something about the risks presented by rapid AI development. But the cutting-edge AI systems that Congress wants to regulate are both tremendously complex and rapidly evolving and changing.

 

So because of these challenges, many members of Congress seem to want to just outsource the task of regulating AI to an agency that has the technical expertise to figure out the risks and then deal with them. But my hypothesis for today is that there are certain foundational questions that Congress itself hasn't really dealt with yet but is going to have to think about and deal with upfront, or else it risks having the entire scheme rendered ineffectual by litigation and administrative delays.

 

So one of those questions, for example, that Laura just aptly hit on is how much independence this new agency that will regulate AI could have. But I want to focus on two other questions that I think are also important for Congress to be thinking about at this early stage. The first question is the question of agency jurisdiction, how are we going to determine the jurisdictional boundaries of whatever regulatory scheme we come up with to regulate AI? Or put differently, assuming we create a new agency to regulate AI or give an existing agency that jurisdiction, how are we going to determine which AI systems are covered within its jurisdiction and which systems are not? 

 

This is a very tricky problem, in my opinion, because it's hard to draw a precise line that differentiates the strong AI systems that Congress seems so worried about, ChatGPT being an obvious example, from the algorithms used in older and more mundane applications, everything from search engines to smart home devices or Google Maps, like Laura mentioned. Hardly anybody really even agrees on a definition of what AI is. At the May 16 Senate hearing on AI regulation, the CEO of OpenAI called for "A new agency that licenses any effort above a certain scale of capabilities."  But neither he, the CEO of one of the leading AI companies, nor anybody else at the hearing seemed to have any idea of what that scale of capabilities should be.

 

So what should Congress do? One appealing option might be to just create an agency with something of an open-ended jurisdictional grant and leave it up to the agency to decide what degree of complexity in AI systems presents these risks and warrants regulation. But there are several pitfalls to at least be aware of here. The first is that Congress at least has to provide an intelligible principle for which AI systems deserve regulation and which don't, or else the whole scheme could be invalidated as a violation of the nondelegation doctrine. Second, a common criticism of agencies is that they like to aggrandize their power and engage in mission creep over time. And this criticism arguably carries more weight in recent years after the Supreme Court's decision in City of Arlington v. FCC, which held that agencies get deference in interpreting the statutory schemes that they administer, even when the agency interprets the scheme in a way that expands its own jurisdiction.

 

So in the context of AI, that could mean that if Congress creates an agency with something of an unclear jurisdictional boundary, then perhaps Congress creates the agency with ChatGPT in mind, but then the agency ends up regulating your search engine results, the algorithms producing your social media feeds, your smart home devices, Google Maps, and many other long-standing applications. And now we have a bunch of new regulatory burdens on even moderately sophisticated computing devices.

 

A third pitfall to be aware of is that when Congress creates a regulatory scheme with a fuzzy jurisdictional boundary, the scheme can often be tied up in litigation for years or even decades to figure out what that jurisdictional boundary is and actually try to concretize it a little bit. I think the best example here might be the decades-long saga of "Waters of the United States" litigation, including the Sackett decision that was just released about a month ago, figuring out, under the Clean Water Act, the jurisdictional boundary of the "waters of the United States." What does that mean? How far can EPA and the Army Corps of Engineers go in regulating water?

 

So based on these considerations, I think that if Congress tries to punt the question of which AI systems deserve regulation to an agency, we can likely expect years of costly litigation and regulatory uncertainty as a result. So I suspect it would be time well spent if Congress takes the time up front to try and figure out a prudent and workable jurisdictional boundary, rather than leaving the question of jurisdiction for the agencies and courts to figure out on the back end. That's the first question I want to look at, agency jurisdiction. 

 

The second question I'm interested in is a question about unanticipated risks. How can Congress make a regulatory framework that is quickly capable of responding to new and unanticipated risks that emerge from the evolution of AI? Much of the national conversation about AI has coalesced around the need to address certain known and anticipated risks, things like election interference, job market disruption from automation, and invidious discrimination by algorithms. But there's also an increasingly mainstream concern that, because of how complex and powerful AI is and how quickly it's developing, there are serious or even existential risks from AI that we just can't even comprehend yet. We're not even aware of what exactly the risk is or how it's going to materialize. We just know that it's something we should be concerned about.

 

Now, the blueprint for creating a regulatory scheme focused on discrete and known risks, like invidious discrimination, for example, is comparatively straightforward. But what about risks that we can't foresee in advance, at least not with very much specificity? How can Congress empower an agency ahead of time to deal with urgent and unforeseen risks from AI as soon as they come along? Or does Congress have that power? I think this is a particularly challenging and interesting question, and I think the challenge comes from the intersection of two doctrines of administrative law, the major questions doctrine and the nondelegation doctrine.

 

Under the major questions doctrine, if Congress doesn't explicitly empower an agency to deal with what a court considers to be a major question, then a court will presume that Congress did not intend to give the agency authority to deal with that problem. So if new and unanticipated problems arising from AI are considered major questions, which they might be precisely because they were unanticipated and Congress wasn't thinking about them when it designed the regulatory scheme, then a court might find that the agency lacks authority to address these new problems.

 

So one response to this concern deriving from the major questions doctrine might be, well, Congress should make explicit ahead of time that it wants the agency to have authority to address risks from AI that arise in the future, and that we're not just dealing with an enumerated set of specific risks, like invidious discrimination and election interference. But that open-ended a mandate would likely run into a different set of problems arising from the nondelegation doctrine, at least if a court were to adopt something like what Justice Gorsuch proposed in his opinion in the Gundy case and reinvigorate the nondelegation doctrine a little bit. So assuming that we have a sort of reinvigorated nondelegation doctrine along the lines of what Justice Gorsuch has proposed, then Congress is in something of a catch-22 if it wants to prophylactically empower an agency to address major yet presently unknown risks from the development of AI.

 

If Congress specifically focuses on presently understood risks, major questions arising down the road will be off limits because of the major questions doctrine. But if Congress is not so specific, then we have a nondelegation problem. There are a couple ways someone could respond to this. One could take this as a critique of one or both of these doctrines and say, look, these two doctrines in tandem totally disempower Congress from proactively dealing with these problems that we have a sense might be coming from AI.

 

But on the other hand, someone might say that the only problem is really just that Congress needs to work the way it was designed to. So the solution could be to move the infrastructure for quick regulatory adaptation inside of Congress, for example, by forming a permanent subcommittee to track and respond to AI developments and perhaps giving it some expert staff to keep up with these developments. Either way, my point here is simply that Congress needs to give more thought to how whatever regulatory scheme it creates is going to deal or not deal with unanticipated developments from AI that arise down the road. Thanks, and I'll turn it over to Aram or Eli.

 

Prof. Aram Gavoor:  Thanks so much, Seanhenry. Eli.

 

Eli Nachmany:  Yeah. So I also want to begin by saying I'm grateful to The Federalist Society for the opportunity to chat with you all today. Aram, thank you so much for moderating. And Laura and Seanhenry, excellent beginning to the panel with your remarks. Artificial intelligence, as we've heard, is a novel technology that threatens really to upend our basic economic structure. I just want to make two points today. The first is that, as Laura and Seanhenry have talked about, there is a vast potential for what Congress can do. But the fact is that our existing statutory frameworks really were not designed to regulate artificial intelligence for the most part. So that's point number one. 

 

And point number two is, notwithstanding that fact, everyone in Washington, D.C., it seems, wants some part of artificial intelligence right now, whether or not there's a statutory framework on the books. So let's start with the inadequacy of the existing statutory frameworks. I think there are legal issues here, and there are policy issues here. Beginning with the legal issue: Jody Freeman and David Spence have this great paper. It's called Old Statutes, New Problems, and it lays out this issue in the context of climate change. With these big, heady issues that often require technical detail and regulatory frameworks and a specification of what an agency's jurisdiction or statutory mandate is, Congress often struggles to keep up. There are various reasons for that. A lot of it just comes back to the requirement of bicameralism and presentment -- laws have to pass both houses of Congress and get through the president to become law. Congress especially struggles to keep up when the issue is one that is of political salience.

 

Now, it's not clear, and as we've heard today, it's not entirely clear what the partisan valence of artificial intelligence regulation is going to be. For the longest time, you had Republicans who were, as a general matter, skeptical of regulation, and you had Democrats who were more pro-regulation. It's not clear to me that that is going to continue to be the lay of the land going forward. But nevertheless, various voices on, I think, both sides of the political aisle are at least interested in -- we have this new, in some cases scary, technology. What do we do about it? The Supreme Court in West Virginia v. EPA stepped into this debate last year, and the Supreme Court recognized what Seanhenry has described as the major questions doctrine.

 

Now, the major questions doctrine is really -- it's a canon of statutory interpretation, a way that we interpret existing statutes that are on the books where we say if the agency finds this, and the Supreme Court uses the word "unheralded" so it's an unheralded power -- I want to quote the exact language here. So it's an "Unheralded power representing a transformative expansion of the agency's regulatory authority in the vague language of a long extant but rarely used statute designed as a gap filler." Well, that raises all sorts of questions, and maybe the agency's interpretation of the statute is wrong.

 

Now, whether that doctrine, which I think the way to simplify from that quotation is, you can't find a lot of power in an old statute that doesn't clearly give you the power. Whether that doctrine bars aggressive regulation of artificial intelligence is unclear. It's statute dependent. So you'll go statute by statute based on what an agency says is its authority to do given regulation. But the Court in recent years has disfavored regulation that is not grounded in firm statutory authority. And I'd recommend, folks, you may want to follow some of the cryptocurrency regulation cases. There are similar fault lines between this and that.

 

I think this now tees up this policy issue. The policy issue here is that regulation of artificial intelligence appears to present the clearest example of the need for administrative flexibility and bureaucratic expertise in crafting regulation. These are some of those arguments that the pro-administrative-state folks usually deploy when they're defending against some of the administrative state skeptics via the major questions doctrine, nondelegation doctrine, what have you. But that's, of course, in tension with our constitutional framework, which centers Congress as our nation's policymaking body, and, some argue, bars delegation of policymaking authority to the Executive. Commentators have said Congress is hopelessly gridlocked, slow to act, uninformed. I'm not totally sure that's exactly right, but those seem to be the main criticisms.

 

So the question is, how do you reconcile the two? I think first, we should acknowledge that this whole formulation presupposes that AI should be regulated, that regulation would be a net good. Those propositions are not clear. Still, even if regulation is needed and the administrative state should have some role in shaping that policy, the balance that our Constitution at least strikes is this: Congress has to be the actor that decides. Congress has to make the relevant policy choice. So maybe Congress holds more hearings. Maybe it seeks administrative input on statutes, what have you. But Congress ultimately is supposed to be the one under Article One that decides. And that's perhaps frustrating, right? Because Congress might not act.

 

One might argue the alternative is worse. In a system where we say Congress doesn't have to act, you're not going to like very much the results of what comes out of Washington, D.C. In the end, the answer might be, at least in the immediate term, before Congress can get its hands around what a regulatory framework might look like, that we should be focused also on -- and maybe this doesn't come from Washington, D.C. -- the inculcation of private civic virtue vis-à-vis artificial intelligence usage. So maybe the government or leaders in communities should be encouraging folks to be vigilant about data privacy with respect to AI companies. Guard what you're putting into the AI models as folks are tinkering around with AI. What data are you handing over to AI companies in the absence of a strong regulatory framework? We should have rigorous public scrutiny of algorithmic bias on really a variety of fronts and maybe some special solicitude for workers that the technology will displace.

 

That brings me to my second point now. Those are some big-time policy issues that are quite interesting and, again, politically salient. And so everyone in Washington, D.C., I think, wants a piece now of AI. Or I should clarify, everyone in the administrative state probably wants a piece. Congress may have some accountability concerns. You pass the wrong statute, something that you regulate or choose not to regulate doesn't go your way. If you make a policy choice and it goes bad, the blame falls on you because you're regularly up for reelection. This is one of the main fault lines of the administrative state today. As Elena Kagan points out in Presidential Administration, written when Justice Kagan was still Professor Kagan, the individual policy decisions of the Executive Branch, by contrast, often -- not always, but often -- have little impact on national presidential elections.

 

In any event, I think this situation demonstrates the value of, as Laura pointed out, a strong White House Office of Information and Regulatory Affairs, OIRA. At least when you have a lot of different actors in the administrative state who want to do something, if you can centralize the regulatory review in OIRA, you have a way to keep everybody in line. And when it's a novel issue like AI, I think you require a whole-of-government approach. And OIRA allows the United States to put forward a united front in our regulatory work. I'll say watch the independent agencies. Will, for example, the Federal Trade Commission attempt to outflank the White House from the left, maybe go farther on the regulatory front than President Biden is comfortable with? Unclear. It could tee up some fascinating separation of powers litigation, which the FTC already has been in the crosshairs of. So we'll see what happens there. But AI will undoubtedly have acute effects in certain areas. Agencies like, I think, the Labor Department should be especially focused on what we can glean from some shifts in the labor statistics that we see.

 

But the final thing I'll say is government should consider how we can use AI to make things better. And I'll echo Laura on this. Yes, of course, AI is a threat to various elements of our economic structure, our national security, and, frankly, our way of life. We can and should acknowledge this. But at the same time, AI presents a unique opportunity to improve the human condition, advance knowledge, and aggregate information. The government should harness and leverage AI for these purposes, and in regulation, when we're thinking about how to do this, we should be clear-eyed, but, dare I say, we should also be optimistic. So I appreciate the opportunity to join you today, and I'm excited to see what comes of AI regulation or non-regulation down the road.

 

Prof. Aram Gavoor:  Thanks so much, Eli. So just to sum up what each of you has said, almost at a reductive level, Laura really focused on the possible independence and degree of independence of a potential AI regulatory body, also the role of OIRA. And full disclosure, I was counselor to the Administrator, so I tend to be pro OIRA [inaudible 32:00] review and have strong views on that. Seanhenry focused on the domain of such an entity's jurisdiction, especially against the backstop of Chevron deference, step one, step zero, and certainly with the Court granting certiorari on question two in Loper Bright, that's a question that's going to be resolved to some degree, and we'll figure that out in the next terms. That's something that certainly Congress, in its consideration, if it does act, probably needs to think more about: explicit delegation.

 

And certainly that ties into the nondelegation doctrine as well, and it can't just delegate away its legislative powers. Also, he indicated the unanticipated risks of potential over-regulation. And there's plenty of history behind Congress encountering a new technology or new behavior, engaging in some level of regulation, and then 5, 10 years later, it always goes somewhere different from how Congress predicts. One of the best examples, not even on the technology side, is just the cost of the Freedom of Information Act. It was predicted that it would cost the entire federal government, like, 100 something thousand dollars to administer that whole statute government-wide. And within, like, a couple of years, it was, like, 100 times that.

 

Eli really focused on the inadequacy of existing statutory frameworks. And then also what to do about the alphabet soup of agencies in the milieu where just everyone wants to get involved. So to synthesize these points into, I think, a couple of points of discussion that can take us probably for the next ten or so minutes, some core questions. One is -- and I don't think we have to answer it here because it's more of a technical one -- is AI just a tool, or is it something more than a tool? Seems like there's a consensus that it is more than just a tool. AI's implications? Certainly, I think it implicates every agency, and there's plenty of domain for AI's application to existing laws, right? And Eli talked about that to some degree. Our Equal Employment Opportunity Commission, our civil rights laws, our drug approval processes, AI has some role in all of it.

 

And then the question of whether there is something more to be done. Whether it's more significant, let's say, than certain tech that exists for which these conversations have happened in the past, or whether it's even a force multiplier for that existing tech like social media, blockchain, crypto, various types of implants and wearable devices, as you're seeing with Apple's recent announcement, just about a week ago, of a wearable device. So because this is administrative law, the question that I want to pose -- really, let's focus now back on Congress. There's a lot of tools Congress can bring to bear. Seanhenry certainly mentioned developing greater subject matter expertise as one of them, the establishment of a permanent subcommittee or possibly, like, a temporary subcommittee as, like, a step into it.

 

But in terms of the different toggles of regulation, focusing on structure, let's just get your views on the degree of independence, the degree of congressional oversight, whether there should be a reauthorization, a mandatory reauthorization process, the depth of regulation. Maybe Congress always needs to be in catch-up, right? A very light touch, never quite does enough, but does enough based on what there's consensus on, and then reevaluates every couple of years and at least catches the bottom of the greatest risks while allowing innovation to develop. And then also it doesn't necessarily require an agency to have certain levels of adjudicative or administrative power such as licensure. Maybe it's best practices that are being developed, something that's missed, like -- or maybe it's a handful of agencies. But just to get your thoughts contextually for the other discussants and maybe a reaction to what I just suggested, let's go with reverse order now. Eli first.

Eli Nachmany:  Yeah. Absolutely. I mean, I think for thinking about what AI regulation might look like, there are a number of different buckets, and a lot of them fall along similar fault lines to what U.S. law already does. So I think, for example, our antitrust laws express a mood, at least, of skepticism of concentrated power. And so you might think, for example, with artificial intelligence, if there are certain AI companies that seem to be getting a lot of the data inputs, is data today power? I think a lot of people would say yes. And so you might be concerned about, well, can an AI company develop a mosaic about an individual person? Can they develop a mosaic of facts about an individual community? And what power does that have over the folks about whom that mosaic has been built? What can the AI company do with that? 

 

The more mundane, if not benign or perhaps benign, is that they can target advertisements to those folks. The other, I think, more maybe troubling is that they can, as Cass Sunstein said, "Nudge behavior." And we usually think of the nudge theory in the government context, but private companies also absolutely nudge individual actors in one direction or another. We have a pretty robust framework of data privacy laws that are still being developed. But it might not be that AI regulation is so novel as it is just the application of, say, data privacy laws or the frameworks prevailing under amendments like the Fourth Amendment. 

 

Now, I think that's very easy when you think about, okay, let's say a political candidate were to use an AI language model to put in "Where are the best cities that I should go to based on real-time voter information data to hold a campaign rally?" And we might be skeptical about an incumbent administration being able to access the AI input and what the AI spit out to that person. The harder case, of course, or maybe depending on your views on national security, the easy case is, let's say somebody who's a terrorist puts into the AI model "How do I build a bomb? And what are the different U.S. structures that would be most susceptible to being bombed?" Should there be some sort of automatic flag that goes to a law enforcement agency as soon as you input that question into an AI model? And the answer might be yes, for national security reasons. But I think it's quite easy to see what the various fault lines are here. And I think in some ways, they very much mirror what we already have on the books. So I think looking to existing U.S. law and building in the most Burkean sense on what we've already got may be a good place to start.

 

Prof. Aram Gavoor:  Okay. Seanhenry?

 

Seanhenry VanDyke:  Yeah. So you asked about our views on kind of what Congress should do in terms of structuring the independence of this agency and its oversight. Do we have a reauthorization requirement? How does that go? And one of the premises of my remarks was that there are certain things Congress has to think about in order to not have its whole scheme kind of rendered ineffectual by litigation and delay. And so I think -- how can Congress err on the side of safety to make sure that whatever scheme it creates isn't just wasted time, gets invalidated by the Court, something like that? And so I think, sort of paradoxically, it might be the case that less independence and more oversight from Congress ends up making a more effective and more robust regulatory scheme because otherwise, I think there's going to be lots of delay and problems due to litigation in our current administrative law environment in the courts.

 

And so a couple examples and things I would highlight: when it comes to agency structure, there are a lot of different doctrines sort of floating around appointments and removal, the Appropriations Clause, independence. But one theme that has kind of colored judicial intervention in all of these areas can arguably be described as sort of an anti-novelty principle, where many courts are saying that when we have an agency that's doing something that's kind of unprecedented, especially unprecedented and presenting a threat to the separation of powers, courts are very skeptical of that. And so a big Supreme Court case kind of ushering in this era was Free Enterprise Fund, where a big motivating piece of the Court's reasoning is that this arrangement of two layers of removal protection is unprecedented, and we're skeptical of it. And we saw that in several appointment and removal cases.

 

But now we're starting to see it in cases adjudicating agency structure -- adjudicating constitutional challenges to agency structure as well. And a good example here is the CFPB case where the Fifth Circuit said that the appropriation scheme for the CFPB is unconstitutional in part because it's unprecedented, it's novel, it doesn't have a place in our constitutional history and tradition of how the separation of powers works in the federal government. And the Supreme Court is going to hear that case. But I think it's an open question. They could end up agreeing with the Fifth Circuit.

 

And so I think for Congress, following all of these developments, my recommendation, if it wants something safe and something that's sure to work, is not to try and create kind of an extra-powerful agency in some new creative way like it tried to do with the CFPB, giving it all of this insulation and extra ability, extra funding, but instead to go with a more conventional historical method. I think that's much more likely to stand up to scrutiny in the courts. Another thing that you asked about was sort of the depth of regulation and whether it needs to be reevaluated every few years.

 

And one thing I would say, I think that just hits on the point I made about unanticipated risks as technology evolves. If you go back to the example of Section 230 and the regulation of internet platforms: in 1996, when Congress was deciding what risks these platforms create and how we should regulate them, Congress sort of single-mindedly focused on the risk of child sexual exploitation via these online platforms, and that was its focus in the Communications Decency Act. And you could analogize that to the risks we're worried about today, things like election interference. But then over the years, as this technology evolved, new problems emerged, like data privacy, financial fraud via the Internet, things that Congress hadn't already addressed. And there wasn't much of a will to revisit what Congress had done in 1996 with the Communications Decency Act. So I think that highlights the need for some sort of structural change within Congress, such as a standing committee or more expert staff that allows it to continually revisit and update its regulatory framework.

 

Prof. Aram Gavoor:  Thanks, Seanhenry. Laura. 

 

Laura Stanley:  Thanks. So I think this question raises an important tension in administrative law in general. So one of the issues that we think about a lot is that in the regulatory state, Congress is delegating so much broad power to agencies and really punting those important policy issues. And in some respects, an independent agency becomes more under the purview of Congress. Congress can set the budget and call hearings on agency operations. We colloquially refer to them as Congress's agencies. And so I talked a little bit about how having a non-independent agency could help combat regulatory capture and interest group influence through the mechanism of OIRA review. But I think there are lots of other mechanisms as well that help prevent regulatory capture in a non-independent agency. This just raises a lot of the tensions that exist in administrative law. Tradeoffs are everywhere, and when we move in one direction, we might give up something in another area. One example is that Cass Sunstein has argued that non-independent agencies that are tied to the president get cover from interest groups in a way that independent agencies don't. There's been a lot of research suggesting that independent agencies aren't necessarily free from interest group influence the way we might want them to be.

 

Prof. Aram Gavoor:  Thanks.  So with our remaining time, I want to save some time for the Q&A that's developing in the chat and also for all of our viewers, please do feel welcome to ask some questions. Obviously, we won't have time to answer them all. I might combine some of them.  I just want to give an opportunity to the three of you just to have a sort of free form discussion maybe for the next five or six minutes before we get to the Q&A.

 

Eli Nachmany:  Yeah. If I could briefly cut in on the independence question, because I think my answer focused more on what principles Congress might think about when doing the regulation. But there's, of course, the underlying independence aspect that I think Seanhenry and Laura discussed as well, quite well. With independent agencies, the question today is what exactly an independent agency that passes constitutional muster even looks like. The Court recently in the Seila Law case seemed really to start cutting back on what we are comfortable with in terms of the structure of an independent agency. It seemed to recognize, but maybe cast doubt on, two time-honored exceptions to the general rule that was set forth in Myers v. United States that the President must have plenary removal power over the entirety of the Executive Branch.

 

One of those exceptions, which I think is really not an issue here, is the Morrison v. Olson exception for something like an independent counsel, an executive official who exercises power pursuant to a limited jurisdictional mandate. Fine. The other exception that the Court seemed to recognize was the Humphrey's Executor exception, named after a case from the 1930s. And Humphrey's Executor essentially says you can have a quasi-judicial or quasi-legislative agency headed by a multi-member commission. That sort of agency can be independent. But as the Court explains in Seila Law, it's not entirely clear that the agency at issue, the Federal Trade Commission, as it was in 1935 when the Court described it as quasi-legislative and quasi-judicial, is the same FTC as we see today, when Congress in more recent years vested it with a litigation power, which feels more executive.

 

You look back at the Federal Trade Commission in the '30s, as established in the 1910s: the Federal Trade Commission prepared reports for Congress. It acted as a special master in certain judicial proceedings. And so there was almost, you might say, a principal-agent relationship between the agency and the non-Article Two branches of the government in a way that, with the current FTC, it's not so clear that that's what it's doing, or at least that's the general structure of its power. So I don't know. We'll see what the Court decides in future years as these challenges bubble up. But I would be cautious about an independent agency that you're explicitly vesting with executive power, because the Court seems to be getting at least more and more uncomfortable about that sort of structure as the years go on.

 

Laura Stanley:  I'll jump in and speak on the existing statutory framework issue, change the subject a little bit from independence. So Eli raised the important issue of whether our existing statutory frameworks are adequate, especially given that courts may strike down regulations promulgated under unclear statutes. I think there seems to be a general kind of assumption in public discussions that we're operating in a little bit of a vacuum, particularly after West Virginia v. EPA, but I'm not sure that's true. So I definitely recommend checking out Adam Thierer's recent report, and it describes all the existing regulatory authorities that are available across agencies.

 

I'll just give a couple of examples. So, for example, you all are probably aware of this: the FTC has very broad authority to police unfair or deceptive acts or practices. So if there's a defective or deceptive algorithm, the FTC can already intervene in that area. As another example, NHTSA, FDA, and CPSC all have really broad recall authority that can extend to algorithms and already has. So NHTSA has already recalled Tesla's self-driving autonomous vehicle system, and CPSC's recall authority is extremely broad. So it's not clear to me that these actions wouldn't survive review under the major questions doctrine.

 

I think it's also been interesting to see so much public debate on risks that I think really could be regulated ex post and under existing authorities like consumer protection. And there's been a lot of discussion but maybe relatively less attention on separating those risks from the risks that need to be dealt with ex ante, so risks like weaponization and the more existential risks. I think this has implications when we're thinking about what kind of authorities we want to give to a new agency. So maybe governance is better through a mix of statutes authorizing existing agencies to regulate targeted AI applications and then maybe creating an office within an existing agency to deal with those more existential-type risks that we really don't want to leave to be dealt with ex post.

 

Seanhenry VanDyke:  And I'll just jump in briefly to hit on Eli's point about independence and Humphrey's Executor. That exception to the usual requirement that an agency head has to be removable is coming under pressure in recent years. And I think Eli hit on this a bit, but it's putting a lot of pressure on the distinction between quasi-executive and quasi-legislative power. And particularly in these congressional hearings, a lot of people are calling for an agency to license AI, to audit AI, to take actions like that. And based on some of the new scholarship that's coming out suggesting that we need to pare back Humphrey's Executor, it seems like all of those activities would fall on the quasi-executive side of the line, where the agency is going after a specific system and saying we need to license -- [inaudible 51:35] or impose penalties. So if Humphrey's Executor does get pared back in sort of the way Eli was suggesting, then most of the things that people seem to want this new agency to do vis-à-vis AI would become unconstitutional under the separation of powers doctrine unless we have an agency with a removable head.

 

Prof. Aram Gavoor:  Thanks so much. So one question -- moving to the Q&A for the remainder of our time -- and just to predicate this question, we have OpenAI, whose CEO, Sam Altman, is saying, "Regulate us, we need it," right? As Laura was indicating, it's always convenient for an incumbent to say that. It can just make it more difficult for new entrants in the marketplace. Then IBM indicated, and of course all this runs through public affairs and government affairs folks, the term precision regulation, which is a variation of light touch for existing agencies with existing authorities. And then just yesterday Google enunciated its view in regard to its own technology, Google DeepMind, another term, "wheel and spoke," which is just another way of saying, "No regulation. Let the market do its job. What NHTSA is doing is useful. Best practices, we like that. Leave us alone." So my question is, what are the risks of trusting the market, of letting AI develop a bit more before Uncle Sam jumps in? Even if Uncle Sam jumps in lightly, are those risks unavoidable without regulation? Let's go with Laura first.

 

Laura Stanley:  So I'll give a very unsatisfying answer, which is that I don't really think we can predict what the benefits of AI are going to be or the risks of AI. It's just such an unknown.  It'd be like saying, what's the printing press going to do? When it was first created, we would have never predicted what would have happened following that. So I'll just take a very quick second to make a plug on some of the unintended consequences we should consider. There's so much discussion about limiting access to AI applications, but an unintended consequence of this is that it could drive activity underground. And I think it's just really interesting to think about the Italian ban of ChatGPT. So there were researchers who looked at GitHub users in Italy following the ChatGPT ban there, and then compared it to other European countries, and the output of the coders dropped by 50% in the first two business days and then output recovered two days later because all the coders bypassed the ban. It was like, very easily circumvented. So I just think we want to think, like, can we even regulate it? And are we going to just drive this activity underground?

 

Seanhenry VanDyke:  I'll jump in. So I agree with everything Laura said, but I want to highlight one additional risk of waiting and what are the risks of trusting the market? One risk is that in ten years, it'll be impossible to put this genie back in the bottle in a way that it's not now. And so if you think of, to give another historical example of a big comprehensive regulatory program Congress tried to create, the Clean Water Act and Clean Air Act that Congress passed in the 70's were notoriously difficult to get up and running with the creation of standards, in part because the implementation of these programs required paring back a ton of industry and activity that was already going on and that had already been developing for decades. Whereas it could -- an argument could be made that right now in AI, we have a chance to get a regulatory framework set up before all this technology has developed, and we have to try and retrofit technology back into the framework, which is a lot harder than just creating the framework now and letting the technology develop within the framework.

 

Eli Nachmany:  I'll briefly note, I think, that whatever we do, it's likely that adversaries of the United States, foreign countries, will be attempting to leverage and harness AI as best as they possibly can to the benefit of their own national interests. And so I think, on the one hand, it is quite scary that -- I think Laura is entirely right. We don't know what we don't know. And sometimes when you have uncertainty with a large tail risk, Seanhenry is entirely right that that actually might call for significant, perhaps severe regulation to mitigate the potential unforeseen consequences when they could be quite negative.

 

At the same time, just back to some of my opening remarks, and Laura touched on this point too: the possible benefits of AI for delivery of government services, and also to not only improve the human condition in the United States but improve the position of the United States vis-à-vis other actors on the foreign stage, are something at least that should be part of the consideration of AI regulation. So I don't see AI regulation necessarily as just, oh, there is this thing that is scary and thus we must ban it. I think also you get some play in -- the U.S. Government establishes a foothold with this particular technology and sees what it might look like in the years to come, how it can benefit us as opposed just to harming us. But Seanhenry's point is entirely salient. I think getting a regulatory framework online could have quite good benefits, whatever that ultimately looks like, whether it's prohibitive or permissive.

 

Prof. Aram Gavoor:  Thanks. So we have a little under two minutes, so let's do a lightning round. 20 seconds each. And certainly the prerequisite to regulation is making sure that American innovation remains supreme and also national security interests are satisfied. So agencies are going to regulate AI through their own trans-substantive and individual domains anyways. And I swear to God, this is not a slow pitch for OIRA review. What to do about regulatory crossfire? 20 seconds each. Seanhenry.

 

Seanhenry VanDyke:  I mean, I think OIRA is the -- you kind of answered it, that OIRA is the institution that our government currently has for overseeing that. Now, what you could do in addition to OIRA is create something that I think both the Trump and Biden White Houses have done, which is, in addition to OIRA, a separate sort of presidential committee that's specifically focused on AI and is trying to have more technical expertise in this specific field than OIRA does.  And so between OIRA and a committee like that, you have technical expertise and OIRA's knowledge.

 

Prof. Aram Gavoor:  Thanks. Eli.

 

Eli Nachmany:  Yeah. I think you should be very deliberate about what you mean by expertise.  So if it's technical, scientific expertise, get the scientists who understand what artificial intelligence exactly is and how to read an algorithm, source that from the agencies. But I think keeping OIRA in the fold is a good thing. The other type of expertise is what you might call policy or political expertise. Massaging the different interest groups, knowing what the basic fault lines are, that's an expertise that shouldn't be underrated in terms of government and administration. The agencies have a good idea of what the various interest groups want and need.

 

Prof. Aram Gavoor:  Thanks. Laura, close it out.

 

Laura Stanley:  Yes, I will say that I agree with Eli. I'm biased because I was an economist in an agency. But I think agencies should staff up with economists, social scientists, people who are experts in risk assessment. You could even consider creating new economics and social science risk assessor groups within any agency or agencies working on AI. They can be like mini OIRAs within agencies.

 

Prof. Aram Gavoor:  Thanks so much.  I'll pass it off to Chayila now.

 

Chayila Kleist:  Yes, and with that, on behalf of The Federalist Society and myself, I want to thank our panel for the benefit of their valuable time and expertise today and thank you to our audience for joining and participating. We welcome listener feedback at [email protected] and as always, keep an eye on our website and your emails for announcements about other upcoming virtual events. With that, thank you all for joining us today. We are adjourned.