Artificial Intelligence and Administrative Law
The Federalist Society recently hosted a webinar entitled “New Voices in Administrative Law: The Emerging Debates on AI Regulation.” Panelists Eli Nachmany, Laura Stanley, and Seanhenry VanDyke discussed important issues that will arise as Congress and the executive branch consider whether and how to regulate artificial intelligence. Professor Aram Gavoor of GW Law moderated the discussion. Below is a summary of each panelist’s remarks. Watch the whole panel here.
Laura Stanley
Some argue that the coming AI revolution will improve our health outcomes, help us avoid accidents, and increase our standard of living. Others fear that a hands-off regulatory approach will lead to problems ranging from invasions of privacy to the destabilization of democracy, and there is growing bipartisan support for creating a new AI regulator to address those risks. Existing legal remedies and authorities can be used to manage AI risks, and existing agencies could be granted new regulatory authorities. But the debate over creating a new agency is not going away, and if lawmakers do choose to create one, they should pay close attention to its level of political insulation.
An agency with relatively little independence may have legal and practical advantages, particularly given the unique task of regulating algorithms. For example, in recent years, the Supreme Court has been taking a hard look at offices that are insulated from political control in cases like Seila Law v. CFPB and United States v. Arthrex. An agency whose decisionmakers are appointed by, and easily removable by, the President may avoid separation-of-powers challenges.
Practically, less independence would also ensure that the agency falls under the purview of the Office of Information and Regulatory Affairs (OIRA). Executive agencies are required to submit significant regulations to OIRA for review before they are issued, but independent agencies are exempt. AI is a general-purpose technology that touches every sector, and experts in computer science are not necessarily experts on the broad implications of AI. OIRA, however, has the breadth of expertise to analyze how AI systems will interact across sectors and to create the relevant checks and balances. OIRA can also lean on expertise from different agencies, and its review process forces policy decisions to be justified in the language of cost-benefit analysis, which helps to combat regulatory capture. Although most of the public discussion has focused on how an agency could help mitigate the risks of AI, an AI regulator could also be involved in reforms that maximize AI’s benefits. For example, an AI agency might play a role, in collaboration with OIRA, in identifying existing regulations that hinder beneficial AI adoption.
Seanhenry VanDyke
A growing and bipartisan faction in Congress wants to address the risks presented by artificial intelligence. But the cutting-edge AI systems they want to regulate are both tremendously complex and rapidly evolving. Because of these challenges, many members of Congress appear ready to outsource the task of regulating AI to an agency that (hopefully) has the technical expertise to figure out and mitigate the risks. Yet Congress itself must grapple with certain foundational questions or else risk having the entire regulatory scheme rendered ineffectual by litigation and administrative delays. Two such questions warrant particular attention.
The first question concerns agency jurisdiction. Assuming Congress creates an agency to regulate AI (or gives new power to an existing agency), which AI systems will be subject to regulation? This is a tricky problem because it’s hard to draw a line that differentiates the strong AI systems Congress seems worried about—ChatGPT, for example—from algorithms used in more mundane applications. Congress might try to avoid this problem altogether by leaving it up to the agency to decide which systems to regulate. But this carries several risks. One is the risk of mission creep: Without a clear jurisdictional boundary, an agency created with ChatGPT and job automation in mind could go on to regulate search engine results, social media feeds, smart-home devices, and more. Additionally, a fuzzy jurisdictional boundary risks years or even decades of litigation to concretize the limits of the agency’s authority. (Compare, for example, the decades-long saga of “Waters of the United States” litigation.) For these reasons and others, it would be time well spent for Congress to attempt to craft a prudent and workable jurisdictional boundary on the front end, instead of leaving these issues for the agency and courts to sort out on the back end.
The second question concerns unanticipated risks. It would be comparatively easy for Congress to give an agency authority to address certain known risks—things like election interference and invidious algorithmic discrimination. But what about serious or even existential risks from AI that we can’t foresee, at least not with any specificity? Congress is arguably in something of a catch-22 if it wants to prophylactically empower an agency to address major, yet presently unknown, risks from the development of AI. If Congress focuses specifically on presently understood risks, major risks arising down the road might be off-limits because of the major questions doctrine, which requires clear congressional authorization for agency action of vast economic and political significance. But if Congress is not so specific, the whole framework might have a nondelegation problem. Perhaps one solution is to move the infrastructure for quick regulatory adaptation inside Congress—for example, by forming a permanent committee and hiring expert staff to track and respond to AI developments. At a minimum, the question of unanticipated risks also warrants serious attention as Congress contemplates how to regulate AI.
Eli Nachmany
Congress might regulate artificial intelligence. It might not. Regardless of what Congress does, however, the executive branch is likely to respond to developments in AI technology with administrative action. For example, the Biden White House has already released its Blueprint for an AI Bill of Rights. Of course, many of our existing statutory frameworks governing technology did not contemplate AI regulation when they were enacted. For that reason, AI regulation by administrative agencies faces the problem of fitting “old statutes” to “new problems”—something that Jody Freeman and David Spence discussed in an important 2014 article about environmental regulation.
In the major-questions era, courts are skeptical of significant regulatory action that rests on dubious statutory authority. Indeed, the Supreme Court just told us in West Virginia v. EPA that agencies should be cautious about finding unheralded powers in old statutes, particularly when such a power would represent a transformative expansion of the agency’s regulatory authority. This state of play creates a problem for administrators: The rapid growth of AI—a novel, complex, scary (?) technology—threatens to upend our economic structure and perhaps our very way of life. It also presents tremendous opportunities to eliminate economic inefficiencies, advance knowledge, and improve the human condition. These sorts of developments often call for government to do something. But in the absence of congressional action on this issue in particular, agencies need to be circumspect about discovering regulatory authorities in statutes that are not about AI regulation.
To be sure, government regulation of AI is not the only tool that the American public has for dealing with this new technology. In my remarks, I also encouraged the inculcation of private civic virtue vis-à-vis AI usage in three discrete areas. First, Americans ought to be vigilant about data privacy when using AI language models; AI companies can glean a great deal of information about people from the questions they enter into language models while tinkering with the software. Second, Americans must develop a culture of rigorous scrutiny of algorithmic bias (in all forms, including political and ideological bias) and question the extent to which an AI language model’s responses are attempting to influence their behavior—from purchasing decisions to voting—in one direction or another. Third, Americans should maintain a special solicitude for the workers whom this new technology will displace.
Note from the Editor: The Federalist Society takes no positions on particular legal and public policy matters. Any expressions of opinion are those of the author. We welcome responses to the views presented here. To join the debate, please email us at [email protected].