After years of exploration in science fiction, artificial intelligence (AI) now promises to change reality thanks to a new wave of technical advances. Naturally, the emergence and adoption of these AI tools raise many important policy questions about privacy, fraud, intellectual property, and the like. How we answer these questions—and how the government eventually responds to them—will affect the trajectory of AI's deployment well into the future.

Because this wave of AI advances is still in its early stages, the prudent course for the government is to monitor progress and intervene in a targeted way only if circumstances warrant. Overzealous regulation at a time of great uncertainty and innovation could easily quash the nascent technology or nudge it in an undesirable direction.

But regulatory caution is not the Biden Administration's mantra. Instead, the Administration is calling for immediate government intervention to ensure “AI Accountability.”

To this end, this past March, the National Telecommunications and Information Administration (NTIA) released its Artificial Intelligence Accountability Policy Report (NTIA Report). The NTIA Report makes no apologies for its support of aggressive ex ante regulation of AI. In fact, the Report contends that “[m]ultiple policy interventions may be necessary to achieve [AI] accountability.”

At the top of the Biden Administration’s regulatory wish list are mandatory audits before certain types of AI technology can come to market. According to the Report (at page 40):

We recommend . . . that audits be required, regulatory authority permitting, for designated high-risk AI systems and applications and that government act to support a vigorous ecosystem of independent evaluation. We also recommend that audits incorporate the requirements in applicable standards that are recognized by federal agencies.

That said, the Report concedes that “[d]esignating what counts as high risk outside of specific deployment or use contexts is difficult.”

The Biden Administration also offers up a laundry list of other potential regulatory requirements for AI, including:

  • A national registry of high-risk AI deployments;
  • A national AI adverse incidents reporting database and platform for receiving reports;
  • A national registry of disclosable AI system audits;
  • Coordination of, and participation in, audit standards and auditor certifications, enabling advocacy for the needs of federal agencies and congruence with independent federal audit actions;
  • Pre-release review and certification for high-risk deployments and/or systems or models;
  • Collection of periodic claim substantiation for deployed systems; and
  • Coordination of AI accountability inputs with agency counterparts in non-adversarial states.

Needless to say, if fully implemented, NTIA’s desired regulatory regime would drop on innovators and entrepreneurs a mountain of paperwork wrapped in miles of red tape.

Yet while the Biden Administration is quick to call for regulation of AI, there are several analytical prerequisites that are conspicuously absent from the NTIA Report.

First, noticeably absent from the NTIA Report is any clear definition of what exactly constitutes “artificial intelligence.” While the Report is full of technological platitudes and assorted buzzwords, it is unclear (at least to the general practitioner) what precise technologies would fall under the Biden Administration's proposed regulatory umbrella. Casting such an open-ended net over an area of rapid innovation would invite regulatory overreach and its associated harms.

Second, it is axiomatic that when a regulation's costs (no matter how purportedly noble its aims) outweigh its benefits, society is better off without that regulation. But a cost/benefit analysis is nowhere to be found in the NTIA Report. Instead, the Report simply asserts that its proposed regulatory intervention will essentially be costless, stating that the “costs of mandatory audits can be managed.” This claim is entirely speculative.

Finally, the NTIA Report cites no direct statutory authority for its proposed regulatory regime (presumably because there is none). It follows, therefore, that some sort of new legislation is required. How Congress resolves the absence of statutory authority is anyone’s guess, but the history of tech legislation is unfavorable to the adoption of a sensible and humble approach. The Law of Unintended Consequences will assuredly rear its ugly head.

Given that the Biden Administration offers no concrete definition of the technology it wants to regulate, performs no cost/benefit analysis, and concedes it lacks any direct statutory authority to do what it wants to do, we must ask: what's really going on here?

Viewing the NTIA Report in conjunction with the Biden Administration’s 2022 Blueprint for an AI Bill of Rights and the 2023 Executive Order on Artificial Intelligence, one cannot help but conclude that the Report is yet another not-so-subtle message from the White House to both executive branch and independent regulatory agencies (e.g., the FTC, the FCC, the SEC, the CFPB) that they should embrace expansive views of their respective governing statutes to declare open season on AI innovators. Some agencies are already on the hunt.

Whether Congress and the courts will allow the administration and the regulatory agencies to get away with these efforts remains to be seen, particularly if the Supreme Court narrows Chevron deference this term as it is widely expected to do. In the meantime—like the proverbial frog who is placed in a pot of water that is gradually brought up to a boil—we may find out much too late that the damage caused by preemptive and hostile regulation of AI cannot be undone.

Note from the Editor: The Federalist Society takes no positions on particular legal and public policy matters. Any expressions of opinion are those of the author. We welcome responses to the views presented here. To join the debate, please email us at [email protected].