Biden’s Executive Order on Artificial Intelligence: An Ill-Advised Departure from Light-Touch Regulation
As promised, just in time for Halloween, late on October 30, the Biden White House released its Executive Order on Artificial Intelligence (AI). Ostensibly focused on AI use by federal agencies, the order’s very title—the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence—reflects its extensive reach. The EO even admits that by dictating private sector AI development for government use, the Administration is attempting to shape the AI market generally.
Almost every federal agency is brought into this regulatory extravaganza. For instance, the Commerce Department is directed to establish guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will then be required to use these Commerce-created tools to authenticate their communications to the public. The EO's 140 requirements for the various federal agencies even reach into regulating the computing power used to train large language models (LLMs).
Even the "independent" agencies are brought into the fold. The White House "encourages" FCC action on AI in four areas, and the FCC Chairwoman had already initiated inquiries on three of the four before the EO's release (likely reflecting coordination between the FCC and the White House). Among these FCC actions are Notices of Inquiry on how AI can make private-sector spectrum use more efficient and how AI can be used to reduce robocalls and scams.
The President had the jurisdictional grace to call on Congress to do something: the EO encourages Congress to pass data privacy legislation. Of course, perhaps driven by the meteoric take-up of ChatGPT, and AI's consequent breakthrough into popular awareness, a significant number of AI bills have already been introduced in Congress. Predictive AI, trained by humans on the large troves of data accumulated by the early 2010s, has been around for years, as seen in increasingly accurate online searches and apps. But Generative AI, like ChatGPT, in which large language models actually generate synthetic content, is a fairly new concept to most Americans.
Congressional bills introduced in response to this newfound public awareness include the bipartisan AI Labeling Act, which would require disclosures on AI-generated content. The bill was introduced by Senators Schatz (D-HI) and Kennedy (R-LA) in late October, on the same day Senate Majority Leader Schumer hosted an all-Senate AI Forum. The AI Labeling Act would also recommend that platforms develop non-binding technical standards for identifying AI content. House Commerce Chair Rodgers has said she plans to expand her earlier privacy bill to include AI, making a House counterpart possible. Other bills would mandate that developers conduct risk assessments of the LLMs they use. Senator Klobuchar's (D-MN) Real Political Ads Act attempts to combat "deep fakes" in political advertising.
In response to the EO, House Commerce Chair Rodgers agreed that Congress must pass data privacy and security legislation, but she also cautioned the President against over-regulating this emerging technology. Senator Cruz, Ranking Member on the Senate Commerce Committee, likewise criticized the overreach and warned against undermining U.S. innovation. The Chair of Senate Commerce, Sen. Maria Cantwell of Washington State, however, praised the President's EO, as did other Democratic Members of Congress. Industry accolades came from Brad Smith, President of Microsoft (a major investor in OpenAI, maker of ChatGPT, and owner of Azure Cloud), and Kent Walker, Chief Legal Officer of Alphabet (Google's parent), among others. But given the market position of those companies' cloud computing businesses, with Azure and Google Cloud already thriving incumbents in offering AI training on their platforms with their own or others' LLMs, they can better afford the regulatory costs of pre-screening their algorithms with the government than future start-ups can.
Of course, Washington is not alone in its new fixation on AI. As with most things regulatory, Europe has led the way. The European Commission is in the final stages of enacting its AI legislation. In fact, the week of the EO's release (again perhaps pointing to some prior coordination), European policymakers were in town, urging their Biden Administration counterparts to work together on AI regulation. The United Kingdom, no longer an EU member, will this month host an international AI Safety Summit focused on the most advanced AI, known as frontier models. Prime Minister Sunak's aim is reportedly to build on the ongoing work at various international forums, including the OECD and the G7 Hiroshima AI Process, as well as the commitments earlier obtained by the Biden Administration from leading U.S. computing firms, by getting the biggest labs to sign up to more detailed, but still voluntary, plans for the safe development and deployment of new models.
One would think that, given the clear market evidence of the superiority of the prior light-touch U.S. approach to technology regulation, an invitation to work with the Europeans would not be tempting. But the new EO suggests that the Biden Administration is very much interested in adopting European-style rules for U.S. technology innovation.
As Adam Thierer of the R Street Institute and Neil Chilson of the Center for Growth and Opportunity recently cautioned, the new EO threatens to slow and even reverse American leadership in this emerging technology by wrapping U.S. computational innovation in red tape; it also reflects the permissioned approach of European regulation that has served that continent so poorly when it comes to technology innovation. Hopefully, as the agencies publish their proposed rules for regulating AI in their respective sectors, the public will file comments to ensure a robust debate ensues.
The views expressed above are my personal views and do not reflect the views of my law firm HWG LLP or of any of our clients.
Note from the Editor: The Federalist Society takes no positions on particular legal and public policy matters. Any expressions of opinion are those of the author. We welcome responses to the views presented here. To join the debate, please email us at [email protected].