Earlier this month, the Biden administration published a “request for information” (RFI) on artificial intelligence in the workplace. The request asked workers to submit, among other things, anecdotes about how they had been affected by AI. These anecdotes would then be used to develop “new policy proposals.”

The request failed to say, however, why “new policies” were needed. The administration had already conceded that AI tools were covered by existing law. And in fact, it had already issued guidance under those laws. So it didn’t seem to be filling any legal or policy gap. Instead, it seemed to be making a political statement. It seemed to be targeting AI because AI is poorly understood, and therefore unpopular. But that kind of approach to regulation promises no real solutions. It promises only talking points and red tape.

The administration is hardly the first to see AI as an easy political target. States and cities have already started planting their flags. First out of the gate was New York City, which passed the nation’s first law regulating AI-powered selection tools. The New York law requires employers to disclose their AI-powered selection tools, put the tools through annual “bias audits,” and give candidates a chance to ask for other selection methods. Likewise, there are at least four AI bills pending in California. The most far-reaching one, AB 331, would require employers not only to disclose their AI tools, but also to report AI-related data to a state agency. The bill would also create a private right of action, serving up still more work to the busy Golden State plaintiffs’ bar.

In short, lawmakers are clearly interested in AI and its effects on workers. Less clear, however, is what they hope to add to existing law. Just last week, the EEOC published updated guidance explaining how Title VII applies to AI-powered tools. Similarly, the NLRB’s General Counsel recently announced that the National Labor Relations Act already forbids AI tools that chill protected concerted activity. And Lina Khan, chair of the FTC, has written that “[e]xisting laws prohibiting discrimination will apply [to AI tools], as well as existing authorities proscribing exploitative collection or use of personal data.”

Given this existing coverage, it’s unclear what new policies the administration thinks it needs. Nor is it clear what harms the administration is trying to prevent. In the RFI, the administration linked to a handful of articles published on general-interest websites. But some of the articles were more than seven years old, and none of them established any discriminatory effects. One even suggested that companies were using AI tools to keep workers and consumers safe. How any of this called for a “new policy response” was left unsaid.

One suspects the administration left so much unsaid because it has so little to say. It cited no real evidence that AI is harming workers. But finding real harm didn’t seem to be the point. Rather, the point seemed to be scoring an easy political win. The administration is targeting AI because few people understand the technology. It can therefore crack down on AI tools without generating much backlash.

That kind of thinking is short-sighted. Not only has the administration identified no harm; it has failed to consider AI’s potential benefits. For example, AI-powered tools might help workers become more productive. The tools might help workers find jobs better suited to their skill sets. The tools might even help workers stay safe. Without more real-world experience, those benefits are impossible to quantify. Yet the administration is rushing ahead anyway, assuming the tools are nefarious without considering their possible upside.

For now, then, the RFI looks like a regulatory misstep in the making. Workplace AI is too new and too unfamiliar to know whether regulation is necessary, much less what a proper regulatory regime would look like. For once, regulators should aim before they fire.

Note from the Editor: The Federalist Society takes no positions on particular legal and public policy matters. Any expressions of opinion are those of the author. We welcome responses to the views presented here. To join the debate, please email us at [email protected].