In the wake of President Biden’s October 30, 2023, Executive Order on “Safe, Secure, and Trustworthy Artificial Intelligence,” it may be well to heed a warning from AI godfather Yann LeCun: the real threat from AI is not evil robots taking over the planet but greedy one-percenters trying to preserve the field for their own exclusive domination. In a Business Insider article by Hasan Chowdhury, LeCun cautions against industry bigshots intent on shaping a regulatory environment that captures the AI industry for themselves. If they succeed, he warns, a very small number of very large companies will control AI.

Chowdhury reported that, in March, more than 1,000 tech leaders, including Elon Musk (X), Sam Altman (OpenAI), Demis Hassabis (Google DeepMind), and Dario Amodei (Anthropic), signed a letter calling for a six-month pause on AI development. LeCun says they are the ones drumming up fear of the very technology they are promoting, and that many of the risks they invoke are hypothetical. “[F]or LeCun, the real danger is that the development of AI is locked into private, for-profit entities who never release their findings, while AI’s open-source community gets obliterated,” said Chowdhury.

It’s not as though the little guys haven’t beaten the big dogs to the punch before. OpenAI built an incredible image generator called DALL-E. At first it was heavily encumbered, and OpenAI made sure that only a small elite had access. Then another entity, Stability AI, created and distributed Stable Diffusion, which was as good as DALL-E (if not better) and which anyone could download to a computer for free, as open source. It lacked DALL-E’s guardrails. Suddenly there was great excitement as everybody downloaded Stable Diffusion, and a billion flowers bloomed. AI art took off like a rocket, and now AI-generated images are everywhere. The sky hasn’t fallen, but open-source Stable Diffusion basically ate the for-profits’ lunch. OpenAI, Google, and X aren’t about to let that happen again.

The President’s EO opens with a topping of PR fluff that reads very much as though it had itself been written by AI, but then in Part Two shifts to a very technical, nuts-and-bolts outline that bears the fingerprints of industry insiders. The foxes are making recommendations about henhouse design.

Preemptive and pervasive regulation will benefit the Goliaths at the expense of emerging Davids. Government regulation of AI may prove a greater threat than AI itself.

Note from the Editor: The Federalist Society takes no positions on particular legal and public policy matters. Any expressions of opinion are those of the author. We welcome responses to the views presented here. To join the debate, please email us at [email protected].