Hardly a week goes by without a new story of someone getting caught filing a brief, drafted with the help of generative artificial intelligence, that is filled with citations to non-existent cases. (“Always a bad idea,” as Chief Justice John Roberts understatedly put it in his most recent year-end report on the federal judiciary.)

Courts and state bar associations have already begun to respond to this problem of AI “hallucination” by proposing or enacting rules related to the use of generative AI in drafting legal documents. According to one recent report, 21 federal trial judges have issued standing orders regarding AI. But although the problem of AI hallucination is real, courts should be careful not to overcorrect and stifle an emerging technology that has the potential to revolutionize the practice of law, dramatically expand access to justice, and improve the quality of written submissions to courts.

A recent proposal from the U.S. Court of Appeals for the Fifth Circuit and a new ethics opinion from the Florida Bar show two approaches to dealing with this new technology—one grounded in skepticism, the other in cautious optimism.

In November 2023, the Fifth Circuit proposed an amendment to its Rule 32.3 that would require a new certificate of compliance on which filers must check one of two boxes, specifying either that “no generative artificial intelligence program was used in the drafting of this document” or that “a generative artificial intelligence program was used in the drafting of this document and all generated text, including all citations and legal analysis, has been reviewed for accuracy and approved by a human.”

Public comments about the proposed rule were largely negative. Comments by my organization, the Institute for Justice (IJ), argued that the rule was likely to discourage benign uses of generative AI. By forcing filers to out themselves for using the technology, while drawing no distinction between responsible and irresponsible use, the rule could easily discourage lawyers from using AI to, say, improve the readability of their briefs, out of concern that courts would assume they had used it to draft their briefs from scratch. At the same time, IJ argued that federal courts already have tools at their disposal for dealing with the irresponsible use of generative AI, as shown by recent cases in which courts have sanctioned lawyers for submitting filings with hallucinated citations. A better approach, IJ suggested, would be to require filers to certify only that they have confirmed the accuracy of all citations and legal analysis, without requiring them to affirmatively disclose the use of generative AI.

In contrast to the Fifth Circuit’s apparent skepticism of AI-assisted filings, the Florida Bar, in a recently released ethics opinion, took a more open position. The opinion does not require any affirmative disclosure of the use of generative AI in drafting court filings. Instead, it stresses the ordinary ethical duties that lawyers must bring to their adoption of any new technology: “[l]awyers may use generative artificial intelligence (“AI”) in the practice of law but must protect the confidentiality of client information, provide accurate and competent services, avoid improper billing practices, and comply with applicable restrictions on lawyer advertising.”

Of the two approaches, Florida’s is better calibrated to the actual concerns about the use of generative AI in drafting court filings. The problem is not the use of AI per se, but the potential for lawyers to abandon their ethical duties. And there is nothing unique about generative AI in this regard. A lawyer who signs and submits a brief written by a junior associate or a paralegal without first verifying the accuracy of its legal arguments has committed the same ethical lapse. The rules already used to deal with that sort of lazy lawyering are just as capable of dealing with lawyers who outsource their drafting to generative AI.

Florida’s approach, which does not require disclosure of the use of AI, is also less likely to discourage beneficial uses of the technology, particularly to improve legal drafting. The impenetrability of much legal writing has been the butt of jokes for generations. The Plain Language movement and legal-writing mavens like Bryan Garner have made great strides in recent decades in highlighting the poor quality of much legal prose and inspiring many lawyers to improve their writing. But the reality of legal practice is that few lawyers have the time to fully embrace Supreme Court Justice Louis Brandeis’s admonition that “there is no great writing, only great rewriting.” Here, AI-powered editing tools—such as the recently released BriefCatch 3—can speed up that rewriting process, producing clearer briefs, something that can only benefit courts.

To be sure, there are likely to be growing pains as AI products become more widespread. Where a previous generation was befuddled by email, some lawyers today mistake programs like ChatGPT for a “super search engine” with an uncanny ability to find on-point cases unavailable through services like Lexis or Westlaw. But just as lawyers eventually became comfortable with technologies like email and cloud computing, these growing pains will pass, particularly now that established legal-research platforms like Westlaw and competitors like vLex have begun rolling out AI tools designed to avoid the hallucination problem. And just as courts and state bars sensibly enacted policies that promoted the ethical use of email and cloud computing, they should do the same with the promising technology of generative AI.

Note from the Editor: The Federalist Society takes no positions on particular legal and public policy matters. Any expressions of opinion are those of the author. We welcome responses to the views presented here. To join the debate, please email us at [email protected].