Note from the Editor: The Federalist Society takes no positions on particular legal and public policy matters. Any expressions of opinion are those of the author. We welcome responses to the views presented here. To join the debate, please email us at [email protected].
The rapid rise of generative artificial intelligence (AI) has sparked concerns about its potential to spread false, reputation-damaging information. Yet few defamation lawsuits have been filed anywhere in the world, leaving AI companies’ liability largely untested. As of May 2025, no AI defamation case has reached a final judgment globally, though developments in both litigation and regulation have begun to move the needle on this pivotal topic.
Existing AI Defamation Litigation
In 2023, Australian mayor Brian Hood initiated the first known AI defamation action against OpenAI, the creator of ChatGPT. Hood alleged that ChatGPT falsely claimed he had been imprisoned for bribery, when in fact he had been a whistleblower in a 2000s bribery scandal. His lawyers sent OpenAI a “concerns notice,” the required first step in an Australian defamation action. Hood dropped the case in February 2024, citing high litigation costs and the fact that OpenAI had updated ChatGPT to correct the false statements.
In the U.S., early AI defamation cases have faltered. A Georgia lawsuit against OpenAI appears likely to be dismissed at the summary judgment stage due to insufficient evidence of actual malice, a key requirement under U.S. defamation law for public figures. A Maryland case against Microsoft was sent to arbitration, reducing its chances of setting a legal precedent.
However, on April 29, 2025, conservative activist Robby Starbuck filed a defamation lawsuit against Meta in Delaware Superior Court. Starbuck alleges that Meta AI falsely accused him of participating in the January 6, 2021, Capitol riot and committing a misdemeanor—claims he says are entirely fabricated. The complaint argues that Meta acted with reckless disregard by continuing to publish these statements after being notified of their falsity and having the means to verify their accuracy.
Starbuck v. Meta could be the first U.S. case to set precedent on the question of who is liable when AI defames an American citizen. A common misunderstanding is that Section 230 of the Communications Decency Act (1996) shields AI companies from defamation liability. While Section 230 protects platforms from liability for user-generated content, it does not apply when AI itself generates the content. Moreover, legal scholars have argued that AI companies can be held accountable if they knowingly or recklessly allow defamatory outputs, and that failing to act after being alerted to false, libelous content could constitute reckless disregard, opening the door to liability.
Global Regulatory Responses to AI Defamation
With limited court precedents, many countries are adapting regulatory frameworks to address AI-generated harms, including defamation.
In the United Kingdom, the Defamation Act 2013 reformed defamation law in England and Wales, addressing concerns that the existing law overly favored claimants, stifled free speech, and encouraged “libel tourism.” While Section 5 of the Act protects website operators from liability for user-generated content if they follow a complaint process, AI providers who themselves generate content (as Meta AI is alleged to have done in the Starbuck case) are likely primary publishers, akin to authors, and must rely on traditional defamation defenses such as truth or honest opinion to avoid liability. No UK case has yet applied this framework to AI outputs.
The UK also enacted the Data Protection Act 2018, which implements and supplements the European Union’s General Data Protection Regulation (GDPR) in the UK, tailoring it to national needs. Under the Act, false AI statements that identify and relate to an individual can be treated as inaccurate “personal data,” allowing individuals to demand rectification or erasure. The UK’s Online Safety Act 2023, meanwhile, mandates risk assessments for illegal content (which may include defamatory content if it meets the threshold of a criminal offense), provides complaint mechanisms, and imposes duties to remove illegal content promptly, with fines up to 10% of qualifying global annual turnover or £18 million, whichever is greater.
The European Union’s GDPR sets a unified standard for data protection across the EU. In April 2024, the advocacy group None of Your Business (NOYB) lodged a GDPR complaint against OpenAI with the Austrian Data Protection Authority (DPA), alleging that ChatGPT’s false outputs violate the GDPR’s data accuracy principle and seeking damages as well as correction or deletion of the false data. The complaint is under review. Further, the European Union’s product liability rules, such as the 2025 Product Liability Directive, impose strict liability on AI providers for harms like personal injury, property damage, and data loss caused by defective AI outputs.
In India, Section 66D of the Information Technology Act, 2000 penalizes “cheating by personation” via digital means, with up to three years’ imprisonment and a fine of up to ₹1 lakh (roughly $1,200). Deepfakes targeting politicians ahead of the 2024 elections raised concerns, but platform liability remains unsettled, in part because the provision targets individual perpetrators and courts have not consistently clarified platforms’ responsibilities.
China’s Interim Measures for the Management of Generative Artificial Intelligence Services (effective August 15, 2023, and issued by the Cyberspace Administration of China) specifically target generative AI services provided to the public in China, covering text, images, audio, and video. These measures require providers to ensure the quality and compliance of training data. Article 17 mandates that providers disclose details about the “source, scale, type, quality” of pre-training and optimization data, as well as labeling rules and algorithms, to regulators during inspections, reflecting tight state oversight.
Returning to the U.S., the outcome of Starbuck v. Meta could redefine accountability for AI-generated defamation, setting a precedent for when machines falsely tarnish reputations. As global regulators tighten the reins on AI’s potential harms, U.S. courts face a critical choice: will they hold tech giants liable for reckless falsehoods, or shield them, leaving victims with little recourse? With no final AI defamation judgment worldwide as of May 2025, this case has the potential to be a legal crucible, with its resolution shaping whether AI’s power to generate and amplify lies comes with responsibility—or not.
Disclosure: Dhillon Law Group Inc. is lead counsel for Mr. Starbuck in Starbuck v. Meta.