AI, Accountability, and the Law: The Gwinnett County Decision on ChatGPT and Defamation

In a significant ruling handed down on May 19, 2025, the Superior Court of Gwinnett County, Georgia, in the case of Mark Walters v. OpenAI, L.L.C., dealt with a novel and consequential legal question: Can the developer of a generative artificial intelligence (AI) model be held liable for defamation arising out of the AI’s “hallucinated” outputs?

This case presents a watershed moment for the evolving interaction between tort law, freedom of expression, and technological innovation. At the heart of the dispute was the conduct of ChatGPT, a large language model developed by OpenAI, and the nature of responsibility when AI generates inaccurate or false statements.


The Factual Backdrop

Mark Walters, a well-known commentator in the Second Amendment advocacy landscape and radio host with an estimated 1.2 million listeners per segment, alleged that ChatGPT generated false and defamatory content about him when a journalist, Frederick Riehl, used the platform to summarize a lawsuit (SAF v. Ferguson) for an article on AmmoLand.com.

Importantly, the misleading information was not based on the contents of the actual complaint, nor did it reflect any publicly available data. Instead, when given a URL to the complaint, ChatGPT—admitting it could not access internet content—fabricated a summary implicating Walters in serious allegations that were never made in the actual court document.

Hallucination and Liability

The Court acknowledged that generative AI, due to its probabilistic nature, can produce “hallucinations”—outputs that may appear coherent and factual but are in fact invented. OpenAI’s Terms of Use, which Riehl had accepted, and disclaimers embedded within the ChatGPT interface repeatedly warned users about the possibility of incorrect or misleading outputs.

The Court took cognizance of the fact that:

  • Riehl had prior experience with ChatGPT hallucinating facts;
  • He was on notice of multiple disclaimers and warnings from OpenAI;
  • The content in question was not verifiably sourced, and the journalist failed to cross-check the response with the actual document.

The Court’s Holding

In a well-reasoned order, the Court granted OpenAI’s Motion for Summary Judgment, effectively dismissing the defamation claims. It relied on several legal tenets:

  1. Foreseeability and Causation: The AI’s output was not a publication by OpenAI in the traditional sense; it arose from a user’s prompt to a tool known to be experimental.
  2. No Actual Malice or Negligence: Walters failed to show that OpenAI acted with knowledge of falsity or reckless disregard for the truth.
  3. First Amendment Protection: The judgment implicitly acknowledges the free speech concerns surrounding AI and journalism, balancing innovation and harm.
  4. Terms of Use & Disclaimers: Riehl’s acceptance of explicit disclaimers shifted the burden of verification onto the user.

Broader Implications

This case is one of the first judicial pronouncements globally to address defamation arising from generative AI. Although a trial-court order rather than binding precedent, it sets forth clear guidance:

  • Developers of LLMs like ChatGPT are not treated as publishers subject to strict liability and cannot be held vicariously liable for every hallucinated sentence.
  • Responsibility lies with the user—especially where appropriate disclaimers and limitations are in place.
  • Journalistic diligence and human verification remain paramount when AI is used in the reporting process.

Closing Reflection

As AI becomes ubiquitous in professional and personal life, this ruling provides clarity and comfort to AI developers, while simultaneously warning users and journalists to tread carefully. Courts are likely to distinguish between tools and actors, placing human agency, and accountability, at the center of legal discourse.

The Gwinnett County judgment also hints at future legislative intervention, where clear statutory guardrails may be erected to regulate misinformation, AI hallucination, and digital reputational harm.


Disclaimer

This blog post is intended for general informational purposes only and does not constitute legal advice. Readers are advised to consult qualified professionals for advice pertaining to specific factual or legal circumstances.
