ChatGPT Gets Sued By Radio Host

A nationally syndicated talk-show host based in Georgia has sued OpenAI, the artificial-intelligence company, for defamation. The lawsuit alleges that OpenAI’s AI-powered chatbot, ChatGPT, fabricated legal claims against him. This case is believed to be the first defamation complaint related to ChatGPT, which launched in November 2022.

The plaintiff, Mark Walters, is the founder of Armed American Radio, a platform known for advocating gun rights; he is often referred to as “the loudest voice in America fighting for gun rights.” He filed the lawsuit in Georgia state court, seeking unspecified monetary damages.

According to the complaint, journalist Fred Riehl, the editor-in-chief of AmmoLand, asked ChatGPT on May 4 to summarize a legal case called Second Amendment Foundation v. Ferguson. The case, filed in a Washington federal court, accused the state’s Attorney General, Bob Ferguson, of abusing his power by impeding the activities of the gun-rights foundation. Riehl provided ChatGPT with a link to the lawsuit.

The lawsuit raises concerns about the accuracy and accountability of AI-generated content, especially when it comes to legal matters and potential defamatory statements.

AI Hallucination

Walters, the CEO of CCW Broadcast Media and host of pro-gun radio shows, was caught up in an AI “hallucination,” the term for instances in which an AI system such as ChatGPT generates entirely false events or information.

In this case, the AI not only fabricated an embezzlement case but also wrongly named Walters as its central figure. On May 4, Fred Riehl, the editor-in-chief of AmmoLand, asked ChatGPT to summarize a legal case called “The Second Amendment Foundation v. Robert Ferguson.” Instead of providing an accurate summary, the AI generated an entirely fictional 30-page document that falsely portrayed Walters as the treasurer and chief financial officer of the Second Amendment Foundation (SAF) and as the defendant in the lawsuit.

Walters, in fact, has no affiliation with the SAF and had no involvement in the lawsuit ChatGPT was asked to summarize. The actual case centered on SAF’s allegations that Washington state’s Attorney General Bob Ferguson had misused his authority to suppress the activities of the gun-rights organization.

The Backlash 

After receiving the false information, Riehl checked ChatGPT’s claims with Alan Gottlieb, the head of SAF, who confirmed that the AI’s assertions were entirely untrue. Yet when Riehl asked ChatGPT to provide a specific excerpt from the lawsuit mentioning Walters, the AI persisted in its erroneous claim, generating a completely fabricated paragraph that falsely portrayed Walters as having engaged in misconduct within SAF.

The incident subjected Walters to intense public scrutiny and ridicule. In response, he filed a lawsuit against OpenAI on June 5 in a Georgia state court. The complaint characterizes the AI’s output as “malicious” and alleges that it caused significant harm to Walters’ reputation, exposing him to public hatred, contempt, and ridicule. As a remedy, Walters is seeking financial damages in an amount to be determined at trial.

Time For Regulations 

This incident puts the spotlight on the potential harm that AI hallucinations can cause. It raises pressing questions about the regulation of emerging technologies like AI, and the accountability of their creators.

In April, Google CEO Sundar Pichai, whose company has released a ChatGPT rival called Bard, warned about the problem of AI hallucinations in a CBS “60 Minutes” interview. He also described scenarios in which Google’s own AI programs developed “emergent properties,” or unanticipated skills for which they were not trained.

OpenAI CEO Sam Altman has echoed these concerns. He called on Congress to implement guardrails around artificial intelligence, warning that a lack of regulation could lead to significant harm. Altman emphasized the potential severity of the consequences if AI technology goes wrong, stating, “If this technology goes wrong, it can go quite wrong and we want to be vocal about that.”

Even Elon Musk, a prominent figure in AI development, expressed apprehension about the further development of AI models. He warned of the systems’ profound risks to society and humanity and advocated a pause in their development.


The legal action against OpenAI represents a crucial moment in the ongoing debate over AI’s impact on society. While AI has the capacity to bring transformative change to numerous domains, the Walters incident shows that without appropriate oversight it can also cause real harm. The lawsuit underscores the need for more robust regulation of the AI industry, to ensure the technology is used responsibly and with ethical considerations in mind.

Frequently asked questions 

Q1: Who is the plaintiff in the lawsuit against OpenAI’s ChatGPT and what is the reason behind the legal action?

Mark Walters, a Georgia-based radio host, is the plaintiff in the lawsuit against OpenAI. He sued after ChatGPT generated false information implicating him in an embezzlement case.


Q2: What does the term “AI hallucination” refer to?

The term “AI hallucination” pertains to instances where an AI system generates entirely fictitious events or information. In the case of Mark Walters, ChatGPT created a fabricated embezzlement case and falsely linked him as a key figure.


Q3: What implications does this lawsuit hold for the AI industry?

This lawsuit represents a significant turning point in the discussions surrounding AI and its potential consequences. It highlights the risks associated with unregulated AI systems and underscores the need for accountability and responsible use of such technologies.

Q4: How have prominent figures in the AI industry responded to this incident?

Prominent figures in the AI industry, including Google CEO Sundar Pichai and OpenAI CEO Sam Altman, have expressed concerns about the risks posed by AI “hallucinations.” They advocate for increased government oversight and the implementation of safeguards for AI technologies.

Q5: What was the actual court case that ChatGPT was asked to summarize?

ChatGPT was asked to summarize a court case titled “The Second Amendment Foundation v. Robert Ferguson.” In that case, the Second Amendment Foundation accused Washington state’s Attorney General Bob Ferguson of abusing his power to suppress the gun-rights group’s activities.