ChatGPT Makes False Accusations Against Radio Host, Resulting In A Defamation Lawsuit Against OpenAI


However much ChatGPT has simplified life for millions of people who want quick answers to their queries, it cannot be ignored that the Large Language Model (LLM) can make glaring errors, producing content that is both false and misleading. Unfortunately for OpenAI, the company behind ChatGPT, those errors have now landed it in hot water thanks to a lawsuit filed by a radio host.
A radio host has filed a defamation lawsuit against OpenAI after ChatGPT generated false claims about the host's criminal history
Mark Walters, a radio host in Georgia, is suing OpenAI because ChatGPT generated responses claiming that he had been accused of defrauding and embezzling funds from a non-profit organization. According to The Verge, the LLM produced the information in response to an inquiry from a journalist named Fred Riehl. The lawsuit was filed on June 5 in Georgia's Superior Court of Gwinnett County, with Walters seeking monetary compensation from OpenAI of an undisclosed amount.
The journalist received the responses after he asked ChatGPT to summarize a real federal court case by linking to an online PDF. The Large Language Model then generated a summary of the case that was extensive in detail but entirely false. It incorrectly claimed that Mark Walters had misappropriated funds from a gun rights non-profit called the Second Amendment Foundation, pocketing $5 million. In reality, Walters has never been accused of this crime.
Fred Riehl made the responsible choice not to publish this factually incorrect information, deciding instead to verify it through another source, which was the correct decision to make. However, reports do not explain how Mark Walters discovered that ChatGPT was generating false responses about him. Since many users are aware that such programs often produce misleading responses, commonly referred to as 'hallucinations,' they tend to dismiss them without a second thought.
However, when ChatGPT generates responses that can cause actual harm, it becomes a problem. Two instances where the LLM's responses led to severe consequences were when one professor threatened to fail his entire class after ChatGPT stated that students had used AI to complete their essays, and when a lawyer faced possible disbarment after using the program to research legal cases that turned out to be fabricated. Due to these issues and more, OpenAI has placed a small disclaimer on ChatGPT's homepage, warning users that the AI can occasionally generate false information.
It is unclear how this lawsuit will play out, but it is high time that OpenAI's engineers worked to reduce the false responses generated by ChatGPT, because more trouble could follow for both professionals and the company itself.
Written by Omar Sohail