ChatGPT's made-up cases put a lawyer's career in danger

Onur Demirkol
May 29, 2023

ChatGPT is known for fabricating information, and a lawyer in the United States has learned this the hard way. Steven A. Schwartz relied on ChatGPT's responses and cited cases that the chatbot claimed were real precedents; in fact, every one of them had been invented by the chatbot.

The New York Times revealed today that lawyers suing Colombian airline Avianca filed a document full of prior cases that ChatGPT made up. After opposing counsel called attention to the nonexistent cases, US District Judge Kevin Castel stated, "Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations." He then scheduled a hearing as he considered sanctions against the plaintiff's lawyers.

Roberto Mata was represented by Steven A. Schwartz, a lawyer with more than 30 years of experience, in his lawsuit against Avianca over an incident in which a service trolley allegedly struck and injured his knee. After Avianca's attorneys asked a federal judge to throw the case out, Mr. Schwartz, a lawyer with the firm Levidow, Levidow & Oberman, drafted a brief intended to use precedent to demonstrate why the case should proceed.

The airline's legal counsel, however, raised concerns about the brief in a letter to the judge, stating that they were unable to locate some of the cited cases. The judge wrote in an order that he had been presented with an "unprecedented circumstance" and ordered Mr. Schwartz and one of his colleagues, Peter LoDuca, to explain why they should not be sanctioned.


Schwartz didn't know about ChatGPT's made-up answers

Schwartz claimed that he was "unaware of the possibility that its content could be false." The lawyer even provided the judge with screenshots of his conversations with ChatGPT in which he asked about the validity of one of the cases. The chatbot confirmed the case was real and even asserted that it could be found in "reputable legal databases." However, none of the cases could be located, as they had all been made up by OpenAI's ChatGPT.

This is not the first such incident, as ChatGPT's answers are not always correct. Taro Kono, Japan's digital minister, said that in a recent conversation with OpenAI's lauded chatbot, it wrongly identified him as Fumio Kishida, Japan's prime minister and the very person he had faced off against and lost to in a 2021 leadership election.

"ChatGPT gave the incorrect response when I asked it who Taro Kono is, so you need to be careful," Kono said, according to Bloomberg.




