A lawyer used ChatGPT for a legal filing. The chatbot cited nonexistent cases it made up

A lawyer in New York City has been reprimanded by a judge for using the AI chatbot ChatGPT to cite nonexistent cases in a legal filing.

Lawyer Steven Schwartz of Levidow, Levidow & Oberman has been practicing law for three decades. Now, one case could derail his entire career.

Why? He relied on ChatGPT for his legal filing, and the AI chatbot manufactured previous cases out of thin air, which Schwartz then cited.

Schwartz was representing a client in a case against the airline Avianca. In his brief, he cited six cases that he said supported his client’s argument. But when Avianca’s lawyers looked up the cases, they could not find them.

It turned out that Schwartz had used ChatGPT to generate the citations. ChatGPT is a chatbot built on a large language model developed by OpenAI. It can produce fluent, confident-sounding text on nearly any topic, but it is not always accurate.

It all starts with the case in question, Mata v. Avianca. According to the New York Times, an Avianca customer named Roberto Mata was suing the airline after a serving cart injured his knee during a flight. When Avianca asked a judge to dismiss the case, Mata’s lawyers objected and submitted a brief citing a slew of supposedly similar past court decisions. And that’s where ChatGPT came in.

ChatGPT had generated citations to cases that did not exist. Schwartz admitted that he had not checked the citations before submitting the brief, saying he was unaware that ChatGPT could generate false information.

The judge in the case, P. Kevin Castel, reprimanded Schwartz, saying he had “failed to exercise due care” in preparing the brief.

This case is a reminder that AI chatbots should not be used as a substitute for legal research. ChatGPT can be a helpful tool, but it is important to remember that it is not infallible. Lawyers should always double-check the information that ChatGPT generates before using it in a legal filing.
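
To make that double-checking concrete, here is a minimal Python sketch that searches a public legal database for a cited case. It assumes CourtListener’s REST search API; the endpoint, parameters, and response fields shown are assumptions to verify against the current documentation before relying on this:

```python
# A minimal sketch of the "double-check" step. Assumes CourtListener's
# public search API; the endpoint, the "type" parameter, and the "count"
# response field are assumptions taken from its documented REST interface.
import requests

def citation_exists(case_name: str) -> bool:
    """Return True if a search for the case name turns up at least one opinion."""
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v3/search/",
        params={"q": case_name, "type": "o"},  # "o" = opinions (assumed)
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# "Varghese v. China Southern Airlines" was one of the fabricated citations
# in the Avianca brief; a real database search comes back empty.
print(citation_exists("Varghese v. China Southern Airlines"))  # expect: False
```

A check like this takes seconds per citation, which is exactly the step that was skipped here.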

The incident has also raised questions about the use of AI in the legal profession. Some lawyers are concerned that AI could lead to errors and malpractice. Others believe that AI could be a valuable tool for lawyers, if used correctly.

It is still too early to say what the long-term impact of AI will be on the legal profession. However, this case is a reminder that lawyers need to be aware of the potential risks of using AI in their work.

It’s important to note that ChatGPT, like other AI chatbots, is built on a language model trained to follow instructions and respond to a user’s prompt. That means that if a user asks ChatGPT for information, it can give them exactly what they’re looking for, even if it isn’t factual.
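
That behavior is easy to see in code. The following minimal sketch assumes the official openai Python SDK and an API key in the environment (the model name is an illustrative assumption); the point is that the model simply returns generated text, and nothing in the exchange verifies that any cases it names exist:

```python
# Minimal sketch, assuming the official openai Python SDK (v1+) and an
# OPENAI_API_KEY set in the environment. The model name is an illustrative
# assumption, not a claim about which model the lawyer used.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Cite court decisions holding airlines liable for "
                   "injuries caused by serving carts.",
    }],
)

# The model will answer fluently either way. Nothing here checks that the
# cases it names are real; that verification has to happen outside the model.
print(response.choices[0].message.content)
```

The reply is a prediction of plausible text, not a lookup in a legal database, which is why a confident-sounding citation can still be fiction.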

According to Schwartz, he was “unaware of the possibility that its content could be false.” The lawyer even provided the judge with screenshots of his interactions with ChatGPT in which he asked the chatbot whether one of the cases was real. ChatGPT responded that it was, and even confirmed that the cases could be found in “reputable legal databases.” In reality, none of them could be found, because the chatbot had invented them all.

The judge has ordered a hearing next month to “discuss potential sanctions” for Schwartz in response to this “unprecedented circumstance.” That circumstance again being a lawyer filing a legal brief using fake court decisions and citations provided to him by ChatGPT.
