NEW YORK (AP) — Two apologetic lawyers responding to an angry judge accused ChatGPT Thursday of tricking them into including bogus legal research in a court filing.
Lawyers Steven A. Schwartz and Peter LoDuca face punishment for filing a lawsuit against an airline that included references to past court cases that Schwartz thought were real, but were actually made up by the chatbot powered by artificial intelligence.
Schwartz said he used the groundbreaking program while researching legal precedents supporting a client’s case against Colombian airline Avianca for an injury sustained during a flight in 2019.
The chatbot, which has mesmerized the world with its essay-like responses to user prompts, suggested several cases involving aviation accidents that Schwartz had been unable to find through the usual methods used at his law firm.
The problem was that many of these cases weren’t real or involved airlines that didn’t exist.
Schwartz told Judge P. Kevin Castel that he was “operating on a misconception…that this website was getting these cases from a source that I didn’t have access to.”
He said he had “failed miserably” to do any follow-up research to ensure the citations were correct.
“I didn’t realize ChatGPT could fabricate cases,” Schwartz said.
Microsoft has invested around $1 billion in OpenAI, the company behind ChatGPT.
Its success, demonstrating how artificial intelligence could change the way humans work and learn, has sparked fears among some. Hundreds of industry leaders signed a letter in May warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Judge Castel appeared both baffled and troubled by the unusual event and disappointed that the lawyers did not act quickly to correct the false legal citations when first alerted to the problem by Avianca’s lawyers and the court. Avianca pointed out the bogus case law in a March filing.
The judge confronted Schwartz with a court case invented by the computer program. It was initially described as a wrongful death case brought by a woman against an airline only to turn into a lawsuit over a man who missed a flight to New York and was forced to incur additional expenses.
“Can we agree that’s legal gibberish?” Castel asked.
Schwartz said he mistakenly believed the confusing presentation resulted from excerpts taken from different parts of the case.
When Castel finished his questioning, he asked Schwartz if he had anything else to say.
“I would sincerely like to apologize,” Schwartz said.
He added that he suffered personally and professionally as a result of the blunder and felt “embarrassed, humiliated and extremely remorseful.”
He said he and the firm where he worked – Levidow, Levidow & Oberman – had safeguards in place to ensure nothing like it happened again.
LoDuca, another attorney who worked on the case, said he trusted Schwartz and didn’t review enough of what he compiled.
After the judge read aloud excerpts from one cited case to show how easy it was to discern that it was “gibberish,” LoDuca said: “It never dawned on me that it was a fake case.”
He said the outcome “pains me endlessly.”
Ronald Minkoff, an attorney for the law firm, told the judge the submission “resulted from carelessness, not bad faith” and should not result in sanctions.
He said lawyers have always struggled with technology, especially new technology, “and it’s not getting any easier.”
“Mr. Schwartz, someone who barely does federal research, chose to use this new technology. He thought he was dealing with a standard search engine,” Minkoff said. “What he was doing was playing with live ammunition.”
Daniel Shin, an adjunct professor and assistant director of research at the Center for Legal and Court Technology at William & Mary Law School, said he presented the Avianca case at a conference last week that drew dozens of participants, in person and online, from state and federal courts in the United States, including Manhattan federal court.
He said the topic caused shock and confusion at the conference.
“We’re talking about the Southern District of New York, the federal district that handles the big cases, from 9/11 to all the big financial crimes,” Shin said. “This was the first documented case of potential malpractice by a lawyer using generative AI.”
He said the case showed the lawyers may not have understood how ChatGPT works, because it has a tendency to hallucinate, discussing fictitious things in a way that seems realistic but is not.
“It highlights the dangers of using promising AI technologies without knowing the risks,” Shin said.
The judge said he would rule on penalties at a later date.