The Pitfalls of Using AI for Legal Research: Nonexistent Cases and ‘Gibberish’

In an attempt to establish precedent in a personal injury case, a lawyer in New York cited six cases. The lawyer, representing a man who alleged he was struck by a metal serving cart on a 2019 flight operated by Colombia-based Avianca Airlines, presented detailed descriptions of Varghese v. China Southern Airlines, Shaboon v. Egypt Air, and others.

The problem? The cases didn’t exist. Instead, the lawyer had relied on information from ChatGPT, an AI-powered tool that generates text in response to language prompts. Although the lawyer claimed he thought ChatGPT worked like a traditional search engine, a federal judge imposed a $5,000 fine for the submission of fictitious legal research (the fine also applied to another lawyer involved in the case as well as their law firm).

The judge commented, “Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”

AI Use on the Rise Among Lawyers

The New York case is not the first time AI has entered the legal realm. According to a March 2023 LexisNexis survey, lawyers have embraced the technology: while 57% of consumers said they were aware of generative AI tools such as ChatGPT, 86% of lawyers said the same, with roughly half already using such tools in their work or planning to do so.

Increased AI use, in turn, has come with questions about its reliability and ethical implications for the legal community. Back in early 2023, when ChatGPT amassed 100 million users in just two months (it now reportedly draws about 1.7 billion visits per month), a judge in England made headlines in the Guardian for recognizing the value of AI in legal research. Lord Justice Birss acknowledged he had asked ChatGPT to provide a summary of an area of law that he included in his judgment. He explained, “All it did was a task which I was about to do and which I knew the answer to and could recognize as being acceptable.”

Across the globe in Colombia, a judge also made the news for admitting to using ChatGPT to inform a legal decision. In this case, the judge consulted the AI tool to determine whether an autistic child’s insurance should cover the costs of medical treatment. The judge posed specific questions to ChatGPT, seeking legal insights. Ultimately, the AI’s response aligned with the judge’s final decision, raising questions about AI’s influence in shaping legal outcomes.

New York Lawyers’ Misuse of AI in Court

Then, in the early summer of 2023, came the New York personal injury case and the discovery of the inaccurate case citations. When the lawyer in question asked ChatGPT for legal precedents that would support his client’s case against the airline, he received several examples of aviation lawsuits that, as it later emerged, could not be found through traditional research methods.

When the airline’s lawyers were unable to find the associated court documents, the federal judge examined the citations and determined they were all fabricated. The judge said one of the fake decisions had “some traits that are superficially consistent with actual judicial decisions” while other sections were “gibberish” and “nonsensical.”

The Challenge of AI-Generated Content

One significant challenge with AI, including chatbots like ChatGPT, lies in how content is generated. These systems are trained on vast datasets encompassing a wide range of information, and they aim to produce coherent responses based on statistical patterns in that data. But they are not infallible and can inadvertently generate content that is misleading, inaccurate, or, as seen in recent cases, entirely fictitious. The short sketch below illustrates why.
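To make the point concrete, here is a minimal, purely illustrative Python sketch. It does not reflect how ChatGPT actually works internally; it simply recombines plausible-sounding fragments, much as a statistical model recombines patterns from its training data, to show why text can look like a real citation without corresponding to any real case. All of the fragments and output are invented for illustration.

```python
import random

# Purely illustrative fragments; the combinations produced are invented.
plaintiffs = ["Varghese", "Shaboon", "Martinez", "Petersen"]
defendants = ["China Southern Airlines", "Egypt Air", "Delta Air Lines"]
reporters = ["F.3d", "F. Supp. 2d", "N.E.2d"]

def generate_fake_citation(rng: random.Random) -> str:
    """Recombine realistic fragments into a citation-shaped string.

    Each piece looks legitimate on its own, so the whole string looks
    like a real citation, but the combination was never checked
    against any court record.
    """
    return (
        f"{rng.choice(plaintiffs)} v. {rng.choice(defendants)}, "
        f"{rng.randint(100, 999)} {rng.choice(reporters)} "
        f"{rng.randint(1, 1500)} ({rng.randint(1995, 2023)})"
    )

rng = random.Random(42)
for _ in range(3):
    print(generate_fake_citation(rng))
```

Every line this prints is formatted like a genuine citation, yet none of them points to a real decision. Fluency and accuracy are simply different properties.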

Why AI Can Generate Inaccurate or Fake Cases

The capability of AI to generate fake cases can be attributed to several factors:

  • Training Data: AI models like ChatGPT are trained on vast datasets that contain both accurate and inaccurate information. This training data is essential for the model to learn and generate text but can also introduce inaccuracies.
  • Inability to Fact-Check: AI systems lack real-time fact-checking capabilities. When generating content, they rely on the information present in their training data and may not verify the accuracy of that data against real-world sources. Even when users ask for sources to back up statements, ChatGPT may return answers that sound accurate but, upon closer inspection, have nothing to do with the query. (A sketch of what independent verification can look like follows this list.)
  • Generative Nature: AI systems aim to produce coherent and contextually relevant text based on the input provided. This means they may prioritize generating text that appears plausible over ensuring its factual accuracy.
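Because the tools cannot verify their own output, the fact-checking burden falls on the user. As a hedged illustration, the Python sketch below checks whether a case name turns up in CourtListener, a free public database of court opinions. The endpoint URL, query parameters, and response shape shown here are assumptions based on CourtListener’s public REST API and should be confirmed against its current documentation; a zero-result search is a prompt to investigate further, not proof either way.

```python
import requests  # third-party library: pip install requests

# Assumption: CourtListener exposes a public search endpoint of this shape.
# Verify the exact URL, parameters, and response format in its API docs.
SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

def case_appears_in_courtlistener(case_name: str) -> bool:
    """Return True if searching for the case name yields any opinions.

    A zero-result search suggests a chatbot-supplied citation should be
    treated as suspect until verified by hand.
    """
    response = requests.get(
        SEARCH_URL,
        params={"q": case_name, "type": "o"},  # "o" = judicial opinions
        timeout=10,
    )
    response.raise_for_status()
    return len(response.json().get("results", [])) > 0

for citation in ["Varghese v. China Southern Airlines", "Shaboon v. Egypt Air"]:
    status = "found" if case_appears_in_courtlistener(citation) else "NOT FOUND - verify manually"
    print(f"{citation}: {status}")
```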

Implications and Ethical Considerations

While AI is a powerful tool, the ABA Journal cautions that the legal community must be mindful of its limitations. Following the New York case, a federal judge in the Northern District of Texas issued a standing order requiring anyone appearing before the court to either flag any AI-generated language for a fact check or attest that no portion of any filing was drafted by AI. The judge remarked that although the platforms have many uses in the law, legal briefing is not one of them, as the platforms are currently prone to “hallucinations and bias.”

Contact Penney & Associates

Are you looking for experienced trial attorneys who put your needs first? At Penney & Associates, we put our expertise to work for you. We’re here to fight for your rights and get you the compensation you deserve. Contact us now for a free consultation.

Read more:
The Alarming Rise in Bicycling Accidents: How to File a Lawsuit
Filing a Wrongful Death Claim in California: What to Keep in Mind
Product Liability Claims in California: Pursuing Compensation

* This blog is not meant to dispense legal advice and is not a comprehensive review of the facts, the law, this topic or cases related to the topic. For a full review of our disclaimer and policies, please click here.
