
Personal Injury Lawyer and the Use of AI

  • February 10, 2025
  • Justin

The Double-Edged Sword: Using AI in Personal Injury Law Firms

By Robert K. Jenner and Justin Browne

Jenner Law | KBA Attorneys

Introduction: AI in Legal Practice – A Game Changer with Risks

Artificial Intelligence (“AI”) is transforming legal practice, enhancing efficiency and innovation. From legal research tools like Westlaw’s CoCounsel, to medical record review platforms, to AI tools for deposition preparation and analysis, to document generation and automation, to voice transcription, to image and video generation for demonstratives, AI has streamlined litigation and trial preparation. However, these advancements bring significant responsibilities and potential dangers.

Recent court rulings underscore the severe consequences of misusing AI in legal practice. This article explores the risks of overreliance on AI, the ethical considerations at stake, and best practices for safely integrating AI into personal injury law firms.

The Perils of Overreliance on AI: Lessons from Recent Court Cases

In a recent case, Wadsworth v. Walmart Inc. and Jetson Electric Bikes, LLC, Civ. No. 2:23-cv-00118-KHR (D. Wyo. Feb. 6, 2025), plaintiffs’ attorneys cited nine cases in a brief. Only one was real. The rest were AI-generated hallucinations. The court ordered the attorneys to show cause why they should not face sanctions.

This case comes two years after Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023). There, attorneys used ChatGPT to draft a brief without verifying the citations. When the court questioned whether the cited cases were accurate, they asked ChatGPT. They then submitted affidavits attaching what turned out to be hallucinated case law: not just fake citations, but entire fabricated opinions. The court sanctioned them.

A Growing List of Sanctions Cases

A year later, a Texas lawyer faced severe consequences for submitting AI-generated legal arguments without proper cite-checking. Gauthier v. Goodyear Tire & Rubber Co., Civ. No. 1:23-CV-281 (E.D. Tex. Nov. 25, 2024); see also Park v. Kim, 91 F.4th 610, 614–16 (2d Cir. 2024) (referring attorney for potential discipline for including fake, AI-generated legal citations in a filing); Kruse v. Karlan, 692 S.W.3d 43, 53 (Mo. Ct. App. 2024) (dismissing appeal because litigant filed a brief with multiple fake, AI-generated legal citations); Smith v. Farwell, Civ. No. 2282CV01197 (Mass. Super. Feb. 12, 2024) (sanctions for fictitious cases); but see United States v. Cohen, Civ. No. 1:18-cv-00602-JMF, Dkt. 108 (S.D.N.Y. Mar. 20, 2024) (declining sanctions in the absence of bad faith where the fabricated citations came from the attorney’s client); Dukuray v. Experian Info. Solutions, Civ. No. 23 Civ. 9043 (AT) (GS), 2024 WL 3812259 (S.D.N.Y. July 26, 2024) (pro se litigant not sanctioned); Iovino v. Michael Stapleton Assocs., Ltd., Civ. No. 5:21-cv-00064, 2024 WL 352170 (W.D. Va. July 24, 2024) (show cause order issued, but per Dkt. 1651 (Oct. 10, 2024), no sanctions imposed because the error was mis-citations, not fictitious citations).

Lawyers who rely blindly on AI-generated work product and abandon the fundamentals face more than sanctions. See, e.g., People v. Crabill, No. 23PDJ067, 2023 WL 8111898 (Colo. O.P.D.J. Nov. 22, 2023) (one-year suspension for attorney who submitted AI-hallucinated case law in a motion, failed to withdraw it after becoming aware, and falsely blamed an intern when caught).

Why AI Hallucinates and the Legal Risks It Poses

Hallucinations can take at least two forms – outright fabrication of a source or mischaracterization of it. Both stem from similar root causes.

Why do humans lie? Some would say because of how they were raised: they were taught to do so, or conditioned into feeling they had to. AI hallucinations similarly arise from training.

AI models generate text based on patterns rather than true understanding; hence, “artificial” intelligence. AI may miss context, make false assumptions, perpetuate bias, or overgeneralize. When asked for legal citations, AI may “fill in the gaps” by creating plausible, but fake, case law. This phenomenon can lead to severe professional consequences if attorneys fail to verify their sources.
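To make the failure mode concrete, here is a deliberately crude Python sketch (our illustration, not how any production model actually works): a generator that has learned only the pattern of a citation, with no connection to any real reporter.

import random

# Components the "model" has absorbed from its training patterns.
plaintiffs = ["Smith", "Mata", "Park", "Johnson"]
defendants = ["Avianca, Inc.", "Walmart Inc.", "Acme Corp."]
reporters = ["F.3d", "F.4th", "F. Supp. 3d"]

def plausible_citation() -> str:
    """Assemble a citation that matches the learned pattern.

    Every component looks right in isolation, but the combination is
    invented, the same way a language model "fills in the gaps" with a
    fake case when nothing grounds its output in a real source.
    """
    return (f"{random.choice(plaintiffs)} v. {random.choice(defendants)}, "
            f"{random.randint(1, 999)} {random.choice(reporters)} "
            f"{random.randint(1, 1500)} ({random.randint(1995, 2024)})")

print(plausible_citation())  # well-formed, confidently stated, and fake

The output passes a glance test precisely because it is built from real patterns; only pulling the reporter itself reveals the fabrication.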

Advancements in AI Accuracy: Is RAG a rag?

Retrieval-Augmented Generation (RAG) is an innovation designed to reduce hallucinations by pulling information from verified sources before generating text. While promising, studies show that hallucinations still occur. See, e.g., Varun Magesh et al., Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools, Stanford University (2024), available at https://dho.stanford.edu/wp-content/uploads/Legal_RAG_Hallucinations.pdf (last visited Feb. 9, 2025) (finding that leading AI legal research tools still hallucinate between 17% and 33% of the time, suggesting that while RAG can reduce hallucinations, it does not eliminate them). AI is improving, but it remains fallible, and it is certainly not a sole legal authority.
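A minimal Python sketch of the RAG idea may help; the tiny corpus, keyword retrieval, and refusal logic below are simplified stand-ins of our own, not any vendor's implementation.

# Stand-in for a curated, verified case-law database.
VERIFIED_CORPUS = {
    "mata v. avianca": (
        "Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023) "
        "(sanctioning attorneys for AI-fabricated citations)."
    ),
}

def retrieve(query: str) -> str | None:
    """Naive keyword lookup; real systems use vector search over embeddings."""
    q = query.lower()
    for key, passage in VERIFIED_CORPUS.items():
        if key in q:
            return passage
    return None

def answer(query: str) -> str:
    passage = retrieve(query)
    if passage is None:
        # The safety property RAG aims for: no verified source, no answer.
        return "No verified source found; declining to answer."
    # A production system would hand `passage` to a language model as
    # grounding context. Hallucination can still creep in at that step,
    # which is why the Stanford study found RAG reduces, but does not
    # eliminate, the problem.
    return f"Based on a verified source: {passage}"

print(answer("What happened in Mata v. Avianca?"))

The key design choice is that generation is constrained by retrieval; the residual risk lives in the final generation step, where the model can still mischaracterize the very source it retrieved.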

Ethical and Professional Responsibilities

The American Bar Association (“ABA”) Model Rules of Professional Conduct require lawyers to verify their work, including AI-generated content. Specific rules that come to mind include:

  • Rule 1.1 (Competence)
  • Rule 1.3 (Diligence)
  • Rules 5.1 & 5.3 (Supervision)

When mistakes occur, it is important to own them. See Rule 3.3 (Candor Toward the Tribunal). Bar associations, courts, and other organizations are increasingly sharing guidance to help attorneys navigate the choppy waters that come with seizing the benefits of AI while adhering to the Rules.

Best Practices for Using AI in Personal Injury Law Firms

States and rules committees continue contemplating new rules to address the risks AI presents. In the meantime, sticking to the fundamentals is the ultimate safeguard: the Rules are written to evolve with the times.

To leverage AI while mitigating risks, attorneys may consider these best practices:

  • Check Local Rules: Some courts require AI disclosures in filings. Know the landscape before you begin.
  • Use AI as a Starting Point: Treat AI-generated content as a draft, not the final product, and iterate while maintaining human verification and judgment.
  • Select the Right Tool: Understand an AI platform’s scope and training data, including its temporal cutoff and underlying corpus. Not all AI platforms are suitable for legal analysis.
  • Verify Citations: Always confirm case law and other authority by finding and reading the original source material (a sketch of an automated first pass follows this list).
  • Follow a Review Process: Establish an AI review protocol, just as you would for the work of staff or junior associates.
  • Stay Informed: Continuously educate yourself and your team about AI’s evolving capabilities, limitations, and risks.
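As a first pass at the Verify Citations step, the Python sketch below sends brief text to CourtListener's free citation-lookup endpoint and flags citations the service cannot match. The URL and response fields are our assumptions from memory; confirm them against CourtListener's current API documentation, and treat any automated check as a supplement to, never a substitute for, reading the original opinion.

import requests

def flag_unverified_citations(brief_text: str) -> list[dict]:
    """Ask a citation-lookup service about every citation in the text."""
    resp = requests.post(
        # Assumed endpoint; verify against CourtListener's API docs.
        "https://www.courtlistener.com/api/rest/v3/citation-lookup/",
        data={"text": brief_text},
        timeout=30,
    )
    resp.raise_for_status()
    suspect = []
    for result in resp.json():
        # An empty "clusters" list would mean no matching case was found:
        # exactly the signal that a citation may be hallucinated.
        if not result.get("clusters"):
            suspect.append(result)
    return suspect

for item in flag_unverified_citations(
    "See Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023)."
):
    print("Could not verify:", item.get("citation"))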

Conclusion: AI Is a Tool, Not a Legal Authority

AI is here to stay, embedded in the tools legal professionals use daily. Ignoring AI entirely is not the solution—opposing counsel will certainly use AI-powered tools to their advantage. Clients benefit from its proper use. However, misusing AI or failing to properly verify its output can lead to serious consequences, including sanctions, lost cases, and disciplinary actions.

The cases we cited are but a sample; there are more. These risks also extend beyond attorneys: parties and experts face the same reality. A more expansive piece, to be published, will cover those as well.

The lesson from Wadsworth and its predecessors is clear: AI is a creative assistant, not an infallible oracle. The ultimate responsibility for accuracy, competence, and ethical compliance rests with us, the attorneys. The more things change, the more they stay the same. As Judge Newsom notes in his eloquent review of the use of AI in the legal profession, “[f]lesh-and-blood lawyers hallucinate too” by “shad[ing] facts, finess[ing] (and even omit[ting] altogether) adverse authorities, etc.” Snell v. United Specialty Ins. Co., 102 F.4th 1208, 1230–31 (11th Cir. 2024). So follow your training, adhere to the Rules, and employ best practices to stay vigilant. We can harness AI’s potential without compromising professional integrity.


This piece was co-authored with Mr. Jenner, then re-written using ChatGPT and edited further.