Court Sanctions Attorneys for AI-Generated Misrepresentations in Legal Brief

In a recent decision from the Central District of California, a Special Master imposed sanctions on attorneys representing the plaintiff in a civil case after they submitted legal briefs containing numerous inaccuracies, including citations to non-existent cases. As in several other recent cases, the errors were attributed to the use of AI tools; notably, here the AI was used only to prepare an outline of the briefs.

The case is titled Lacey v. State Farm, C.D. Cal. Case No. 24-cv-5205. It involved a dispute over insurance coverage, with the plaintiff challenging the defendant’s privilege assertions during discovery. To resolve ongoing disputes, the court appointed a Special Master to oversee the discovery process.

During the proceedings, the plaintiff’s attorneys submitted a supplemental brief in which approximately nine of its 27 legal citations were incorrect, at least two of which referenced cases that do not exist. The inaccuracies were traced to the use of AI tools to generate an initial outline for the brief, which was then shared among the legal team and incorporated into the final submission without adequate verification.

When the Special Master raised concerns about certain citations, the attorneys revised and resubmitted the brief, removing the identified errors but leaving other inaccuracies uncorrected. The continued presence of false citations, coupled with the lack of disclosure about the use of AI, led the Special Master to conclude that the attorneys’ conduct was reckless and improperly aimed at influencing the court’s analysis.

The Special Master struck the briefs and issued monetary sanctions against the attorneys totaling $31,000.

Wisely, the attorneys responsible for the AI-generated errors were contrite and frank in admitting how things had gone wrong (always the best course when an attorney messes up). The Special Master emphasized that while the individual attorneys expressed remorse and took responsibility, the collective failure to verify the AI-generated content warranted sanctions to deter similar conduct in the future.

While experts debate how close we are to AGI (Artificial General Intelligence), a more practical question for attorneys is whether it’s safe to rely on AI in any capacity, and if so, how to guard against facing a sanctions order. The safest (and in my opinion, best) option is never to rely on AI when preparing legal briefs. AI is not 100% reliable, and it’s questionable whether it ever will be. This case illustrates the danger of relying on AI in any capacity during brief preparation: here, AI was used only at the outlining stage, by a single attorney, and that use was never communicated to the rest of the team. Had someone on the team cite-checked the AI-generated authorities, the errors presumably would have been caught. But why risk that scenario in the first place?
