Federal Court Rules that a Client's AI Conversations Are Not Privileged

On February 10, 2026, Judge Jed Rakoff of the Southern District of New York ruled that dozens of documents a criminal defendant generated using Anthropic's Claude AI tool are protected by neither the attorney-client privilege nor the work product doctrine. The decision, United States v. Heppner, appears to be the first of its kind. [Update: For a discussion of a contrary ruling issued the same day, see my post on Warner v. Gilbarco.]

Factual Background

Bradley Heppner, a Dallas financial services executive charged with securities fraud and wire fraud, used a consumer version of Claude to research legal questions related to the government's investigation. He did so after receiving a grand jury subpoena and retaining counsel at Quinn Emanuel, but before his arrest. Heppner fed information he had learned from his defense attorneys into Claude, generated 31 documents of prompts and responses, and then transmitted those documents to his lawyers. When the FBI seized the documents during a search of Heppner's home, his attorneys asserted the attorney-client privilege and work product protection.

The government moved to compel production. Judge Rakoff granted the motion from the bench and subsequently issued a written memorandum.

The Attorney-Client Privilege Did Not Apply

Judge Rakoff identified multiple independent reasons the privilege did not apply:

First, no attorney was involved. An AI tool is not a lawyer. It has no law license, cannot form an attorney-client relationship, and is not bound by confidentiality obligations. The court noted that all “[r]ecognized privileges” require “a trusting human relationship,” such as “a relationship with a licensed professional who owes fiduciary duties and is subject to discipline.” No such relationship exists, or could exist, between an AI user and a platform like Claude.

Second, Heppner did not communicate with Claude for the purpose of obtaining legal advice. The court acknowledged this was “perhaps a closer call” because Heppner’s counsel asserted the documents were prepared for the “express purpose of talking to counsel.” But because Heppner used Claude on his own initiative, without direction from counsel, the relevant question was whether he intended to obtain legal advice from Claude itself, something Claude disclaims providing. When the government asked Claude whether it could give legal advice, Claude responded that “I’m not a lawyer and can’t provide formal legal advice.” The court noted, however, that had counsel directed Heppner to use Claude, “Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer's agent within the protection of the attorney-client privilege,” citing United States v. Kovel, 296 F.2d 918 (2d Cir. 1961).

Third, the communications were not confidential. Anthropic’s written privacy policy provides that it collects data on users’ “inputs” and Claude’s “outputs,” uses such data to “train” Claude, and reserves the right to disclose data to “third parties,” including “governmental regulatory authorities” and in connection with “claims, disputes, or litigation.” The court found Heppner could have had no reasonable expectation of confidentiality, citing In re OpenAI, Inc., Copyright Infringement Litig., No. 25 MD 3143 (S.D.N.Y. Jan. 5, 2026), for the observation that AI users do not have substantial privacy interests in conversations they “voluntarily disclosed” to a publicly accessible platform. This reasoning applies equally to OpenAI’s ChatGPT, which has comparable terms.

Fourth, Heppner created the documents before transmitting them to his lawyers. Pre-existing, unprivileged materials do not become privileged simply by being sent to an attorney after the fact.

Work Product Did Not Apply

The work product doctrine fared no better. The court held that even assuming the AI Documents were prepared “in anticipation of litigation,” they were not “prepared by or at the behest of counsel” and did not reflect defense counsel’s strategy. Heppner’s counsel confirmed that the documents “were prepared by the defendant on his own volition” and conceded that while the documents “affected” counsel’s strategy going forward, they did not “reflect” counsel’s strategy at the time Heppner created them.

Notably, the court went beyond the concession to address the broader legal question. Heppner relied on Shih v. Petal Card, Inc., 565 F. Supp. 3d 557 (S.D.N.Y. 2021), in which a magistrate judge in the same district held that the work product doctrine protected litigation materials a plaintiff had prepared regardless of whether her attorney directed the work. Judge Rakoff expressly disagreed with Shih, reasoning that its holding “undermines the policy animating the work product doctrine,” which, as the Second Circuit has repeatedly stressed, is “to protect lawyers’ mental processes.” The court cited In re Grand Jury Subpoenas, 318 F.3d 379, 383 (2d Cir. 2003); Matter of Grand Jury Subpoenas, 959 F.2d 1158, 1166 (2d Cir. 1992) (the doctrine “generally does not shield from discovery documents that were not prepared by the attorneys themselves, or their agents”); and Bice v. Robb, 511 F. App'x 108, 110 (2d Cir. 2013).

Heppner also invoked Federal Rule of Criminal Procedure 16(b)(2)(A), which provides that a defendant need not produce “reports, memoranda, or other documents made by the defendant, or the defendant’s attorney or agent, during the case's investigation or defense.” The court found this rule inapplicable because the AI Documents were seized pursuant to a search warrant, not requested in pretrial discovery.

Distinction: When Attorneys Use AI

It is important to distinguish Heppner from a line of cases addressing attorneys’ own use of AI tools. In Tremblay v. OpenAI (N.D. Cal. 2024), the court classified attorneys’ ChatGPT prompts as opinion work product (the highest level of protection), reasoning that the prompts were “queries crafted by counsel and contain counsel's mental impressions and opinions about how to interrogate ChatGPT, in an effort to vindicate Plaintiffs' copyrights.” The Southern District of New York reached a similar result in New York Times Co. v. Microsoft Corp., denying a motion to compel production of AI prompts without prejudice.

The distinction is straightforward. In Tremblay and NYT, attorneys used AI as a litigation tool, and their prompts reflected legal strategy and judgment. In Heppner, a non-attorney client used AI on his own initiative, without direction from counsel, and the documents reflected the client's independent research rather than attorney work product. Indeed, Judge Rakoff himself acknowledged the distinction, writing that had counsel directed Heppner to use Claude, the outcome might have been different. Courts appear to be drawing a clear line: when a lawyer uses AI as part of their litigation work, the prompts and outputs may qualify for work product protection; when a layperson uses AI to explore their own legal situation, they get no such shield.

The Takeaway

The practical lesson from Heppner is simple: anything a client types into a consumer AI platform is potentially discoverable. Lawyers should advise clients explicitly that AI conversations are not confidential and are not protected by privilege.

United States v. Heppner, No. 25-cr-503 (S.D.N.Y. Feb. 10, 2026) (ruling from bench).

United States v. Heppner, No. 25-cr-503 (S.D.N.Y. Feb. 17, 2026) (written memorandum).
