Another Court Rules on Work Product Protection and AI

In February, I wrote about the emerging split between United States v. Heppner and Warner v. Gilbarco on whether a litigant’s AI-generated materials are protected from discovery. On March 30, 2026, Magistrate Judge Maritza Dominguez Braswell of the District of Colorado weighed in on that question in Morgan v. V2X, Inc., No. 25-cv-01991-SKC-MDB (D. Colo. Mar. 30, 2026). Judge Braswell sided with Warner, holding that a pro se litigant’s use of AI to prepare for litigation is protected work product under Federal Rule of Civil Procedure 26(b)(3). But the court went further than either Heppner or Warner by addressing a question neither case reached: what should a protective order say about the use of AI with confidential discovery materials?

Factual Background

Morgan v. V2X is an employment discrimination case. Plaintiff Archie Morgan, proceeding pro se, alleges that V2X, Inc. terminated him based on his race and national origin, and in retaliation for protected activities including opposing sexual harassment and whistleblowing. V2X says it discharged Morgan for legitimate, nondiscriminatory reasons after investigating a workplace complaint corroborated by more than 30 witnesses.

The opinion does not address the merits. It arises from a discovery dispute over AI use. Both parties were using AI in the litigation, but they disagreed on how AI should interact with information designated as “Confidential” under the stipulated protective order. V2X moved to amend the protective order with AI-specific restrictions and to compel Morgan to disclose the identity of the AI tool he was using with V2X’s confidential information. Morgan opposed the disclosure request, arguing that his selection of an AI tool is protected work product under Rule 26(b)(3).

Work Product and Pro Se AI Use

The court began with the threshold question: does the work product doctrine under Rule 26(b)(3) apply to a pro se litigant’s AI-generated materials?

Rule 26(b)(3)(A) protects “documents and tangible things that are prepared in anticipation of litigation or for trial by or for another party or its representative.” The Tenth Circuit has not addressed whether this protection extends to pro se litigants, but Judge Braswell found the text’s broad reference to materials prepared by any “party” resolves the question. The court noted that prior to the 1970 amendments, some courts declined to protect work product prepared by non-attorneys, but the Advisory Committee’s amendments were specifically designed to extend protection beyond attorneys’ work product. Since then, courts have routinely applied the rule to pro se litigants’ materials. Carbajal v. St. Anthony Cent. Hosp., 2014 WL 2459713, at *2 (D. Colo. June 2, 2014); Anderson v. Furst, 2019 WL 2284731, at *4 (E.D. Mich. May 29, 2019).

Judge Braswell emphasized that the case for applying these protections to pro se litigants is “magnified in the context of AI—one of the most powerful knowledge tools ever to become available to the masses.” Because pro se litigants act as both party and advocate simultaneously, a reading of Rule 26(b)(3) that conditions work product protection on the involvement of counsel “finds no support in the rule’s text and would further disadvantage unrepresented litigants.”

The court distinguished Heppner on two grounds. First, Heppner was a criminal matter; Morgan is a civil case governed by the Federal Rules of Civil Procedure, and Rule 26(b)(3) broadly protects the work product of a party, not merely counsel. Second, in Heppner, the criminal defendant used AI on his own initiative, apart from his lawyer, creating a gap between party and attorney. No such gap exists in the pro se context: “A pro se litigant is simultaneously the party and the advocate.”

Does AI Use Waive Work Product Protection?

On the waiver question, Judge Braswell aligned with Warner. The court acknowledged that mainstream AI platforms collect user data for training and other purposes but held that “that does not eliminate all expectations of privacy or automatically waive protections.”

The court drew an analogy to electronic communications generally. Gmail hosts millions of accounts and has access to their contents, but email subscribers retain reasonable privacy expectations in their communications. United States v. Warshak, 631 F.3d 266, 268 (6th Cir. 2010). And the Supreme Court has held that the mere fact that information is held by a third-party intermediary does not automatically extinguish a reasonable expectation of privacy. Carpenter v. United States, 585 U.S. 296, 310–16 (2018). These are Fourth Amendment cases, not work product cases, but the court found the principle informative: “routing information through a third-party system does not forfeit all privacy.”

Judge Braswell then went further, observing that the case for privacy is “arguably stronger in the context of modern AI use.” Unlike a search engine that passively returns results, AI platforms “are specifically designed and trained to engage. They invite candid and significant disclosure of information, including sensitive information. They simulate empathy, foster trust, and interact in a way that feels genuine and intimate.” The court cited research confirming that the conversational nature of AI chatbots encourages greater disclosure of personal information than traditional interfaces.

Moreover, work product protections are typically waived only by disclosure to an adversary, or in circumstances that substantially increase the likelihood that an adversary will obtain the materials. United States v. Am. Tel. & Tel. Co., 642 F.2d 1285, 1297–1301 (D.C. Cir. 1980); In re Qwest Commc’ns Int’l Inc., 450 F.3d 1179, 1186 (10th Cir. 2006). AI interactions do not meet that test: “even though AI use technically ‘discloses’ information to a third party, it is highly unlikely the information will fall into the hands of an adversary absent some legal process to compel it.”

The Limits of the Protection: Tool Identity

Having found that Rule 26(b)(3) protects Morgan’s AI use, the court turned to the scope of that protection. Morgan sought to shield not only his AI outputs but the name of the AI tool itself, arguing that tool selection reveals mental impressions and case strategy.

The court rejected that argument—not on principle, but on Morgan’s failure to meet his burden. The court acknowledged that “in some contexts disclosing an AI tool can reveal mental impressions or strategy,” but Morgan offered only conclusory assertions without factual support. See Martin v. Monfort, Inc., 150 F.R.D. 172, 172–73 (D. Colo. 1993) (once the requesting party establishes relevance, the burden shifts to the resisting party to show materials are protected); Pouncil v. Branch L. Firm, 277 F.R.D. 642, 653 (D. Kan. 2011) (proponent of work product protection bears the burden of showing materials contain protected mental impressions). And V2X’s request was legitimate: if Morgan had already submitted confidential information to an AI system, V2X was entitled to know which one.

The Protective Order: The Court Writes Its Own Provision

The most practically significant part of the opinion is the protective order language. Both parties agreed the existing protective order needed an AI-specific amendment, but they proposed very different provisions.

V2X proposed language that would prohibit the use of any AI application that transfers confidential information unless the application does not further transfer the information to another provider (absent due diligence confirming adequate security) and allows the receiving party to delete all confidential information. V2X’s proposal also prohibited any use of confidential information to train AI models.

Morgan proposed more permissive language that would allow the use of AI as long as the tool operates within “a secure, closed-circuit environment” and the provider’s terms of service do not permit use of uploaded data for training large language models (“LLMs”) or human-in-the-loop review.

The court rejected both proposals. It found Morgan’s proposal insufficient because it addressed cybersecurity concerns (unauthorized access) rather than the distinct risks of mainstream AI platforms. It found V2X’s proposal “over-engineered” and “crafted to fit the precise bounds of Defendant’s contractual engagement with AI providers,” resulting in language that Morgan argued was vague.

Instead, Judge Braswell drafted her own provision, which requires that before any party inputs confidential information into an AI platform, the AI provider must be contractually prohibited from: (1) storing or using inputs to train or improve its model; and (2) disclosing inputs to any third party except where such disclosure is essential to service delivery. Any third party receiving such disclosures must be bound by obligations no less protective than the protective order. The AI provider must also contractually afford the party the ability to delete all confidential information upon request. A party intending to use AI under these conditions must retain written documentation of the contractual protections.

The Access Gap

The court was candid about the practical consequences of its own ruling. It acknowledged that the provision will “at least for now” bar the parties from using most, if not all, mainstream low-to-no-cost AI to process confidential information, and that “[t]his type of restriction disadvantages pro se litigants.” Enterprise-tier AI accounts that satisfy the contractual requirements “may be available only through organizational procurement processes, or at costs that a pro se litigant is unlikely to bear.”

In a footnote, the court posed a question it did not attempt to answer: “[A]s large firms pour thousands of dollars into enterprise-grade AI and make their use of AI more secure, efficient, effective, and powerful, how will a pro se litigant or a litigant who cannot afford big-ticket legal services and better AI keep up?”

The court tempered the restriction with two practical notes. First, it cautioned the parties “against the over-designation of Confidential Information”—a warning that if V2X designates material as confidential that does not warrant protection, it will have effectively barred Morgan from using AI to analyze that material. Second, the court emphasized that nothing in the order restricts the use of AI in ways that do not involve uploading confidential information.

The Takeaway

Morgan v. V2X is the third federal district court decision in two months to address whether AI-generated litigation materials are discoverable, and the second to hold that they are protected work product. The opinion draws a clear line: the work product doctrine protects what a litigant does with AI, but when confidential discovery materials are involved, the AI platform itself must offer contractual guarantees against training use, third-party disclosure, and data retention. If other courts follow suit, the takeaway may be that consumer-tier AI platforms cannot be used to process information designated as confidential under a protective order without risking a violation. For pro se litigants, the ruling protects their right to use AI but simultaneously limits the tools available to exercise that right.

Morgan v. V2X, Inc., No. 25-cv-01991-SKC-MDB, 2026 U.S. Dist. LEXIS 67939 (D. Colo. Mar. 30, 2026).
