Grammarly’s “Expert Review” Turned Real Writers Into Unpaid AI Endorsers

In Angwin v. Superhuman Platform, Inc., No. 26 Civ. 02005-JGK (S.D.N.Y. filed Mar. 11, 2026), a putative class action, a prominent New York Times Opinion editor alleges that Grammarly took the names, reputations, and professional credibility of hundreds of journalists, authors, and editors and sold them as features in a commercial AI product without obtaining consent. The complaint alleges not copyright infringement but violations of the authors’ rights of publicity.

The Parties

Julia Angwin is an award-winning investigative journalist, a contributing Opinion editor at The New York Times, the founder of Proof News and The Markup, and a Pulitzer Prize finalist. Compl. ¶¶ 4–7.

Defendant Superhuman Platform, Inc. is a San Francisco-based company that owns and operates Grammarly, a digital writing assistant used by 40 million people daily. Grammarly claims $700 million in annual revenue and raised $1 billion in financing in 2025. Compl. ¶¶ 8, 13, 15.

The “Expert Review” Feature

In August 2025, Grammarly launched an “Expert Review” tool for its $12-per-month Pro subscribers. The feature worked as follows: a user uploaded text, and Grammarly told the user it was “reading your text” and “finding experts to review your piece.” The tool then displayed the message “Applying ideas from” a named individual, such as “Applying ideas from Julia Angwin,” accompanied by a biographical description of the expert. Inline comments appeared next to specific passages of the user’s text, attributed to the expert by name. Compl. ¶¶ 16–19.

The named experts included Stephen King, astrophysicist Neil deGrasse Tyson, New York Times tech reporter Kashmir Hill, journalist Kara Swisher, and Julie Brill, a former Commissioner of the Federal Trade Commission and former Chief Privacy Officer of Microsoft. Compl. ¶ 16.

The complaint identifies several categories of harm flowing from this arrangement. First, users were left with the impression that Angwin and others were personally reviewing their writing and providing feedback, when they were not. Second, the AI-generated advice attributed to the experts might be advice that the expert would disagree with or never give. Angwin hypothesizes that a user who received bad advice “from” her, and got a bad grade or a negative performance review as a result, could blame her for guidance she had nothing to do with. Compl. ¶¶ 22, 25.

And none of the experts consented. Grammarly did not ask any of them for permission to use their names, did not notify them that their names were being used, and did not compensate them. Angwin learned of the feature only when she read a March 9, 2026 article by journalist Casey Newton, who discovered that he, Angwin, and many other reporters had been, in his words, “involuntarily conscripted into serving as unpaid experts” for Grammarly’s for-profit app. Compl. ¶¶ 21, 23. (The article by Newton, who co-hosts the New York Times podcast “Hard Fork,” is worth reading for at least Kara Swisher’s reaction to learning about Grammarly’s “Expert Review” feature: “You rapacious information and identity thieves better get ready for me to go full McConaughey on you. Also, you suck.”)

The Legal Claims: Right of Publicity, Not Copyright

The complaint does not assert copyright infringement. It does not allege that Grammarly copied Angwin’s articles or trained a model on her published work in a way that infringes her copyrights. The theory is different: Grammarly appropriated her name and identity for commercial gain without consent.

The complaint asserts four counts:

First, violation of California’s common law right of publicity, which protects persons from the unauthorized appropriation of their identity for commercial gain. The elements are: (1) the defendant’s use of the plaintiff’s identity, (2) appropriation to the defendant’s advantage, (3) lack of consent, and (4) injury. Compl. ¶¶ 48–55. The complaint traces the cause of action to Lugosi v. Universal Pictures, 25 Cal. 3d 813, 824 (1979), in which the California Supreme Court recognized that “the protection of name and likeness from unwarranted intrusion or exploitation is the heart of the law of privacy.”

Second, violation of California Civil Code § 3344, which makes it unlawful to knowingly use another’s name for purposes of advertising or selling products without prior consent. Compl. ¶¶ 56–61.

Third, violation of Sections 50 and 51 of the New York Civil Rights Law. Section 50, enacted in 1909, makes it a misdemeanor to use a person’s name for advertising or trade purposes without written consent. Section 51 provides a private right of action, including the possibility of exemplary damages for knowing violations. Compl. ¶¶ 62–70. As the New York Court of Appeals explained in Finger v. Omni Publications Int’l, Ltd., 77 N.Y.2d 138, 141 (1990), these provisions were intended to prohibit “nonconsensual commercial appropriations of the name, portrait or picture of a living person.”

Fourth, unjust enrichment. The complaint alleges that Superhuman earned millions in subscription revenue from a tool that traded on the names and reputations of real people who received no compensation. Compl. ¶¶ 71–76.

What Makes This Case Distinctive

AI-related right of publicity claims are not entirely new. Voice cloning and deepfake litigation have begun to raise similar questions. Tennessee’s ELVIS Act, enacted in 2024, was the first state statute to expressly extend right-of-publicity protections to AI-generated voice clones. And the Lanham Act claims in Encyclopædia Britannica, Inc. v. OpenAI, Inc., No. 1:26-cv-02097 (S.D.N.Y. filed Mar. 13, 2026), similarly allege that an AI product attributed fabricated content to a real brand.

But Angwin is not about voice cloning or hallucinated attribution. Grammarly did not accidentally invoke Angwin’s name through a model’s stochastic output. It deliberately built a product feature around real people’s names, displayed those names in a commercial interface, attributed AI-generated advice to those individuals, and charged users $12 a month for access. The complaint frames this as a straightforward application of century-old publicity rights to a new product category.

Copyright law protects a person’s work. Right of publicity law protects the person themselves. Arguably, an AI company can build a product that draws on the publicly available work of an expert without necessarily infringing the expert’s copyright (depending on fair use and other defenses). But when the company puts the expert’s name on the product and sells it, it implicates a different body of law entirely, one that does not depend on whether any copyrighted material was reproduced.

Grammarly’s Response

Notably, on March 11, 2026, the same day the complaint was filed, Superhuman CEO Shishir Mehrotra posted on LinkedIn that Grammarly would disable Expert Review. Mehrotra acknowledged that experts were “concerned that the agent misrepresented their voices,” apologized, and stated the company would “rethink [its] approach going forward” and “reimagine the feature” to give “experts real control over how they want to be represented—or not represented at all.” In a separate statement, Superhuman said it believes the legal claims are “without merit” and will defend against them.
