
Court bars expert from accessing AI source code in high-profile copyright case


Plaintiffs sued Stability AI for misusing their copyrighted artwork to train generative AI models. As part of the lawsuit, plaintiffs sought to designate a University of Chicago professor and AI researcher as an expert. Plaintiffs asserted it would be necessary to disclose defendants’ highly confidential materials to the professor.

Defendants asked the court to block disclosure of their “ATTORNEYS’ EYES ONLY” and “HIGHLY CONFIDENTIAL – SOURCE CODE” material to the expert. They argued that the expert’s work – particularly concerning his projects designed to “poison” or “protect against” generative AI training – put him in “functional competition” with their models.

The court ruled in favor of defendants because it found the risk of competitive harm outweighed plaintiffs’ need for the expert’s testimony. Although the expert was a respected academic and not a commercial competitor, the court reasoned that his tools were designed to interfere with AI model performance and that he continued to develop such technologies. In the court’s view, even unintentional use of the confidential information could influence his future work.

Additionally, the court found that while the expert was well-qualified, he was not uniquely qualified. Plaintiffs could rely on other experts in the AI field, including one of the expert’s former students, who was approved as an expert in a related case.

Andersen v. Stability AI Ltd., 2025 WL 1927796 (N.D. Cal. July 14, 2025)

Can generative AI turn hearsay into admissible evidence?

In the recent case of Malia LLC v. State Farm, a policyholder sued its insurer, claiming the insurer wrongfully denied coverage. The insurer moved for summary judgment. Among the evidence it asked the court to consider was a wind verification report generated by a system that “combine[d] proprietary, three-dimensional storm models with artificial intelligence, radar data, and real-world observations to analyze what actually happened.”

Plaintiff sought to exclude this evidence, arguing it was inadmissible hearsay. Under Federal Rule of Evidence 801, “hearsay” is an out-of-court statement that a party offers into evidence to prove the truth of the matter asserted in the statement.

The court rejected plaintiff’s argument. It observed that the report contained raw data generated by a machine or algorithm, and noted that “the consensus among the circuit courts is that machine-generated data cannot be hearsay because it does not constitute a statement under [Rule 801].” It cited Lyngaas v. Curaden AG, 992 F.3d 412 (6th Cir. 2021), which found that summary-report logs of fax transmissions were not hearsay. Lyngaas in turn cited other similar cases, including one in which a taser report was “merely a report of raw data produced by a machine.”

The case raises a larger, perhaps more interesting question – can a litigant use generative AI to prepare content, and get it admitted into evidence when it otherwise would have been barred by the hearsay rule?

While a strict reading of the holding of Malia LLC and similar cases might lead one to believe so (since these cases actually do say that “machine-generated data cannot be hearsay”), such a conclusion would probably be overly simplistic and potentially misleading. The key distinction lies in whether the content produced by the AI is truly machine-generated in the evidentiary sense – that is, created autonomously by the machine without incorporating a human’s assertion. If a person prompts the AI with factual inputs or narrative claims, then the court would be more likely to determine that the output is merely a stylized or reorganized version of those assertions. In that scenario, the output arguably retains the character of a “statement” under the hearsay rule.

Courts are likely to scrutinize not just the form of the AI output, but the origin of the content and the intent behind its use. If the litigant’s goal in using generative AI is to repackage hearsay in admissible form, the court may well see through the tactic and apply traditional hearsay exclusions. On the other hand, if the AI produces output based solely on internal rules or statistical patterns, without incorporating or restating human assertions, then it may fall outside the definition of hearsay altogether.
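
To make the distinction concrete, here is a minimal Python sketch. It is purely illustrative: the function names, figures, and “witness claim” are invented, and the generative model is a stand-in, not any system at issue in these cases.

```python
# Hypothetical illustration of the evidentiary distinction discussed above.
# All names and data are invented for this sketch.

def machine_generated_report(wind_samples_mph: list[float]) -> dict:
    """Raw data computed by algorithm from sensor input. No human
    assertion is embedded, so under the reasoning in Malia LLC and
    Lyngaas this output is arguably not a 'statement' under Rule 801."""
    return {
        "peak_gust_mph": max(wind_samples_mph),
        "mean_wind_mph": round(sum(wind_samples_mph) / len(wind_samples_mph), 1),
    }

def restyled_human_assertion(witness_claim: str) -> str:
    """Output that merely repackages a human declarant's factual claim.
    The substance originates with the human, so a court may treat the
    output as the human's statement despite the machine formatting."""
    prompt = f"Rewrite as a formal damage report: {witness_claim}"
    return fake_generative_model(prompt)

def fake_generative_model(prompt: str) -> str:
    # Stand-in for a real LLM call; the point is the provenance of the
    # content, not the model.
    return f"[model-styled text derived from]: {prompt}"

print(machine_generated_report([42.0, 57.3, 61.8]))  # pure machine output
print(restyled_human_assertion("The roof was torn off by wind on June 3."))
```

Under the reasoning above, only the first function’s output plausibly falls outside Rule 801; the second carries the human declarant’s assertion with it, however polished the formatting.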

In short, while these cases offer a foothold for arguing that certain AI-generated content is not hearsay, the admissibility of such content will ultimately depend on how closely the output is tied to human assertions.

Malia LLC v. State Farm, 2025 WL 1840732 (W.D. Tenn. July 3, 2025)

Voice cloning case presents novel AI copyright issues


Two professional voice actors sued Lovo, a company that sells AI-generated voiceover software, alleging, among other things, copyright infringement. Plaintiffs claimed that defendant took their voices without permission and used them to create digital clones. Those clones were sold to customers under the pseudonyms “Kyle Snow” and “Sally Coleman.” Defendant moved to dismiss the copyright claims, and the court issued a mixed ruling.

How it started

Plaintiffs asserted that defendant first contacted them on the freelance platform Fiverr in 2019 and 2020. Defendant’s representatives allegedly assured plaintiffs the recordings would only be used for private research, not public or commercial projects. Based on those promises, plaintiffs delivered the recordings. Each was paid a few hundred dollars.

Several years later, plaintiffs discovered their voices had been cloned. In 2023, they heard a podcast produced using Lovo’s software. Plaintiff Lehrman’s friends and colleagues said the cloned voice sounded exactly like him. After further research, plaintiffs learned that defendant had built digital voice profiles from their recordings and offered those profiles to customers as part of a paid AI service.

What plaintiffs claimed

Plaintiffs made four different types of copyright claims. One was that defendant had directly copied Plaintiff Sage’s recording and used it in a promotional video for investors. Another was that defendant had used both of their recordings to train defendant’s AI system called Genny. They also claimed that the cloned voices produced by Genny were infringing. Finally, they argued that defendant should be liable for contributory infringement because it let its customers use those clones.

What the court decided

The court allowed one copyright claim to move forward. Plaintiff Sage alleged that defendant used a real recording of her voice in public investor presentations and YouTube videos. Defendant admitted to using a portion of the recording, and the court agreed that this could be outside the scope of the license.

The other copyright claims were not as successful. The court dismissed the copyright claim based on use of the voice recordings to train the AI model. Plaintiffs did not explain in their complaint how the training process worked or how it infringed their rights. The court said plaintiffs could amend the complaint and try again.

The court also dismissed the claim based on the AI-generated voices themselves. It explained that copyright law does not protect imitations of a voice, only the actual recordings. Plaintiffs had not alleged that the AI clones were identical to the originals, only that they sounded very similar. That kind of mimicry was not covered by copyright law.

Finally, the court dismissed the contributory infringement claim. Since there was no direct infringement based on the AI-generated outputs, the court said there could be no indirect liability either.

Lehrman v. Lovo Inc., — F.Supp.3d —, 2025 WL 1902547 (S.D.N.Y. July 10, 2025)

See also: VIDEO: AI and Voice Clones – Tennessee enacts the ELVIS Act

Court says party cannot use AI instead of a human stenographer at depositions


Plaintiff asked the court for permission to take 13 depositions using video recordings and AI transcription software (instead of a human stenographer), with a notary and videographer serving as the deposition officer and certifying the transcripts. Plaintiff argued this method would comply with the Federal Rules of Civil Procedure while reducing costs from $4,000 to $1,500 per deposition.
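
For a sense of scale, here is a quick back-of-the-envelope calculation; the per-deposition figures come from the opinion, and the total is simple arithmetic:

```python
# Plaintiff's claimed per-deposition costs, per the opinion.
stenographer_cost = 4_000   # traditional human stenographer
ai_method_cost = 1_500      # video recording plus AI transcription
depositions = 13

total_savings = depositions * (stenographer_cost - ai_method_cost)
print(f"${total_savings:,}")  # $32,500 in claimed savings across all 13 depositions
```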

Defendant did not oppose the use of video or the proposed officer but objected to the use of AI to produce and certify transcripts, citing concerns about accuracy and confidentiality, especially under the terms of a protective order in the case.

The magistrate judge denied plaintiff’s request, and plaintiff took the matter up with the district court judge, claiming the ruling conflicted with Rule 1’s emphasis on inexpensive litigation and improperly limited who may serve as a deposition officer.

The district judge overruled those objections, holding that the magistrate judge’s decision was not clearly erroneous or contrary to law. The court emphasized that Rule 30(b)(3)(A) permits judicial discretion over how depositions are recorded and that plaintiff’s proposed method, though potentially cheaper, was untested and raised legitimate concerns about the secure handling of confidential information. The court rejected plaintiff’s claim that defendant had waived confidentiality by using Google Drive for its own documents, and noted that with thirteen depositions planned, any problems with the method would be multiplied across the series. The court concluded that while the Federal Rules promote efficiency, they do not require acceptance of methods that undermine reliability or violate protective orders.

Black v. City of San Diego, 2025 WL 1908072 (S.D. Cal. July 10, 2025)

Immigration attorney hit with sanctions for using Claude to generate fake case citations

A federal court imposed sanctions on a petitioner’s attorney for submitting fabricated legal quotations generated by the AI tool Claude Sonnet 4 in an emergency habeas case seeking to halt a client’s deportation. Facing an expedited timeline and suffering from a respiratory infection, the attorney admitted he used AI to draft a supplemental brief and failed to verify the quotations, despite knowing AI tools are prone to hallucinations.

The court found this conduct violated Rule 11 and constituted subjective bad faith, noting that the attorney either consciously avoided checking his sources or deliberately ignored the opposing party’s warning about the fake quotes.

While courts have typically imposed monetary sanctions ranging from $1,500 to $15,000 in similar cases, the court here imposed a reduced $1,000 fine in light of mitigating factors, including the attorney’s prompt admission, withdrawal of the filing, and enrollment in a continuing legal education (CLE) course on ethical AI use. He was also ordered to file proof of CLE completion.

The decision underscores that using AI in legal practice does not excuse attorneys from their duty to confirm the accuracy of filings, and that even in emergency settings, courts will hold lawyers accountable for unverified, fictitious legal content.

Kaur v. Desso, 2025 WL 1895859 (N.D.N.Y. July 9, 2025)

Court gives X opportunity to raise Section 230 claim in deepfake case

X sued Minnesota Attorney General Keith Ellison over a state law that prohibits the dissemination of AI-generated political deepfakes, arguing the statute violates the First and Fourteenth Amendments and is preempted by Section 230 of the Communications Decency Act (47 U.S.C. § 230). A related case challenging the same law is already on appeal in Kohls v. Ellison, leading the court to stay X’s constitutional claims while allowing its Section 230 claim to move forward. The court invited both parties to file motions for judgment on the pleadings within 30 days. If neither does so, the entire case will be stayed pending resolution of the Kohls appeal.

X Corp. v. Ellison, 2025 WL 1833455 (D. Minn. July 3, 2025)

Relying on AI for help in discovery meet-and-confer was not good faith


When a party in a lawsuit claims that the other side is not fully responding to discovery requests, that party can file a motion to compel, asking the court to require the noncompliant party to respond fully. Before filing such a motion, however, the moving party must, according to the Federal Rules of Civil Procedure, confer in good faith with the other side.

Tijerino v. Spotify USA Inc. is a patent case in which the pro se plaintiff claimed defendant was not fully responding to discovery. He held a phone call with defendant’s attorneys and, according to defense counsel’s account of the call, which plaintiff did not dispute, used an artificial intelligence program to guide him through the conversation.

After the call, plaintiff filed a motion to compel, which the court denied. The court observed that plaintiff’s failure to be prepared in the conference meant that he did not confer in good faith. It was not sufficient to “outsource” such preparation to artificial intelligence.

Tijerino v. Spotify USA Inc., 2025 WL 1866057 (E.D. La. July 7, 2025)

Pro se litigant cited AI hallucinated cases but court found no harm, no foul

A bankruptcy case took a turn during a hearing when the court asked the pro se debtor how he had found the cases cited in his legal filings. The debtor admitted that he had used artificial intelligence to generate legal arguments and case citations. When the court reviewed those citations, it found they were either misrepresented, irrelevant, or entirely fictitious. One citation led to a case that had been vacated. Another did not say anything close to what the debtor claimed. And one did not exist at all.

The court made clear that parties, whether attorneys or pro se litigants, must make a reasonable inquiry before submitting legal contentions to the court. That means personally verifying that cited case law actually supports the argument being made. Using AI without checking the accuracy of the output is not enough.

Even so, the court declined to impose sanctions under the relevant rule. It noted that the case was already being dismissed for independent reasons under bankruptcy law and saw no need to pile on. Essentially, the judge applied a “no harm, no foul” approach. But the warning was clear: AI-generated case law that has not been verified cannot be trusted and will not be tolerated.

In re Perkins, No. 24-32731-thp13, 2025 WL 1871049 (Bankr. D. Or. July 7, 2025)

Court limits use of trademark for forthcoming AI-powered agentic browser


A technology company sued artificial intelligence company Perplexity for trademark infringement and unfair competition. Plaintiff claimed exclusive rights to the mark COMET, which it used in connection with a range of technology and consulting services. Perplexity planned to launch an “AI-powered browser for agentic search” under the same name. Plaintiff asked the court to stop Perplexity from using the mark altogether. The court granted a preliminary injunction in part, allowing the browser to launch under the COMET mark but blocking Perplexity from using the mark on the services covered by plaintiff’s registration.

To decide whether to issue the injunction, the court applied the standard four-part test. A plaintiff must show a likelihood of success on the merits, a likelihood of suffering irreparable harm without an injunction, that the balance of equities favors relief, and that the injunction is in the public interest.

The court found that plaintiff was likely to succeed on its trademark infringement claim. Plaintiff owned an incontestable federal registration for the COMET mark. That gave plaintiff a strong position on ownership. The court then looked at the likelihood of confusion between the two uses of COMET by applying the Sleekcraft factors, taken from AMF Inc. v. Sleekcraft Boats, 599 F.2d 341 (9th Cir. 1979).

  • On the question of similarity of the marks, the court noted the marks were identical. That factor weighed heavily in favor of plaintiff.
  • On the element of proximity of the competing products, the court found that even though both parties used artificial intelligence and machine learning, their overall products served different functions, had different designs, and targeted different users. That factor favored defendant.
  • Concerning the strength of plaintiff’s mark, plaintiff convinced the court that it had built a strong reputation over the last seven years, but the court found that plaintiff’s mark was strong only in a more limited segment of the AI space. That factor moderately favored plaintiff.
  • In terms of marketing channels, there was some overlap in audience. Both companies appealed to AI developers and sophisticated users, and Perplexity had the resources to dominate certain markets. That factor slightly favored plaintiff.
  • The degree of care exercised by customers favored defendant, since the users of both products were more likely to be careful in making their choices.
  • The court found that intent only slightly favored plaintiff, and that it was not a critical factor at this early stage.
  • On actual confusion, the evidence was sparse and open to different interpretations, so the court considered this factor neutral.
  • On the likelihood of expansion, the court found in favor of plaintiff. It was concerned that Perplexity, as a company “seeking to become an AI juggernaut,” would eventually expand the COMET mark into new products that could directly interfere with plaintiff’s business. The court found this concern reasonable, and Perplexity’s testimony did little to calm those concerns.

The court also found that plaintiff would suffer irreparable harm if Perplexity expanded the use of the COMET mark beyond the browser. The harm could not be undone easily, and confusion in the market could damage plaintiff’s reputation and customer relationships.

On the balance of the equities, the court found that preventing Perplexity from launching the browser would create significant hardship. However, Perplexity had repeatedly promised under oath that it had no plans to use the mark for other services. Therefore, blocking those additional uses would not harm Perplexity. On the other hand, plaintiff would be harmed if Perplexity expanded use of the mark. The court explained that a broad, mistaken injunction would be more harmful than a narrow one that was mistakenly denied.

Finally, the court found that the public interest would be served by protecting plaintiff’s narrow slice of the artificial intelligence market from confusion. Since the risk of confusion was limited to that segment, the court tailored its injunction accordingly.

The court issued an order enjoining Perplexity from using the COMET mark on any service listed in plaintiff’s trademark registration. However, Perplexity was permitted to move forward with launching its AI browser under the COMET mark.

Comet ML Inc. v. Perplexity AI, Inc., 2025 WL 1822477 (N.D. Cal. June 30, 2025)

Court lets authors expand copyright case to target Databricks’ new AI models


Five copyright holders sued Databricks and Mosaic ML, claiming their copyrighted works were used to train artificial intelligence systems without permission. Plaintiffs originally alleged that Mosaic ML directly infringed their works by training its MPT large language models on datasets that included their works. Plaintiffs also accused Databricks, Mosaic ML’s parent company, of vicarious liability for that conduct.

After Databricks released a new set of AI models called DBRX, plaintiffs moved to amend the complaint. Plaintiffs asked the court to allow a new claim of direct copyright infringement against Databricks for allegedly using the same protected works to train DBRX. Plaintiffs also sought to update the list of copyrighted works allegedly copied. Defendants opposed the request, arguing that the amendment came too late and would unfairly change the case.

Timing

The court acknowledged that plaintiffs waited more than a year after DBRX was released before requesting to amend the complaint. That delay was significant, and plaintiffs did not provide a strong explanation. However, the court noted that discovery was still open, and key deadlines had not yet passed. Because the case was still active, the court said the delay alone was not enough to deny the motion.

Intent

Defendants claimed plaintiffs acted in bad faith by dragging out the case and making vague statements in court filings. But the court saw no signs of deliberate delay or dishonesty. Instead, it found that plaintiffs’ motion to amend reflected an effort to match the complaint with new information obtained through discovery.

Prejudice

Defendants argued that allowing new claims about DBRX would cause unfair prejudice by drastically changing the case. The court disagreed. It found that the parties were already engaged in discovery related to DBRX and that any added burden would be limited. Since the DBRX and MPT models might rely on overlapping data, the new claims would not require a completely new approach to the case.

Futility

Defendants also said the new claims were too vague and would not survive a challenge. But the court said such issues should be dealt with after the complaint is amended. Unless the new claims are clearly invalid, courts usually allow amendments and address legal sufficiency later in the process.

So the court granted plaintiffs’ motion to amend. The lawsuit will now include direct copyright infringement claims against Databricks based on its newer DBRX models, along with an updated list of works that plaintiffs claim were copied.

In re Mosaic LLM Litigation, 2025 WL 1755650 (N.D. Cal. June 25, 2025)
