Infringement case against OpenAI failed because there was no copyright registration

Thinking about suing an AI company for copyright infringement? Do not overlook the basics. Before any court will consider the merits of an infringement claim, the plaintiff needs to have an actual copyright registration in hand, not just a pending application.

That principle was confirmed in a recent unsuccessful lawsuit against OpenAI in federal court in California. Plaintiff sued OpenAI, alleging that OpenAI infringed the copyrights in several artificial intelligence models and other content that plaintiff claimed to have developed, and that OpenAI then destroyed evidence to conceal the alleged infringement.

Plaintiff asked the court to issue a temporary restraining order preventing defendant from deleting or altering data and documents related to the alleged infringement while the case proceeded. The court denied the request for a temporary restraining order and dismissed the complaint.

The court ruled this way because the Copyright Act bars any civil infringement action until "registration of the copyright claim has been made," and the Supreme Court has interpreted that requirement to mean the Copyright Office must have issued a registration, not merely received an application. See Fourth Estate Pub. Benefit Corp. v. Wall-Street.com, LLC, 139 S. Ct. 881 (2019). With only a pending application, plaintiff was unable to show a likelihood of success on the merits.

Gholami v. OpenAI, Inc., No. 26-cv-00174, 2026 WL 61359 (N.D. Cal. January 8, 2026)

AI fake-case crisis reaches Illinois Appellate Court for the first time

The Illinois Appellate Court for the Fourth District issued a decision that marks the first time an Illinois court at this level has addressed attorney misuse of artificial intelligence. While the sad underlying matter involved the termination of parental rights, the appellate decision is noteworthy for another reason: the use of fictitious legal citations generated by AI.

The Appeal

Respondent appealed from a trial court’s order terminating her parental rights to her two minor children. The appeal raised several arguments, including challenges to the trial court’s findings on unfitness and best interests, a due process claim regarding self-representation, and a claim of ineffective assistance of counsel. The appellate court ultimately affirmed the trial court’s decision. However, the focus of the opinion shifted when the court found irregularities in the appellate briefs that respondent’s appointed counsel had filed.

The Court’s Review of the Briefs

After reviewing the briefs, the court identified eight cited cases that it could not verify. The court ordered counsel to explain the source of these citations and to appear in person. Counsel responded by acknowledging that five of the cited cases were fictitious. He explained that he had used AI to assist in drafting the brief and had not independently verified the citations it produced. As for the remaining three cases, which did exist, the court found that their actual content did not support the legal arguments presented in the brief.

The Role of AI and Legal Responsibility

The court’s opinion noted that AI tools such as generative chatbots can assist legal professionals but must be used with caution. Citing recent guidance from the American Bar Association and the Illinois Supreme Court’s new AI policy, the court emphasized that attorneys are responsible for reviewing and verifying all material submitted to a court, regardless of how it was generated.

The court concluded that the attorney had not reviewed the AI-generated citations and that this lack of verification resulted in inaccurate filings. It clarified that while use of AI is not prohibited, reliance on unverified outputs can compromise the integrity of legal proceedings.

Sanctions Imposed

Rather than striking the briefs, the court chose to address the appeal on the merits but imposed monetary sanctions under Illinois Supreme Court Rule 375. It ordered the attorney to (1) return the $6,925.62 he had been paid by Sangamon County for his representation, (2) pay an additional $1,000 fine to the appellate clerk, and (3) submit a copy of the opinion to the Illinois Attorney Registration and Disciplinary Commission.

The court noted that these measures were intended to ensure accountability and to reinforce the expectations surrounding the use of AI in court proceedings.

In re Baby Boy, — N.E.3d —, 2025 WL 2046315 (Ill. App. Ct. 4th Dist. July 21, 2025)

Tesla awarded sanctions where opposing party cited AI-generated cases in discovery briefing

Plaintiff sued Tesla in federal court in the Southern District of Florida. During discovery, plaintiff filed multiple motions that cited fake case law hallucinated by artificial intelligence. Defendant moved to strike the filings and asked the court to award attorneys’ fees for the time spent addressing the fake citations and related motions.

Defendant argued that its attorneys spent more than five hours reviewing the false citations, drafting a motion to strike, and responding to a motion to compel contact information. Defendant requested $1,096 in fees. Plaintiff objected, claiming the hours were excessive and that defendant had not properly conferred before seeking fees. Plaintiff had offered to pay only one dollar as a “symbolic” resolution.

The court rejected plaintiff’s arguments. It found that plaintiff did not confer in good faith and was again wasting the court’s time. Although plaintiff was not a lawyer, the court held that pro se litigants must still follow the rules and act professionally. The court emphasized that submitting hallucinated cases, even unintentionally, undermines the judicial process.

The court reduced the requested amount slightly, finding that some billing entries were duplicative, as both attorneys reviewed the same documents. Plaintiff was ordered to pay the reduced amount and defendant was required to notify the court whether payment was received.

Crespo v. Tesla, Inc., 2025 WL 1921903 (S.D. Fla. July 11, 2025)

Court bars expert from accessing AI source code in high profile copyright case

Plaintiffs sued Stability AI for misusing their copyrighted artwork to train generative AI models. As part of the lawsuit, plaintiffs sought to designate a University of Chicago professor and AI researcher as an expert. Plaintiffs asserted it would be necessary to disclose defendants’ highly confidential materials to the professor.

Defendants asked the court to block disclosure of their “ATTORNEYS’ EYES ONLY” and “HIGHLY CONFIDENTIAL – SOURCE CODE” material to the expert. They argued that the expert’s work – particularly concerning his projects designed to “poison” or “protect against” generative AI training – put him in “functional competition” with their models.

The court ruled in favor of defendants because it found the risk of competitive harm outweighed plaintiffs’ need for the expert testimony. Although the expert was a respected academic and not a commercial competitor, the court reasoned that his tools were designed to interfere with AI model performance and that he continued to develop such technologies. Even unintentional use of confidential information could influence his future work.

Additionally, the court found that while the expert was well-qualified, he was not uniquely qualified. Plaintiffs could rely on other experts in the AI field, including one of the expert’s former students, who was approved as an expert in a related case.

Andersen v. Stability AI Ltd., 2025 WL 1927796 (N.D. Cal. July 14, 2025)

Can generative AI turn hearsay into admissible evidence?

In the recent case of Malia LLC v. State Farm, a policyholder sued its insurer, claiming the insurer wrongfully denied coverage. The insurer moved for summary judgment. One of the pieces of evidence it asked the court to consider was a wind verification report generated by a system that "combine[d] proprietary, three-dimensional storm models with artificial intelligence, radar data, and real-world observations to analyze what actually happened."

Plaintiff sought to exclude this evidence, arguing it was inadmissible hearsay. Under Federal Rule of Evidence 801, "hearsay" is an out-of-court statement that a party offers into evidence to prove the truth of the matter asserted in the statement.

The court rejected plaintiff’s argument. It observed that the report contained raw data generated by a machine or algorithm. And it went on to note that “the consensus among the circuit courts is that machine-generated data cannot be hearsay because it does not constitute a statement under [Rule 801].” It cited to the case of Lyngaas v. Curaden Ag, 992 F.3d 412 (6th Cir. 2021), which found that summary-report logs of fax transmissions were not hearsay. The Lyngaas case had cited to other similar cases, including one in which a taser report was “merely a report of raw data produced by a machine”.

The case raises a larger, perhaps more interesting question – can a litigant use generative AI to prepare content, and get it admitted into evidence when it otherwise would have been barred by the hearsay rule?

While a strict reading of the holding of Malia LLC and similar cases might lead one to believe so (since these cases actually do say that “machine-generated data cannot be hearsay”), such a conclusion would probably be overly simplistic and potentially misleading. The key distinction lies in whether the content produced by the AI is truly machine-generated in the evidentiary sense – that is, created autonomously by the machine without incorporating a human’s assertion. If a person prompts the AI with factual inputs or narrative claims, then the court would be more likely to determine that the output is merely a stylized or reorganized version of those assertions. In that scenario, the output arguably retains the character of a “statement” under the hearsay rule.

Courts are likely to scrutinize not just the form of the AI output, but the origin of the content and the intent behind its use. If the litigant’s goal in using generative AI is to repackage hearsay in admissible form, the court may well see through the tactic and apply traditional hearsay exclusions. On the other hand, if the AI produces output based solely on internal rules or statistical patterns, without incorporating or restating human assertions, then it may fall outside the definition of hearsay altogether.

In short, while these cases offer a foothold for arguing that certain AI-generated content is not hearsay, the admissibility of such content will ultimately depend on how closely the output is tied to human assertions.

Malia LLC v. State Farm, 2025 WL 1840732 (W.D. Tenn. July 3, 2025)

Voice cloning case presents novel AI copyright issues

Two professional voice actors sued Lovo, a company that sells AI-generated voiceover software, alleging, among other things, copyright infringement. Plaintiffs claimed that defendant took their voices without permission and used them to create digital clones. Those clones were sold to customers under the pseudonyms "Kyle Snow" and "Sally Coleman." Defendant moved to dismiss the copyright claims. The court provided a mixed ruling.

How it started

Plaintiffs asserted that defendant first contacted them on the freelance platform Fiverr in 2019 and 2020. Defendant’s representatives allegedly assured plaintiffs the recordings would only be used for private research, not public or commercial projects. Based on those promises, plaintiffs delivered the recordings. Each was paid a few hundred dollars.

Several years later, plaintiffs discovered their voices had been cloned. In 2023, they heard a podcast produced using Lovo’s software. Plaintiff Lehrman’s friends and colleagues said the cloned voice sounded exactly like him. After further research, plaintiffs learned that defendant had built digital voice profiles from their recordings and offered those profiles to customers as part of a paid AI service.

What plaintiffs claimed

Plaintiffs made four different types of copyright claims. One was that defendant had directly copied Plaintiff Sage’s recording and used it in a promotional video for investors. Another was that defendant had used both of their recordings to train defendant’s AI system called Genny. They also claimed that the cloned voices produced by Genny were infringing. Finally, they argued that defendant should be liable for contributory infringement because it let its customers use those clones.

What the court decided

The court allowed one copyright claim to move forward. Plaintiff Sage alleged that defendant used a real recording of her voice in public investor presentations and YouTube videos. Defendant admitted to using a portion of the recording, and the court agreed that this could be outside the scope of the license.

The other copyright claims were not as successful. The court dismissed the copyright claim based on use of the voice recordings to train the AI model. Plaintiffs did not explain in their complaint how the training process worked or how it infringed their rights. The court said plaintiffs could amend the complaint and try again.

The court also dismissed the claim based on the AI-generated voices themselves. It explained that copyright law does not protect imitations of a voice, only the actual recordings. Plaintiffs had not alleged that the AI clones were identical to the originals, only that they sounded very similar. That kind of mimicry was not covered by copyright law.

Finally, the court dismissed the contributory infringement claim. Since there was no direct infringement based on the AI-generated outputs, the court said there could be no indirect liability either.

Lehrman v. Lovo Inc., — F.Supp.3d —, 2025 WL 1902547 (S.D.N.Y. July 10, 2025)

See also: VIDEO: AI and Voice Clones – Tennessee enacts the ELVIS Act

Court says that party cannot use AI instead of a human stenographer at depositions

Plaintiff asked the court for permission to take 13 depositions using video recordings and AI transcription software (instead of a human stenographer), with a notary and videographer serving as the deposition officer and certifying the transcripts. Plaintiff argued this method would comply with the Federal Rules of Civil Procedure while reducing costs from $4,000 to $1,500 per deposition.

Defendant did not oppose the use of video or the proposed officer but objected to the use of AI to produce and certify transcripts, citing concerns about accuracy and confidentiality, especially under the terms of a protective order in the case.

The magistrate judge denied plaintiff’s request, and plaintiff took the matter up with the district court judge, claiming the ruling conflicted with Rule 1’s emphasis on inexpensive litigation and improperly limited who may serve as a deposition officer.

The district judge overruled those objections, holding that the magistrate’s decision was not clearly erroneous or contrary to law. The court emphasized that Rule 30(b)(3)(A) permits judicial discretion over how depositions are recorded and that plaintiff’s proposed method, though potentially cheaper, was untested and raised legitimate concerns about the secure handling of confidential information. The court rejected plaintiff’s claim that defendant had waived confidentiality by using Google Drive for its own documents, and explained that the number of planned depositions made it reasonable to consider how any problems with the proposed method could grow larger if repeated. The court concluded that while the Federal Rules promote efficiency, they do not require acceptance of methods that undermine reliability or violate protective orders.

Black v. City of San Diego, 2025 WL 1908072 (S.D. Cal. July 10, 2025)

Court gives X opportunity to raise Section 230 claim in deepfake case

X sued Minnesota Attorney General Keith Ellison over a state law that prohibits the dissemination of AI-generated political deepfakes, arguing that the statute violates the First and Fourteenth Amendments and is preempted by Section 230 of the Communications Decency Act, 47 U.S.C. § 230. A related case challenging the same law is already on appeal in Kohls v. Ellison, leading the court to stay X's constitutional claims while allowing its Section 230 claim to move forward. The court invited both parties to file motions for judgment on the pleadings within 30 days. If neither does so, the entire case will be stayed pending resolution of the Kohls appeal.

X Corp. v. Ellison, 2025 WL 1833455 (D. Minn. July 3, 2025)

Relying on AI for help in discovery meet-and-confer was not good faith

When a party in a lawsuit claims that the other side is not fully responding to discovery requests, that party can file a motion to compel, asking the court to require the noncompliant party to respond fully. Before filing such a motion, however, the movant must, under the Federal Rules of Civil Procedure, confer in good faith with the other side.

Tijerino v. Spotify USA Inc. was a patent case in which the pro se plaintiff claimed defendant was not fully responding to discovery. He held a phone call with defendant's attorneys and, according to defense counsel's account of the call, which plaintiff did not dispute, used an artificial intelligence program to guide him in the conversation.

After the call, plaintiff filed a motion to compel, which the court denied. The court observed that plaintiff’s failure to be prepared in the conference meant that he did not confer in good faith. It was not sufficient to “outsource” such preparation to artificial intelligence.

Tijerino v. Spotify USA Inc., 2025 WL 1866057 (E.D. La. July 7, 2025)

Court limits use of trademark for forthcoming AI-powered agentic browser

A technology company sued artificial intelligence company Perplexity for trademark infringement and unfair competition. Plaintiff claimed exclusive rights to the mark COMET, which it used in connection with a range of technology and consulting services. Perplexity planned to launch an “AI-powered browser for agentic search” under the same name. Plaintiff asked the court to stop Perplexity from using the mark altogether. The court granted a preliminary injunction in part, allowing the browser to launch under the COMET mark, but blocking all other uses of the mark.

To decide whether to issue the injunction, the court applied the standard four-part test. A plaintiff must show a likelihood of success on the merits, a likelihood of suffering irreparable harm without an injunction, that the balance of equities favors relief, and that the injunction is in the public interest.

The court found that plaintiff was likely to succeed on its trademark infringement claim. Plaintiff owned an incontestable federal registration for the COMET mark. That gave plaintiff a strong position on ownership. The court then looked at the likelihood of confusion between the two uses of COMET by applying the Sleekcraft factors, taken from AMF Inc. v. Sleekcraft Boats, 599 F.2d 341 (9th Cir. 1979).

  • On the question of similarity of the marks, the court noted the marks were identical. That factor weighed heavily in favor of plaintiff.
  • On the element of proximity of the competing products, the court found that even though both parties used artificial intelligence and machine learning, their overall products served different functions, had different designs, and targeted different users. That factor favored defendant.
  • Concerning the strength of plaintiff’s mark, plaintiff convinced the court that it had built a strong reputation over the last seven years, but the court found that plaintiff’s mark was strong only in a more limited segment of the AI space. That factor moderately favored plaintiff.
  • In terms of marketing channels, there was some overlap in audience. Both companies appealed to AI developers and sophisticated users, and Perplexity had the resources to dominate certain markets. That factor slightly favored plaintiff.
  • The degree of care exercised by customers favored defendant, since the users of both products were more likely to be careful in making their choices.
  • The court found that intent only slightly favored plaintiff, and that it was not a critical factor at this early stage.
  • On actual confusion, the evidence was sparse and open to different interpretations, so the court considered this factor neutral.
  • On the likelihood of expansion, the court found in favor of plaintiff. It was concerned that Perplexity, as a company “seeking to become an AI juggernaut,” would eventually expand the COMET mark into new products that could directly interfere with plaintiff’s business. The court found this concern reasonable, and Perplexity’s testimony did little to calm those concerns.

The court also found that plaintiff would suffer irreparable harm if Perplexity expanded the use of the COMET mark beyond the browser. The harm could not be undone easily, and confusion in the market could damage plaintiff’s reputation and customer relationships.

On the balance of the equities, the court found that preventing Perplexity from launching the browser would create significant hardship. However, Perplexity had repeatedly promised under oath that it had no plans to use the mark for other services. Therefore, blocking those additional uses would not harm Perplexity. On the other hand, plaintiff would be harmed if Perplexity expanded use of the mark. The court explained that a broad, mistaken injunction would be more harmful than a narrow one that was mistakenly denied.

Finally, the court found that the public interest would be served by protecting plaintiff’s narrow slice of the artificial intelligence market from confusion. Since the risk of confusion was limited to that segment, the court tailored its injunction accordingly.

The court issued an order enjoining Perplexity from using the COMET mark on any service listed in plaintiff’s trademark registration. However, Perplexity was permitted to move forward with launching its AI browser under the COMET mark.

Comet ML Inc. v. Perplexity AI, Inc., 2025 WL 1822477 (N.D. Cal. June 30, 2025)
