Pro se litigant cited AI-hallucinated cases, but court found no harm, no foul

A bankruptcy case took a turn during a hearing when the court asked the pro se debtor how he had found the cases he cited in his legal filings. Debtor admitted that he had used artificial intelligence to generate legal arguments and case citations. When the court reviewed those citations, it found they were misrepresented, irrelevant, or entirely fictitious. One citation led to a case that had been vacated. Another did not say anything close to what debtor claimed. And one did not exist at all.

The court made clear that parties, whether attorneys or pro se litigants, must make a reasonable inquiry before submitting legal contentions to the court. That means personally verifying that cited case law actually supports the argument being made. Using AI without checking the accuracy of the output is not enough.

Even so, the court declined to impose sanctions under the relevant rule. It noted that the case was already being dismissed for independent reasons under bankruptcy law and saw no need to pile on. Essentially, the judge applied a “no harm, no foul” approach. But the warning was clear: AI-generated case law that has not been verified cannot be trusted and will not be tolerated.

In re Perkins, No. 24-32731-thp13, 2025 WL 1871049 (Bankr. D. Or. July 7, 2025)

Court limits use of trademark for forthcoming AI-powered agentic browser


A technology company sued artificial intelligence company Perplexity for trademark infringement and unfair competition. Plaintiff claimed exclusive rights to the mark COMET, which it used in connection with a range of technology and consulting services. Perplexity planned to launch an “AI-powered browser for agentic search” under the same name. Plaintiff asked the court to stop Perplexity from using the mark altogether. The court granted a preliminary injunction in part, allowing the browser to launch under the COMET mark, but blocking all other uses of the mark.

To decide whether to issue the injunction, the court applied the standard four-part test. A plaintiff must show a likelihood of success on the merits, a likelihood of suffering irreparable harm without an injunction, that the balance of equities favors relief, and that the injunction is in the public interest.

The court found that plaintiff was likely to succeed on its trademark infringement claim. Plaintiff owned an incontestable federal registration for the COMET mark. That gave plaintiff a strong position on ownership. The court then looked at the likelihood of confusion between the two uses of COMET by applying the Sleekcraft factors, taken from AMF Inc. v. Sleekcraft Boats, 599 F.2d 341 (9th Cir. 1979).

  • On the question of similarity of the marks, the court noted the marks were identical. That factor weighed heavily in favor of plaintiff.
  • On the element of proximity of the competing products, the court found that even though both parties used artificial intelligence and machine learning, their overall products served different functions, had different designs, and targeted different users. That factor favored defendant.
  • Concerning the strength of plaintiff’s mark, plaintiff convinced the court that it had built a strong reputation over the last seven years, but the court found that plaintiff’s mark was strong only in a more limited segment of the AI space. That factor moderately favored plaintiff.
  • In terms of marketing channels, there was some overlap in audience. Both companies appealed to AI developers and sophisticated users, and Perplexity had the resources to dominate certain markets. That factor slightly favored plaintiff.
  • The degree of care exercised by customers favored defendant, since the users of both products were more likely to be careful in making their choices.
  • The court found that intent only slightly favored plaintiff, and that it was not a critical factor at this early stage.
  • On actual confusion, the evidence was sparse and open to different interpretations, so the court considered this factor neutral.
  • On the likelihood of expansion, the court found in favor of plaintiff. It was concerned that Perplexity, as a company “seeking to become an AI juggernaut,” would eventually expand the COMET mark into new products that could directly interfere with plaintiff’s business. The court found this concern reasonable, and Perplexity’s testimony did little to allay it.

The court also found that plaintiff would suffer irreparable harm if Perplexity expanded the use of the COMET mark beyond the browser. The harm could not be undone easily, and confusion in the market could damage plaintiff’s reputation and customer relationships.

On the balance of the equities, the court found that preventing Perplexity from launching the browser would create significant hardship. However, Perplexity had repeatedly promised under oath that it had no plans to use the mark for other services. Therefore, blocking those additional uses would not harm Perplexity. On the other hand, plaintiff would be harmed if Perplexity expanded use of the mark. The court explained that a broad, mistaken injunction would be more harmful than a narrow one that was mistakenly denied.

Finally, the court found that the public interest would be served by protecting plaintiff’s narrow slice of the artificial intelligence market from confusion. Since the risk of confusion was limited to that segment, the court tailored its injunction accordingly.

The court issued an order enjoining Perplexity from using the COMET mark on any service listed in plaintiff’s trademark registration. However, Perplexity was permitted to move forward with launching its AI browser under the COMET mark.

Comet ML Inc. v. Perplexity AI, Inc., 2025 WL 1822477 (N.D. Cal. June 30, 2025)

Facial recognition missteps lead to dismissal in New York criminal case


A criminal defendant sought dismissal of a charge of aggravated harassment in the second degree. An issue arose after law enforcement identified defendant using facial recognition technology operated by the FDNY rather than through approved NYPD methods. The investigation included unauthorized use of Clearview AI software and unlawful access to DMV records, which led to a digitally altered photo being included in a lineup that resulted in defendant’s identification.

Defendant asked the court to dismiss the case, arguing that the government violated its discovery obligations and denied defendant a speedy trial. Defendant claimed that critical evidence, including AI-generated facial recognition materials and records showing how the DMV photo was altered, had not been disclosed in time and that the government had failed to act with due diligence in obtaining and producing them.

The court ruled that the criminal case must be dismissed. It found that the government failed to file a valid certificate of compliance and was not ready for trial within the time limits required by New York’s speedy trial statute.

The court reached this conclusion because the government relied on investigative tools that violated both policy and law, including the use of unauthorized AI facial recognition and improper access to protected DMV data. The government also failed to adequately pursue and disclose relevant records from FDNY and NYPD sources. The court concluded that the government’s handling of the investigation and discovery process showed a lack of reasonable diligence. The cumulative failures deprived defendant of the timely and fair process guaranteed by law.

People v. Zuhdi A., 86 Misc. 3d 1227(A), 2025 WL 1790657 (Crim. Ct., N.Y. County, June 17, 2025)

Footnote in opinion warns counsel not to cite AI-generated fake cases again

A federal judge in Wisconsin suspected that one of the parties appearing before the court had used generative AI to write a brief, which resulted in a hallucinated case. The judge issued an opinion with this footnote:

Although it does not ultimately affect the Court’s analysis or disposition, Plaintiffs in their reply cite to a case that none of the Court’s staff were able to locate. ECF No. 32 at 5 (“Caserage Tech Corp. v. Caserage Labs, Inc., 972 F.3d 799, 803 (7th Cir. 1992) (The District Court correctly found the parties agreed to permit shareholder rights when one party stated to the other its understanding that a settlement agreement included shareholder rights, and the other party did not say anything to repudiate that understanding.).”). The citation goes to a case of a different name, from a different year, and from a different circuit. Court staff also could not locate the case by searching, either on Google or in legal databases, the case name provided in conjunction with the purported publication year. If this is, as the Court suspects, an instance of provision of falsified case authority derived from artificial intelligence, Plaintiffs’ counsel is on notice that any future instance of the presentation of nonexistent case authority will result in sanctions.

One must hope this friendly warning will be taken seriously.

Plumbers & Gasfitters Union Local No. 75 Health Fund v. Morris Plumbing, LLC, 2024 WL 1675010 (E.D. Wis. Apr. 18, 2024)

Key Takeaways From the USPTO’s Guidance on AI Use


On April 10, 2024, the United States Patent and Trademark Office (“USPTO”) issued guidance to attorneys about using AI in matters before the USPTO. While no new rules were implemented to address the use of AI, the guidance seeks to remind practitioners of the existing rules, inform them of risks, and suggest ways of mitigating those risks. The notice acknowledges that it is an effort to address AI considerations at the intersection of innovation, creativity, and intellectual property, consistent with the President’s recent executive order calling upon the federal government to enact and enforce protections against AI-related harms.

The guidance tends to address patent prosecution and examination more than trademark practice and prosecution, but there are still critically important ideas relevant to the practice of trademark law.

The USPTO takes a generally positive approach toward the use of AI, recognizing that tools using large language models can lower the barriers and costs for practicing before the USPTO and help practitioners serve clients better and more efficiently. But it recognizes potential downsides from misuse – some of which are not exclusive to intellectual property practice, e.g., using AI-generated non-existent case citations in briefs filed before the USPTO and inadvertently disclosing confidential information via a prompt.

Key Reminders in the Guidance

The USPTO’s guidance reminds practitioners of some specific ways that they must adhere to USPTO rules and policies when using AI assistance in submissions – particularly because of the need for full, fair, and accurate disclosure and the protection of clients’ interests.

Candor and Good Faith: Practitioners involved in USPTO proceedings (including prosecution and matters such as oppositions and cancellation proceedings before the Trademark Trial and Appeal Board (TTAB)) are reminded of the duties of candor and good faith. This entails the disclosure of all material information known to be relevant to a matter. Though the guidance is patent-heavy in its examples (e.g., discussing communications with patent examiners), it is not limited to patent prosecution but applies to trademark prosecution as well. The guidance details the broader duty of candor and good faith, which prohibits fraudulent conduct and emphasizes the integrity of USPTO proceedings and the reliability of registration certificates issued.

Signature Requirements: The guidance outlines the signature requirement for correspondence with the USPTO, ensuring that documents drafted with AI assistance are reviewed and believed to be true by the signer.

Confidentiality: The confidentiality of client information is of key importance, with practitioners being required to prevent unauthorized disclosure, which could be exacerbated by the use of AI in drafting applications or conducting clearance searches.

International Practice: Foreign filing and compliance with export regulations are also highlighted, especially in the context of using AI for drafting applications or doing clearance searches. Again, while the posture in the guidance tends to be patent heavy, the guidance is relevant to trademark practitioners working with foreign associates and otherwise seeking protection of marks in other countries. Practitioners are reminded of their responsibilities to prevent improper data export.

USPTO Electronic Systems: The guidance further addresses the use of USPTO electronic systems, emphasizing that access is governed by terms and conditions to prevent unauthorized actions.

Staying Up-to-date: The guidance reiterates the duties owed to clients, including competent and diligent representation, stressing the need for practitioners to stay informed about the technologies they use in representing clients, including AI tools.

More Practical Guidance for Use of Tools

The guidance next moves to a discussion of particular use of AI tools in light of the nature of the practice and the rules of which readers have been reminded. Key takeaways in this second half of the guidance include the following:

Text creation:

Word processing tools have evolved to incorporate generative AI capabilities, enabling the automation of complex tasks such as responding to office actions. While the use of such AI-enhanced tools in preparing documents for submission to the USPTO is not prohibited or subject to mandatory disclosure, users are reminded to adhere to USPTO policies and their duties of candor and good faith towards the USPTO and their clients when employing these technologies.

Likely motivated by court cases that have gotten a lot of attention because lawyers used ChatGPT to generate fake case cites, the USPTO addressed the importance of human review of AI-generated content. All USPTO submissions, regardless of AI involvement in their drafting, must be signed by the presenting party, who attests to the truthfulness of the content and the adequacy of their inquiry into its accuracy. Human review is crucial to uphold the duty of candor and good faith, requiring the correction of any errors or omissions before submission. While there is no general duty to disclose AI’s use in drafting unless specifically asked, practitioners must ensure their submissions are legally sound and factually accurate and consult with their clients about the representation methods used.

More specifically, submissions to the TTAB and trademark applications that utilize AI tools require meticulous review to ensure accuracy and compliance with the applicable rules. This is vital for all documents, including evidence for trademark applications, responses to office actions, and legal briefs, to ensure they reflect genuine marketplace usage and are supported by factual evidence. Special attention must be given to avoid the inclusion of AI-generated specimens or evidence that misrepresents actual use or existence in commerce. Materials produced by AI that distort facts, include irrelevant content, or are unduly repetitive risk being deemed as submitted with improper intent, potentially leading to unnecessary delays or increased costs in the proceedings.

Filling out Forms:

AI tools can enhance the efficiency of filing documents with the USPTO by automating tasks such as form completion and document uploads. But users must ensure their use aligns with USPTO rules, particularly regarding signatures, which must be made by a person and not delegated to AI. Users are reminded that USPTO.gov accounts are limited to use by natural persons. AI systems cannot hold such accounts, emphasizing the importance of human oversight in submissions to ensure adherence to USPTO regulations and policies.

Automated Access to USPTO IT Systems:

The guidance notes that when utilizing AI tools to interact with USPTO IT systems, it is crucial to adhere to legal and regulatory requirements, ensuring authorized use only. Users must have proper authorization, such as being an applicant, registrant, or practitioner, to file documents or access information. AI systems cannot be considered “users” and thus are ineligible for USPTO.gov accounts. Individuals employing AI assistance must ensure the tool does not overstep access permissions, which risks revocation of the applicable USPTO.gov account or other legal exposure for unauthorized access. Additionally, the USPTO advises against excessive data mining from USPTO databases with AI tools. The USPTO reminds readers that it provides bulk data products that could assist in these efforts.

VIDEO: AI and Voice Clones – Tennessee enacts the ELVIS Act


Generative AI enables people to clone other people’s voices. And that can lead to fraud, identity theft, or intellectual property infringement.

On March 21, 2024, Tennessee enacted a new law called the ELVIS Act that seeks to tackle this problem. What does the law say?

The law adds a person’s voice to the list of things over which that person has a property interest. And what is a “voice” under the law? It is any sound that is readily identifiable and attributable to a particular individual. It can be an actual voice or a simulation of the voice. A person can be liable for making available another person’s voice without consent. And one could also be liable for making available any technology having the “primary purpose or function” of producing another’s voice without permission.


New Jersey judiciary taking steps to better understand Generative AI in the practice of law

We are seeing the state of New Jersey take strides toward the “safe and effective use of Generative AI” in the practice of law. The state judiciary’s Acting Administrative Director recently sent an email to New Jersey attorneys acknowledging the growth of Generative AI in the practice of law and recognizing its positive and negative uses.

The correspondence included a link to a 23-question online survey designed to gauge New Jersey attorneys’ knowledge about and attitudes toward Generative AI, with the aim of designing seminars and other training.

The questions seek to gather information on topics including the age and experience of the responding attorneys, attitudes toward Generative AI both in and out of the practice of law, levels of experience in using Generative AI, and whether Generative AI should be a part of the future of the practice of law.

This initiative signals that the state may be taking a proactive approach toward attorneys’ adoption of these newly available technologies.


Lawyer gets called out a second time for using ChatGPT in court brief

You may recall the case of Park v. Kim, wherein the Second Circuit excoriated an attorney for using ChatGPT to generate a brief that contained a bunch of fake cases. Well, the same lawyer responsible for that debacle has been found out again, this time in a case where she is the pro se litigant.

Plaintiff sued Delta Airlines for racial discrimination. She filed a motion for leave to amend her complaint, which the court denied. In discussing the denial, the court observed the following:

[T]he Court maintains serious concern that at least one of Plaintiff’s cited cases is non-existent and may have been a hallucinated product of generative artificial intelligence, particularly given Plaintiff’s recent history of similar conduct before the Second Circuit. See Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024) (“We separately address the conduct of Park’s counsel, Attorney Jae S. Lee. Lee’s reply brief in this case includes a citation to a non-existent case, which she admits she generated using the artificial intelligence tool ChatGPT.”).

In Park v. Kim, the court referred the attorney for potential disciplinary action. The court in this case was more lenient, merely denying her motion for leave to amend and eventually dismissing the case on summary judgment.

Jae Lee v. Delta Air Lines, Inc., 2024 WL 1230263 (E.D.N.Y. March 22, 2024)


AI and voice clones: Three things to know about Tennessee’s ELVIS Act

On March 21, 2024, the governor of Tennessee signed the ELVIS Act (the Ensuring Likeness, Voice, and Image Security Act of 2024), which is aimed at the problem of people using AI to simulate voices in ways not authorized by the person whose voice is being imitated.

Here are three key things to know about the new law:

(1) Voice defined.

The law adds the following definition to existing Tennessee law:

“Voice” means a sound in a medium that is readily identifiable and attributable to a particular individual, regardless of whether the sound contains the actual voice or a simulation of the voice of the individual;

There are a couple of interesting things to note. One could generate or use the voice of another without using the other person’s name. The voice simply has to be “readily identifiable” and “attributable” to a particular human. Those seem to be pretty open concepts, and we could expect quite a bit of litigation over what it takes for a voice to be identifiable and attributable to another. Would this cover situations where a person naturally sounds like another, or is just trying to imitate another’s musical style?

(2) Voice is now a property right.

The existing statute was amended to read as follows (the additions are discussed below):

Every individual has a property right in the use of that individual’s name, photograph, voice, or likeness in any medium in any manner.

The word “person’s” was changed to “individual’s” presumably to clarify that this is a right belonging to a natural person (i.e., real human beings and not companies). And of course the word “voice” was added to expressly include that attribute as something in which the person can have a property interest.

(3) Two new things are banned under law.

The following two paragraphs have been added:

A person is liable to a civil action if the person publishes, performs, distributes, transmits, or otherwise makes available to the public an individual’s voice or likeness, with knowledge that use of the voice or likeness was not authorized by the individual or, in the case of a minor, the minor’s parent or legal guardian, or in the case of a deceased individual, the executor or administrator, heirs, or devisees of such deceased individual.

A person is liable to a civil action if the person distributes, transmits, or otherwise makes available an algorithm, software, tool, or other technology, service, or device, the primary purpose or function of which is the production of an individual’s photograph, voice, or likeness without authorization from the individual or, in the case of a minor, the minor’s parent or legal guardian, or in the case of a deceased individual, the executor or administrator, heirs, or devisees of such deceased individual.

With this language, we see the heart of the new law’s impact. One can sue another for making his or her voice publicly available without permission. Note that this restriction is not limited to commercial use of another’s voice. Most states’ laws discussing name, image, and likeness restrict commercial use by another. This statute is broader and would make more things unlawful – for example, creating a deepfaked voice simply for fun (or harassment, of course), if the person whose voice is being imitated has not consented.

Note the other interesting new prohibition, the one on making available tools having as their “primary purpose or function” the production of another’s voice without authorization. If you were planning on launching that new app where you can make your voice sound like a celebrity’s voice, consider whether this Tennessee statute might shut you down.

