AI fake cases crisis reaches Illinois Appellate Court for the first time


The Illinois Appellate Court for the Fourth District issued a decision that marks the first time an Illinois court at this level has addressed attorney misuse of artificial intelligence. While the sad underlying matter involved the termination of parental rights, the appellate decision is noteworthy for another reason: the use of fictitious legal citations generated by AI.

The Appeal

Respondent appealed from a trial court’s order terminating her parental rights to her two minor children. The appeal raised several arguments, including challenges to the trial court’s findings on unfitness and best interests, a due process claim regarding self-representation, and a claim of ineffective assistance of counsel. The appellate court ultimately affirmed the trial court’s decision. However, the focus of the opinion shifted when the court found irregularities in the appellate briefs that respondent’s appointed counsel had filed.

The Court’s Review of the Briefs

After reviewing the briefs, the court found that eight of the cited cases could not be located. The court ordered counsel to explain the source of these citations and to appear in person. Counsel responded by acknowledging that five of the cited cases were fictitious. He explained that he had used AI to assist in drafting the brief and did not independently verify the citations it produced. As for the remaining three cases, which did exist, the court found that their actual content did not support the legal arguments presented in the brief.

The Role of AI and Legal Responsibility

The court’s opinion noted that AI tools such as generative chatbots can assist legal professionals but must be used with caution. Citing recent guidance from the American Bar Association and the Illinois Supreme Court’s new AI policy, the court emphasized that attorneys are responsible for reviewing and verifying all material submitted to a court, regardless of how it was generated.

The court concluded that the attorney had not reviewed the AI-generated citations and that this lack of verification resulted in inaccurate filings. It clarified that while use of AI is not prohibited, reliance on unverified outputs can compromise the integrity of legal proceedings.

Sanctions Imposed

Rather than striking the briefs, the court chose to address the appeal on the merits but imposed monetary sanctions under Illinois Supreme Court Rule 375. It ordered the attorney to (1) return the $6,925.62 he had been paid by Sangamon County for his representation, (2) pay an additional $1,000 fine to the appellate clerk, and (3) submit a copy of the opinion to the Illinois Attorney Registration and Disciplinary Commission.

The court noted that these measures were intended to ensure accountability and to reinforce the expectations surrounding the use of AI in court proceedings.

In re Baby Boy, — N.E.3d —, 2025 WL 2046315 (Ill. App. Ct. 4th Dist. July 21, 2025)

Immigration attorney hit with sanctions for using Claude to generate fake case citations

A federal court imposed sanctions on a petitioner’s attorney for submitting fabricated legal quotations generated by the AI tool Claude Sonnet 4 in an emergency habeas case seeking to halt a client’s deportation. Facing an expedited timeline and suffering from a respiratory infection, the attorney admitted he used AI to draft a supplemental brief and failed to verify the quotations, despite knowing AI tools are prone to hallucinations.

The court found this conduct violated Rule 11 and constituted subjective bad faith, noting that the attorney either consciously avoided checking his sources or deliberately ignored the opposing party’s warning about the fake quotes.

While courts have typically imposed monetary sanctions ranging from $1,500 to $15,000 in similar cases, the court here imposed a reduced $1,000 fine in light of mitigating factors, including the attorney’s prompt admission, withdrawal of the filing, and enrollment in a continuing legal education (CLE) course on ethical AI use. He was also ordered to file proof of CLE completion.

The decision underscores that AI use in legal practice does not excuse attorneys from their duty to confirm the accuracy of filings and that even in emergency settings, courts will hold lawyers accountable for unverified, fictitious legal content.

Kaur v. Desso, 2025 WL 1895859 (N.D.N.Y. July 9, 2025)

Pro se litigant cited AI hallucinated cases but court found no harm, no foul

A bankruptcy case took a turn during a hearing when the court asked the pro se debtor how he had found the cases he cited in his legal filings. Debtor admitted that he had used artificial intelligence to generate legal arguments and case citations. When the court reviewed those citations, it found they were either misrepresented, irrelevant, or entirely fictitious. One citation led to a case that had been vacated. Another did not say anything close to what debtor claimed. And one did not exist at all.

The court made clear that parties, whether attorneys or pro se litigants, must make a reasonable inquiry before submitting legal contentions to the court. That means personally verifying that cited case law actually supports the argument being made. Using AI without checking the accuracy of the output is not enough.

Even so, the court declined to impose sanctions under the relevant rule. It noted that the case was already being dismissed for independent reasons under bankruptcy law and saw no need to pile on. Essentially, the judge applied a “no harm, no foul” approach. But the warning was clear: AI-generated case law that has not been verified cannot be trusted and will not be tolerated.

In re Perkins, No. 24-32731-thp13, 2025 WL 1871049 (Bankr. D. Or. July 7, 2025)

Court lets authors expand copyright case to target Databricks’ new AI models


Five copyright holders sued Databricks and MosaicML, claiming their copyrighted works were used to train artificial intelligence systems without permission. Plaintiffs originally alleged that MosaicML directly infringed their works by training its MPT large language models on datasets that included those works. Plaintiffs also accused Databricks, MosaicML’s parent company, of vicarious liability for that conduct.

After Databricks released a new set of AI models called DBRX, plaintiffs moved to amend the complaint. Plaintiffs asked the court to allow a new claim of direct copyright infringement against Databricks for allegedly using the same protected works to train DBRX. Plaintiffs also sought to update the list of copyrighted works allegedly copied. Defendants opposed the request, arguing that the amendment came too late and would unfairly change the case.

Timing

The court acknowledged that plaintiffs waited more than a year after DBRX was released before requesting to amend the complaint. That delay was significant, and plaintiffs did not provide a strong explanation. However, the court noted that discovery was still open, and key deadlines had not yet passed. Because the case was still active, the court said the delay alone was not enough to deny the motion.

Intent

Defendants claimed plaintiffs acted in bad faith by dragging out the case and making vague statements in court filings. But the court saw no signs of deliberate delay or dishonesty. Instead, it found that plaintiffs’ motion to amend reflected an effort to match the complaint with new information obtained through discovery.

Prejudice

Defendants argued that allowing new claims about DBRX would cause unfair prejudice by drastically changing the case. The court disagreed. It found that the parties were already engaged in discovery related to DBRX and that any added burden would be limited. Since the DBRX and MPT models might rely on overlapping data, the new claims would not require a completely new approach to the case.

Futility

Defendants also said the new claims were too vague and would not survive a challenge. But the court said such issues should be dealt with after the complaint is amended. Unless the new claims are clearly invalid, courts usually allow amendments and address legal sufficiency later in the process.

So the court granted plaintiffs’ motion to amend. The lawsuit will now include direct copyright infringement claims against Databricks based on its newer DBRX models, along with an updated list of works that plaintiffs claim were copied.

In re Mosaic LLM Litigation, 2025 WL 1755650 (N.D. Cal. June 25, 2025)

K-Pop companies seek U.S. court’s help to unmask anonymous YouTubers

Three South Korean entertainment companies turned to a U.S. court to assist in identifying anonymous YouTube users accused of posting defamatory content. The companies sought permission to issue a subpoena under 28 U.S.C. § 1782, a law that allows U.S. courts to facilitate evidence collection for foreign legal proceedings.

Applicants alleged that the YouTube channels in question posted false claims about K-pop groups they manage, including accusations of plagiarism and deliberate masking of poor vocal performances. Applicants – who had already initiated lawsuits in South Korea – needed the subpoena to obtain identifying information from Google, the parent company of YouTube, to pursue those claims further. Google did not oppose the request but reserved the right to challenge the subpoena if served.

The court ruled in favor of applicants, granting the subpoena. It determined that the statutory requirements under § 1782 were met: Google operates within the court’s jurisdiction, the discovery was intended for use in South Korean legal proceedings, and applicants qualified as interested persons. The court also weighed discretionary factors, such as the non-involvement of Google in the South Korean lawsuits and the relevance of the requested information, finding them supportive of applicants’ request.

The court emphasized that the subpoena was narrowly tailored to identify the operators of the YouTube channels while avoiding unnecessary intrusion into unrelated data. However, it also sought to ensure procedural fairness, requiring Google to notify the affected individuals, who would then have 30 days to contest the subpoena.

Three reasons why this case matters:

  • International Legal Cooperation: The case illustrates how U.S. courts can assist in resolving international disputes involving anonymous online actors.
  • Accountability for Online Speech: It highlights the balance between free expression and accountability for potentially harmful content on digital platforms.
  • Corporate Reputation Management: The decision reflects how businesses can use legal avenues to protect their reputation across jurisdictions.

In re Ex Parte Application of HYBE Co., Ltd., Belift Lab Inc., and Source Music Co., Ltd., 2024 WL 4906495 (N.D. Cal. Nov. 27, 2024).

Supreme Court weighs in on Texas and Florida social media laws


In a significant case involving the intersection of technology and constitutional law, NetChoice LLC sued Florida and Texas, challenging their social media content-moderation laws. Both states had enacted statutes regulating how platforms such as Facebook and YouTube moderate, organize, and display user-generated content. NetChoice argued that the laws violated the First Amendment by interfering with the platforms’ editorial discretion. It asked the Court to invalidate these laws as unconstitutional.

The Supreme Court reviewed conflicting rulings from two lower courts. The Eleventh Circuit had upheld a preliminary injunction against Florida’s law, finding it likely violated the First Amendment, while the Fifth Circuit had reversed an injunction against the Texas law, reasoning that content moderation did not qualify as protected speech. The Supreme Court vacated both decisions, directing the lower courts to reconsider the challenges with a more comprehensive analysis.

The Court explained that content moderation—decisions about which posts to display, prioritize, or suppress—constitutes expressive activity akin to editorial decisions made by newspapers. The Texas and Florida laws, by restricting this activity, directly implicated First Amendment protections. Additionally, the Court noted that these cases involved facial challenges, requiring an evaluation of whether a law’s unconstitutional applications outweigh its constitutional ones. Neither lower court had sufficiently analyzed the laws in this manner.

The Court also addressed a key issue in the Texas law: its prohibition against platforms censoring content based on viewpoint. Texas justified the law as ensuring “viewpoint neutrality,” but the Court found this rationale problematic. Forcing platforms to carry speech they deem objectionable—such as hate speech or misinformation—would alter their expressive choices and violate their First Amendment rights.

Three reasons why this case matters:

  • Clarifies Free Speech Rights in the Digital Age: The case reinforces that social media platforms have editorial rights similar to traditional media, influencing how future laws may regulate online speech.
  • Impacts State-Level Regulation: The ruling limits states’ ability to impose viewpoint neutrality mandates on private platforms, shaping the balance of power between governments and tech companies.
  • Sets a Standard for Facial Challenges: By emphasizing the need to weigh a law’s unconstitutional and constitutional applications, the decision provides guidance for courts evaluating similar cases.

Moody v. NetChoice, LLC, 144 S.Ct. 2383 (July 1, 2024)

Footnote in opinion warns counsel not to cite AI-generated fake cases again

A federal judge in Wisconsin suspected that one of the parties appearing before the court had used generative AI to write a brief, which resulted in a hallucinated case. The judge issued an opinion with this footnote:

Although it does not ultimately affect the Court’s analysis or disposition, Plaintiffs in their reply cite to a case that none of the Court’s staff were able to locate. ECF No. 32 at 5 (“Caserage Tech Corp. v. Caserage Labs, Inc., 972 F.3d 799, 803 (7th Cir. 1992) (The District Court correctly found the parties agreed to permit shareholder rights when one party stated to the other its understanding that a settlement agreement included shareholder rights, and the other party did not say anything to repudiate that understanding.).”). The citation goes to a case of a different name, from a different year, and from a different circuit. Court staff also could not locate the case by searching, either on Google or in legal databases, the case name provided in conjunction with the purported publication year. If this is, as the Court suspects, an instance of provision of falsified case authority derived from artificial intelligence, Plaintiffs’ counsel is on notice that any future instance of the presentation of nonexistent case authority will result in sanctions.

One must hope this friendly warning will be taken seriously.

Plumbers & Gasfitters Union Local No. 75 Health Fund v. Morris Plumbing, LLC, 2024 WL 1675010 (E.D. Wis. April 18, 2024)

New Jersey judiciary taking steps to better understand Generative AI in the practice of law

The State of New Jersey is taking strides toward the “safe and effective use of Generative AI” in the practice of law. The judiciary’s Acting Administrative Director recently sent an email to New Jersey attorneys acknowledging the growth of Generative AI in the practice of law and recognizing both its positive and negative uses.

The correspondence included a link to a 23-question online survey designed to gauge New Jersey attorneys’ knowledge about and attitudes toward Generative AI, with the aim of designing seminars and other training.

The questions seek to gather information on topics including the age and experience of the responding attorneys, attitudes toward Generative AI both in and out of the practice of law, levels of experience in using Generative AI, and whether Generative AI should be a part of the future of the practice of law.

This initiative signals that the state may be taking a proactive approach toward attorneys’ adoption of these newly available technologies.


ChatGPT was “utterly and unusually unpersuasive” in case involving recovery of attorney’s fees


In a recent federal case in New York under the Individuals with Disabilities Education Act, plaintiff prevailed on her claims and sought an award of attorney’s fees under the statute. Though the court ultimately awarded plaintiff’s attorneys some of their requested fees, it lambasted counsel in the process for using information obtained from ChatGPT to support the attorneys’ claimed hourly rates.

Plaintiff’s firm used ChatGPT-4 as a “cross-check” against other sources in confirming what should be a reasonable hourly rate for the attorneys on the case. The court found this reliance on ChatGPT-4 to be “utterly and unusually unpersuasive” for determining reasonable billing rates for legal services. The court criticized the firm’s use of ChatGPT-4 for not adequately considering the complexity and specificity required in legal billing, especially given the tool’s inability to discern between real and fictitious legal citations, as demonstrated in recent cases within the Second Circuit.

In Mata v. Avianca, Inc., 2023 WL 4114965 (S.D.N.Y. June 22, 2023) the district court judge sanctioned lawyers for submitting fictitious judicial opinions generated by ChatGPT, and in Park v. Kim, — F.4th —, 2024 WL 332478 (2d Cir. January 30, 2024) an attorney was referred to the Circuit’s Grievance Panel for citing non-existent authority from ChatGPT in a brief. These examples highlighted the tool’s limitations in legal contexts, particularly its inability to differentiate between real and fabricated legal citations, raising concerns about its reliability and appropriateness for legal tasks.

J.G. v. New York City Dept. of Education, 2024 WL 728626 (S.D.N.Y. February 22, 2024)


DMCA subpoena to “mere conduit” ISP was improper


Because the ISP acted as a mere conduit for the transmission of material that allegedly infringed copyright, it fell under the DMCA safe harbor in 17 U.S.C. § 512(a), and therefore § 512(h) did not authorize the subpoena issued in the case.

Some copyright owners needed to find out who was anonymously infringing their works, so they issued a subpoena to the users’ internet service provider (Cox Communications) under the Digital Millennium Copyright Act (“DMCA”), 17 U.S.C. § 512(h). After the ISP notified one of the anonymous users – referred to as John Doe in the case – of the subpoena, Doe filed a motion to quash. The magistrate judge recommended the subpoena be quashed, and the district judge accepted that recommendation.

Contours of the Safe Harbor

The court explained how the DMCA enables copyright owners to send subpoenas for the identification of alleged infringers, contingent upon providing a notification that meets specific criteria outlined in the DMCA. However, the DMCA also establishes safe harbors for Internet Service Providers (ISPs), notably exempting those acting as “mere conduits” of information, like in peer-to-peer (P2P) filesharing, from liability and thus from the obligations of the notice and takedown provisions found in other parts of the DMCA. This distinction has led courts, including the Eighth and D.C. Circuits, to conclude that subpoenas under § 512(h) cannot be used to compel ISPs, which do not store or directly handle the infringing material but merely transmit it, to reveal the identities of P2P infringers.

Who is in?

The copyright owners raised a number of objections to quashing the subpoena. Their primary concerns were with the court’s interpretation of the ISP’s role as merely a “conduit” in the alleged infringement, arguing that the ISP’s assignment of IP addresses constituted a form of linking to infringing material, thus meeting the DMCA’s notice requirements. They also disputed the court’s conclusion that the material in question could not be removed or access disabled by the ISP due to its nature of transmission, and they took issue with certain factual conclusions drawn without input from the parties involved. Additionally, the petitioners objected to the directive to return or destroy any information obtained through the subpoena, requesting that such measures apply only to the information related to the specific subscriber John Doe.

Conduits are.

Notwithstanding these various arguments, the court upheld the magistrate judge’s recommendation, agreeing that the subpoena issued to the ISP was invalid due to non-compliance with the notice provisions required by 17 U.S.C. § 512(c)(3)(A). The petitioners’ arguments, suggesting that the ISP’s assignment of IP addresses to users constituted a form of linking to infringing material under § 512(d), were rejected. The court clarified that in the context of P2P file sharing, IP addresses do not serve as “information location tools” as defined under § 512(d) and that the ISP’s role was limited to providing internet connectivity, aligning with the “mere conduit” provision under § 512(a). The court also dismissed the petitioners’ suggestion that the ISP could disable access to infringing material by null routing, emphasizing the distinction between disabling access to material and terminating a subscriber’s account, with the latter being a more severe action than what the DMCA authorizes. The court suggested that the petitioners could pursue the infringer’s identity through other legal avenues, such as a John Doe lawsuit, despite potential challenges highlighted by the petitioners.

In re: Subpoena of Internet Subscribers of Cox Communications, LLC and Coxcom LLC, 2024 WL 341069 (D. Hawaii, January 30, 2024)

 
