Lawyer gets called out a second time for using ChatGPT in court brief

You may recall the case of Park v. Kim, wherein the Second Circuit excoriated an attorney for using ChatGPT to generate a brief that cited a non-existent case. Well, the same lawyer responsible for that debacle has been found out again, this time in a case where she is the pro se litigant.

Plaintiff sued Delta Air Lines for racial discrimination. She filed a motion for leave to amend her complaint, which the court denied. In discussing the denial, the court observed the following:

[T]he Court maintains serious concern that at least one of Plaintiff’s cited cases is non-existent and may have been a hallucinated product of generative artificial intelligence, particularly given Plaintiff’s recent history of similar conduct before the Second Circuit. See Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024) (“We separately address the conduct of Park’s counsel, Attorney Jae S. Lee. Lee’s reply brief in this case includes a citation to a non-existent case, which she admits she generated using the artificial intelligence tool ChatGPT.”).

In Park v. Kim, the court referred her to its Grievance Panel for potential disciplinary action. The court in this case was more lenient, merely denying her motion for leave to amend and eventually dismissing the case on summary judgment.

Jae Lee v. Delta Air Lines, Inc., 2024 WL 1230263 (E.D.N.Y. March 22, 2024)

See also:

ChatGPT was “utterly and unusually unpersuasive” in case involving recovery of attorney’s fees


In a recent federal case in New York under the Individuals with Disabilities Education Act, plaintiff prevailed on her claims and sought an award of attorney’s fees under the statute. Though the court ended up awarding plaintiff’s attorneys some of their requested fees, it lambasted counsel in the process for using information obtained from ChatGPT to support the attorneys’ claimed hourly rates.

Plaintiff’s firm used ChatGPT-4 as a “cross-check” against other sources in confirming what should be a reasonable hourly rate for the attorneys on the case. The court found this reliance on ChatGPT-4 to be “utterly and unusually unpersuasive” for determining reasonable billing rates for legal services. The court criticized the firm’s use of ChatGPT-4 for not adequately considering the complexity and specificity required in legal billing, especially given the tool’s inability to discern between real and fictitious legal citations, as demonstrated in recent past cases within the Second Circuit.

In Mata v. Avianca, Inc., 2023 WL 4114965 (S.D.N.Y. June 22, 2023) the district court judge sanctioned lawyers for submitting fictitious judicial opinions generated by ChatGPT, and in Park v. Kim, 91 F.4th 610 (2d Cir. 2024) an attorney was referred to the Circuit’s Grievance Panel for citing non-existent authority from ChatGPT in a brief. These examples highlighted the tool’s limitations in legal contexts, particularly its inability to differentiate between real and fabricated legal citations, raising concerns about its reliability and appropriateness for legal tasks.

J.G. v. New York City Dept. of Education, 2024 WL 728626 (S.D.N.Y. February 22, 2024)

See also:

Using AI-generated fake cases in court brief gets pro se litigant fined $10K


Plaintiff sued defendant and won on summary judgment. Defendant sought review with the Missouri Court of Appeals, which dismissed the appeal and awarded damages to plaintiff/respondent because the appeal was frivolous.

“Due to numerous fatal briefing deficiencies under the Rules of Appellate Procedure that prevent us from engaging in meaningful review, including the submission of fictitious cases generated by [AI], we dismiss the appeal.” With this, the court began its roast of the pro se appellant’s conduct.

The court detailed appellant’s numerous violations of the applicable Rules of Appellate Procedure. The appellate brief was unsigned, lacked the required appendix, and contained an inadequate statement of facts. It also failed to provide points relied on or a detailed table of cases, statutes, and other authorities.

But the court made the biggest deal about how “the overwhelming majority of the [brief’s] citations are not only inaccurate but entirely fictitious.” Only two out of the twenty-four case citations in the brief were genuine.

Though appellant apologized for the fake cases in his reply brief, the court was not moved, because “the deed had been done.” It characterized the conduct as “a flagrant violation of the duties of candor” appellant owed to the court, and an “abuse of the judicial system.”

Because appellant “substantially failed to comply with court rules,” the court dismissed the appeal and ordered appellant to pay $10,000 in damages for filing a frivolous appeal.

Kruse v. Karlen, — S.W.3d —, 2024 WL 559497 (Mo. Ct. App. February 13, 2024)

See also:

ChatGPT providing fake case citations again – this time at the Second Circuit

Plaintiff sued defendant in federal court, but the court eventually dismissed the case because plaintiff repeatedly failed to respond properly to defendant’s discovery requests. So plaintiff sought review with the Second Circuit Court of Appeals. On appeal, the court affirmed the dismissal, finding that plaintiff’s noncompliance in the lower court amounted to “sustained and willful intransigence in the face of repeated and explicit warnings from the court that the refusal to comply with court orders … would result in the dismissal of [the] action.”

But that was not the most intriguing or provocative part of the court’s opinion. The court also addressed the conduct of plaintiff’s lawyer, who admitted to using ChatGPT to help her write a brief before the appellate court. The AI assistance betrayed itself when the court noticed that the brief contained a non-existent case. Here’s the mythical citation: Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep’t 2014).

When the court called her out on the legal hallucination, plaintiff’s attorney admitted to using ChatGPT, to which she was a “suscribed and paying member” but emphasized that she “did not cite any specific reasoning or decision from [the Bourguignon] case.” Unfortunately, counsel’s assertions did not blunt the court’s wrath.

“All counsel that appear before this Court are bound to exercise professional judgment and responsibility, and to comply with the Federal Rules of Civil Procedure,” read the court’s opinion as it began its rebuke. It reminded counsel that the rules of procedure impose a duty on attorneys to certify that they have conducted a reasonable inquiry and have determined that any papers filed with the court are legally tenable. “At the very least,” the court continued, attorneys must “read, and thereby confirm the existence and validity of, the legal authorities on which they rely.” Citing to a recent case involving a similar controversy, the court observed that “[a] fake opinion is not ‘existing law’ and citation to a fake opinion does not provide a non-frivolous ground for extending, modifying, or reversing existing law, or for establishing new law. An attempt to persuade a court or oppose an adversary by relying on fake opinions is an abuse of the adversary system.”

The court considered the matter so severe that it referred the attorney to its Grievance Panel, for consideration of whether to refer the matter to the court’s Committee on Admissions and Grievances, which has the power to revoke the attorney’s admission to practice before the court.

Park v. Kim, 91 F.4th 610 (2d Cir. January 30, 2024)
