Footnote in opinion warns counsel not to cite AI-generated fake cases again

A federal judge in Wisconsin suspected that one of the parties appearing before the court had used generative AI to write a brief, resulting in the citation of a hallucinated case. The judge issued an opinion with this footnote:

Although it does not ultimately affect the Court’s analysis or disposition, Plaintiffs in their reply cite to a case that none of the Court’s staff were able to locate. ECF No. 32 at 5 (“Caserage Tech Corp. v. Caserage Labs, Inc., 972 F.3d 799, 803 (7th Cir. 1992) (The District Court correctly found the parties agreed to permit shareholder rights when one party stated to the other its understanding that a settlement agreement included shareholder rights, and the other party did not say anything to repudiate that understanding.).”). The citation goes to a case of a different name, from a different year, and from a different circuit. Court staff also could not locate the case by searching, either on Google or in legal databases, the case name provided in conjunction with the purported publication year. If this is, as the Court suspects, an instance of provision of falsified case authority derived from artificial intelligence, Plaintiffs’ counsel is on notice that any future instance of the presentation of nonexistent case authority will result in sanctions.

One must hope this friendly warning will be taken seriously.

Plumbers & Gasfitters Union Local No. 75 Health Fund v. Morris Plumbing, LLC, 2024 WL 1675010 (E.D. Wis. April 18, 2024)

Can the owner of a company be personally liable for what the company does?


One of the major benefits of forming a corporation or limited liability company is the shield from personal liability the business entity provides to its owners. But that shield does not protect the company’s officers from liability for all of their conduct.

In a recent case in federal court in California, the court evaluated whether a company’s officer could face personal liability for trademark infringement and cybersquatting. Plaintiff sued the company and the owner individually, asserting that the owner should be personally liable because he controlled and was involved in all significant corporate decisions regarding the alleged infringement.

Citing to Facebook, Inc. v. Power Ventures, Inc., 844 F.3d 1058 (9th Cir. 2016), the court observed that a corporate officer can be personally liable when he or she is the “guiding spirit” behind the wrongful conduct, or the “central figure” in the challenged corporate activity.

In this case, the court declined to dismiss the individual defendant from the lawsuit. With respect to the alleged trademark infringement and cybersquatting, the court focused on the fact that the individual defendant:

  • was the founder and central figure of the company,
  • personally participated in all major business strategy, branding and marketing decisions and actions,
  • ran the company from his home,
  • was the only officer of the company and was simultaneously the CEO, CFO and Secretary,
  • promoted the company’s brand from his personal social media account, and
  • directly negotiated with the plaintiff’s founder to see whether the parties could “find a more peaceful resolution.”

Simply stated, the individual defendant was not merely a board member with “final say,” but was substantially involved in every aspect of the conduct of the business giving rise to the alleged intellectual property infringement.

Playground AI LLC v. Mighty Computing, Inc. et al., 2024 WL 1123214 (N.D. Cal., March 14, 2024)


New Jersey judiciary taking steps to better understand Generative AI in the practice of law

We are seeing the State of New Jersey take strides toward the “safe and effective use of Generative AI” in the practice of law. The state judiciary’s Acting Administrative Director recently sent an email to New Jersey attorneys acknowledging the growth of Generative AI in the practice of law and recognizing both its positive and negative uses.

The correspondence included a link to a 23-question online survey designed to gauge New Jersey attorneys’ knowledge about and attitudes toward Generative AI, with the aim of designing seminars and other training.

The questions seek to gather information on topics including the age and experience of the attorneys responding, attitudes toward Generative AI both in and out of the practice of law, levels of experience in using Generative AI, and whether Generative AI should be a part of the future of the practice of law.

This initiative signals the state may be taking a proactive approach toward attorneys’ adoption of these newly available technologies.


Lawyer gets called out a second time for using ChatGPT in court brief

You may recall the case of Park v. Kim, wherein the Second Circuit excoriated an attorney for using ChatGPT to generate a brief that contained a bunch of fake cases. Well, the same lawyer responsible for that debacle has been found out again, this time in a case where she is the pro se litigant.

Plaintiff sued Delta Airlines for racial discrimination. She filed a motion for leave to amend her complaint, which the court denied. In discussing the denial, the court observed the following:

[T]he Court maintains serious concern that at least one of Plaintiff’s cited cases is non-existent and may have been a hallucinated product of generative artificial intelligence, particularly given Plaintiff’s recent history of similar conduct before the Second Circuit. See Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024) (“We separately address the conduct of Park’s counsel, Attorney Jae S. Lee. Lee’s reply brief in this case includes a citation to a non-existent case, which she admits she generated using the artificial intelligence tool ChatGPT.”).

In Park v. Kim, the court referred the attorney for potential disciplinary action. The court in this case was more lenient, merely denying her motion for leave to amend and eventually dismissing the case on summary judgment.

Jae Lee v. Delta Air Lines, Inc., 2024 WL 1230263 (E.D.N.Y. March 22, 2024)


Nvidia forces consumer lawsuit into arbitration  


Plaintiffs filed a class action suit against Nvidia alleging that Nvidia falsely advertised a game streaming feature for its Shield line of devices and later disabled that feature, thus depriving consumers of a paid feature and devaluing their devices. The suit included claims of trespass to chattels, breach of implied warranty, and violations of various consumer protection laws.

Nvidia filed a motion to compel arbitration, citing an agreement that users ostensibly accepted during the device setup process. This agreement provided that disputes would be resolved through binding arbitration in accordance with Delaware law and that any arbitration would be conducted by an arbitrator in California.

The court looked to the Federal Arbitration Act, which upholds arbitration agreements unless general contract defenses like fraud or unconscionability apply. Nvidia emphasized the initial setup process for Shield devices, during which users were required to agree to certain terms of use that included the arbitration provision. In light of Nvidia’s claim that this constituted clear consent to arbitrate disputes, the court examined whether this agreement was conscionable and whether it indeed covered the plaintiffs’ claims.

The court found the arbitration agreement enforceable, rejecting plaintiffs’ claims of both procedural and substantive unconscionability. The court concluded that the setup process provided sufficient notice to users about the arbitration agreement, and the terms of the agreement were not so one-sided as to be deemed unconscionable. Furthermore, the court determined that plaintiffs’ claims fell within the scope of the arbitration agreement, leading to a decision to stay the action pending arbitration in accordance with the agreement’s terms.

Davenport v. Nvidia Corporation, — F.Supp.3d —, 2024 WL 832387 (N.D. Cal. Feb 28, 2024)


ChatGPT was “utterly and unusually unpersuasive” in case involving recovery of attorney’s fees


In a recent federal case in New York under the Individuals with Disabilities Education Act, plaintiff prevailed on her claims and sought an award of attorney’s fees under the statute. Though the court ended up awarding plaintiff’s attorneys some of their requested fees, the court lambasted counsel in the process for using information obtained from ChatGPT to support the attorneys’ claimed hourly rates.

Plaintiff’s firm used ChatGPT-4 as a “cross-check” against other sources in confirming what should be a reasonable hourly rate for the attorneys on the case. The court found this reliance on ChatGPT-4 to be “utterly and unusually unpersuasive” for determining reasonable billing rates for legal services. The court criticized the firm’s use of ChatGPT-4 for not adequately considering the complexity and specificity required in legal billing, especially given the tool’s inability to discern between real and fictitious legal citations, as demonstrated in recent past cases within the Second Circuit.

In Mata v. Avianca, Inc., 2023 WL 4114965 (S.D.N.Y. June 22, 2023) the district court judge sanctioned lawyers for submitting fictitious judicial opinions generated by ChatGPT, and in Park v. Kim, — F.4th —, 2024 WL 332478 (2d Cir. January 30, 2024) an attorney was referred to the Circuit’s Grievance Panel for citing non-existent authority from ChatGPT in a brief. These examples highlighted the tool’s limitations in legal contexts, particularly its inability to differentiate between real and fabricated legal citations, raising concerns about its reliability and appropriateness for legal tasks.

J.G. v. New York City Dept. of Education, 2024 WL 728626 (February 22, 2024)


Using AI-generated fake cases in court brief gets pro se litigant fined $10K


Plaintiff sued defendant and won on summary judgment. Defendant sought review with the Missouri Court of Appeals. On appeal, the court dismissed the appeal and awarded damages to plaintiff/respondent because of the frivolousness of the appeal.

“Due to numerous fatal briefing deficiencies under the Rules of Appellate Procedure that prevent us from engaging in meaningful review, including the submission of fictitious cases generated by [AI], we dismiss the appeal.” With this, the court began its roast of the pro se appellant’s conduct.

The court detailed appellant’s numerous violations of the applicable Rules of Appellate Procedure. The appellate brief was unsigned, lacked the required appendix, and had an inadequate statement of facts. It also failed to provide points relied on and a detailed table of cases, statutes and other authorities.

But the court made the biggest deal about how “the overwhelming majority of the [brief’s] citations are not only inaccurate but entirely fictitious.” Only two out of the twenty-four case citations in the brief were genuine.

Though appellant apologized for the fake cases in his reply brief, the court was not moved, because “the deed had been done.” It characterized the conduct as “a flagrant violation of the duties of candor” appellant owed to the court, and an “abuse of the judicial system.”

Because appellant “substantially failed to comply with court rules,” the court dismissed the appeal and ordered appellant to pay $10,000 in damages for filing a frivolous appeal.

Kruse v. Karlen, — S.W.3d —, 2024 WL 559497 (Mo. Ct. App. February 13, 2024)


DMCA subpoena to “mere conduit” ISP was improper


Because the ISP acted as a conduit for the transmission of material that allegedly infringed copyright, it fell under the DMCA safe harbor in 17 U.S.C. § 512(a), and therefore § 512(h) did not authorize the subpoena issued in the case.

Some copyright owners needed to find out who was anonymously infringing their works, so they issued a subpoena to the users’ internet service provider (Cox Communications) under the Digital Millennium Copyright Act (“DMCA”), 17 U.S.C. § 512(h). After the ISP notified one of the anonymous users – referred to as John Doe in the case – of the subpoena, Doe filed a motion to quash. The magistrate judge recommended the subpoena be quashed, and the district judge accepted that recommendation.

Contours of the Safe Harbor

The court explained how the DMCA enables copyright owners to send subpoenas for the identification of alleged infringers, contingent upon providing a notification that meets specific criteria outlined in the DMCA. However, the DMCA also establishes safe harbors for Internet Service Providers (ISPs), notably exempting those acting as “mere conduits” of information, like in peer-to-peer (P2P) filesharing, from liability and thus from the obligations of the notice and takedown provisions found in other parts of the DMCA. This distinction has led courts, including the Eighth and D.C. Circuits, to conclude that subpoenas under § 512(h) cannot be used to compel ISPs, which do not store or directly handle the infringing material but merely transmit it, to reveal the identities of P2P infringers.

Who is in?

The copyright owners raised a number of objections to quashing the subpoena. Their primary concerns were with the court’s interpretation of the ISP’s role as merely a “conduit” in the alleged infringement, arguing that the ISP’s assignment of IP addresses constituted a form of linking to infringing material, thus meeting the DMCA’s notice requirements. They also disputed the court’s conclusion that the material in question could not be removed or access disabled by the ISP due to its nature of transmission, and they took issue with certain factual conclusions drawn without input from the parties involved. Additionally, the petitioners objected to the directive to return or destroy any information obtained through the subpoena, requesting that such measures apply only to the information related to the specific subscriber John Doe.

Conduits are.

Notwithstanding these various arguments, the court upheld the magistrate judge’s recommendation, agreeing that the subpoena issued to the ISP was invalid due to non-compliance with the notice provisions required by 17 U.S.C. § 512(c)(3)(A). The petitioners’ arguments, suggesting that the ISP’s assignment of IP addresses to users constituted a form of linking to infringing material under § 512(d), were rejected. The court clarified that in the context of P2P file sharing, IP addresses do not serve as “information location tools” as defined under § 512(d) and that the ISP’s role was limited to providing internet connectivity, aligning with the “mere conduit” provision under § 512(a). The court also dismissed the petitioners’ suggestion that the ISP could disable access to infringing material by null routing, emphasizing the distinction between disabling access to material and terminating a subscriber’s account, with the latter being a more severe action than what the DMCA authorizes. The court suggested that the petitioners could pursue the infringer’s identity through other legal avenues, such as a John Doe lawsuit, despite potential challenges highlighted by the petitioners.

In re: Subpoena of Internet Subscribers of Cox Communications, LLC and Coxcom LLC, 2024 WL 341069 (D. Hawaii, January 30, 2024)


ChatGPT providing fake case citations again – this time at the Second Circuit

Plaintiff sued defendant in federal court but the court eventually dismissed the case because plaintiff continued to fail to properly respond to defendant’s discovery requests. So plaintiff sought review with the Second Circuit Court of Appeals. On appeal, the court affirmed the dismissal, finding that plaintiff’s noncompliance in the lower court amounted to “sustained and willful intransigence in the face of repeated and explicit warnings from the court that the refusal to comply with court orders … would result in the dismissal of [the] action.”

But that was not the most intriguing or provocative part of the court’s opinion. The court also addressed the conduct of plaintiff’s lawyer, who admitted to using ChatGPT to help her write a brief before the appellate court. The AI assistance betrayed itself when the court noticed that the brief contained a non-existent case. Here’s the mythical citation: Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep’t 2014).

When the court called her out on the legal hallucination, plaintiff’s attorney admitted to using ChatGPT, to which she was a “subscribed and paying member,” but emphasized that she “did not cite any specific reasoning or decision from [the Bourguignon] case.” Unfortunately, counsel’s assertions did not blunt the court’s wrath.

“All counsel that appear before this Court are bound to exercise professional judgment and responsibility, and to comply with the Federal Rules of Civil Procedure,” read the court’s opinion as it began its rebuke. It reminded counsel that the rules of procedure impose a duty on attorneys to certify that they have conducted a reasonable inquiry and have determined that any papers filed with the court are legally tenable. “At the very least,” the court continued, attorneys must “read, and thereby confirm the existence and validity of, the legal authorities on which they rely.” Citing to a recent case involving a similar controversy, the court observed that “[a] fake opinion is not ‘existing law’ and citation to a fake opinion does not provide a non-frivolous ground for extending, modifying, or reversing existing law, or for establishing new law. An attempt to persuade a court or oppose an adversary by relying on fake opinions is an abuse of the adversary system.”

The court considered the matter so severe that it referred the attorney to the court’s Grievance Panel, for that panel to consider whether to refer the situation to the court’s Committee on Admissions and Grievances, which would have the power to revoke the attorney’s admission to practice before that court.

Park v. Kim, — F.4th —, 2024 WL 332478 (2d Cir. January 30, 2024)


Click to Agree: Online clickwrap agreements steered bank lawsuit to arbitration


Plaintiffs sued their bank alleging various claims under state law. The bank moved to compel arbitration based on various online clickwrap agreements plaintiffs had entered into.

One of the clickwrap agreements required plaintiffs to scroll through the entire agreement and then click an “Acknowledge” button before continuing to the next step. Citing to the case of Meyer v. Uber, 868 F.3d 66 (2d Cir. 2017), the court observed that “[c]ourts routinely uphold clickwrap agreements for the principal reason that the user has affirmatively assented to the terms of agreement by clicking ‘I agree.'”

Similarly, for the other relevant agreements, plaintiffs were required to click a box acknowledging that they agreed to those agreements before they could obtain access to digital products. Again, citing to the Meyer case: “A reasonable user would know that by clicking the registration button, he was agreeing to the terms and conditions accessible via the hyperlink, whether he clicked on the hyperlink or not.” By affirmatively clicking the acknowledgement, plaintiffs manifested their assent to the terms of these agreements.

Curtis v. JPMorgan Chase Bank, N.A., 2024 WL 283474 (S.D.N.Y., January 25, 2024)

