Footnote in opinion warns counsel not to cite AI-generated fake cases again

A federal judge in Wisconsin suspected that one of the parties appearing before the court had used generative AI to write a brief that cited a hallucinated case. The judge issued an opinion with this footnote:

Although it does not ultimately affect the Court’s analysis or disposition, Plaintiffs in their reply cite to a case that none of the Court’s staff were able to locate. ECF No. 32 at 5 (“Caserage Tech Corp. v. Caserage Labs, Inc., 972 F.3d 799, 803 (7th Cir. 1992) (The District Court correctly found the parties agreed to permit shareholder rights when one party stated to the other its understanding that a settlement agreement included shareholder rights, and the other party did not say anything to repudiate that understanding.).”). The citation goes to a case of a different name, from a different year, and from a different circuit. Court staff also could not locate the case by searching, either on Google or in legal databases, the case name provided in conjunction with the purported publication year. If this is, as the Court suspects, an instance of provision of falsified case authority derived from artificial intelligence, Plaintiffs’ counsel is on notice that any future instance of the presentation of nonexistent case authority will result in sanctions.

One must hope this friendly warning will be taken seriously.

Plumbers & Gasfitters Union Local No. 75 Health Fund v. Morris Plumbing, LLC, 2024 WL 1675010 (E.D. Wis. April 18, 2024)

Key Takeaways From the USPTO’s Guidance on AI Use

On April 10, 2024, the United States Patent and Trademark Office (“USPTO”) issued guidance to attorneys about using AI in matters before the USPTO. While the guidance implements no new rules to address the use of AI, it seeks to remind practitioners of the existing rules, inform them of risks, and suggest ways of mitigating those risks. The notice acknowledges that it is an effort to address AI considerations at the intersection of innovation, creativity, and intellectual property, consistent with the President’s recent executive order calling upon the federal government to enact and enforce protections against AI-related harms.

The guidance tends to address patent prosecution and examination more than trademark practice and prosecution, but there are still critically important ideas relevant to the practice of trademark law.

The USPTO takes a generally positive approach toward the use of AI, recognizing that tools using large language models can lower the barriers and costs for practicing before the USPTO and help practitioners serve clients better and more efficiently. But it recognizes potential downsides from misuse – some of which are not exclusive to intellectual property practice, e.g., citing AI-generated, non-existent cases in briefs filed before the USPTO or inadvertently disclosing confidential information via a prompt.

Key Reminders in the Guidance

The USPTO’s guidance reminds practitioners of some specific ways that they must adhere to USPTO rules and policies when using AI assistance in submissions – particularly because of the need for full, fair, and accurate disclosure and the protection of clients’ interests.

Candor and Good Faith: Practitioners involved in USPTO proceedings (including prosecution and matters such as oppositions and cancellation proceedings before the Trademark Trial and Appeal Board (TTAB)) are reminded of the duties of candor and good faith. This entails the disclosure of all material information known to be relevant to a matter. Though the guidance is patent-heavy in its examples (e.g., discussing communications with patent examiners), it is not limited to patent prosecution but applies to trademark prosecution as well. The guidance details the broader duty of candor and good faith, which prohibits fraudulent conduct and emphasizes the integrity of USPTO proceedings and the reliability of registration certificates issued.

Signature Requirements: The guidance outlines the signature requirement for correspondence with the USPTO, ensuring that documents drafted with AI assistance are reviewed and believed to be true by the signer.

Confidentiality: The confidentiality of client information is of key importance, with practitioners being required to prevent unauthorized disclosure, which could be exacerbated by the use of AI in drafting applications or conducting clearance searches.

International Practice: Foreign filing and compliance with export regulations are also highlighted, especially in the context of using AI for drafting applications or doing clearance searches. Again, while the posture in the guidance tends to be patent heavy, the guidance is relevant to trademark practitioners working with foreign associates and otherwise seeking protection of marks in other countries. Practitioners are reminded of their responsibilities to prevent improper data export.

USPTO Electronic Systems: The guidance further addresses the use of USPTO electronic systems, emphasizing that access is governed by terms and conditions to prevent unauthorized actions.

Staying Up-to-date: The guidance reiterates the duties owed to clients, including competent and diligent representation, stressing the need for practitioners to stay informed about the technologies they use in representing clients, including AI tools.

More Practical Guidance for Use of Tools

The guidance next discusses particular uses of AI tools in light of the nature of the practice and the rules summarized above. Key takeaways in this second half of the guidance include the following:

Text creation:

Word processing tools have evolved to incorporate generative AI capabilities, enabling the automation of complex tasks such as responding to office actions. While the use of such AI-enhanced tools in preparing documents for submission to the USPTO is not prohibited or subject to mandatory disclosure, users are reminded to adhere to USPTO policies and their duties of candor and good faith towards the USPTO and their clients when employing these technologies.

Likely motivated by court cases that have gotten a lot of attention because lawyers used ChatGPT to generate fake case cites, the USPTO addressed the importance of human review of AI-generated content. All USPTO submissions, regardless of AI involvement in their drafting, must be signed by the presenting party, who attests to the truthfulness of the content and the adequacy of their inquiry into its accuracy. Human review is crucial to uphold the duty of candor and good faith, requiring the correction of any errors or omissions before submission. While there is no general duty to disclose AI’s use in drafting unless specifically asked, practitioners must ensure their submissions are legally sound and factually accurate and consult with their clients about the representation methods used.

More specifically, submissions to the TTAB and trademark applications that utilize AI tools require meticulous review to ensure accuracy and compliance with the applicable rules. This is vital for all documents, including evidence for trademark applications, responses to office actions, and legal briefs, to ensure they reflect genuine marketplace usage and are supported by factual evidence. Special attention must be given to avoid the inclusion of AI-generated specimens or evidence that misrepresents actual use or existence in commerce. Materials produced by AI that distort facts, include irrelevant content, or are unduly repetitive risk being deemed as submitted with improper intent, potentially leading to unnecessary delays or increased costs in the proceedings.

Filling out Forms:

AI tools can enhance the efficiency of filing documents with the USPTO by automating tasks such as form completion and document uploads. But users must ensure their use aligns with USPTO rules, particularly regarding signatures, which must be made by a person and not delegated to AI. Users are reminded that USPTO.gov accounts are limited to use by natural persons. AI systems cannot hold such accounts, emphasizing the importance of human oversight in submissions to ensure adherence to USPTO regulations and policies.

Automated Access to USPTO IT Systems:

The guidance notes that when utilizing AI tools to interact with USPTO IT systems, it is crucial to adhere to legal and regulatory requirements, ensuring authorized use only. Users must have proper authorization, such as being an applicant, registrant, or practitioner, to file documents or access information. AI systems cannot be considered “users” and thus are ineligible for USPTO.gov accounts. Individuals employing AI assistance must ensure the tool does not overstep access permissions, risking revocation of the applicable USPTO.gov account and other legal exposure for unauthorized access. Additionally, the USPTO advises against excessive data mining from USPTO databases with AI tools. The USPTO reminds readers that it provides bulk data products that could assist in these efforts.

VIDEO: AI and Voice Clones – Tennessee enacts the ELVIS Act


Generative AI enables people to clone other people’s voices. And that can lead to fraud, identity theft, or intellectual property infringement.

On March 21, 2024, Tennessee enacted a new law called the ELVIS Act that seeks to tackle this problem. What does the law say?

The law adds a person’s voice to the list of things over which that person has a property interest. And what is a “voice” under the law? It is any sound that is readily identifiable and attributable to a particular individual. It can be an actual voice or a simulation of the voice. A person can be liable for making available another person’s voice without consent. And one could also be liable for making available any technology having the “primary purpose or function” of producing another’s voice without permission.


New Jersey judiciary taking steps to better understand Generative AI in the practice of law

The State of New Jersey is taking strides toward the “safe and effective use of Generative AI” in the practice of law. The judiciary’s Acting Administrative Director recently sent an email to New Jersey attorneys acknowledging the growth of Generative AI in the practice of law and recognizing its positive and negative uses.

The correspondence included a link to a 23-question online survey designed to gauge New Jersey attorneys’ knowledge about and attitudes toward Generative AI, with the aim of designing seminars and other training.

The questions seek to gather information on topics including the age and experience of the responding attorneys, attitudes toward Generative AI both in and out of the practice of law, levels of experience in using Generative AI, and whether Generative AI should be a part of the future of the practice of law.

This initiative signals that the state may be taking a proactive approach toward attorneys’ adoption of these newly available technologies.


Lawyer gets called out a second time for using ChatGPT in court brief

You may recall the case of Park v. Kim, wherein the Second Circuit excoriated an attorney for using ChatGPT to generate a brief that contained a bunch of fake cases. Well, the same lawyer responsible for that debacle has been found out again, this time in a case where she is the pro se litigant.

Plaintiff sued Delta Airlines for racial discrimination. She filed a motion for leave to amend her complaint, which the court denied. In discussing the denial, the court observed the following:

[T]he Court maintains serious concern that at least one of Plaintiff’s cited cases is non-existent and may have been a hallucinated product of generative artificial intelligence, particularly given Plaintiff’s recent history of similar conduct before the Second Circuit. See Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024) (“We separately address the conduct of Park’s counsel, Attorney Jae S. Lee. Lee’s reply brief in this case includes a citation to a non-existent case, which she admits she generated using the artificial intelligence tool ChatGPT.”).

In Park v. Kim, the court referred plaintiff for potential disciplinary action. The court in this case was more lenient, merely denying her motion for leave to amend and eventually dismissing the case on summary judgment.

Jae Lee v. Delta Air Lines, Inc., 2024 WL 1230263 (E.D.N.Y. March 22, 2024)


AI and voice clones: Three things to know about Tennessee’s ELVIS Act

On March 21, 2024, the governor of Tennessee signed the ELVIS Act (the Ensuring Likeness, Voice, and Image Security Act of 2024), which is aimed at the problem of people using AI to simulate voices in a way not authorized by the person whose voice is being imitated.

Here are three key things to know about the new law:

(1) Voice defined.

The law adds the following definition to existing Tennessee law:

“Voice” means a sound in a medium that is readily identifiable and attributable to a particular individual, regardless of whether the sound contains the actual voice or a simulation of the voice of the individual;

There are a couple of interesting things to note. One could generate or use the voice of another without using the other person’s name. The voice simply has to be “readily identifiable” and “attributable” to a particular human. Those are fairly open concepts, and we can expect quite a bit of litigation over what it takes for a voice to be identifiable and attributable to another. Would this cover situations where a person naturally sounds like another, or is merely imitating another’s musical style?

(2) Voice is now a property right.

The existing statute was amended to read as follows (the newly added words are discussed below):

Every individual has a property right in the use of that individual’s name, photograph, voice, or likeness in any medium in any manner.

The word “person’s” was changed to “individual’s” presumably to clarify that this is a right belonging to a natural person (i.e., real human beings and not companies). And of course the word “voice” was added to expressly include that attribute as something in which the person can have a property interest.

(3) Two new things are banned under law.

The following two paragraphs have been added:

A person is liable to a civil action if the person publishes, performs, distributes, transmits, or otherwise makes available to the public an individual’s voice or likeness, with knowledge that use of the voice or likeness was not authorized by the individual or, in the case of a minor, the minor’s parent or legal guardian, or in the case of a deceased individual, the executor or administrator, heirs, or devisees of such deceased individual.

A person is liable to a civil action if the person distributes, transmits, or otherwise makes available an algorithm, software, tool, or other technology, service, or device, the primary purpose or function of which is the production of an individual’s photograph, voice, or likeness without authorization from the individual or, in the case of a minor, the minor’s parent or legal guardian, or in the case of a deceased individual, the executor or administrator, heirs, or devisees of such deceased individual.

With this language, we see the heart of the new law’s impact. One can sue another for making his or her voice publicly available without permission. Note that this restriction is not limited to commercial use of another’s voice. Most states’ laws on name, image, and likeness restrict only commercial use by another. This statute is broader and would make more things unlawful, for example, creating a deepfaked voice simply for fun (or harassment, of course) without the consent of the person whose voice is being imitated.

Note the other interesting new prohibition, the one on making available tools having as their “primary purpose or function” the production of another’s voice without authorization. If you were planning on launching that new app where you can make your voice sound like a celebrity’s voice, consider whether this Tennessee statute might shut you down.


Utah has a brand new law that regulates generative AI

On March 15, 2024, the Governor of Utah signed a bill that implements new law in the state regulating the use and development of artificial intelligence.  Here are some key things you should know about the law.

  • The statute adds to the state’s consumer protection laws, which govern things such as credit services, car sales, and online dating. The new law says that anyone accused of violating a consumer protection law cannot blame it on the use of generative AI (like Air Canada apparently attempted to do back in February).
  • The new law also says that if a person involved in any act covered by the state’s consumer protection laws asks the company she is dealing with whether she is interacting with an AI, the company must clearly and conspicuously disclose that fact.
  • And the law says that anyone providing services as a regulated occupation in the state (for example, an architect, surveyor or a therapist) must disclose in advance any use of generative AI. The statute outlines the requirements for these notifications.
  • In addition to addressing consumer protection, the law establishes a plan for the state to further innovation in artificial intelligence. It introduces a regulatory framework for an AI learning laboratory to investigate AI’s risks and benefits and to guide the regulation of AI development.
  • The statute discusses requirements for participation in the program and also provides certain incentives for the development of AI technologies, including “regulatory mitigation” to adjust or ease certain regulatory requirements for participants and reduce potential liability.

This law is the first of its kind, and other states are likely to enact similar laws. Much more to come on this topic.

Lawyers and AI: Key takeaways from being on a panel at a legal ethics conference

Earlier today I was on a panel at Hinshaw & Culbertson’s LMRM Conference in Chicago. This was the 23rd annual LMRM Conference, and the event has become the gold standard for events that focus on the “law of lawyering.”

Our session was titled How Soon is Now—Generative AI, How It Works, How to Use it Now, How to Use it Ethically. Preparing for and participating in the event gave me the opportunity to seriously consider some of the key issues relating to how lawyers are using generative AI and the promise that wider future adoption of these technologies in the legal industry holds.

Here are a few key takeaways:

    • Effective use. Lawyers are already using generative AI in ways that aid efficiency. The technology can summarize complex texts during legal research, allowing the attorney to quickly assess if the content addresses her specific interests, is factually relevant, and aligns with desired legal outcomes. With a carefully crafted and detailed prompt, an attorney can generate a pretty good first draft of many types of correspondence (e.g., cease and desist letters). Tools such as ChatGPT can aid in brainstorming by generating a variety of ideas on a given topic, helping lawyers consider possible outcomes in a situation.


    • Access to justice. It is not clear how generative AI adoption will affect access to justice. While “legal chatbots” could bring formerly unavailable legal help to parties who lack the resources to hire expensive lawyers, the most elite firms will build and adopt sophisticated tools at a cost passed on to clients, making premium services even more expensive and widening the divide that already exists.


    • Confidentiality and privacy. Care must be taken to reduce the risk of unauthorized disclosure of information when law firms adopt generative AI tools. Data privacy concerns arise regardless of the industry in which generative AI is used. But lawyers have the additional obligation to preserve their clients’ confidential information in accordance with the rules governing the attorney-client relationship. This duty of confidentiality complicates the ways in which a law firm’s “enterprise knowledge” can be used to train a large language model. And lawyers must consider whether and how to let their clients know that the client’s information may be used to train the model.


    • Exposing lawyering problems. Cases such as Mata v. Avianca, Park v. Kim, and Kruse v. Karlen, in which lawyers or litigants used AI to generate documents submitted to the court containing non-existent case citations (hallucinations), tend to be used to critique these kinds of tools and to discourage lawyers from adopting them. But if one looks at these cases carefully, it is apparent that the problem is not so much with the technology as with lawyering that lacks the appropriate competence and diligence.

    • AI and the standard of the practice. There is plenty of data suggesting that most knowledge work jobs will be drastically impacted by the use of AI in the near term. Regardless of whether a lawyer or law firm wants to adopt generative AI in the practice of law, attorneys will not be able to avoid knowing how the use of AI will change norms and expectations, because clients will be effectively using these technologies and innovating in the space.

Thank you to Barry MacEntee for inviting me to be on his panel. Barry, you did an exemplary job of preparation and execution, which is exactly how you roll. Great to meet my co-panelist Andrew Sutton. Andrew, your insights and commentary on both the legal and technical aspects of the use of AI in the practice of law were terrific.

VIDEO: Elon Musk / OpenAI lawsuit – What’s it all about?


So Elon Musk has sued OpenAI. What’s this all about?

The lawsuit centers on the breach of a founding agreement and OpenAI’s shift from non-profit to for-profit through partnerships with companies like Microsoft. Filed in state court in California, the complaint discusses the risks of artificial general intelligence (AGI) and recounts how Musk worked with Sam Altman back in 2015 to form OpenAI for the public good. That was the so-called “Founding Agreement,” which was also written into the company’s certificate of incorporation. One of the most intriguing things about the lawsuit is that Musk is asking the court to determine that OpenAI has achieved Artificial General Intelligence and has thereby gone outside the initial scope of the Founding Agreement.
