Key Takeaways From the USPTO’s Guidance on AI Use

On April 10, 2024, the United States Patent and Trademark Office (“USPTO”) issued guidance to attorneys about using AI in matters before the USPTO. While the guidance implements no new rules to address the use of AI, it seeks to remind practitioners of the existing rules, inform them of risks, and provide suggestions for mitigating those risks. The notice acknowledges that it is an effort to address AI considerations at the intersection of innovation, creativity, and intellectual property, consistent with the President’s recent executive order calling upon the federal government to enact and enforce protections against AI-related harms.

The guidance tends to address patent prosecution and examination more than trademark practice and prosecution, but there are still critically important ideas relevant to the practice of trademark law.

The USPTO takes a generally positive approach toward the use of AI, recognizing that tools using large language models can lower the barriers and costs of practicing before the USPTO and help practitioners serve clients better and more efficiently. But it recognizes potential downsides from misuse – some of which are not exclusive to intellectual property practice – such as citing AI-generated, non-existent cases in briefs filed before the USPTO or inadvertently disclosing confidential information via a prompt.

Key Reminders in the Guidance

The USPTO’s guidance reminds practitioners of some specific ways that they must adhere to USPTO rules and policies when using AI assistance in submissions – particularly because of the need for full, fair, and accurate disclosure and the protection of clients’ interests.

Candor and Good Faith: Practitioners involved in USPTO proceedings (including prosecution and matters such as oppositions and cancellation proceedings before the Trademark Trial and Appeal Board (TTAB)) are reminded of the duties of candor and good faith. This entails the disclosure of all material information known to be relevant to a matter. Though the guidance is patent-heavy in its examples (e.g., discussing communications with patent examiners), it is not limited to patent prosecution but applies to trademark prosecution as well. The guidance details the broader duty of candor and good faith, which prohibits fraudulent conduct and emphasizes the integrity of USPTO proceedings and the reliability of registration certificates issued.

Signature Requirements: The guidance outlines the signature requirement for correspondence with the USPTO, ensuring that documents drafted with AI assistance are reviewed and believed to be true by the signer.

Confidentiality: The confidentiality of client information is of key importance, with practitioners being required to prevent unauthorized disclosure, which could be exacerbated by the use of AI in drafting applications or conducting clearance searches.

International Practice: Foreign filing and compliance with export regulations are also highlighted, especially in the context of using AI for drafting applications or conducting clearance searches. Again, while the posture of the guidance tends to be patent-heavy, the guidance is relevant to trademark practitioners working with foreign associates and otherwise seeking protection of marks in other countries. Practitioners are reminded of their responsibilities to prevent improper data export.

USPTO Electronic Systems: The guidance further addresses the use of USPTO electronic systems, emphasizing that access is governed by terms and conditions to prevent unauthorized actions.

Staying Up-to-date: The guidance reiterates the duties owed to clients, including competent and diligent representation, stressing the need for practitioners to stay informed about the technologies they use in representing clients, including AI tools.

More Practical Guidance for Use of Tools

The guidance next moves to a discussion of particular uses of AI tools in light of the nature of the practice and the rules of which readers have been reminded. Key takeaways in this second half of the guidance include the following:

Text creation:

Word processing tools have evolved to incorporate generative AI capabilities, enabling the automation of complex tasks such as responding to office actions. While the use of such AI-enhanced tools in preparing documents for submission to the USPTO is not prohibited or subject to mandatory disclosure, users are reminded to adhere to USPTO policies and their duties of candor and good faith towards the USPTO and their clients when employing these technologies.

Likely motivated by court cases that have gotten a lot of attention because lawyers used ChatGPT to generate fake case citations, the USPTO addressed the importance of human review of AI-generated content. All USPTO submissions, regardless of AI involvement in their drafting, must be signed by the presenting party, who attests to the truthfulness of the content and the adequacy of their inquiry into its accuracy. Human review is crucial to uphold the duty of candor and good faith, requiring the correction of any errors or omissions before submission. While there is no general duty to disclose AI’s use in drafting unless specifically asked, practitioners must ensure their submissions are legally sound and factually accurate and consult with their clients about the representation methods used.

More specifically, submissions to the TTAB and trademark applications that utilize AI tools require meticulous review to ensure accuracy and compliance with the applicable rules. This is vital for all documents, including evidence for trademark applications, responses to office actions, and legal briefs, to ensure they reflect genuine marketplace usage and are supported by factual evidence. Special attention must be given to avoid the inclusion of AI-generated specimens or evidence that misrepresents actual use or existence in commerce. Materials produced by AI that distort facts, include irrelevant content, or are unduly repetitive risk being deemed to have been submitted with improper intent, potentially leading to unnecessary delays or increased costs in the proceedings.
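
To make that human-review step concrete, here is a minimal sketch of a pre-filing helper. Nothing here comes from the USPTO’s guidance; the regex (which covers only a handful of reporter formats) and the workflow are illustrative assumptions. The script merely surfaces citation-like strings so that a human can verify each one actually exists before anything is filed:

```python
import re

# Illustrative pattern for a few common reporter formats, e.g.
# "91 F.4th 610" or "239 F.3d 1004". Real citation formats are far more
# varied; this only demonstrates a flag-for-human-review workflow.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. ?Ct\.|F\.(?:2d|3d|4th)|S\.W\.3d|WL)\s+\d{1,7}\b"
)

def flag_citations_for_review(draft_text: str) -> list[str]:
    """Return every citation-like string found in the draft.

    Each hit still must be verified by a human against the actual
    reporter; nothing here confirms that a cited case exists.
    """
    return sorted(set(CITATION_RE.findall(draft_text)))

if __name__ == "__main__":
    draft = (
        "See Park v. Kim, 91 F.4th 610 (2d Cir. 2024); A&M Recs., Inc. "
        "v. Napster, Inc., 239 F.3d 1004 (9th Cir. 2001)."
    )
    for cite in flag_citations_for_review(draft):
        print(f"VERIFY BEFORE FILING: {cite}")
```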

Filling out Forms:

AI tools can enhance the efficiency of filing documents with the USPTO by automating tasks such as form completion and document uploads. But users must ensure their use aligns with USPTO rules, particularly regarding signatures, which must be made by a person and not delegated to AI. Users are reminded that USPTO.gov accounts are limited to use by natural persons; AI systems cannot hold such accounts. This underscores the importance of human oversight in submissions to ensure adherence to USPTO regulations and policies.

Automated Access to USPTO IT Systems:

The guidance notes that when utilizing AI tools to interact with USPTO IT systems, it is crucial to adhere to legal and regulatory requirements, ensuring authorized use only. Users must have proper authorization, such as being an applicant, registrant, or practitioner, to file documents or access information. AI systems cannot be considered “users” and thus are ineligible for USPTO.gov accounts. Individuals employing AI assistance must ensure the tool does not overstep access permissions, which risks revocation of the applicable USPTO.gov account or other legal exposure for unauthorized access. Additionally, the USPTO advises against excessive data mining from USPTO databases with AI tools, reminding readers that it provides bulk data products that could assist in these efforts.
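
To illustrate the bulk-data point, a download script along the following lines fetches files at a deliberate pace instead of mining the live search systems. This is only a sketch: the URL and file name are placeholders, not documented USPTO endpoints, so consult the USPTO’s bulk data documentation for actual products:

```python
import time
import requests

# Placeholder values -- not documented USPTO endpoints. Check the
# USPTO's bulk data documentation for real product URLs.
BULK_FILES = [
    "https://bulk-data.example.test/trademark/daily-2024-04-10.zip",
]
DELAY_SECONDS = 5  # deliberate pacing instead of rapid-fire requests

for url in BULK_FILES:
    response = requests.get(url, timeout=60)
    response.raise_for_status()
    filename = url.rsplit("/", 1)[-1]
    with open(filename, "wb") as f:
        f.write(response.content)
    print(f"saved {filename} ({len(response.content)} bytes)")
    time.sleep(DELAY_SECONDS)  # pause between downloads
```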

Lawyer gets called out a second time for using ChatGPT in court brief

You may recall the case of Park v. Kim, wherein the Second Circuit excoriated an attorney for using ChatGPT to generate a brief that contained a bunch of fake cases. Well, the same lawyer responsible for that debacle has been found out again, this time in a case where she is the pro se litigant.

Plaintiff sued Delta Air Lines for racial discrimination. She filed a motion for leave to amend her complaint, which the court denied. In discussing the denial, the court observed the following:

[T]he Court maintains serious concern that at least one of Plaintiff’s cited cases is non-existent and may have been a hallucinated product of generative artificial intelligence, particularly given Plaintiff’s recent history of similar conduct before the Second Circuit. See Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024) (“We separately address the conduct of Park’s counsel, Attorney Jae S. Lee. Lee’s reply brief in this case includes a citation to a non-existent case, which she admits she generated using the artificial intelligence tool ChatGPT.”).

In Park v. Kim, the court referred the attorney for potential disciplinary action. The court in this case was more lenient, merely denying her motion for leave to amend and eventually dismissing the case on summary judgment.

Jae Lee v. Delta Air Lines, Inc., 2024 WL 1230263 (E.D.N.Y. March 22, 2024)

Utah has a brand new law that regulates generative AI

On March 15, 2024, the Governor of Utah signed a bill enacting a new state law regulating the use and development of artificial intelligence. Here are some key things you should know about the law.

  • The statute adds to the state’s consumer protection laws, which govern things such as credit services, car sales, and online dating. The new law says that anyone accused of violating a consumer protection law cannot blame it on the use of generative AI (like Air Canada apparently attempted to do back in February).
  • The new law also says that if a person involved in any act covered by the state’s consumer protection laws asks the company she’s dealing with whether she is interacting with an AI, the company has to clearly and conspicuously disclose that fact. (A minimal sketch of what such a disclosure mechanism might look like appears after this list.)
  • And the law says that anyone providing services as a regulated occupation in the state (for example, an architect, surveyor or a therapist) must disclose in advance any use of generative AI. The statute outlines the requirements for these notifications.
  • In addition to addressing consumer protection, the law also establishes a plan for the state to further innovation in artificial intelligence. The new law introduces a regulatory framework for an AI learning laboratory to investigate AI’s risks and benefits and to guide the regulation of AI development.
  • The statute discusses requirements for participation in the program and also provides certain incentives for the development of AI technologies, including “regulatory mitigation” to adjust or ease certain regulatory requirements for participants and reduce potential liability.
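
As promised above, here is a sketch of what the disclosure mechanism in the second bullet might look like in code. It is entirely hypothetical: the trigger phrases and the disclosure wording are assumptions, not statutory text, and a real implementation would need counsel’s review:

```python
# Hypothetical chat middleware: if the user asks whether they are
# talking to an AI, answer with a clear and conspicuous disclosure
# before anything else. Triggers and wording are illustrative only.
AI_QUESTION_TRIGGERS = (
    "are you an ai",
    "are you a bot",
    "am i talking to a human",
    "is this a real person",
)

DISCLOSURE = (
    "You are interacting with generative artificial intelligence, "
    "not a human representative."
)

def respond(user_message: str, generate_reply) -> str:
    """Return the disclosure when asked; otherwise defer to the model."""
    normalized = user_message.lower()
    if any(trigger in normalized for trigger in AI_QUESTION_TRIGGERS):
        return DISCLOSURE
    return generate_reply(user_message)

# Example: respond("Am I talking to a human?", my_model) -> DISCLOSURE
```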

This law is the first of its kind, and other states are likely to enact similar laws. Much more to come on this topic.

VIDEO: Elon Musk / OpenAI lawsuit – What’s it all about?

So Elon Musk has sued OpenAI. What’s this all about?

The lawsuit centers on the breach of a founding agreement and OpenAI’s shift from non-profit to for-profit through partnerships with companies like Microsoft. Filed in state court in California, the complaint discusses the risks of artificial general intelligence (or AGI) and recounts how Musk worked with Sam Altman back in 2015 to form OpenAI for the public good. That was the so-called “founding agreement,” which also got written into the company’s certificate of incorporation. One of the most intriguing things about the lawsuit is that Musk is asking the court to determine that OpenAI’s technology constitutes artificial general intelligence and has thereby gone outside the initial scope of the Founding Agreement.

Using AI generated fake cases in court brief gets pro se litigant fined $10K

Plaintiff sued defendant and won on summary judgment. Defendant sought review with the Missouri Court of Appeals, which dismissed the appeal and awarded damages to plaintiff/respondent because the appeal was frivolous.

“Due to numerous fatal briefing deficiencies under the Rules of Appellate Procedure that prevent us from engaging in meaningful review, including the submission of fictitious cases generated by [AI], we dismiss the appeal.” With this, the court began its roast of the pro se appellant’s conduct.

The court detailed appellant’s numerous violations of the applicable Rules of Appellate Procedure. The appellate brief was unsigned, lacked the required appendix, and contained an inadequate statement of facts. It also failed to provide points relied on and a detailed table of cases, statutes, and other authorities.

But the court made the biggest deal about how “the overwhelming majority of the [brief’s] citations are not only inaccurate but entirely fictitious.” Only two out of the twenty-four case citations in the brief were genuine.

Though appellant apologized for the fake cases in his reply brief, the court was not moved, because “the deed had been done.” It characterized the conduct as “a flagrant violation of the duties of candor” appellant owed to the court, and an “abuse of the judicial system.”

Because appellant “substantially failed to comply with court rules,” the court dismissed the appeal and ordered appellant to pay $10,000 in damages for filing a frivolous appeal.

Kruse v. Karlen, — S.W.3d —, 2024 WL 559497 (Mo. Ct. App. February 13, 2024)

GenAI and copyright: Court dismisses almost all claims against OpenAI in authors’ suit

Plaintiff authors sued large language model provider OpenAI and related entities for copyright infringement, alleging that plaintiffs’ books were used to train ChatGPT. Plaintiffs asserted six causes of action against various OpenAI entities: (1) direct copyright infringement, (2) vicarious infringement, (3) violation of Section 1202(b) of the Digital Millennium Copyright Act (“DMCA”), (4) unfair competition under Cal. Bus. & Prof. Code Section 17200, (5) negligence, and (6) unjust enrichment.

OpenAI moved to dismiss all of these claims except for the direct copyright infringement claim. The court granted the motion as to almost all the claims.

Vicarious liability claim dismissed

The court dismissed the claim for vicarious liability because plaintiffs did not successfully plead that direct copying occurs from use of the software. Citing A&M Recs., Inc. v. Napster, Inc., 239 F.3d 1004, 1013 n.2 (9th Cir. 2001), aff’d, 284 F.3d 1091 (9th Cir. 2002), the court noted that “[s]econdary liability for copyright infringement does not exist in the absence of direct infringement by a third party.” More specifically, the court dismissed the claim because plaintiffs had alleged neither direct copying when the outputs are generated nor “substantial similarity” between the ChatGPT outputs and plaintiffs’ works.

DMCA claims dismissed

The DMCA – at 17 U.S.C. § 1202(b) – requires a defendant’s knowledge or “reasonable grounds to know” that the defendant’s removal of copyright management information (“CMI”) would “induce, enable, facilitate, or conceal an infringement.” Plaintiffs alleged that, “by design,” OpenAI removed CMI from the copyrighted books during the training process. But the court found that plaintiffs provided no factual support for that assertion. Moreover, the court found that even if plaintiffs had successfully asserted such facts, they had not provided any facts showing how the omitted CMI would induce, enable, facilitate, or conceal infringement.

The other portion of the DMCA relevant to the lawsuit – Section 1202(b)(3) – prohibits the distribution of a plaintiff’s work without the plaintiff’s CMI included. In rejecting plaintiffs’ assertions that defendants violated this provision, the court looked to the plain language of the statute. It noted that liability requires distributing the original “works” or “copies of [the] works.” Plaintiffs had not alleged that defendants distributed their books or copies of their books. Instead, they alleged that “every output from the OpenAI Language Models is an infringing derivative work” without providing any indication as to what such outputs entail – i.e., whether they were the copyrighted books or copies of the books.

Unfair competition claim survived

Plaintiffs asserted that defendants had violated California’s unfair competition statute based on “unlawful,” “fraudulent,” and “unfair” practices. As for the unlawful and fraudulent practices, these relied on the DMCA claims, which the court had already dismissed. So the unfair competition theory could not move forward on those grounds. But the court did find that plaintiffs had alleged sufficient facts to support the claim that it was “unfair” to use plaintiffs’ works without compensation to train the ChatGPT model.

Negligence claim dismissed

Plaintiffs alleged that defendants owed them a duty of care based on the control of plaintiffs’ information in their possession and breached their duty by “negligently, carelessly, and recklessly collecting, maintaining, and controlling systems – including ChatGPT – which are trained on Plaintiffs’ [copyrighted] works.” The court dismissed this claim, finding that there were insufficient facts showing that defendants owed plaintiffs a duty in this situation.

Unjust enrichment claim dismissed

Plaintiffs alleged that defendants were unjustly enriched by using plaintiffs’ copyright-protected works to train the large language model. The court dismissed this claim because plaintiffs had not alleged sufficient facts to show that they had conferred any benefit onto OpenAI through “mistake, fraud, or coercion.”

Tremblay v. OpenAI, Inc., 2024 WL 557720 (N.D. Cal., February 12, 2024)

Generative AI executive who moved to competitor slapped with TRO

Generative AI is obviously a quickly growing segment, and competition among businesses in the space is fierce. As companies race to harness the transformative power of this technology, attracting and retaining top talent becomes a central battleground. Recent legal cases, like the newly-filed Kira v. Samman in Virginia, show just how intense the scramble for expertise has become. In the court’s opinion granting a temporary restraining order against a departing executive and the competitor to which he fled, we see some of the dynamics of non-competition clauses, and the lengths companies will go to in order to safeguard their intellectual property and strategic advantages, particularly in dealing with AI technology.

Kira and Samman Part Ways

Plaintiff Kira is a company that creates AI tools for law firms, while defendant DeepJudge AG offers comparable AI solutions to boost law firm efficiency. Kira hired defendant Samman, who gained access to Kira’s confidential data. Samman had signed a Restrictive Covenants Agreement with Kira containing provisions that prohibited him from joining a competitor for 12 months post-termination. Samman resigned from Kira in June 2023, and Kira claimed he joined competitor DeepJudge after sending Kira’s proprietary data to his personal email.

The Dispute

Kira sued Samman and DeepJudge in federal court, alleging Samman breached his contractual obligations, and accusing DeepJudge of tortious interference with a contract. Kira also sought a temporary restraining order (TRO) to prevent Samman from working for DeepJudge and to mandate the return and deletion of Kira’s proprietary information in Samman’s possession.

Injunctive Relief Was Proper

The court observed that to obtain the sought-after injunction, Kira had to prove, among other things, a likelihood of success at trial. It found that Kira demonstrated this likelihood concerning Samman’s breach of the non-competition restrictive covenant, determining that covenant to be enforceable because it met specific requirements, including advancing Kira’s economic interests. The court found that the evidence showed Samman, after leaving his role at Kira, joined a direct competitor, DeepJudge, in a similar role, thus likely violating the non-competition restrictive covenant.

The court found that Kira faced irreparable harm without the injunction, especially given the potential loss of clients due to Samman’s knowledge of confidential information. The court weighed the balance of equities in favor of Kira, emphasizing the protection of confidential business information and enforcement of valid contracts. It required Kira to post a bond of $15,000, to ensure coverage for potential losses Samman might face due to the injunction.

Kira (US) Inc. v. Samman, 2023 WL 4687189 (E.D. Va. July 21, 2023)

See also:

When can you use a competitor’s trademark in a domain name?

Does a human who edits an AI-created work become a joint author with the AI?

If a human edits a work that an AI initially created, is the human a joint author under copyright law?

U.S. copyright law (at 17 U.S.C. § 101) considers a work to be a “joint work” if it is made by two or more authors intending to merge their contributions into a single product. So, if a human significantly modifies or edits content that an AI originally created, one might think the human has made a big enough contribution to be considered a joint author. But it is not that straightforward. The law looks for a special kind of input: it must be original and creative, not just technical or mechanical. For instance, merely selecting options for the AI or doing basic editing might not cut it. But if the human’s editing changes the work in a creative way, the human might just qualify as a joint author.

Where the human steps in.

This blog post is a clear example. ChatGPT created all the other paragraphs of this blog post (i.e. not this one). I typed this paragraph out from scratch. I have gone through and edited the other paragraphs, making what are obviously mechanical changes. For example, I didn’t like how ChatGPT used so many contractions. I mean, I did not like how ChatGPT used so many contractions. I suspect those are not the kind of “original” contributions that the Copyright Act’s authors had in mind to constitute the level of participation to give rise to a joint work. But I also added some sentences here and there, and struck some others. I took the photo that goes with the post, cropped it, and decided how to place it in relation to the text. Those activities are likely “creative” enough to be copyrightable contributions to the unitary whole that is this blog post. And then of course there is this paragraph that you are just about done reading. Has this paragraph not contributed some notable expression to make this whole blog post better than what it would have been without the paragraph?

Let’s say the human editing does indeed make the human a joint author. What rights would the human have? And how would these rights compare to any the AI might have? Copyright rights are generally held by human creators. This means the human would have rights to copy the work, distribute it, display or perform it publicly, and make derivative works.

Robot rights.

As for the AI, here’s where things get interesting. U.S. copyright law generally does not recognize AI systems as authors, so they would not have any rights in the work. But this is a rapidly evolving field, and there is ongoing debate about how the law should treat creations made by AI.

This leaves us in a peculiar situation. You have a “joint work” that a human and an AI created together, but only the human can be an author. So, as it stands, the AI would not have any rights in the work, and the human would. Here’s an interesting nuance to consider: authors of joint works are pretty much free to do with the work as they see fit, so long as they fulfill certain obligations to the other authors (e.g., accounting for any royalties received). Does the human owner have to fulfill these obligations to the purported AI author of the joint work? It seems we cannot fairly address that question if we have not yet established that the AI system can be a joint author in the first place.

Where we go from here.

It seems reasonable to conclude that a human editing AI-created content might qualify as a joint author if the changes are significant and creative, not just technical. If that’s the case, the human would have full copyright rights under current law, while the AI would have none. As these human-machine collaborations become more commonplace, we will see how law and policy evolve: either to strengthen the position that only “natural persons” (humans) can own intellectual property rights, or to move in the direction of granting some sort of “personhood” to non-human agents. It is like watching science fiction unfold in real time.

What do you think?

Five legal issues around using AI in a branding strategy

The ability of AI to gather, analyze, and interpret large sets of data can lead to invaluable insights and efficiencies. But as businesses increasingly rely on AI to develop and execute branding strategies, they must be aware of the potential legal issues that can arise. Here are five issues to consider:

  • Data Protection and Privacy Laws: AI systems often require vast amounts of data to operate effectively, much of which may be personal data collected from customers. This brings into play data protection and privacy laws, such as the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the United States. Non-compliance with these laws can lead to substantial fines and reputational damage. So businesses must seek to ensure that their use of AI complies with all applicable data protection and privacy laws.
  • Intellectual Property Rights: AI systems can generate content, designs, or even brand names. But who owns the rights to this AI-generated output? This is a complex and evolving area of law, with different jurisdictions taking different approaches. Businesses need to remember to consider intellectual property issues, both in the context of protecting their own rights and not infringing upon the rights of others.
  • Bias and Discrimination: AI systems learn from the data on which they are trained. If this data contains biases, the AI system can amplify these biases, leading to potentially discriminatory outcomes. This not only has ethical implications but also legal ones. In many jurisdictions, businesses can be held liable for discriminatory practices, even if unintentional. Businesses should ensure their AI systems are trained on diverse and representative data sets and regularly audited for bias (a minimal audit sketch appears after this list).
  • Transparency and Explainability: Many jurisdictions are considering regulations that require AI systems to be transparent and explainable. This means that businesses must be able to explain how their AI systems make decisions. If a customer feels that he or she has been unfairly treated by an AI system, the business may need to justify the AI’s decision-making process. Compliance with these requirements can be challenging, particularly with complex AI systems (see the second sketch after this list).
  • Contractual Obligations and Liability: When businesses use third-party AI systems, it is crucial to clearly understand and define who is responsible if something goes wrong. This includes potential breaches of data protection laws, intellectual property infringement, and any harm caused by the AI system. Businesses should ensure their contracts with AI vendors clearly outline the responsibilities and liabilities of each party.
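
On the bias point above, even a rudimentary audit can surface problems. The following is a minimal sketch, assuming decisions are logged as (group, outcome) pairs; the four-fifths threshold is a common rule of thumb used here purely as an illustrative assumption, not legal advice:

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag groups whose selection rate falls below 80% of the highest group's rate."""
    best = max(rates.values())
    return {g: (r / best) >= 0.8 for g, r in rates.items()}

if __name__ == "__main__":
    # Toy audit log: (demographic group, whether the AI selected the person)
    audit_log = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(audit_log)
    print(rates)                     # group A ~0.67, group B ~0.33
    print(four_fifths_check(rates))  # {'A': True, 'B': False} -> B warrants review
```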
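
And on the transparency and explainability point, one pragmatic design choice is to favor models whose decisions can be rendered as plain-language rules. This sketch assumes scikit-learn is installed and uses entirely made-up toy data; it trains a small decision tree and prints human-readable rules that could back up an explanation given to a customer:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Made-up, credit-style toy data: [income_in_thousands, years_at_address]
X = [[20, 1], [35, 2], [50, 5], [80, 10], [25, 1], [90, 8]]
y = [0, 0, 1, 1, 0, 1]  # 0 = declined, 1 = approved

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# export_text yields the tree's decision rules in readable form, e.g.
# "|--- income_k <= 42.50 ... class: 0", which a business could use to
# justify an individual decision if a customer asks.
print(export_text(model, feature_names=["income_k", "years_at_address"]))
```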

While AI presents numerous opportunities for enhancing a branding strategy, it also introduces a range of legal considerations. Businesses must navigate these potential legal pitfalls carefully so that they can leverage the power of AI while minimizing legal risk.
