Utah has a brand new law that regulates generative AI

On March 15, 2024, the Governor of Utah signed a bill enacting new law in the state regulating the use and development of artificial intelligence. Here are some key things you should know about the law.

  • The statute adds to the state’s consumer protection laws, which govern things such as credit services, car sales, and online dating. The new law says that anyone accused of violating a consumer protection law cannot blame it on the use of generative AI (like Air Canada apparently attempted to do back in February).
  • The new law also says that if a person involved in any act covered by the state’s consumer protection laws asks the company she’s dealing with whether she is interacting with an AI, the company has to clearly and conspicuously disclose that fact.
  • And the law says that anyone providing services as a regulated occupation in the state (for example, an architect, surveyor, or therapist) must disclose in advance any use of generative AI. The statute outlines the requirements for these notifications.
  • In addition to addressing consumer protection, the law also establishes a plan for the state to further innovation in artificial intelligence. The new law introduces a regulatory framework for an AI learning laboratory to investigate AI’s risks and benefits and to guide the regulation of AI development.
  • The statute discusses requirements for participation in the program and also provides certain incentives for the development of AI technologies, including “regulatory mitigation” to adjust or ease certain regulatory requirements for participants and reduce potential liability.

This law is the first of its kind, and other states are likely to enact similar laws. Much more to come on this topic.

Lawyers and AI: Key takeaways from being on a panel at a legal ethics conference

Earlier today I was on a panel at Hinshaw & Culbertson’s LMRM Conference in Chicago. This was the 23rd annual LMRM Conference, and the event has become the gold standard for events that focus on the “law of lawyering.”

Our session was titled How Soon is Now—Generative AI, How It Works, How to Use it Now, How to Use it Ethically. Preparing for and participating in the event gave me the opportunity to seriously consider some of the key issues relating to how lawyers are using generative AI and the promise that wider future adoption of these technologies in the legal industry holds.

Here are a few key takeaways:

    • Effective use. Lawyers are already using generative AI in ways that aid efficiency. The technology can summarize complex texts during legal research, allowing the attorney to quickly assess if the content addresses her specific interests, is factually relevant, and aligns with desired legal outcomes. With a carefully crafted and detailed prompt, an attorney can generate a pretty good first draft of many types of correspondence (e.g., cease and desist letters). Tools such as ChatGPT can aid in brainstorming by generating a variety of ideas on a given topic, helping lawyers consider possible outcomes in a situation.


    • Access to justice. It is not clear how generative AI adoption will affect access to justice. It is possible that something like “legal chatbots” could bring formerly unavailable legal help to parties without sufficient resources to hire expensive lawyers. But the building and adoption of sophisticated tools by the most elite firms will come at a cost that is passed on to clients, making premium services even more expensive and thereby increasing the divide that already exists.


    • Confidentiality and privacy. Care must be taken to reduce the risk of unauthorized disclosure of information when law firms adopt generative AI tools. Data privacy concerns arise regardless of the industry in which generative AI is used. But lawyers have the additional obligation to preserve their clients’ confidential information in accordance with the rules governing the attorney-client relationship. This duty of confidentiality complicates the ways in which a law firm’s “enterprise knowledge” can be used to train a large language model. And lawyers must consider whether and how to let their clients know that the client’s information may be used to train the model.


    • Exposing lawyering problems. Cases such as Mata v. Avianca, Park v. Kim and Kruse v. Karlen (wherein lawyers or litigants used AI to generate documents submitted to the court containing non-existent case citations, or “hallucinations”) tend to be used to critique these kinds of tools and tend to discourage lawyers from adopting them. But if one looks at these cases carefully, it is apparent that the problem is not so much with the technology, but instead with lawyering that lacks the appropriate competence and diligence.
    • AI and the standard of the practice. There is plenty of data suggesting that most knowledge work jobs will be drastically impacted by the use of AI in the near term. Regardless of whether a lawyer or law firm wants to adopt generative AI in the practice of law, attorneys will not be able to avoid knowing how the use of AI will change norms and expectations, because clients will be effectively using these technologies and innovating in the space.

Thank you to Barry MacEntee for inviting me to be on his panel. Barry, you did an exemplary job of preparation and execution, which is exactly how you roll. Great to meet my co-panelist Andrew Sutton. Andrew, your insights and commentary on both the legal and technical aspects of the use of AI in the practice of law were terrific.

VIDEO: Elon Musk / OpenAI lawsuit – What’s it all about?


So Elon Musk has sued OpenAI. What’s this all about?

The lawsuit centers on the breach of a founding agreement and OpenAI’s shift from non-profit to for-profit through partnerships with companies like Microsoft. Filed in state court in California, the complaint discusses the risks of artificial general intelligence (or AGI) and recounts how Musk worked with Sam Altman back in 2015 to form OpenAI for the public good. That was the so-called “Founding Agreement,” which also got written into the company’s certificate of incorporation. One of the most intriguing things about the lawsuit is that Musk is asking the court to determine that OpenAI has achieved Artificial General Intelligence and has thereby gone outside the initial scope of the Founding Agreement.

ChatGPT was “utterly and unusually unpersuasive” in case involving recovery of attorney’s fees


In a recent federal case in New York under the Individuals with Disabilities Education Act, plaintiff prevailed on her claims and sought an award of attorney’s fees under the statute. Though the court ended up awarding plaintiff’s attorneys some of their requested fees, it lambasted counsel in the process for using information obtained from ChatGPT to support the claimed hourly rates of the attorneys.

Plaintiff’s firm used ChatGPT-4 as a “cross-check” against other sources in confirming what should be a reasonable hourly rate for the attorneys on the case. The court found this reliance on ChatGPT-4 to be “utterly and unusually unpersuasive” for determining reasonable billing rates for legal services. The court criticized the firm’s use of ChatGPT-4 for not adequately considering the complexity and specificity required in legal billing, especially given the tool’s inability to discern between real and fictitious legal citations, as demonstrated in recent past cases within the Second Circuit.

In Mata v. Avianca, Inc., 2023 WL 4114965 (S.D.N.Y. June 22, 2023) the district court judge sanctioned lawyers for submitting fictitious judicial opinions generated by ChatGPT, and in Park v. Kim, — F.4th —, 2024 WL 332478 (2d Cir. January 30, 2024) an attorney was referred to the Circuit’s Grievance Panel for citing non-existent authority from ChatGPT in a brief. These examples highlighted the tool’s limitations in legal contexts, particularly its inability to differentiate between real and fabricated legal citations, raising concerns about its reliability and appropriateness for legal tasks.

J.G. v. New York City Dept. of Education, 2024 WL 728626 (February 22, 2024)


Using AI generated fake cases in court brief gets pro se litigant fined $10K


Plaintiff sued defendant and won on summary judgment. Defendant sought review with the Missouri Court of Appeals. On appeal, the court dismissed the appeal and awarded damages to plaintiff/respondent because of the frivolousness of the appeal.

“Due to numerous fatal briefing deficiencies under the Rules of Appellate Procedure that prevent us from engaging in meaningful review, including the submission of fictitious cases generated by [AI], we dismiss the appeal.” With this, the court began its roast of the pro se appellant’s conduct.

The court detailed appellant’s numerous violations of the applicable Rules of Appellate Procedure. The appellate brief was unsigned, had no required appendix, and contained an inadequate statement of facts. It also failed to provide points relied on and a detailed table of cases, statutes, and other authorities.

But the court made the biggest deal about how “the overwhelming majority of the [brief’s] citations are not only inaccurate but entirely fictitious.” Only two out of the twenty-four case citations in the brief were genuine.

Though appellant apologized for the fake cases in his reply brief, the court was not moved, because “the deed had been done.” It characterized the conduct as “a flagrant violation of the duties of candor” appellant owed to the court, and an “abuse of the judicial system.”

Because appellant “substantially failed to comply with court rules,” the court dismissed the appeal and ordered appellant to pay $10,000 in damages for filing a frivolous appeal.

Kruse v. Karlen, — S.W.3d —, 2024 WL 559497 (Mo. Ct. App. February 13, 2024)


GenAI and copyright: Court dismisses almost all claims against OpenAI in authors’ suit


Plaintiff authors sued large language model provider OpenAI and related entities for copyright infringement, alleging that plaintiffs’ books were used to train ChatGPT. Plaintiffs asserted six causes of action against various OpenAI entities: (1) direct copyright infringement, (2) vicarious infringement, (3) violation of Section 1202(b) of the Digital Millennium Copyright Act (“DMCA”), (4) unfair competition under Cal. Bus. & Prof. Code Section 17200, (5) negligence, and (6) unjust enrichment.

OpenAI moved to dismiss all of these claims except for the direct copyright infringement claim. The court granted the motion as to almost all the claims.

Vicarious liability claim dismissed

The court dismissed the claim for vicarious liability because plaintiffs did not successfully plead that direct copying occurs from use of the software. Citing A&M Recs., Inc. v. Napster, Inc., 239 F.3d 1004, 1013 n.2 (9th Cir. 2001), aff’d, 284 F.3d 1091 (2002), the court noted that “[s]econdary liability for copyright infringement does not exist in the absence of direct infringement by a third party.” More specifically, the court dismissed the claim because plaintiffs had not alleged direct copying when the outputs are generated, nor had they alleged “substantial similarity” between the ChatGPT outputs and plaintiffs’ works.

DMCA claims dismissed

The DMCA – at 17 U.S.C. 1202(b) – requires a defendant’s knowledge or “reasonable grounds to know” that the defendant’s removal of copyright management information (“CMI”) would “induce, enable, facilitate, or conceal an infringement.” Plaintiffs alleged that “by design,” OpenAI removed CMI from the copyrighted books during the training process. But the court found that plaintiffs provided no factual support for that assertion. Moreover, the court found that even if plaintiffs had successfully asserted such facts, they had not provided any facts showing how the omitted CMI would induce, enable, facilitate, or conceal infringement.

The other portion of the DMCA relevant to the lawsuit – Section 1202(b)(3) – prohibits the distribution of a plaintiff’s work without the plaintiff’s CMI included. In rejecting plaintiffs’ assertions that defendants violated this provision, the court looked to the plain language of the statute. It noted that liability requires distributing the original “works” or “copies of [the] works.” Plaintiffs had not alleged that defendants distributed their books or copies of their books. Instead, they alleged that “every output from the OpenAI Language Models is an infringing derivative work” without providing any indication as to what such outputs entail – i.e., whether they were the copyrighted books or copies of the books.

Unfair competition claim survived

Plaintiffs asserted that defendants had violated California’s unfair competition statute based on “unlawful,” “fraudulent,” and “unfair” practices. As for the unlawful and fraudulent practices, these relied on the DMCA claims, which the court had already dismissed. So the unfair competition theory could not move forward on those grounds. But the court did find that plaintiffs had alleged sufficient facts to support the claim that it was “unfair” to use plaintiffs’ works without compensation to train the ChatGPT model.

Negligence claim dismissed

Plaintiffs alleged that defendants owed them a duty of care based on the control of plaintiffs’ information in their possession and breached their duty by “negligently, carelessly, and recklessly collecting, maintaining, and controlling systems – including ChatGPT – which are trained on Plaintiffs’ [copyrighted] works.” The court dismissed this claim, finding that there were insufficient facts showing that defendants owed plaintiffs a duty in this situation.

Unjust enrichment claim dismissed

Plaintiffs alleged that defendants were unjustly enriched by using plaintiffs’ copyright protected works to train the large language model. The court dismissed this claim because plaintiffs had not alleged sufficient facts to show that they had conferred any benefit onto OpenAI through “mistake, fraud, or coercion.”

Tremblay v. OpenAI, Inc., 2024 WL 557720 (N.D. Cal., February 12, 2024)


ChatGPT providing fake case citations again – this time at the Second Circuit

Plaintiff sued defendant in federal court but the court eventually dismissed the case because plaintiff continued to fail to properly respond to defendant’s discovery requests. So plaintiff sought review with the Second Circuit Court of Appeals. On appeal, the court affirmed the dismissal, finding that plaintiff’s noncompliance in the lower court amounted to “sustained and willful intransigence in the face of repeated and explicit warnings from the court that the refusal to comply with court orders … would result in the dismissal of [the] action.”

But that was not the most intriguing or provocative part of the court’s opinion. The court also addressed the conduct of plaintiff’s lawyer, who admitted to using ChatGPT to help her write a brief before the appellate court. The AI assistance betrayed itself when the court noticed that the brief contained a non-existent case. Here’s the mythical citation: Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep’t 2014).

When the court called her out on the legal hallucination, plaintiff’s attorney admitted to using ChatGPT, to which she was a “suscribed [sic] and paying member,” but emphasized that she “did not cite any specific reasoning or decision from [the Bourguignon] case.” Unfortunately, counsel’s assertions did not blunt the court’s wrath.

“All counsel that appear before this Court are bound to exercise professional judgment and responsibility, and to comply with the Federal Rules of Civil Procedure,” read the court’s opinion as it began its rebuke. It reminded counsel that the rules of procedure impose a duty on attorneys to certify that they have conducted a reasonable inquiry and have determined that any papers filed with the court are legally tenable. “At the very least,” the court continued, attorneys must “read, and thereby confirm the existence and validity of, the legal authorities on which they rely.” Citing to a recent case involving a similar controversy, the court observed that “[a] fake opinion is not ‘existing law’ and citation to a fake opinion does not provide a non-frivolous ground for extending, modifying, or reversing existing law, or for establishing new law. An attempt to persuade a court or oppose an adversary by relying on fake opinions is an abuse of the adversary system.”

The court considered the matter so severe that it referred the attorney to the court’s Grievance Panel, for that panel to consider whether to refer the situation to the court’s Committee on Admissions and Grievances, which would have the power to revoke the attorney’s admission to practice before that court.

Park v. Kim, — F.4th —, 2024 WL 332478 (2d Cir. January 30, 2024)


Generative AI executive who moved to competitor slapped with TRO


Generative AI is obviously a quickly growing segment, and competition among businesses in the space is fierce. As companies race to harness the transformative power of this technology, attracting and retaining top talent becomes a central battleground. Recent legal cases, like the newly-filed Kira v. Samman in Virginia, show just how intense the scramble for expertise has become. In the court’s opinion granting a temporary restraining order against a departing executive and the competitor to which he fled, we see some of the dynamics of non-competition clauses, and the lengths companies will go to in order to safeguard their intellectual property and strategic advantages, particularly in dealing with AI technology.

Kira and Samman Part Ways

Plaintiff Kira is a company that creates AI tools for law firms, while defendant DeepJudge AG offers comparable AI solutions to boost law firm efficiency. Kira hired defendant Samman, who gained access to Kira’s confidential data. Samman had signed a Restrictive Covenants Agreement with Kira containing provisions that prohibited him from joining a competitor for 12 months post-termination. Samman resigned from Kira in June 2023, and Kira claimed he joined competitor DeepJudge after sending Kira’s proprietary data to his personal email.

The Dispute

Kira sued Samman and DeepJudge in federal court, alleging Samman breached his contractual obligations, and accusing DeepJudge of tortious interference with a contract. Kira also sought a temporary restraining order (TRO) to prevent Samman from working for DeepJudge and to mandate the return and deletion of Kira’s proprietary information in Samman’s possession.

Injunctive Relief Was Proper

The court observed that to obtain the sought-after injunction, Kira had to prove, among other things, a likelihood of success at trial. It found that Kira demonstrated this likelihood concerning Samman’s breach of the non-competition restrictive covenant. It determined the non-competition covenant Samman breached to be enforceable, given that it met specific requirements including advancing Kira’s economic interests. The court found that the evidence showed Samman, after leaving his role at Kira, joined a direct competitor, DeepJudge, in a role similar in function, thus likely violating the non-competition restrictive covenant.

The court found that Kira faced irreparable harm without the injunction, especially given the potential loss of clients due to Samman’s knowledge of confidential information. The court weighed the balance of equities in favor of Kira, emphasizing the protection of confidential business information and enforcement of valid contracts. It required Kira to post a bond of $15,000, to ensure coverage for potential losses Samman might face due to the injunction.

Kira (US) Inc. v. Samman, 2023 WL 4687189 (E.D. Va. July 21, 2023)

See also:

When can you use a competitor’s trademark in a domain name?

Does a human who edits an AI-created work become a joint author with the AI?


If a human edits a work that an AI initially created, is the human a joint author under copyright law?

U.S. copyright law (at 17 U.S.C. § 101) considers a work to be a “joint work” if it is made by two or more authors intending to merge their contributions into a single product. So, if a human significantly modifies or edits content that an AI originally created, one might think the human has made a big enough contribution to be considered a joint author. But it is not that straightforward. The law looks for a special kind of input: it must be original and creative, not just technical or mechanical. For instance, merely selecting options for the AI or doing basic editing might not cut it. But if the human’s editing changes the work in a creative way, the human might just qualify as a joint author.

Where the human steps in.

This blog post is a clear example. ChatGPT created all the other paragraphs of this blog post (i.e. not this one). I typed this paragraph out from scratch. I have gone through and edited the other paragraphs, making what are obviously mechanical changes. For example, I didn’t like how ChatGPT used so many contractions. I mean, I did not like how ChatGPT used so many contractions. I suspect those are not the kind of “original” contributions that the Copyright Act’s authors had in mind to constitute the level of participation to give rise to a joint work. But I also added some sentences here and there, and struck some others. I took the photo that goes with the post, cropped it, and decided how to place it in relation to the text. Those activities are likely “creative” enough to be copyrightable contributions to the unitary whole that is this blog post. And then of course there is this paragraph that you are just about done reading. Has this paragraph not contributed some notable expression to make this whole blog post better than what it would have been without the paragraph?

Let’s say the human editing does indeed make the human a joint author. What rights would the human have? And how would these rights compare to any the AI might have? Copyright rights are generally held by human creators. This means the human would have rights to copy the work, distribute it, display or perform it publicly, and make derivative works.

Robot rights.

As for the AI, here’s where things get interesting. U.S. Copyright law generally does not recognize AI systems as authors, so they would not have any rights in the work. But this is a rapidly evolving field, and there is ongoing debate about how the law should treat creations made by AI.

This leaves us in a peculiar situation. You have a “joint work” that a human and an AI created together, but only the human can be an author. So, as it stands, the AI would not have any rights in the work, and the human would. Here’s an interesting nuance to consider: authors of joint works are pretty much free to do what they wish with the work as they see fit, so long as they fulfill certain obligations to the other authors (e.g., account for any royalties received). Does the human-owner have to fulfill these obligations to the purported AI-author of the joint work? It seems we cannot fairly address that question if we have not yet established that the AI system can be a joint author in the first place.

Where we go from here.

It seems reasonable to conclude that a human editing AI-created content might qualify as a joint author if the changes are significant and creative, not just technical. If that’s the case, the human would have full copyright rights under current law, while the AI would not have any. As these human-machine collaborations continue to become more commonplace, we will see how law and policy evolve to either strengthen the position that only “natural persons” (humans) can own intellectual property rights, or to move in the direction of granting some sort of “personhood” to non-human agents. It is like watching science fiction unfold in real time.

What do you think?

See also:

Five legal issues around using AI in a branding strategy

The use of AI in the domain name industry


Artificial intelligence has important uses in the domain name industry. AI has made domain name registration, management, and valuation more efficient and accurate. Here are some specific ways AI is affecting domain names:

  • Domain name suggestion and search optimization: AI-powered domain name generators can suggest relevant and available domain names based on specific keywords, making the search process easier and faster for businesses and individuals. Additionally, AI algorithms can optimize search results based on user behavior and preferences, making it easier for potential customers to find the right domain name for their needs.

  • Domain name valuation: AI algorithms can analyze and evaluate domain names based on various factors such as age, traffic, and backlinks, among others. This information is valuable for domain name investors and businesses looking to acquire domain names that align with their branding strategies.

  • Domain name security: AI-powered security tools can detect and prevent domain name fraud and phishing attacks. These tools can identify suspicious behavior, such as attempts to hijack a domain name, and alert domain name owners and security teams to take necessary actions.

  • Domain name portfolio management: AI algorithms can help businesses and individuals manage their domain name portfolios more efficiently by providing insights on which domain names to renew, which to drop, and which to acquire. This information can help businesses save money and optimize their domain name strategies.

AI is transforming the domain name industry by making it more efficient, secure, and cost-effective. Domain name registrars, investors, and businesses can leverage AI-powered tools to find, evaluate, and manage domain names more effectively, making the process easier and faster for all involved. We can expect even more innovations in the domain name industry in the years to come.
