VIDEO: Elon Musk / OpenAI lawsuit – What’s it all about?

 

So Elon Musk has sued OpenAI. What’s this all about?

The lawsuit centers on the alleged breach of a founding agreement and OpenAI’s shift from non-profit to for-profit through partnerships with companies like Microsoft. Filed in state court in California, the complaint discusses the risks of artificial general intelligence (or AGI) and recounts how Musk worked with Sam Altman back in 2015 to form OpenAI for the public good. That understanding is the so-called “Founding Agreement,” which was also written into the company’s certificate of incorporation. One of the most intriguing things about the lawsuit is that Musk is asking the court to determine that OpenAI’s technology constitutes artificial general intelligence and has thereby gone outside the initial scope of the Founding Agreement.

ChatGPT was “utterly and unusually unpersuasive” in case involving recovery of attorney’s fees


In a recent federal case in New York under the Individuals with Disabilities Education Act, plaintiff prevailed on her claims and sought an award of attorney’s fees under the statute. Though the court ended up awarding plaintiff’s attorneys some of their requested fees, it lambasted counsel in the process for using information obtained from ChatGPT to support the attorneys’ claimed hourly rates.

Plaintiff’s firm used ChatGPT-4 as a “cross-check” against other sources in confirming what should be a reasonable hourly rate for the attorneys on the case. The court found this reliance on ChatGPT-4 to be “utterly and unusually unpersuasive” for determining reasonable billing rates for legal services. The court criticized the firm’s use of ChatGPT-4 as failing to account for the complexity and specificity required in legal billing, especially given the tool’s inability to discern between real and fictitious legal citations, as demonstrated in recent past cases within the Second Circuit.

In Mata v. Avianca, Inc., 2023 WL 4114965 (S.D.N.Y. June 22, 2023), the district court judge sanctioned lawyers for submitting fictitious judicial opinions generated by ChatGPT, and in Park v. Kim, — F.4th —, 2024 WL 332478 (2d Cir. January 30, 2024), an attorney was referred to the Circuit’s Grievance Panel for citing non-existent authority from ChatGPT in a brief. These examples highlighted the tool’s limitations in legal contexts, particularly its inability to differentiate between real and fabricated legal citations, raising concerns about its reliability and appropriateness for legal tasks.

J.G. v. New York City Dept. of Education, 2024 WL 728626 (S.D.N.Y. February 22, 2024)


Using AI generated fake cases in court brief gets pro se litigant fined $10K


Plaintiff sued defendant and won on summary judgment. Defendant sought review with the Missouri Court of Appeals, which dismissed the appeal and awarded damages to plaintiff/respondent because the appeal was frivolous.

“Due to numerous fatal briefing deficiencies under the Rules of Appellate Procedure that prevent us from engaging in meaningful review, including the submission of fictitious cases generated by [AI], we dismiss the appeal.” With this, the court began its roast of the pro se appellant’s conduct.

The court detailed appellant’s numerous violations of the applicable Rules of Appellate Procedure. The appellate brief was unsigned, lacked the required appendix, and had an inadequate statement of facts. It also failed to provide points relied on or a detailed table of cases, statutes, and other authorities.

But the court made the biggest deal about how “the overwhelming majority of the [brief’s] citations are not only inaccurate but entirely fictitious.” Only two out of the twenty-four case citations in the brief were genuine.

Though appellant apologized for the fake cases in his reply brief, the court was not moved, because “the deed had been done.” It characterized the conduct as “a flagrant violation of the duties of candor” appellant owed to the court, and an “abuse of the judicial system.”

Because appellant “substantially failed to comply with court rules,” the court dismissed the appeal and ordered appellant to pay $10,000 in damages for filing a frivolous appeal.

Kruse v. Karlen, — S.W.3d —, 2024 WL 559497 (Mo. Ct. App. February 13, 2024)


GenAI and copyright: Court dismisses almost all claims against OpenAI in authors’ suit


Plaintiff authors sued large language model provider OpenAI and related entities for copyright infringement, alleging that plaintiffs’ books were used to train ChatGPT. Plaintiffs asserted six causes of action against various OpenAI entities: (1) direct copyright infringement, (2) vicarious infringement, (3) violation of Section 1202(b) of the Digital Millennium Copyright Act (“DMCA”), (4) unfair competition under Cal. Bus. & Prof. Code Section 17200, (5) negligence, and (6) unjust enrichment.

OpenAI moved to dismiss all of these claims except the direct copyright infringement claim. The court granted the motion as to almost all the claims.

Vicarious liability claim dismissed

The court dismissed the claim for vicarious liability because plaintiffs did not successfully plead that direct copying occurs from use of the software. Citing to A&M Recs., Inc. v. Napster, Inc., 239 F.3d 1004, 1013 n.2 (9th Cir. 2001), aff’d, 284 F.3d 1091 (9th Cir. 2002), the court noted that “[s]econdary liability for copyright infringement does not exist in the absence of direct infringement by a third party.” More specifically, the court dismissed the claim because plaintiffs had alleged neither direct copying when the outputs are generated nor “substantial similarity” between the ChatGPT outputs and plaintiffs’ works.

DMCA claims dismissed

The DMCA – at 17 U.S.C. § 1202(b) – requires a defendant’s knowledge or “reasonable grounds to know” that the defendant’s removal of copyright management information (“CMI”) would “induce, enable, facilitate, or conceal an infringement.” Plaintiffs alleged that, “by design,” OpenAI removed CMI from the copyrighted books during the training process. But the court found that plaintiffs provided no factual support for that assertion. Moreover, the court found that even if plaintiffs had successfully pleaded such facts, they had not provided any facts showing how the omitted CMI would induce, enable, facilitate, or conceal infringement.

The other portion of the DMCA relevant to the lawsuit – Section 1202(b)(3) – prohibits the distribution of a plaintiff’s work without the plaintiff’s CMI included. In rejecting plaintiffs’ assertions that defendants violated this provision, the court looked to the plain language of the statute. It noted that liability requires distributing the original “works” or “copies of [the] works.” Plaintiffs had not alleged that defendants distributed their books or copies of their books. Instead, they alleged that “every output from the OpenAI Language Models is an infringing derivative work” without providing any indication as to what such outputs entail – i.e., whether they were the copyrighted books or copies of the books.

Unfair competition claim survived

Plaintiffs asserted that defendants had violated California’s unfair competition statute based on “unlawful,” “fraudulent,” and “unfair” practices. As for the unlawful and fraudulent practices, these theories relied on the DMCA claims, which the court had already dismissed, so the unfair competition claim could not move forward on those grounds. But the court did find that plaintiffs had alleged sufficient facts to support the claim that it was “unfair” to use plaintiffs’ works without compensation to train the ChatGPT model.

Negligence claim dismissed

Plaintiffs alleged that defendants owed them a duty of care based on the control of plaintiffs’ information in their possession and breached their duty by “negligently, carelessly, and recklessly collecting, maintaining, and controlling systems – including ChatGPT – which are trained on Plaintiffs’ [copyrighted] works.” The court dismissed this claim, finding that there were insufficient facts showing that defendants owed plaintiffs a duty in this situation.

Unjust enrichment claim dismissed

Plaintiffs alleged that defendants were unjustly enriched by using plaintiffs’ copyright-protected works to train the large language model. The court dismissed this claim because plaintiffs had not alleged sufficient facts to show that they had conferred any benefit on OpenAI through “mistake, fraud, or coercion.”

Tremblay v. OpenAI, Inc., 2024 WL 557720 (N.D. Cal. February 12, 2024)


ChatGPT providing fake case citations again – this time at the Second Circuit

Plaintiff sued defendant in federal court but the court eventually dismissed the case because plaintiff continued to fail to properly respond to defendant’s discovery requests. So plaintiff sought review with the Second Circuit Court of Appeals. On appeal, the court affirmed the dismissal, finding that plaintiff’s noncompliance in the lower court amounted to “sustained and willful intransigence in the face of repeated and explicit warnings from the court that the refusal to comply with court orders … would result in the dismissal of [the] action.”

But that was not the most intriguing or provocative part of the court’s opinion. The court also addressed the conduct of plaintiff’s lawyer, who admitted to using ChatGPT to help her write a brief before the appellate court. The AI assistance betrayed itself when the court noticed that the brief contained a non-existent case. Here’s the mythical citation: Matter of Bourguignon v. Coordinated Behavioral Health Servs., Inc., 114 A.D.3d 947 (3d Dep’t 2014).

When the court called her out on the legal hallucination, plaintiff’s attorney admitted to using ChatGPT, to which she was a “suscribed and paying member” but emphasized that she “did not cite any specific reasoning or decision from [the Bourguignon] case.” Unfortunately, counsel’s assertions did not blunt the court’s wrath.

“All counsel that appear before this Court are bound to exercise professional judgment and responsibility, and to comply with the Federal Rules of Civil Procedure,” read the court’s opinion as it began its rebuke. It reminded counsel that the rules of procedure impose a duty on attorneys to certify that they have conducted a reasonable inquiry and have determined that any papers filed with the court are legally tenable. “At the very least,” the court continued, attorneys must “read, and thereby confirm the existence and validity of, the legal authorities on which they rely.” Citing to a recent case involving a similar controversy, the court observed that “[a] fake opinion is not ‘existing law’ and citation to a fake opinion does not provide a non-frivolous ground for extending, modifying, or reversing existing law, or for establishing new law. An attempt to persuade a court or oppose an adversary by relying on fake opinions is an abuse of the adversary system.”

The court considered the matter so severe that it referred the attorney to the court’s Grievance Panel, for that panel to consider whether to refer the situation to the court’s Committee on Admissions and Grievances, which would have the power to revoke the attorney’s admission to practice before that court.

Park v. Kim, — F.4th —, 2024 WL 332478 (2d Cir. January 30, 2024)


Generative AI executive who moved to competitor slapped with TRO


Generative AI is obviously a quickly growing segment, and competition among businesses in the space is fierce. As companies race to harness the transformative power of this technology, attracting and retaining top talent becomes a central battleground. Recent legal cases, like the newly filed Kira v. Samman in Virginia, show just how intense the scramble for expertise has become. In the court’s opinion granting a temporary restraining order against a departing executive and the competitor to which he fled, we see some of the dynamics of non-competition clauses and the lengths companies will go to in order to safeguard their intellectual property and strategic advantages, particularly in dealing with AI technology.

Kira and Samman Part Ways

Plaintiff Kira is a company that creates AI tools for law firms, while defendant DeepJudge AG offers comparable AI solutions to boost law firm efficiency. Kira hired defendant Samman, who gained access to Kira’s confidential data. Samman had signed a Restrictive Covenants Agreement with Kira containing provisions that prohibited him from joining a competitor for 12 months post-termination. Samman resigned from Kira in June 2023, and Kira claimed he joined competitor DeepJudge after sending Kira’s proprietary data to his personal email.

The Dispute

Kira sued Samman and DeepJudge in federal court, alleging Samman breached his contractual obligations, and accusing DeepJudge of tortious interference with a contract. Kira also sought a temporary restraining order (TRO) to prevent Samman from working for DeepJudge and to mandate the return and deletion of Kira’s proprietary information in Samman’s possession.

Injunctive Relief Was Proper

The court observed that to obtain the sought-after injunction, Kira had to prove, among other things, a likelihood of success at trial. It found that Kira demonstrated this likelihood with respect to Samman’s breach of the non-competition restrictive covenant. It determined that covenant to be enforceable, given that it met specific requirements, including advancing Kira’s economic interests. The court found that the evidence showed Samman, after leaving his role at Kira, joined a direct competitor, DeepJudge, in a similar role, thus likely violating the covenant.

The court found that Kira faced irreparable harm without the injunction, especially given the potential loss of clients due to Samman’s knowledge of confidential information. The court weighed the balance of equities in favor of Kira, emphasizing the protection of confidential business information and enforcement of valid contracts. It required Kira to post a bond of $15,000, to ensure coverage for potential losses Samman might face due to the injunction.

Kira (US) Inc. v. Samman, 2023 WL 4687189 (E.D. Va. July 21, 2023)

See also:

When can you use a competitor’s trademark in a domain name?

Does a human who edits an AI-created work become a joint author with the AI?


If a human edits a work that an AI initially created, is the human a joint author under copyright law?

U.S. copyright law (at 17 U.S.C. § 101) considers a work to be a “joint work” if it is made by two or more authors intending to merge their contributions into a single product. So, if a human significantly modifies or edits content that an AI originally created, one might think the human has made a big enough contribution to be considered a joint author. But it is not that straightforward. The law looks for a special kind of input: it must be original and creative, not just technical or mechanical. For instance, merely selecting options for the AI or doing basic editing might not cut it. But if the human’s editing changes the work in a creative way, the human might just qualify as a joint author.

Where the human steps in.

This blog post is a clear example. ChatGPT created all the other paragraphs of this blog post (i.e. not this one). I typed this paragraph out from scratch. I have gone through and edited the other paragraphs, making what are obviously mechanical changes. For example, I didn’t like how ChatGPT used so many contractions. I mean, I did not like how ChatGPT used so many contractions. I suspect those are not the kind of “original” contributions that the Copyright Act’s authors had in mind to constitute the level of participation to give rise to a joint work. But I also added some sentences here and there, and struck some others. I took the photo that goes with the post, cropped it, and decided how to place it in relation to the text. Those activities are likely “creative” enough to be copyrightable contributions to the unitary whole that is this blog post. And then of course there is this paragraph that you are just about done reading. Has this paragraph not contributed some notable expression to make this whole blog post better than what it would have been without the paragraph?

Let’s say the human editing does indeed make the human a joint author. What rights would the human have? And how would these rights compare to any the AI might have? Copyright rights are generally held by human creators. This means the human would have rights to copy the work, distribute it, display or perform it publicly, and make derivative works.

Robot rights.

As for the AI, here’s where things get interesting. U.S. copyright law generally does not recognize AI systems as authors, so they would not have any rights in the work. But this is a rapidly evolving field, and there is ongoing debate about how the law should treat creations made by AI.

This leaves us in a peculiar situation. You have a “joint work” that a human and an AI created together, but only the human can be an author. So, as it stands, the AI would not have any rights in the work, and the human would. Here’s an interesting nuance to consider: authors of joint works are pretty much free to use the work as they see fit, so long as they fulfill certain obligations to the other authors (e.g., account for any royalties received). Does the human-owner have to fulfill these obligations to the purported AI-author of the joint work? It seems we cannot fairly address that question if we have not yet established that the AI system can be a joint author in the first place.

Where we go from here.

It seems reasonable to conclude that a human editing AI-created content might qualify as a joint author if the changes are significant and creative, not just technical. If that’s the case, the human would have full copyright rights under current law, while the AI would not have any. As these human-machine collaborations become more commonplace, we will see how law and policy evolve to either strengthen the position that only “natural persons” (humans) can own intellectual property rights, or to move in the direction of granting some sort of “personhood” to non-human agents. It is like watching science fiction unfold in real time.

What do you think?


Five legal issues around using AI in a branding strategy


The ability of AI to gather, analyze, and interpret large sets of data can lead to invaluable insights and efficiencies. But as businesses increasingly rely on AI to develop and execute branding strategies, they must be aware of the potential legal issues that can arise. Here are five issues to consider:

  • Data Protection and Privacy Laws: AI systems often require vast amounts of data to operate effectively, much of which may be personal data collected from customers. This brings into play data protection and privacy laws, such as the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the United States. Non-compliance with these laws can lead to substantial fines and reputational damage. So businesses must seek to ensure that their use of AI complies with all applicable data protection and privacy laws.

  • Intellectual Property Rights: AI systems can generate content, designs, or even brand names. But who owns the rights to this AI-generated output? This is a complex and evolving area of law, with different jurisdictions taking different approaches. Businesses need to consider intellectual property issues, both in protecting their own rights and in not infringing the rights of others.

  • Bias and Discrimination: AI systems learn from the data on which they are trained. If this data contains biases, the AI system can amplify them, leading to potentially discriminatory outcomes. This has not only ethical implications but also legal ones. In many jurisdictions, businesses can be held liable for discriminatory practices, even if unintentional. Businesses should ensure their AI systems are trained on diverse and representative data sets and regularly audited for bias (a simple sketch of one such audit appears after this list).

  • Transparency and Explainability: Many jurisdictions are considering regulations that require AI systems to be transparent and explainable. This means that businesses must be able to explain how their AI systems make decisions. If a customer feels they have been unfairly treated by an AI system, the business may need to justify the AI’s decision-making process. Compliance with these requirements can be challenging, particularly with complex AI systems.

  • Contractual Obligations and Liability: When businesses use third-party AI systems, it is crucial to clearly understand and define who is responsible if something goes wrong. This includes potential breaches of data protection laws, intellectual property infringement, and any harm caused by the AI system. Businesses should ensure their contracts with AI vendors clearly outline the responsibilities and liabilities of each party.
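
To make the point about auditing for bias concrete, here is a minimal, hypothetical sketch in Python of one common check: the demographic parity gap, i.e., how much an AI system’s favorable-outcome rate differs across groups. The function name, sample data, and group labels below are all invented for illustration; a real audit would be far more rigorous and tailored to the applicable law.

# Hypothetical bias audit: compute the demographic parity gap, the
# difference between the highest and lowest favorable-outcome rates
# across groups. All data here is invented for illustration only.
from collections import defaultdict

def demographic_parity_gap(records):
    """records: iterable of (group, outcome) pairs; outcome is 1 if favorable."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    # Favorable-outcome rate per group, then the spread between extremes.
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap, rates = demographic_parity_gap(sample)
print(rates)          # {'A': 0.666..., 'B': 0.333...}
print(round(gap, 2))  # 0.33 -- a spread this large would warrant closer review

Even a rough check like this, run regularly, can surface disparities worth investigating before they ripen into legal exposure.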

While AI presents numerous opportunities for enhancing a branding strategy, it also introduces a range of legal considerations. Businesses must navigate these potential legal pitfalls carefully so that they can leverage the power of AI while minimizing legal risk.
