
Lawyer gets called out a second time for using ChatGPT in court brief

You may recall the case of Park v. Kim, wherein the Second Circuit excoriated an attorney for using ChatGPT to generate a brief that contained a bunch of fake cases. Well, the same lawyer responsible for that debacle has been found out again, this time in a case where she is the pro se litigant.

Plaintiff sued Delta Air Lines for racial discrimination. She filed a motion for leave to amend her complaint, which the court denied. In discussing the denial, the court observed the following:

[T]he Court maintains serious concern that at least one of Plaintiff’s cited cases is non-existent and may have been a hallucinated product of generative artificial intelligence, particularly given Plaintiff’s recent history of similar conduct before the Second Circuit. See Park v. Kim, 91 F.4th 610, 612 (2d Cir. 2024) (“We separately address the conduct of Park’s counsel, Attorney Jae S. Lee. Lee’s reply brief in this case includes a citation to a non-existent case, which she admits she generated using the artificial intelligence tool ChatGPT.”).

In Park v. Kim, the court referred plaintiff for potential disciplinary action. The court in this case was more lenient, merely denying her motion for leave to amend and eventually dismissing the case on summary judgment.

Jae Lee v. Delta Air Lines, Inc., 2024 WL 1230263 (E.D.N.Y. March 22, 2024)


AI and voice clones: Three things to know about Tennessee’s ELVIS Act

On March 21, 2024, the governor of Tennessee signed the ELVIS Act (the Ensuring Likeness, Voice, and Image Security Act of 2024) which is aimed at the problem of people using AI to simulate voices in a way not authorized by the person whose voice is being imitated.

Here are three key things to know about the new law:

(1) Voice defined.

The law adds the following definition to existing Tennessee law:

“Voice” means a sound in a medium that is readily identifiable and attributable to a particular individual, regardless of whether the sound contains the actual voice or a simulation of the voice of the individual;

There are a couple of interesting things to note. One could generate or use the voice of another without using the other person’s name. The voice simply has to be “readily identifiable” and “attributable” to a particular human. Those are fairly open concepts, and we can expect quite a bit of litigation over what it takes for a voice to be identifiable and attributable to another. Would this cover situations where a person naturally sounds like another, or is merely trying to imitate another’s musical style?

(2) Voice is now a property right.

The following underlined words were added to the existing statute:

Every individual has a property right in the use of that individual’s name, photograph, voice, or likeness in any medium in any manner.

The word “person’s” was changed to “individual’s” presumably to clarify that this is a right belonging to a natural person (i.e., real human beings and not companies). And of course the word “voice” was added to expressly include that attribute as something in which the person can have a property interest.

(3) Two new things are banned under law.

The following two paragraphs have been added:

A person is liable to a civil action if the person publishes, performs, distributes, transmits, or otherwise makes available to the public an individual’s voice or likeness, with knowledge that use of the voice or likeness was not authorized by the individual or, in the case of a minor, the minor’s parent or legal guardian, or in the case of a deceased individual, the executor or administrator, heirs, or devisees of such deceased individual.

A person is liable to a civil action if the person distributes, transmits, or otherwise makes available an algorithm, software, tool, or other technology, service, or device, the primary purpose or function of which is the production of an individual’s photograph, voice, or likeness without authorization from the individual or, in the case of a minor, the minor’s parent or legal guardian, or in the case of a deceased individual, the executor or administrator, heirs, or devisees of such deceased individual.

With this language, we see the heart of the new law’s impact. One can sue another for making his or her voice publicly available without permission. Note that this restriction is not only on commercial use of another’s voice. Most states’ laws discussing name, image and likeness restrict commercial use by another. This statute is broader, and would make more things unlawful, for example, creating a deepfaked voice simply for fun (or harassment, of course), if the person whose voice is being imitated has not consented.

Note the other interesting new prohibition, the one on making available tools having as their “primary purpose or function” the production of another’s voice without authorization. If you were planning on launching that new app where you can make your voice sound like a celebrity’s voice, consider whether this Tennessee statute might shut you down.


VIDEO: What is the Apple antitrust lawsuit about?

On March 21, 2024, the U.S. government, 15 states and the District of Columbia filed an antitrust lawsuit against Apple. What is the case about?

The government says Apple built a dominant iPhone ecosystem, driving its high valuation. But Apple faced threats from other products, particularly Android devices. And in response, it did not lower prices or offer better terms to developers and consumers. Instead, it imposed complex rules and fees through its App Store and developer agreements, stifling innovation and limiting competition.

Apple’s actions have increased its smartphone dominance and expanded its control to digital wallets and smartwatches by restricting their compatibility with non-Apple products. And this has had broader implications in other industries. The government claims Apple has stifled innovation and competition tied to smartphone technology, such as financial services, entertainment, and more.

So the case seeks to address Apple’s anticompetitive behavior. It aims to restore competition, lower prices for consumers, reduce fees for developers, and encourage innovation. The case is particularly interesting in how it highlights the contrast between Apple’s early days as an innovative startup and its current status as a monopolist.

The government says that this has drastically hurt market competition and consumers.

Bitcoin miner denied injunction against colocation service provider accused of removing rigs

Plaintiff Bitcoin miner sued defendant colocation hosting provider for breach of contract, conversion, and trespass to chattels under Washington law. After filing suit, plaintiff filed a motion for temporary restraining order against defendant, seeking to require defendant to restore plaintiff’s access to the more than 1,000 mining rigs that defendant allegedly removed from its hosting facility. The court denied the motion, finding that plaintiff had established only possible economic injury, not the kind of irreparable harm required for the issuance of a temporary restraining order.

The underlying agreement

In July 2021, the parties entered into an agreement whereby plaintiff would colocate 1,610 cryptocurrency mining rigs at defendant’s facility. Plaintiff had obtained a loan to purchase the rigs for over $6 million. Defendant was to operate the rigs at a high hash rate to efficiently mine Bitcoin, with defendant earning a portion of the mined BTC.

After plaintiff defaulted on its loan, however, in early 2023, defendant allegedly reduced the available power to the rigs, despite plaintiff having cured the delinquency. Plaintiff claimed this reduced power likewise reduced the amount of Bitcoin it mined, and claimed that defendant reallocated resources to other miners in its facility from whom it could earn more money.

The discord between the parties continued through late 2023 and early 2024, with 402 rigs being removed, followed by defendant’s eventual termination of the agreement. The parties then fell into disputes over the removal of the remaining rigs and fees defendant alleged plaintiff had not paid. In early March 2024, plaintiff attempted to retake possession of its rigs, only to allegedly find defendant’s facility empty and abandoned. This lawsuit followed.

No irreparable harm

The court observed that under applicable law, a party seeking injunctive relief must proffer evidence sufficient to establish a likelihood of irreparable harm; mere speculation of irreparable harm does not suffice. Moreover, the court noted, irreparable harm is traditionally defined as harm for which there is no adequate legal remedy, such as an award of damages. Further, the court stated that it is well established that economic injury alone does not support a finding of irreparable harm, because such injury can be remedied by a damage award.

In this situation, the court found no irreparable harm to plaintiff. The court distinguished this case from EZ Blockchain LLC v. Blaise Energy Power, Inc., 589 F. Supp. 3d 1102 (D.N.D. 2022), in which a court granted a temporary restraining order against a datacenter provider who had threatened to sell its customer’s rigs. In that case, the court found irreparable harm based on the fact that the miners were sophisticated technology and could not be easily replaced.

The court in this case found there was no evidence defendant was going to sell off plaintiff’s equipment. It was similarly unpersuaded that the upcoming Bitcoin halving (anticipated in April 2024) created extra urgency for plaintiff to have access to its rigs before that event, after which mining Bitcoin would be less profitable. Instead, the court found that any losses could be compensated via money damages. And since plaintiff had not provided any evidence to support the idea it would be forced out of business in these circumstances, the court found it appropriate to deny plaintiff’s motion for a temporary restraining order.

Block Mining, Inc. v. Hosting Source, LLC, 2024 WL 1156479 (W.D. Wash. March 18, 2024)


Woz gets another (small) bite at the apple in YouTube bitcoin scam case

Apple co-founder Steve Wozniak sued YouTube and Google asserting various causes of action, including misappropriation of likeness, fraud, and negligence. The case arose from a common scam on YouTube, where popular channels are hijacked to show fake videos of a celebrity hosting a live event during which viewers are falsely told that anyone who sends cryptocurrency to a specified account will receive twice as much in return. Woz’s YouTube account was hijacked for these purposes, and several of the resulting victims joined him in the lawsuit.

The lower court tossed the case, holding that Section 230 shielded YouTube and Google from liability for the third party content giving rise to the scam. Woz and the other plaintiffs sought review with the California Court of Appeal, which largely agreed with the lower court on the Section 230 issue, except for one part. The court allowed plaintiffs to file an amended complaint on this one issue.

Plaintiffs claimed that Google and YouTube contributed to scam ads and videos, thereby positioning defendants outside Section 230 immunity. They argued, among other things, that YouTube displayed false verification badges, thereby becoming active content providers contributing to the scam’s fraudulent nature.

The court found that although plaintiffs’ complaint suggested that defendants’ actions could strip them of Section 230 immunity by implying a level of endorsement or authenticity, the allegations were too conclusory as written to establish defendants as information content providers. So the court allowed for the possibility of amending these claims, indicating that a more detailed argument might better establish defendants’ direct contribution to the content’s illegality.

Wozniak v. YouTube, LLC, — Cal.Rptr.3d —, 2024 WL 1151750 (Cal.App. 6th Dist., March 15, 2024)


Months long video surveillance of house did not violate the Fourth Amendment


“As video cameras proliferate throughout society, regrettably, the reasonable expectation of privacy from filming is diminished.”

Defendant was convicted of stealing government funds and of wire fraud for receiving disability benefits provided to veterans when in fact defendant – though a veteran – was not disabled. Part of the evidence the government used against defendant was video footage obtained from a pole camera the government had set up on the roof of a school across the street from defendant’s home. It surveilled his house for 15 hours a day for 68 days. After being convicted, defendant sought review with the Tenth Circuit Court of Appeals, arguing that the near-continual surveillance of his house was an unreasonable search under the Fourth Amendment. The court disagreed and affirmed the conviction.

The development of a reasonable expectation of privacy

The court observed the importance of the notion of a citizen’s “reasonable expectation of privacy,” a concept that has evolved over time from its original ties to common-law trespass to encompass a broader range of privacy expectations recognized by society as legitimate.

Historically, the Supreme Court has maintained that activities exposed to public view do not enjoy a reasonable expectation of privacy. For example, in California v. Ciraolo, 476 U.S. 207 (1986), the court held that warrantless observation of a home’s exterior from public airspace was not a Fourth Amendment violation on the grounds that such observation did not penetrate private, concealed areas.

In Kyllo v. United States, 533 U.S. 27 (2001), the court held that the use of thermal imaging to discern details within a home, unobservable to the naked eye, was a search requiring a warrant. This marked a departure towards acknowledging privacy infringements facilitated by technology not widely available to the public.

In United States v. Jackson, 213 F. 3d 1269 (10th Cir. 2000), the Tenth Circuit held that video surveillance capturing activity visible without enhancement did not violate the Fourth Amendment. The court grounded its decision in the principle that what one knowingly exposes to public observation falls outside the Fourth Amendment’s protection. The surveillance in question, similar to the one in this case, involved recording the exterior of a residence, capturing scenes observable from public vantage points, thus not constituting a search under the Fourth Amendment.

But in this case, the surveillance was constant

In this case, defendant relied heavily on Carpenter v. United States, 138 S. Ct. 2206 (2018), where the Supreme Court ruled that accessing historical cell-site location information constituted a search under the Fourth Amendment. This decision underscored the intrusive potential of prolonged surveillance, highlighting the significant privacy concerns associated with compiling a comprehensive record of an individual’s movements over time. But the court in this case observed that the scope of the Carpenter decision was explicitly narrow, not extending to conventional surveillance methods such as security cameras.

So the court distinguished the present situation from Carpenter, noting that the pole camera only captured what was visible from the street and did not provide a comprehensive record of defendant’s movements beyond the monitored location. Accordingly, in the court’s view, the surveillance did not infringe upon the reasonable expectation of privacy as articulated in Carpenter, which pertained to the aggregate of an individual’s movements over an extended period.

More technology = changing norms regarding privacy

Furthermore (in probably the most intriguing part of the opinion), the court noted the evolving societal norms around privacy, especially in the context of the widespread proliferation of cameras in public and private spheres. This ubiquity of video recording technology, coupled with the societal acclimatization to being recorded, has inevitably influenced expectations of privacy. As surveillance technologies become more integrated into everyday life, the threshold for what constitutes a “reasonable expectation of privacy” shifts, reflecting the dynamic interplay between technological advancements and societal norms.

So the court concluded that defendant did not have a reasonable expectation of privacy concerning the footage captured by the pole camera, as it only recorded what was visible to any passerby from the street.

United States v. Hay, — F.4th — 2024 WL 1163349 (10th Cir., March 19, 2024)


MetaBirkins defendant denied opportunity to exhibit NFT artwork in Swedish museum

In February 2023, Sonny Estival, known by his pseudonym “Mason Rothschild,” was found liable by a jury on a number of claims, including intentional trademark infringement, trademark dilution, and cybersquatting against luxury brand Hermès. The court ordered Estival to pay $133,000 in damages to Hermès and issued a comprehensive permanent injunction against him and his associates. This injunction specifically prohibited the production, distribution, and promotion of “MetaBirkins” non-fungible tokens (NFTs) and related merchandise, aiming to prevent any association or confusion with Hermès’s “Birkin” trademark.

In January 2024, Estival sought clarification from the court regarding the scope of the permanent injunction, particularly whether it would prevent him from allowing a Swedish museum to exhibit his MetaBirkins artworks as part of an exhibition on Andy Warhol and Business Art. Despite his claims that the museum’s display would not imply any association with Hermès and would even include mention of the lawsuit and its outcome, Hermès opposed this motion. The court held an evidentiary hearing, and after considering submissions from both parties and testimony from museum representatives, denied Estival’s motion. The court could not conclude that the proposed exhibition would comply with the injunction’s terms, given the lack of detailed information about the nature of the permission Estival would be granting to the museum, especially concerning the promotion of the exhibit and potential merchandising.

The court’s decision was heavily influenced by the context of Estival’s previous actions and the jury’s findings, which characterized him as intentionally misleading the public to associate his NFTs with Hermès’s Birkin brand. Despite the museum’s assurance that the exhibit would not suggest any affiliation with Hermès, the court remained unconvinced, especially given discrepancies in the museum representatives’ testimonies regarding how the lawsuit and Estival’s infringement would be presented to the public.

Hermès Int’l v. Rothschild, 2024 WL 1089427 (S.D.N.Y. March 13, 2024)


Utah has a brand new law that regulates generative AI

On March 15, 2024, the Governor of Utah signed a bill that implements new law in the state regulating the use and development of artificial intelligence.  Here are some key things you should know about the law.

  • The statute adds to the state’s consumer protection laws, which govern things such as credit services, car sales, and online dating. The new law says that anyone accused of violating a consumer protection law cannot blame it on the use of generative AI (like Air Canada apparently attempted to do back in February).
  • The new law also says that if a person involved in any act covered by the state’s consumer protection laws asks the company she’s dealing with whether she is interacting with an AI, the company has to clearly and conspicuously disclose that fact.
  • And the law says that anyone providing services as a regulated occupation in the state (for example, an architect, surveyor or a therapist) must disclose in advance any use of generative AI. The statute outlines the requirements for these notifications.
  • In addition to addressing consumer protection, the law also establishes a plan for the state to further innovation in artificial intelligence. The new law introduces a regulatory framework for an AI learning laboratory to investigate AI’s risks and benefits and to guide regulation of AI development.
  • The statute discusses requirements for participation in the program and also provides certain incentives for the development of AI technologies, including “regulatory mitigation” to adjust or ease certain regulatory requirements for participants and reduce potential liability.

This law is the first of its kind, and other states are likely to enact similar laws. Much more to come on this topic.

On FOX 2 Detroit talking about the TikTok ban

Earlier today I enjoyed appearing live on Fox 2 Detroit talking about the TikTok ban. We discussed what the act that the House of Representatives passed says, what it would mean for social media users, and the free speech litigation that will no doubt follow if the bill passes in the Senate and the President signs it. It’s a very intriguing issue.
