Ex-wife held in contempt for posting on TikTok about her ex-husband

An ex-husband sought to have his ex-wife held in contempt for violating an order the divorce court had entered in 2022, which required the ex-wife to take down social media posts that could make the ex-husband identifiable.

The ex-husband alleged that the ex-wife continued to post content on her TikTok account that made him identifiable as her ex-husband. The ex-wife argued that she did not name the ex-husband directly and that her social media was part of her work as a trauma therapist. But the family court found that the ex-wife’s posts violated the previous order because they made the ex-husband identifiable, and also noted that the children could be heard in the background of some videos. As a result, the court held the ex-wife in contempt and ordered her to pay $1,800 toward the ex-husband’s attorney fees.

The ex-wife appealed the contempt ruling, arguing that the ex-husband did not present enough evidence to support his claim and that she had not violated the order. She also disputed the attorney fees. On appeal, the court affirmed the contempt finding, agreeing that her actions violated the order, but vacated the award of attorney fees due to insufficient evidence of the amount.

Three reasons why this case matters:

  • It illustrates the legal consequences of violating court orders in family law cases.
  • It emphasizes the importance of clarity in social media use during ongoing family disputes.
  • It highlights the need for clear evidence when courts are asked to impose financial sanctions such as attorney fees.

Kimmel v. Kimmel, 2024 WL 4521373 (Ky. Ct. App., October 18, 2024)

Online agreement to arbitrate not enforceable

Plaintiff sued defendant gaming company alleging violation of Washington state laws addressing gambling and consumer protection. Plaintiff claimed that after starting with free chips in defendant’s online casino games, users had to buy more chips to keep playing. Plaintiff had spent money on the games and argued that defendant’s practices were unfair.

Defendant moved to dismiss the case and asked the court to compel arbitration. Defendant argued that plaintiff had agreed to defendant’s terms of service, which included an arbitration clause. The company claimed that by playing the games, plaintiff was bound to these terms, even though plaintiff did not explicitly sign a contract.

The court denied the motion to dismiss. It found that defendant did not provide enough information to show that plaintiff had been given proper notice of the terms of service or that he agreed to them. The notice on the game’s homepage was not clear or conspicuous enough for a reasonable person to understand that they were agreeing to the terms, including arbitration, just by playing the games.
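
For a sense of what would more likely satisfy a “clear and conspicuous” notice standard, here is a minimal TypeScript sketch of an explicit assent flow. It is hypothetical: the element IDs, the terms version label, and the startGame function are illustrative assumptions, not drawn from defendant’s actual site.

    // Hypothetical sketch of a conspicuous "clickwrap" assent flow.
    // Element IDs, the terms version label, and startGame() are assumptions.
    const termsCheckbox = document.getElementById("agree-to-terms") as HTMLInputElement;
    const playButton = document.getElementById("play-button") as HTMLButtonElement;

    playButton.disabled = true; // no gameplay until the user affirmatively assents

    termsCheckbox.addEventListener("change", () => {
      // Enable play only after the user checks a box placed next to a prominent
      // link to the terms of service, including the arbitration clause.
      playButton.disabled = !termsCheckbox.checked;
    });

    playButton.addEventListener("click", () => {
      // Record the assent (timestamp plus terms version) so the company can
      // later prove notice and agreement if it moves to compel arbitration.
      console.log("Assented to terms v2024-10 at", new Date().toISOString());
      startGame();
    });

    function startGame(): void {
      // launch gameplay
    }

A flow along these lines creates the evidentiary record that was missing here: an affirmative act of agreement tied to conspicuous notice of the terms.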

Three reasons why this case matters:

  • Consumer Protection: It highlights the importance of businesses providing clear and understandable terms to consumers.
  • Online Contracts: The case shows that courts are careful when it comes to online agreements, requiring companies to ensure consumers are fully aware of the terms.
  • Arbitration Clauses: This case reinforces that arbitration clauses must be clearly presented and agreed upon to be enforceable.

Kuhk v. Playstudios, Inc., 2024 WL 4529263 (W.D. Wash., October 18, 2024)

Recent case applies VHS-era law to modern digital privacy

Plaintiff sued the NBA, accusing it of violating the Video Privacy Protection Act, 18 U.S.C. § 2710 (VPPA). Plaintiff claimed that after signing up for the NBA’s online newsletter and watching videos on NBA.com, the NBA shared his viewing history with Meta without his permission. The district court dismissed the case and plaintiff sought review with the Second Circuit. On review, the court vacated and remanded the case for further proceedings.

What is the VPPA?

The VPPA, enacted in 1988, aims to protect consumers’ privacy by restricting video tape service providers from sharing personally identifiable information without consent. The historical circumstances surrounding its enactment, particularly involving Robert Bork, are worth taking a few minutes to read up on.

Key issue – what’s a consumer here?

Plaintiff argued that he qualified as a “consumer” under the VPPA’s definition, which includes any “renter, purchaser, or subscriber of goods or services.” He contended that by providing his email and other personal data in exchange for the NBA’s newsletter, he became a “subscriber,” thus entitling him to privacy protections. According to plaintiff, the NBA’s practice of embedding a “Facebook Pixel” on its website allowed Meta to track users’ video-watching behavior, which constituted a violation of the VPPA’s restrictions.
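
To make the mechanics concrete, below is a simplified TypeScript sketch of how a page embedding the Meta Pixel typically reports activity. The fbq() calls follow the Pixel’s standard interface, but the video event payload is an illustrative assumption, not the NBA’s actual implementation.

    // Simplified sketch of Meta Pixel-style event reporting. The video payload
    // below is illustrative and not drawn from NBA.com's actual code.
    declare function fbq(...args: unknown[]): void; // global set up by the Pixel script

    fbq("init", "PIXEL_ID_PLACEHOLDER"); // site operator's pixel ID (placeholder)
    fbq("track", "PageView");            // standard page-view event

    // When a visitor plays a video, the page can fire a content event. If the
    // visitor is logged into Facebook, Meta's cookies can tie the event to an
    // identifiable person, which is the disclosure at the heart of the VPPA claim.
    function onVideoPlay(videoTitle: string): void {
      fbq("track", "ViewContent", {
        content_type: "video",
        content_name: videoTitle, // e.g., the title of the highlight clip watched
      });
    }

    onVideoPlay("Game Highlights");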

The NBA, however, argued that plaintiff did not meet the VPPA’s criteria for a “consumer” because the newsletter subscription did not involve any audiovisual services, as required under the law. The NBA further asserted that plaintiff did not suffer a “concrete” injury, a requirement for Article III standing under the standards set out by SCOTUS in TransUnion LLC v. Ramirez. The NBA maintained that merely signing up for a free newsletter did not establish a sufficient relationship to qualify as a “subscriber.”

Lower court proceedings

The United States District Court for the Southern District of New York ruled in favor of the NBA. While it determined that plaintiff had standing to sue, the court dismissed the case on the grounds that plaintiff failed to establish that he was a “consumer” as defined by the VPPA. The court ruled that the VPPA’s scope was limited to audiovisual goods or services, and an online newsletter did not fit this definition. It concluded that merely signing up for a newsletter did not create a relationship that would extend VPPA protections to plaintiff’s video-watching data.

But the appellate court said…

Plaintiff appealed the decision, and the Second Circuit found that plaintiff sufficiently alleged that he was a “subscriber of goods or services” because he provided personal information in exchange for the NBA’s online newsletter. The court emphasized that the VPPA’s language did not strictly limit “goods or services” to audiovisual content, thus broadening the potential scope of who could be considered a “consumer.” This meant that the case would proceed to further legal proceedings to address the other issues in the dispute.

Three reasons why this case matters:

  • It clarifies modern VPPA applications: The case explores how the VPPA, with its origins in a VHS-centric era, applies to modern digital interactions, like email newsletters and online video streaming.
  • It expands consumer privacy definitions: The court’s interpretation suggests that a “subscriber” could include individuals who exchange personal information for non-monetary services, influencing other privacy claims.
  • It influences digital business practices: It affects how businesses should collect and share user data, potentially increasing scrutiny over partnerships involving data tracking and disclosure to third parties such as Meta.

Salazar v. NBA, — F.4th —, 2024 WL 4487971 (2nd Cir., October 15, 2024)

See also: Casual website visitor who watched videos was not protected under the Video Privacy Protection Act

Section 230 saves eBay from liability for violation of environmental laws

The United States government sued eBay for alleged violations of environmental regulations, claiming the online marketplace facilitated the sale of prohibited products in violation of the Clean Air Act (CAA), the Toxic Substances Control Act (TSCA), and the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA). According to the government’s complaint, eBay allowed third-party sellers to list and distribute items that violated these statutes, including devices that tamper with vehicle emissions controls, products containing methylene chloride used in paint removal, and unregistered pesticides.

eBay moved to dismiss, arguing that the government had failed to adequately state a claim under the CAA, TSCA, and FIFRA, and contending further that it was shielded from liability under Section 230 of the Communications Decency Act (CDA), 47 U.S.C. § 230(c).

The court granted eBay’s motion to dismiss. It held that eBay was immune from liability because of Section 230, which protects online platforms in most situations from being held liable as publishers of third-party content. The court also determined that, as a marketplace, eBay did not “sell” or “offer for sale” the products in question in the sense required by the environmental statutes, since it did not possess, own, or transfer title to the items listed by third-party sellers.

The court found that Section 230 provided broad immunity for eBay’s role as an online platform, preventing it from being treated as the “publisher or speaker” of content provided by its users. As the government sought to impose liability based on eBay’s role in hosting third-party listings, the court concluded that the claims were barred under the CDA.

United States of America v. eBay Inc., 2024 WL 4350523 (E.D.N.Y. September 30, 2024)

Counterfeit lubricant case gets preliminary injunction based on defendant’s slick conduct

A German-based lubricant manufacturer sued a U.S.-based distributor, alleging that the distributor produced and sold counterfeit versions of its products with branding that closely resembled plaintiff’s trademarks. Plaintiff brought claims for trademark infringement, cybersquatting, unfair competition, and other related violations, moving for a preliminary injunction against defendant, which the court granted.

The parties initiated a business relationship in 2019, but they never formalized a distribution agreement. Although plaintiff sent a draft agreement outlining trademark rights and restrictions, it was never executed. Plaintiff asserted that the relationship involved a limited license for defendant to distribute plaintiff’s authentic products, but defendant registered a “GP” mark in the U.S. without plaintiff’s consent. According to plaintiff, this was an unauthorized move, and defendant falsely represented itself as the mark’s legitimate owner.

Plaintiff further alleged that defendant continued to produce and sell lubricants with packaging mimicking plaintiff’s design, misleading consumers into believing they were purchasing legitimate products. Defendant also registered several domain names closely resembling plaintiff’s, which were used to display content imitating plaintiff’s branding and operations.

The court found plaintiff’s evidence of irreparable harm and likelihood of success on the merits compelling, issuing an injunction to stop defendant’s operations and prevent further distribution of the alleged counterfeit goods.

General Petroleum GmbH v. Stanley Oil & Lubricants, Inc., 2024 WL 4143535 (E.D.N.Y., September 11, 2024).

X gets Ninth Circuit win in case over California’s content moderation law

X sued the California attorney general, challenging Assembly Bill 587 (AB 587) – a law that required large social media companies to submit semiannual reports detailing their terms of service and content moderation policies, as well as their practices for handling specific types of content such as hate speech and misinformation. X claimed that this law violated the First Amendment, was preempted by the federal Communications Decency Act, and infringed upon the Dormant Commerce Clause.

Plaintiff sought a preliminary injunction to prevent the government from enforcing AB 587 while the case was pending. Specifically, it argued that being forced to comply with the reporting requirements would compel speech in violation of the First Amendment. Plaintiff asserted that AB 587’s requirement to disclose how it defined and regulated certain categories of content compelled speech about contentious issues, infringing on its First Amendment rights.

The district court denied plaintiff’s motion for a preliminary injunction. It found that the reporting requirements were commercial in nature and survived the lower level of scrutiny applied to commercial speech regulations. Plaintiff sought review with the Ninth Circuit.

On review, the Ninth Circuit reversed the district court’s denial and granted the preliminary injunction. The court found that the reporting requirements compelled non-commercial speech and were thus subject to strict scrutiny under the First Amendment—a much higher standard. Under strict scrutiny, a law is presumed unconstitutional unless the government can show it is narrowly tailored to serve a compelling state interest. The court reasoned that plaintiff was likely to succeed on its claim that AB 587 violated the First Amendment because the law was not narrowly tailored. Less restrictive alternatives could have achieved the government’s goal of promoting transparency in social media content moderation without compelling companies to disclose their opinions on sensitive and contentious categories of speech.

The appellate court held that plaintiff would likely suffer irreparable harm if the law was enforced, as the compelled speech would infringe upon the platform’s First Amendment rights. Furthermore, the court found that the balance of equities and public interest supported granting the preliminary injunction because preventing potential constitutional violations was deemed more important than the government’s interest in transparency. Therefore, the court reversed and remanded the case, instructing the district court to enter a preliminary injunction consistent with its opinion.

X Corp. v. Bonta, 2024 WL 4033063 (9th Cir. September 4, 2024)

Court blocks part of Texas law targeting social media content

Two trade associations – the Computer & Communications Industry Association and NetChoice, LLC – sued the Attorney General of Texas over a Texas law called House Bill 18 (HB 18), which was designed to regulate social media websites. Plaintiffs, who represented major technology companies such as Google, Meta, and X, argued that the law violated the First Amendment and other legal protections. They asked the court for a preliminary injunction to stop the law from being enforced while the case continued.

Plaintiffs challenged several key parts of HB 18. Among other things, the law required social media companies to verify users’ ages, give parents control over their children’s accounts, and block minors from viewing harmful content. Such content included anything that promoted suicide, self-harm, substance abuse, and other dangerous behaviors. Plaintiffs believed that the law unfairly restricted free speech and would force companies to over-censor online content to avoid penalties. Additionally, they claimed the law was vague, leaving companies confused about how to comply.

Defendant argued that the law was necessary to protect children from harmful content online. He asserted that social media companies were failing to protect minors and that the state had a compelling interest in stepping in. He also argued that plaintiffs were exaggerating the law’s impact on free speech and that the law was clear enough for companies to follow.

The court agreed with plaintiffs on some points but not all. It granted plaintiffs a partial preliminary injunction, meaning parts of the law were blocked from being enforced. Specifically, the court found that the law’s “monitoring-and-filtering” requirements were unconstitutional. These provisions forced social media companies to filter out harmful content for minors, which the court said was too broad and vague to survive legal scrutiny. The court also noted that these requirements violated the First Amendment by regulating speech based on its content. But the court allowed other parts of the law, such as parental control tools and data privacy protections, to remain in place, as they did not give rise to the same free speech issues.

Three reasons why this case matters:

  • Free Speech Online: This case highlights ongoing debates about how far the government can go in regulating content on social media without infringing on First Amendment rights.
  • Children’s Safety: While protecting children online is a major concern, the court’s ruling shows the difficulty in balancing safety with the rights of companies and users.
  • Technology Lawsuits: As states try to pass more laws regulating tech companies, this case sets an important standard for how courts may handle future legal battles over internet regulation.

Computer & Communications Industry Association v. Paxton, — F.Supp.3d —, 2024 WL 4051786 (W.D. Tex., August 30, 2024)

Supreme Court weighs in on Texas and Florida social media laws

In a significant case involving the intersection of technology and constitutional law, NetChoice LLC sued Florida and Texas, challenging their social media content-moderation laws. Both states had enacted statutes regulating how platforms such as Facebook and YouTube moderate, organize, and display user-generated content. NetChoice argued that the laws violated the First Amendment by interfering with the platforms’ editorial discretion. It asked the Court to invalidate these laws as unconstitutional.

The Supreme Court reviewed conflicting rulings from two lower courts. The Eleventh Circuit had upheld a preliminary injunction against Florida’s law, finding it likely violated the First Amendment. And the Fifth Circuit had reversed an injunction against the Texas law, reasoning that content moderation did not qualify as protected speech. However, the Supreme Court vacated both decisions, directing the lower courts to reconsider the challenges with a more comprehensive analysis.

The Court explained that content moderation—decisions about which posts to display, prioritize, or suppress—constitutes expressive activity akin to editorial decisions made by newspapers. The Texas and Florida laws, by restricting this activity, directly implicated First Amendment protections. Additionally, the Court noted that these cases involved facial challenges, requiring an evaluation of whether a law’s unconstitutional applications outweigh its constitutional ones. Neither lower court had sufficiently analyzed the laws in this manner.

The Court also addressed a key issue in the Texas law: its prohibition against platforms censoring content based on viewpoint. Texas justified the law as ensuring “viewpoint neutrality,” but the Court found this rationale problematic. Forcing platforms to carry speech they deem objectionable—such as hate speech or misinformation—would alter their expressive choices and violate their First Amendment rights.

Three reasons why this case matters:

  • Clarifies Free Speech Rights in the Digital Age: The case reinforces that social media platforms have editorial rights similar to traditional media, influencing how future laws may regulate online speech.
  • Impacts State-Level Regulation: The ruling limits states’ ability to impose viewpoint neutrality mandates on private platforms, shaping the balance of power between governments and tech companies.
  • Sets a Standard for Facial Challenges: By emphasizing the need to weigh a law’s unconstitutional and constitutional applications, the decision provides guidance for courts evaluating similar cases.

Moody v. NetChoice, et al., 144 S.Ct. 2383 (July 1, 2024)

No Section 230 immunity for Facebook on contract-related claims

Plaintiffs sued Meta, claiming that they were harmed by fraudulent third-party ads posted on Facebook. Plaintiffs argued that these ads violated Meta’s own terms of service, which prohibit deceptive advertisements. They accused Meta of allowing scammers to run ads that targeted vulnerable users and of prioritizing revenue over user safety. Meta moved to dismiss, claiming that it was immune from liability under 47 U.S.C. § 230(c)(1) (a portion of the Communications Decency Act (CDA)), which generally protects internet platforms from being held responsible for third-party content.

Plaintiffs asked the district court to hold Meta accountable for five claims: negligence, breach of contract, breach of the covenant of good faith and fair dealing, violation of California’s Unfair Competition Law (UCL), and unjust enrichment. They alleged that Meta not only failed to remove scam ads but actively solicited them, particularly from advertisers based in China, who accounted for a large portion of the fraudulent activity on the platform.

The district court held that § 230(c)(1) protected Meta from all claims, even the contract claims. Plaintiffs sought review with the Ninth Circuit.

On appeal, the Ninth Circuit affirmed that § 230(c)(1) provided Meta with immunity for the non-contract claims, such as negligence and UCL violations, because these claims treated Meta as a publisher of third-party ads. But the Ninth Circuit disagreed with the district court’s ruling on the contract-related claims. It held that the lower court had applied the wrong legal standard when deciding whether § 230(c)(1) barred those claims. So the court vacated the dismissal of the contract claims, explaining that contract claims were different because they arose from Meta’s promises to users, not from its role as a publisher. The case was remanded to the district court to apply the correct standard for the contract claims.

Three reasons why this case matters:

  • It clarifies that § 230(c)(1) of the CDA does not provide blanket immunity for all types of claims, especially contract-related claims.
  • The case underscores the importance of holding internet companies accountable for their contractual promises to users, even when they enjoy broad protections for third-party content.
  • It shows that courts continue to wrestle with the boundaries of platform immunity under the CDA, which could shape future rulings about online platforms’ responsibilities.

Calise v. Meta Platforms, Inc., 103 F.4th 732 (9th Cir., June 4, 2024)

Federal judge optimistic about the future of courts’ use of AI

An Eleventh Circuit concurrence could be a watershed moment for discourse on the judiciary’s future use of AI.

In a thought-provoking recent opinion, Judge Kevin Newsom of the Eleventh Circuit Court of Appeals discussed the potential use of AI-powered large language models (LLMs) in legal text interpretation. Known for his commitment to textualism and plain-language interpretation, Judge Newsom served as Solicitor General of Alabama and clerked for Justice David Souter of the U.S. Supreme Court before President Trump appointed him to the bench in 2017. His concurrence in Snell v. United Specialty Insurance Company, — F.4th —, 2024 WL 2717700 (11th Cir., May 28, 2024), unusual in its approach, aimed to pull back the curtain on how legal professionals can leverage modern technology to enhance judicial processes. It takes an optimistic and hopeful tone on how LLMs could improve judges’ decision making, particularly when examining the meaning of words.

Background of the Case

The underlying case involved a plaintiff (Snell) who installed an in-ground trampoline for a customer. Snell got sued over the work and the defendant insurance company refused to pick up the tab. One of the key questions in the litigation was whether this work fell under the term “landscaping” as used in the insurance policy. The parties and the courts had anguished over the ordinary meaning of the word “landscaping,” relying heavily on traditional methods such as consulting a dictionary. Ultimately, the court resolved the issue based on a unique aspect of Alabama law and Snell’s insurance application, which explicitly disclaimed any recreational or playground equipment work. But the definitional debate highlighted the complexities in interpreting legal texts and inspired Judge Newsom’s proposal to consider AI’s role in this process.

Judge Newsom’s Proposal

Judge Newsom’s proposal is both provocative and forward-thinking, discussing how to effectively incorporate AI-powered LLMs such as ChatGPT, Gemini, and Claude into the interpretive analysis of legal texts. He acknowledged that this suggestion may initially seem “heretical” to some but believed it was worth exploring. The basic rationale is that LLMs, trained on vast amounts of data reflecting everyday language use, could provide valuable insights into the ordinary meanings of words and phrases.
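
As a concrete illustration, a judge’s clerk could pose the Snell question to a model in a few lines of TypeScript. This is a hedged sketch assuming OpenAI’s chat completions REST API; the prompt wording and model choice are illustrative assumptions, not Judge Newsom’s actual queries.

    // Hypothetical sketch: asking an LLM about ordinary meaning via OpenAI's
    // chat completions API. Prompt wording and model choice are assumptions.
    async function askOrdinaryMeaning(term: string, usage: string): Promise<string> {
      const response = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-4o",
          messages: [{
            role: "user",
            content: `What is the ordinary meaning of "${term}"? ` +
                     `In everyday usage, does it include ${usage}?`,
          }],
        }),
      });
      const data = await response.json();
      return data.choices[0].message.content; // the model's answer text
    }

    // The definitional question from Snell:
    askOrdinaryMeaning("landscaping", "installing an in-ground trampoline")
      .then((answer) => console.log(answer));

Judge Newsom’s transparency point maps naturally onto running the same query against several models (e.g., ChatGPT, Gemini, Claude) and comparing the answers.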

The concurrence reads very differently – in its solicitous treatment of AI – than many other early cases dealing with litigants’ use of AI, such as J.G. v. New York City Dept. of Education, 2024 WL 728626 (February 22, 2024). In that case, the court found ChatGPT to be “utterly and unusually unpersuasive.” The present case has an entirely different attitude toward AI.

Strengths of Using LLMs for Ordinary Language Determinations

Judge Newsom systematically examined the various benefits of judges’ use of LLMs. He commented on the following issues and aspects:

  • Reflecting Ordinary Language: LLMs are trained on extensive datasets from the internet, encompassing a broad spectrum of language use, from academic papers to casual conversations. This training allows LLMs to offer predictions about how ordinary people use language in everyday life, potentially providing a more accurate reflection of common speech than traditional dictionaries.
  • Contextual Understanding: Modern LLMs can discern context and differentiate between various meanings of the same word based on usage patterns. This capability could be particularly useful in legal interpretation, where context is crucial.
  • Accessibility and Transparency: LLMs are increasingly accessible to judges, lawyers, and the general public, offering an inexpensive and transparent research tool. Unlike the opaque processes behind some dictionary definitions, LLMs can provide clear insights into their training data and predictive mechanisms.
  • Empirical Advantages: Compared to traditional empirical methods such as surveys and “corpus linguistics,” LLMs are more practical and less susceptible to manipulation. They offer a scalable solution that can quickly adapt to new data.

Challenges and Considerations

But the Judge’s take was not completely rosy. He acknowledged certain potential downsides and vulnerabilities in the use of AI for making legal determinations. Even in this critique, though, his approach remained optimistic:

  • Hallucinations: LLMs can generate incorrect or fictional information. However, Judge Newsom argued that this issue is not unique to AI and that human lawyers also make mistakes or manipulate facts.
  • Representation: LLMs may not fully capture offline speech, potentially underrepresenting certain populations. This concern needs addressing, but Judge Newsom stated it does not fundamentally undermine the utility of LLMs.
  • Manipulation Risks: There is a risk of strategic manipulation of LLM outputs. However, this risk exists with traditional methods as well, and transparency in querying multiple models can mitigate it.
  • Dystopian Fears: Judge Newsom emphasized that LLMs should not replace human judgment but serve as one of many tools in the interpretive toolkit.

Future Directions

Judge Newsom concluded by suggesting further exploration into the proper querying of LLMs, refining the outputs, and ensuring that LLMs can handle temporal considerations for interpreting historical texts (i.e., a word must be given the meaning it had when it was written). These steps could maximize the utility of AI in legal interpretation, ensuring it complements rather than replaces traditional methods.

Snell v. United Specialty Insurance Company, — F.4th —, 2024 WL 2717700 (11th Cir., May 28, 2024)

 
