Section 230 protected Meta from Huckabee cannabis lawsuit

Mike Huckabee, the former governor of Arkansas, sued Meta Platforms, Inc., the parent company of Facebook, over advertisements for CBD products that used his name and likeness without his permission. Huckabee argued that these ads falsely claimed he endorsed the products and made misleading statements about his personal health. He asked the court to hold Meta accountable under various legal theories, including violation of his publicity and privacy rights.

Plaintiff alleged that defendant approved and maintained advertisements that misappropriated plaintiff’s name, image, and likeness. Plaintiff further claimed that the ads placed plaintiff in a false light by attributing statements and endorsements to him that he never made. Additionally, plaintiff argued that defendant had been unjustly enriched by profiting from these misleading ads. Defendant, however, sought to dismiss the claims, relying on the Communications Decency Act at 47 U.S.C. §230, which grants immunity to platforms for third-party content.

The court granted Meta’s motion to dismiss. It determined that Section 230 shielded defendant from liability for the third-party content at issue. The court also noted that plaintiff’s allegations lacked the specificity needed to overcome the protections provided by Section 230. Furthermore, the court emphasized that federal law, such as Section 230, preempts conflicting state laws, such as Arkansas’s Frank Broyles Publicity Protection Act.

Three reasons why this case matters:

  • Defines Section 230 Protections: It reaffirms the broad immunity tech companies enjoy under Section 230, even in cases involving misuse of publicity rights.
  • Digital Rights and Privacy: The case highlights the tension between protecting individual rights and maintaining the free flow of online content.
  • Challenges for State Laws: It shows how federal law can preempt state-specific protections, leaving individuals with limited recourse.

Mike Huckabee v. Meta Platforms, Inc., 2024 WL 4817657 (D. Del. Nov. 18, 2024)

X can claim trespass to chattel in data scraping case


X Corp. sued Bright Data Ltd. for unauthorized access to X’s servers and the scraping and resale of data from X’s platform. Plaintiff sought the court’s permission to file a second amended complaint after the court dismissed its prior complaint. The court granted plaintiff’s motion in part and denied it in part, allowing some claims to proceed while dismissing others.

Plaintiff alleged that defendant’s scraping activities caused significant harm to its systems. According to plaintiff, defendant’s automated scraping overwhelmed servers, causing system glitches and forcing plaintiff to purchase additional server capacity. Plaintiff further alleged that defendant used deceptive techniques, including fake accounts and rotating IP addresses, to bypass technical barriers and access non-public data. Plaintiff claimed that these actions violated its Terms of Service, interfered with its contracts, and constituted unfair and fraudulent business practices. Plaintiff also introduced new claims under federal and state anti-hacking laws, including the Digital Millennium Copyright Act and the Computer Fraud and Abuse Act.

The court agreed with plaintiff on several points. It allowed claims related to server impairment, including trespass to chattels and breach of contract, to move forward. The court found that plaintiff’s revised complaint provided sufficient details to plausibly allege harm to its servers and unauthorized access to its systems.

However, the court dismissed claims concerning the scraping and resale of data, ruling that they were preempted by the Copyright Act. Plaintiff had argued that it could prevent defendant from copying user-generated or non-copyrightable data through state-law claims. The court disagreed, holding that such claims conflicted with federal copyright policy, which limits protections for factual data and prioritizes public access. Additionally, the court rejected plaintiff’s argument that defendant’s actions constituted “unfair” business practices, finding no evidence of harm to competition.

Finally, the court allowed plaintiff to proceed with its new anti-hacking claims but left the door open for defendant to challenge these allegations later in the case.

Three reasons why this case matters:

  • Defines Platform Rights: This case clarifies the limits of platform operators’ ability to control user-generated and public data.
  • Reinforces Copyright Preemption: The decision highlights the importance of federal copyright laws in preventing conflicting state-law claims.
  • Explores Anti-Hacking Laws: It illustrates how federal and state anti-hacking statutes may be used to address unauthorized access in the digital age.

X Corp. v. Bright Data Ltd., 2024 WL 4894290 (N.D. Cal. Nov. 26, 2024)

People tagging the wrong place on Instagram did not help prove trademark infringement

The City and County of San Francisco sued the Port of Oakland and the City of Oakland alleging trademark infringement and unfair competition. The dispute began when Oakland renamed its airport “San Francisco Bay Oakland International Airport,” which San Francisco claimed created confusion and harmed the brand of its own airport, San Francisco International Airport (SFO). San Francisco asked the court for a preliminary injunction to stop Oakland from using the new name while the case proceeded.

The court granted the motion in part, finding that the new name improperly implied an affiliation between the airports. However, it rejected claims that Oakland’s actions caused confusion during online ticket searches or at the point of sale. Social media evidence featured prominently in the case but ultimately did not sway the court’s decision.

San Francisco argued that social media posts demonstrated actual consumer confusion. For example, some users on platforms such as Instagram tagged images of SFO with Oakland’s new name, while others expressed uncertainty about which airport they were referencing. Despite these examples, the court found the evidence weak and unconvincing. It noted that most of the posts lacked context, such as whether the users were actual travelers or how their confusion affected any purchasing decisions. Additionally, the court questioned the sincerity of some posts, particularly where users repeated the same confusion across multiple platforms or appeared to joke about the issue.

While the court acknowledged that social media evidence could have value, it stressed the need for reliability. Without clear patterns or evidence of widespread confusion, the posts provided little support for San Francisco’s broader claims.

Three reasons why this case matters:

  • The Limits of Social Media Evidence: This case demonstrates that courts demand robust, contextualized proof when social media posts are used to argue consumer confusion.
  • Trademark Law in the Digital Age: The case highlights the challenges of protecting trademarks in a world where branding and consumer perception are shaped online.
  • Impacts on Regional Branding: The ruling underscores the importance of clear naming practices for public infrastructure, especially in areas with competing interests.

City and County of San Francisco v. City of Oakland, 2024 WL 5563429 (N.D. Cal. Nov. 12, 2024)

Disabled veteran’s $77 billion lawsuit against Amazon dismissed


A disabled Army veteran sued Amazon alleging “cyberstalking” and “cyberbullying” on its gaming platform, New World. Plaintiff claimed Amazon allowed other players and employees to engage in harassment, culminating in his being banned from the platform after he had invested more than 10,000 hours and $1,700 in the game. Plaintiff sought $7 billion in compensatory damages and $70 billion in punitive damages, asserting claims for intentional infliction of emotional distress, gross negligence, and unfair business practices. Plaintiff also filed motions for a preliminary injunction to reinstate his gaming account and to remand the case to state court.

The court, however, dismissed the case. It granted plaintiff in forma pauperis status, allowing him to proceed without paying court fees, but ruled that his complaint failed to state any claim upon which relief could be granted. The court found no grounds for allowing plaintiff to amend the complaint, as any amendment would be futile.

The court dismissed the case on several legal principles. First, it found that Amazon was immune from liability under the Communications Decency Act at 47 U.S.C. §230 for any content posted by third-party users on the New World platform. Section 230 protects providers of interactive computer services from being treated as publishers or speakers of user-generated content, even if they moderate or fail to moderate that content.

Second, plaintiff’s claims about Amazon employees’ conduct were legally insufficient. His allegations, such as complaints about bad customer service and being banned from the platform, failed to meet the standard for intentional infliction of emotional distress, which requires conduct so outrageous it exceeds all bounds tolerated in a civilized society. Similarly, plaintiff’s gross negligence claims did not demonstrate any extreme departure from reasonable conduct.

Finally, in the court’s view, plaintiff’s claim under California’s Unfair Competition Law (UCL) lacked the necessary specificity. The court found that poor customer service and banning a user from a platform did not constitute unlawful, unfair, or fraudulent business practices under the UCL.

Three reasons why this case matters:

  • Clarifies Section 230 Protections: The case reinforces the broad immunity granted to online platforms for third-party content under Section 230, even when moderation decisions are involved.
  • Defines the Limits of Tort Law in Online Interactions: It highlights the high bar plaintiffs must meet to succeed on claims such as intentional infliction of emotional distress and gross negligence in digital contexts.
  • Sets Guidance for Gaming Platform Disputes: The decision underscores the limited liability of companies for banning users or providing subpar customer support, offering guidance for similar lawsuits.

Haymore v. Amazon.com, Inc., 2024 WL 4825253 (E.D. Cal., Nov. 19, 2024)

Meta faces antitrust trial: FTC’s case against Instagram and WhatsApp acquisitions moves forward

The Federal Trade Commission (FTC) is taking Facebook’s parent company, Meta Platforms, to task over allegations that Meta’s acquisitions of Instagram in 2012 and WhatsApp in 2014 were anticompetitive. A recent ruling in the case allowed the FTC’s key claims to proceed, marking a significant step in the government’s effort to curtail what it alleges is Meta’s illegal monopoly over personal social networking (PSN) services. While some parts of the case were dismissed, the trial will focus on whether Meta’s past actions stifled competition and harmed consumers.

The FTC’s claims: Crushing competition through acquisitions

The FTC contended that Meta acted unlawfully to maintain its dominance in social networking by acquiring Instagram in 2012 and WhatsApp in 2014 to neutralize emerging competition. According to the agency, Instagram’s rapid rise as a mobile-first photo-sharing platform posed a direct threat to Meta’s efforts to establish a strong presence in the mobile space, where its applications were underperforming. WhatsApp, the FTC argued, was a leader in mobile messaging and had potential to expand into personal social networking, making it another significant competitive threat. The FTC alleged that Meta purchased these companies not to innovate but to eliminate rivals and consolidate its monopoly.

The case reached this stage after Meta filed a motion for summary judgment, seeking to have the case dismissed without trial. Meta argued that the FTC’s claims lacked sufficient evidence to support its allegations and that the acquisitions benefited consumers and competition. The court denied Meta’s motion in large part, finding that substantial factual disputes existed about whether the acquisitions were anticompetitive. The court determined that the FTC had presented enough evidence to show that Instagram and WhatsApp were either actual or nascent competitors when acquired.

The court’s analysis highlighted internal Meta documents and statements from CEO Mark Zuckerberg as particularly persuasive. These documents revealed that Instagram’s growth was a source of concern at Meta and that WhatsApp’s trajectory as a mobile messaging service could have positioned it as a future competitor. Based on this evidence, the court ruled that the FTC’s claims about the acquisitions merited a trial to determine whether they violated antitrust laws.

However, the court dismissed another FTC claim alleging that Meta unlawfully restricted third-party app developers’ access to its platform unless they agreed not to compete with Facebook’s core services. The court found that this specific allegation lacked sufficient evidence to proceed, narrowing the scope of the trial to focus on the acquisitions of Instagram and WhatsApp.

Meta’s defenses and their limitations

Meta of course pushed back against the FTC’s case, arguing that its acquisitions ultimately benefited consumers and competition. It claimed Instagram and WhatsApp have thrived under Meta’s ownership due to investments in infrastructure, innovation, and features that the platforms could not have achieved independently. Meta also contended that the FTC’s definition of the market for personal social networking services was too narrow, ignoring competition from platforms such as TikTok, YouTube, LinkedIn, and X.

However, the court rejected some of Meta’s defenses outright. For example, Meta was barred from arguing that its acquisition of WhatsApp was justified by the need to strengthen its position against Apple and Google. The court found this rationale irrelevant to the antitrust claims and insufficient as a defense. Meta’s arguments about broader market competition will be tested at trial, but the court found enough evidence to support the FTC’s narrower focus on personal social networking services.

Three reasons why this case matters:

  • Defining Market Boundaries: The case could set new standards for how courts define markets in the tech industry, particularly when dealing with overlapping functionalities of platforms such as social media and messaging apps.
  • Reining in Big Tech: A trial outcome in favor of the FTC could embolden regulators to pursue other tech giants and challenge long-standing business practices.
  • Consumer Protection: The case highlights the tension between innovation and market power, raising questions about whether tech consolidation truly benefits consumers or stifles competition.

Federal Trade Commission v. Meta Platforms, Inc., 2024 WL 4772423 (D.D.C. Nov. 13, 2024)

Ex-wife held in contempt for posting on TikTok about her ex-husband


Ex-husband sought to have his ex-wife held in contempt for violating an order that the divorce court had entered. In 2022, the court had ordered the ex-wife to take down social media posts that could make the ex-husband identifiable.

The ex-husband alleged that the ex-wife continued to post content on her TikTok account that made him identifiable as her ex-husband. Ex-wife argued that she did not name the ex-husband directly and that her social media was part of her work as a trauma therapist. But the family court found that the ex-wife’s posts violated the previous order because they made the ex-husband identifiable, and noted that the children could be heard in the background of some videos. As a result, the court held the ex-wife in contempt and ordered her to pay $1,800 in the ex-husband’s attorney fees.

Ex-wife appealed the contempt ruling, arguing that ex-husband did not present enough evidence to support his claim, and that she had not violated the order. She also disputed the attorney fees. On appeal, the court affirmed the contempt finding, agreeing that her actions violated the order, but vacated the award of attorney fees due to insufficient evidence of the amount.

Three reasons why this case matters:

  • It illustrates the legal consequences of violating court orders in family law cases.
  • It emphasizes the importance of clarity in social media use during ongoing family disputes.
  • It highlights the need for clear evidence when courts are asked to impose financial sanctions such as attorney fees.

Kimmel v. Kimmel, 2024 WL 4521373 (Ky. Ct. App. Oct. 18, 2024)

X gets Ninth Circuit win in case over California’s content moderation law


X sued the California attorney general, challenging Assembly Bill 587 (AB 587) – a law that required large social media companies to submit semiannual reports detailing their terms of service and content moderation policies, as well as their practices for handling specific types of content such as hate speech and misinformation. X claimed that this law violated the First Amendment, was preempted by the federal Communications Decency Act, and infringed upon the Dormant Commerce Clause.

Plaintiff sought a preliminary injunction to prevent the government from enforcing AB 587 while the case was pending. Specifically, it argued that being forced to comply with the reporting requirements would compel speech in violation of the First Amendment. Plaintiff asserted that AB 587’s requirement to disclose how it defined and regulated certain categories of content compelled speech about contentious issues, infringing on its First Amendment rights.

The district court denied plaintiff’s motion for a preliminary injunction. It found that the reporting requirements were commercial in nature and that they survived under the lower level of scrutiny applied to commercial speech regulations. Plaintiff sought review with the Ninth Circuit.

On review, the Ninth Circuit reversed the district court’s denial and granted the preliminary injunction. The court found that the reporting requirements compelled non-commercial speech and were thus subject to strict scrutiny under the First Amendment—a much higher standard. Under strict scrutiny, a law is presumed unconstitutional unless the government can show it is narrowly tailored to serve a compelling state interest. The court reasoned that plaintiff was likely to succeed on its claim that AB 587 violated the First Amendment because the law was not narrowly tailored. Less restrictive alternatives could have achieved the government’s goal of promoting transparency in social media content moderation without compelling companies to disclose their opinions on sensitive and contentious categories of speech.

The appellate court held that plaintiff would likely suffer irreparable harm if the law was enforced, as the compelled speech would infringe upon the platform’s First Amendment rights. Furthermore, the court found that the balance of equities and public interest supported granting the preliminary injunction because preventing potential constitutional violations was deemed more important than the government’s interest in transparency. Therefore, the court reversed and remanded the case, instructing the district court to enter a preliminary injunction consistent with its opinion.

X Corp. v. Bonta, 2024 WL 4033063 (9th Cir. Sept. 4, 2024)

Court blocks part of Texas law targeting social media content

Two trade associations, the Computer & Communications Industry Association and NetChoice, LLC, sued the Attorney General of Texas over a Texas law called House Bill 18 (HB 18), which was designed to regulate social media websites. Plaintiffs, who represented major technology companies such as Google, Meta, and X, argued that the law violated the First Amendment and other legal protections. They asked the court for a preliminary injunction to stop the law from being enforced while the case continued.

Plaintiffs challenged several key parts of HB 18. Among other things, the law required social media companies to verify users’ ages, give parents control over their children’s accounts, and block minors from viewing harmful content. Such content included anything that promoted suicide, self-harm, substance abuse, and other dangerous behaviors. Plaintiffs believed that the law unfairly restricted free speech and would force companies to over-censor online content to avoid penalties. Additionally, they claimed the law was vague, leaving companies confused about how to comply.

Defendant argued that the law was necessary to protect children from harmful content online. He asserted that social media companies were failing to protect minors and that the state had a compelling interest in stepping in. He also argued that plaintiffs were exaggerating the law’s impact on free speech and that the law was clear enough for companies to follow.

The court agreed with plaintiffs on some points but not all. It granted plaintiffs a partial preliminary injunction, meaning parts of the law were blocked from being enforced. Specifically, the court found that the law’s “monitoring-and-filtering” requirements were unconstitutional. These provisions forced social media companies to filter out harmful content for minors, which the court said was too broad and vague to survive legal scrutiny. The court also noted that these requirements violated the First Amendment by regulating speech based on its content. But the court allowed other parts of the law, such as parental control tools and data privacy protections, to remain in place, as they did not give rise to the same free speech issues.

Three reasons why this case matters:

  • Free Speech Online: This case highlights ongoing debates about how far the government can go in regulating content on social media without infringing on First Amendment rights.
  • Children’s Safety: While protecting children online is a major concern, the court’s ruling shows the difficulty in balancing safety with the rights of companies and users.
  • Technology Lawsuits: As states try to pass more laws regulating tech companies, this case sets an important standard for how courts may handle future legal battles over internet regulation.

Computer & Communications Industry Association v. Paxton, — F.Supp.3d —, 2024 WL 4051786 (W.D. Tex. Aug. 30, 2024)

Supreme Court weighs in on Texas and Florida social media laws


In a significant case involving the intersection of technology and constitutional law, NetChoice LLC sued Florida and Texas, challenging their social media content-moderation laws. Both states had enacted statutes regulating how platforms such as Facebook and YouTube moderate, organize, and display user-generated content. NetChoice argued that the laws violated the First Amendment by interfering with the platforms’ editorial discretion. It asked the Court to invalidate these laws as unconstitutional.

The Supreme Court reviewed conflicting rulings from two lower courts. The Eleventh Circuit had upheld a preliminary injunction against Florida’s law, finding it likely violated the First Amendment. The Fifth Circuit, by contrast, had reversed an injunction against the Texas law, reasoning that content moderation did not qualify as protected speech. The Supreme Court vacated both decisions, directing the lower courts to reconsider the challenges with a more comprehensive analysis.

The Court explained that content moderation—decisions about which posts to display, prioritize, or suppress—constitutes expressive activity akin to editorial decisions made by newspapers. The Texas and Florida laws, by restricting this activity, directly implicated First Amendment protections. Additionally, the Court noted that these cases involved facial challenges, requiring an evaluation of whether a law’s unconstitutional applications outweigh its constitutional ones. Neither lower court had sufficiently analyzed the laws in this manner.

The Court also addressed a key issue in the Texas law: its prohibition against platforms censoring content based on viewpoint. Texas justified the law as ensuring “viewpoint neutrality,” but the Court found this rationale problematic. Forcing platforms to carry speech they deem objectionable—such as hate speech or misinformation—would alter their expressive choices and violate their First Amendment rights.

Three reasons why this case matters:

  • Clarifies Free Speech Rights in the Digital Age: The case reinforces that social media platforms have editorial rights similar to traditional media, influencing how future laws may regulate online speech.
  • Impacts State-Level Regulation: The ruling limits states’ ability to impose viewpoint neutrality mandates on private platforms, shaping the balance of power between governments and tech companies.
  • Sets a Standard for Facial Challenges: By emphasizing the need to weigh a law’s unconstitutional and constitutional applications, the decision provides guidance for courts evaluating similar cases.

Moody v. NetChoice, et al., 144 S.Ct. 2383 (July 1, 2024)

TikTok and the First Amendment: Previewing some of the free speech issues

TikTok is on the verge of a potential federal ban in the United States. This development echoes a previous situation in Montana, where a 2023 state law attempted to ban TikTok but faced legal challenges. TikTok and its users filed a lawsuit against the state, claiming the ban violated their First Amendment rights. The federal court sided with TikTok and the users, blocking the Montana law from being enforced on the grounds that it infringed on free speech.

The court’s decision highlighted that the law restricted TikTok users’ ability to communicate and impacted the company’s content decisions, thus failing to meet the intermediate scrutiny standard applicable to content-neutral speech restrictions. The ruling criticized the state’s attempt to regulate national security, deeming it outside the state’s jurisdiction and excessively restrictive compared to other available measures such as data privacy laws. Furthermore, the court noted that the ban left other similar apps unaffected and failed to provide alternative communication channels for TikTok users reliant on the app’s unique features.
