Section 230 protected Meta from Huckabee cannabis lawsuit

Mike Huckabee, the former governor of Arkansas, sued Meta Platforms, Inc., the parent company of Facebook, for using his name and likeness without his permission in advertisements for CBD products. Huckabee argued that these ads falsely claimed he endorsed the products and made misleading statements about his personal health. He asked the court to hold Meta accountable under various legal theories, including violation of his publicity rights and privacy.

Plaintiff alleged that defendant approved and maintained advertisements that misappropriated plaintiff’s name, image, and likeness. Plaintiff further claimed that the ads placed plaintiff in a false light by attributing statements and endorsements to him that he never made. Additionally, plaintiff argued that defendant had been unjustly enriched by profiting from these misleading ads. Defendant, however, sought to dismiss the claims, relying on the Communications Decency Act at 47 U.S.C. § 230, which grants immunity to platforms for third-party content.

The court granted Meta’s motion to dismiss. It determined that Section 230 shielded defendant from liability for the third-party content at issue. The court also noted that plaintiff’s allegations lacked the specificity needed to overcome the protections provided by Section 230. Furthermore, the court emphasized that federal law, such as Section 230, preempts conflicting state laws, such as Arkansas’s Frank Broyles Publicity Protection Act.

Three reasons why this case matters:

  • Defines Section 230 Protections: It reaffirms the broad immunity tech companies enjoy under Section 230, even in cases involving misuse of publicity rights.
  • Digital Rights and Privacy: The case highlights the tension between protecting individual rights and maintaining the free flow of online content.
  • Challenges for State Laws: It shows how federal law can preempt state-specific protections, leaving individuals with limited recourse.

Mike Huckabee v. Meta Platforms, Inc., 2024 WL 4817657 (D. Del. Nov. 18, 2024)

Meta faces antitrust trial: FTC’s case against Instagram and WhatsApp acquisitions moves forward

The Federal Trade Commission (FTC) is taking Facebook’s parent company, Meta Platforms, to task over allegations that Meta’s acquisitions of Instagram in 2012 and WhatsApp in 2014 were anticompetitive. A recent ruling in the case allowed the FTC’s key claims to proceed, marking a significant step in the government’s effort to curtail what it alleges is Meta’s illegal monopoly over personal social networking (PSN) services. While some parts of the case were dismissed, the trial will focus on whether Meta’s past actions stifled competition and harmed consumers.

The FTC’s claims: Crushing competition through acquisitions

The FTC contended that Meta acted unlawfully to maintain its dominance in social networking by acquiring Instagram in 2012 and WhatsApp in 2014 to neutralize emerging competition. According to the agency, Instagram’s rapid rise as a mobile-first photo-sharing platform posed a direct threat to Meta’s efforts to establish a strong presence in the mobile space, where its applications were underperforming. WhatsApp, the FTC argued, was a leader in mobile messaging and had potential to expand into personal social networking, making it another significant competitive threat. The FTC alleged that Meta purchased these companies not to innovate but to eliminate rivals and consolidate its monopoly.

The case reached this stage after Meta filed a motion for summary judgment, seeking to have the case dismissed without trial. Meta argued that the FTC’s claims lacked sufficient evidence to support its allegations and that the acquisitions benefited consumers and competition. The court denied Meta’s motion in large part, finding that substantial factual disputes existed about whether the acquisitions were anticompetitive. The court determined that the FTC had presented enough evidence to show that Instagram and WhatsApp were either actual or nascent competitors when acquired.

The court’s analysis highlighted internal Meta documents and statements from CEO Mark Zuckerberg as particularly persuasive. These documents revealed that Instagram’s growth was a source of concern at Meta and that WhatsApp’s trajectory as a mobile messaging service could have positioned it as a future competitor. Based on this evidence, the court ruled that the FTC’s claims about the acquisitions merited a trial to determine whether they violated antitrust laws.

However, the court dismissed another FTC claim alleging that Meta unlawfully restricted third-party app developers’ access to its platform unless they agreed not to compete with Facebook’s core services. The court found that this specific allegation lacked sufficient evidence to proceed, narrowing the scope of the trial to focus on the acquisitions of Instagram and WhatsApp.

Meta’s defenses and their limitations

Meta of course pushed back against the FTC’s case, arguing that its acquisitions ultimately benefited consumers and competition. It claimed Instagram and WhatsApp have thrived under Meta’s ownership due to investments in infrastructure, innovation, and features that the platforms could not have achieved independently. Meta also contended that the FTC’s definition of the market for personal social networking services was too narrow, ignoring competition from platforms such as TikTok, YouTube, LinkedIn, and X.

However, the court rejected some of Meta’s defenses outright. For example, Meta was barred from arguing that its acquisition of WhatsApp was justified by the need to strengthen its position against Apple and Google. The court found this rationale irrelevant to the antitrust claims and insufficient as a defense. Meta’s arguments about broader market competition will be tested at trial, but the court found enough evidence to support the FTC’s narrower focus on personal social networking services.

Three reasons why this case matters:

  • Defining Market Boundaries: The case could set new standards for how courts define markets in the tech industry, particularly when dealing with overlapping functionalities of platforms such as social media and messaging apps.
  • Reining in Big Tech: A trial outcome in favor of the FTC could embolden regulators to pursue other tech giants and challenge long-standing business practices.
  • Consumer Protection: The case highlights the tension between innovation and market power, raising questions about whether tech consolidation truly benefits consumers or stifles competition.

Federal Trade Commission v. Meta Platforms, Inc., 2024 WL 4772423 (D.D.C. Nov. 13, 2024)

No Section 230 immunity for Facebook on contract-related claims

Plaintiffs sued Meta, claiming that they were harmed by fraudulent third-party ads posted on Facebook. Plaintiffs argued that these ads violated Meta’s own terms of service, which prohibit deceptive advertisements. They accused Meta of allowing scammers to run ads that targeted vulnerable users and of prioritizing revenue over user safety. Meta moved to dismiss, claiming that it was immune from liability under 47 U.S.C. § 230(c)(1) (a portion of the Communications Decency Act (CDA)), which generally protects internet platforms from being held responsible for third-party content.

Plaintiffs asked the district court to hold Meta accountable for five claims: negligence, breach of contract, breach of the covenant of good faith and fair dealing, violation of California’s Unfair Competition Law (UCL), and unjust enrichment. They alleged that Meta not only failed to remove scam ads but actively solicited them, particularly from advertisers based in China, who accounted for a large portion of the fraudulent activity on the platform.

The district court held that § 230(c)(1) protected Meta from all claims, even the contract claims. Plaintiffs sought review with the Ninth Circuit.

On appeal, the Ninth Circuit affirmed that § 230(c)(1) provided Meta with immunity for the non-contract claims, such as negligence and UCL violations, because these claims treated Meta as a publisher of third-party ads. But the Ninth Circuit disagreed with the district court’s ruling on the contract-related claims. It held that the lower court had applied the wrong legal standard when deciding whether § 230(c)(1) barred those claims. So the court vacated the dismissal of the contract claims, explaining that contract claims were different because they arose from Meta’s promises to users, not from its role as a publisher. The case was remanded to the district court to apply the correct standard for the contract claims.

Three reasons why this case matters:

  • It clarifies that § 230(c)(1) of the CDA does not provide blanket immunity for all types of claims, especially contract-related claims.
  • The case underscores the importance of holding internet companies accountable for their contractual promises to users, even when they enjoy broad protections for third-party content.
  • It shows that courts continue to wrestle with the boundaries of platform immunity under the CDA, which could shape future rulings about online platforms’ responsibilities.

Calise v. Meta Platforms, Inc., 103 F.4th 732 (9th Cir., June 4, 2024)

Website operator not liable under Wiretap Act for allowing Meta to intercept visitor communications

Plaintiffs asserted that defendant healthcare organization inadequately protected the personal and health information of visitors to defendant’s website. In particular, plaintiffs alleged that unauthorized third parties – including Meta – could intercept user interactions through the use of tracking technologies such as the Meta Pixel and Conversions API. According to plaintiffs, these tools collected sensitive health information and sent it to Meta. Despite defendant’s privacy policy claiming to protect user privacy and information, plaintiffs alleged that using defendant’s website caused plaintiffs to receive unsolicited advertisements on their Facebook accounts.
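To see why a page URL alone can carry sensitive information, consider a minimal, hypothetical sketch of the kind of request a tracking pixel makes. The endpoint, parameter names, and URLs below are invented for illustration and are not Meta’s actual Pixel API; the point is simply that the reported page address itself can reveal what health topic a visitor viewed.

```python
from urllib.parse import urlencode

def build_pixel_request(page_url: str, pixel_id: str, event: str = "PageView") -> str:
    """Construct an illustrative tracking-pixel request URL.

    A script embedded in a web page typically reports the current page
    address and an event name to a third-party endpoint. All names here
    (tracker.example.com, id/ev/dl) are hypothetical.
    """
    params = {"id": pixel_id, "ev": event, "dl": page_url}
    return "https://tracker.example.com/tr?" + urlencode(params)

# The path "/conditions/oncology" travels to the third party verbatim,
# which is the sort of disclosure the plaintiffs complained of.
req = build_pixel_request("https://hospital.example.org/conditions/oncology", "12345")
print(req)
```

Even without transmitting any medical record, the query string discloses which condition-specific page the visitor browsed, which is why plaintiffs characterized this data as sensitive health information.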

Plaintiffs sued, asserting a number of claims, including under the federal Electronic Communications Privacy Act (“ECPA”) and the California Invasion of Privacy Act (“CIPA”). Defendant moved to dismiss these claims. The court granted the motion.

To establish an ECPA claim, a plaintiff must demonstrate that defendant intentionally intercepted or attempted to intercept electronic communications using a device. CIPA similarly prohibits using electronic means to understand the contents of a communication without consent. Both laws have a “party exception” allowing a person who is a party to the communication to intercept it, provided the interception is not for a criminal or tortious purpose. In other words, there is an exception to the exception.

In this case, defendant argued it was a legitimate party to plaintiffs’ communications on its website, thus invoking the party exception. Plaintiffs countered that the exception should not apply due to defendant’s alleged tortious intent (making the information available to Facebook without disclosure to plaintiffs). But the court found that plaintiffs did not provide sufficient evidence that defendant’s actions were for an illegal or actionable purpose beyond the act of interception itself. Following the guidance of Pena v. GameStop, Inc., 2023 WL 3170047 (S.D. Cal. April 27, 2023), which held that a plaintiff must plead sufficient facts to support an inference that the offender intercepted the communication for the purpose of a tortious or criminal act independent of the intentional act of recording or interception itself, the court concluded there was no separate tortious conduct involved in the interception and dismissed the claims.

B.K. v. Eisenhower Medical Center, 2024 WL 878100 (February 29, 2024)

California court decision strengthens Facebook’s ability to deplatform its users

Plaintiff used Facebook to advertise his business. Facebook kicked him off and would not let him advertise, based on alleged violations of Facebook’s Terms of Service. Plaintiff sued for breach of contract. The lower court dismissed the case so plaintiff sought review with the California appellate court. That court affirmed the dismissal.

The Terms of Service authorized the company to unilaterally “suspend or permanently disable access” to a user’s account if the company determined the user “clearly, seriously, or repeatedly breached” the company’s terms, policies, or community standards.

An ordinary reading of such a provision would lead one to think that Facebook would not be able to terminate an account unless certain conditions were met, namely, that there had been a clear, serious or repeated breach by the user. In other words, Facebook would be required to make such a finding before terminating the account.

But the court applied the provision much more broadly. So broadly, in fact, that one could say the notion of clear, serious, or repeated breach was irrelevant, superfluous language in the terms.

The court said: “Courts have held these terms impose no ‘affirmative obligations’ on the company.” Discussing a similar case involving Twitter’s terms of service, the court observed that platform was authorized to suspend or terminate accounts “for any or no reason.” Then the court noted that “[t]he same is true here.”

So, the court arrived at the conclusion that despite Facebook’s own terms – which would lead users to think that they wouldn’t be suspended unless there was a clear, serious or repeated breach – one can get deplatformed for any reason or no reason. The decision pretty much gives Facebook unmitigated free speech police powers.

Strachan v. Facebook, Inc., 2023 WL 8589937 (Cal. App. December 12, 2023)

Executive order to clarify Section 230: a summary

Late yesterday President Trump took steps to make good on his promise to regulate online platforms like Twitter and Facebook. He released a draft executive order to that end. You can read the actual draft executive order. Here is a summary of the key points. The draft order:

  • States that it is the policy of the U.S. to foster clear, nondiscriminatory ground rules promoting free and open debate on the Internet. It is the policy of the U.S. that the scope of Section 230 immunity should be clarified.
  • Argues that a platform becomes a “publisher or speaker” of content, and therefore not subject to Section 230 immunity, when it does not act in good faith (in accordance with Section 230(c)(2)) to restrict access to content that it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.” The executive order argues that Section 230 “does not extend to deceptive or pretextual actions restricting online content or actions inconsistent with an online platform’s terms of service.”
  • Orders the Secretary of Commerce to petition the FCC, requesting that the FCC propose regulations to clarify the conditions around a platform’s “good faith” when restricting access or availability of content. In particular, the requested rules would examine whether the action was, among other things, deceptive, pretextual, inconsistent with the provider’s terms of service, the product of unreasoned explanation, or taken without a meaningful opportunity to be heard.
  • Directs each federal executive department and agency to review its advertising and marketing spending on online platforms. Each is to provide a report in 30 days on: amount spent, which platforms supported, any viewpoint-based restrictions of the platform, assessment whether the platform is appropriate, and statutory authority available to restrict advertising on platforms not deemed appropriate.
  • States that it is the policy of the U.S. that “large social media platforms, such as Twitter and Facebook, as the functional equivalent of a traditional public forum, should not infringe on protected speech”.
  • Re-establishes the White House “Tech Bias Reporting Tool” that allows Americans to report incidents of online censorship. These complaints are to be forwarded to the DoJ and the FTC.
  • Directs the FTC to “consider” taking action against entities covered by Section 230 who restrict speech in ways that do not align with those entities’ public representations about those practices.
  • Directs the FTC to develop a publicly-available report describing complaints of activity of Twitter and other “large internet platforms” that may violate the law in ways that implicate the policy that these are public fora and should not infringe on protected speech.
  • Establishes a working group with states’ attorneys general regarding enforcement of state statutes prohibiting online platforms from engaging in unfair and deceptive acts and practices. 
  • This working group is also to collect publicly available information for the creation and monitoring of user watch lists, based on their interactions with content and other users (likes, follows, time spent). This working group is also to monitor users based on their activity “off the platform”. (It is not clear whether that means “off the internet” or “on other online places”.)

Facebook hacking that causes emotional distress – does the CFAA provide recovery?

A recent federal case from Virginia provides information on the types of “losses” that are actionable under the federal anti-hacking statute, the Computer Fraud and Abuse Act (“CFAA”).

Unauthorized Access Under the Computer Fraud and Abuse Act

Underlying facts

Plaintiff worked as a campaign manager, communications director and private sector employee of a Virginia state legislator. While plaintiff was in the hospital, defendant allegedly, without authorization, accessed plaintiff’s Facebook, Gmail and Google Docs accounts, and tried to access her Wells Fargo online account.

Plaintiff’s lawsuit

Plaintiff sued, alleging a number of claims, among them a claim for violation of the CFAA. Defendant moved to dismiss. Although the court denied the motion to dismiss on other grounds, it held that plaintiff’s alleged emotional distress was not the type of “loss” that is actionable under the CFAA.

Loss under the CFAA

One can bring a civil action under the CFAA if the defendant’s alleged conduct involves certain factors. One of those factors, set out at 18 U.S.C. § 1030(c)(4)(A)(i)(II), provides recovery if there is “the modification or impairment, or potential modification or impairment, of the medical examination, diagnosis, treatment, or care of 1 or more individuals”.

Plaintiff alleged that defendant’s unauthorized access and attempted access to her accounts caused her to sustain a “loss” under this definition because it caused her to suffer emotional distress for which she needed to seek counseling.

The court disagreed with plaintiff’s assertions. Essentially, the court held, the modification or impairment of a plaintiff’s treatment must be based on the inability to access or use deleted or corrupted medical records. As an example – this was not in the court’s opinion but is provided by the author of this post – one might be able to state a claim if medical records were modified by a hacker to change prescription information. Further, the court held, to recover under the relevant provision of the CFAA, a defendant’s violation must modify or impair an individual’s medical treatment as it already exists, not merely cause the plaintiff mental pain and suffering that requires additional care.

Hains v. Adams, 2019 WL 5929259 (E.D. Virginia, November 12, 2019)

Facebook did not violate HIPAA by using data showing users browsed healthcare-related websites

Plaintiffs sued Facebook and other entities, including the American Cancer Society, alleging that Facebook violated numerous federal and state laws by collecting and using plaintiffs’ browsing data from various healthcare-related websites. The district court dismissed the action and plaintiffs sought review with the Ninth Circuit. On appeal, the court affirmed the dismissal.

The appellate court held that the district court properly determined that plaintiffs consented to Facebook’s data tracking and collection practices.

Plaintiffs consented to Facebook’s terms

It noted that in determining consent, courts consider whether the circumstances, considered as a whole, demonstrate that a reasonable person understood that an action would be carried out so that their acquiescence demonstrates knowing authorization.

In this case, plaintiffs did not dispute that their acceptance of Facebook’s Terms and Policies constituted a valid contract. Those Terms and Policies contained numerous disclosures related to information collection on third-party websites, including:

  • “We collect information when you visit or use third-party websites and apps that use our Services …. This includes information about the websites and apps you visit, your use of our Services on those websites and apps, as well as information the developer or publisher of the app or website provides to you or us,” and
  • “[W]e use all of the information we have about you to show you relevant ads.”

The court found that a reasonable person viewing those disclosures would understand that Facebook maintained the practices of (a) collecting its users’ data from third-party sites and (b) later using the data for advertising purposes. This was “knowing authorization”.

“But it’s health-related data”

The court rejected plaintiffs’ claim that—though they gave general consent to Facebook’s data tracking and collection practices—they did not consent to the collection of health-related data due to its “qualitatively different” and “sensitive” nature.

The court did not agree that the collected data was so different or sensitive. The data showed only that plaintiffs searched and viewed publicly available health information that could not, in and of itself, reveal details of an individual’s health status or medical history.

This notion supported the court’s conclusion that the use of the information did not violate the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) and its California counterpart.

The court held that information available on publicly accessible websites stands in stark contrast to the personally identifiable patient records and medical histories protected by HIPAA and other statutes — information that unequivocally provides a window into an individual’s personal medical history.

Smith v. Facebook, Inc., 2018 WL 6432974 (9th Cir. Dec. 6, 2018)

Facebook did not violate user’s constitutional rights by suspending account for alleged spam

Plaintiff sued Facebook and several media companies (including CNN, PBS and NPR) after Facebook suspended his account for alleged spamming. Plaintiff had posted articles and comments in an effort to “set the record straight” regarding Kellyanne Conway’s comments on the “Bowling Green Massacre”. Plaintiff claimed, among other things, that Facebook and the other defendants violated the First, Fourth, Fifth, and Fourteenth Amendments.

The court granted defendants’ motion to dismiss for failure to state a claim. It observed the well-established principle that these provisions of the constitution only apply to governmental actors – and do not apply to private parties. Facebook and the other media defendants could not plausibly be considered governmental actors.

It also noted that efforts to apply the First Amendment to Facebook have consistently failed. See, for example, Forbes v. Facebook, Inc., 2016 WL 676396, at *2 (E.D.N.Y. Feb. 18, 2016) (finding that Facebook is not a state actor for Section 1983 First Amendment claim); and Young v. Facebook, Inc., 2010 WL 4269304, at *3 (N.D. Cal. Oct. 25, 2010) (holding that Facebook is not a state actor).

Shulman v. Facebook et al., 2017 WL 5129885 (D.N.J., November 6, 2017)

About the Author: Evan Brown is a Chicago technology and intellectual property attorney. Call Evan at (630) 362-7237, send email to ebrown [at] internetcases.com, or follow him on Twitter @internetcases. Read Evan’s other blog, UDRP Tracker, for information about domain name disputes.

Facebook’s Terms of Service protect it from liability for offensive fake account

Someone set up a bogus Facebook account and posted, without consent, images and video of Plaintiff engaged in a lewd act. Facebook finally deleted the account, but not until two days had passed and Plaintiff had threatened legal action.

Plaintiff sued anyway, alleging, among other things, intrusion upon seclusion, public disclosure of private facts, and infliction of emotional distress. In his complaint, Plaintiff emphasized language from Facebook’s Terms of Service that prohibited users from posting content or taking any action that “infringes or violates someone else’s rights or otherwise would violate the law.”

Facebook moved to dismiss the claims, making two arguments: (1) that the claims contradicted Facebook’s Terms of Service, and (2) that the claims were barred by the Communications Decency Act at 47 U.S.C. § 230. The court granted the motion to dismiss.

It looked to the following provision from Facebook’s Terms of Service:

Although we provide rules for user conduct, we do not control or direct users’ actions on Facebook and are not responsible for the content or information users transmit or share on Facebook. We are not responsible for any offensive, inappropriate, obscene, unlawful or otherwise objectionable content or information you may encounter on Facebook. We are not responsible for the conduct, whether online or offline, of any user of Facebook.

The court also examined the following language from the Terms of Service:

We try to keep Facebook up, bug-free, and safe, but you use it at your own risk. We are providing Facebook as is without any express or implied warranties including, but not limited to, implied warranties of merchantability, fitness for a particular purpose, and non-infringement. We do not guarantee that Facebook will always be safe, secure or error-free or that Facebook will always function without disruptions, delays or imperfections. Facebook is not responsible for the actions, content, information, or data of third parties, and you release us, our directors, officers, employees, and agents from any claims and damages, known and unknown, arising out of or in any way connected with any claims you have against any such third parties.

The court found that by looking to the Terms of Service to support his claims against Facebook, Plaintiff could not likewise disavow those portions of the Terms of Service that did not support his case. Because the Terms of Service said, among other things, that Facebook was not responsible for the content of what its users post, and that a user uses the service at his or her own risk, the court could not place responsibility onto Facebook for the offensive content.

Moreover, the court held that the Communications Decency Act shielded Facebook from liability. The CDA immunizes providers of interactive computer services against liability arising from content created by third parties. The court found that Facebook was an interactive computer service as contemplated under the CDA, the information for which Plaintiff sought to hold Facebook liable was information provided by another information content provider, and the complaint sought to hold Facebook as the publisher or speaker of that information.

Caraccioli v. Facebook, 2016 WL 859863 (N.D. Cal., March 7, 2016)

