How did Ohio’s efforts to regulate children’s access to social media violate the Constitution?

Ohio passed a law called the Parental Notification by Social Media Operators Act, which sought to require certain categories of online services to obtain parental consent before allowing any unemancipated child under the age of sixteen to register or create an account with the service.

Plaintiff internet trade association – representing platforms including Google, Meta, X, Nextdoor, and Pinterest – sought a preliminary injunction that would prohibit the State’s attorney general from enforcing the law. Finding the law to be unconstitutional, the court granted the preliminary injunction.

Likelihood of success on the merits: First Amendment Free Speech

The court found that plaintiff was likely to succeed on its constitutional claims. Rejecting the State’s argument that the law regulated only commerce (i.e., the contracts governing use of social media platforms) and not speech, the court held that the statute was a restriction on speech implicating the First Amendment. The restriction was content based because the social media features the statute singled out in defining which platforms were covered – e.g., the ability to interact socially with others – were “inextricable from the content produced by those features.” And the law violated the rights of minors living in Ohio because it infringed on their rights to both access and produce First Amendment-protected speech.

Given these attributes of the law, the court applied strict scrutiny and held that the statute failed that exacting standard for several reasons. First, the Act was not narrowly tailored to the specific harms the State identified, such as protecting minors from oppressive contract terms with social media platforms. Instead of targeting the contract terms directly, the Act broadly regulated access to and dissemination of speech, making it under-inclusive as to the contract-terms problem and over-inclusive in the sweep of its restrictions on speech. Second, while the State aimed to protect minors from mental health harms and sexual predation associated with social media use, the Act’s requirement of parental consent for minors under sixteen to access all covered websites was an untargeted and blunt instrument that failed to address the nuanced risks posed by specific features of social media platforms. Finally, in attempting to bolster parental authority, the Act mirrored arguments courts have previously rejected – that imposing speech restrictions subject to parental veto is a legitimate means of aiding parental control – and was over-inclusive because it enforced broad speech restrictions rather than focusing on the interests of genuinely concerned parents.

Likelihood of success on the merits: Fourteenth Amendment Due Process

The statute violated the Due Process Clause of the Fourteenth Amendment because its vague language failed to give operators of online services clear notice of what conduct was forbidden or required. The Act’s broad and undefined criteria for determining which websites were covered, such as targeting children or being reasonably anticipated to be accessed by children, left operators uncertain about their legal obligations. An eleven-factor list intended to clarify applicability, which contained vague and subjective elements like “design elements” and “language,” only added to the lack of precise guidance. And the Act’s exception for “established” and “widely recognized” media outlets, with no clear definitions for those terms, introduced further ambiguity and risked arbitrary enforcement. Although the State pointed to less vague aspects of the Act and drew parallels with the federal Children’s Online Privacy Protection Act of 1998 (COPPA), these did not cure the overall vagueness, particularly given the Act’s broad and subjective exceptions.

Irreparable harm and balancing of the equities

The court found that plaintiff’s members would face irreparable harm through non-recoverable compliance costs and the potential for civil liability if the Act were enforced, as these monetary harms could not be fully compensated. Moreover, the Act’s infringement on constitutional rights, including those protected under the First Amendment, constituted irreparable harm since the loss of such freedoms, even for short durations, is considered significant.

The balance of equities and the public interest did not favor enforcing a statute that potentially violated constitutional principles, as the enforcement of unconstitutional laws serves no legitimate public interest. The argument that the Act aimed to protect minors did not outweigh the importance of upholding constitutional rights, especially when the statute’s measures were not narrowly tailored to address specific harms. Therefore, the potential harm to plaintiff’s members and the broader implications for constitutional rights underscored the lack of public interest in enforcing this statute.

NetChoice, LLC v. Yost, 2024 WL 55904 (S.D. Ohio, February 12, 2024)

Required content moderation reporting does not violate X’s First Amendment rights

A federal court in California has upheld the constitutionality of the state’s Assembly Bill 587 (AB 587), which requires social media companies to submit semi-annual reports to the state attorney general detailing their content moderation practices. The decision came in a lawsuit filed by X claiming that the law violated the company’s First Amendment rights.

The underlying law

AB 587 requires social media companies to provide detailed accounts of their content moderation policies, particularly addressing issues like hate speech, extremism, disinformation, harassment, and foreign political interference. These “terms of service reports” are to be submitted to the state’s attorney general, aiming to increase transparency in how these platforms manage user content.

X’s challenge

X challenged this law, seeking to prevent its enforcement on the grounds that it was unconstitutional. The court, however, denied X’s motion for injunctive relief, finding that the company failed to demonstrate a likelihood of success on the merits of its constitutional claims.

The court’s decision relied heavily on the Supreme Court’s opinion in Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U.S. 626 (1985). Under Zauderer, for governmentally compelled commercial disclosure to be constitutionally permissible, the information must be purely factual and uncontroversial, not unduly burdensome, and reasonably related to a substantial government interest.

The court’s constitutional analysis

In applying these criteria, the court found that AB 587’s requirements fit within these constitutional boundaries. The reports, while compulsory, do not constitute commercial speech in the traditional sense, as they are not advertisements and carry no direct economic benefit for the social media companies. Despite this, the court followed the rationale of other circuits that have assessed similar requirements for social media disclosures.

The court determined that the content of the reports mandated by AB 587 is purely factual, requiring companies to outline their existing content moderation policies in the specified areas. The statistical data, if provided, represents objective information about the company’s actions. The court also found that the disclosures are uncontroversial, noting that mere association with contentious topics does not render the reports themselves controversial – a notable conclusion, given how controversial and political the regulation of “disinformation” can be.

Addressing the burden of these requirements, the court recognized that while the reporting may be demanding, it is not unjustifiably so under First Amendment considerations. X argued that the law would necessitate significant resources to monitor and report the required metrics. But the court noted that AB 587 does not obligate companies to adopt any specific content categories, nor does it impose burdens on speech itself, a crucial aspect under Zauderer’s analysis.

What this means

The court confirmed that AB 587’s reporting requirements are reasonably related to a substantial government interest. This interest lies in ensuring transparency in social media content moderation practices, enabling consumers to make informed choices about their engagement with news and information on these platforms.

The court’s decision is a significant step in addressing the complexities of regulating social media platforms, balancing the need for transparency with the constitutional rights of these digital entities. As the landscape of digital communication continues to evolve, this case may be a marker for how governments might approach the regulation of social media companies, particularly in the realm of content moderation.

X Corp. v. Bonta, 2023 WL 8948286 (E.D. Cal., December 28, 2023)

See also: Maryland Court of Appeals addresses important question of internet anonymity

California court decision strengthens Facebook’s ability to deplatform its users

Plaintiff used Facebook to advertise his business. Facebook kicked him off and would not let him advertise, based on alleged violations of Facebook’s Terms of Service. Plaintiff sued for breach of contract. The lower court dismissed the case so plaintiff sought review with the California appellate court. That court affirmed the dismissal.

The Terms of Service authorized the company to unilaterally “suspend or permanently disable access” to a user’s account if the company determined the user “clearly, seriously, or repeatedly breached” the company’s terms, policies, or community standards.

An ordinary reading of such a provision would lead one to think that Facebook would not be able to terminate an account unless certain conditions were met, namely, that there had been a clear, serious or repeated breach by the user. In other words, Facebook would be required to make such a finding before terminating the account.

But the court applied the provision much more broadly. So broadly, in fact, that one could say the notion of clear, serious, or repeated breach was irrelevant, superfluous language in the terms.

The court said: “Courts have held these terms impose no ‘affirmative obligations’ on the company.” Discussing a similar case involving Twitter’s terms of service, the court observed that platform was authorized to suspend or terminate accounts “for any or no reason.” Then the court noted that “[t]he same is true here.”

So, the court arrived at the conclusion that despite Facebook’s own terms – which would lead users to think that they would not be suspended unless there was a clear, serious or repeated breach – one can get deplatformed for any reason or no reason. The decision essentially gives Facebook unchecked power to police speech on its platform.

Strachan v. Facebook, Inc., 2023 WL 8589937 (Cal. App. December 12, 2023)

How do you sort out who owns a social media account used to promote a business?

owns social media account
Imagine this scenario – a well-known founder of a company sets up social media accounts that promote the company’s products. The accounts also occasionally display personal content (e.g., public happy birthday messages the founder sends to his spouse). The company fires the founder and then the company claims it owns the accounts. If the founder says he owns the accounts, how should a court resolve that dispute?

The answer to this question is helpful in resolving actual disputes such as this, and perhaps even more helpful in setting up documentation and procedures to prevent such a dispute in the first place.

In the recent case of In re Vital Pharmaceutical, the court considered whether certain social media accounts that a company’s founder and CEO used were property of the company’s bankruptcy estate under Bankruptcy Code § 541. Though this was a bankruptcy case, the analysis is useful in other contexts to determine who owns a social media account. The court held that various social media accounts (including Twitter, Instagram and TikTok accounts the CEO used) belonged to the company.

In reaching this decision, the court recognized a “dearth” of legal guidance from other courts on how to determine account ownership when there is a dispute. It noted the case of In re CTLI, LLC, 528 B.R. 359 (Bankr. S.D. Tex. 2015) but expressed concern that this eight-year-old case did not adequately address the current state of social media account usage, particularly in light of the rise of influencer marketing.

The court fashioned a rather detailed test:

  • Are there any agreements or other documents that show who owns the account? Perhaps an employee handbook? If so, then whoever such documents say owns the account is presumed to be the owner of the account.
  • But what if there are no documents that show ownership, or such documents do not show definitively who owns the account? In those circumstances, one should consider:
    • Does one party have exclusive power to access the account?
    • Does that same party have the ability to prevent others from accessing the account?
    • Does the account enable that party to identify itself as having that exclusive power?
  • If a party establishes both that documents show ownership and that it has control, that ends the inquiry. But if one or both of those things are not definitively shown, one can still consider whether use of the social media account tips the scales one way or the other:
    • What name is used for the account?
    • Is the account used to promote more than one company’s products?
    • To what extent is the account used to promote the user’s persona?
    • Would any required changes fundamentally change the nature of the account?
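
For those who prefer to see the inquiry laid out as a flowchart, the test can be loosely expressed as a decision procedure. The Python sketch below is purely illustrative: the AccountFacts fields, the likely_owner function, the ordering of the steps, and the simple tiebreaker scoring are assumptions made only to show the flow of the court’s inquiry, not anything drawn from the opinion itself.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AccountFacts:
        # Step 1: documentary evidence (e.g., an employee handbook or agreement)
        documented_owner: Optional[str] = None       # party the documents name, if any
        # Step 2: control over access
        exclusive_access: Optional[str] = None       # party with sole power to access the account
        can_exclude_others: bool = False
        identified_as_controller: bool = False
        # Step 3: use of the account (tiebreakers)
        account_name_matches: Optional[str] = None   # whose name or brand the handle reflects
        promotes_only_one_company: bool = False
        promotes_personal_persona: bool = False

    def likely_owner(facts: AccountFacts, company: str, individual: str) -> str:
        """Walk the factors roughly in the order the court described them."""
        # 1. Documents naming an owner create a presumption of ownership.
        presumed = facts.documented_owner
        # 2. Control: exclusive access, the power to exclude others, and the
        #    ability to identify oneself as having that exclusive power.
        controller = None
        if facts.exclusive_access and facts.can_exclude_others and facts.identified_as_controller:
            controller = facts.exclusive_access
        # If documents and control point to the same party, the inquiry ends.
        if presumed and controller and presumed == controller:
            return presumed
        # 3. Otherwise, use of the account tips the scales one way or the other.
        #    The equal weighting here is an assumption for illustration only.
        score = {company: 0, individual: 0}
        for party in (presumed, controller, facts.account_name_matches):
            if party in score:
                score[party] += 1
        if facts.promotes_only_one_company:
            score[company] += 1
        if facts.promotes_personal_persona:
            score[individual] += 1
        return max(score, key=score.get)

    # Hypothetical example: an employee handbook names the company as owner and
    # the company alone controls the log-in credentials, so the inquiry ends early.
    facts = AccountFacts(documented_owner="company", exclusive_access="company",
                         can_exclude_others=True, identified_as_controller=True)
    print(likely_owner(facts, "company", "founder"))   # -> company

As the hypothetical suggests, clear documentation plus control ends the inquiry, which is why the court’s first question, whether any agreement or handbook addresses ownership, does most of the work.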

Companies utilizing social media accounts run by influential individuals with well-known personas should take guidance from this decision. Under this court’s test, creating documentation or evidence of the account ownership would provide the clearest path forward. Absent such written indication, the parties should take care to establish clear protocols concerning account control and usage.

In re Vital Pharmaceutical, 2023 WL 4048979 (Bankr. S.D. Fla., June 16, 2023)

Twitter avoids liability in terrorism lawsuit

The families of two U.S. contractors killed in Jordan sued Twitter, accusing the platform of providing material support to the terrorist organization ISIS. Plaintiffs alleged that by allowing ISIS to create and maintain Twitter accounts, the company violated the Anti-Terrorism Act (ATA). Plaintiffs further claimed this support enabled ISIS to recruit, fundraise, and promote extremist propaganda, ultimately leading to the deaths of the contractors. The lawsuit aimed to hold Twitter responsible for the actions of ISIS and to penalize it for facilitating the organization’s digital presence.

Twitter moved to dismiss, arguing that the claims were barred under the Communications Decency Act (CDA) at 47 U.S.C. §230. Section 230 provides immunity to internet platforms from being treated as the publisher or speaker of content posted by third parties. The court had to decide whether Twitter’s role in allowing ISIS to use its platform made it liable for the consequences of ISIS’s acts.

The court dismissed the case, finding that Section 230 shielded Twitter from liability. The court ruled that plaintiffs’ claims attempted to treat Twitter as the publisher of content created by ISIS, which is precisely the type of liability Section 230 was designed to prevent. The court also concluded that plaintiffs failed to establish a plausible connection, or proximate causation, between Twitter’s actions and the deaths. Importantly, in the court’s view, plaintiffs could not demonstrate that ISIS’s use of Twitter directly caused the attack in Jordan or that the shooter had interacted with ISIS content on the platform.

The court further addressed plaintiffs’ argument regarding private messages sent through Twitter’s direct messaging feature. It ruled that these private communications were also protected under Section 230, as the law applies to all publishing activities, whether public or private.

Three reasons why this case matters:

  • Expanding the scope of Section 230: The case reinforced the broad immunity provided to tech companies under Section 230, including their handling of controversial or harmful content.
  • Clarifying proximate causation in ATA claims: The ruling highlighted the challenges of proving a direct causal link between a platform’s operations and acts of terrorism.
  • Balancing tech innovation and accountability: The decision underscored the ongoing debate about how to balance the benefits of open platforms with the need for accountability in preventing misuse.

Fields v. Twitter, Inc., 200 F. Supp. 3d 964 (N.D. Cal., August 10, 2016).
