Negligence claim against Roblox for minors’ gambling moves forward


Plaintiffs sued defendant Roblox asserting various claims, including under RICO and California unfair competition law. Plaintiffs also claimed that Roblox was negligent by providing a system whereby minors were lured into online gambling.

Roblox moved to dismiss the negligence claim for failure to state a claim upon which relief may be granted. The court denied the motion to dismiss, allowing the negligence claim to move forward.

Roblox’s alleged involvement with online casinos

The gaming platform uses a virtual currency called “Robux” for transactions within its ecosystem. Users can purchase Robux for in-game enhancements and experiences created by developers, who can then convert their earned Robux into real money through Roblox. However, plaintiffs allege that online casinos accept Robux for gambling, thereby targeting minors. These casinos allegedly conduct sham transactions on Roblox to access minors’ Robux, allowing those minors to gamble. When minors lose Robux in these casinos, the lost currency is converted back into cash, with Roblox allegedly facilitating these transactions and profiting from them. Plaintiffs claim Roblox is fully aware that its services are being exploited to enable illegal gambling activities involving minors in this way.

Why the negligence claim survived

The court observed that under California law, there is a fundamental obligation for entities to act with reasonable care to prevent foreseeable harm. Roblox argued that it was exempt from this duty. But the court rejected this argument, holding that Roblox did indeed owe a duty to manage its platform responsibly to avoid harm, including by alerting parents about gambling risks.

Colvin v. Roblox Corporation, — F.Supp.3d —, 2024 WL 1268420 (N.D. Cal. March 26, 2024)


On FOX 2 Detroit talking about the TikTok ban

Earlier today I enjoyed appearing live on Fox 2 Detroit talking about the TikTok ban. We discussed what the act that the House of Representatives passed says, what it would mean for social media users, and the free speech litigation that will no doubt follow if the bill passes in the Senate and the President signs it. It’s a very intriguing issue.

What does the “bill that could ban TikTok” actually say?

In addition to causing free speech concerns, the bill is troubling in the way it gives unchecked power to the Executive Branch.

Earlier this week the United States House of Representatives passed a bill that is being characterized as one that could ban TikTok. Styled as the Protecting Americans from Foreign Adversary Controlled Applications Act, the text of the bill calls TikTok and its owner ByteDance Ltd. by name and seeks to “protect the national security of the United States from the threat posed by foreign adversary controlled applications.”

What conduct would be prohibited?

The Act would make it unlawful for anyone to “distribute, maintain, or update” a “foreign adversary controlled application” within the United States. The Act specifically prohibits anyone from “carrying out” any such distribution, maintenance or updating via a “marketplace” (e.g., any app store) or by providing hosting services that would enable distribution, maintenance or updating of such an app. Interestingly, the ban does not so much directly prohibit ByteDance from making TikTok available as it would make entities such as Apple and Google liable for making the app available for others to access, maintain and update.

What apps would be banned?

There are two ways an app could find itself being a “foreign adversary controlled application” and thereby prohibited.

  • The first is simply by being TikTok or any app provided by ByteDance or its successors.
  • The second way – and perhaps the more concerning way because of its grant of great power to one person – is by being a “foreign adversary controlled application” that is “determined by the President to present a significant threat to the national security of the United States.” Though the President must first provide the public with notice of such determination and make a report to Congress on the specific national security concerns, there is ultimately no check on the President’s power to make this determination. For example, there is no provision in the statute saying that Congress could override the President’s determination.

Relatively insignificant apps, or apps with no social media component, would not be covered by the ban. For example, to be a “covered company” under the statute, the app has to have more than one million monthly users in two of the three months prior to the time the President determines the app should be banned. And the statute specifically says that any site having a “primary purpose” of allowing users to post reviews is exempt from the ban.

When would the ban take effect?

TikTok would be banned 180 days after the date the President signs the bill. For any other app that the President would later decide to be a “foreign adversary controlled application,” it would be banned 180 days after the date the President makes that determination. The date of that determination would be after the public notice period and report to Congress discussed above.

What could TikTok do to avoid being banned?

It could undertake a “qualified divestiture” before the ban takes effect, i.e., within 180 days after the President signs the bill. Here is another point where one may be concerned about the great power given to the Executive Branch. A “qualified divestiture” would be a situation in which the owner of the app sells off that portion of the business *and* the President determines two things: (1) that the app is no longer being controlled by a foreign adversary, and (2) that there is no “operational relationship” between the United States operations of the company and the old company located in the foreign adversary country. In other words, the app could not avoid the ban by being owned by a United States entity while still sharing data with the foreign company and having the foreign company handle the algorithm.

What about users who would lose all their data?

The Act provides that the app being prohibited must provide users with “all the available data related to the account of such user,” if the user requests it, prior to the time the app becomes prohibited. That data would include all posts, photos and videos.

What penalties apply for violating the law?

The Attorney General is responsible for enforcing the law. (An individual could not sue and recover damages.) Anyone (most likely an app store) that violates the ban on distributing, maintaining or updating the app would face penalties of $5,000 x the number of users determined to access, maintain or update the app. Those damages could be astronomical – TikTok currently has 170 million users, so the damages would be $850,000,000,000. An app’s failure to provide data portability prior to being banned would cause it to be liable for $500 x the number of affected users.
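
By way of illustration, here is a minimal sketch of that penalty math in Python, using the figures cited above. The function names and structure are mine for illustration only; they are not drawn from the text of the Act.

```python
# Minimal sketch of the civil penalty arithmetic described above.
# The per-user rates come from the article's summary of the Act; the
# function names are illustrative, not statutory terms.

def distribution_penalty(num_users: int, rate_per_user: int = 5_000) -> int:
    """Penalty for unlawfully distributing, maintaining or updating the app."""
    return rate_per_user * num_users

def data_portability_penalty(num_affected_users: int, rate_per_user: int = 500) -> int:
    """Penalty for failing to provide users their data before the ban takes effect."""
    return rate_per_user * num_affected_users

if __name__ == "__main__":
    tiktok_users = 170_000_000  # user figure cited above
    print(distribution_penalty(tiktok_users))       # 850,000,000,000
    print(data_portability_penalty(tiktok_users))   # 85,000,000,000
```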

Kids Online Safety Act: Quick Facts

What is KOSA?

Senators Blackburn and Blumenthal have introduced a new version of KOSA – the Kids Online Safety Act, which seeks to protect minors from online harms by requiring social media companies to prioritize children’s safety in product design and offer more robust parental control tools. Garnering bipartisan support with 62 Senate cosponsors in the wake of a significant hearing with Big Tech CEOs, the bill emphasizes accountability for tech companies, transparency in algorithms, and enhanced safety measures. The legislation has been refined following extensive discussions with various stakeholders, including tech companies, advocacy groups, and parents, to ensure its effectiveness and alignment with the goal of safeguarding young internet users from bullying, harassment, and other online risks.

Critics of the bill argue that KOSA, despite amendments, remains a threat to constitutional rights, effectively censoring online content and empowering state officials to target undesirable services and speech. See, e.g., the EFF’s blog post about the bill. They contend that KOSA mandates extensive filtering and blocking of legal speech across numerous websites, apps, and platforms, likely leading to age verification requirements. Concerns are raised about the potential harm to minors’ access to important information, particularly for groups such as LGBTQ+ youth, those seeking health and reproductive information, and activists. The modifications in the 2024 version, including the removal of state attorneys general’s authority to sue for non-compliance with the “duty of care” provision, are seen as insufficient to address the core issues of free speech and censorship. Critics urge opposition to KOSA, highlighting its impact not just on minors but on all internet users, who could be subjected to a “second-class internet” due to restricted access to information.

What does the proposed law actually say? Below are some key facts about the contents of the legislation:

Who would be subject to the law:

The statute would place various obligations on “covered platforms”:

  • A “covered platform” encompasses online platforms, video games, messaging applications, and video streaming services accessible via the internet and used or likely to be used by minors.
  • Exclusions from the definition of “covered platform” include common carrier services, broadband internet access services, email services, specific teleconferencing or video conferencing services, and direct wireless messaging services not linked to an online platform.
  • Entities not for profit, educational institutions, libraries, news or sports news websites/apps with specific criteria, business-to-business software, and cloud services not functioning as online platforms are also excluded.
  • Virtual private networks and similar services that solely route internet traffic are not considered “covered platforms.”

Design and Implementation Requirements

  • Covered platforms are required to exercise reasonable care in designing and implementing features to prevent and mitigate harms to minors, including mental health disorders, addiction-like behaviors, physical violence, bullying, harassment, sexual exploitation, and certain types of harmful marketing.
  • The prevention of harm includes addressing issues such as anxiety, depression, eating disorders, substance abuse, suicidal behaviors, online bullying, sexual abuse, and the promotion of narcotics, tobacco, gambling, and alcohol to minors.
  • Despite these protections, platforms are not required to block minors from intentionally seeking content or from accessing resources aimed at preventing or mitigating these harms, including providing evidence-informed information and clinical resources.

Required Safeguards for Minors

  • Covered platforms must provide minors with safeguards to limit communication from others, restrict access to their personal data, control compulsive platform usage features, manage personalized recommendation systems, and protect their geolocation data. (One has to consider whether these would pass First Amendment scrutiny, particularly in light of recent decisions such as the one in NetChoice v. Yost).
  • Platforms are required to offer options for minors to delete their accounts and personal data, and limit their time on the platform, with the most protective privacy and safety settings enabled by default for minors.
  • Parental tools must be accessible and easy-to-use, allowing parents to manage their child’s privacy, account settings, and platform usage, including the ability to restrict purchases and view and limit time spent on the platform.
  • A reporting mechanism for harms to minors must be established, with platforms required to respond substantively within specified time frames, and immediate action required for reports involving imminent threats to minors’ safety.
  • Advertising of illegal products such as narcotics, tobacco, gambling, and alcohol to minors is strictly prohibited.
  • Safeguards and parental tools must be clear, accessible, and designed without “dark patterns” that could impair user autonomy or choice, with considerations for uninterrupted gameplay and offline device or account updates.

Disclosure Requirements

  • Before a minor registers or purchases on a platform, clear notices about data policies, safeguards for minors, and risks associated with certain features must be provided.
  • Platforms must inform parents about safeguards and parental tools for their children and obtain verifiable parental consent before a child uses the platform.
  • Platforms may consolidate notice and consent processes with existing obligations under the Children’s Online Privacy Protection Act (COPPA). (As under COPPA, a “child” under the act is one under 13 years of age.)
  • Platforms using personalized recommendation systems must clearly explain their operation, including data usage, and offer opt-out options for minors or their parents.
  • Advertising targeted at minors must be clearly labeled, explaining why ads are shown to them and distinguishing between content and commercial endorsements.
  • Platforms are required to provide accessible information to minors and parents about data policies and access to safeguards, ensuring resources are available in relevant languages.

Reporting Requirements

  • Covered platforms must annually publish a report, based on an independent audit, detailing the risks of harm to minors and the effectiveness of prevention and mitigation measures. (Providing these audit services is no doubt a good business opportunity for firms with such capabilities; unfortunately this will increase the cost of operating a covered platform.)
  • This requirement applies to platforms with over 10 million active monthly users in the U.S. that primarily host user-generated content and discussions, such as social media and virtual environments.
  • Reports must assess platform accessibility by minors, describe commercial interests related to minor usage, and provide data on minor users’ engagement, including time spent and content accessed.
  • The reports should identify foreseeable risks of harm to minors, evaluate the platform’s design features that could affect minor usage, and detail the personal data of minors collected or processed.
  • Platforms are required to describe safeguards and parental tools, interventions for potential harms, and plans for addressing identified risks and circumvention of safeguards.
  • Independent auditors conducting the risk assessment must consult with parents, youth experts, and consider research and industry best practices, ensuring privacy safeguards are in place for the reported data.

Keep an eye out to see if Congress passes this legislation in the spirit of “for the children.”

How did Ohio’s efforts to regulate children’s access to social media violate the Constitution?


Ohio passed a law called the Parental Notification by Social Media Operators Act which sought to require certain categories of online services to obtain parental consent before allowing any unemancipated child under the age of sixteen to register or create accounts with the service.

Plaintiff internet trade association – representing platforms including Google, Meta, X, Nextdoor, and Pinterest – sought a preliminary injunction that would prohibit the State’s attorney general from enforcing the law. Finding the law to be unconstitutional, the court granted the preliminary injunction.

Likelihood of success on the merits: First Amendment Free Speech

The court found that plaintiff was likely to succeed on its constitutional claims. Rejecting the State’s argument that the law sought only to regulate commerce (i.e., the contracts governing use of social media platforms) and not speech, it held that the statute was a restriction on speech, implicating the First Amendment. It held that the law was a content-based restriction because the social media features the statute singled out in defining which platforms were subject to the law – e.g., the ability to interact socially with others – were “inextricable from the content produced by those features.” And the law violated the rights of minors living in Ohio because it infringed on minors’ rights to both access and produce First Amendment protected speech.

Given these attributes of the law, the court applied strict scrutiny to the statute. The court held that the statute failed to pass strict scrutiny for several reasons. First, the Act was not narrowly tailored to address the specific harms identified by the State, such as protecting minors from oppressive contract terms with social media platforms. Instead of targeting the contract terms directly, the Act broadly regulated access to and dissemination of speech, making it under-inclusive in addressing the specific issue of contract terms and over-inclusive by imposing sweeping restrictions on speech. Second, while the State aimed to protect minors from mental health issues and sexual predation related to social media use, the Act’s approach of requiring parental consent for minors under sixteen to access all covered websites was an untargeted and blunt instrument, failing to directly address the nuanced risks posed by specific features of social media platforms. Finally, in attempting to bolster parental authority, the Act mirrored previously rejected arguments that imposing speech restrictions, subject to parental veto, was a legitimate means of aiding parental control, making it over-inclusive by enforcing broad speech restrictions rather than focusing on the interests of genuinely concerned parents.

Likelihood of success on the merits: Fourteenth Amendment Due Process

The statute violated the Due Process Clause of the Fourteenth Amendment because its vague language failed to provide clear notice to operators of online services about the conduct that was forbidden or required. The Act’s broad and undefined criteria for determining applicable websites, such as targeting children or being reasonably anticipated to be accessed by children, left operators uncertain about their legal obligations. The inclusion of an eleven-factor list intended to clarify applicability, which contained vague and subjective elements like “design elements” and “language,” further contributed to the lack of precise guidance. The Act’s exception for “established” and “widely recognized” media outlets without clear definitions for these terms introduced additional ambiguity, risking arbitrary enforcement. Despite the State highlighting less vague aspects of the Act and drawing parallels with the federal Children’s Online Privacy Protection Act of 1998 (COPPA), these did not alleviate the overall vagueness, particularly with the Act’s broad and subjective exceptions.

Irreparable harm and balancing of the equities

The court found that plaintiff’s members would face irreparable harm through non-recoverable compliance costs and the potential for civil liability if the Act were enforced, as these monetary harms could not be fully compensated. Moreover, the Act’s infringement on constitutional rights, including those protected under the First Amendment, constituted irreparable harm since the loss of such freedoms, even for short durations, is considered significant.

The balance of equities and the public interest did not favor enforcing a statute that potentially violated constitutional principles, as the enforcement of unconstitutional laws serves no legitimate public interest. The argument that the Act aimed to protect minors did not outweigh the importance of upholding constitutional rights, especially when the statute’s measures were not narrowly tailored to address specific harms. Therefore, the potential harm to plaintiff’s members and the broader implications for constitutional rights underscored the lack of public interest in enforcing this statute.

NetChoice, LLC v. Yost, 2024 WL 55904 (S.D. Ohio, February 12, 2024)


Required content moderation reporting does not violate X’s First Amendment rights


A federal court in California has upheld the constitutionality of the state’s Assembly Bill 587 (AB 587), which mandates social media companies to submit to the state attorney general semi-annual reports detailing their content moderation practices. This decision comes after X filed a lawsuit claiming the law violated the company’s First Amendment rights.

The underlying law

AB 587 requires social media companies to provide detailed accounts of their content moderation policies, particularly addressing issues like hate speech, extremism, disinformation, harassment, and foreign political interference. These “terms of service reports” are to be submitted to the state’s attorney general, aiming to increase transparency in how these platforms manage user content.

X’s challenge

X challenged this law, seeking to prevent its enforcement on the grounds that it was unconstitutional. The court, however, denied X’s motion for injunctive relief, finding that X failed to demonstrate a likelihood of success on the merits of its constitutional claims.

The court’s decision relied heavily on SCOTUS’s opinion in Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U.S. 626 (1985). Under the Zauderer case, for governmentally compelled commercial disclosure to be constitutionally permissible, the information must be purely factual and uncontroversial, not unduly burdensome, and reasonably related to a substantial government interest.

The court’s constitutional analysis

In applying these criteria, the court found that AB 587’s requirements fit within these constitutional boundaries. The reports, while compulsory, do not constitute commercial speech in the traditional sense, as they are not advertisements and carry no direct economic benefit for the social media companies. Despite this, the court followed the rationale of other circuits that have assessed similar requirements for social media disclosures.

The court determined that the content of the reports mandated by AB 587 is purely factual, requiring companies to outline their existing content moderation policies related to specified areas. The statistical data, if provided, represents objective information about the company’s actions. The court also found that the disclosures are uncontroversial, noting that the mere association with contentious topics does not render the reports themselves controversial. We know how controversial and political the regulation of “disinformation” can be.

Addressing the burden of these requirements, the court recognized that while the reporting may be demanding, it is not unjustifiably so under First Amendment considerations. X argued that the law would necessitate significant resources to monitor and report the required metrics. But the court noted that AB 587 does not obligate companies to adopt any specific content categories, nor does it impose burdens on speech itself, a crucial aspect under Zauderer’s analysis.

What this means

The court confirmed that AB 587’s reporting requirements are reasonably related to a substantial government interest. This interest lies in ensuring transparency in social media content moderation practices, enabling consumers to make informed choices about their engagement with news and information on these platforms.

The court’s decision is a significant step in addressing the complexities of regulating social media platforms, balancing the need for transparency with the constitutional rights of these digital entities. As the landscape of digital communication continues to evolve, this case may be a marker for how governments might approach the regulation of social media companies, particularly in the realm of content moderation.

X Corp. v. Bonta, 2023 WL 8948286 (E.D. Cal., December 28, 2023)

See also: Maryland Court of Appeals addresses important question of internet anonymity

California court decision strengthens Facebook’s ability to deplatform its users


Plaintiff used Facebook to advertise his business. Facebook kicked him off and would not let him advertise, based on alleged violations of Facebook’s Terms of Service. Plaintiff sued for breach of contract. The lower court dismissed the case so plaintiff sought review with the California appellate court. That court affirmed the dismissal.

The Terms of Service authorized the company to unilaterally “suspend or permanently disable access” to a user’s account if the company determined the user “clearly, seriously, or repeatedly breached” the company’s terms, policies, or community standards.

An ordinary reading of such a provision would lead one to think that Facebook would not be able to terminate an account unless certain conditions were met, namely, that there had been a clear, serious or repeated breach by the user. In other words, Facebook would be required to make such a finding before terminating the account.

But the court applied the provision much more broadly. So broadly, in fact, that one could say the notion of clear, serious, or repeated breach was irrelevant, superfluous language in the terms.

The court said: “Courts have held these terms impose no ‘affirmative obligations’ on the company.” Discussing a similar case involving Twitter’s terms of service, the court observed that platform was authorized to suspend or terminate accounts “for any or no reason.” Then the court noted that “[t]he same is true here.”

So, the court arrived at the conclusion that despite Facebook’s own terms – which would lead users to think that they wouldn’t be suspended unless there was a clear, serious or repeated breach – one can get deplatformed for any reason or no reason. The decision pretty much gives Facebook unmitigated free speech police powers.

Strachan v. Facebook, Inc., 2023 WL 8589937 (Cal. App. December 12, 2023)

How do you sort out who owns a social media account used to promote a business?

Imagine this scenario – a well-known founder of a company sets up social media accounts that promote the company’s products. The accounts also occasionally display personal content (e.g., public happy birthday messages the founder sends to his spouse). The company fires the founder and then the company claims it owns the accounts. If the founder says he owns the accounts, how should a court resolve that dispute?

The answer to this question is helpful in resolving actual disputes such as this, and perhaps even more helpful in setting up documentation and procedures to prevent such a dispute in the first place.

In the recent case of In re: Vital Pharmaceutical, the court considered whether certain social media accounts that a company’s founder and CEO used were property of the company’s bankruptcy estate under Bankruptcy Code § 541. Though this was a bankruptcy case, the analysis is useful in other contexts to determine who owns a social media account. The court held that various social media accounts (including Twitter, Instagram and TikTok accounts the CEO used) belonged to the company.

In reaching this decision, the court recognized a “dearth” of legal guidance from other courts on how to determine account ownership when there is a dispute. It noted the case of In re CTLI, LLC, 528 B.R. 359 (Bankr. S.D. Tex. 2015) but expressed concern that this eight-year-old case did not adequately address the current state of social media account usage, particularly in light of the rise of influencer marketing.

The court fashioned a rather detailed test:

  • Are there any agreements or other documents that show who owns the account? Perhaps an employee handbook? If so, then whoever such documents say owns the account is presumed to be the owner of the account.
  • But what if there are no documents that show ownership, or such documents do not show definitively who owns the account? In those circumstances, one should consider:
    • Does one party have exclusive power to access the account?
    • Does that same party have the ability to prevent others from accessing the account?
    • Does the account enable that party to identify itself as having that exclusive power?
  • If a party establishes both documented ownership and control, that ends the inquiry. But if one or both of those things are not definitively shown, one can still consider whether use of the social media account tips the scales one way or the other:
    • What name is used for the account?
    • Is the account used to promote more than one company’s products?
    • To what extent is the account used to promote the user’s persona?
    • Would any required changes fundamentally change the nature of the account?

Companies utilizing social media accounts run by influential individuals with well-known personas should take guidance from this decision. Under this court’s test, creating documentation or evidence of the account ownership would provide the clearest path forward. Absent such written indication, the parties should take care to establish clear protocols concerning account control and usage.
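
For readers who like to see the structure laid out, below is a rough sketch of the court’s inquiry expressed as a decision procedure in Python. The names, types, and rigid ordering are mine; the court weighs these factors qualitatively rather than applying them mechanically.

```python
# Rough, non-authoritative sketch of the ownership inquiry described above.
# All identifiers are illustrative; the court's actual analysis is a weighing
# of factors, not a mechanical algorithm.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountFacts:
    documented_owner: Optional[str]   # owner named in a handbook, agreement, etc.
    controlling_party: Optional[str]  # party with exclusive access and power to exclude others
    account_name: str
    promotes_other_companies: bool
    promotes_personal_persona: bool

def likely_owner(facts: AccountFacts, company: str, founder: str) -> str:
    # Step 1: documents showing ownership create a presumption of ownership.
    # Step 2: exclusive control over the account is considered next.
    # If both point definitively to one party, the inquiry ends there.
    if facts.documented_owner and facts.documented_owner == facts.controlling_party:
        return facts.documented_owner
    # Step 3: otherwise, how the account is used tips the scales.
    used_as_company_account = (
        company.lower() in facts.account_name.lower()
        and not facts.promotes_other_companies
        and not facts.promotes_personal_persona
    )
    if used_as_company_account:
        return company
    # Fall back on whatever the documents or control facts suggest, if anything.
    return facts.documented_owner or facts.controlling_party or founder
```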

In re: Vital Pharmaceutical, 2023 WL 4048979 (Bankr. S.D. Fla., June 16, 2023)

Ninth Circuit upholds decision in favor of Twitter in terrorism case

Tamara Fields and Heather Creach, representing the estates of their late husbands and joined by Creach’s two minor children, sued Twitter, Inc. Plaintiffs alleged that the platform knowingly provided material support to ISIS, enabling the terrorist organization to carry out the 2015 attack in Jordan that killed their loved ones. The lawsuit sought damages under the Anti-Terrorism Act (ATA), which allows U.S. nationals injured by terrorism to seek compensation.

Plaintiffs alleged that defendant knowingly and recklessly provided ISIS with access to its platform, including tools such as direct messaging. Plaintiffs argued that these services allowed ISIS to spread propaganda, recruit followers, raise funds, and coordinate operations, ultimately contributing to the attack. Defendant moved to dismiss the case, arguing that plaintiffs failed to show a direct connection between its actions and the attack. Defendant also invoked Section 230 of the Communications Decency Act, which shields platforms from liability for content created by users.

The district court agreed with defendant and dismissed the case, finding that plaintiffs had not established proximate causation under the ATA. Plaintiffs appealed, but the Ninth Circuit upheld the dismissal. The appellate court ruled that plaintiffs failed to demonstrate a direct link between defendant’s alleged support and the attack. While plaintiffs showed that ISIS used defendant’s platform for various purposes, the court found no evidence connecting those activities to the specific attack in Jordan. The court emphasized that the ATA requires a clear, direct relationship between defendant’s conduct and the harm suffered.

The court did not address defendant’s arguments under Section 230, as the lack of proximate causation was sufficient to resolve the case. Accordingly, this decision helped clarify the legal limits of liability for platforms under the ATA and highlighted the challenges of holding technology companies accountable for how their services are used by third parties.

Three Reasons Why This Case Matters:

  • Sets the Bar for Proximate Cause: The ruling established that a direct causal link is essential for liability under the Anti-Terrorism Act.
  • Limits Platform Liability: The decision underscores the difficulty of holding online platforms accountable for misuse of their services by bad actors.
  • Reinforces Section 230’s Role: Although not directly addressed, the case highlights the protections Section 230 offers to tech companies.

Fields v. Twitter, Inc., 881 F.3d 739 (9th Cir. 2018)

Parole conditions barring social media use violated pastor’s First Amendment rights

Plaintiff – a Baptist minister on parole in California – sued several parole officials, arguing that conditions placed on his parole violated plaintiff’s First Amendment rights. Among the contested restrictions was a prohibition on plaintiff accessing social media. Plaintiff claimed this restriction infringed on both his right to free speech and his right to freely exercise his religion. Plaintiff asked the court for a preliminary injunction to stop the enforcement of this condition. The court ultimately sided with plaintiff, ruling that the social media ban was unconstitutional.

The Free Speech challenge

Plaintiff argued that the parole condition prevented him from sharing his religious message online. As a preacher, he relied on platforms such as Facebook and Twitter to post sermons, connect with congregants who could not attend services, and expand his ministry by engaging with other pastors. The social media ban, plaintiff claimed, silenced him in a space essential for modern communication.

The court agreed, citing the U.S. Supreme Court’s ruling in Packingham v. North Carolina, which struck down a law barring registered sex offenders from using social media. In Packingham, the Court emphasized that social media platforms are akin to a modern public square and are vital for exercising free speech rights. Similarly, the court in this case found that the blanket prohibition on social media access imposed by the parole conditions was overly broad and not narrowly tailored to address specific risks or concerns.

The court noted that plaintiff’s past offenses, which occurred decades earlier, did not involve social media or the internet, undermining the justification for such a sweeping restriction. While public safety was a legitimate concern, the court emphasized that parole conditions must be carefully tailored to avoid unnecessary burdens on constitutional rights.

The Free Exercise challenge

Plaintiff also argued that the social media ban interfered with his ability to practice his religion. He asserted that posting sermons online and engaging with his congregation through social media were integral parts of his ministry. By prohibiting social media use, the parole condition restricted his ability to preach and share his faith beyond the physical boundaries of his church.

The court found this argument compelling. Religious practice is not confined to in-person settings, and plaintiff demonstrated that social media was a vital tool for his ministry. The court noted that barring a preacher from using a key means of sharing religious teachings imposed a unique burden on religious activity. Drawing on principles from prior Free Exercise Clause cases, the court held that the parole condition was not narrowly tailored to serve a compelling government interest, as it broadly prohibited access to all social media regardless of its religious purpose.

The court’s decision

The court granted plaintiff’s request for a preliminary injunction, concluding that he was likely to succeed on his claims under both the Free Speech Clause and the Free Exercise Clause of the First Amendment. The ruling allowed plaintiff to use social media during the litigation, while acknowledging the government’s legitimate interest in monitoring parolees. The court encouraged less restrictive alternatives, such as targeted supervision or limiting access to specific sites that posed risks, rather than a blanket ban.

Three reasons why this case matters:

  • Intersection of Speech and Religion: The case highlights how digital tools are essential for both free speech and the practice of religion, especially for individuals sharing messages with broader communities.
  • Limits on Blanket Restrictions: The ruling reaffirms that government-imposed conditions, such as parole rules, must be narrowly tailored to avoid infringing constitutional rights.
  • Modern Application of First Amendment Rights: By referencing Packingham, the court acknowledged the evolving role of social media as a platform for public discourse and religious expression.

Manning v. Powers, 281 F. Supp. 3d 953 (C.D. Cal. Dec. 13, 2017)
