Blog

X gets Ninth Circuit win in case over California’s content moderation law


X sued the California attorney general, challenging Assembly Bill 587 (AB 587) – a law that required large social media companies to submit semiannual reports detailing their terms of service and content moderation policies, as well as their practices for handling specific types of content such as hate speech and misinformation. X claimed that this law violated the First Amendment, was preempted by the federal Communications Decency Act, and violated the dormant Commerce Clause.

Plaintiff sought a preliminary injunction to prevent the government from enforcing AB 587 while the case was pending. Specifically, it argued that being forced to comply with the reporting requirements would compel speech in violation of the First Amendment. Plaintiff asserted that AB 587’s requirement to disclose how it defined and regulated certain categories of content compelled speech about contentious issues, infringing on its First Amendment rights.

The district court denied plaintiff’s motion for a preliminary injunction. It found that the reporting requirements were commercial in nature and that they survived under the lower level of scrutiny applied to commercial speech regulations. Plaintiff sought review with the Ninth Circuit.

On review, the Ninth Circuit reversed the district court’s denial of the preliminary injunction. The court found that the reporting requirements compelled non-commercial speech and were thus subject to strict scrutiny under the First Amendment—a much higher standard. Under strict scrutiny, a law is presumed unconstitutional unless the government can show it is narrowly tailored to serve a compelling state interest. The court reasoned that plaintiff was likely to succeed on its claim that AB 587 violated the First Amendment because the law was not narrowly tailored. Less restrictive alternatives could have achieved the government’s goal of promoting transparency in social media content moderation without compelling companies to disclose their opinions on sensitive and contentious categories of speech.

The appellate court held that plaintiff would likely suffer irreparable harm if the law was enforced, as the compelled speech would infringe upon the platform’s First Amendment rights. Furthermore, the court found that the balance of equities and public interest supported granting the preliminary injunction because preventing potential constitutional violations was deemed more important than the government’s interest in transparency. Therefore, the court reversed and remanded the case, instructing the district court to enter a preliminary injunction consistent with its opinion.

X Corp. v. Bonta, 2024 WL 4033063 (9th Cir. September 4, 2024)

Court blocks part of Texas law targeting social media content

Two trade associations – the Computer & Communications Industry Association and NetChoice, LLC – sued the Attorney General of Texas over a Texas law called House Bill 18 (HB 18), which was designed to regulate social media websites. Plaintiffs, who represented major technology companies such as Google, Meta, and X, argued that the law violated the First Amendment and other legal protections. They asked the court for a preliminary injunction to stop the law from being enforced while the case continued.

Plaintiffs challenged several key parts of HB 18. Among other things, the law required social media companies to verify users’ ages, give parents control over their children’s accounts, and block minors from viewing harmful content. Such content included anything that promoted suicide, self-harm, substance abuse, and other dangerous behaviors. Plaintiffs believed that the law unfairly restricted free speech and would force companies to over-censor online content to avoid penalties. Additionally, they claimed the law was vague, leaving companies confused about how to comply.

Defendant argued that the law was necessary to protect children from harmful content online. He asserted that social media companies were failing to protect minors and that the state had a compelling interest in stepping in. He also argued that plaintiffs were exaggerating the law’s impact on free speech and that the law was clear enough for companies to follow.

The court agreed with plaintiffs on some points but not all. It granted plaintiffs a partial preliminary injunction, meaning parts of the law were blocked from being enforced. Specifically, the court found that the law’s “monitoring-and-filtering” requirements were unconstitutional. These provisions forced social media companies to filter out harmful content for minors, which the court said was too broad and vague to survive legal scrutiny. The court also noted that these requirements violated the First Amendment by regulating speech based on its content. But the court allowed other parts of the law, such as parental control tools and data privacy protections, to remain in place, as they did not give rise to the same free speech issues.

Three reasons why this case matters:

  • Free Speech Online: This case highlights ongoing debates about how far the government can go in regulating content on social media without infringing on First Amendment rights.
  • Children’s Safety: While protecting children online is a major concern, the court’s ruling shows the difficulty in balancing safety with the rights of companies and users.
  • Technology Lawsuits: As states try to pass more laws regulating tech companies, this case sets an important standard for how courts may handle future legal battles over internet regulation.

Computer & Communications Industry Association v. Paxton, — F.Supp.3d —, 2024 WL 4051786 (W.D. Tex., August 30, 2024)

Supreme Court weighs in on Texas and Florida social media laws


In a significant case involving the intersection of technology and constitutional law, NetChoice LLC sued Florida and Texas, challenging their social media content-moderation laws. Both states had enacted statutes regulating how platforms such as Facebook and YouTube moderate, organize, and display user-generated content. NetChoice argued that the laws violated the First Amendment by interfering with the platforms’ editorial discretion. It asked the Court to invalidate these laws as unconstitutional.

The Supreme Court reviewed conflicting rulings from two lower courts. The Eleventh Circuit had upheld a preliminary injunction against Florida’s law, finding it likely violated the First Amendment. The Fifth Circuit, by contrast, had reversed an injunction against the Texas law, reasoning that content moderation did not qualify as protected speech. The Supreme Court vacated both decisions, directing the lower courts to reconsider the challenges with a more comprehensive analysis.

The Court explained that content moderation—decisions about which posts to display, prioritize, or suppress—constitutes expressive activity akin to editorial decisions made by newspapers. The Texas and Florida laws, by restricting this activity, directly implicated First Amendment protections. Additionally, the Court noted that these cases involved facial challenges, requiring an evaluation of whether a law’s unconstitutional applications outweigh its constitutional ones. Neither lower court had sufficiently analyzed the laws in this manner.

The Court also addressed a key issue in the Texas law: its prohibition against platforms censoring content based on viewpoint. Texas justified the law as ensuring “viewpoint neutrality,” but the Court found this rationale problematic. Forcing platforms to carry speech they deem objectionable—such as hate speech or misinformation—would alter their expressive choices and violate their First Amendment rights.

Three reasons why this case matters:

  • Clarifies Free Speech Rights in the Digital Age: The case reinforces that social media platforms have editorial rights similar to traditional media, influencing how future laws may regulate online speech.
  • Impacts State-Level Regulation: The ruling limits states’ ability to impose viewpoint neutrality mandates on private platforms, shaping the balance of power between governments and tech companies.
  • Sets a Standard for Facial Challenges: By emphasizing the need to weigh a law’s unconstitutional and constitutional applications, the decision provides guidance for courts evaluating similar cases.

Moody v. NetChoice, LLC, 144 S.Ct. 2383 (July 1, 2024)

No Section 230 immunity for Facebook on contract-related claims


Plaintiffs sued Meta, claiming that they were harmed by fraudulent third-party ads posted on Facebook. Plaintiffs argued that these ads violated Meta’s own terms of service, which prohibit deceptive advertisements. They accused Meta of allowing scammers to run ads that targeted vulnerable users and of prioritizing revenue over user safety. Meta moved to dismiss, claiming that it was immune from liability under 47 U.S.C. § 230(c)(1) (a portion of the Communications Decency Act (CDA)), which generally protects internet platforms from being held responsible for third-party content.

Plaintiffs asked the district court to hold Meta accountable for five claims: negligence, breach of contract, breach of the covenant of good faith and fair dealing, violation of California’s Unfair Competition Law (UCL), and unjust enrichment. They alleged that Meta not only failed to remove scam ads but actively solicited them, particularly from advertisers based in China, who accounted for a large portion of the fraudulent activity on the platform.

The district court held that § 230(c)(1) protected Meta from all claims, even the contract claims. Plaintiffs sought review with the Ninth Circuit.

On appeal, the Ninth Circuit affirmed that § 230(c)(1) provided Meta with immunity for the non-contract claims, such as negligence and UCL violations, because these claims treated Meta as a publisher of third-party ads. But the Ninth Circuit disagreed with the district court’s ruling on the contract-related claims. It held that the lower court had applied the wrong legal standard when deciding whether § 230(c)(1) barred those claims. So the court vacated the dismissal of the contract claims, explaining that contract claims were different because they arose from Meta’s promises to users, not from its role as a publisher. The case was remanded back to the district court to apply the correct standard for the contract claims.

Three reasons why this case matters:

  • It clarifies that § 230(c)(1) of the CDA does not provide blanket immunity for all types of claims, especially contract-related claims.
  • The case underscores the importance of holding internet companies accountable for their contractual promises to users, even when they enjoy broad protections for third-party content.
  • It shows that courts continue to wrestle with the boundaries of platform immunity under the CDA, which could shape future rulings about online platforms’ responsibilities.

Calise v. Meta Platforms, Inc., 103 F.4th 732 (9th Cir., June 4, 2024)

Federal judge optimistic about the future of courts’ use of AI


Eleventh Circuit concurrence could be a watershed moment for discourse on the judiciary’s future use of AI.

In a thought-provoking recent ruling, Judge Kevin Newsom of the 11th Circuit Court of Appeals discussed the potential use of AI-powered large language models (LLMs) in legal text interpretation. Judge Newsom, known for his commitment to textualism and plain-language interpretation, served as Solicitor General of Alabama and clerked for Justice David Souter of the U.S. Supreme Court before President Trump appointed him to the bench in 2017. His concurrence in the case of Snell v. United Specialty Insurance Company, — F.4th —, 2024 WL 2717700 (11th Cir. May 28, 2024), unusual in its approach, aimed to pull back the curtain on how legal professionals can leverage modern technology to enhance judicial processes. It takes an optimistic and hopeful tone on how LLMs could improve judges’ decision making, particularly when examining the meaning of words.

Background of the Case

The underlying case involved a plaintiff (Snell) who installed an in-ground trampoline for a customer. Snell got sued over the work and the defendant insurance company refused to pick up the tab. One of the key questions in the litigation was whether this work fell under the term “landscaping” as used in the insurance policy. The parties and the courts had anguished over the ordinary meaning of the word “landscaping,” relying heavily on traditional methods such as consulting a dictionary. Ultimately, the court resolved the issue based on a unique aspect of Alabama law and Snell’s insurance application, which explicitly disclaimed any recreational or playground equipment work. But the definitional debate highlighted the complexities in interpreting legal texts and inspired Judge Newsom’s proposal to consider AI’s role in this process.

Judge Newsom’s Proposal

Judge Newsom’s proposal is both provocative and forward-thinking, discussing how to effectively incorporate AI-powered LLMs such as ChatGPT, Gemini, and Claude into the interpretive analysis of legal texts. He acknowledged that this suggestion may initially seem “heretical” to some but believed it was worth exploring. The basic rationale is that LLMs, trained on vast amounts of data reflecting everyday language use, could provide valuable insights into the ordinary meanings of words and phrases.

The concurrence reads very differently – in its solicitous treatment of AI – from many other early cases dealing with litigants’ use of AI, such as J.G. v. New York City Dept. of Education, 2024 WL 728626 (February 22, 2024). In that case, the court found ChatGPT to be “utterly and unusually unpersuasive.” The present case has an entirely different attitude toward AI.

Strengths of Using LLMs for Ordinary Language Determinations

Judge Newsom systematically examined the various benefits of judges’ use of LLMs. He commented on the following issues and aspects:

  • Reflecting Ordinary Language: LLMs are trained on extensive datasets from the internet, encompassing a broad spectrum of language use, from academic papers to casual conversations. This training allows LLMs to offer predictions about how ordinary people use language in everyday life, potentially providing a more accurate reflection of common speech than traditional dictionaries.
  • Contextual Understanding: Modern LLMs can discern context and differentiate between various meanings of the same word based on usage patterns. This capability could be particularly useful in legal interpretation, where context is crucial.
  • Accessibility and Transparency: LLMs are increasingly accessible to judges, lawyers, and the general public, offering an inexpensive and transparent research tool. Unlike the opaque processes behind some dictionary definitions, LLMs can provide clear insights into their training data and predictive mechanisms.
  • Empirical Advantages: Compared to traditional empirical methods such as surveys and “corpus linguistics,” LLMs are more practical and less susceptible to manipulation. They offer a scalable solution that can quickly adapt to new data.

Challenges and Considerations

But the Judge’s take was not entirely rosy. He acknowledged certain potential downsides and vulnerabilities in the use of AI for making legal determinations. But even in this critique, his approach remained optimistic:

  • Hallucinations: LLMs can generate incorrect or fictional information. However, Judge Newsom argued that this issue is not unique to AI and that human lawyers also make mistakes or manipulate facts.
  • Representation: LLMs may not fully capture offline speech, potentially underrepresenting certain populations. This concern needs addressing, but Judge Newsom stated it does not fundamentally undermine the utility of LLMs.
  • Manipulation Risks: There is a risk of strategic manipulation of LLM outputs. However, this risk exists with traditional methods as well, and transparency in querying multiple models can mitigate it.
  • Dystopian Fears: Judge Newsom emphasized that LLMs should not replace human judgment but serve as one of many tools in the interpretive toolkit.

Future Directions

Judge Newsom concluded by suggesting further exploration into the proper querying of LLMs, refining the outputs, and ensuring that LLMs can handle temporal considerations for interpreting historical texts (i.e., a word must be given the meaning it had when it was written). These steps could maximize the utility of AI in legal interpretation, ensuring it complements rather than replaces traditional methods.

Snell v. United Specialty Insurance Company, — F.4th —, 2024 WL 2717700 (11th Cir. May 28, 2024)


Artist’s side hustle lands him in DMCA litigation with autonomous vehicle company


A dispute between a digital artist and his former employer over content rights resulted in a court allowing the employee’s DMCA claim to proceed while striking his state law claims.

For over five years, plaintiff worked for defendant, crafting digital street scenes of San Francisco to train the company’s self-driving vehicles. But plaintiff’s passion for digital art extended beyond his day job. So, in his spare time, using his own equipment, he created intricate urban scenery for video games, which he then sold on the Epic Games marketplace.

When defendant learned of plaintiff’s side hustle, it claimed plaintiff’s project infringed defendant’s copyright rights. It demanded that plaintiff cease all sales of his digital art. Plaintiff refused to comply. He argued that his creations were made on his own time, with his own resources, and did not utilize any proprietary information from defendant.

Defendant considered plaintiff’s refusal as a resignation and terminated his employment. But that did not end the matter. Defendant escalated the situation by sending a takedown notice to Epic Games under the Digital Millennium Copyright Act (DMCA), alleging that plaintiff’s content infringed on defendant’s copyrighted material. This resulted in Epic removing plaintiff’s content from its marketplace.

The lawsuit

Plaintiff sued, claiming that defendant sent the takedown notice in bad faith, asserting federal claims under the DMCA and state law claims for interference with contractual relations, interference with prospective economic advantage, and violation of California’s Unfair Competition Law. Defendant moved to dismiss the DMCA claim and also moved to strike the state law claims under California’s anti-SLAPP statute, which aims to prevent lawsuits that chill the exercise of free speech.

The court’s decision

The court denied defendant’s motion to dismiss the DMCA claim, allowing plaintiff’s federal claim to proceed. It found that plaintiff had sufficiently alleged that defendant acted in bad faith when it issued the takedown notice, a key requirement under Section 512(f) of the DMCA. But the court granted defendant’s motion to strike the state law claims. It held that the state law claims were preempted by the DMCA and were also barred by California’s litigation privilege, which protects communications made in anticipation of litigation.

Shande v. Zoox, Inc., 2024 WL 2306284 (N.D. Cal., May 21, 2024)

TikTok v. Garland: A full rundown of the Constitutional issues

As anticipated, TikTok and ByteDance have initiated legal action against the U.S. government, challenging a recently enacted law that would ban TikTok unless ByteDance sells the company off in the next nine months to an entity not controlled by a foreign adversary. Petitioners argue that the law infringes on constitutional rights in several ways: the First Amendment, the prohibition against bills of attainder, and the Equal Protection and Takings Clauses of the Fifth Amendment. They are seeking a declaration from the court that the law is unconstitutional and an injunction to prevent the Attorney General from enforcing the law.

TikTok tries to make itself look good

The allegations of the complaint contest any characterization of the law as a mere regulatory measure on ownership. The companies assert that compliance with the ban — especially within the 270-day timeframe — is not feasible due to commercial, technical, and legal constraints. Moreover, petitioners argue that the law represents an unconstitutional overreach, setting a dangerous standard that could allow Congress to bypass First Amendment protections under a thin guise of national security.

The complaint discusses how TikTok enjoys over 170 million monthly users in the U.S. and more than 1 billion globally. The platform is known for its powerful recommendation engine, enhancing user engagement by presenting curated content on its “For You” page. Although developed by China-based ByteDance, TikTok operates internationally, including a significant U.S. presence subject to American law.

TikTok claims to be a repeat victim of overreach

Petitioners continue by describing how the U.S. government has previously attempted to ban TikTok while citing national security concerns. These efforts began in earnest with President Trump’s 2020 executive order, which the courts blocked for exceeding the scope of the International Emergency Economic Powers Act (IEEPA) and for constitutional issues. Although discussions aimed at resolving these security concerns led to a draft National Security Agreement under President Biden, these talks have faltered, and a resolution remains elusive.

Selling isn’t easy

Petitioners claim the requirement for TikTok to divest its U.S. operations from its global network is impractical for technological, commercial and legal reasons. A U.S.-only version of TikTok would lose access to global content, severely diminishing its appeal and commercial viability. Technologically, transferring the sophisticated source code within the law’s tight timeline is unachievable. And legal constraints, particularly China’s stringent export controls, prevent the divestiture of essential technologies like the recommendation engine.

Why TikTok thinks the law is unconstitutional

Petitioners provide four grounds on which they believe the law is unconstitutional: (1) the First Amendment, (2) Article I’s prohibition of bills of attainder, (3) the Equal Protection Clause of the Fifth Amendment, and (4) the Takings Clause of the Fifth Amendment.

First Amendment:

Petitioners assert that the law significantly limits their First Amendment rights, impacting both the company and the free speech rights of some 170 million users in the U.S. Petitioners claim that TikTok is recognized for its editorial activities in selecting and presenting content, and argue that these activities are protected forms of expression under the First Amendment. TikTok not only curates third-party content but also creates and shares its own, especially on topics such as support for small businesses and educational initiatives, which it considers core speech. Additionally, the law restricts other ByteDance subsidiaries from reaching U.S. audiences, further stifling protected speech activities.

Petitioners argue that the court should apply strict scrutiny to the law for three reasons. First, the law imposes content- and viewpoint-based restrictions, favoring certain types of speech such as business and travel reviews, over others such as political and religious speech, and appears motivated by the viewpoints expressed on TikTok. Second, the law discriminates between speakers, specifically targeting TikTok and other ByteDance subsidiaries by automatically deeming them as foreign adversary controlled, while other companies face less stringent criteria. Third, the law constitutes an unlawful prior restraint, suppressing speech in advance by prohibiting TikTok and its users from expressing themselves on the platform, a severe infringement on First Amendment rights.

Petitioners assert that the law fails the strict scrutiny test as it neither serves a compelling government interest nor is narrowly tailored. Citing national security, Congress has not provided concrete evidence that TikTok poses a specific threat or that the law effectively addresses such threats. The speculative nature of these risks and the continued use of TikTok by officials such as President Biden and members of Congress undermine the credibility of these security concerns. Furthermore, the law lacks a fair process or sufficient evidence to justify its restrictive measures. Additionally, the law is not narrowly tailored — less restrictive measures such as an agreement involving data security protocols with TikTok were already under negotiation. These alternatives, including more targeted regulations or industry-wide data protection laws, suggest that the law’s broad prohibitions and lack of procedural fairness for TikTok are unjustified, failing to meet the precision required by strict scrutiny.

Moreover, Petitioners argue that the law independently fails strict scrutiny because it is both under- and over-inclusive. It is under-inclusive as it neglects how other foreign and domestic companies could pose similar data security risks and spread misinformation, thereby suggesting selective enforcement against certain speakers or viewpoints. It is over-inclusive in that it targets all ByteDance-owned applications without evidence that these pose any significant risk, covering applications irrespective of their data collection practices and only if they display content. This flawed scope suggests no direct link between the law’s restrictions and the stated security concerns, weakening its justification under strict scrutiny.

Echoing the analysis a federal court in Montana applied to that state’s TikTok ban last year, TikTok argued that the law would not survive even under a less-demanding “intermediate scrutiny” standard. This is the standard applied to content-neutral time, place, and manner restrictions. Petitioners assert that the law completely prohibits TikTok speech activities across all settings in the U.S., requiring the law to be narrowly tailored to a significant government interest and not restrict substantially more speech than necessary. But Petitioners assert the government cannot show its concerns about data security and propaganda to be anything beyond speculative. Furthermore, the law does not leave open adequate alternative channels for communication, as it significantly prevents TikTok from reaching its audience. For these reasons, along with the availability of less restrictive measures, TikTok asserts the law fails intermediate scrutiny as well.

The last part of the First Amendment argument is that the law closes off an entire medium of expression, which Supreme Court precedent generally deems unreasonable. And the law is constitutionally overbroad, as it prohibits all speech on ByteDance-owned applications, regardless of content. This broad suppression encompasses substantial unconstitutional applications, far outweighing its legitimate scope, making it a clear example of overbreadth as defined in U.S. law.

Bill of Attainder:

TikTok also challenges the law as an unconstitutional bill of attainder, which is prohibited by Article I of the U.S. Constitution. This part of the Constitution forbids Congress from enacting legislation that imposes legislative punishment on specific individuals or groups without a judicial trial. Petitioners say that the law specifically targets them, imposing severe restrictions by forcing the divestment of their U.S. businesses and barring them from operating in their chosen fields, akin to punitive measures historically associated with bills of attainder. Unlike other entities that can avoid similar prohibitions through less restrictive measures, petitioners face unique, punitive burdens without meaningful opportunities for corrective action, thus violating the separation of powers by allowing legislative encroachment on judicial functions. Additionally, petitioners assert that the law disproportionately impacts them by not applying the same standards to similarly situated companies, making it effectively a punitive measure against a specific corporate group, thereby rendering it a bill of attainder.

Equal Protection:

Petitioners’ third constitutional argument is that the law violates their rights under the equal protection component of the Fifth Amendment’s Due Process Clause by discriminatorily targeting them without justification. Unlike other companies deemed “controlled by a foreign adversary,” petitioners are automatically classified as such without the due process of notice and a presidential determination supported by evidence, which other companies receive. This classification imposes undue burdens on their free speech rights by bypassing the necessary procedural safeguards that other entities are afforded, such as detailed justifications for national security concerns that enable judicial review. Additionally, the law exempts other similarly situated companies from certain restrictions if they offer applications for posting reviews, unjustifiably leaving petitioners without similar exemptions. This differential treatment lacks a rational basis, undermining the equal protection principles by imposing arbitrary and discriminatory restrictions on petitioners.

Takings Clause:

And petitioners’ fourth constitutional argument is that the law effects an unlawful taking of private property without just compensation, in violation of the Fifth Amendment’s Takings Clause. The law mandates the shutdown of ByteDance’s U.S. operations or forces the sale of these assets under conditions that do not assure fair market value, severely undercutting their worth. This compulsion to sell or close down constitutes a per se taking, as it strips ByteDance of all economically beneficial uses of its property without just compensation. Furthermore, the law also represents a regulatory taking by significantly impacting the economic value of ByteDance’s investments and interfering with reasonable investment-backed expectations. The legislative action here goes beyond permissible bounds, triggering a need for regulatory scrutiny under established criteria such as economic impact, disruption of investment expectations, and the nature of government action. As such, petitioners argue that the law unjustly deprives them of their property rights without adequate compensation, necessitating prospective injunctive relief.

Petitioners seek a judicial declaration that the law is unconstitutional and an injunction against its enforcement, arguing that the government’s measures are excessively punitive and not grounded in adequately demonstrated national security risks.

TikTok’s constitutional arguments against the ban: a first look


As expected, TikTok has sued the federal government over the law enacted last month that requires ByteDance to sell off the app or be banned. It seeks a declaratory judgment that the law is unconstitutional and asks for an injunction barring the law’s enforcement. Here’s a first look at the constitutional issues TikTok is raising:

  • First Amendment: TikTok contends the Act restricts its right to free speech more severely than other media entities without sufficient justification, failing to consider less restrictive alternatives. The ban also violates the free speech rights of the app’s 170 million American users.
  • Bill of Attainder: TikTok asserts that the Act singles out TikTok for punitive measures typically reserved for judicial processes, without due process.
  • Equal Protection: Under the Fifth Amendment, TikTok argues the Act wrongfully applies stricter conditions on it than on other similar entities.
  • Takings Clause: TikTok claims the Act effects an unlawful taking of its property without just compensation, as it forces a sale or shutdown of its U.S. operations at undervalued prices.

More analysis to come.

Software contract was not unconscionable


Software vendor sued its customer after the customer stopped paying during implementation. Customer filed a counterclaim asserting that the contract between the parties was unconscionable because, if enforced, it would provide a “gross disparity in the values exchanged.” In other words, customer would be required to pay, but vendor would not have to provide the software.

The court rejected customer’s argument and dismissed the claim of unconscionability. It observed that “[i]n essence, [customer’s] argument is that the Agreement is unconscionable because [vendor] did not perform on its promise to deliver software that could provide and perform certain functions. These are allegations supporting a claim for breach of contract, not unconscionability.”

PCS Software Inc. v. Dispatch Services, 2024 WL 1996126 (S.D. Texas, May 6, 2024)

See also:

TikTok and the First Amendment: Previewing some of the free speech issues

TikTok is on the verge of a potential federal ban in the United States. This development echoes a previous situation in Montana, where a 2023 state law attempted to ban TikTok but faced legal challenges. TikTok and its users filed a lawsuit against the state, claiming the ban violated their First Amendment rights. The federal court sided with TikTok and the users, blocking the Montana law from being enforced on the grounds that it infringed on free speech.

The court’s decision highlighted that the law restricted TikTok users’ ability to communicate and impacted the company’s content decisions, thus failing to meet the intermediate scrutiny standard applicable to content-neutral speech restrictions. The ruling criticized the state’s attempt to regulate national security, deeming it outside the state’s jurisdiction and excessively restrictive compared to other available measures such as data privacy laws. Furthermore, the court noted that the ban left other similar apps unaffected and failed to provide alternative communication channels for TikTok users reliant on the app’s unique features.
