TikTok is now officially among the walking dead

It’s now the law of the land: nine months from now, if any app store makes TikTok available or any hosting provider lends services enabling TikTok, those companies will face substantial penalties. That is, unless TikTok’s owner ByteDance sells off the company to an entity that is not located in or controlled by anyone from Russia, Iran, North Korea or China. The version of the law that the President signed on April 24, 2024 is substantially the same as the one the House of Representatives passed in March 2024.

The only difference is that if, at the nine-month mark, a transaction to sell off TikTok is underway, the President can grant one 90-day extension for the sale to be completed.

No doubt we’re going to see some serious free speech litigation over this. Stay tuned.

What does the “bill that could ban TikTok” actually say?

In addition to causing free speech concerns, the bill is troubling in the way it gives unchecked power to the Executive Branch.

Earlier this week the United States House of Representatives passed a bill that is being characterized as one that could ban TikTok. Styled as the Protecting Americans from Foreign Adversary Controlled Applications Act, the text of the bill calls TikTok and its owner ByteDance Ltd. by name and seeks to “protect the national security of the United States from the threat posed by foreign adversary controlled applications.”

What conduct would be prohibited?

The Act would make it unlawful for anyone to “distribute, maintain, or update” a “foreign adversary controlled application” within the United States. The Act specifically prohibits anyone from “carrying out” any such distribution, maintenance or updating via a “marketplace” (e.g., any app store) or by providing hosting services that would enable distribution, maintenance or updating of such an app. Interestingly, the ban does not so much directly prohibit ByteDance from making TikTok available as it would make entities such as Apple and Google liable for enabling others to access, maintain and update the app.

What apps would be banned?

There are two ways an app could find itself a “foreign adversary controlled application” and thereby prohibited.

  • The first is simply by being TikTok or any app provided by ByteDance or its successors.
  • The second way – and perhaps the more concerning way because of its grant of great power to one person – is by being an application controlled by a foreign adversary that is “determined by the President to present a significant threat to the national security of the United States.” Though the President must first provide the public with notice of such determination and make a report to Congress on the specific national security concerns, there is ultimately no check on the President’s power to make this determination. For example, there is no provision in the statute saying that Congress could override the President’s determination.

Relatively insignificant apps, or apps with no social media component, would not be covered by the ban. For example, to be a “covered company” under the statute, the app has to have more than one million monthly users in two of the three months prior to the time the President determines the app should be banned. And the statute specifically says that any site having a “primary purpose” of allowing users to post reviews is exempt from the ban.
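For readers who think in code, here is a minimal sketch of that user-count threshold in Python. The function name and the example numbers are illustrative assumptions for this post, not anything drawn from the statute’s text.

```python
# A minimal sketch of the "covered company" user threshold described above:
# more than 1,000,000 monthly users in at least two of the three months
# preceding the President's determination. Names and numbers here are
# illustrative assumptions, not statutory text.

THRESHOLD = 1_000_000

def meets_user_threshold(monthly_users: list[int]) -> bool:
    """monthly_users holds the user counts for the three months
    preceding the determination."""
    assert len(monthly_users) == 3
    months_over = sum(1 for count in monthly_users if count > THRESHOLD)
    return months_over >= 2

# An app with 1.2M, 0.9M, and 1.1M monthly users clears the threshold in
# two of the three months, so it could satisfy this part of the definition.
print(meets_user_threshold([1_200_000, 900_000, 1_100_000]))  # True
```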

When would the ban take effect?

TikTok would be banned 180 days after the date the President signs the bill. Any other app that the President later determines to be a “foreign adversary controlled application” would be banned 180 days after the date of that determination, which would come after the public notice period and report to Congress discussed above.
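The timing is simple date arithmetic. A quick sketch, using the eventual April 24, 2024 signing date purely for illustration (recall that the version ultimately signed, discussed at the top of this page, stretched the period to nine months):

```python
# Illustrative calculation of the bill's 180-day clock described above.
# The signing date here is used purely for illustration.
from datetime import date, timedelta

signing_date = date(2024, 4, 24)
ban_effective = signing_date + timedelta(days=180)
print(ban_effective)  # 2024-10-21
```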

What could TikTok do to avoid being banned?

It could undertake a “qualified divestiture” before the ban takes effect, i.e., within 180 days after the President signs the bill. Here is another point where one may be concerned about the great power given to the Executive Branch. A “qualified divestiture” would be a situation in which the owner of the app sells off that portion of the business *and* the President determines two things: (1) that the app is no longer being controlled by a foreign adversary, and (2) there is no “operational relationship” between the United States operations of the company and the old company located in the foreign adversary country. In other words, the app could not avoid the ban by being owned by a United States entity while still sharing data with the foreign company and having the foreign company handle the algorithm.

What about users who would lose all their data?

The Act requires the app being prohibited to provide users, upon request, with “all the available data related to the account of such user” prior to the time the app becomes prohibited. That data would include all posts, photos and videos.

What penalties apply for violating the law?

The Attorney General is responsible for enforcing the law. (An individual could not sue and recover damages.) Anyone (most likely an app store) that violates the ban on distributing, maintaining or updating the app would face penalties of $5,000 multiplied by the number of users determined to access, maintain or update the app. Those penalties could be astronomical – TikTok currently has 170 million users, so the exposure would be $850,000,000,000. An app’s failure to provide data portability prior to being banned would make it liable for $500 multiplied by the number of affected users.
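To make the “astronomical” figure concrete, here is the back-of-the-envelope arithmetic in Python, using the 170 million user figure cited above:

```python
# Back-of-the-envelope penalty math for the figures cited above.
USERS = 170_000_000            # TikTok's reported user base
BAN_PENALTY_PER_USER = 5_000   # distributing, maintaining or updating the app
DATA_PENALTY_PER_USER = 500    # failing to provide data portability

print(f"Ban violation exposure:    ${BAN_PENALTY_PER_USER * USERS:,}")
print(f"Data portability exposure: ${DATA_PENALTY_PER_USER * USERS:,}")
# Ban violation exposure:    $850,000,000,000
# Data portability exposure: $85,000,000,000
```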

How did Ohio’s efforts to regulate children’s access to social media violate the Constitution?

Ohio passed a law called the Parental Notification by Social Media Operators Act which sought to require certain categories of online services to obtain parental consent before allowing any unemancipated child under the age of sixteen to register or create accounts with the service.

Plaintiff internet trade association – representing platforms including Google, Meta, X, Nextdoor, and Pinterest – sought a preliminary injunction that would prohibit the State’s attorney general from enforcing the law. Finding the law to be unconstitutional, the court granted the preliminary injunction.

Likelihood of success on the merits: First Amendment Free Speech

The court found that plaintiff was likely to succeed on its constitutional claims. Rejecting the State’s argument that the law sought only to regulate commerce (i.e., the contracts governing use of social media platforms) and not speech, it held that the statute was a restriction on speech, implicating the First Amendment. The law was a content-based restriction, the court reasoned, because the social media features the statute singled out in defining which platforms were subject to the law – e.g., the ability to interact socially with others – were “inextricable from the content produced by those features.” And the law violated the rights of minors living in Ohio because it infringed on minors’ rights to both access and produce First Amendment protected speech.

Given these attributes of the law, the court applied strict scrutiny to the statute, which failed for several reasons:

  • First, the Act was not narrowly tailored to address the specific harms identified by the State, such as protecting minors from oppressive contract terms with social media platforms. Instead of targeting the contract terms directly, the Act broadly regulated access to and dissemination of speech, making it under-inclusive in addressing the specific issue of contract terms and over-inclusive by imposing sweeping restrictions on speech.
  • Second, while the State aimed to protect minors from mental health issues and sexual predation related to social media use, the Act’s approach of requiring parental consent for minors under sixteen to access all covered websites was an untargeted and blunt instrument, failing to directly address the nuanced risks posed by specific features of social media platforms.
  • Finally, in attempting to bolster parental authority, the Act mirrored previously rejected arguments that imposing speech restrictions, subject to parental veto, was a legitimate means of aiding parental control, making it over-inclusive by enforcing broad speech restrictions rather than focusing on the interests of genuinely concerned parents.

Likelihood of success on the merits: Fourteenth Amendment Due Process

The statute violated the Due Process Clause of the Fourteenth Amendment because its vague language failed to provide clear notice to operators of online services about the conduct that was forbidden or required. The Act’s broad and undefined criteria for determining applicable websites, such as targeting children or being reasonably anticipated to be accessed by children, left operators uncertain about their legal obligations. The inclusion of an eleven-factor list intended to clarify applicability, which contained vague and subjective elements like “design elements” and “language,” further contributed to the lack of precise guidance. The Act’s exception for “established” and “widely recognized” media outlets without clear definitions for these terms introduced additional ambiguity, risking arbitrary enforcement. Despite the State highlighting less vague aspects of the Act and drawing parallels with the federal Children’s Online Privacy Protection Act of 1998 (COPPA), these arguments did not alleviate the overall vagueness, particularly with the Act’s broad and subjective exceptions.

Irreparable harm and balancing of the equities

The court found that plaintiff’s members would face irreparable harm through non-recoverable compliance costs and the potential for civil liability if the Act were enforced, as these monetary harms could not be fully compensated. Moreover, the Act’s infringement on constitutional rights, including those protected under the First Amendment, constituted irreparable harm since the loss of such freedoms, even for short durations, is considered significant.

The balance of equities and the public interest did not favor enforcing a statute that potentially violated constitutional principles, as the enforcement of unconstitutional laws serves no legitimate public interest. The argument that the Act aimed to protect minors did not outweigh the importance of upholding constitutional rights, especially when the statute’s measures were not narrowly tailored to address specific harms. Therefore, the potential harm to plaintiff’s members and the broader implications for constitutional rights underscored the lack of public interest in enforcing this statute.

NetChoice, LLC v. Yost, 2024 WL 55904 (S.D. Ohio, February 12, 2024)

Required content moderation reporting does not violate X’s First Amendment rights

A federal court in California has upheld the constitutionality of the state’s Assembly Bill 587 (AB 587), which requires social media companies to submit semi-annual reports to the state attorney general detailing their content moderation practices. This decision comes after X filed a lawsuit claiming the law violated the company’s First Amendment rights.

The underlying law

AB 587 requires social media companies to provide detailed accounts of their content moderation policies, particularly addressing issues like hate speech, extremism, disinformation, harassment, and foreign political interference. These “terms of service reports” are to be submitted to the state’s attorney general, aiming to increase transparency in how these platforms manage user content.

X’s challenge

X challenged this law, seeking to prevent its enforcement on the grounds that it was unconstitutional. The court, however, denied X’s motion for injunctive relief, finding that the company failed to demonstrate a likelihood of success on the merits of its constitutional claims.

The court’s decision relied heavily on SCOTUS’s opinion in Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U.S. 626 (1985). Under the Zauderer case, for governmentally compelled commercial disclosure to be constitutionally permissible, the information must be purely factual and uncontroversial, not unduly burdensome, and reasonably related to a substantial government interest.

The court’s constitutional analysis

In applying these criteria, the court found that AB 587’s requirements fit within these constitutional boundaries. The reports, while compulsory, do not constitute commercial speech in the traditional sense, as they are not advertisements and carry no direct economic benefit for the social media companies. Despite this, the court followed the rationale of other circuits that have assessed similar requirements for social media disclosures.

The court determined that the content of the reports mandated by AB 587 is purely factual, requiring companies to outline their existing content moderation policies related to specified areas. The statistical data, if provided, represents objective information about the company’s actions. The court also found that the disclosures are uncontroversial, noting that the mere association with contentious topics does not render the reports themselves controversial. (We know, of course, how controversial and political the regulation of “disinformation” can be.)

Addressing the burden of these requirements, the court recognized that while the reporting may be demanding, it is not unjustifiably so under First Amendment considerations. X argued that the law would necessitate significant resources to monitor and report the required metrics. But the court noted that AB 587 does not obligate companies to adopt any specific content categories, nor does it impose burdens on speech itself, a crucial aspect under Zauderer’s analysis.

What this means

The court confirmed that AB 587’s reporting requirements are reasonably related to a substantial government interest. This interest lies in ensuring transparency in social media content moderation practices, enabling consumers to make informed choices about their engagement with news and information on these platforms.

The court’s decision is a significant step in addressing the complexities of regulating social media platforms, balancing the need for transparency with the constitutional rights of these digital entities. As the landscape of digital communication continues to evolve, this case may be a marker for how governments might approach the regulation of social media companies, particularly in the realm of content moderation.

X Corp. v. Bonta, 2023 WL 8948286 (E.D. Cal., December 28, 2023)

See also: Maryland Court of Appeals addresses important question of internet anonymity

California court decision strengthens Facebook’s ability to deplatform its users

Plaintiff used Facebook to advertise his business. Facebook kicked him off and would not let him advertise, based on alleged violations of Facebook’s Terms of Service. Plaintiff sued for breach of contract. The lower court dismissed the case so plaintiff sought review with the California appellate court. That court affirmed the dismissal.

The Terms of Service authorized the company to unilaterally “suspend or permanently disable access” to a user’s account if the company determined the user “clearly, seriously, or repeatedly breached” the company’s terms, policies, or community standards.

An ordinary reading of such a provision would lead one to think that Facebook would not be able to terminate an account unless certain conditions were met, namely, that there had been a clear, serious or repeated breach by the user. In other words, Facebook would be required to make such a finding before terminating the account.

But the court applied the provision much more broadly. So broadly, in fact, that one could say the notion of clear, serious, or repeated breach was irrelevant, superfluous language in the terms.

The court said: “Courts have held these terms impose no ‘affirmative obligations’ on the company.” Discussing a similar case involving Twitter’s terms of service, the court observed that platform was authorized to suspend or terminate accounts “for any or no reason.” Then the court noted that “[t]he same is true here.”

So, the court arrived at the conclusion that despite Facebook’s own terms – which would lead users to think that they wouldn’t be suspended unless there was a clear, serious or repeated breach – one can get deplatformed for any reason or no reason. The decision pretty much gives Facebook unmitigated free speech police powers.

Strachan v. Facebook, Inc., 2023 WL 8589937 (Cal. App. December 12, 2023)

Restraining order entered against website that encouraged contacting children of plaintiff’s employees

Plaintiff sued defendant (who was an unhappy customer of plaintiff) under the Lanham Act (for trademark infringement) and for defamation. Defendant had registered a domain name using plaintiff’s company name and had set up a website that, among other things, he used to impersonate plaintiff’s employees and provide information about employees’ family members, some of whom were minors.

Plaintiff moved for a temporary restraining order and the court granted the motion.

The Website

The website was structured and designed in a way that made it appear as though it was affiliated with plaintiff. For example, it included a copyright notice identifying plaintiff as the owner. It also included allegedly false statements about plaintiff. For example, it included the following quotation, which was attributed to plaintiff’s CEO:

Well of course we engage in bad faith tactics like delaying and denying our policy holders [sic] valid claims. How do you think me [sic], my key executive officers, and my board members stay so damn rich. [sic]

The court found that plaintiff had shown a likelihood of success on the merits of its claims.

Lanham Act Claim

It found that defendant used plaintiff’s marks for the purpose of confusing the public by creating a website that looked as though it was a part of plaintiff’s business operations. This was evidenced by, for example, the inclusion of a copyright notice on the website.

Defamation

On the defamation claim, the court found that the nature of the statements about plaintiff, plaintiff’s assertion that they were false, and the allegation that the statements were posted on the internet sufficed to satisfy the first two elements of a defamation claim, namely, that they were false and defamatory statements pertaining to the plaintiff and were unprivileged publications to a third party. The allegations in the complaint were also sufficient to indicate that defendant “negligently disregarded the falsity of the statements.”

Furthermore, the statements on the website concerned the way that plaintiff processed its insurance claims, which related to the business of the company and the profession of plaintiff’s employees who handled the processing of claims. Therefore, the final element was also satisfied.

First Amendment Limitations

The limitations the court placed on the TRO are interesting to note. To the extent plaintiff sought injunctive relief directed at defendant’s speech encouraging others to contact the company and its employees with complaints about the business (whether at the workplace or at home), or directed at defendant’s public “ad hominem” comments, the court would not grant the emergency relief that was sought.

The court also would not prohibit defendant from publishing allegations that plaintiff had engaged in fraudulent or improper business practices, or from publishing the personally identifying information of plaintiff’s employees, officers, agents, and directors. Plaintiff’s submission failed to demonstrate to the court’s satisfaction how such injunctive relief would not unlawfully impair defendant’s First Amendment rights.

The court did, however, enjoin defendant from encouraging others to contact the children and other family members of employees about plaintiff’s business practices, because contact of that nature had the potential to cause irreparable emotional harm to those family members, who have no employment or professional relationship with defendant.

Symetra Life Ins. Co. v. Emerson, 2018 WL 6338723 (D. Maine, Dec. 4, 2018)

Seventh Circuit sides with Backpage in free speech suit against sheriff

Backpage is an infamous classified ads website that provides an online forum for users to post ads relating to adult services. The sheriff of Cook County, Illinois (i.e., Chicago) sent letters to the major credit card companies urging them to prohibit users from using the companies’ services to purchase Backpage ads (whether those ads were legal or not). Backpage sued the sheriff, arguing the communications with the credit card companies were a free speech violation.

The lower court denied Backpage’s motion for preliminary injunction. Backpage sought review with the Seventh Circuit. On appeal, the court reversed and remanded.

The appellate court held that while the sheriff has a First Amendment right to express his views about Backpage, a public official who tries to shut down an avenue of expression of ideas and opinions through “actual or threatened imposition of government power or sanction” is violating the First Amendment.

Judge Posner, writing for the court, mentioned the sheriff’s past failure to shut down Craigslist’s adult section through litigation (see Dart v. Craigslist, Inc., 665 F.Supp.2d 961 (N.D. Ill. 2009)):

The suit against Craigslist having failed, the sheriff decided to proceed against Backpage not by litigation but instead by suffocation, depriving the company of ad revenues by scaring off its payments-service providers. The analogy is to killing a person by cutting off his oxygen supply rather than by shooting him. Still, if all the sheriff were doing to crush Backpage was done in his capacity as a private citizen rather than as a government official (and a powerful government official at that), he would be within his rights. But he is using the power of his office to threaten legal sanctions against the credit-card companies for facilitating future speech, and by doing so he is violating the First Amendment unless there is no constitutionally protected speech in the ads on Backpage’s website—and no one is claiming that.

The court went on to find that the sheriff’s communications made the credit card companies “victims of government coercion,” in that the letters threatened the companies with criminal culpability when, à la Dart v. Craigslist and 47 U.S.C. 230, it was unclear whether Backpage was even in violation of the law for providing the forum for the ads.

Backpage.com, LLC v. Dart, — F.3d —, 2015 WL 7717221 (7th Cir. Nov. 30, 2015)

Evan Brown is a Chicago attorney advising enterprises on important aspects of technology law, including software development, technology and content licensing, and general privacy issues.

California court okays lawsuit against mugshot posting website

The Court of Appeal of California has held that defendant website operator – who posted arrestees’ mugshots and names, and generated revenue from advertisements using arrestees’ names and by accepting money to take the photos down – was not entitled to have the lawsuit against it dismissed. Defendant’s profiting from the photos and their takedown was not in connection with an issue of public interest, and therefore did not entitle defendant to the relief afforded by an anti-SLAPP motion.

Plaintiff filed a class action lawsuit against defendant website operator, arguing that the website’s practice of accepting money to take down mugshots it posted violated California laws against misappropriation of likeness, and constituted unfair and unlawful business practices.

Defendant moved to dismiss, arguing plaintiff’s claims comprised a “strategic lawsuit against public participation” (or “SLAPP”). California has an anti-SLAPP statute that allows defendants to move to strike any cause of action “arising from any act of that person in furtherance of the person’s right of petition or free speech under the United States Constitution or the California Constitution in connection with a public issue …, unless the court determines that the plaintiff has established that there is a probability that the plaintiff will prevail on the claim.”

The court held that the posting of mugshots was in furtherance of defendant’s free speech rights and was in connection with a public issue. But the actual complained-of conduct – the generating of revenue through advertisements, and from fees generated for taking the photos down – was not protected activity under the anti-SLAPP statute.

Because the claims did not arise from the part of defendant’s conduct that would be considered “protected activity” under the anti-SLAPP statute, but instead arose from other, non-protected activity (making money off of people’s names and photos), the anti-SLAPP statute did not protect defendant. Unless the parties settle, the case will proceed.

Rogers v. Justmugshots.Com, Corp., 2015 WL 5838403 (Cal. App. October 7, 2015)

Evan Brown is an attorney in Chicago helping clients manage issues involving technology and new media.

Police department did not violate First Amendment by demoting officer who posted Confederate flag on Facebook

Case illustrates the “frequent gamble” one makes when posting on social media.

When you hear about Georgia, the name Duke, dealing with the cops, and the Confederate flag, you think Hazzard County, right? Or better yet, Daisy Duke. This case had a number of those elements, but presented a much more serious free speech question than Bo or Luke ever could have.

Plaintiff (named Duke), a captain at Georgia’s Clayton State University police department, posted a picture of the Confederate flag to his Facebook account with the caption “It’s time for the second revolution.” He was not on duty when he posted it, nor did he intend it to be visible to everyone (just friends and family). He claimed that he wanted to “express his general dissatisfaction with Washington politicians.” At the time, the police department had no social media policy that would have prevented the post.

The chief of police demoted plaintiff and cut his pay by $15,000, stating that the Facebook post was inappropriate for someone in plaintiff’s position, and that officers should not espouse political views in public.

Plaintiff sued the police chief alleging, among other things, that his demotion over the Facebook post was a retaliation that violated his First Amendment rights. Defendant moved to dismiss. The court granted the motion.

It held that the police department’s legitimate interest in efficient public service outweighed plaintiff’s interest in speaking. The determination on this issue depended heavily on the content of the communication, and the fact that defendant was a police officer.

While the court acknowledged that plaintiff intended to express his disapproval of Washington politicians, it found that “on its face his speech could convey a drastically different message with different implications.” The court noted that order and favorable public perception were critical. “[A] police department is a ‘paramilitary organization, with a need to secure discipline, mutual respect, trust and particular efficiency among the ranks due to its status as a quasi-military entity different from other public employers.'” And police departments have a particular interest in maintaining “a favorable reputation with the public.” In sum, the court found, the speech at issue was capable of impeding the government’s ability to perform its duties efficiently.

The fact that the post was made off-duty and just to friends and family did not dissuade the court from finding the demotion to be proper. A local television station picked up the story that plaintiff had made the post. The court noted that “this illustrates the very gamble individuals take in posting content on the Internet and the frequent lack of control one has over its further dissemination.”

Duke v. Hamil, 2014 WL 414222 (N.D.Ga. February 4, 2014)

Injunction against blogger violated the First Amendment

Prohibiting former tenant from blogging about landlord was unconstitutional prior restraint against speech.

Defendants wrote several blog posts critical of their former commercial landlord. The landlord sued for defamation and tortious interference, and sought an injunction against defendants’ blogging. The trial court granted the injunction, determining that defendants had “blogged extensively about [plaintiffs] and many of these blogs [were] arguably defamatory.” Although the court noted that a trial on the defamation claims was yet to be held, it ordered defendants “not to enter defamatory blogs in the future.”

Defendants sought review with the Court of Appeal of Florida. On appeal, the court reversed and remanded.

It held that injunctive relief was not available to prohibit the making of defamatory or libelous statements. “A temporary injunction directed to speech is a classic example of prior restraint on speech triggering First Amendment concerns.” But the court noted a limited exception to the general rule where the defamatory words are made in the furtherance of the commission of another intentional tort.

In this case, plaintiffs alleged another intentional tort – intentional interference with advantageous business relationships. But the court found that plaintiffs failed to present sufficient evidence to show they were entitled to an injunction on that claim. The trial court record failed to support an inference that the defendants’ blog posts had a deleterious effect upon plaintiffs’ prospective business relationships.

Chevaldina v. R.K./FL Management, Inc., — So.3d —, 2014 WL 443977 (Fla.App. 3 Dist. February 5, 2014)
