Ex-wife held in contempt for posting on TikTok about her ex-husband


The ex-husband sought to have his ex-wife held in contempt for violating an order the divorce court had entered in 2022, which required the ex-wife to take down social media posts that could make the ex-husband identifiable.

The ex-husband alleged that the ex-wife continued to post content on her TikTok account that made him identifiable as her ex-husband. The ex-wife argued that she did not name the ex-husband directly and that her social media was part of her work as a trauma therapist. But the family court found that the ex-wife’s posts violated the previous order because they made the ex-husband identifiable, and it also noted that the children could be heard in the background of some videos. As a result, the court held the ex-wife in contempt and ordered her to pay $1,800 toward the ex-husband’s attorney fees.

Ex-wife appealed the contempt ruling, arguing that ex-husband did not present enough evidence to support his claim, and that she had not violated the order. She also disputed the attorney fees. On appeal, the court affirmed the contempt finding, agreeing that her actions violated the order, but vacated the award of attorney fees due to insufficient evidence of the amount.

Three reasons why this case matters:

  • It illustrates the legal consequences of violating court orders in family law cases.
  • It emphasizes the importance of clarity in social media use during ongoing family disputes.
  • It highlights the need for clear evidence when courts are asked to impose financial sanctions such as attorney fees.

Kimmel v. Kimmel, 2024 WL 4521373 (Ct.App.Ky., October 18, 2024)

X gets Ninth Circuit win in case over California’s content moderation law


X sued the California attorney general, challenging Assembly Bill 587 (AB 587) – a law that required large social media companies to submit semiannual reports detailing their terms of service and content moderation policies, as well as their practices for handling specific types of content such as hate speech and misinformation. X claimed that this law violated the First Amendment, was preempted by the federal Communications Decency Act, and infringed upon the Dormant Commerce Clause.

Plaintiff sought a preliminary injunction to prevent the government from enforcing AB 587 while the case was pending. Specifically, it argued that being forced to comply with the reporting requirements would compel speech in violation of the First Amendment. Plaintiff asserted that AB 587’s requirement to disclose how it defined and regulated certain categories of content compelled speech about contentious issues, infringing on its First Amendment rights.

The district court denied plaintiff’s motion for a preliminary injunction. It found that the reporting requirements were commercial in nature and that they survived under a lower level of scrutiny applied to commercial speech regulations. Plaintiff sought review with the Ninth Circuit.

On review, the Ninth Circuit reversed the district court’s denial and granted the preliminary injunction. The court found that the reporting requirements compelled non-commercial speech and were thus subject to strict scrutiny under the First Amendment—a much higher standard. Under strict scrutiny, a law is presumed unconstitutional unless the government can show it is narrowly tailored to serve a compelling state interest. The court reasoned that plaintiff was likely to succeed on its claim that AB 587 violated the First Amendment because the law was not narrowly tailored. Less restrictive alternatives could have achieved the government’s goal of promoting transparency in social media content moderation without compelling companies to disclose their opinions on sensitive and contentious categories of speech.

The appellate court held that plaintiff would likely suffer irreparable harm if the law were enforced, as the compelled speech would infringe upon the platform’s First Amendment rights. Furthermore, the court found that the balance of equities and the public interest supported granting the preliminary injunction because preventing potential constitutional violations was deemed more important than the government’s interest in transparency. Therefore, the court reversed and remanded the case, instructing the district court to enter a preliminary injunction consistent with its opinion.

X Corp. v. Bonta, 2024 WL 4033063 (9th Cir. September 4, 2024)

No Section 230 immunity for Facebook on contract-related claims


Plaintiffs sued Meta, claiming that they were harmed by fraudulent third-party ads posted on Facebook. Plaintiffs argued that these ads violated Meta’s own terms of service, which prohibit deceptive advertisements. They accused Meta of allowing scammers to run ads that targeted vulnerable users and of prioritizing revenue over user safety. Meta moved to dismiss, claiming it was immune from liability under 47 U.S.C. § 230(c)(1) (a portion of the Communications Decency Act (CDA)), which generally protects internet platforms from being held responsible for third-party content.

Plaintiffs asked the district court to hold Meta accountable for five claims: negligence, breach of contract, breach of the covenant of good faith and fair dealing, violation of California’s Unfair Competition Law (UCL), and unjust enrichment. They alleged that Meta not only failed to remove scam ads but actively solicited them, particularly from advertisers based in China, who accounted for a large portion of the fraudulent activity on the platform.

The district court held that § 230(c)(1) protected Meta from all claims, even the contract claims. Plaintiffs sought review with the Ninth Circuit.

On appeal, the Ninth Circuit affirmed that § 230(c)(1) provided Meta with immunity for the non-contract claims, such as negligence and UCL violations, because these claims treated Meta as a publisher of third-party ads. But the Ninth Circuit disagreed with the district court’s ruling on the contract-related claims. It held that the lower court had applied the wrong legal standard when deciding whether § 230(c)(1) barred those claims. So the court vacated the dismissal of the contract claims, explaining that contract claims were different because they arose from Meta’s promises to users, not from its role as a publisher. The case was remanded back to the district court to apply the correct standard for the contract claims.

Three reasons why this case matters:

  • It clarifies that § 230(c)(1) of the CDA does not provide blanket immunity for all types of claims, especially contract-related claims.
  • The case underscores the importance of holding internet companies accountable for their contractual promises to users, even when they enjoy broad protections for third-party content.
  • It shows that courts continue to wrestle with the boundaries of platform immunity under the CDA, which could shape future rulings about online platforms’ responsibilities.

Calise v. Meta Platforms, Inc., 103 F.4th 732 (9th Cir., June 4, 2024)

TikTok and the First Amendment: Previewing some of the free speech issues

TikTok is on the verge of a potential federal ban in the United States. This development echoes a previous situation in Montana, where a 2023 state law attempted to ban TikTok but faced legal challenges. TikTok and its users filed a lawsuit against the state, claiming the ban violated their First Amendment rights. The federal court sided with TikTok and the users, blocking the Montana law from being enforced on the grounds that it infringed on free speech.

The court’s decision highlighted that the law restricted TikTok users’ ability to communicate and impacted the company’s content decisions, thus failing to meet the intermediate scrutiny standard applicable to content-neutral speech restrictions. The ruling criticized the state’s attempt to regulate national security, deeming it outside the state’s jurisdiction and excessively restrictive compared to other available measures such as data privacy laws. Furthermore, the court noted that the ban left other similar apps unaffected and failed to provide alternative communication channels for TikTok users reliant on the app’s unique features.

On FOX 2 Detroit talking about the TikTok ban

Earlier today I enjoyed appearing live on Fox 2 Detroit talking about the TikTok ban. We discussed what the act that the House of Representatives passed says, what it would mean for social media users, and the free speech litigation that will no doubt follow if the bill passes in the Senate and the President signs it. It’s a very intriguing issue.

What does the “bill that could ban TikTok” actually say?

In addition to causing free speech concerns, the bill is troubling in the way it gives unchecked power to the Executive Branch.

Earlier this week the United States House of Representatives passed a bill that is being characterized as one that could ban TikTok. Styled as the Protecting Americans from Foreign Adversary Controlled Applications Act, the text of the bill identifies TikTok and its owner ByteDance Ltd. by name and seeks to “protect the national security of the United States from the threat posed by foreign adversary controlled applications.”

What conduct would be prohibited?

The Act would make it unlawful for anyone to “distribute, maintain, or update” a “foreign adversary controlled application” within the United States. The Act specifically prohibits anyone from “carrying out” any such distribution, maintenance or updating via a “marketplace” (e.g., any app store) or by providing hosting services that would enable distribution, maintenance or updating of such an app. Interestingly, the ban would not so much directly prohibit ByteDance from making TikTok available; rather, it would make entities such as Apple and Google liable for making the app available for others to access, maintain and update.

What apps would be banned?

There are two ways an app could find itself being a “foreign adversary controlled application” and thereby prohibited.

  • The first is simply by being TikTok or any app provided by ByteDance or its successors.
  • The second way – and perhaps the more concerning way because of its grant of great power to one person – is by being an application run by a “covered company” that is controlled by a foreign adversary and that is “determined by the President to present a significant threat to the national security of the United States.” Though the President must first provide the public with notice of such determination and make a report to Congress on the specific national security concerns, there is ultimately no check on the President’s power to make this determination. For example, there is no provision in the statute saying that Congress could override the President’s determination.

Relatively insignificant apps, or apps with no social media component, would not be covered by the ban. For example, to be a “covered company” under the statute, the app has to have more than one million monthly users in two of the three months prior to the time the President determines the app should be banned. And the statute specifically says that any site having a “primary purpose” of allowing users to post reviews is exempt from the ban.

When would the ban take effect?

TikTok would be banned 180 days after the date the President signs the bill. For any other app that the President would later decide to be a “foreign adversary controlled application,” it would be banned 180 days after the date the President makes that determination. The date of that determination would be after the public notice period and report to Congress discussed above.

What could TikTok do to avoid being banned?

It could undertake a “qualified divestiture” before the ban takes effect, i.e., within 180 days after the President signs the bill. Here is another point where one may be concerned about the great power given to the Executive Branch. A “qualified divestiture” would be a situation in which the owner of the app sells off that portion of the business *and* the President determines two things: (1) that the app is no longer being controlled by a foreign adversary, and (2) that there is no “operational relationship” between the United States operations of the company and the old company located in the foreign adversary country. In other words, the app could not avoid the ban by being owned by a United States entity but still sharing data with the foreign company and having the foreign company handle the algorithm.

What about users who would lose all their data?

The Act requires an app facing prohibition to provide users, upon request, with “all the available data related to the account of such user” before the ban takes effect. That data would include all posts, photos and videos.

What penalties apply for violating the law?

The Attorney General is responsible for enforcing the law. (An individual could not sue and recover damages.) Anyone (most likely an app store) that violates the ban on distributing, maintaining or updating the app would face penalties of $5,000 x the number of users determined to access, maintain or update the app. Those damages could be astronomical – TikTok currently has 170 million users, so the damages would be $850,000,000,000. An app’s failure to provide data portability prior to being banned would cause it to be liable for $500 x the number of affected users.
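
To put the penalty math in concrete terms, here is a minimal back-of-the-envelope sketch in Python. The $5,000 and $500 per-user amounts are those described above, and the 170 million user figure is the estimate quoted in this post; the code is only an illustration of the arithmetic, not anything in the bill itself.

```python
# Back-of-the-envelope illustration of the Act's per-user penalty math.
# Figures are those quoted in the post above; nothing here is statutory text.

DISTRIBUTION_PENALTY_PER_USER = 5_000  # distributing, maintaining or updating a banned app
PORTABILITY_PENALTY_PER_USER = 500     # failing to provide users their data before the ban


def distribution_penalty(user_count: int) -> int:
    """Exposure for an entity (most likely an app store) that keeps the app available."""
    return DISTRIBUTION_PENALTY_PER_USER * user_count


def portability_penalty(affected_user_count: int) -> int:
    """Exposure for an app that fails to let users export their data."""
    return PORTABILITY_PENALTY_PER_USER * affected_user_count


tiktok_users = 170_000_000
print(f"Distribution penalty:     ${distribution_penalty(tiktok_users):,}")  # $850,000,000,000
print(f"Data portability penalty: ${portability_penalty(tiktok_users):,}")   # $85,000,000,000
```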

Website operator not liable under Wiretap Act for allowing Meta to intercept visitor communications

Plaintiffs asserted that defendant healthcare organization inadequately protected the personal and health information of visitors to defendant’s website. In particular, plaintiffs alleged that unauthorized third parties – including Meta – could intercept user interactions through the use of tracking technologies such as the Meta Pixel and Conversions API. According to plaintiffs, these tools collected sensitive health information and sent it to Meta. Despite defendant’s privacy policy claiming to protect user privacy and information, plaintiffs alleged that using defendant’s website caused them to receive unsolicited advertisements on their Facebook accounts.
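
For readers unfamiliar with how such tracking tools work, the sketch below illustrates, in very simplified and hypothetical form, the kind of event forwarding plaintiffs describe: code associated with a website collects details about a visitor’s interaction and transmits them to a third party. The endpoint URL and payload fields are invented for illustration and are not Meta’s actual Pixel or Conversions API.

```python
# Hypothetical sketch of the kind of tracking alleged in the complaint.
# The endpoint and payload fields are invented for illustration; this is not
# Meta's actual Pixel or Conversions API code.
from typing import Optional

import requests

TRACKING_ENDPOINT = "https://ads.example.com/events"  # placeholder third-party endpoint


def forward_page_event(visitor_id: str, page_url: str, search_terms: Optional[str] = None) -> None:
    """Send details of a website visit to a third-party advertising service."""
    payload = {
        "visitor": visitor_id,   # identifier that can later be matched to an ad-platform account
        "url": page_url,         # e.g., a page about a particular medical condition
        "search": search_terms,  # e.g., terms the visitor typed into the site's search box
    }
    requests.post(TRACKING_ENDPOINT, json=payload, timeout=5)


# Plaintiffs' theory is that transmissions like this one send health-related
# browsing details to the ad platform, which then serves targeted ads.
```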

Plaintiffs sued, asserting a number of claims, including under the federal Electronic Communications Privacy Act (“ECPA”) and the California Invasion of Privacy Act (“CIPA”). Defendant moved to dismiss these claims. The court granted the motion.

To establish an ECPA claim, a plaintiff must demonstrate that defendant intentionally intercepted or attempted to intercept electronic communications using a device. CIPA similarly prohibits using electronic means to understand the contents of a communication without consent. Both laws have a “party exception” allowing a person who is a party to the communication to intercept it, provided the interception is not for a criminal or tortious purpose. In other words, there is an exception to the exception.

In this case, defendant argued it was a legitimate party to plaintiffs’ communications on its website, thus invoking the party exception. Plaintiffs countered that the exception should not apply due to defendant’s alleged tortious intent (making the information available to Facebook without disclosure to plaintiffs). But the court found that plaintiffs did not provide sufficient evidence that defendant’s actions were for an illegal or actionable purpose beyond the act of interception itself. Following Pena v. GameStop, Inc., 2023 WL 3170047 (S.D. Cal. April 27, 2023) (a plaintiff must plead sufficient facts to support an inference that the offender intercepted the communication for the purpose of a tortious or criminal act that is independent of the intentional act of recording or interception itself), the court concluded there was no separate tortious conduct involved in the interception and dismissed the claims.

B.K. v. Eisenhower Medical Center, 2024 WL 878100 (February 29, 2024)


Kids Online Safety Act: Quick Facts

What is KOSA?

Senators Blackburn and Blumenthal have introduced a new version of KOSA – the Kids Online Safety Act, which seeks to protect minors from online harms by requiring social media companies to prioritize children’s safety in product design and offer more robust parental control tools. Garnering bipartisan support with 62 Senate cosponsors in the wake of a significant hearing with Big Tech CEOs, the bill emphasizes accountability for tech companies, transparency in algorithms, and enhanced safety measures. The legislation has been refined following extensive discussions with various stakeholders, including tech companies, advocacy groups, and parents, to ensure its effectiveness and alignment with the goal of safeguarding young internet users from bullying, harassment, and other online risks.

Critics of the bill argue that KOSA, despite amendments, remains a threat to constitutional rights, effectively censoring online content and empowering state officials to target undesirable services and speech. See, e.g., the EFF’s blog post about the bill. They contend that KOSA mandates extensive filtering and blocking of legal speech across numerous websites, apps, and platforms, likely leading to age verification requirements. Concerns are raised about the potential harm to minors’ access to important information, particularly for groups such as LGBTQ+ youth, those seeking health and reproductive information, and activists. The modifications in the 2024 version, including the removal of the authority for state attorneys general to sue for non-compliance with the “duty of care” provision, are seen as insufficient to address the core issues related to free speech and censorship. Critics urge opposition to KOSA, highlighting its impact not just on minors but on all internet users, who could be subjected to a “second-class internet” due to restricted access to information.

What does the proposed law actually say? Below are some key facts about the contents of the legislation:

Who would be subject to the law:

The bill would place various obligations on “covered platforms”:

  • A “covered platform” encompasses online platforms, video games, messaging applications, and video streaming services accessible via the internet and used or likely to be used by minors.
  • Exclusions from the definition of “covered platform” include common carrier services, broadband internet access services, email services, specific teleconferencing or video conferencing services, and direct wireless messaging services not linked to an online platform.
  • Entities not for profit, educational institutions, libraries, news or sports news websites/apps with specific criteria, business-to-business software, and cloud services not functioning as online platforms are also excluded.
  • Virtual private networks and similar services that solely route internet traffic are not considered “covered platforms.”

Design and Implementation Requirements

  • Covered platforms are required to exercise reasonable care in designing and implementing features to prevent and mitigate harms to minors, including mental health disorders, addiction-like behaviors, physical violence, bullying, harassment, sexual exploitation, and certain types of harmful marketing.
  • The prevention of harm includes addressing issues such as anxiety, depression, eating disorders, substance abuse, suicidal behaviors, online bullying, sexual abuse, and the promotion of narcotics, tobacco, gambling, and alcohol to minors.
  • Despite these protections, platforms are not required to block minors from intentionally seeking content or from accessing resources aimed at preventing or mitigating these harms, including providing evidence-informed information and clinical resources.

Required Safeguards for Minors

  • Covered platforms must provide minors with safeguards to limit communication from others, restrict access to their personal data, control compulsive platform usage features, manage personalized recommendation systems, and protect their geolocation data. (One has to consider whether these would pass First Amendment scrutiny, particularly in light of recent decisions such as the one in NetChoice v. Yost).
  • Platforms are required to offer options for minors to delete their accounts and personal data, and limit their time on the platform, with the most protective privacy and safety settings enabled by default for minors.
  • Parental tools must be accessible and easy-to-use, allowing parents to manage their child’s privacy, account settings, and platform usage, including the ability to restrict purchases and view and limit time spent on the platform.
  • A reporting mechanism for harms to minors must be established, with platforms required to respond substantively within specified time frames, and immediate action required for reports involving imminent threats to minors’ safety.
  • Advertising of illegal products such as narcotics, tobacco, gambling, and alcohol to minors is strictly prohibited.
  • Safeguards and parental tools must be clear, accessible, and designed without “dark patterns” that could impair user autonomy or choice, with considerations for uninterrupted gameplay and offline device or account updates.

Disclosure Requirements

  • Before a minor registers or purchases on a platform, clear notices about data policies, safeguards for minors, and risks associated with certain features must be provided.
  • Platforms must inform parents about safeguards and parental tools for their children and obtain verifiable parental consent before a child uses the platform.
  • Platforms may consolidate notice and consent processes with existing obligations under the Children’s Online Privacy Protection Act (COPPA). (As under COPPA, a “child” under the act is one under 13 years of age.)
  • Platforms using personalized recommendation systems must clearly explain their operation, including data usage, and offer opt-out options for minors or their parents.
  • Advertising targeted at minors must be clearly labeled, explaining why ads are shown to them and distinguishing between content and commercial endorsements.
  • Platforms are required to provide accessible information to minors and parents about data policies and access to safeguards, ensuring resources are available in relevant languages.

Reporting Requirements

  • Covered platforms must annually publish a report, based on an independent audit, detailing the risks of harm to minors and the effectiveness of prevention and mitigation measures. (Providing these audit services is no doubt a good business opportunity for firms with such capabilities; unfortunately this will increase the cost of operating a covered platform.)
  • This requirement applies to platforms with over 10 million active monthly users in the U.S. that primarily host user-generated content and discussions, such as social media and virtual environments.
  • Reports must assess platform accessibility by minors, describe commercial interests related to minor usage, and provide data on minor users’ engagement, including time spent and content accessed.
  • The reports should identify foreseeable risks of harm to minors, evaluate the platform’s design features that could affect minor usage, and detail the personal data of minors collected or processed.
  • Platforms are required to describe safeguards and parental tools, interventions for potential harms, and plans for addressing identified risks and circumvention of safeguards.
  • Independent auditors conducting the risk assessment must consult with parents, youth experts, and consider research and industry best practices, ensuring privacy safeguards are in place for the reported data.

Keep an eye out to see if Congress passes this legislation in the spirit of “for the children.”

How do you sort out who owns a social media account used to promote a business?

Imagine this scenario – a well-known founder of a company sets up social media accounts that promote the company’s products. The accounts also occasionally display personal content (e.g., public happy birthday messages the founder sends to his spouse). The company fires the founder and then claims it owns the accounts. If the founder says he owns the accounts, how should a court resolve that dispute?

The answer to this question is helpful in resolving actual disputes such as this, and perhaps even more helpful in setting up documentation and procedures to prevent such a dispute in the first place.

In the recent case of In re: Vital Pharmaceutical, the court considered whether certain social media accounts that a company’s founder and CEO used were property of the company’s bankruptcy estate under Bankruptcy Code § 541. Though this was a bankruptcy case, the analysis is useful in other contexts to determine who owns a social media account. The court held that various social media accounts (including Twitter, Instagram and TikTok accounts the CEO used) belonged to the company.

In reaching this decision, the court recognized a “dearth” of legal guidance from other courts on how to determine account ownership when there is a dispute. It noted the case of In re CTLI, LLC, 528 B.R. 359 (Bankr. S.D. Tex. 2015) but expressed concern that this eight-year-old case did not adequately address the current state of social media account usage, particularly in light of the rise of influencer marketing.

The court fashioned a rather detailed test (a rough sketch in code follows the list below):

  • Are there any agreements or other documents that show who owns the account? Perhaps an employee handbook? If so, then whoever such documents say owns the account is presumed to be the owner of the account.
  • But what if there are no documents that show ownership, or such documents do not show definitively who owns the account? In those circumstances, one should consider:
    • Does one party have exclusive power to access the account?
    • Does that same party have the ability to prevent others from accessing the account?
    • Does the account enable that party to identify itself as having that exclusive power?
  • If one party establishes both documented ownership and control, that ends the inquiry. But if one or both of those things are not definitively shown, one can still consider whether use of the social media account tips the scales one way or the other:
    • What name is used for the account?
    • Is the account used to promote more than one company’s products?
    • To what extent is the account used to promote the user’s persona?
    • Would any required changes fundamentally change the nature of the account?

Companies utilizing social media accounts run by influential individuals with well-known personas should take guidance from this decision. Under this court’s test, creating documentation or evidence of account ownership would provide the clearest path forward. Absent such written indication, the parties should take care to establish clear protocols concerning account control and usage.

In re: Vital Pharmaceutical, 2023 WL 4048979 (Bankr. S.D. Fla., June 16, 2023)

Does tagging the wrong account in an Instagram post show actual confusion in trademark litigation?

In a recent trademark infringement case, the court considered whether Instagram users tagging photos of one company’s product with another company’s account was evidence of actual confusion. The court found that it was not.

Plaintiff makes a premium tequila sold in bottles and defendant makes an inexpensive tequila-soda product sold in cans. Plaintiff sued defendant for trademark infringement and sought a preliminary injunction. To support its assertion that it was likely to succeed on the merits of the case, plaintiff argued there was actual confusion among the consuming public. For example, on Instagram, at least 30 people had tagged photos of plaintiff’s products with defendant’s account.

The court found that in these circumstances, particularly where a marketing survey also showed that fewer than 10% of people were confused by defendant’s mark, the incorrect tagging did not show actual confusion.

Though the bar for showing actual confusion is low, the court noted that a showing of confusion requires more than a “fleeting mix-up of names” and that confusion must be caused by the trademark used and must “sway” consumer purchase.

In this case, the court found that plaintiff’s evidence regarding mistaken Instagram tags did not establish a likelihood of trademark confusion that would result in purchase decisions based on the mistaken belief that defendant’s tequila-soda product was affiliated with plaintiff. At best, in the court’s view, plaintiff’s evidence demonstrated a “fleeting mix-up of names,” which was not evidence of actual confusion.

The court likened this case to the recent case of Reply All Corp. v. Gimlet Media, LLC, 843 F. App’x 392 (2d Cir. 2021), wherein “instances of general mistake or inadvertence—without more—[did] not suggest that those potential consumers in any way confused [plaintiff’s] and [defendant’s] products, let alone that there was confusion that could lead to a diversion of sales, damage to goodwill, or loss of control over reputation.”

Casa Tradición S.A. de C.V. v. Casa Azul Spirits, LLC, 2022 WL 17811396 (S.D. Texas, December 19, 2022)
