Why is social media so much better in 2026?
Does the DMCA safe harbor cover infringing images in an email?

Plaintiff photographer sued Pinterest for copyright infringement, alleging Pinterest displayed his and other photographers’ copyrighted images in notifications sent outside of the Pinterest website. Pinterest moved for summary judgment, arguing it was protected under the safe harbor provisions of Section 512(c) of the Digital Millennium Copyright Act (“DMCA”). The court granted Pinterest’s motion and dismissed the case.
Pinterest is a familiar and massive social media platform, where individuals upload and share image-based “Pins” that function as visual bookmarks. The platform displays Pins in personalized, algorithmically curated feeds that contain advertisements labeled as “promoted.” Pinterest also delivers Pins through notifications such as emails, in-app alerts, and push notifications, which contain hyperlinks that trigger display of images hosted on its servers. One such notification that plaintiff received included his copyrighted photograph, prompting him to file suit six days later.
The court found that Pinterest’s actions fell within the DMCA’s Section 512(c) safe harbor, which shields service providers from copyright liability for content stored at the direction of users. Because Pinterest raised the safe harbor as an affirmative defense, it bore the burden of proving every element, and the court concluded it had met both the statutory threshold requirements and all of the required conditions.
Statutory threshold requirements under the DMCA
To qualify for the DMCA safe harbor, Pinterest had to meet several threshold statutory requirements found in Sections 512(c) and (i): it had to be a service provider, maintain a designated agent, implement a repeat infringer policy, and accommodate standard technical measures. The court found that Pinterest satisfied all four. As “one of the largest social media platforms in the world,” it qualified as a service provider as defined by the statute. The evidence showed that Pinterest maintained a registered agent with the Copyright Office and that it enforced a strike-based policy for repeat infringers. And the court found that Pinterest did not interfere with any recognized standard technical measures that plaintiff implemented with his works. (Plaintiff had asserted that he embedded certain metadata in his photographs, but he did not argue that this metadata qualified as a “standard technical measure” under the DMCA, nor did he claim that Pinterest interfered with it — in fact, he alleged that Pinterest preserved the metadata on its servers.)
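To make the metadata point concrete, here is a minimal sketch, assuming the Pillow imaging library, of how a photographer might embed a copyright notice in a photo’s EXIF data. The file names and notice text are hypothetical; the court did not describe the specific metadata plaintiff used.

```python
# A minimal sketch, assuming the Pillow library (pip install Pillow);
# file names and the notice text are hypothetical examples.
from PIL import Image

EXIF_COPYRIGHT = 0x8298  # the standard EXIF "Copyright" tag

# Embed a copyright notice in the photo's EXIF metadata
img = Image.open("photo.jpg")
exif = img.getexif()
exif[EXIF_COPYRIGHT] = "(c) 2020 Jane Photographer. All rights reserved."
img.save("photo_tagged.jpg", exif=exif)

# Read the notice back, as a platform that preserves metadata could
print(Image.open("photo_tagged.jpg").getexif().get(EXIF_COPYRIGHT))
```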
How Pinterest met the required conditions
After finding that Pinterest satisfied the DMCA’s threshold requirements, the court turned to whether Pinterest’s conduct of sending copyright-protected images in off-platform notifications was protected under Section 512(c). To qualify, Pinterest had to show three things:
- the alleged infringement occurred due to user-directed storage;
- Pinterest lacked actual or red flag knowledge of the infringement; and
- Pinterest either had no right and ability to control the activity or did not receive a direct financial benefit from it.
The court evaluated each element in turn.
By reason of storage at the direction of a user
The court concluded that Pinterest met the first requirement for DMCA safe harbor protection: the alleged infringement occurred “by reason of the storage at the direction of a user.” It emphasized that the image at issue was not embedded in the notification itself but was instead hosted on Pinterest’s servers and accessed via a hyperlink contained in the notification. When a user opened the message, the user’s email client or app sent a request to Pinterest’s server to retrieve and display the image, just as it would when accessing content directly through the platform. Because this method merely facilitated access to user-uploaded content without altering it, the court found that the display fell within the statutory language.
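The distinction the court drew is easier to see in code. Here is a minimal sketch, using Python’s standard email library, of a notification whose image is referenced by URL rather than attached; the URL and message text are hypothetical stand-ins, not Pinterest’s actual notification format.

```python
# A minimal sketch using Python's standard email library; the URL and
# message text are hypothetical, not Pinterest's actual notification format.
from email.mime.text import MIMEText

html_body = """\
<html>
  <body>
    <p>A Pin we think you'll like:</p>
    <!-- No image bytes travel in this message; only the URL does.
         The recipient's email client requests the image from the
         platform's server when it renders the message. -->
    <img src="https://example.com/pins/12345/photo.jpg" alt="Pin">
  </body>
</html>
"""

notification = MIMEText(html_body, "html")
notification["Subject"] = "Ideas for your boards"
print(notification.as_string())
```

Because the message carries only a reference, the image that appears on the recipient’s screen is served from the same user-directed storage that Section 512(c) covers, which is why the court treated the off-platform display no differently from display on the platform itself.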
No knowledge of infringement
The court found that Pinterest satisfied the second requirement for DMCA safe harbor protection by showing it lacked actual or red flag knowledge of the alleged infringement. Critically, plaintiff never sent Pinterest a DMCA takedown notice or otherwise identified the allegedly infringing material before filing suit. The DMCA operates on a notice-and-takedown system: platforms are not required to proactively monitor user content but must respond once they receive proper notice. Because plaintiff gave no such notice and offered no evidence that Pinterest otherwise knew about the specific image at issue, the court concluded there was no genuine dispute as to Pinterest’s lack of knowledge.
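For reference, here is a minimal sketch, as a Python dataclass, of the elements Section 512(c)(3) requires a takedown notice to contain; the field names are our own shorthand for the statutory language, not an official schema.

```python
# A minimal sketch of the elements a Section 512(c)(3) takedown notice
# must contain; field names are shorthand, not an official schema.
from dataclasses import dataclass

@dataclass
class TakedownNotice:
    signature: str             # physical or electronic signature of the owner or authorized agent
    work_identified: str       # the copyrighted work claimed to have been infringed
    material_location: str     # the allegedly infringing material and where to find it (e.g., a URL)
    contact_info: str          # address, telephone number, or email of the complaining party
    good_faith_statement: str  # belief the use is not authorized by the owner, its agent, or the law
    accuracy_statement: str    # accuracy affirmed, and authority sworn under penalty of perjury

# A compliant notice sent to the platform's designated agent is what
# triggers the duty to remove content; no such notice was sent here.
```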
Control and financial benefit
The court found that Pinterest met the third and final requirement for DMCA safe harbor by showing it neither had the right and ability to control the alleged infringement nor received a financial benefit directly attributable to it. While Pinterest used algorithms to curate content and monetize its platform generally, the court held that this did not amount to the kind of “substantial influence” over user activity that would disqualify it under the DMCA. Pinterest did not direct users to upload specific content, nor did it participate in any purposeful conduct related to the display of plaintiff’s photo.
The court also rejected plaintiff’s claim that Pinterest profited directly from the infringement. Pinterest presented evidence that its notifications did not contain advertisements and that it earned no revenue specifically tied to the image in question. Plaintiff’s counter-evidence failed to show otherwise. Even if ads had appeared near the image, the law requires a direct connection between the infringing display and revenue, which was absent here. Therefore, Pinterest satisfied this final element of the DMCA safe harbor defense.
Harrington et al. v. Pinterest, Inc., No. 20-CV-5290, 2026 WL 25880 (N.D. Cal. January 5, 2026)
Ninth Circuit declines to impose broad injunction against California’s social media law for minors

NetChoice, an internet trade association representing companies such as Google, Meta, and X, sued the State of California over its Protecting Our Kids from Social Media Addiction Act, claiming that the law violates the First Amendment. The Act restricts how social media platforms interact with minors, particularly limiting access to algorithmic feeds, requiring certain default settings, and mandating age-verification procedures.
Plaintiff asked the court to block enforcement of several provisions of the law through a preliminary injunction, focusing on its claims that aspects of the Act unlawfully restrict speech and are unconstitutionally vague. The lower court declined to issue the injunction. Plaintiff sought review with the Ninth Circuit.
On appeal, the Ninth Circuit largely affirmed the district court’s refusal to issue a broad injunction but ruled that the provision of the law requiring platforms to hide like and share counts by default for minors is unconstitutional. It reversed the lower court on that point and instructed it to modify its injunction to prevent enforcement of that specific provision.
The court ruled this way because it found the like-count requirement to be content-based and therefore subject to strict scrutiny under the First Amendment. The government failed to show that hiding like counts was the least restrictive means to achieve its goal of protecting minors’ mental health. Other provisions, including those governing private-mode settings and age verification, either survived scrutiny or were deemed unripe for review.
NetChoice LLC v. Bonta, — F.4th —, 2025 WL 2600007 (9th Cir. September 9, 2025)
TikTok and Meta terms granted other users remix rights

Plaintiff sued TikTok and Meta after other users on those platforms incorporated clips from her video into their own posts, allegedly without her permission. She claimed this was copyright infringement and also alleged that TikTok failed to protect her from harassment by users in the comments of her live videos. Plaintiff filed the lawsuit on her own, without a lawyer.
Plaintiff asked the court to hold TikTok and Meta liable for copyright infringement and to consider tort claims against TikTok for harassment. But both companies responded by asking the court to dismiss the case. They pointed to the user agreements plaintiff had accepted when she signed up. Those terms gave the platforms and their users broad rights to use, modify, and distribute any content she uploaded. TikTok also invoked immunity under 47 U.S.C. § 230, a provision of federal law protecting platforms from liability for user-generated content.
The court agreed with the platforms. It found that plaintiff had granted TikTok and Meta valid licenses to use her video, so there could be no copyright violation. The court also ruled that it had no authority to hear the tort claims because plaintiff had not shown that the court had jurisdiction over those parts of the case. The court rejected plaintiff’s arguments that she did not fully understand the contracts or that the agreements were unfair. On appeal, the Tenth Circuit upheld the decision, finding no clear error in how the lower court handled the case and ruling that plaintiff had waived her right to challenge the licensing issue by not objecting to it specifically.
In the end, the court dismissed all claims against both companies. The court also declined to take up any new claims plaintiff tried to raise during the appeal, saying she had not brought those up earlier and did not support them with enough detail.
Three reasons why this case matters:
- It reinforces how powerful and far-reaching social media terms of service can be in protecting platforms from copyright claims.
- It shows the importance of making specific objections and arguments in court—especially during appeals.
- It highlights how courts apply procedural rules strictly, even when someone is representing themselves without a lawyer.
Sethunya v. TikTok, 2025 WL 1144776 (10th Cir. April 18, 2025)
Federal court says it was OK to fire CEO who criticized boy wearing prom dress

The fired CEO of a telehealth company sued his former employer’s customer, alleging that the customer wrongfully pressured his employer to fire him after a video of him confronting a boy wearing a prom dress went viral. The lower court granted summary judgment and dismissed plaintiff’s tortious interference claims. Plaintiff sought review with the Sixth Circuit, which affirmed the summary judgment in favor of the customer.
What happened
In April 2021, plaintiff encountered teenagers taking prom photos at a Tennessee hotel. During this encounter, plaintiff told a teenage boy wearing a red prom dress that he “looked like an idiot.” Another teen recorded the interaction and posted it online, where it quickly went viral. Actress Kathy Griffin shared the video with her two million Twitter followers, identifying plaintiff.
The video created significant problems for plaintiff’s former employer. The company’s board of directors expressed concern about how plaintiff’s behavior reflected on the company.
Defendant, the former employer’s largest customer, soon received many messages expressing disappointment about its business relationship with a company whose CEO behaved this way. Defendant arranged a call with the company to discuss the situation.
According to plaintiff, defendant threatened to end its contract with the company if the company did not fire plaintiff. Shortly after this call, the company’s directors voted to terminate plaintiff’s employment. The next day, defendant publicly stated that the company “stepped up to do the right thing” by firing plaintiff.
The lawsuit
Plaintiff sued defendant (but not his former employer) for tortious interference with his employment contract and tortious interference with his employment relationship under Tennessee law. Defendant asked the court for summary judgment, arguing that plaintiff could not prove his claims even if all facts were viewed in his favor.
The Court’s decision
The Sixth Circuit affirmed the district court’s decision to grant summary judgment to defendant, rejecting plaintiff’s claims for two main reasons.
First, plaintiff’s tortious interference with contract claim failed because the company did not breach any contract when it fired him. Plaintiff’s employment contract allowed the company to fire him with or without cause. Since the company had the legal right to terminate plaintiff’s employment, it could not have breached the contract by doing so. Under Tennessee law, a claim for tortious interference with a contract requires an actual breach of contract.
Second, plaintiff’s tortious interference with employment relationship claim failed because he could not show that defendant acted with an improper motive or used improper means. The court found no evidence that defendant acted with the primary purpose of injuring plaintiff. Instead, the record showed defendant sought to protect its business from public criticism. Additionally, defendant’s contract with the company gave it the right to stop doing business with the company “for any reason or no reason,” so, in the Court’s mind, threatening to exercise this right was not improper.
Three reasons why this case matters:
- It clarifies that claims for tortious interference with contracts require an actual breach of contract, which does not occur when an employer exercises its contractual right to terminate an at-will employee.
- It demonstrates that businesses can take steps to protect their reputation without facing liability for tortious interference, as long as they act within their contractual rights.
- It illustrates how viral videos capturing personal conduct can have significant professional consequences, especially for people in leadership positions.
Johnson v. University Hospitals Health System, Inc., 2025 WL 637442 (6th Cir. February 27, 2025)
Section 230 protected Meta from Huckabee cannabis lawsuit

Mike Huckabee, the former governor of Arkansas, sued Meta Platforms, Inc., the parent company of Facebook, for using his name and likeness without his permission in advertisements for CBD products. Huckabee argued that these ads falsely claimed he endorsed the products and made misleading statements about his personal health. He asked the court to hold Meta accountable under various legal theories, including violations of his rights of publicity and privacy.
Plaintiff alleged that defendant approved and maintained advertisements that misappropriated plaintiff’s name, image, and likeness. Plaintiff further claimed that the ads placed plaintiff in a false light by attributing statements and endorsements to him that he never made. Additionally, plaintiff argued that defendant had been unjustly enriched by profiting from these misleading ads. Defendant, however, sought to dismiss the claims, relying on the Communications Decency Act, 47 U.S.C. § 230, which grants immunity to platforms for third-party content.
The court granted Meta’s motion to dismiss. It determined that Section 230 shielded defendant from liability for the third-party content at issue. The court also noted that plaintiff’s allegations lacked the specificity needed to overcome the protections provided by Section 230. Furthermore, the court emphasized that federal law such as Section 230 preempts conflicting state laws, including Arkansas’s Frank Broyles Publicity Protection Act.
Three reasons why this case matters:
- Defines Section 230 Protections: It reaffirms the broad immunity tech companies enjoy under Section 230, even in cases involving misuse of publicity rights.
- Digital Rights and Privacy: The case highlights the tension between protecting individual rights and maintaining the free flow of online content.
- Challenges for State Laws: It shows how federal law can preempt state-specific protections, leaving individuals with limited recourse.
Huckabee v. Meta Platforms, Inc., 2024 WL 4817657 (D. Del. November 18, 2024)
Ex-wife held in contempt for posting on TikTok about her ex-husband

Ex-husband sought to have his ex-wife held in contempt for violating an order that the divorce court had entered. In 2022, the court had ordered the ex-wife to take down social media posts that could make the ex-husband identifiable.
The ex-husband alleged that the ex-wife continued to post content on her TikTok account that made him identifiable as her ex-husband. The ex-wife argued that she did not name the ex-husband directly and that her social media was part of her work as a trauma therapist. But the family court found that the ex-wife’s posts violated the previous order because they made the ex-husband identifiable, and it also noted that the children could be heard in the background of some videos. As a result, the court held the ex-wife in contempt and ordered her to pay $1,800 of the ex-husband’s attorney fees.
Ex-wife appealed the contempt ruling, arguing that ex-husband did not present enough evidence to support his claim, and that she had not violated the order. She also disputed the attorney fees. On appeal, the court affirmed the contempt finding, agreeing that her actions violated the order, but vacated the award of attorney fees due to insufficient evidence of the amount.
Three reasons why this case matters:
- It illustrates the legal consequences of violating court orders in family law cases.
- It emphasizes the importance of clarity in social media use during ongoing family disputes.
- It highlights the need for clear evidence when courts are asked to impose financial sanctions such as attorney fees.
Kimmel v. Kimmel, 2024 WL 4521373 (Ky. Ct. App. October 18, 2024)
X gets Ninth Circuit win in case over California’s content moderation law

X sued the California attorney general, challenging Assembly Bill 587 (AB 587) – a law that required large social media companies to submit semiannual reports detailing their terms of service and content moderation policies, as well as their practices for handling specific types of content such as hate speech and misinformation. X claimed that this law violated the First Amendment, was preempted by the federal Communications Decency Act, and infringed upon the Dormant Commerce Clause.
Plaintiff sought a preliminary injunction to prevent the government from enforcing AB 587 while the case was pending. Specifically, plaintiff argued that AB 587’s requirement to disclose how it defined and regulated certain categories of content compelled speech about contentious issues, infringing on its First Amendment rights.
The district court denied plaintiff’s motion for a preliminary injunction. It found that the reporting requirements were commercial in nature and that they survived under a lower level of scrutiny applied to commercial speech regulations. Plaintiff sought review with the Ninth Circuit.
On review, the Ninth Circuit reversed the district court’s denial of the preliminary injunction. The court found that the reporting requirements compelled non-commercial speech and were thus subject to strict scrutiny under the First Amendment—a much higher standard. Under strict scrutiny, a law is presumed unconstitutional unless the government can show it is narrowly tailored to serve a compelling state interest. The court reasoned that plaintiff was likely to succeed on its claim that AB 587 violated the First Amendment because the law was not narrowly tailored. Less restrictive alternatives could have achieved the government’s goal of promoting transparency in social media content moderation without compelling companies to disclose their opinions on sensitive and contentious categories of speech.
The appellate court held that plaintiff would likely suffer irreparable harm if the law was enforced, as the compelled speech would infringe upon the platform’s First Amendment rights. Furthermore, the court found that the balance of equities and public interest supported granting the preliminary injunction because preventing potential constitutional violations was deemed more important than the government’s interest in transparency. Therefore, the court reversed and remanded the case, instructing the district court to enter a preliminary injunction consistent with its opinion.
X Corp. v. Bonta, 2024 WL 4033063 (9th Cir. September 4, 2024)
Supreme Court weighs in on Texas and Florida social media laws

In a significant case involving the intersection of technology and constitutional law, NetChoice LLC sued Florida and Texas, challenging their social media content-moderation laws. Both states had enacted statutes regulating how platforms such as Facebook and YouTube moderate, organize, and display user-generated content. NetChoice argued that the laws violated the First Amendment by interfering with the platforms’ editorial discretion. It asked the Court to invalidate these laws as unconstitutional.
The Supreme Court reviewed conflicting rulings from two lower courts. The Eleventh Circuit had upheld a preliminary injunction against Florida’s law, finding it likely violated the First Amendment. And the Fifth Circuit had reversed an injunction against the Texas law, reasoning that content moderation did not qualify as protected speech. However, the Supreme Court vacated both decisions, directing the lower courts to reconsider the challenges with a more comprehensive analysis.
The Court explained that content moderation—decisions about which posts to display, prioritize, or suppress—constitutes expressive activity akin to editorial decisions made by newspapers. The Texas and Florida laws, by restricting this activity, directly implicated First Amendment protections. Additionally, the Court noted that these cases involved facial challenges, requiring an evaluation of whether a law’s unconstitutional applications outweigh its constitutional ones. Neither lower court had sufficiently analyzed the laws in this manner.
The Court also addressed a key issue in the Texas law: its prohibition against platforms censoring content based on viewpoint. Texas justified the law as ensuring “viewpoint neutrality,” but the Court found this rationale problematic. Forcing platforms to carry speech they deem objectionable—such as hate speech or misinformation—would alter their expressive choices and violate their First Amendment rights.
Three reasons why this case matters:
- Clarifies Free Speech Rights in the Digital Age: The case reinforces that social media platforms have editorial rights similar to traditional media, influencing how future laws may regulate online speech.
- Impacts State-Level Regulation: The ruling limits states’ ability to impose viewpoint neutrality mandates on private platforms, shaping the balance of power between governments and tech companies.
- Sets a Standard for Facial Challenges: By emphasizing the need to weigh a law’s unconstitutional and constitutional applications, the decision provides guidance for courts evaluating similar cases.
Moody v. NetChoice, LLC, 144 S.Ct. 2383 (July 1, 2024)
