Why is social media so much better in 2026?
Does the DMCA safe harbor cover infringing images in an email?

Plaintiff photographer sued Pinterest for copyright infringement, alleging Pinterest displayed his and other photographers’ copyrighted images in notifications sent outside of the Pinterest website. Pinterest moved for summary judgment, arguing it was protected under the safe harbor provisions of Section 512(c) of the Digital Millennium Copyright Act (“DMCA”). The court granted Pinterest’s motion and dismissed the case.
Pinterest is a familiar and massive social media platform where individuals upload and share image-based “Pins” that function as visual bookmarks. The platform displays Pins in personalized, algorithmically curated feeds that contain advertisements labeled as “promoted.” Pinterest also delivers Pins through notifications such as emails, in-app alerts, and push notifications, which contain hyperlinks that trigger the display of images hosted on its servers. One such notification that plaintiff received included his copyrighted photograph, prompting him to file suit six days later.
The court found that Pinterest’s actions fell within the DMCA’s Section 512(c) safe harbor, which shields service providers from copyright liability for content stored at the direction of users. Because Pinterest raised this as an affirmative defense, it had the burden to prove every element of the safe harbor criteria, and the court concluded it had met both the statutory threshold and all required conditions.
Statutory threshold requirements under the DMCA
To qualify for the DMCA safe harbor, Pinterest had to meet several threshold statutory requirements that are found in Sections 512(c) and (i): it had to be a service provider, maintain a designated agent, implement a repeat infringer policy, and accommodate standard technical measures. The court found that Pinterest satisfied all four. As “one of the largest social media platforms in the world,” it operated a qualifying online platform as defined by the statute. The evidence showed that Pinterest maintained a registered agent with the Copyright Office and that it enforced a strike-based policy for repeat infringers. And the court found that Pinterest did not interfere with any recognized standard technical measures that plaintiff implemented with his works. (Plaintiff had asserted that he embedded certain metadata in his photographs, but he did not argue that this metadata qualified as a “standard technical measure” under the DMCA, nor did he claim that Pinterest interfered with it — in fact, he alleged that Pinterest preserved the metadata on its servers.)
How Pinterest met the required conditions
After finding that Pinterest satisfied the DMCA’s threshold requirements, the court turned to whether Pinterest’s conduct in sending copyright-protected images in off-platform notifications was protected under Section 512(c). To qualify, Pinterest had to show three things:
- the alleged infringement occurred due to user-directed storage;
- Pinterest lacked actual or red flag knowledge of the infringement; and
- Pinterest either had no right and ability to control the activity or did not receive a direct financial benefit from it.
The court evaluated each element in turn.
By reason of storage at the direction of a user
The court concluded that Pinterest met the first requirement for DMCA safe harbor protection: the alleged infringement occurred “by reason of the storage at the direction of a user.” It emphasized that the image at issue was not embedded in the notification itself but was instead hosted on Pinterest’s servers and accessed via a hyperlink contained in the notification. When a user opened the message, their software triggered a request to Pinterest’s server to retrieve and display the image, just as it would when accessing content directly through the platform. Because this method merely facilitated access to user-uploaded content without altering it, the court found the display was within the statutory definition.
No knowledge of infringement
The court found that Pinterest satisfied the second requirement for DMCA safe harbor protection by showing it lacked actual or red flag knowledge of the alleged infringement. Critically, plaintiff never sent Pinterest a DMCA takedown notice or otherwise identified the allegedly infringing material before filing suit. The DMCA operates on a notice-and-takedown system: platforms are not required to proactively monitor user content but must respond once they receive proper notice. Because plaintiff gave no such notice and offered no evidence that Pinterest otherwise knew about the specific image at issue, the court concluded there was no genuine dispute as to Pinterest’s lack of knowledge.
Control and financial benefit
The court found that Pinterest met the third and final requirement for DMCA safe harbor by showing it neither had the right and ability to control the alleged infringement nor received a financial benefit directly attributable to it. While Pinterest used algorithms to curate content and monetize its platform generally, the court held that this did not amount to the kind of “substantial influence” over user activity that would disqualify it under the DMCA. Pinterest did not direct users to upload specific content, nor did it participate in any purposeful conduct related to the display of plaintiff’s photo.
The court also rejected plaintiff’s claim that Pinterest profited directly from the infringement. Pinterest presented evidence that its notifications did not contain advertisements and that it earned no revenue specifically tied to the image in question. Plaintiff’s counter-evidence failed to show otherwise. Even if ads had appeared near the image, the law requires a direct connection between the infringing display and revenue, which was absent here. Therefore, Pinterest satisfied this final element of the DMCA safe harbor defense.
Harrington et al. v. Pinterest, Inc., No. 20-CV-5290, 2026 WL 25880 (N.D. Cal. Jan. 5, 2026)
Ninth Circuit declines to impose broad injunction against California’s social media law for minors

NetChoice, an internet trade association representing companies such as Google, Meta, and X, sued the State of California over its Protecting Our Kids from Social Media Addiction Act, claiming that the law violates the First Amendment. The Act restricts how social media platforms interact with minors, particularly limiting access to algorithmic feeds, requiring certain default settings, and mandating age-verification procedures.
Plaintiff asked the court to block enforcement of several provisions of the law through a preliminary injunction, focusing on its claims that aspects of the Act unlawfully restrict speech and are unconstitutionally vague. The lower court declined to issue the injunction. Plaintiff sought review with the Ninth Circuit.
On appeal, the Ninth Circuit largely affirmed the district court’s refusal to issue a broad injunction but ruled that the provision of the law requiring platforms to hide like and share counts by default for minors is unconstitutional. It reversed the lower court on that point and instructed it to modify its injunction to prevent enforcement of that specific provision.
The court ruled this way because it found the like-count requirement to be content-based and therefore subject to strict scrutiny under the First Amendment. The government failed to show that hiding like counts was the least restrictive means to achieve its goal of protecting minors’ mental health. Other provisions, including those governing private-mode settings and age verification, either survived scrutiny or were deemed unripe for review.
NetChoice LLC v. Bonta, — F.4th —, 2025 WL 2600007 (9th Cir. Sept. 9, 2025)
TikTok and Meta terms granted other users remix rights

Plaintiff sued TikTok and Meta after other users on those platforms incorporated clips from her video into their own posts, allegedly without her permission. She claimed this was copyright infringement and also alleged that TikTok failed to protect her from harassment by users in the comments of her live videos. Plaintiff filed the lawsuit on her own, without a lawyer.
Plaintiff asked the court to hold TikTok and Meta liable for copyright infringement and to consider tort claims against TikTok for harassment. But both companies responded by asking the court to dismiss the case. They pointed to the user agreements Plaintiff had accepted when she signed up. Those terms gave the platforms and their users broad rights to use, modify, and distribute any content she uploaded. TikTok also invoked immunity under 47 U.S.C. § 230, a provision in federal law protecting platforms from liability for user-generated content.
The court agreed with the platforms. It found that plaintiff had granted TikTok and Meta valid licenses to use her video, so there could be no copyright violation. The court also ruled that it had no authority to hear the tort claims because plaintiff had not shown that the court had jurisdiction over those parts of the case. The court rejected plaintiff’s arguments that she did not fully understand the contracts or that the agreements were unfair. On appeal, the Tenth Circuit upheld the decision, finding no clear error in how the lower court handled the case and ruling that plaintiff had waived her right to challenge the licensing issue by not objecting to it specifically.
In the end, the court dismissed all claims against both companies. The court also declined to take up any new claims plaintiff tried to raise during the appeal, saying she had not brought those up earlier and did not support them with enough detail.
Three reasons why this case matters:
- It reinforces how powerful and far-reaching social media terms of service can be in protecting platforms from copyright claims.
- It shows the importance of making specific objections and arguments in court—especially during appeals.
- It highlights how courts apply procedural rules strictly, even when someone is representing themselves without a lawyer.
Sethunya v. TikTok, 2025 WL 1144776 (10th Cir. Apr. 18, 2025)
Did Facebook ads targeted at people under 50 unlawfully discriminate on the basis of age?

Several property management companies in the Washington, D.C. area advertised rental properties on Facebook, but only to users aged 50 and younger. Plaintiff, a 55-year-old woman, never saw these ads while searching for housing. She sued, claiming the companies discriminated against her based on age.
Plaintiff argued that by excluding users over 50 from seeing the ads, the companies deprived her of housing opportunities and information. She asked the court for a declaratory judgment, a permanent injunction, and damages. The district court dismissed the case, ruling that she lacked standing because she had not suffered a concrete injury. She sought review with the Fourth Circuit.
The appellate court upheld the dismissal. The court explained that to have standing, a plaintiff must show an injury that is real, personal, and specific. Plaintiff’s claim failed because she did not allege that she had directly been denied housing or misled by the defendants. She also did not plausibly allege that, even without age targeting, she would have seen the ads; Facebook’s algorithm determined ad distribution based on multiple factors, not just age. The court also rejected her argument that she suffered stigma from the companies’ ad practices, finding that she had not been personally affected in a way that would give her standing to sue.
Three reasons why this case matters:
- Simply being part of a group that may have been treated unfairly is not enough; a plaintiff must show personal harm.
- Businesses using demographic filters in online ads may be shielded from lawsuits unless a plaintiff can prove direct harm.
- The ruling highlights that courts do not recognize speculative or abstract injuries as grounds for a lawsuit.
Opiotennione v. Bozzuto Mgmt. Co., 2025 WL 678636 (4th Cir. Mar. 4, 2025)
YouTube prevails in the Second Circuit over content removal breach of contract claim

Plaintiff sued defendants YouTube and Google for breach of contract, claiming that defendants violated their Terms of Service by removing and restricting plaintiff’s uploaded content without prior notice or cause. Plaintiff argued that defendants’ actions went against their agreement, which governed plaintiff’s use of the platform and the operation of plaintiff’s channels.
Defendants moved to dismiss plaintiff’s breach of contract claim, which the district court treated as a motion for summary judgment. The court granted the motion, finding that the Terms of Service clearly allowed defendants to remove content at their discretion. Plaintiff sought review with the Second Circuit. On appeal, the court affirmed the dismissal.
The appellate court noted that defendants’ Terms of Service explicitly reserved the right to take down content that violated their policies or posed potential harm. The agreement also stated that defendants would notify users after content was removed but did not require prior notice or a detailed explanation before taking action. Plaintiff received an email explaining that defendants removed content for serious or repeated violations of their Community Guidelines, which the court found sufficient under the contract’s terms.
On appeal, the pro se plaintiff did not present specific arguments against the district court’s decision. Instead, plaintiff repeated claims from the original complaint and attempted to introduce new allegations, including violations of intellectual property rights and his right to free speech. The appellate court declined to consider these new arguments because they were not part of the original case. Given the unambiguous contract terms, the court ruled that defendants had not breached the agreement and upheld the lower court’s ruling in favor of defendants.
Three reasons why this case matters:
- Clarifies platform control – The ruling reinforces that social media companies have broad discretion under their Terms of Service to remove user content.
- Limits user challenges – It highlights the difficulty users face when challenging content moderation decisions through breach-of-contract claims.
- Confirms contract enforcement – The case affirms that courts will uphold clear contractual terms, even if users feel the enforcement is unfair.
Qian v. YouTube, LLC, 2025 WL 582785 (2d Cir. Feb. 24, 2025)
Section 230 protected Meta from Huckabee cannabis lawsuit

Mike Huckabee, the former governor of Arkansas, sued Meta Platforms, Inc., the parent company of Facebook, for using his name and likeness without his permission in advertisements for CBD products. Huckabee argued that these ads falsely claimed he endorsed the products and made misleading statements about his personal health. He asked the court to hold Meta accountable under various legal theories, including violation of his publicity rights and privacy.
Plaintiff alleged that defendant approved and maintained advertisements that misappropriated plaintiff’s name, image, and likeness. Plaintiff further claimed that the ads placed plaintiff in a false light by attributing statements and endorsements to him that he never made. Additionally, plaintiff argued that defendant had been unjustly enriched by profiting from these misleading ads. Defendant, however, sought to dismiss the claims, relying on the Communications Decency Act at 47 U.S.C. § 230, which grants immunity to platforms for third-party content.
The court granted Meta’s motion to dismiss. It determined that Section 230 shielded defendant from liability for the third-party content at issue. The court also noted that plaintiff’s allegations lacked the specificity needed to overcome the protections provided by Section 230. Furthermore, the court emphasized that federal law, such as Section 230, preempts conflicting state laws, such as Arkansas’s Frank Broyles Publicity Protection Act.
Three reasons why this case matters:
- Defines Section 230 Protections: It reaffirms the broad immunity tech companies enjoy under Section 230, even in cases involving misuse of publicity rights.
- Digital Rights and Privacy: The case highlights the tension between protecting individual rights and maintaining the free flow of online content.
- Challenges for State Laws: It shows how federal law can preempt state-specific protections, leaving individuals with limited recourse.
Huckabee v. Meta Platforms, Inc., 2024 WL 4817657 (D. Del. Nov. 18, 2024)
X can claim trespass to chattel in data scraping case

X Corp. sued Bright Data Ltd. for unauthorized access to X’s servers and the scraping and resale of data from X’s platform. Plaintiff sought the court’s permission to file a second amended complaint after the court dismissed its prior complaint. The court granted plaintiff’s motion in part and denied it in part, allowing some claims to proceed while dismissing others.
Plaintiff alleged that defendant’s scraping activities caused significant harm to its systems. According to plaintiff, defendant’s automated scraping overwhelmed servers, causing system glitches and forcing plaintiff to purchase additional server capacity. Plaintiff further alleged that defendant used deceptive techniques, including fake accounts and rotating IP addresses, to bypass technical barriers and access non-public data. Plaintiff claimed that these actions violated its Terms of Service, interfered with its contracts, and constituted unfair and fraudulent business practices. Plaintiff also introduced new claims under federal and state anti-hacking laws, including the Digital Millennium Copyright Act and the Computer Fraud and Abuse Act.
The court agreed with plaintiff on several points. It allowed claims related to server impairment, including trespass to chattels and breach of contract, to move forward. The court found that plaintiff’s revised complaint provided sufficient details to plausibly allege harm to its servers and unauthorized access to its systems.
However, the court dismissed claims concerning the scraping and resale of data, ruling that they were preempted by the Copyright Act. Plaintiff had argued that it could prevent defendant from copying user-generated or non-copyrightable data through state-law claims. The court disagreed, holding that such claims conflicted with federal copyright policy, which limits protections for factual data and prioritizes public access. Additionally, the court rejected plaintiff’s argument that defendant’s actions constituted “unfair” business practices, finding no evidence of harm to competition.
Finally, the court allowed plaintiff to proceed with its new anti-hacking claims but left the door open for defendant to challenge these allegations later in the case.
Three reasons why this case matters:
- Defines Platform Rights: This case clarifies the limits of platform operators’ ability to control user-generated and public data.
- Reinforces Copyright Preemption: The decision highlights the importance of federal copyright laws in preventing conflicting state-law claims.
- Explores Anti-Hacking Laws: It illustrates how federal and state anti-hacking statutes may be used to address unauthorized access in the digital age.
X Corp. v. Bright Data Ltd., 2024 WL 4894290 (N.D. Cal. Nov. 26, 2024)
People tagging the wrong place on Instagram did not help prove trademark infringement

The City and County of San Francisco sued the Port of Oakland and the City of Oakland alleging trademark infringement and unfair competition. The dispute began when Oakland renamed its airport “San Francisco Bay Oakland International Airport,” which San Francisco claimed created confusion and harmed the brand of its own airport, San Francisco International Airport (SFO). San Francisco asked the court for a preliminary injunction to stop Oakland from using the new name while the case proceeded.
The court granted the motion in part, finding that the new name improperly implied an affiliation between the airports. However, it rejected claims that Oakland’s actions caused confusion during online ticket searches or at the point of sale. Social media evidence featured prominently in the case but ultimately did not sway the court’s decision.
San Francisco argued that social media posts demonstrated actual consumer confusion. For example, some users on platforms such as Instagram tagged images of SFO with Oakland’s new name, while others expressed uncertainty about which airport they were referencing. Despite these examples, the court found the evidence weak and unconvincing. It noted that most of the posts lacked context, such as whether the users were actual travelers or how their confusion affected any purchasing decisions. Additionally, the court questioned the sincerity of some posts, particularly where users repeated the same confusion across multiple platforms or appeared to joke about the issue.
While the court acknowledged that social media evidence could have value, it stressed the need for reliability. Without clear patterns or evidence of widespread confusion, the posts provided little support for San Francisco’s broader claims.
Three reasons why this case matters:
- The Limits of Social Media Evidence: This case demonstrates that courts demand robust, contextualized proof when social media posts are used to argue consumer confusion.
- Trademark Law in the Digital Age: The case highlights the challenges of protecting trademarks in a world where branding and consumer perception are shaped online.
- Impacts on Regional Branding: The ruling underscores the importance of clear naming practices for public infrastructure, especially in areas with competing interests.
City and County of San Francisco v. City of Oakland, 2024 WL 5563429 (N.D. Cal. Nov. 12, 2024)
