
Online platforms will have to answer for sales of alleged counterfeit products


A federal court in New York has denied the motion to dismiss filed by Chinese online platforms Alibaba and AliExpress in a lawsuit brought by a toymaker alleging that these companies’ merchant customers were engaged in contributory trademark and copyright infringement through the online sale of counterfeit products.

Background of the Case

Plaintiff toymaker accused the Alibaba defendants of facilitating the sale of counterfeit products on their platforms. The lawsuit stemmed from the activities of around 90 e-commerce merchants who were reportedly using the platforms to sell fake goods.

The Court’s Rationale

The court’s decision to deny the motion to dismiss turned on several allegations that suggest the Alibaba defendants played a more complicit role than that of a passive service provider. These allegations included:

  1. Specific Awareness of Infringement: The Alibaba defendants were allegedly well-informed about the infringing activities of several merchants, including some named in the lawsuit. The Alibaba defendants should have known of these instances from orders in six separate lawsuits against sellers on the platforms.
  2. Continued Proliferation of Infringing Listings: Despite this awareness, the platforms reportedly allowed the continued presence and proliferation of infringing listings. This included listings from merchants already flagged under Alibaba’s “three-strike policy” for repeat offenders.
  3. Promotion of Infringing Listings: Plaintiff alleged the Alibaba defendants actively promoted infringing listings. The Alibaba defendants reportedly granted “Gold Supplier” and “Verified” statuses to infringing merchants, sold related keywords, and even promoted these listings through Google and promotional emails.
  4. Financial Gains from Infringements: Crucially, plaintiff argued that the Alibaba defendants financially benefited from these activities by attracting more customers, encouraging merchants to purchase additional services, and earning commissions on transactions involving counterfeit goods.

DMCA Safe Harbor Provisions Not Applicable

The court rejected the Alibaba defendants’ argument that safe harbor provisions under the Digital Millennium Copyright Act (DMCA) applied at this stage of the litigation. The DMCA safe harbor is typically an affirmative defense to liability, and for it to apply at the motion to dismiss stage, such defense must be evident on the face of the complaint. The court found that in this case, it was not.

Implications of the Ruling

This decision is relevant to purveyors of online products who face the persistent challenges of online enforcement of intellectual property rights. Remedies against overseas companies in situations such as this are often elusive. The case provides a roadmap of sorts concerning the types of facts that must be asserted to support a claim against an online provider in the position of the Alibaba defendants.

Kelly Toys Holdings, LLC v. 19885566 Store et al., 2023 WL 8936307 (S.D.N.Y. December 27, 2023)

Fifth Circuit dissent issues scathing rebuke of broad Section 230 immunity


Dissenting in the court’s refusal to rehear an appeal en banc, Judge Elrod of the Fifth Circuit Court of Appeals – joined by six of her colleagues – penned an opinion that sharply criticized the broad immunity granted to social media companies under Section 230 of the Communications Decency Act. The dissent emerged in a case involving John Doe, a minor who was sexually abused by his high school teacher, a crime in which the messaging app Snapchat played a pivotal role.

The Core of the Controversy

Section 230 (47 U.S.C. § 230) is a provision that courts have long held to shield internet companies from liability for content posted by their users. The dissenting opinion, however, argues that this immunity has been stretched far beyond its intended scope, potentially enabling platforms to evade responsibility even when their design and operations contribute to illegal activities.

Snapchat’s Role in the Abuse Case

Snapchat, owned by Snap, Inc., was used by the teacher to send sexually explicit material to Doe. Doe sought to hold Snap accountable, alleging that Snapchat’s design defects, such as inadequate age-verification mechanisms, indirectly facilitated the abuse. But the lower court, applying previous cases interpreting Section 230, dismissed these claims at the initial stage.

A Critical Examination of Section 230

The dissent criticized the court’s interpretation of Section 230, arguing that it has been applied too broadly to protect social media companies from various forms of liability, including design defects and distributor responsibilities. It highlighted the statute’s original text, which was meant to protect platforms from being deemed publishers or speakers of third-party content, not to shield them from liability for their own conduct.

Varied Interpretations Across Courts

Notably, the dissent pointed out the inconsistency in judicial interpretations of Section 230. While some courts, like the Ninth Circuit, have allowed claims related to design defects to proceed, others have extended sweeping protections to platforms, significantly limiting the scope for holding them accountable.

The Implications for Internet Liability

This case and the resulting dissent underscore a significant legal issue in the digital age: how to balance the need to protect online platforms from excessive liability with ensuring they do not become facilitators of illegal or harmful activities. The dissent suggested that the current interpretation of Section 230 has tipped this balance too far in favor of the platforms, leaving victims like Doe without recourse.

Looking Ahead: The Need for Reevaluation

The dissenting opinion called for a reevaluation of Section 230, urging a return to the statute’s original text and intent. This reexamination – in the court’s view – would be crucial in the face of evolving internet technologies and the increasing role of social media platforms in everyday life. The dissent warned of the dangers of a legal framework that overly shields these powerful platforms while leaving individuals exposed to the risks associated with their operations.

Conclusion

The court’s dissent in this case is a clarion call for a critical reassessment of legal protections afforded to social media platforms. As the internet continues to evolve, the legal system must adapt to ensure that the balance between immunity and accountability is appropriately maintained, safeguarding individuals’ rights without stifling technological innovation and freedom of expression online.

Doe through Roe v. Snap, Incorporated, — F.4th —, 2023 WL 8705665 (5th Cir. December 18, 2023)

See also: Snapchat not liable for enabling teacher to groom minor student

California court decision strengthens Facebook’s ability to deplatform its users


Plaintiff used Facebook to advertise his business. Facebook kicked him off and would not let him advertise, based on alleged violations of Facebook’s Terms of Service. Plaintiff sued for breach of contract. The lower court dismissed the case so plaintiff sought review with the California appellate court. That court affirmed the dismissal.

The Terms of Service authorized the company to unilaterally “suspend or permanently disable access” to a user’s account if the company determined the user “clearly, seriously, or repeatedly breached” the company’s terms, policies, or community standards.

An ordinary reading of such a provision would lead one to think that Facebook would not be able to terminate an account unless certain conditions were met, namely, that there had been a clear, serious or repeated breach by the user. In other words, Facebook would be required to make such a finding before terminating the account.

But the court applied the provision much more broadly. So broadly, in fact, that one could say the notion of clear, serious, or repeated breach was irrelevant, superfluous language in the terms.

The court said: “Courts have held these terms impose no ‘affirmative obligations’ on the company.” Discussing a similar case involving Twitter’s terms of service, the court observed that platform was authorized to suspend or terminate accounts “for any or no reason.” Then the court noted that “[t]he same is true here.”

So, the court arrived at the conclusion that despite Facebook’s own terms – which would lead users to think that they wouldn’t be suspended unless there was a clear, serious or repeated breach – one can get deplatformed for any reason or no reason. The decision pretty much gives Facebook unmitigated free speech police powers.

Strachan v. Facebook, Inc., 2023 WL 8589937 (Cal. App. December 12, 2023)

No knowledge of infringement, no secondary copyright liability for YouTube

This case underscores that platforms like YouTube, when promptly addressing DMCA takedown notices, are not necessarily held liable for user-uploaded content that infringes copyright.

Plaintiff sued defendant YouTube accusing it of secondary copyright infringement liability — that YouTube was contributorily and vicariously liable for infringement concerning three videos that nonparty TV-Novosti (operator of various RT channels, including RT Arabic) posted on YouTube. These videos contained content from documentary videos plaintiff had created and for which it owned the copyright.

Defendant moved to dismiss the complaint. The lower court granted the motion to dismiss. Plaintiff filed a motion for leave to file an amended complaint, which the court denied. That court had determined that the proposed amendments would be futile. Plaintiff sought review with the Second Circuit, arguing it had sufficiently alleged YouTube’s liability under theories of contributory and vicarious liability. On appeal, the court affirmed the denial of the motion to amend.

The court rejected plaintiff’s argument that YouTube was liable for infringement by failing to delete TV-Novosti’s entire YouTube account. Plaintiff’s argument apparently went something like this: “We made YouTube aware of the infringement by sending a DMCA takedown notice. Though YouTube took down the videos (which it did not catch in its copyright-detection technology) once it found out about them, by continuing to provide the platform for this infringer, YouTube took on liability for the infringement.”

The court held that it agreed with the lower court’s denial of the motion for leave to amend. “[B]ecause YouTube promptly and permanently removed the [allegedly infringing videos] from its platform once it received the plaintiff’s DMCA notices, the Amended Complaint does not permit an inference that YouTube acted in concert with TV-Novosti.”

Business Casual Holdings, LLC v. YouTube, LLC, 2023 WL 6842449 (2d Cir., October 17, 2023)

See also: BitTorrent site liable for Grokster style inducement of copyright infringement

Hackers stole cryptocurrency but the insurance company did not have to pay


Insurance and loss

Plaintiffs had a homeowners insurance policy with defendant insurance company. The policy covered personal property owned or used by the plaintiffs with a maximum limit of $359,500 for direct physical loss due to certain perils, including theft. In June 2021, hackers accessed plaintiffs’ computer and stole crypto tokens from their crypto wallets on two blockchain networks, amounting to approximately $750,000. Plaintiffs reported the incident and filed an insurance claim with defendant. Defendant only paid $200 on the claim because of a special limit of liability found in the policy.

Thinking that to be a pretty insufficient payment for such a dramatic loss, plaintiffs sued, alleging breach of contract and unreasonable denial of coverage under a Minnesota statute. Defendant moved for judgment on the pleadings. (In US federal court, “judgment on the pleadings” is a ruling made solely on the parties’ written pleadings, without a trial, appropriate when the pleadings themselves show the movant is entitled to judgment as a matter of law.) The court granted the motion.

Not direct and physical

Defendant had argued that the theft of digital assets (crypto tokens) did not constitute a “direct physical loss” under the policy, and thus, the claim was not covered. The court analyzed the language of the insurance policy, stating that “direct physical loss” required a distinct, demonstrable, and physical alteration to the covered property. Since crypto tokens are purely digital and lack physicality, according to the court, they do not meet the requirements for “direct physical loss” under Minnesota law.

Plaintiffs claimed that the policy’s language was ambiguous, but the court rejected this argument, applying the ordinary meaning of “direct physical loss” as required by Minnesota law.

The court also addressed plaintiffs’ statutory claim for bad-faith denial of coverage under Minnesota Statute § 604.18. To succeed in this claim, plaintiffs needed to prove that defendant lacked a reasonable basis for denying coverage and acted in reckless disregard of this fact. But since defendant did not breach the policy, the court found that the bad-faith claim failed as well.

Rosenberg v. Homesite Insurance Agency, Inc., 2023 WL 4686412 (D. Minn., July 21, 2023)

From the archives: 

Exploiting blockchain software defect supports unjust enrichment claim

When X makes it an ex-brand: Can a company retain rights in an old trademark after rebranding?

This past weekend Elon Musk announced plans to rebrand Twitter as X. This strategic shift from one of the most recognized names and logos in the social media realm is stirring discussion throughout the industry. This notable transformation raises a broader question: Can a company still have rights in its trademarks after rebranding? What might come of the famous TWITTER mark and the friendly blue bird logo?

Continued Use is Key

In the United States, trademark rights primarily arise from the actual use of the mark in commerce (and not just from registration of the mark). The Commerce Clause of the United States Constitution grants Congress the power to regulate commerce among the states. Exercising this constitutional authority, Congress enacted the Lanham Act, which serves as the foundation for modern trademark law in the United States. By linking the Lanham Act’s protections to the “use in commerce” of a trademark, the legislation reinforces the principle that active commercial use, rather than mere registration, is a key determinant of rights in that trademark. So, as long as a company has genuinely used its trademark in commerce (assuming no other company has rights that arose prior in time), the company retains rights to that mark.

Though a company may transition to a new brand identity, it can maintain rights to its former trademark by continuing its use in some form or another. This might involve selling a limited line of products under the old brand, using the old brand in specific regions, or licensing the old trademark to other entities. Such actions show the company’s intent to maintain its claim and rights to the mark—such rights being tied strongly to the actual use of the mark in commerce. No doubt continued use of the old marks after a rebrand can be problematic, as it may paint an unclear picture as to how the company is developing its identity. For example, as of the time of this blog post, X has placed the new X logo, but still has the words “Search Twitter” in the search bar. And there is also the open question of whether we will in the future call content posted to the platform “tweets”.

Avoiding Abandonment

If a company does not actively use its trademark and demonstrates no intention to use it in the future, it runs the risk of abandonment. Once a trademark is deemed abandoned, the original owner loses exclusive rights to it. This is obviously problematic for a brand owner, because a third party could then enter the scene, adopt use of the abandoned mark, and thereby pick up on the goodwill the former brand owner developed over the years.

What Will Twitter Do?

It is difficult to imagine that X will allow the TWITTER mark to fall into the history books of abandoned marks. The mark has immense value through its long use and recognition—indeed the platform has been the prime mover in its space since its founding in 2006. Even if the company commits to the X rebranding, we probably have not seen the end of TWITTER and the blue bird as trademarks. There will likely be some use, even if different than what we have seen over the past 17 years, to keep certain trademark rights alive.

From the archives:

Is Twitter a big fat copyright infringing turkey?

Generative AI executive who moved to competitor slapped with TRO


Generative AI is obviously a quickly growing segment, and competition among businesses in the space is fierce. As companies race to harness the transformative power of this technology, attracting and retaining top talent becomes a central battleground. Recent legal cases, like the newly-filed Kira v. Samman in Virginia, show just how intense the scramble for expertise has become. In the court’s opinion granting a temporary restraining order against a departing executive and the competitor to which he fled, we see some of the dynamics of non-competition clauses, and the lengths companies will go to in order to safeguard their intellectual property and strategic advantages, particularly in dealing with AI technology.

Kira and Samman Part Ways

Plaintiff Kira is a company that creates AI tools for law firms, while defendant DeepJudge AG offers comparable AI solutions to boost law firm efficiency. Kira hired defendant Samman, who gained access to Kira’s confidential data. Samman had signed a Restrictive Covenants Agreement with Kira containing provisions that prohibited him from joining a competitor for 12 months post-termination. Samman resigned from Kira in June 2023, and Kira claimed he joined competitor DeepJudge after sending Kira’s proprietary data to his personal email.

The Dispute

Kira sued Samman and DeepJudge in federal court, alleging Samman breached his contractual obligations, and accusing DeepJudge of tortious interference with a contract. Kira also sought a temporary restraining order (TRO) to prevent Samman from working for DeepJudge and to mandate the return and deletion of Kira’s proprietary information in Samman’s possession.

Injunctive Relief Was Proper

The court observed that to obtain the sought-after injunction, Kira had to prove, among other things, a likelihood of success at trial. It found that Kira demonstrated this likelihood concerning Samman’s breach of the non-competition restrictive covenant. It determined the non-competition covenant Samman breached to be enforceable, given that it met specific requirements including advancing Kira’s economic interests. The court found that the evidence showed Samman, after leaving his role at Kira, joined a direct competitor, DeepJudge, in a role similar in function, thus likely violating the non-competition restrictive covenant.

The court found that Kira faced irreparable harm without the injunction, especially given the potential loss of clients due to Samman’s knowledge of confidential information. The court weighed the balance of equities in favor of Kira, emphasizing the protection of confidential business information and enforcement of valid contracts. It required Kira to post a $15,000 bond to cover potential losses Samman might face due to the injunction.

Kira (US) Inc. v. Samman, 2023 WL 4687189 (E.D. Va. July 21, 2023)

See also:

When can you use a competitor’s trademark in a domain name?

Court allows Amazon to censor “Wuhan plague” book reviews


In 2015, plaintiff began posting book reviews on Amazon, but in 2019 Amazon revoked his review privileges due to guideline violations, including reviews that criticized Donald Trump and two authors. After arbitration in 2020 favored Amazon, plaintiff and Amazon reached a settlement allowing plaintiff to post reviews if he adhered to Amazon’s policies. However, in 2022, after posting reviews derogatory of millennials and referring to COVID-19 as the “Wuhan plague,” Amazon once again revoked plaintiff’s ability to post book reviews and deleted his prior reviews from the platform.

Plaintiff sued Amazon alleging breach of contract and violation of Washington’s Consumer Protection Act (CPA), and seeking a declaratory judgment that Section 230 of the Communications Decency Act did not protect Amazon. Plaintiff asserted that Amazon wrongfully removed plaintiff’s reviews and did not adequately explain its actions. The CPA claim centered on Amazon’s insufficient explanations and inconsistent policy enforcement. Amazon moved to dismiss the complaint, arguing there was no legal basis for the breach of contract claim, that the CPA claim lacked merit, and that both Section 230 and the First Amendment protected Amazon from liability. The court granted Amazon’s motion.

Breach of Contract Claim Tossed

The court noted that to win a breach of contract claim in Washington, plaintiff had to prove a contractual duty was imposed and breached, causing plaintiff to suffer damages. Plaintiff claimed that Amazon breached its contract by banning him from posting book reviews and asserted that Amazon’s Conditions and Guidelines were ambiguous. But the court found that Amazon’s Conditions and Guidelines gave Amazon the exclusive right to remove content or revoke user privileges at its discretion, and that plaintiff’s claim sought to hold Amazon responsible for actions the contract permitted. Similarly, the court found plaintiff’s claims for both breach of contract and breach of the implied duty of good faith and fair dealing to be baseless, as they failed to identify any specific contractual duty Amazon allegedly violated.

No Violation of Washington Consumer Protection Act

To be successful under Washington’s Consumer Protection Act, plaintiff would have had to allege five elements, including an unfair or deceptive act and a public interest impact. The court found that plaintiff’s claim against Amazon, based on the company’s decision to remove reviews, failed to establish an “unfair or deceptive act” since Amazon’s Conditions and Guidelines transparently allowed such actions, and plaintiff presented no evidence showing Amazon’s practices would mislead reasonable consumers. Additionally, plaintiff did not adequately demonstrate a public interest impact, as he did not provide evidence of a widespread pattern of behavior by Amazon or the potential harm to other users. Consequently, plaintiff’s claim was insufficient in two essential areas, rendering the CPA claim invalid.

Section 230 Also Saved the Day for Amazon

Amazon claimed immunity under Section 230(c)(1) of the Communications Decency Act (CDA) against plaintiff’s allegations under the CPA and for breach of the implied duty of good faith and fair dealing. Section 230 of the CDA protects providers of interactive computer services from liability resulting from third-party content (e.g., online messaging boards). For Amazon to receive immunity under this section, it had to show three things: it is an interactive computer service, it is treated by plaintiff as a publisher, and the information in dispute (the book reviews) was provided by another content provider. Given that Amazon met these conditions, the court determined that plaintiff’s claims against Amazon under Washington’s CPA and for breach of the implied duty were barred by Section 230 of the CDA.

As for plaintiff’s declaratory judgment claim regarding Section 230, the court found that since the Declaratory Judgment Act only offers a remedy and not a cause of action, and given the absence of a “substantial controversy,” the court could not grant this declaratory relief. The court noted that its decision was further reinforced by its conclusion that Section 230 did bar two of plaintiff’s claims.

Haywood v. Amazon.com, Inc., 2023 WL 4585362 (W.D. Washington, July 18, 2023)

See also:

Amazon and other booksellers off the hook for sale of Obama drug use book

Does a human who edits an AI-created work become a joint author with the AI?


If a human edits a work that an AI initially created, is the human a joint author under copyright law?

U.S. copyright law (at 17 U.S.C. § 101) considers a work to be a “joint work” if it is made by two or more authors intending to mix their contributions into a single product. So, if a human significantly modifies or edits content that an AI originally created, one might think the human has made a big enough contribution to be considered a joint author. But it is not that straightforward. The law looks for a special kind of input: it must be original and creative, not just technical or mechanical. For instance, merely selecting options for the AI or doing basic editing might not cut it. But if the human’s editing changes the work in a creative way, it might just qualify as a joint author.

Where the human steps in.

This blog post is a clear example. ChatGPT created all the other paragraphs of this blog post (i.e. not this one). I typed this paragraph out from scratch. I have gone through and edited the other paragraphs, making what are obviously mechanical changes. For example, I didn’t like how ChatGPT used so many contractions. I mean, I did not like how ChatGPT used so many contractions. I suspect those are not the kind of “original” contributions that the Copyright Act’s authors had in mind to constitute the level of participation to give rise to a joint work. But I also added some sentences here and there, and struck some others. I took the photo that goes with the post, cropped it, and decided how to place it in relation to the text. Those activities are likely “creative” enough to be copyrightable contributions to the unitary whole that is this blog post. And then of course there is this paragraph that you are just about done reading. Has this paragraph not contributed some notable expression to make this whole blog post better than what it would have been without the paragraph?

Let’s say the human editing does indeed make the human a joint author. What rights would the human have? And how would these rights compare to any the AI might have? Copyright rights are generally held by human creators. This means the human would have rights to copy the work, distribute it, display or perform it publicly, and make derivative works.

Robot rights.

As for the AI, here’s where things get interesting. U.S. Copyright law generally does not recognize AI systems as authors, so they would not have any rights in the work. But this is a rapidly evolving field, and there is ongoing debate about how the law should treat creations made by AI.

This leaves us in a peculiar situation. You have a “joint work” that a human and an AI created together, but only the human can be an author. So, as it stands, the AI would not have any rights in the work, and the human would. Here’s an interesting nuance to consider: authors of joint works are pretty much free to do what they wish with the work as they see fit, so long as they fulfill certain obligations to the other authors (e.g., account for any royalties received). Does the human-owner have to fulfill these obligations to the purported AI-author of the joint work? It seems we cannot fairly address that question if we have not yet established that the AI system can be a joint author in the first place.

Where we go from here.

It seems reasonable to conclude that a human editing AI-created content might qualify as a joint author if the changes are significant and creative, not just technical. If that’s the case, the human would have full copyright rights under current law, while the AI would not have any. As these human-machine collaborations continue to become more commonplace, we will see how law and policy evolve to either strengthen the position that only “natural persons” (humans) can own intellectual property rights, or to move in the direction of granting some sort of “personhood” to non-human agents. It is like watching science fiction unfold in real time.

What do you think?

See also:

Five legal issues around using AI in a branding strategy

Snapchat not liable for enabling teacher to groom minor student

A high school science teacher used Snapchat to send sexually explicit content to one of her students, whom she eventually assaulted. Authorities uncovered this abuse after the student overdosed on drugs. The student (as John Doe) sued the teacher, the school district and Snapchat. The lower court threw out the case against Snapchat on the basis of the federal Communications Decency Act at 47 U.S.C. § 230. The student sought review with the United States Court of Appeals for the Fifth Circuit. On appeal, the court affirmed.

Relying on Doe v. MySpace, Inc., 528 F.3d 413 (5th Cir. 2008), the court affirmed the lower court’s finding that the student’s claims against Snapchat were based on the teacher’s messages. Accordingly, Snapchat was immune from liability because this provision of federal law – under the doctrine of the MySpace case – provides “immunity … to Web-based service providers for all claims stemming from their publication of information created by third parties.”

Doe v. Snap, Inc., 2023 WL 4174061 (5th Cir. June 26, 2023)
