X avoids much of music industry copyright lawsuit

Plaintiffs sued X for copyright infringement arising from users posting tweets containing copyright-protected music. Plaintiffs accused X of “trying to generate the kind of revenue that one would expect as a lawful purveyor of music and other media, without incurring the cost of actually paying for the licenses.” For example, plaintiffs highlighted a feature within the X platform whereby one could seek out tweets that include audiovisual media. And they pointed out infringing content surrounded by “promoted” content on the platform that generated revenue for X. The parties disputed the extent to which X actively encouraged infringing conduct. Plaintiffs sent many DMCA takedown notices to X but complained that the company took too long to respond to them. And plaintiffs asserted that X did not have an appropriate procedure in place to terminate users engaged in repeated acts of copyright infringement.

The complaint alleged three counts: direct, contributory and vicarious infringement. X moved to dismiss the complaint for failure to state a claim. The court granted the motion for the most part, except as to certain practices bearing on contributory liability, namely, being more lenient toward verified users, failing to act promptly on DMCA takedown notices, and failing to take meaningful steps against severe serial infringers.

No direct infringement liability

The court found that plaintiffs had not successfully alleged direct infringement liability because their claims did not align with the required notion of “transmission” as defined in the Copyright Act and interpreted by the Supreme Court in American Broadcasting Companies, Inc. v. Aereo, Inc., 573 U.S. 431 (2014). The court distinguished X’s conduct from that of the defendant in Aereo, noting that X merely provided the platform for third-party transmissions rather than actively participating in the transmission of copyrighted material. The court therefore concluded that X’s role was more akin to that of a passive carrier, similar to a telegraph system or telephone company, making its actions more suitable for consideration under theories of secondary liability than under direct infringement.

Some possible contributory liability

The court found that certain portions of plaintiffs’ claims for contributory infringement liability survived because they plausibly alleged that X engaged in actions that could materially contribute to infringement on the X platform. These actions included failing to promptly respond to valid takedown notices, allowing users to pay for less stringent copyright policy enforcement, and not taking meaningful steps against severe serial infringers. Consequently, the court dismissed the broader claim of general liability across the X platform but allowed the plaintiffs to proceed with their claims related to these specific practices.

No vicarious liability

Finally, the court determined that plaintiffs had not successfully pled vicarious liability for copyright infringement because their allegations did not establish that X had the requisite level of control over the infringing activities on X. The court found that simply providing a service that users might exploit for infringement did not equate to the direct control or supervisory capacity typically required for vicarious liability, as seen in traditional employer-employee or principal-agent relationships. Consequently, the court rejected the application of vicarious liability in this context, emphasizing that contributory infringement, rather than vicarious liability, was the more appropriate legal framework for the plaintiffs’ claims.

Concord Music Group, Inc. v. X Corp., 2024 WL 945325 (M.D. Tenn. March 5, 2024)

See also:

Required content moderation reporting does not violate X’s First Amendment rights

A federal court in California has upheld the constitutionality of the state’s Assembly Bill 587 (AB 587), which requires social media companies to submit semi-annual reports to the state attorney general detailing their content moderation practices. This decision comes after X filed a lawsuit claiming the law violated the company’s First Amendment rights.

The underlying law

AB 587 requires social media companies to provide detailed accounts of their content moderation policies, particularly addressing issues like hate speech, extremism, disinformation, harassment, and foreign political interference. These “terms of service reports” are to be submitted to the state’s attorney general, with the aim of increasing transparency in how these platforms manage user content.

X’s challenge

X challenged this law, seeking to prevent its enforcement on the grounds that it was unconstitutional. The court, however, denied X’s motion for injunctive relief, finding that X failed to demonstrate a likelihood of success on the merits of its constitutional claims.

The court’s decision relied heavily on SCOTUS’s opinion in Zauderer v. Office of Disciplinary Counsel of Supreme Court of Ohio, 471 U.S. 626 (1985). Under Zauderer, governmentally compelled commercial disclosure is constitutionally permissible if the required information is purely factual and uncontroversial, the requirement is not unduly burdensome, and the disclosure is reasonably related to a substantial government interest.

The court’s constitutional analysis

In applying these criteria, the court found that AB 587’s requirements fit within these constitutional boundaries. The reports, while compulsory, do not constitute commercial speech in the traditional sense, as they are not advertisements and carry no direct economic benefit for the social media companies. Despite this, the court followed the rationale of other circuits that have assessed similar requirements for social media disclosures.

The court determined that the content of the reports mandated by AB 587 is purely factual, requiring companies to outline their existing content moderation policies related to specified areas. The statistical data, if provided, represents objective information about the company’s actions. The court also found that the disclosures are uncontroversial, noting that the mere association with contentious topics does not render the reports themselves controversial. That is a notable conclusion, given how controversial and political the regulation of “disinformation” can be.

Addressing the burden of these requirements, the court recognized that while the reporting may be demanding, it is not unjustifiably so under First Amendment considerations. X argued that the law would necessitate significant resources to monitor and report the required metrics. But the court noted that AB 587 does not obligate companies to adopt any specific content categories, nor does it impose burdens on speech itself, a crucial aspect under Zauderer’s analysis.

What this means

The court confirmed that AB 587’s reporting requirements are reasonably related to a substantial government interest. This interest lies in ensuring transparency in social media content moderation practices, enabling consumers to make informed choices about their engagement with news and information on these platforms.

The court’s decision is a significant step in addressing the complexities of regulating social media platforms, balancing the need for transparency with the constitutional rights of these digital entities. As the landscape of digital communication continues to evolve, this case may be a marker for how governments might approach the regulation of social media companies, particularly in the realm of content moderation.

X Corp. v. Bonta, 2023 WL 8948286 (E.D. Cal., December 28, 2023)

See also: Maryland Court of Appeals addresses important question of internet anonymity

When X makes it an ex-brand: Can a company retain rights in an old trademark after rebranding?

This past weekend Elon Musk announced plans to rebrand Twitter as X. This strategic shift from one of the most recognized names and logos in the social media realm is stirring discussion throughout the industry. This notable transformation raises a broader question: Can a company still have rights in its trademarks after rebranding? What might come of the famous TWITTER mark and the friendly blue bird logo?

Continued Use is Key

In the United States, trademark rights primarily arise from the actual use of the mark in commerce (and not just from registration of the mark). The Commerce Clause of the United States Constitution grants Congress the power to regulate commerce among the states. Exercising this constitutional authority, Congress enacted the Lanham Act, which serves as the foundation for modern trademark law in the United States. By linking the Lanham Act’s protections to the “use in commerce” of a trademark, the legislation reinforces the principle that active commercial use, rather than mere registration, is a key determinant of rights in that trademark. So, as long as a company has genuinely used its trademark in commerce (assuming no other company has rights that arose prior in time), the company retains rights to that mark.

Though a company may transition to a new brand identity, it can maintain rights to its former trademark by continuing its use in some form or another. This might involve selling a limited line of products under the old brand, using the old brand in specific regions, or licensing the old trademark to other entities. Such actions show the company’s intent to maintain its claim and rights to the mark, such rights being tied strongly to the actual use of the mark in commerce. No doubt continued use of the old marks after a rebrand can be problematic, as it may paint an unclear picture as to how the company is developing its identity. For example, as of the time of this blog post, X has put the new X logo in place but still displays the words “Search Twitter” in the search bar. And there is also the open question of whether we will continue to call content posted to the platform “tweets.”

Avoiding Abandonment

If a company does not actively use its trademark and demonstrates no intention to use it in the future, it runs the risk of abandonment. Once a trademark is deemed abandoned, the original owner loses exclusive rights to it. This is obviously problematic for a brand owner, because a third party could then enter the scene, adopt use of the abandoned mark, and thereby pick up on the goodwill the former brand owner developed over the years.

What Will Twitter Do?

It is difficult to imagine that X will allow the TWITTER mark to fall into the history books of abandoned marks. The mark has immense value through its long use and recognition—indeed the platform has been the prime mover in its space since its founding in 2006. Even if the company commits to the X rebranding, we probably have not seen the end of TWITTER and the blue bird as trademarks. There will likely be some use, even if different than what we have seen over the past 17 years, to keep certain trademark rights alive.

From the archives:

Is Twitter a big fat copyright infringing turkey?

Why are API access agreements important?

Twitter has been in the news lately for what some have characterized as its problematic termination of third-party developers’ access to its platform. This is a good occasion to talk about API access agreements in general, what they should cover, and why they are important.

An API (Application Programming Interface) access agreement is a legal document that outlines the terms and conditions under which a third-party developer can access and use an API. These agreements are important because they ensure that the API owner maintains control over its system and that the third-party developer understands and agrees to the terms and conditions of use.

System security and stability

One of the key provisions in an API access agreement relates to security. As APIs are used to access sensitive data and perform critical functions, it is essential that the API is protected from unauthorized access and misuse. The API owner should set strict security requirements for the third-party developer, such as data encryption and authentication protocols, to ensure that the API is used in a secure manner. The API owner may also wish to set limits on how often calls can be made to the API, so that the system is not overloaded or otherwise subject to diminished performance.
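To make the call-limit point concrete, here is a minimal sketch in Python of a token-bucket rate limiter of the kind an API owner might use to enforce per-developer call caps. The names and numbers are invented for illustration; this is a sketch of the general technique, not any particular platform’s implementation.

```python
import time

class TokenBucket:
    """Hypothetical token-bucket rate limiter. Each developer gets a
    bucket; an API call is allowed only if a token is available."""

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec       # tokens replenished per second
        self.capacity = capacity       # maximum burst size
        self.tokens = float(capacity)  # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if one more API call may proceed right now."""
        now = time.monotonic()
        # Replenish tokens based on elapsed time, up to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller has exceeded its allotted rate

# Example: an agreement allowing 5 calls per second, bursts up to 10.
bucket = TokenBucket(rate_per_sec=5, capacity=10)
print(bucket.allow())  # True while the developer stays within limits
```

A contractual rate limit backed by a mechanism like this protects system stability while giving both parties an objective measure of compliance.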

Intellectual property protection

Another key provision in an API access agreement relates to intellectual property. The API owner should have the right to control the use of its API, including the right to limit the third-party developer’s use of the API as needed to protect intellectual property rights. The API owner should also ensure that the third-party developer agrees not to copy, distribute, or otherwise use the API in a manner that is outside of an agreed scope.

These are contracts

API access agreements are contracts, and as such, they are legally binding. The API owner must be able to maintain control of its system for the system to function properly. This means that the API owner should have the right to revoke access to the API if the third-party developer breaches the terms of the agreement or if the API is being used in a manner that is not in compliance with the agreement.
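As a rough illustration of how that contractual right to revoke access maps onto the technical layer, here is a simplified Python sketch in which the provider checks a developer’s API key on each call and can revoke it upon breach. All class and method names here are hypothetical, not drawn from any real platform.

```python
import secrets

class ApiKeyRegistry:
    """Hypothetical registry of active API keys held by the provider."""

    def __init__(self):
        self._keys = {}  # maps an API key to the developer who holds it

    def issue(self, developer: str) -> str:
        key = secrets.token_hex(16)  # random 32-character hex key
        self._keys[key] = developer
        return key

    def authorize(self, key: str) -> bool:
        # Every incoming API call should pass this check first.
        return key in self._keys

    def revoke(self, key: str, reason: str) -> None:
        # E.g., invoked when the developer breaches the access agreement.
        developer = self._keys.pop(key, None)
        if developer is not None:
            print(f"Revoked {developer}'s key: {reason}")

registry = ApiKeyRegistry()
key = registry.issue("ExampleDev")
print(registry.authorize(key))   # True: access allowed
registry.revoke(key, "breach of API access agreement")
print(registry.authorize(key))   # False: access terminated
```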

Avoiding problems with termination

When terminating access to an API, the provider can treat a third-party developer fairly by providing adequate notice and a clear explanation for the termination. The developer should also negotiate for a reasonable amount of time to transition to an alternative solution or to retrieve any data it has stored within the API. Additionally, the provider may wish to make a good faith effort to assist the developer in finding a suitable alternative solution. If the termination is due to a breach of the API access agreement, the provider may provide the developer with specific details about the breach and allow for an opportunity for the developer to cure the breach before terminating access. A developer should also consider trying to negotiate a provision that entitles it to compensation from the provider for any losses or damages incurred as a result of an improper termination. Overall, the provider should approach the termination process in a fair, transparent and reasonable manner, taking into account the developer’s business needs and interests.

API access agreements are an essential part of the API ecosystem. They help ensure that the API owner maintains control over its system, that the third-party developer understands and agrees to the terms and conditions of use, and that the API is used in a secure and compliant manner. It is important that the parties understand the key provisions in an API access agreement and seek to comply with them in order to use the API successfully.

See also: Court will not aid company that was banned from accessing Facebook API

Evan Brown is a technology and intellectual property attorney in Chicago. Follow him on Twitter at @internetcases.

Can a person be liable for retweeting a defamatory tweet?

Under traditional principles of defamation law, one can be liable for repeating a defamatory statement to others. Does the same principle apply, however, on social media such as Twitter, where one can easily repeat the words of others via a retweet?

Hacking, tweet, retweet, lawsuit

A high school student hacked the server hosting the local middle school’s website, and modified the web page of plaintiff, a teacher, to make it appear she was seeking inappropriate relationships. Another student tweeted a picture of the modified web page, and several people retweeted that picture.

The teacher sued the retweeters for defamation and reckless infliction of emotional distress. The court dismissed the case, holding that 47 USC §230 immunized defendants from liability as “users” of an interactive computer service. Plaintiff sought review with the New Hampshire Supreme Court. On appeal, the court affirmed the dismissal.

Who is a “user” under Section 230?

Section 230 provides, in relevant part, that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. Importantly, the statute does not define the word “user”. The lower court held that defendant retweeters fit into the category of “user” under the statute and therefore could not be liable for their retweeting, because to impose such liability would require treating them as the publisher or speaker of information provided by another.

Looking primarily at the plain language of the statute, and guided by the 2006 California case of Barrett v. Rosenthal, the state supreme court found no basis in plaintiff’s arguments that defendants were not “users” under the statute. Plaintiff had argued that “user” should be interpreted to mean libraries, colleges, computer coffee shops and others who, “at the beginning of the internet” were primary access points for people. And she also argued that because Section 230 changed common law defamation, the statute must speak directly to immunizing individual users.

The court held that it was “evident” that Section 230 abrogated the common law of defamation as applied to individual users. “That individual users are immunized from claims of defamation for retweeting content they did not create is evident from the statutory language.”

Banaian v. Bascom, — A.3d —, 2022 WL 1482521 (N.H. May 11, 2022)

See also:

Section 230 immunity protected Twitter from claims it aided and abetted defamation

Twitter enjoyed Section 230 immunity for aiding and abetting defamation because plaintiffs’ claims on that point did not transform Twitter into a party that created or developed content.

An anonymous Twitter user posted some tweets that plaintiffs thought were defamatory. So plaintiffs sued Twitter for defamation after Twitter refused to take the tweets down. Twitter moved to dismiss the lawsuit. It argued that the Communications Decency Act (CDA) at 47 U.S.C. §230 barred the claim. The court agreed that Section 230 provided immunity to Twitter, and granted the motion to dismiss.

The court applied the Second Circuit’s test for Section 230 immunity as set out in La Liberte v. Reid, 966 F.3d 79 (2d Cir. 2020). Under this test, which parses Section 230’s language, plaintiffs’ claims failed because:

  • (1) Twitter was a provider of an interactive computer service,
  • (2) the claims were based on information provided by another information content provider, and
  • (3) the claims treated Twitter as the publisher or speaker of that information.

Twitter is a provider of an interactive computer service

The CDA defines an “interactive computer service” as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server.” 47 U.S.C. § 230(f)(2). The court found that Twitter is an online platform that allows multiple users to access and share the content hosted on its servers. As such, it is an interactive computer service for purposes of the CDA.

Plaintiffs’ claims were based on information provided by another information content provider

The court also found that the claims against Twitter were based on information provided by another information content provider. The CDA defines an “information content provider” as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.” 47 U.S.C. § 230(f)(3). In this case, the court found that plaintiffs’ claims were based on information created or developed by another information content provider – the unknown Twitter user who posted the alleged defamatory content. Plaintiffs did not allege that Twitter played any role in the “creation or development” of the challenged tweets.

The claim treated Twitter as the publisher or speaker of the alleged defamatory information

The court gave careful analysis to this third prong of the test. Plaintiffs alleged that Twitter had “allowed and helped” the unknown Twitter user to defame plaintiffs by hosting that user’s tweets on its platform, or by refusing to remove those tweets when plaintiffs reported them. The court found that either theory would amount to holding Twitter liable as the “publisher or speaker” of “information provided by another information content provider.” The court observed that making information public and distributing it to interested parties are quintessential acts of publishing. Plaintiffs’ theory of liability would “eviscerate” Section 230 protection because it would hold Twitter liable simply for organizing and displaying content exclusively provided by third parties.

Similarly, the court concluded that holding Twitter liable for failing to remove the tweets plaintiffs found objectionable would also hold Twitter liable based on its role as a publisher of those tweets, because deciding whether or not to remove content falls squarely within the exercise of a publisher’s traditional role and is therefore subject to the CDA’s broad immunity.

The court found that plaintiffs’ suggestion that Twitter aided and abetted defamation by arranging and displaying others’ content on its platform failed to overcome Twitter’s immunity under the CDA. In the court’s view, such activity would be tantamount to holding Twitter responsible as the “developer” or “creator” of that content. But to be liable as a developer or creator of third-party content, rather than as a publisher of it, Twitter would have had to directly and materially contribute to what made the content itself unlawful.

Plaintiffs in this case did not allege that Twitter contributed to the defamatory content of the tweets at issue, and thus pled no basis upon which Twitter could be held liable as the creator or developer of those tweets. Accordingly, plaintiffs’ defamation claims against Twitter also satisfied the final requirement for CDA immunity: the claims sought to hold Twitter, an interactive computer service, liable as the publisher of information provided by another information content provider. Ultimately, Twitter enjoyed Section 230 immunity from the claim that it aided and abetted defamation.

Brikman v. Twitter, Inc., 2020 WL 5594637 (E.D.N.Y., September 17, 2020)

See also:

Website avoided liability over user content thanks to Section 230

About the author:

Evan Brown is an attorney in Chicago practicing in copyright, trademark, technology and other areas of the law. His clients include individuals and companies in many industries, as well as the technology companies that serve them. Twitter: @internetcases

Need help with an online legal issue?

Let’s talk. Give me a call at (630) 362-7237, or send me an email at ebrown@internetcases.com.

Executive order to clarify Section 230: a summary

Late yesterday President Trump took steps to make good on his promise to regulate online platforms like Twitter and Facebook, releasing a draft executive order to that end. Here is a summary of the key points. The draft order:

  • States that it is the policy of the U.S. to foster clear, nondiscriminatory ground rules promoting free and open debate on the Internet. It is the policy of the U.S. that the scope of Section 230 immunity should be clarified.
  • Argues that a platform becomes a “publisher or speaker” of content, and therefore not subject to Section 230 immunity, when it does not act in good faith, in accordance with Section 230(c)(2), to restrict access to content that it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.” The executive order argues that Section 230 “does not extend to deceptive or pretextual actions restricting online content or actions inconsistent with an online platform’s terms of service.”
  • Orders the Secretary of Commerce to petition the FCC, requesting that the FCC propose regulations to clarify the conditions around a platform’s “good faith” when restricting access or availability of content. In particular, the requested rules would examine whether the action was, among other things, deceptive, pretextual, inconsistent with the provider’s terms of service, the product of unreasoned explanation, or taken without meaningful opportunity to be heard.
  • Directs each federal executive department and agency to review its advertising and marketing spending on online platforms. Each is to provide a report in 30 days on: amount spent, which platforms supported, any viewpoint-based restrictions of the platform, assessment whether the platform is appropriate, and statutory authority available to restrict advertising on platforms not deemed appropriate.
  • States that it is the policy of the U.S. that “large social media platforms, such as Twitter and Facebook, as the functional equivalent of a traditional public forum, should not infringe on protected speech”.
  • Re-establishes the White House “Tech Bias Reporting Tool” that allows Americans to report incidents of online censorship. These complaints are to be forwarded to the DoJ and the FTC.
  • Directs the FTC to “consider” taking action against entities covered by Section 230 who restrict speech in ways that do not align with those entities’ public representations about those practices.
  • Directs the FTC to develop a publicly-available report describing complaints of activity of Twitter and other “large internet platforms” that may violate the law in ways that implicate the policy that these are public fora and should not infringe on protected speech.
  • Establishes a working group with states’ attorneys general regarding enforcement of state statutes prohibiting online platforms from engaging in unfair and deceptive acts and practices. 
  • This working group is also to collect publicly available information for the creation and monitoring of user watch lists, based on their interactions with content and other users (likes, follows, time spent). This working group is also to monitor users based on their activity “off the platform”. (It is not clear whether that means “off the internet” or “on other online places”.)

Section 230 protected Twitter from liability for deleting Senate candidate’s accounts

Plaintiff (an Arizona senate candidate) sued Twitter after it suspended four of plaintiff’s accounts. He brought claims for (1) violation of the First Amendment; (2) violation of federal election law; (3) breach of contract; (4) conversion; (5) antitrust; (6) negligent infliction of emotional distress; (7) tortious interference; and (8) promissory estoppel.

Twitter moved to dismiss on multiple grounds, including that Section 230(c)(1) of the Communications Decency Act (“CDA”), 47 U.S.C. § 230, rendered it immune from liability for each of plaintiff’s claims that sought to treat it as a publisher of third-party content.

The CDA protects from liability (1) any provider of an interactive computer service (2) whom a plaintiff seeks to treat as a publisher or speaker (3) of information provided by another information content provider.

The court granted the motion to dismiss, on Section 230 grounds, all of the claims except the antitrust claim (which it dismissed for other reasons). It held that Twitter is a provider of an interactive computer service. And plaintiff sought to treat Twitter as a publisher or speaker by trying to pin liability on it for deleting accounts, which is a quintessential activity of a publisher. The deleted accounts consisted of information provided by another information content provider (i.e., not Twitter, but plaintiff himself).

Brittain v. Twitter, 2019 WL 2423375 (N.D. Cal. June 10, 2019)

Toilet paper bearing Trump tweets a copyright problem?

This report from Fox News discusses toilet paper available via Amazon printed with Donald Trump tweets. It raises the question of whether the president would be able to stop the sale of this product and seek damages for copyright infringement. Assuming the tweets are printed without his permission, could he make a claim?

There are a number of issues to consider here.

The first is the long-kicked-around question of whether tweets are copyrightable. In other words, do they contain enough content, and the right kind of content, to rise to the level of originality that copyright law requires? One would be hard pressed to argue that a tweet not composed merely of facts is outside copyright’s protection. Though a tweet is only 140 characters, there is plenty of room for originality. If a tweet were not copyrightable, then neither would be The Red Wheelbarrow.

The second issue is whether the tweets may not be subject to copyright because they are a work of the federal government. Section 105 of the Copyright Act says that “[c]opyright protection . . . is not available for any work of the United States Government.” This splits out into a couple of other issues. Are the tweets from when their author was president? And even if they are, would tweets from a personal account be a “work of the United States Government”?

The article gives us an answer to the first question, which postpones the opportunity to answer the second one. The tweets are from before he became president. So there does not appear to be much standing in the way of the Donald fighting these on copyright grounds if he were to so choose.

But not so fast. What about fair use? Obviously the nature of the product is a commentary on the content printed upon it.

What a crappy situation. Maybe Amazon will just flush this product from its site before we get there.

Court orders Twitter to identify anonymous users

Defamation plaintiffs’ need for requested information outweighed any impact on Doe defendants’ free speech right to tweet anonymously.

Plaintiff company and its CEO sued several unknown defendants who tweeted that plaintiff company encouraged domestic violence and misogyny and that the CEO visited prostitutes. The court allowed plaintiffs to serve subpoenas on Twitter to seek the identity of the unknown Twitter users. Twitter would not comply with the subpoenas unless and until the court ruled on whether the production of information would violate the users’ First Amendment rights.

The court ruled in favor of the plaintiffs and ordered Twitter to turn over identifying information about the unknown users. In reaching this decision, the court applied the Ninth Circuit analysis for unmasking anonymous internet speakers set out in Perry v. Schwarzenegger, 591 F.3d 1126 (9th Cir. 2009). The court found that the requested discovery raised the possibility of “arguable first amendment infringement,” so it continued its analysis by weighing the balance between the aggrieved plaintiffs’ interests and the anonymous defendants’ free speech rights.

The Perry balancing test places a burden on the party seeking discovery to show that the information sought is rationally related to a compelling governmental interest and that the requested discovery is the least restrictive means of obtaining the desired information.

In this case, the court found that the subpoenas were narrowly tailored to plaintiffs’ need to uncover the identities of the anonymous defendants so that plaintiffs could serve process. It also found that the “nature” of defendants’ speech weighed in favor of enforcing the subpoena. The challenged speech went “beyond criticism into what appear[ed] to be pure defamation, ostensibly unrelated to normal corporate activity.”

Music Group Macao Commercial Offshore Ltd. v. Does I-IX, 2015 WL 75073 (N.D. Cal., January 6, 2015).
