Can a person be liable for retweeting a defamatory tweet?


Under traditional principles of defamation law, one can be liable for repeating a defamatory statement to others. Does the same principle apply, however, on social media such as Twitter, where one can easily repeat the words of others via a retweet?


A high school student hacked the server hosting the local middle school’s website, and modified plaintiff’s web page to make it appear she was seeking inappropriate relationships. Another student tweeted a picture of the modified web page, and several people retweeted that picture.

The teacher sued the retweeters for defamation and reckless infliction of emotional distress. The court dismissed the case, holding that 47 USC §230 immunized defendants from liability as “users” of an interactive computer service. Plaintiff sought review with the New Hampshire Supreme Court. On appeal, the court affirmed the dismissal.

Who is a “user” under Section 230?

Section 230 provides, in relevant part, that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. Importantly, the statute does not define the word “user”. The lower court held that defendant retweeters fit into the category of “user” under the statute and therefore could not be liable for their retweeting, because to impose such liability would require treating them as the publisher or speaker of information provided by another.

Looking primarily at the plain language of the statute, and guided by the 2006 California case of Barrett v. Rosenthal, the state supreme court found no basis in plaintiff’s arguments that defendants were not “users” under the statute. Plaintiff had argued that “user” should be interpreted to mean libraries, colleges, computer coffee shops and others who, “at the beginning of the internet” were primary access points for people. And she also argued that because Section 230 changed common law defamation, the statute must speak directly to immunizing individual users.

The court held that it was “evident” that Section 230 abrogated the common law of defamation as applied to individual users. “That individual users are immunized from claims of defamation for retweeting content they did not create is evident from the statutory language.”

Banaian v. Bascom, — A.3d —, 2022 WL 1482521 (N.H. May 11, 2022)


Old social media posts violated trade dress infringement injunction

The parties in the case of H.I.S.C., Inc. v. Franmar are competitors, each making garden broom products. In earlier litigation, the defendant filed a counterclaim against plaintiff for trade dress infringement, and successfully obtained an injunction against plaintiff, prohibiting plaintiff from advertising brooms designed in a certain way. Defendant asked the court to find plaintiff in contempt for, among other reasons, certain social media posts that plaintiff posted before the injunction, but that still remained after the injunction was entered. The court agreed that the continuing existence of such posts was improper and found plaintiff in contempt for having violated the injunction.

The court noted that the injunction prohibited “[a]dvertising, soliciting, marketing, selling, offering for sale or otherwise using in the United States the [applicable product trade dress] in connection with any garden broom products.” It observed that “[o]n the Internet and in social media, a post from days, weeks, months, or even years ago can still serve to advertise a product today.” The court cited Ariix, LLC v. NutriSearch Corp., 985 F.3d 1107, 1116 n.5 (9th Cir. 2021), in which that court noted that one prominent influencer receives $300,000 to $500,000 for a single Instagram post endorsing a company’s product – a sum surely covering both the post itself and an agreement to keep the post visible to consumers for a substantial period of time. Interestingly, the court found that a social media post may differ in nature from a television or radio advertisement, which has a fixed air date and time. Accordingly, the court found it inappropriate for social media posts published before the injunction to stay online.

H.I.S.C., Inc. v. Franmar Int’l Importers, Ltd., 2022 WL 104730 (S.D. Cal. January 11, 2022)


Executive order to clarify Section 230: a summary


Late yesterday, President Trump took steps to make good on his promise to regulate online platforms such as Twitter and Facebook, releasing a draft executive order to that end. Here is a summary of the key points. The draft order:

  • States that it is the policy of the U.S. to foster clear, nondiscriminatory ground rules promoting free and open debate on the Internet. It is the policy of the U.S. that the scope of Section 230 immunity should be clarified.
  • Argues that a platform becomes a “publisher or speaker” of content, and therefore not subject to Section 230 immunity, when it does not act in good faith (in accordance with Section 230(c)(2)) to restrict access to content that it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.” The executive order argues that Section 230 “does not extend to deceptive or pretextual actions restricting online content or actions inconsistent with an online platform’s terms of service.”
  • Orders the Secretary of Commerce to petition the FCC, requesting that the FCC propose regulations to clarify the conditions around a platform’s “good faith” when restricting access to or availability of content. In particular, the requested rules would examine whether the action was, among other things, deceptive, pretextual, inconsistent with the provider’s terms of service, taken without reasoned explanation, or taken without a meaningful opportunity to be heard.
  • Directs each federal executive department and agency to review its advertising and marketing spending on online platforms. Each is to provide a report in 30 days on: amount spent, which platforms supported, any viewpoint-based restrictions of the platform, assessment whether the platform is appropriate, and statutory authority available to restrict advertising on platforms not deemed appropriate.
  • States that it is the policy of the U.S. that “large social media platforms, such as Twitter and Facebook, as the functional equivalent of a traditional public forum, should not infringe on protected speech”.
  • Re-establishes the White House “Tech Bias Reporting Tool” that allows Americans to report incidents of online censorship. These complaints are to be forwarded to the DoJ and the FTC.
  • Directs the FTC to “consider” taking action against entities covered by Section 230 who restrict speech in ways that do not align with those entities’ public representations about those practices.
  • Directs the FTC to develop a publicly available report describing complaints about activity of Twitter and other “large internet platforms” that may violate the law in ways that implicate the policy that these platforms are public fora and should not infringe on protected speech.
  • Establishes a working group with states’ attorneys general regarding enforcement of state statutes prohibiting online platforms from engaging in unfair and deceptive acts and practices. 
  • This working group is also to collect publicly available information for the creation and monitoring of user watch lists, based on their interactions with content and other users (likes, follows, time spent). This working group is also to monitor users based on their activity “off the platform”. (It is not clear whether that means “off the internet” or “on other online places”.)

Influencer agreements: what needs to be in them

If you are a social media influencer, or are a brand looking to engage an influencer, you may need to enter into an influencer agreement. Here are five key things that should be in the contract between the influencer and the brand: 

  • Obligations 
  • Payment 
  • Content ownership 
  • Publicity rights 
  • Endorsement guidelines compliance 

Obligations under the influencer agreement.

The main thing that a brand wants from an influencer is for the influencer to say certain things about the brand’s products, in a certain way, and at certain times. What kind of content? Photos? Video? Which platforms? What hashtags? When? How many posts? The agreement should spell all these things out.

Payment.

Influencers are compensated in a number of ways. In addition to receiving free products, they may be paid a flat fee upfront or from time to time. It is also common to see a revenue share arrangement: the influencer gets a certain percentage based on sales of the products she is endorsing. These sales may be tracked by a promo code. The contract should identify all these amounts and percentages, and the timing for payment.
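By way of illustration only, a promo-code revenue share is simple arithmetic: sum the sales attributed to the influencer’s code and apply the agreed percentage, plus any flat fee. The sketch below uses invented names and figures (the `influencer_payout` helper, the "JANE10" code, the 15% rate); it is not drawn from any actual agreement.

```python
# Hypothetical sketch of a promo-code revenue share calculation.
# All names, codes, rates, and amounts are invented for illustration.

def influencer_payout(sales, promo_code, rate, flat_fee=0.0):
    """Sum the sales attributed to the influencer's promo code and
    apply the agreed revenue-share rate, plus any flat fee."""
    attributed = sum(s["amount"] for s in sales if s["code"] == promo_code)
    return flat_fee + attributed * rate

sales = [
    {"code": "JANE10", "amount": 120.00},
    {"code": "OTHER",  "amount": 80.00},
    {"code": "JANE10", "amount": 45.50},
]

# 15% share of the $165.50 in attributed sales, plus a $500 flat fee
print(influencer_payout(sales, "JANE10", 0.15, flat_fee=500.00))
```

A real agreement would also address when sales are counted (order date vs. payment date), returns and chargebacks, and how the promo-code data is reported to the influencer.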

So what about content ownership? 

The main work of an influencer is to generate content. This could be pictures posted to Instagram, tweets, or video posted to her story. All that content is covered by copyright. Unless the contract says otherwise, the influencer will own the copyright. If the brand wants to do more with that content outside of social media, that needs to be addressed in the influencer agreement.

And then there are rights of publicity. 

Individuals have the right to determine how their image and name are used for commercial purposes. If the brand is going to feature the influencer on the brand’s own platform, then there needs to be language that specifies the limits on that use. That’s key to an influencer who wants to control her personal brand and reputation. 

Finally, endorsement guidelines and the influencer agreement. 

The federal government wants to make sure the consuming public gets clear information about products, so there are guidelines – most notably the FTC’s endorsement guides – that influencers have to follow. You have to know what these guidelines are to stay out of trouble. And the contract should address what happens if these guidelines aren’t followed.
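To illustrate the kind of compliance obligation a contract might reference, here is a deliberately naive sketch that merely scans a post for a common disclosure hashtag such as "#ad". The `has_disclosure` helper and the tag list are invented for this example; actual compliance with the endorsement guidelines turns on whether the disclosure is clear and conspicuous, not on the mere presence of a hashtag.

```python
# Hypothetical sketch: flag sponsored posts that lack a common
# disclosure hashtag. The helper name and tag list are invented;
# this is not a substitute for legal review of a post.

DISCLOSURE_TAGS = {"#ad", "#sponsored", "#paidpartnership"}

def has_disclosure(post_text: str) -> bool:
    """Return True if the post contains a recognized disclosure tag."""
    words = {w.lower().rstrip(".,!") for w in post_text.split()}
    return any(tag in words for tag in DISCLOSURE_TAGS)

print(has_disclosure("Loving my new garden broom! #ad"))       # True
print(has_disclosure("Loving my new garden broom! #blessed"))  # False
```

A contract clause might require the influencer to run a check like this before posting, and give the brand the right to demand correction or withhold payment if required disclosures are missing.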

See also: When is it okay to use social media to make fun of people?

About the author: Evan Brown is an attorney helping individuals and businesses with a wide variety of agreements involving social media, intellectual property and technology. Call him at (630) 362-7237 or send email to ebrown@internetcases.com. 

Police not required to publicly disclose how they monitor social media accounts in investigations

In the same week that news has broken about how Amazon is assisting police departments with facial recognition technology, here is a decision from a Pennsylvania court that held police do not have to turn over details to the public about how they monitor social media accounts in investigations.

The ACLU sought a copy under Pennsylvania’s Right-to-Know Law of the policies and procedures of the Pennsylvania State Police (PSP) for personnel when using social media monitoring software. The PSP produced a redacted copy, and after the ACLU challenged the redaction, the state’s Office of Open Records ordered the full document be provided. The PSP sought review in state court, and that court reversed the Office of Open Records order. The court found that disclosure of the record would be reasonably likely to threaten public safety or a public protection activity.

The court found in particular that disclosure would: (i) allow individuals to know when the PSP can monitor their activities using “open sources” and allow them to conceal their activities; (ii) expose the specific investigative method used; (iii) provide criminals with tactics the PSP uses when conducting undercover investigations; (iv) reveal how the PSP conducts its investigations; and (v) provide insight into how the PSP conducts an investigation and what sources and methods it would use. Additionally, the court credited the PSP’s affidavit which explained that disclosure would jeopardize the PSP’s ability to hire suitable candidates – troopers in particular – because disclosure would reveal the specific information that may be reviewed as part of a background check to determine whether candidates are suitable for employment.

Pennsylvania State Police v. American Civil Liberties Union of Pennsylvania, 2018 WL 2272597 (Commonwealth Court of Pennsylvania, May 18, 2018)


internetcases turns 10 years old today

Ten years ago today, somewhat on a whim, yet to fulfill a need I saw for discussion about the law of the internet in the “blogosphere” (a term we loved dearly back then), I launched internetcases.

What started as a one-page handwritten pamphlet that I would mimeograph in the basement of my one-bedroom apartment and then foist upon unsuspecting people on street corners has in ten years turned into a billion dollar conglomerate and network. internetcases is now translated into 7 languages daily and employs a staff of thousands to do the Lord’s work fighting Ebola and terrorism on 4 continents. Or it’s a WordPress install on some cheap GoDaddy space and I write when I can.

All seriousness aside, on this 10th anniversary, I want to sincerely thank my loyal readers and followers. Writing this blog has been the single most satisfying thing I’ve done in my professional life, and I am immensely grateful for the knowledge it has helped me develop, the opportunities for personal brand development it has given (speaking, press, media opportunities), but most of all, I’m grateful for the hundreds of people it has enabled me to connect with and get to know.

Blogging (and the web in general) has changed a lot in 10 years. And the legal issues arising from the internet continue to challenge us to stretch our thinking and amp up our powers of analysis. It’s great to have a platform on the web from which to share news and thoughts about the role that technology plays in shaping our legal rules and our culture.

Thanks all.

Court orders Twitter to identify anonymous users

Defamation plaintiffs’ need for requested information outweighed any impact on Doe defendants’ free speech right to tweet anonymously.

Plaintiff company and its CEO sued several unknown defendants who tweeted that plaintiff company encouraged domestic violence and misogyny and that the CEO visited prostitutes. The court allowed plaintiffs to serve subpoenas on Twitter to seek the identity of the unknown Twitter users. Twitter would not comply with the subpoenas unless and until the court ruled on whether the production of information would violate the users’ First Amendment rights.

The court ruled in favor of the plaintiffs and ordered Twitter to turn over identifying information about the unknown users. In reaching this decision, the court applied the Ninth Circuit analysis for unmasking anonymous internet speakers set out in Perry v. Schwarzenegger, 591 F.3d 1126 (9th Cir. 2009). The court found that the requested discovery raised the possibility of “arguable first amendment infringement,” so it continued its analysis by balancing the aggrieved plaintiffs’ interests against the anonymous defendants’ free speech rights.

The Perry balancing test places a burden on the party seeking discovery to show that the information sought is rationally related to a compelling governmental interest and that the requested discovery is the least restrictive means of obtaining the desired information.

In this case, the court found that the subpoenas were narrowly tailored to plaintiffs’ need to uncover the identities of the anonymous defendants so that plaintiffs could serve process. It also found that the “nature” of defendants’ speech weighed in favor of enforcing the subpoena. The challenged speech went “beyond criticism into what appear[ed] to be pure defamation, ostensibly unrelated to normal corporate activity.”

Music Group Macao Commercial Offshore Ltd. v. Does I-IX, 2015 WL 75073 (N.D. Cal., January 6, 2015).

Court allows class action plaintiffs to set up social media accounts to draw in other plaintiffs

Some former interns sued Gawker Media under the Fair Labor Standards Act. The court ordered the parties to meet and confer about the content and dissemination of the proposed notice to other potential class members. Plaintiffs suggested, among other things, that they establish social media accounts (Facebook, Twitter, LinkedIn) titled “Gawker Intern Lawsuit” or “Gawker Class Action”. Gawker objected.

The court permitted the establishment of the social media accounts. It rejected Gawker’s argument that, because there was no evidence that any former intern used social media, notice through those channels would be ineffective. The court found it “unrealistic” to suppose that the former interns did not maintain social media accounts.

Gawker also argued that using social media to give notice would take control of the dissemination out of the court’s hands. Since users could comment on the posted content, Gawker argued, the court would be “deprived” of its ability to oversee the message. The court likewise rejected this argument, holding that its “role [was] to ensure the fairness and accuracy of the parties’ communications with potential plaintiffs – not to be the arbiter of all discussions not involving the parties that may take place thereafter.”

Mark v. Gawker Media LLC, No. 13-4347, 2014 WL 5557489 (S.D.N.Y. November 3, 2014)

When is it okay to use social media to make fun of people?

There is news from California that discusses a Facebook page called 530 Fatties that was created to collect photos of and poke fun at obese people. It’s a rude project, and sets the context for discussing some intriguing legal and normative issues.

Apparently the site collects photos that are taken in public. One generally doesn’t have a privacy interest in being photographed while in public places. And that seems pretty straightforward if you stop and think about it — you’re in public after all. But should technology change that legal analysis? Mobile devices with good cameras connected to high speed broadband networks make creation, sharing and shaming much easier than it used to be. A population equipped with these means essentially turns all public space into a panopticon. Does that mean the individual should be given more of something-like-privacy when in public? If you think that’s crazy, consider it in light of what Justice Sotomayor wrote in her concurrence in the 2012 case of U.S. v. Jones: “I would ask whether people reasonably expect that their movements will be recorded and aggregated in a manner that enables [one] to ascertain, more or less at will, their political and religious beliefs, sexual habits, and so on.”

Apart from privacy harms, what else is at play here? For the same reasons that mobile cameras plus social media jeopardize traditional privacy assurances, the combination can magnify the emotional harms against a person. The public shaming that modern technology occasions can inflict deeper wounds because of the greater spatial and temporal reach of the medium. One can now easily distribute a photo or other content to countless individuals, and since the web means the end of forgetting, that content may be around for much longer than the typical human memory.

Against these concerns are the free speech interests of the speaking parties. In the U.S. especially, it’s hardwired into our sensibilities that each of us has great freedom to speak and otherwise express ourselves. The traditional First Amendment analysis will protect speech — even if it offends — unless there is something truly unlawful about it. For example, there is no free speech right to defame, to distribute obscene materials, or to use “fighting words.” Certain forms of harassment fall into the category of unprotected speech. How should we examine the role that technology plays in moving what would otherwise be playground-like bullying (like calling someone a fatty) to unlawful speech that can subject one to civil or even criminal liability? Is the impact that technology’s use makes even a valid issue to discuss?

Finally, we should examine the responsibility of the intermediaries here. A social media platform generally is going to be protected by the Communications Decency Act at 47 USC 230 from liability for third party content. But we should discuss the roles of the intermediary in terms other than pure legal ones. Many social media platforms are proactive in taking down otherwise lawful content that has the tendency to offend. The pervasiveness of social media underscores the power that these platforms have to shape normative values around what is appropriate behavior among individuals. This power is indeed potentially greater than any legal or governmental power to constrain the generation and distribution of content.


Tweet served as evidence of initial interest confusion in trade dress case

The maker of KIND bars sued the maker of Clif bars, alleging that the packaging of the Clif MOJO bar infringes the trade dress used for KIND bars. Plaintiff moved for a preliminary injunction, which the court denied. In its analysis, however, the court considered the relevance of a Twitter user’s impression of the products. Plaintiff submitted as evidence a tweet in which the user wrote, “I was about to pick up one of those [Clif MOJO bars] because I thought it was a Kind Bar at the vitamin shop ….” The court found that this type of initial interest confusion was actionable, and the tweet therefore supported plaintiff’s argument.

KIND LLC v. Clif Bar & Company, 2014 WL 2619817 (S.D.N.Y. June 12, 2014)

