Alex Jones gets partial win in Connecticut lawsuit over unfair trade practices 

Erica Lafferty, William Sherlach, and other family members of victims of the Sandy Hook Elementary School shooting sued Alex Jones, Free Speech Systems, LLC, and related entities. Plaintiffs sought damages for defamation, invasion of privacy, emotional distress, and violations of the Connecticut Unfair Trade Practices Act (CUTPA). Plaintiffs argued that defendants’ conspiracy theories about the Sandy Hook shooting violated CUTPA because defendants spread lies to attract audiences and sell products such as dietary supplements and survival gear. Plaintiffs asked the court to hold defendants liable for using false statements as a deceptive trade practice tied to their business interests.

The trial court ruling

The trial court sided with plaintiffs and allowed the CUTPA claim to proceed. It found that defendants’ false statements about the shooting being a hoax were tied to the sale of products advertised on their media platforms. According to the lower court, defendants’ spreading of falsehoods to increase product sales qualified as an unfair trade practice under CUTPA. The jury awarded plaintiffs substantial damages, including compensation for the CUTPA violation.

The appellate court reversal

Defendants appealed, and the appellate court reversed the trial court’s ruling on the CUTPA claim. The appellate court concluded that defendants’ defamatory statements were not directly tied to the sale of goods or services in a way that CUTPA covers. While defendants monetized their platforms, the court reasoned that the alleged lies about Sandy Hook were not themselves commercial conduct. The court ruled that the connection between the false statements and product sales was too weak to support a CUTPA violation. As a result, the appellate court directed the trial court to adjust the judgment by removing the damages associated with the CUTPA claim.

Three Reasons Why This Case Matters:

  • It’s Sensational: Anything involving Alex Jones and the Sandy Hook massacre is attention-getting.
  • Protects Defamation Framework: By separating defamation from trade practices, the court preserved traditional tort remedies for harmful speech without expanding CUTPA.
  • Addresses Modern Media Monetization: The case underscores how courts assess financial gain from speech in an era of monetized platforms.

Lafferty v. Jones, — A.3d —, 2024 WL 5036021 (Conn. App. Ct. December 10, 2024)

K-Pop companies seek U.S. court’s help to unmask anonymous YouTubers

Three South Korean entertainment companies turned to a U.S. court to assist in identifying anonymous YouTube users accused of posting defamatory content. The companies sought permission to issue a subpoena under 28 U.S.C. § 1782, a law that allows U.S. courts to facilitate evidence collection for foreign legal proceedings.

Applicants alleged that the YouTube channels in question posted false claims about the K-pop groups they manage, including accusations of plagiarism and deliberate masking of poor vocal performances. Applicants – who had already initiated lawsuits in South Korea – needed the subpoena to obtain identifying information from Google, the parent company of YouTube, to pursue these claims further. Google did not oppose the request but reserved the right to challenge the subpoena if served.

The court ruled in favor of applicants, granting the subpoena. It determined that the statutory requirements under § 1782 were met: Google operates within the court’s jurisdiction, the discovery was intended for use in South Korean legal proceedings, and applicants qualified as interested persons. The court also weighed discretionary factors, such as the non-involvement of Google in the South Korean lawsuits and the relevance of the requested information, finding them supportive of applicants’ request.

The court emphasized that the subpoena was narrowly tailored to identify the operators of the YouTube channels while avoiding unnecessary intrusion into unrelated data. However, it also sought to ensure procedural fairness, requiring Google to notify the affected individuals, who would then have 30 days to contest the subpoena.

Three Reasons Why This Case Matters:

  • International Legal Cooperation: The case illustrates how U.S. courts can assist in resolving international disputes involving anonymous online actors.
  • Accountability for Online Speech: It highlights the balance between free expression and accountability for potentially harmful content on digital platforms.
  • Corporate Reputation Management: The decision reflects how businesses can use legal avenues to protect their reputation across jurisdictions.

In re Ex Parte Application of HYBE Co., Ltd., Belift Lab Inc., and Source Music Co., Ltd., 2024 WL 4906495 (N.D. Cal. Nov. 27, 2024).

Section 230 protected President Trump from defamation liability

Plaintiff sued the Trump campaign, some of the President’s advisors, and several conservative media outlets, asserting claims for defamation. Plaintiff – an employee of voting systems maker Dominion – claimed defendants slandered him by asserting that he had said he was going to make sure Trump would not win the 2020 election.

The Trump campaign had argued that two retweets – one by Donald Trump and another by his son Eric – could not form the basis for liability because Section 230 shielded the two from such claims. The lower court rejected the Section 230 argument. But on review, the Colorado Court of Appeals held that Section 230 immunity applied to these retweets.

Section 230 shields users of interactive computer services from liability arising from information provided by third parties. The facts of the case showed that both President Trump and Eric Trump simply retweeted a Gateway Pundit article and a One America Network article without adding any new defamatory content.

The court specifically rejected plaintiff’s argument that Section 230 immunity should not apply because the Trump defendants knew the retweeted information was defamatory. The court looked to the broader consensus of courts holding that no such knowledge exception is woven into Section 230 immunity.

The case supports the proposition that defendants could repost verbatim content that someone else generated – even with knowledge that the content is defamatory – and not face liability.

Coomer v. Donald J. Trump for President, Inc., — P.3d —, 2024 WL 1560462 (Colo. Ct. App. April 11, 2024)

Second Circuit rules in favor of Barstool Sports in high-profile online defamation case

The Second Circuit Court of Appeals has ruled in favor of Barstool Sports and certain of its employees in the longstanding defamation case brought by Michael Rapaport. The actor and comedian and his company, Michael David Productions Inc., had appealed a lower court decision granting summary judgment to Barstool and several of its employees, including founder David Portnoy.

Barstool Sports, a media and comedy brand established in 2004, is known for its unfiltered content across various platforms. Michael Rapaport, a prominent figure in entertainment, is similarly recognized for his candid commentary on social and political issues. The partnership between Rapaport and Barstool Sports began in 2017 but soon deteriorated, leading to a public and messy feud.

The Dispute

The conflict escalated when Rapaport had a disagreement with Barstool personality Adam Smith. This led to a series of derogatory exchanges on social media, ultimately resulting in Rapaport’s dismissal from Barstool. Portnoy publicly announced the split, citing Rapaport’s negative comments about Barstool’s fanbase. Following this, both parties continued to engage in a bitter exchange of insults online.

Lower Court Proceedings

Rapaport filed a lawsuit against Barstool, alleging defamation, among other claims. The defamation claim was based on multiple comments Barstool personalities made on various platforms. The district court, however, ruled in favor of Barstool, leading to Rapaport’s appeal.

The Appellate Court’s Decision

The appellate court reviewed the criteria under New York law for establishing defamation. The court differentiated between statements of fact and expressions of opinion, with the latter being protected and not actionable as defamation. The analysis focused on the context in which the statements were made, considering the nature of the language used and the broader setting of the dispute.

The court found that the statements made by Barstool, including accusations of racism, fraud, and other personal attacks, were part of a hyperbolic and vulgar feud, and were thus likely to be perceived as opinions rather than factual assertions. Moreover, the court noted that many statements were made on platforms where opinionated content is expected, further undermining the claim that they conveyed factual information about Rapaport.

Conclusion

The appellate court affirmed the district court’s judgment, emphasizing that the context and nature of the statements were key in determining their status as non-actionable opinions. The decision underlines the complexities of defamation claims in the digital era, where the line between fact and opinion can be blurred by the nature of the platform and the style of communication used.

This case serves as a reminder of the challenges in navigating defamation in the age of social media, where public figures often engage in heated exchanges that can have legal implications. The ruling reinforces the importance of context in evaluating such claims, setting a precedent for future defamation cases in the digital landscape.

Rapaport v. Barstool Sports Inc., 2024 WL 88636 (2d Cir. January 9, 2024)

Can a person be liable for retweeting a defamatory tweet?

Under traditional principles of defamation law, one can be liable for repeating a defamatory statement to others. Does the same principle apply, however, on social media such as Twitter, where one can easily repeat the words of others via a retweet?

Hacking, tweet, retweet, lawsuit

A high school student hacked the server hosting the local middle school’s website, and modified plaintiff’s web page to make it appear she was seeking inappropriate relationships. Another student tweeted a picture of the modified web page, and several people retweeted that picture.

The teacher sued the retweeters for defamation and reckless infliction of emotional distress. The court dismissed the case, holding that 47 USC §230 immunized defendants from liability as “users” of an interactive computer service. Plaintiff sought review with the New Hampshire Supreme Court. On appeal, the court affirmed the dismissal.

Who is a “user” under Section 230?

Section 230 provides, in relevant part, that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. Importantly, the statute does not define the word “user”. The lower court held that defendant retweeters fit into the category of “user” under the statute and therefore could not be liable for their retweeting, because to impose such liability would require treating them as the publisher or speaker of information provided by another.

Looking primarily at the plain language of the statute, and guided by the 2006 California case of Barrett v. Rosenthal, the state supreme court found no basis in plaintiff’s arguments that defendants were not “users” under the statute. Plaintiff had argued that “user” should be interpreted to mean libraries, colleges, computer coffee shops and others who, “at the beginning of the internet,” were the primary access points for people. She also argued that because Section 230 changed common law defamation, the statute must speak directly to immunizing individual users.

The court held that it was “evident” that Section 230 abrogated the common law of defamation as applied to individual users. “That individual users are immunized from claims of defamation for retweeting content they did not create is evident from the statutory language.”

Banaian v. Bascom, — A.3d —, 2022 WL 1482521 (N.H. May 11, 2022)

Is it defamation to accuse someone of sending a bogus DMCA takedown notice?

Esports aren’t only about 21st century video games. Apparently there is a relatively robust community of Tecmo Bowl enthusiasts who – though the game is three decades old – get together to compete in tournaments. A couple of members of that community got into it with one another online, and the spat spawned some fierce litigation. That scuffle raised the question of whether accusing someone of sending a bogus DMCA takedown notice is defamatory.

The online scuffle

Plaintiff was upset about posts defendant made in the forum of a Tecmo Bowl tournament website. One of plaintiff’s claims was that defendant had wrongfully accused plaintiff of sending bogus DMCA takedown notices to Facebook concerning a page related to a previous Tecmo Bowl tournament.

Are claims of bogus DMCA takedown notices defamatory?

So plaintiff sued defendant in Texas state court for defamation, and lost. He believed that he had established a defamation claim, since defendant had – in plaintiff’s view – accused plaintiff of violating the law by abusing the DMCA process. So plaintiff sought review with the Court of Appeals of Texas. But that higher court agreed with the lower court. It was proper to dismiss the defamation case.

The court evaluated whether an objectively reasonable reader of the forum posts would draw the implication that plaintiff had committed a crime. Specifically, plaintiff had asserted that defendant accused plaintiff of committing perjury, since DMCA takedown notices have to be sworn to. See 17 U.S.C. §512(c)(3)(A)(vi).

But the court did not agree with plaintiff’s theory. It found “that the general public, or more accurately the reasonable reader, is not likely aware of what a ‘DMCA claim’ [is] or what the acronym DMCA even means.” So in this court’s view, and on these facts, accusing someone of sending a DMCA takedown notice that was bogus was not defamatory.

Hawkins v. Knobbe, 2020 WL 7693111 (Tex. Ct. App. December 28, 2020)

Section 230 immunity protected Twitter from claims it aided and abetted defamation

Twitter enjoyed Section 230 immunity from claims that it aided and abetted defamation, because plaintiffs’ allegations on that point did not transform Twitter into a party that created or developed content.

An anonymous Twitter user posted some tweets that plaintiffs thought were defamatory. So plaintiffs sued Twitter for defamation after Twitter refused to take the tweets down. Twitter moved to dismiss the lawsuit. It argued that the Communications Decency Act (CDA) at 47 U.S.C. §230 barred the claim. The court agreed that Section 230 provided immunity to Twitter, and granted the motion to dismiss.

The court applied the Second Circuit’s test for Section 230 immunity as set out in La Liberte v. Reid, 966 F.3d 79 (2d Cir. 2020). Under this test, which parses Section 230’s language, plaintiffs’ claims failed because:

  • (1) Twitter was a provider of an interactive computer service,
  • (2) the claims were based on information provided by another information content provider, and
  • (3) the claims treated Twitter as the publisher or speaker of that information.

Twitter is a provider of an interactive computer service

The CDA defines an “interactive computer service” as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server.” 47 U.S.C. § 230(f)(2). The court found that Twitter is an online platform that allows multiple users to access and share the content hosted on its servers. As such, it is an interactive computer service for purposes of the CDA.

Plaintiffs’ claims were based on information provided by another information content provider

The court also found that the claims against Twitter were based on information provided by another information content provider. The CDA defines an “information content provider” as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.” 47 U.S.C. § 230(f)(3). In this case, the court found that plaintiffs’ claims were based on information created or developed by another information content provider – the unknown Twitter user who posted the allegedly defamatory content. Plaintiffs did not allege that Twitter played any role in the “creation or development” of the challenged tweets.

The claims treated Twitter as the publisher or speaker of the allegedly defamatory information

The court gave careful analysis to this third prong of the test. Plaintiffs alleged that Twitter had “allowed and helped” the unknown Twitter user to defame plaintiffs by hosting the user’s tweets on its platform, or by refusing to remove those tweets when plaintiffs reported them. The court found that either theory would amount to holding Twitter liable as the “publisher or speaker” of “information provided by another information content provider.” The court observed that making information public and distributing it to interested parties are quintessential acts of publishing. Plaintiffs’ theory of liability would “eviscerate” Section 230 protection because it would hold Twitter liable simply for organizing and displaying content exclusively provided by third parties.

Similarly, the court concluded that holding Twitter liable for failing to remove the tweets plaintiffs found objectionable would also hold Twitter liable based on its role as a publisher of those tweets, because deciding whether or not to remove content falls squarely within the exercise of a publisher’s traditional role and is therefore subject to the CDA’s broad immunity.

The court found that plaintiffs’ suggestion that Twitter aided and abetted defamation by arranging and displaying others’ content on its platform failed to overcome Twitter’s immunity under the CDA. In the court’s view, such a theory would be tantamount to holding Twitter responsible as the “developer” or “creator” of that content. But to impose liability on Twitter as a developer or creator of third-party content – rather than as a publisher of it – plaintiffs would have needed to show that Twitter directly and materially contributed to what made the content itself unlawful.

Plaintiffs in this case did not allege that Twitter contributed to the defamatory content of the tweets at issue, and thus pled no basis upon which Twitter could be held liable as the creator or developer of those tweets. Accordingly, plaintiffs’ defamation claims against Twitter also satisfied the final requirement for CDA immunity: the claims sought to hold Twitter, an interactive computer service, liable as the publisher of information provided by another information content provider. Ultimately, Twitter had Section 230 immunity from the claim that it aided and abetted defamation.

Brikman v. Twitter, Inc., 2020 WL 5594637 (E.D.N.Y., September 17, 2020)

Restraining order entered against website that encouraged contacting children of plaintiff’s employees

Plaintiff sued defendant (who was an unhappy customer of plaintiff) under the Lanham Act (for trademark infringement) and for defamation. Defendant had registered a domain name using plaintiff’s company name and had set up a website that, among other things, he used to impersonate plaintiff’s employees and provide information about employees’ family members, some of whom were minors.

Plaintiff moved for a temporary restraining order and the court granted the motion.

The Website

The website was structured and designed in a way that made it appear as though it was affiliated with plaintiff. For example, it included a copyright notice identifying plaintiff as the owner. It also included allegedly false statements about plaintiff, such as the following quotation, attributed to plaintiff’s CEO:

Well of course we engage in bad faith tactics like delaying and denying our policy holders [sic] valid claims. How do you think me [sic], my key executive officers, and my board members stay so damn rich. [sic]

The court found that plaintiff had shown a likelihood of success on the merits of its claims.

Lanham Act Claim

It found that defendant used plaintiff’s marks for the purpose of confusing the public by creating a website that looked as though it was a part of plaintiff’s business operations. This was evidenced by, for example, the inclusion of a copyright notice on the website.

Defamation

On the defamation claim, the court found that the nature of the statements about plaintiff, plaintiff’s assertion that they were false, and the allegation that the statements were posted on the internet sufficed to satisfy the first two elements of a defamation claim, namely, that they were false and defamatory statements pertaining to the plaintiff and were unprivileged publications to a third party. The allegations in the complaint were also sufficient to indicate that defendant “negligently disregarded the falsity of the statements.”

Furthermore, the statements on the website concerned the way that plaintiff processed its insurance claims, which related to the business of the company and the profession of plaintiff’s employees who handled the processing of claims. Therefore, the final element was also satisfied.

First Amendment Limitations

The limitations the court placed on the TRO are noteworthy. To the extent plaintiff sought injunctive relief directed at defendant’s speech encouraging others to contact the company and its employees with complaints about the business, whether at the workplace or at home, or directed at public “ad hominem” comments, the court would not grant the emergency relief sought.

The court also would not prohibit defendant from publishing allegations that plaintiff had engaged in fraudulent or improper business practices, or from publishing the personally identifying information of plaintiff’s employees, officers, agents, and directors. Plaintiff’s submission failed to demonstrate to the court’s satisfaction how such injunctive relief would not unlawfully impair defendant’s First Amendment rights.

The court did, however, enjoin defendant from encouraging others to contact the children and other family members of employees about plaintiff’s business practices, because contact of that nature had the potential to cause irreparable emotional harm to those family members, who have no employment or professional relationship with plaintiff.

Symetra Life Ins. Co. v. Emerson, 2018 WL 6338723 (D. Me. Dec. 4, 2018)

Section 230 protected Google in lawsuit over blog post

Defendant used Google’s Blogger service to write a post – about plaintiffs’ business practices – that plaintiffs found objectionable. So plaintiffs sued Google in federal court for defamation, tortious interference with a business relationship, and intentional infliction of emotional distress. The lower court dismissed the case on grounds that the Communications Decency Act (at 47 U.S.C. §230) immunized Google from liability for the publication of third party content.

Plaintiffs sought review with the U.S. Court of Appeals for the District of Columbia Circuit. On appeal, the court affirmed the dismissal. Applying a three-part test the court developed in Klayman v. Zuckerberg, 753 F.3d 1354 (D.C. Cir. 2014) (which in turn applied analysis from the leading case of Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997)), the court held that Section 230 entitled Google to immunity because: (1) Google was a “provider or user of an interactive computer service,” (2) the relevant blog post contained “information provided by another information content provider,” and (3) the complaint sought to hold Google liable as “the publisher or speaker” of the blog post.

The court rejected plaintiffs’ argument that in establishing and enforcing its Blogger Content Policy, Google influenced and thereby created the content it published. It held that Google’s role was strictly one of “output control” – because Google’s choice was limited to a “yes” or a “no” decision whether to remove the post, its action constituted “the very essence of publishing.” Since Section 230 immunizes online defendants against complaints seeking to hold them liable as the publisher of content, the lower court properly dismissed the action.

Bennett v. Google, LLC, 882 F.3d 1163 (D.C. Cir. February 23, 2018)

Google and YouTube protected by Section 230

The case of Weerahandi v. Shelesh is a classic example of how Section 230 (a provision of the Communications Decency Act (CDA), found at 47 U.S.C. § 230) shielded online intermediaries from alleged tort liability occasioned by their users.

Background Facts

Plaintiff was a YouTuber who filed a pro se lawsuit against a number of other YouTubers, as well as Google and YouTube, asserting defamation among other claims. The allegations arose from a situation back in 2013 in which one of the individual defendants sent what plaintiff believed to be a “false and malicious” DMCA takedown notice to YouTube. One of the defendants later took the contact information plaintiff had to provide in the counter-notification and allegedly disseminated that information to others, who were alleged to have published additional defamatory YouTube videos.

Google and YouTube also got named as defendants for “failure to remove the videos” and for not taking “corrective action”. These parties moved to dismiss the complaint, claiming immunity under Section 230. The court granted the motion to dismiss.

Section 230’s Protections

Section 230 provides, in pertinent part that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1). Section 230 also provides that “[n]o cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.” 47 U.S.C. § 230(e)(3).

The CDA also “proscribes liability in situations where an interactive service provider makes decisions ‘relating to the monitoring, screening, and deletion of content from its network.’” Obado v. Magedson, 612 Fed.Appx. 90, 94–95 (3d Cir. 2015). Courts have recognized Congress conferred broad immunity upon internet companies by enacting the CDA, because the breadth of the internet precludes such companies from policing content as traditional media have. See Jones v. Dirty World Entm’t Recordings LLC, 755 F.3d 398, 407 (6th Cir. 2014); Batzel v. Smith, 333 F.3d 1018, 1026 (9th Cir. 2003); Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997); DiMeo v. Max, 433 F. Supp. 2d 523, 528 (E.D. Pa. 2006).

How Section 230 Applied Here

In this case, the court found that the CDA barred plaintiff’s claims against Google and YouTube. Both Google and YouTube were considered “interactive computer service[s].” Parker v. Google, Inc., 422 F. Supp. 2d 492, 551 (E.D. Pa. 2006). Plaintiff did not allege that Google or YouTube played any role in producing the allegedly defamatory content. Instead, plaintiff alleged both websites failed to remove the defamatory content, despite his repeated requests.

Plaintiff did not cite any authority in his opposition to Google and YouTube’s motion, and instead argued that the CDA did not bar claims for the “failure to remove the videos” or to “take corrective action.” The court held that to the contrary, the CDA expressly protected internet companies from such liability. Under the CDA, plaintiff could not assert a claim against Google or YouTube for decisions “relating to the monitoring, screening, and deletion of content from its network.” Obado, 612 Fed.Appx. at 94–95; 47 U.S.C. § 230(c)(1) (“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”). For these reasons, the court found the CDA barred plaintiff’s claims against Google and YouTube.

Weerahandi v. Shelesh, 2017 WL 4330365 (D.N.J. September 29, 2017)

About the Author: Evan Brown is a Chicago technology and intellectual property attorney. Call Evan at (630) 362-7237, send email to ebrown [at] internetcases.com, or follow him on Twitter @internetcases. Read Evan’s other blog, UDRP Tracker, for information about domain name disputes.
