Yahoo not liable for blocking marketing email

Section 230 of the Communications Decency Act (47 U.S.C. 230) shields Yahoo’s spam filtering efforts

Holomaxx v. Yahoo, 2011 WL 865794 (N.D.Cal. March 11, 2011)

Plaintiff provides email marketing services for its clients. It sends out millions of emails a day, many of them to recipients with Yahoo email addresses. Yahoo used its spam filtering technology to block many of the messages plaintiff was trying to send to Yahoo account users. So plaintiff sued Yahoo, alleging various causes of action such as intentional interference with prospective business advantage.

Yahoo moved to dismiss, arguing, among other things, that it was immune from liability under Section 230(c)(2) of the Communications Decency Act. The court granted the motion to dismiss.

Section 230(c)(2) provides, in relevant part, that “[n]o provider or user of an interactive computer service shall be held liable on account of … any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”

Plaintiff argued that immunity should not apply here because Yahoo acted in bad faith by using “faulty filtering technology and techniques,” motivated “by profit derived from blocking both good and bad e-mails.” But the court found no factual basis to support plaintiff’s allegations that Yahoo used “cheap and ineffective technologies to avoid the expense of appropriately tracking and eliminating only spam email.”

The court rejected another of plaintiff’s arguments against applying Section 230, namely, that Yahoo should not be afforded blanket immunity for blocking legitimate business emails. Looking to the cases of Goddard v. Google and National Numismatic Certification v. eBay, plaintiff argued that the court should apply the canon of statutory construction known as ejusdem generis to find that legitimate business email should not be treated the same as the more nefarious types of content enumerated in Section 230(c)(2) (content that is, for example, obscene, lewd, lascivious, filthy, excessively violent, or harassing).

On this point the court looked to the sheer volume of the purported spam to conclude that Yahoo was within Section 230’s protection in blocking the messages — plaintiff acknowledged that it sent approximately six million emails per day through Yahoo’s servers and that at least 0.1% of those emails either were sent to invalid addresses or resulted in user opt-out. On an annual basis, that amounted to more than two million invalid or unwanted emails.
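The court’s arithmetic holds up. As a quick sanity check (this calculation is illustrative; the figures, but not the code, come from the opinion):

```python
# Back-of-the-envelope check of the volume figures the Holomaxx court relied on.
emails_per_day = 6_000_000  # plaintiff's acknowledged daily volume through Yahoo

# At least 0.1% (1 in 1,000) were sent to invalid addresses or drew an opt-out.
bad_per_day = emails_per_day // 1000   # 6,000 per day
bad_per_year = bad_per_day * 365       # 2,190,000 per year

print(f"{bad_per_year:,} invalid or unwanted emails per year")
# → 2,190,000 invalid or unwanted emails per year
```

That result comfortably clears the “more than two million” annual figure the court cited.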

Section 230 shields Google from liability for anonymous defamation

Black v. Google Inc., 2010 WL 3746474 (N.D.Cal. September 20, 2010)

Back in August, the U.S. District Court for the Northern District of California dismissed a lawsuit against Google brought by two pro se plaintiffs, holding that the action was barred under the immunity provisions of 47 USC 230. That section says that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Plaintiffs had complained about a comment on Google (probably a review) disparaging their roofing business.

Plaintiffs filed an “objection” to the dismissal, which the court read as a motion to alter or amend under Fed. R. Civ. P. 59. The court denied plaintiffs’ motion.

In their “objection,” plaintiffs claimed — apparently without much support — that Congress did not intend Section 230 to apply in situations involving anonymous speech. The court did not buy this argument.

The court looked to the Ninth Circuit case of Carafano v. Metrosplash as an example of a website operator protected under Section 230 from liability for anonymous content: “To be sure, the website [in Carafano] provided neutral tools, which the anonymous dastard used to publish the libel, but the website did absolutely nothing to encourage the posting of defamatory content.” As in Carafano, Google was a passive conduit and could not be liable for failing to detect and remove the allegedly defamatory content.

No CDA immunity for letting co-defendant use computer to post material

Capital Corp. Merchant Banking, Inc. v. Corporate Colocation, Inc., No. 07-1626, 2008 WL 4058014 (M.D.Fla., August 27, 2008)

Professor Goldman points us to a recent decision in a case where the plaintiff alleged that one of the individual defendants “allowed [a co-defendant] to use ‘a computer registered in her name’ to make . . . defamatory statements.” The defendants filed a 12(b)(6) motion to dismiss, arguing that the Communications Decency Act (CDA) at 47 U.S.C. 230 barred the claims. The court denied the motion.

With little analysis, the court cited to the 9th Circuit’s Roommates.com decision, holding that “[t]he CDA provides immunity for the removal of content, not the creation of the content.” While that is not an incorrect statement, it is troublesome in this context inasmuch as it tells half the story.

Yes, 47 U.S.C. 230(c) does provide protection to “Good Samaritan” operators of interactive computer services who remove offensive content. The user whose content has been removed would not have a cause of action against the operator who took down the content in good faith. See 47 U.S.C. 230(c)(2).

But 47 U.S.C. 230(c)(1) provides that no provider of an interactive computer service shall be treated as a publisher or speaker of any information provided by a third party. Courts have usually held that when a defamation plaintiff brings a claim against the operator of the computer service used to post defamatory content (who was not responsible for creating the content), such a claim is barred, as the plaintiff would not be able to satisfy the publication element of a defamation prima facie case.

Maybe in this situation the court found that the defendant who let a co-defendant use her computer did not meet the definition of a service provider as contemplated by the CDA. But it would have been nice to see that analysis written down, rather than having to merely surmise or speculate.

Communications Decency Act shields web host as “distributor” of defamatory content

Plaintiff Austin, the owner of a travel-related business, accused the owner of one of his business’s competitors of posting defamatory content on the competitor’s website. Austin filed a defamation lawsuit against the company that hosted the website, claiming that it was liable for refusing to take down the alleged defamatory statements.

The web hosting company successfully moved for summary judgment, citing 47 U.S.C. §230, a portion of the Communications Decency Act of 1996, which provides, in relevant part, that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Austin sought review of the trial court’s decision.

Austin argued that the plain language of §230 provides a shield only for liability that would result from being a publisher of defamatory material. Because the web hosting company was a distributor of defamatory content, Austin argued, §230 should not apply, and thus the lower court erred in granting summary judgment on that basis.

The appellate court rejected Austin’s argument, relying heavily on the decision of Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir., 1997). As in Zeran, the court found that Congress had spoken directly to the issue by “employing the legally significant term ‘publisher,’ which has traditionally encompassed distributors and original publishers alike.” The court held that because distributor liability is a subset of publisher liability, it is therefore specifically foreclosed by § 230.

Austin v. CrystalTech Web Hosting, 125 P.3d 389, 2005 WL 3489249 (Ariz. App. Div. 1, December 22, 2005).
