Section 230 immunity did not protect Omegle in product liability lawsuit

When plaintiff was 11 years old, she was connected via Omegle (a “free online chat room that randomly pairs strangers from around the world for one-on-one chats”) to a man in his late thirties. Before the man was arrested some three years later, he forced plaintiff to send him pornographic videos of herself, made threats against her, and engaged in other inappropriate and unlawful conduct with plaintiff.

Plaintiff sued Omegle, alleging product liability and negligence relating to how Omegle was designed, and for failure to warn users of the site’s dangers. Omegle moved to dismiss these claims, claiming that it could not be liable because it was protected by 47 U.S.C. §230.

The court found that Section 230 did not apply because plaintiff’s claims did not seek to treat Omegle as the publisher or speaker of content. The court observed that to meet the obligation plaintiff sought to impose, Omegle would not have had to alter the content posted by its users. It would only have had to change its design and warnings.

And the court found that plaintiff’s claims did not rest on Omegle’s publication of third-party content. In the same way that Snapchat did not avoid liability on the basis of Section 230 in Lemmon v. Snap, Inc., 995 F.3d 1085 (9th Cir. 2021), Omegle’s alleged liability was based on its “own acts” – designing and operating the service in a way that connected sex offenders with minors, and failing to warn of those dangers.

A.M. v. Omegle.com, LLC, 2022 WL 2713721 (D. Oregon, July 13, 2022)

Is it unlawful to access someone else’s Google Drive content that is not password protected?

Plaintiff set up a Google Drive so that he could collect photos and other content related to a local school board controversy. He thought it was private, but it was actually configured so that anyone using the URL could access the content. After the local controversy escalated, plaintiff’s son emailed some photos to an opponent, and one of those photos contained the Google Drive’s URL. That photo made its way into the hands of defendant, who, using the URL, allegedly reviewed, downloaded, deleted, added, reorganized, renamed, and publicly disclosed contents of the Google Drive.

So plaintiff sued under the Computer Fraud and Abuse Act, 18 U.S.C. §1030 (the “CFAA”). Defendant moved to dismiss, arguing, among other things, that plaintiff had failed to adequately plead that defendant’s access to the Google Drive was without authorization.

Defendant had argued that her access using the URL could not be considered unauthorized under the CFAA, in accordance with the holding of hiQ Labs, Inc. v. LinkedIn Corp., 31 F.4th 1180 (9th Cir. 2022). In that case, the Ninth Circuit reasoned that “the prohibition on unauthorized access is properly understood to apply only to private information – information delineated as private through use of a permission requirement of some sort.” Thus, for a website to fall under CFAA protections, it must have erected “limitations on access.” And if “anyone with a browser” could access the website, it had no limitations on access.

In this case, defendant merely used her web browser and the URL she obtained to access plaintiff’s Google Drive. The relevant portion of the Google Drive was not password protected. And plaintiff had – though inadvertently – enabled the setting that allowed anyone with the URL to access the drive’s contents.

But in the court’s view, the Google Drive nonetheless had limitations that made defendant’s access unauthorized. The court distinguished the situation from one in which just “anyone with a web browser” might access the content, for example, via a web search. One needed to enter a 68-character URL to access the content. And the content was not indexed by any search engine. So the Google Drive was not “per se” public. And defendant’s access – as plaintiff had pled it – was not authorized.
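The court’s point that a long, unindexed URL works as a de facto “limitation on access” can be made concrete with a little arithmetic. The figures below are illustrative assumptions, not findings from the case – we simply assume the URL’s random segment is 33 characters drawn from a 64-character alphabet:

```python
import math

# Back-of-the-envelope sketch of why a long random URL acts as a de facto
# access limitation. These numbers are illustrative assumptions, not facts
# from the opinion: a 33-character random segment drawn from a 64-character
# alphabet (letters, digits, "-" and "_").
ALPHABET_SIZE = 64
SEGMENT_LENGTH = 33

keyspace = ALPHABET_SIZE ** SEGMENT_LENGTH
entropy_bits = SEGMENT_LENGTH * math.log2(ALPHABET_SIZE)

# Even guessing one billion URLs per second, the expected time to stumble
# on one particular valid link is astronomical:
guesses_per_second = 1e9
expected_years = (keyspace / 2) / guesses_per_second / (60 * 60 * 24 * 365)

print(f"possible segments: {keyspace:.2e}")
print(f"entropy: {entropy_bits:.0f} bits")
print(f"expected brute-force time: {expected_years:.2e} years")
```

On these assumptions the link carries roughly 198 bits of entropy – far beyond anything guessable – which is the practical sense in which such a URL is not accessible to “anyone with a web browser.”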

Greenburg v. Wray, 2022 WL 2176499 (D. Ariz., June 16, 2022)

Company president may be liable for vicarious copyright infringement

Plaintiff sued a company and its president for copyright infringement, over some photos that the company published online. The individual defendant moved to dismiss the claim against him, arguing that the complaint (1) did not plead any facts concerning action that he took, (2) did not try to pierce the company’s corporate veil, and (3) contained no facts to establish that the company is the alter ego of the individual defendant. Plaintiff conceded it was neither pursuing an alter-ego theory nor seeking to pierce the corporate veil. Instead, plaintiff argued that the individual defendant was vicariously liable for the company’s infringement. The court denied the motion to dismiss.

The court looked first to Metro-Goldwyn-Mayer Studios Inc. v. Grokster, Ltd., 545 U.S. 913 (2005), which provides that one infringes vicariously by profiting from direct infringement while declining to exercise a right to stop or limit it. But then it cited to later Tenth Circuit cases (e.g., Diversey v. Schmidly, 738 F.3d 1196 (10th Cir. 2013)) which state the test for vicarious liability a bit differently. Under Diversey, “[v]icarious liability attaches when the defendant ‘has the right and ability to supervise the infringing activity’ and ‘has a direct financial interest in such activities.’” There is no mention of declining to exercise the right to stop or limit the infringement under this test, as there is in Grokster.

The court found that plaintiff’s vicarious liability claims against the individual defendant survived because the complaint alleged that defendant was the owner and president of the company, had the ability to supervise and control content on the website, and received a financial benefit from the operation of the website. It rejected the individual defendant’s argument that the claim should fail because there were no allegations that he declined to exercise the right to stop or limit the infringement.

Great Bowery v. Best Little Sites, 2022 WL 2074253 (D. Utah June 9, 2022)

Can a person be liable for retweeting a defamatory tweet?

Under traditional principles of defamation law, one can be liable for repeating a defamatory statement to others. Does the same principle apply, however, on social media such as Twitter, where one can easily repeat the words of others via a retweet?

Hacking, tweet, retweet, lawsuit

A high school student hacked the server hosting the local middle school’s website and modified a teacher’s web page to make it appear she was seeking inappropriate relationships. Another student tweeted a picture of the modified web page, and several people retweeted that picture.

The teacher sued the retweeters for defamation and reckless infliction of emotional distress. The court dismissed the case, holding that 47 USC §230 immunized defendants from liability as “users” of an interactive computer service. Plaintiff sought review with the New Hampshire Supreme Court. On appeal, the court affirmed the dismissal.

Who is a “user” under Section 230?

Section 230 provides, in relevant part, that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. Importantly, the statute does not define the word “user”. The lower court held that defendant retweeters fit into the category of “user” under the statute and therefore could not be liable for their retweeting, because to impose such liability would require treating them as the publisher or speaker of information provided by another.

Looking primarily at the plain language of the statute, and guided by the 2006 California case of Barrett v. Rosenthal, the state supreme court found no basis in plaintiff’s arguments that defendants were not “users” under the statute. Plaintiff had argued that “user” should be interpreted to mean libraries, colleges, computer coffee shops and others who, “at the beginning of the internet” were primary access points for people. And she also argued that because Section 230 changed common law defamation, the statute must speak directly to immunizing individual users.

The court held that it was “evident” that Section 230 abrogated the common law of defamation as applied to individual users. “That individual users are immunized from claims of defamation for retweeting content they did not create is evident from the statutory language.”

Banaian v. Bascom, — A.3d —, 2022 WL 1482521 (N.H. May 11, 2022)

Restraining order issued against domain name seller who refused to transfer

Defendant listed a domain name for sale using DomainAgents. After a couple of rounds of negotiation, plaintiff accepted defendant’s counteroffer to sell the domain name. But when the time came to put the domain name in escrow to enable transfer, defendant backed out of the deal, saying he had changed his mind. Plaintiff sued for breach of contract and sought a temporary restraining order that would prohibit defendant from transferring the domain name.

The court granted the motion. It agreed with plaintiff that it was appropriate to determine the motion ex parte (that is, without giving notice to the defendant) because the defendant could transfer the domain name in the meantime, thereby depriving plaintiff of the ability to procure an irreplaceable asset.

It found plaintiff would likely succeed on the merits of the breach of contract claim, because plaintiff had shown that a valid contract likely existed, that plaintiff was willing to perform its end of the bargain, that defendant had breached by refusing to go through with the transaction, and that plaintiff had been damaged due to the loss of the ability to procure the domain name from defendant.

The court further found a likelihood of irreparable harm to plaintiff, in that defendant’s communicated belief that he was not bound by the purchase agreement indicated he would sell the domain name to another interested party. If that were to happen, plaintiff would have no recourse against that purchaser, who was not in privity of contract with plaintiff.

Moreover, the court found the balance of equities favored plaintiff. The temporary restraining order would only be in place until a further hearing on injunctive relief could be had, and defendant would not otherwise be restricted from using the domain name in the meantime.

Finally, the court held that the public interest favored granting injunctive relief. The public interest strongly favors enforcing contracts.

Jump Operations, LLC v. Merryman, 2022 WL 1082641 (D. Nev., April 8, 2022)

Is storing protected information on an unencrypted server a disclosure of that information?

Back in the 1990s, Congress recognized that stalkers were aided in their crimes by using victims’ driver’s license information, and states were selling driver’s license information to marketers. So Congress passed the Driver’s Privacy Protection Act, 18 U.S.C. § 2721, et seq. (the “DPPA”). This statute makes it unlawful for any person to knowingly disclose personal information from a motor vehicle record for any use other than certain uses that the statute permits.

Defendant had more than 27 million Texas driver’s license records that it stored on an external unencrypted server. In 2020, it announced that a third party had accessed the records without authorization. As expected, the class action lawyers jumped on board and sued under the DPPA.

The lower court dismissed the DPPA claim in response to defendant’s motion to dismiss for failure to state a claim. Plaintiffs sought review with the Fifth Circuit Court of Appeals. On appeal, the court affirmed the dismissal.

It held that plaintiffs failed to plausibly allege that storing the data on an unencrypted server amounted to a “disclosure”. More specifically, although plaintiffs argued that defendant had placed the information on a server that was readily accessible to the public, that assertion was nowhere in the complaint, nor was it supported by the facts alleged in the complaint.

In finding there to be no disclosure, the court observed that the storage of the data, as alleged, did not make it visible to a digital “passer-by”. This made the case different from Senne v. Village of Palatine, Ill., 695 F.3d 597 (7th Cir. 2012), in which a police officer disclosed information by putting a traffic ticket on a windshield, which any passer-by could see. The court also looked to Enslin v. Coca-Cola Co., 136 F. Supp. 3d 654 (E.D. Pa. 2015), in which that court held there to be no disclosure under the DPPA when someone stole an unencrypted laptop containing information protected under the statute.

Allen v. Vertafore, Inc., No. 21-20404 (5th Cir., March 11, 2022)

Can you sue a platform for not following DMCA takedown procedures?

Plaintiff sued YouTube after the platform took down one of plaintiff’s videos in response to a DMCA takedown notice. She claimed YouTube failed to follow the procedures in 17 U.S.C. §512 by, for example, not providing her with a physical signature of the copyright owner and otherwise not following the “exact course of the DMCA guidelines.” YouTube moved to dismiss and the court granted the motion.

It held that §512 provides safe harbor protection from copyright infringement liability to platforms. Section 512 does not, however, give rise to a standalone basis to sue a platform.

When a copyright owner sends a DMCA takedown notice to a platform, it must include substantially the following:

  • A physical or electronic signature of a person authorized to act on behalf of the owner of an exclusive right that is allegedly infringed.
  • Identification of the copyrighted work claimed to have been infringed, or, if multiple copyrighted works at a single online site are covered by a single notification, a representative list of such works at that site.
  • Identification of the material that is claimed to be infringing or to be the subject of infringing activity and that is to be removed or access to which is to be disabled, and information reasonably sufficient to permit the service provider to locate the material.
  • Information reasonably sufficient to permit the service provider to contact the complaining party, such as an address, telephone number, and, if available, an electronic mail address at which the complaining party may be contacted.
  • A statement that the complaining party has a good faith belief that use of the material in the manner complained of is not authorized by the copyright owner, its agent, or the law.
  • A statement that the information in the notification is accurate, and under penalty of perjury, that the complaining party is authorized to act on behalf of the owner of an exclusive right that is allegedly infringed.
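Mechanically, the elements above work as a checklist: a platform (or a sender) can test whether a notice “substantially” includes each one. Here is a minimal illustrative sketch – the field names are hypothetical labels for the statutory elements, not terms from §512 or from the opinion:

```python
# Hypothetical sketch: treating the Section 512(c)(3) notice elements as a
# checklist. Field names are illustrative, not statutory terms.
REQUIRED_ELEMENTS = {
    "signature",             # physical or electronic signature of authorized person
    "work_identified",       # the copyrighted work(s) claimed to be infringed
    "material_identified",   # the allegedly infringing material and where to find it
    "contact_info",          # address, phone, and/or email of the complaining party
    "good_faith_statement",  # good-faith belief the use is unauthorized
    "accuracy_statement",    # accuracy + authorization, under penalty of perjury
}

def missing_elements(notice: dict) -> set:
    """Return the checklist items that a notice (field -> value) leaves empty."""
    return REQUIRED_ELEMENTS - {field for field, value in notice.items() if value}

# A notice supplying only three of the six elements:
notice = {
    "signature": "/s/ Jane Doe",
    "work_identified": "Photo #123",
    "contact_info": "jane@example.com",
}
print(sorted(missing_elements(notice)))
# ['accuracy_statement', 'good_faith_statement', 'material_identified']
```

A notice failing this kind of check is the situation the next paragraph describes: it does not place the platform on notice of the alleged infringement.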

If the notification does not include substantially these items, then if the platform leaves the content up, the insufficient notification will not be considered to have placed the platform on notice of the alleged infringing activity. So if the copyright holder sues the platform for copyright infringement, it will have to plead other facts showing that the platform is liable for infringement. Relatedly, if the platform acts expeditiously to remove or disable access to allegedly infringing content when it gets knowledge via a valid takedown notice, it will have an affirmative defense to a claim of copyright infringement.

In this case, the court confirmed that the DMCA provides this safe harbor to the platform but does not give the person whose content has been taken down the right to sue the platform. The court did not mention – but it remains relevant – that one whose content has been taken down can seek a remedy under 17 U.S.C. §512(f) against a party who sends a takedown notice while knowingly materially misrepresenting that the content is infringing. But that is not the course plaintiff in this case took – she sued YouTube for not following the requirements that apply to copyright holders. And she lost.

Finley v. YouTube, 2022 WL 704835 (N.D. Cal., March 9, 2022)

See also: Is it defamation to accuse someone of sending a bogus DMCA takedown notice?

Biometric privacy statute does not violate First Amendment

Biometric identifiers extracted from a photo are not public in the same way the photo itself is

Plaintiffs filed a class action lawsuit against a facial recognition technology company and related individual defendants, asserting violations of the Illinois Biometric Information Privacy Act (“BIPA”). Plaintiffs alleged that defendants covertly scraped over three billion photographs of faces from the internet and then used artificial intelligence algorithms to scan the face geometry of each individual depicted to harvest the individuals’ unique biometric identifiers and corresponding biometric information. One of the defendants then created a searchable database containing this biometric information and data that enabled users of its proprietary platform to identify unknown individuals by uploading a photograph to the database. Accordingly, plaintiffs alleged that defendants collected, captured, or otherwise obtained their biometric data without notice and consent, and thereafter, sold or otherwise profited from their biometric information, all in violation of BIPA.

Unconstitutional restriction on public information?

Defendants moved to dismiss the BIPA claim on a number of grounds, including an argument that BIPA violated defendants’ First Amendment rights. More specifically, defendants maintained that the capture and analysis of faceprints from public images was protected speech, and thus, BIPA was unconstitutional because it inhibited the ability to collect and analyze public information. Plaintiffs, however, asserted that the capturing of faceprints and the action of extracting private biometric identifiers from the faceprints was unprotected conduct. The court sided with plaintiffs and rejected defendants’ argument.

The court held that defendants’ argument oversimplified plaintiffs’ allegations. Although defendants captured public photographs from the internet, they then harvested an individual’s unique biometric identifiers and information – which are not public information – without the individual’s consent. Put differently, plaintiffs asserted that the defendants’ business model was not based on the collection of public photographs from the internet, some source code, and republishing information via a search engine, but the additional conduct of harvesting nonpublic, personal biometric data. And, as plaintiffs further alleged, unlike fingerprints, facial biometrics are readily observable and present a grave and immediate danger to privacy, individual autonomy, and liberty.

An intermediate approach to biometric privacy

Accordingly, the court looked at defendants’ conduct as involving both speech and nonspeech elements. Looking to the test set out in the Supreme Court case of United States v. O’Brien, 391 U.S. 367 (1968), the court noted that when speech and nonspeech “elements are combined in the same course of conduct, a sufficiently important governmental interest in regulating the nonspeech element can justify incidental limitations on First Amendment freedoms.” The court applied the intermediate scrutiny standard set out in O’Brien, under which a regulation does not violate the First Amendment if (1) it is within the power of the government to enact, (2) it furthers an important government interest, (3) the governmental interest is unrelated to the suppression of free expression, and (4) any incidental restriction on speech is no greater than is necessary to further the government interest.

The first element was easy to dispense with because the parties did not argue that the Illinois General Assembly lacked the power to enact BIPA. On the second element, the court found that the General Assembly enacted BIPA to protect Illinois residents’ highly sensitive biometric information from unauthorized collection and disclosure. Regarding the third element, the court noted that BIPA, including its exceptions, does not restrict a particular viewpoint, nor does it target public discussion of an entire topic. And on the fourth O’Brien element, the court found BIPA to be narrowly tailored: it protects Illinois residents’ highly sensitive biometric information and data, yet allows residents to share their biometric information through its consent provision. Nor is BIPA overbroad, in the court’s view, because it does not prohibit a substantial amount of protected speech.

In re Clearview AI, Inc., Consumer Privacy Litigation, 2022 WL 444135 (N.D. Illinois, February 14, 2022)

Court protects the privacy of bitcoin address and transaction information

Defendant asked the court to redact his bitcoin address and transaction information from exhibits used at trial, which ordinarily would become part of the public record. He argued that for each transaction recorded on the blockchain, one could reverse engineer the entire transaction if one knew which individual was associated with any of a number of pieces of information, including the transaction ID and public bitcoin address. “[O]nce a particular individual is associated [with] any of this information, it is essentially akin to providing that individual’s financial account number.”
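Defendant’s concern follows from how a public ledger works: every bitcoin transaction is visible to everyone, so linking a person to even one address exposes that address’s entire transaction history. A toy sketch of the principle – the ledger entries below are invented for illustration, not real blockchain data:

```python
# Toy illustration of blockchain pseudonymity. The ledger entries are
# invented, but the principle is real: every bitcoin transaction is public,
# so anyone who connects a person to a single address can recover that
# address's full transaction history by scanning the chain.
ledger = [
    {"txid": "tx1", "sender": "addr_A", "receiver": "addr_B", "btc": 0.5},
    {"txid": "tx2", "sender": "addr_B", "receiver": "addr_C", "btc": 0.2},
    {"txid": "tx3", "sender": "addr_A", "receiver": "addr_C", "btc": 1.0},
]

def history(ledger, address):
    """Return every transaction in which the address appears, either side."""
    return [tx for tx in ledger if address in (tx["sender"], tx["receiver"])]

# Knowing only that a person controls addr_A reveals all of addr_A's activity.
print([tx["txid"] for tx in history(ledger, "addr_A")])  # ['tx1', 'tx3']
```

This is why, once an address is tied to an identity in a public court record, the effect is comparable to publishing a financial account number.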

The court allowed the redaction of the bitcoin address and bitcoin transactions. It found that defendant had demonstrated good cause to support the redactions. The court balanced the public’s right of access to court information against defendant’s interest in keeping the information confidential. It agreed with defendant’s assertion that the bitcoin information he sought to redact is akin to a financial account number or personally identifiable information.

Kleiman v. Wright, 2022 WL 390702 (S.D. Fla., February 9, 2022)

Strip club operator wins motion in domain name dispute against GoDaddy company

NameFind is a GoDaddy company that holds registrations of domain names and seeks to make money from them by placing pay-per-click ads on parked pages found at the domain names. Global Licensing owns the DEJA VU trademark, which is used in connection with strip clubs and other adult-related services. When NameFind used the domain name dejavushowgirls.com to set up a page of pay-per-click ads, Global Licensing sued, raising claims under the federal Anticybersquatting Consumer Protection Act (ACPA), 15 U.S.C. §1125(d).

Arguing that the cybersquatting claim had been insufficiently pled, NameFind moved to dismiss. The court denied the motion.

To establish a “cybersquatting” claim under the ACPA, a plaintiff must establish that: (1) it has a valid trademark entitled to protection; (2) its mark is distinctive or famous; (3) the defendant’s domain name is identical or confusingly similar to, or in the case of famous marks, dilutive of, plaintiff’s mark; and (4) defendant used, registered, or trafficked in the domain name (5) with a bad faith intent to profit. DaimlerChrysler v. The Net Inc., 388 F.3d 201 (6th Cir. 2004) (citing Ford Motor Co. v. Catalanotte, 342 F.3d 543, 546 (6th Cir. 2003)).

Identical or confusingly similar

NameFind first argued that the court should dismiss the ACPA claim because there were no “non-conclusory” allegations explaining how the content on its website could be confusingly similar to plaintiff’s entertainment services. The court found this argument unpersuasive, however, because the content of the website was not important in evaluating this element. Instead, the court was to make a direct comparison between the protected mark and the domain name itself, rather than an assessment of the context in which each is used or the content of the offending website. It found the disputed domain name and plaintiff’s mark to be identical or confusingly similar because the disputed domain name incorporated plaintiff’s mark, and there were no words or letters added to plaintiff’s mark that clearly distinguished it from plaintiff’s usage.

Bad faith intent to profit

The court likewise rejected NameFind’s second argument, which was that plaintiff had not sufficiently pled NameFind’s bad faith intent to profit. The main point of the argument was that most of plaintiff’s allegations were made “on information and belief”. (That phrase is used in lawsuits when the plaintiff does not know for sure whether a fact is true, so it hedges a bit.) The court observed that allegations made on information and belief are not per se insufficient.

In this case, the court stated that the “on information and belief” allegations should not be considered in isolation, but should be considered in the context of the entire Complaint, including the factual allegations that: (1) NameFind had no intellectual property rights in or to the DEJA VU mark; (2) the disputed domain name was essentially identical to plaintiff’s mark and did not contain defendant’s legal name; (3) plaintiff did not authorize or consent to such use; (4) the domain name was configured to display pay-per-click advertisements to visitors, which provided links to adult-related entertainment sites; (5) as such, the disputed domain name was likely to be confused with plaintiff’s legitimate online location and other domain names, and deceive the public; and, (6) defendant’s website harmed plaintiff’s reputation and the goodwill associated with its marks by causing customers to associate plaintiff with the negative qualities of defendant’s website.

Global Licensing, Inc. v. NameFind LLC, 2022 WL 274104 (E.D. Michigan, January 28, 2022)
