Online marketplace not liable for murder committed by criminal seller

The Tenth Circuit Court of Appeals upheld the dismissal of a tort case brought under Colorado law against the operator of the website Letgo. The court held that the website was not negligent and did not commit fraud in facilitating the purported transaction that resulted in the murder of a married couple.

The tragic story

At 11 PM on August 14, 2020, Mr. and Mrs. Roland met an execrable miscreant named Brown in a PETCO parking lot, intending to buy a car that Brown had listed for sale on the online marketplace Letgo. Claiming not to have the right title for the car (which turned out to be stolen), Brown led the couple to a second location under the pretense of retrieving the correct vehicle title. There Brown ambushed the Rolands with a handgun, and in the ensuing struggle both Mr. and Mrs. Roland were tragically killed, leaving behind five minor children. Brown was later convicted of two counts of first-degree felony murder.

The estate sued

The Rolands’ estate sued Letgo under Colorado law, alleging a number of claims, including negligence, fraud and negligent misrepresentation. The lower court dismissed the claims. So the estate sought review with the Tenth Circuit. On appeal, the court affirmed the dismissal.

To act or not to act

The court’s negligence analysis turned on whether plaintiffs’ claim was one of misfeasance (active conduct causing injury) or nonfeasance (a failure to take positive steps to protect others from harm). The plaintiffs contended that Letgo’s claims of collaboration with law enforcement, use of technology to block stolen goods, and a user verification system created a false sense of security – that the negligence was based on misfeasance. But the court was not persuaded, emphasizing that the representations, when viewed in context, did not constitute misfeasance or active misconduct.

Instead, the court determined that the plaintiffs’ allegations were more indicative of nonfeasance, or Letgo’s failure to act, which required a special relationship between the parties for a duty of care to be established – a condition the plaintiffs could not satisfy. And in the court’s mind, even if Letgo’s actions were misfeasance, the plaintiffs failed to adequately plead that these actions were a substantial factor in the Rolands’ deaths, as the decisions made by the Rolands to pursue the transaction, and the decision of the perpetrator to commit murder, were more significant factors in the tragic outcome than the provision of the online platform.

No fraud or negligent misrepresentation either

To establish fraud, Colorado law required plaintiffs to demonstrate a series of stringent criteria: Letgo must have made a false representation of a material fact, known to be false, with the intention that the Rolands would rely upon it, leading to damages as a result of this reliance. Negligent misrepresentation, while similar, required showing that Letgo, in the course of its business, made a careless misstatement of a material fact intended for the guidance of others in their business dealings, which the Rolands justifiably relied upon to their detriment.

Federal Rule of Civil Procedure 9(b) sets a higher bar for fraud claims, requiring plaintiffs to specify the circumstances of the alleged fraud with particularity, including the time, place, and content of the false representations, as well as the identity of those making them and the resultant consequences. This rule aims to provide defendants with fair notice of the claims against them and the factual basis for these claims. In certain cases, this standard may also extend to claims of negligent misrepresentation if they closely resemble allegations of fraud, highlighting the necessity for precise and detailed pleadings in such legal matters.

In this case, plaintiffs contended that Letgo’s assurances regarding safety and user verification —such as collaboration with law enforcement, technology to identify stolen goods, and “verified” user statuses — were misleading, constituting either fraud or negligent misrepresentation. However, the court found that plaintiffs failed to plausibly link the alleged misrepresentations to the tragic outcome, failing to provide sufficient factual content to demonstrate causation between Letgo’s actions and the Rolands’ deaths.

Roland v. Letgo, Inc., 2024 WL 372218 (10th Cir., February 1, 2024)

See also:

Exploiting blockchain software defect supports unjust enrichment claim

Most court cases involving blockchain have to do with securities regulation or some other business aspect of what the parties are doing. The case of Shin v. ICON Foundation, however, deals with the technology side of blockchain. The U.S. District Court for the Northern District of California recently issued an opinion having to do with how the law should handle a person who exploits a software flaw to quickly (and, as other members of the community claim, unfairly) generate tokens.

Exploiting software flaw to generate tokens

Mark Shin was a member of the ICON Community – a group that includes users who create and transact in the ICX cryptocurrency. The ICON Network hosts a delegated proof-of-stake blockchain protocol. The process by which delegates are selected for the network’s governance involves ICX users “staking” tokens. As an incentive to participate in the process, ICX holders receive rewards that can be redeemed for more ICX. The system does not give rewards, however, when a user “unstakes” his or her tokens.

When a new version of the ICON Network software was released, Shin discovered that he was immediately awarded one ICX token each time he would unstake a token. Exploiting this software defect, he staked and unstaked tokens until he generated new ICX valued at the time at approximately $9 million.
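The kind of accounting defect described can be illustrated with a toy sketch. This is hypothetical code written for illustration only – the class, names, and logic are invented and bear no relation to ICON’s actual implementation:

```python
# Toy illustration of a reward-accounting bug of the kind alleged in Shin.
# Entirely hypothetical; not based on ICON's actual code.

class ToyStakingLedger:
    def __init__(self, balance: int):
        self.balance = balance   # liquid tokens the user holds
        self.staked = 0          # tokens locked for governance voting

    def stake(self, amount: int) -> None:
        assert amount <= self.balance, "cannot stake more than you hold"
        self.balance -= amount
        self.staked += amount

    def unstake(self, amount: int) -> None:
        assert amount <= self.staked, "cannot unstake more than is staked"
        self.staked -= amount
        self.balance += amount
        # BUG: a reward token is minted on every unstake call. A correct
        # system would accrue rewards only for time spent staked.
        self.balance += 1

# Repeatedly cycling stake/unstake converts the defect into free tokens.
ledger = ToyStakingLedger(balance=10)
for _ in range(1000):
    ledger.stake(1)
    ledger.unstake(1)

print(ledger.balance)  # 1010 -- 1,000 tokens minted out of nothing
```

Each loop iteration is economically a no-op (stake one token, unstake it), yet the buggy `unstake` mints one extra token every time, which is the structure of the exploit the community members alleged.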

Bring in the lawyers

Other members of the community did not take kindly to Shin’s conduct, and took steps to mitigate the effect. Shin filed suit for conversion and trespass to chattels. And the members of the cryptocurrency community filed a counterclaim, asserting a number of theories against Shin, including a claim for unjust enrichment. Shin moved to dismiss the unjust enrichment claim, arguing that it failed to state a claim upon which relief could be granted. In general, unjust enrichment occurs when a person has been unjustly conferred a benefit, including through fraud or mistake. Under California law (which applied in this case), the elements of unjust enrichment are (1) receipt of a benefit, and (2) unjust retention of the benefit at the expense of another.

Moving toward trial

In this case, the court disagreed with Shin’s arguments. It held that the members of the community had sufficiently pled a claim for unjust enrichment. It’s important to note that this opinion does not mean that Shin is liable for unjust enrichment – it only means that the facts as alleged, if proven true, support a viable legal claim. In other words, the opinion confirms that the law recognizes that Shin’s alleged conduct would constitute unjust enrichment. We will have to see whether Shin is actually found liable, either at the summary judgment stage or at trial.

Examining the elements of unjust enrichment, the court found that the alleged benefit to Shin was clear, and that the community members had adequately pled that Shin unjustly retained this benefit. The allegations supported the theory that Shin materially diluted the value of the tokens held by other members of the community, and that he “arrogated value to himself from the other members.” According to the members of the community, if Shin had not engaged in the alleged conduct, the present-day value of ICX would be even higher. (It will be interesting to see how that will be proven – perhaps one more knowledgeable than this author in crypto can weigh in.)

Shin v. ICON Foundation, 2021 WL 6117508 (N.D. Cal., December 27, 2021)

Intellectual property exception to CDA 230 immunity did not apply in case against Google

Plaintiff sued Google for false advertising and violations of New Jersey’s Consumer Fraud Act over Google’s provision of AdWords services for other defendants’ website, which plaintiff claimed sold counterfeit versions of plaintiff’s products. Google moved to dismiss these two claims and the court granted the motion.

On the false advertising issue, the court held that plaintiff had failed to allege the critical element that Google was the party that made the alleged misrepresentations concerning the counterfeit products. 

As for the Consumer Fraud Act claim, the court held that Google enjoyed immunity from such a claim in accordance with the Communications Decency Act at 47 U.S.C. 230(c). 

Specifically, the court found: (1) Google’s services made Google the provider of an interactive computer service, (2) the claim sought to hold Google liable for the publishing of the offending ads, and (3) the offending ads were published by a party other than Google, namely, the purveyor of the allegedly counterfeit goods. CDA immunity applied because all three of these elements were met. 

The court rejected plaintiff’s argument that the New Jersey Consumer Fraud Act was an intellectual property statute and that therefore under Section 230(e)(2), CDA immunity did not apply. With immunity present, the court dismissed the consumer fraud claim. 

InvenTel Products, LLC v. Li, No. 19-9190, 2019 WL 5078807 (D.N.J. October 10, 2019)

About the Author: Evan Brown is a Chicago technology and intellectual property attorney. Call Evan at (630) 362-7237, send email to ebrown@internetcases.com, or follow him on Twitter @internetcases. Read Evan’s other blog, UDRP Tracker, for information about domain name disputes.

No fraud claim against VRBO over bogus listing because website terms did not guarantee prescreening

Plaintiff sued the website VRBO for fraud after he used the website to find a purported vacation rental property that he paid for and later learned to be nonexistent. He specifically claimed that the website’s “Basic Rental Guarantee” misled him into believing that VRBO pre-screened the listings that third parties post to the site. The lower court granted VRBO’s summary judgment motion. Plaintiff sought review with the First Circuit Court of Appeals. On appeal, the court affirmed summary judgment, finding the guarantee was not fraudulent.

The court found the Basic Rental Guarantee was not fraudulent for a number of reasons. The document simply established a process for obtaining a refund (of up to $1,000) that involved satisfying certain conditions (e.g., having paid using a certain method, being denied a refund by the property owner, and making a claim to VRBO within a certain time). The document gave no indication that VRBO conducted any pre-screening of listed properties; instead, it mentioned an investigation that would be conducted only if a claim of “Internet Fraud” (as VRBO defined it) was made. And VRBO’s terms and conditions expressly stated that VRBO had no duty to pre-screen content on the website, and also disclaimed liability arising from any inaccurate listings.

Finally, the court found that the guarantee did not, under a Massachusetts statute, constitute a representation or warranty about the accuracy of the listings. Among other things, the document clearly and conspicuously disclosed the nature and extent of the guarantee, its duration, and what the guarantor undertook to do.

Hiam v. Homeaway.com, 887 F.3d 542 (1st Cir., April 12, 2018)

Facebook wins against alleged advertising fraudster

Defendant set up more than 70 bogus Facebook accounts and impersonated online advertising companies (including by sending Facebook falsified bank records) to obtain an advertising credit line from Facebook. He ran more than $340,000 worth of ads for which he never paid. Facebook sued, among other things, for breach of contract, fraud, and violation of the Computer Fraud and Abuse Act (CFAA). Despite the court giving defendant several opportunities to be heard, defendant failed to answer the claims and the court entered a default.

The court found that Facebook had successfully pled a CFAA claim. After Facebook implemented technological measures to block defendant’s access, and after it sent him two cease-and-desist letters, defendant continued to intentionally access Facebook’s “computers and servers to obtain account credentials, Facebook credit lines, Facebook ads, and other information.” The court entered an injunction against defendant accessing or using any Facebook website or service in the future, and set the matter over for Facebook to prove up its $340,000 in damages. It also notified the U.S. Attorney’s Office.

Facebook, Inc. v. Grunin, 2015 WL 124781 (N.D. Cal. January 8, 2015)

Lawsuit against Yelp over how it marketed its review filters can move forward

Plaintiff restaurant owner sued Yelp under California unfair competition law, claiming that certain statements Yelp made about the filters it uses to screen out unreliable or biased user reviews were misleading and untrue. For example, plaintiff alleged that Yelp advertised that its filtering process “takes the reviews that are the most trustworthy and from the most established sources and displays them on the business page.” But, according to plaintiff, the filter did not give consumers the most trusted reviews, excluded legitimate reviews, and included reviews that were demonstrably false and biased.

Yelp filed an Anti-SLAPP motion to strike plaintiff’s complaint under California Code of Civil Procedure section 425.16, arguing that the complaint sought to interfere with Yelp’s free speech rights, and targeted speech that appeared in a public forum and was a matter of public interest. The trial court granted the motion, and plaintiff sought review with the Court of Appeal of California. On appeal, the court reversed.

It held that a motion to strike under the mechanism of California’s Anti-SLAPP statute was unavailable under section 425.17 (c), which prohibits Anti-SLAPP motions against “any cause of action brought against a person primarily engaged in the business of selling or leasing goods or services,” where certain other conditions are met, including the statement being made for purposes of promoting the speaker’s goods or services.

The appellate court disagreed with the lower court, which had found that Yelp’s statements about its filters were mere “puffery.” Instead, the court held that these statements were commercial speech promoting Yelp’s own services, disqualifying the Anti-SLAPP motion under the very language of the statute.

Demetriades v. Yelp, Inc., 2014 WL 3661491 (Cal. Ct. App. July 24, 2014)

Court rules against woman accused of fraudulent misrepresentation for creating fake internet boyfriend

Bonhomme v. St. James, — N.E.2d — (Ill. App. Ct. 2d Dist. March 10, 2011)

Perhaps the most famous legal case about someone creating a false persona online and using that to dupe someone is the sad case of Megan Meier, which resulted in the (unsuccessful) prosecution of Lori Drew. The facts of that case were hard to believe — a woman created the identity of a teenage boy from scratch by setting up a bogus MySpace profile, then engaged in sustained communications with young Megan, leading her to believe the two of them had a real relationship. After the “boy” broke that relationship off, Megan committed suicide.

Here’s a case that has not seen quite as much tragedy, but the extent and the nature of the alleged deception is just as incredible, if not more so, than that undertaken by Lori Drew.

The appellate court of Illinois has held that a woman who was allegedly the victim of an elaborate ruse, perpetrated in large part over the internet, can move forward with her fraudulent misrepresentation claim against the woman who created the fake persona of a “man” who became her “boyfriend”. The story should satisfy your daily requirement of schadenfreude.

Plaintiff first got to know defendant Janna St. James back in 2005 in an online forum for fans of the HBO show Deadwood. A couple months after they first began talking online, defendant set up another username on the Deadwood site, posing as a man named “Jesse”. Plaintiff and this “Jesse” (which was actually defendant) struck up an online romance which apparently got pretty intense.

To add detail to the ploy, defendant invented no fewer than 20 fictitious identities — all of whom were purportedly in “Jesse’s” social or family circle — which she used to communicate with plaintiff.

The interactions that took place, both online and through other media and forms of communication (e.g., phone calls using a voice disguiser), were extensive. “Jesse” and plaintiff planned to meet in person once, but “Jesse” cancelled. Plaintiff sent $10,000 worth of gifts to “Jesse” and to the other avatars of defendant. Things went so far that “Jesse” and plaintiff planned to move in together in Colorado. But before that could happen, defendant pulled the plug on “Jesse” — he “died” of liver cancer.

Some time after that, defendant (as herself) flew from Illinois to California to visit plaintiff. During this trip, some of plaintiff’s real friends discovered the complex facade. Plaintiff sued.

The trial court dismissed plaintiff’s fraudulent misrepresentation claim. Plaintiff sought appellate review. On appeal, the court reversed, sending the case for fraudulent misrepresentation back to the trial court.

The court said some interesting things about whether the facts that plaintiff alleged supported her claim for fraudulent misrepresentation. A plaintiff suing for fraudulent misrepresentation under Illinois law must show: (1) a false statement of material fact; (2) knowledge or belief of the falsity by the party making it; (3) intention to induce the plaintiff to act; (4) action by the plaintiff in justifiable reliance on the truth of the statement; and (5) damage to the plaintiff resulting from that reliance.

Defendant made a strange, circular argument as to the first element — falsity of a material fact. She asserted that plaintiff’s claim was based more on the fiction that defendant pursued than on specific representations. And the concepts of “falsity” and “material fact,” defendant argued, should not apply in the context of fiction, which does not purport to represent actuality. So defendant essentially argued that so long as she knew the masquerade was fiction, there could be no misrepresentation. The court rejected this argument, recognizing that its logic would improperly shift the element of reliance on the truth of the statement from the injured party to the utterer.

Though the appellate court ruled in favor of plaintiff, the judges disagreed on the question of whether plaintiff was justified in relying on the truth of what defendant (as “Jesse,” as the other created characters, and herself) had told plaintiff. One judge dissented, observing that “[t]he reality of the Internet age is that an online individual may not always be — and indeed frequently is not — who or what he or she purports to be.” The dissenting judge thought it simply was not justifiable for plaintiff to spend $10,000 on people she had not met, and to plan on moving in with a man sight-unseen. (In so many words, the judge seemed to be saying that plaintiff was too gullible to have the benefit of this legal claim.)

The majority opinion, on the other hand, found the question of justifiable reliance to be more properly determined by the finder of fact in the trial court. For the motion to dismiss stage, plaintiff had alleged sufficient facts as to justifiable reliance.

(Congratulations to my friend Daliah Saper for her good lawyering in this case on behalf of plaintiff.)

Facebook victorious in lawsuit brought by kicked-off user

Young v. Facebook, 2010 WL 4269304 (N.D. Cal. October 25, 2010)

Plaintiff took offense to a certain Facebook page critical of Barack Obama and spoke out on Facebook in opposition. In response, many other Facebook users allegedly poked fun at plaintiff, sometimes using offensive Photoshopped versions of her profile picture. She felt harassed.

But maybe that harassment went both ways. Plaintiff eventually got kicked off of Facebook because she allegedly harassed other users, doing things like sending friend requests to people she did not know.

When Facebook refused to reactivate plaintiff’s account (even after she drove from her home in Maryland to Facebook’s California offices twice), she sued.

Facebook moved to dismiss the lawsuit. The court granted the motion.

Constitutional claims

Plaintiff claimed that Facebook violated her First and Fourteenth Amendment rights. The court dismissed this claim because plaintiff failed to demonstrate that the complained-of conduct on Facebook’s part (kicking her off) “was fairly attributable to the government.” Plaintiff attempted to get around the problem of Facebook-as-private-actor by pointing to the various federal agencies that have Facebook pages. But the court was unmoved, finding that the termination of her account had nothing to do with these government-created pages.

Breach of contract

Plaintiff’s breach of contract claim was based on other users harassing her when she voiced her disapproval of the Facebook page critical of the president. She claimed that in failing to take action against this harassment, Facebook violated its own Statement of Rights and Responsibilities.

The court rejected this argument, finding that although the Statement of Rights and Responsibilities may place restrictions on users’ behavior, it does not create affirmative obligations on the part of Facebook. Moreover, Facebook expressly disclaims any responsibility in the Statement of Rights and Responsibilities for policing the safety of the network.

Good faith and fair dealing

Every contract (under California law and under the laws of most other states) has an implied duty of good faith and fair dealing, which means that there is an implied “covenant by each party not to do anything which will deprive the other parties . . . of the benefits of the contract.” Plaintiff claimed Facebook violated this implied duty in two ways: by failing to provide the safety services it advertised and violating the spirit of the terms of service by terminating her account.

Neither of these arguments worked. As for failing to provide the safety services, the court looked again to how Facebook disclaimed responsibility for such actions.

The court gave more intriguing treatment to plaintiff’s claim that Facebook violated the spirit of its terms of service. It looked to the contractual nature of the terms of service, and Facebook’s assertions that users’ accounts should not be terminated other than for reasons described in the Statement of Rights and Responsibilities. The court found that “it is at least conceivable that arbitrary or bad faith termination of user accounts, or even termination . . . with no explanation at all, could implicate the implied covenant of good faith and fair dealing.”

But plaintiff’s claim failed anyway, because of the way she had articulated her claim. She asserted that Facebook violated the implied duty by treating her coldly in the termination process, namely, by depriving her of human interaction. The court said that termination process was okay, given that the Statement of Rights and Responsibilities said that it would simply notify users by email in the event their accounts are terminated. There was no implied obligation to provide a more touchy-feely way to terminate.

Negligence

Among other things, to be successful in a negligence claim, a plaintiff has to allege a duty on the part of the defendant. Plaintiff’s negligence claim failed in this case because she failed to establish that Facebook had any duty to “condemn all acts or statements that inspire, imply, incite, or directly threaten violence against anyone.” Finding that plaintiff provided no basis for such a broad duty, the court also looked to the policy behind Section 230 of the Communications Decency Act (47 U.S.C. 230) which immunizes website providers from content provided by third parties that may be lewd or harassing.

Fraud

The court dismissed plaintiff’s fraud claim, essentially finding that plaintiff’s allegations that Facebook’s “terms of agreement [were] deceptive in the sense of misrepresentation and false representation of company standards,” simply were not specific enough to give Facebook notice of the claim alleged.

Yelp successful in defamation and deceptive acts and practices case

Reit v. Yelp, Inc., — N.Y.S.2d —, 2010 WL 3490167 (N.Y. Sup. Ct. September 2, 2010)

Section 230 of the Communications Decency Act shielded the site as an interactive computer service; assertions regarding manipulation of reviews were not consumer oriented and therefore not actionable.

As I am sure you know, Yelp! is an interactive website designed to allow the general public to write, post, and view reviews about businesses, including professional ones, as well as restaurants and other establishments.

Lots of people and businesses that are the subject of negative reviews on sites like this get riled up and often end up filing lawsuits. Suits against website operators in cases like this are almost always unsuccessful. The case of Reit v. Yelp from a New York state court was no exception.

Plaintiff dentist sued Yelp and an unknown reviewer for defamation. He also sued Yelp under New York state law for “deceptive acts and practices”. Yelp moved to dismiss both claims. The court granted the motion.

Defamation claim – protection under Section 230

Interactive computer service providers are immunized from liability (i.e., they cannot be held responsible) for content that is provided by third parties. So long as the website is not an “information content provider” itself, any claim made against the website will be preempted by the Communications Decency Act, at 47 U.S.C. 230.

In this case, plaintiff claimed that Yelp selectively removed positive reviews of his dentistry practice after he contacted Yelp to complain about a negative review. He argued that this action made Yelp an information content provider (doing more than “simply selecting material for publication”) and therefore outside the scope of Section 230’s immunity. The court rejected this argument.

It likened the case to an earlier New York decision, Shiamili v. Real Estate Group of New York. In that case, as in this one, an allegation that a website operator may keep and promote bad content did not raise an inference that the operator became an information content provider. The postings do not cease to be data provided by a third party merely because the construct and operation of the website might have some influence on the content of the postings.

So the court dismissed the defamation claim on grounds of Section 230 immunity.

Alleged deceptive acts and practices were not consumer oriented

The other claim against Yelp — for deceptive acts and practices — was intriguing, though the court did not let it stand. Plaintiff alleged that Yelp’s Business Owner’s Guide says that once a business signs up for advertising with Yelp, an “entirely automated” system screens out reviews that are written by less established users.

The problem with this, plaintiff claimed, was that the process was not automated with the help of algorithms, but was done by humans at Yelp. That divergence between what the Business Owner’s Guide said and Yelp’s actual practices, plaintiff claimed, was consumer-oriented conduct that was materially misleading, in violation of New York’s General Business Law Section 349(a).

This claim failed, however, because the court found that the statements made by Yelp in the Business Owner’s Guide were not consumer-oriented, but were addressed to business owners like plaintiff. Without being a consumer-oriented statement, it did not violate the statute.

State law spam claim in federal court not pled with required particularity

Hypertouch, Inc. v. Azoogle.com, Inc., 2010 WL 2712217 (9th Cir. July 9, 2010)

Pleading in federal court is generally a straightforward matter. Federal Rule of Civil Procedure 8 requires only that the plaintiff set forth a short and plain statement as to why that party is entitled to relief. But in cases involving fraud, there is a heightened pleading standard imposed by Rule 9.

In the case of Hypertouch, Inc. v. Azoogle.com, Inc., the plaintiff sued the defendants in federal court over almost 400,000 allegedly spam email messages. Hypertouch brought claims under California law (California Business and Professions Code § 17529.5(a)) but did not meet the heightened pleading standard of Rule 9. So the district court dismissed the case.

Plaintiff appealed to the Ninth Circuit. On review, the appellate court affirmed. It found that not only does the California statute speak in terms of commercial e-mail advertisements that contain “falsified,” “misrepresented,” “forged,” or misleading information — terms common to fraud allegations — but the complaint repeatedly described the advertisements and their content as “fraudulent.” The court held that plaintiff could not circumvent the requirements of Rule 9 by arguing that its claim did not actually sound in fraud.

It’s important to note that the court made clear, despite its holding, that it was not articulating a standard for pleading under this California statute. It merely found that in the circumstances of this case, the claim was not pled with the requisite particularity.
