Can a party recover statutory damages under the Stored Communications Act without proving actual damages?

The Stored Communications Act (18 U.S.C. § 2701 et seq.) is among the most powerful tools relating to email privacy. It is a federal statute that prohibits, in certain circumstances, intentionally accessing without authorization, or exceeding authorized access to, a facility through which an electronic communication service is provided. The statute provides criminal penalties, and an aggrieved party can bring a civil suit for damages in certain cases.

The statute contains a provision that addresses the amount of money damages a successful plaintiff can recover. Section 2707(c) provides the following:

Damages. The court may assess as damages in a civil action under this section the sum of the actual damages suffered by the plaintiff and any profits made by the violator as a result of the violation, but in no case shall a person entitled to recover receive less than the sum of $1,000. If the violation is willful or intentional, the court may assess punitive damages. In the case of a successful action to enforce liability under this section, the court may assess the costs of the action, together with reasonable attorney fees determined by the court.

Note the phrase “but in no case shall a person entitled to recover receive less than the sum of $1,000.” Does that mean every plaintiff who successfully proves the defendant’s liability is entitled to at least $1,000, regardless of whether any actual damage occurred? The Fifth Circuit Court of Appeals recently addressed that question in the case of Domain Protection, L.L.C. v. Sea Wasp, L.L.C. It held that one must show at least some actual damages before being entitled to the minimum of $1,000.

The court looked to the Supreme Court’s approach in addressing nearly identical language in another statute, wherein SCOTUS concluded that “person entitled to recover” refers back to the party that suffers “actual damages.” Doe v. Chao, 540 U.S. 614, 620, 124 S.Ct. 1204, 157 L.Ed.2d 1122 (2004). And it noted that two other circuits have held that this reasoning should apply to the same terms in the Stored Communications Act: Vista Mktg., LLC v. Burkett, 812 F.3d 954, 964–75 (11th Cir. 2016) and Van Alstyne v. Elec. Scriptorium, Ltd., 560 F.3d 199, 204–208 (4th Cir. 2009). The court “endorse[d] the reasoning of those opinions and [saw] no need to repeat it.”

Domain Protection, L.L.C. v. Sea Wasp, L.L.C., — F.4th —, 2022 WL 123408 (5th Cir. January 13, 2022)

How Portland has not demonstrated long-term commitment to a ban on facial recognition technologies


Portland, Oregon yesterday passed a ban on facial recognition technology. Officials cited two primary reasons for the ban. First, current facial recognition technologies less accurately identify people who are not young, white and/or male. Second, everyone should have some sense of anonymity and privacy when in public places.

Should the facial recognition ban focus on disparate impact?

Do Portland’s efforts to “improve people’s lives, with a specific focus on communities of color and communities with disabilities” demonstrate an effective long-term commitment to keeping invasive facial recognition technology at bay? Such a focus implies that when facial recognition technologies get better and less biased, they should then be deployed full scale, because then everyone will be harmed equally.

That’s one of the problems with looking to ban a technology based on its nascent state and accompanying imperfect implementation. Given the choice between arguing (1) that a technology is being harmfully implemented now, and (2) that the technology, no matter how perfect it is, infringes some fundamental human right, I’d go with number (2) every time.

We will find ourselves halfway down the slippery slope

We know the accuracy of this technology will increase with the development of better cameras, smarter algorithms and more data. When that happens, if you are still seeking to argue against its harmful effects on fundamental rights such as anonymity and privacy, you will already have slid halfway down the slope. With your previous “best” argument made moot, your argument now – an appeal to fundamental rights – will have less impact.

So maybe we should focus on the real issues – the fundamental rights of anonymity and privacy for everyone – rather than leading with a social justice argument. Otherwise, once that primary argument becomes moot, the rationale will be a liability.

About the author

Evan Brown is a technology and intellectual property attorney in Chicago. Follow him on Twitter at @internetcases. Phone: (630) 362-7237. Email: ebrown@internetcases.com.


Should revenge porn victims be allowed to proceed anonymously in court?

Plaintiff and her twin sister sued her ex-boyfriend and an unknown John Doe, accusing them of copyright infringement and other torts such as invasion of privacy. They claimed the defendants posted intimate and nude photos of plaintiffs online without their consent, and that defendants had posted one plaintiff’s name and other information on social media in connection with the photos.

Arguing that they had a substantial privacy right that outweighed the customary and constitutionally embedded presumption of openness in judicial proceedings, plaintiffs asked the court for permission to proceed anonymously. But the court denied the motion.

Plaintiffs’ privacy arguments

Plaintiffs had primarily argued that proceeding under their real names would require them to disclose information of the utmost intimacy. If they were required to attach their names to the litigation, they argued, there would be a public record connecting their names to the harm and exploitation they had suffered, which could result in even more people viewing the very images that were stolen and disseminated without their consent.

Court: the harm had already been done

The court rejected these arguments. It observed that the photographs had been published on the internet for approximately seven years and had been sent to people the plaintiffs knew. Plaintiffs admitted that one of them could be identified in some of the photographs because her face and a distinctive tattoo were visible. And John Doe had already published that plaintiff’s contact information, which resulted in her being inundated with phone calls, text messages, emails, and Instagram, Facebook, and Twitter messages.

So in the court’s mind, it appeared that plaintiff’s identity was already known or discoverable. In addition, that plaintiff had obtained copyright registrations for many of the photographs, and the copyright registrations were public documents that clearly identified her by name.

As for the twin sister, although her identity had not been similarly made public, the court found that “no great stretch [was] required to identify her through public records as [the other plaintiff’s] twin sister.”

Consequently, the court was not persuaded that plaintiffs’ privacy interests outweighed the public’s right of access in judicial proceedings.

M.C. v. Geiger, 2018 WL 6503582 (M.D.Fla. Dec. 11, 2018)

Facebook did not violate HIPAA by using data showing users browsed healthcare-related websites

Plaintiffs sued Facebook and other entities, including the American Cancer Society, alleging that Facebook violated numerous federal and state laws by collecting and using plaintiffs’ browsing data from various healthcare-related websites. The district court dismissed the action and plaintiffs sought review with the Ninth Circuit. On appeal, the court affirmed the dismissal.

The appellate court held that the district court properly determined that plaintiffs consented to Facebook’s data tracking and collection practices.

Plaintiffs consented to Facebook’s terms

It noted that in determining consent, courts consider whether the circumstances, considered as a whole, demonstrate that a reasonable person understood that an action would be carried out so that their acquiescence demonstrates knowing authorization.

In this case, plaintiffs did not dispute that their acceptance of Facebook’s Terms and Policies constituted a valid contract. Those Terms and Policies contained numerous disclosures related to information collection on third-party websites, including:

  • “We collect information when you visit or use third-party websites and apps that use our Services …. This includes information about the websites and apps you visit, your use of our Services on those websites and apps, as well as information the developer or publisher of the app or website provides to you or us,” and
  • “[W]e use all of the information we have about you to show you relevant ads.”

The court found that a reasonable person viewing those disclosures would understand that Facebook maintained the practices of (a) collecting its users’ data from third-party sites and (b) later using the data for advertising purposes. This was “knowing authorization”.

“But it’s health-related data”

The court rejected plaintiffs’ claim that—though they gave general consent to Facebook’s data tracking and collection practices—they did not consent to the collection of health-related data due to its “qualitatively different” and “sensitive” nature.

The court did not agree that the collected data was so different or sensitive. The data showed only that plaintiffs searched and viewed publicly available health information that could not, in and of itself, reveal details of an individual’s health status or medical history.

This notion supported the court’s conclusion that the use of the information did not violate the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) and its California counterpart.

The court held that information available on publicly accessible websites stands in stark contrast to the personally identifiable patient records and medical histories protected by HIPAA and other statutes — information that unequivocally provides a window into an individual’s personal medical history.

Smith v. Facebook, Inc., 2018 WL 6432974 (9th Cir. Dec. 6, 2018)

Court labels copyright plaintiff as a troll and shuts down efforts to ID anonymous infringer

When a copyright plaintiff does not know who a particular alleged infringer is, it must first send a subpoena to the ISP assigned the IP address used to commit the alleged infringement. But the rules of procedure do not allow the sending of subpoenas until after the 26(f) conference – a meeting between the plaintiff and defendant (or their lawyers) to discuss the case. A plaintiff cannot have a 26(f) conference if the defendant has not been served with the complaint, and the complaint cannot be served unless the defendant’s identity is known.

So you can see the conundrum. To break out of this not-knowing, plaintiffs in situations like this will ask the court’s help through a motion for leave to take early discovery. That way the plaintiff can learn who the defendant is, serve the complaint, and move the case forward.

In the recent case of Strike 3 Holdings v. Doe, Judge Royce Lamberth of the U.S. District Court for the District of Columbia put a stop to the efforts of a plaintiff that it called a copyright troll right to its face (or at least right in the text of the opinion). The court denied Strike 3’s motion for leave to take early discovery to learn the identity of an unknown BitTorrent user accused of downloading pornography.

The court held that the plaintiff’s request was not specific enough, and the privacy interests of the unknown defendant, together with the social harm of being wrongfully accused of obtaining “particularly prurient pornography” were not outweighed by the trollish plaintiff’s need for the information.

Key to the court’s ruling was the idea that a subpoena in circumstances like this must be able to actually identify a defendant who could be sued. The court noted, however, that

Strike 3 could not withstand a 12(b)(6) motion in this case without resorting to far more intensive discovery machinations sufficiently establishing defendant did the infringing—examining physical evidence (at least the computers, smartphones, and tablets of anyone in the owner’s house, as well as any neighbor or houseguest who shared the Internet), and perhaps even interrogatories, document requests, or depositions. Strike 3’s requested subpoena thus will not—and may never—identify a defendant who could be sued.

The opinion is an entertaining read and conveys the judge’s clear frustration with copyright troll plaintiffs. Below are some of the more memorable quips.

Regarding the flaws of using IP addresses to identify people:

[Plaintiff’s] method [of identifying infringers] is famously flawed: virtual private networks and onion routing spoof IP addresses (for good and ill); routers and other devices are unsecured; malware cracks passwords and opens backdoors; multiple people (family, roommates, guests, neighbors, etc.) share the same IP address; a geolocation service might randomly assign addresses to some general location if it cannot more specifically identify another.

Regarding the public shame of being accused of infringing porn:

… But in many cases, the method is enough to force the Internet service provider (ISP) to unmask the IP address’s subscriber. And once the ISP outs the subscriber, permitting them to be served as the defendant, any future Google search of their name will turn-up associations with the websites Vixen, Blacked, Tushy, and Blacked Raw. The first two are awkward enough, but the latter two cater to even more singular tastes.

How trolls are quick to flee:

Indeed, the copyright troll’s success rate comes not from the Copyright Act, but from the law of large numbers. … These serial litigants drop cases at the first sign of resistance, preying on low-hanging fruit and staying one step ahead of any coordinated defense. They don’t seem to care about whether defendant actually did the infringing, or about developing the law. If a Billy Goat Gruff moves to confront a copyright troll in court, the troll cuts and runs back under its bridge. Perhaps the trolls fear a court disrupting their rinse-wash-and-repeat approach: file a deluge of complaints; ask the court to compel disclosure of the account holders; settle as many claims as possible; abandon the rest.

It’s pretty much extortion:

Armed with hundreds of cut-and-pasted complaints and boilerplate discovery motions, Strike 3 floods this courthouse (and others around the country) with lawsuits smacking of extortion. It treats this Court not as a citadel of justice, but as an ATM. Its feigned desire for legal process masks what it really seeks: for the Court to oversee a high-tech shakedown. This Court declines.

The court’s decision to deny discovery is anything but the rubber stamp approach so many judges in these kinds of cases over the past several years have been accused of employing.

Strike 3 Holdings v. Doe, 2018 WL 6027046 (D.D.C. November 16, 2018)

Court takes into consideration defendant’s privacy in BitTorrent copyright infringement case

Frequent copyright plaintiff Strike 3 Holdings filed a motion with the U.S. District Court for the District of Minnesota seeking an order allowing Strike 3 to send a subpoena to Comcast, to learn who owns the account used to allegedly infringe copyright. The Federal Rules of Civil Procedure created a bootstrapping problem or, as the court called it, a Catch-22, for Strike 3 – it was not able to confer with the unknown Doe defendant as required by Rule 26(f) because it could not identify the defendant, but it could not identify defendant without discovery from Comcast.

The court granted Strike 3’s motion for leave to take early discovery, finding that good cause existed for granting the request, and noting:

  • Strike 3 had stated an actionable claim for copyright infringement,
  • The discovery request was specific,
  • There were no alternative means to ascertain defendant’s name and address,
  • Strike 3 had to know defendant’s name and address in order to serve the summons and complaint, and
  • Defendant’s expectation of privacy in his or her name and address was outweighed by Strike 3’s right to use the judicial process to pursue a plausible claim of copyright infringement.

On the last point, the court observed that the privacy interest was outweighed especially given that the court could craft a limited protective order under Federal Rule of Civil Procedure 26(c) to protect an innocent ISP subscriber and to account for the sensitive and personal nature of the subject matter of the lawsuit.

Strike 3 Holdings, LLC v. Doe, 2018 WL 2278110 (D.Minn. May 18, 2018)

No privacy violation for disclosing information otherwise available on member-only website

Plaintiff sued several defendants related to her past work as a government employee. She sought to amend her pleadings to add claims for violation of the Fourth Amendment and the federal Stored Communications Act. She claimed that defendants wrongfully disclosed private medical information about her. The court denied her motion to amend the pleadings to add the Fourth Amendment and Stored Communications Act claims because such amendments would have been futile.

Specifically, the court found there to be no violation because she had no reasonable expectation of privacy in the information allegedly disclosed. She had made that information available on a website. Though viewing the information required signing up for an account, plaintiff had not set up the website to make the information available only to those she invited to view it. The court relied on several cases from earlier in the decade that addressed the issue of privacy of social media content, among them Rosario v. Clark Cty. Sch. Dist., 2013 WL 3679375 (D. Nev. July 3, 2013), which held that one has no reasonable expectation of privacy in his or her tweets, even if he or she maintained a private account. In that case, the court held that even if the social media user maintained a private account, his tweets still amounted to the dissemination of information to the public.

Burke v. New Mexico, 2018 WL 2134030 (D.N.M. May 9, 2018)

Puzzling privacy analysis in decision to unmask anonymous accused copyright infringers

Plaintiff porn company sued an unknown BitTorrent user (identified as John Doe) alleging that defendant had downloaded and distributed more than 20 of plaintiff’s films. Plaintiff asked the court for leave to serve a subpoena on Optimum Online – the ISP associated with defendant’s IP address – prior to the Rule 26(f) conference. (As we have recently discussed, leave of court is required to start discovery before the Rule 26(f) conference, but a plaintiff cannot have that conference unless it knows who the defendant is.) Plaintiff already knew defendant’s IP address. It needed to serve the subpoena on the ISP to learn defendant’s real name and physical address so it could serve him with the complaint.

The court went through a well-established test to determine that good cause existed for allowing the expedited discovery. Drawing heavily on the case of Sony Music Entm’t, Inc. v. Does 1-40, 326 F. Supp. 2d 556 (S.D.N.Y. 2004), the court evaluated:

(1) the concreteness of the plaintiff’s showing of a prima facie claim of copyright infringement,

(2) the specificity of the discovery request,

(3) the absence of alternative means to obtain the subpoenaed information,

(4) the need for the subpoenaed information to advance the claim, and

(5) the objecting party’s expectation of privacy.

The court’s conclusions were not surprising on any of these elements. But its discussion under the fifth point – namely, the defendant’s expectation of privacy – was puzzling, and the court may have missed an important point.

It looked to the recent case involving Dread Pirate Roberts and Silk Road, namely, United States v. Ulbricht, 858 F.3d 71 (2d Cir. 2017). Leaning on the Ulbricht case, the court concluded that defendant had no reasonable expectation of privacy in the sought-after information (name and physical address) because there is no expectation of privacy in “subscriber information provided to an internet provider,” such as an IP address, and such information has been “voluntarily conveyed to third parties.”

While the court does not misquote the Ulbricht case, one is left to wonder why it would use that case to support discovery of the unknown subscriber’s name and physical address. At issue in Ulbricht was whether the government violated Dread Pirate Roberts’s Fourth Amendment rights when it obtained the IP address he was using. In this case, however, the plaintiff already knew the IP address from its forensic investigations. The sought-after information here was the name and physical address, not the IP address he used.

So looking to Ulbricht to say that the Doe defendant had no expectation of privacy in his IP address does nothing to shed light on the kind of expectation of privacy, if any, he should have had in his real name and physical address.

The court’s decision ultimately is not incorrect, but it did not need to consult Ulbricht. As in the Sony Music case from which it drew the five-part analysis, and in many other similar expedited discovery cases, the court could simply have found there was no reasonable expectation of privacy in the sought-after information, because the ISP’s terms of service put the subscriber on notice that it would turn over the information to third parties in certain circumstances like the ones arising in this case.

Strike 3 Holdings, LLC v. Doe, 2017 WL 5001474 (D.Conn., November 1, 2017)

No liability for cable company that retained customer information in violation of law

Court essentially holds “no harm, no foul” in case involving violation of federal privacy statute. The case fails to provide an incentive for “privacy by design”.

Can a company that is obligated by law to destroy information about its former customers be held liable under that law if, after the contract with the customer ends, the company does not destroy the information as required? A recent decision from the United States Court of Appeals for the Eighth Circuit (which is located in St. Louis) gives some insight into that issue. The case is called Braitberg v. Charter Communications, Inc., — F.3d —, 2016 WL 4698283 (8th Cir., Sep. 8, 2016).


Plaintiff filed a lawsuit against his former cable company after he learned that the company held on to his personally identifiable information, including his social security number, years after he had terminated his cable service. The cable company was obligated under the federal Cable Communications Policy Act to “destroy personally identifiable information if the information is no longer necessary for the purpose for which it was collected.”

The lower court dismissed the lawsuit on the basis that plaintiff had not properly demonstrated that he had standing to bring the lawsuit. Plaintiff appealed to the Eighth Circuit. On review, the court of appeals affirmed the dismissal of the lawsuit.

The appellate court’s decision was informed by the recent Supreme Court decision in Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016), which addressed, among other things, the question of whether a plaintiff asserting violation of a privacy statute has standing to sue.

As a general matter, Article III of the Constitution limits the jurisdiction of the federal courts to actual “cases or controversies”. A party invoking federal jurisdiction must show, among other things, that the alleged injury is both “concrete and particularized” and “actual or imminent, not conjectural or hypothetical”.

In this case, the Court of Appeals found that plaintiff had not alleged an injury in fact as required under Article III and the Spokeo decision. His complaint asserted merely “a bare procedural violation, divorced from any concrete harm.”

The court’s opinion goes on to provide some examples of when the violation of a privacy statute would give rise to standing. It does this by noting certain things that plaintiff did not allege. He did not, for example, allege that defendant had disclosed information to a third party, that any other party accessed the data, or that defendant used the information in any way after the termination of the agreement. Simply stated, he identified no material risk of harm from the retention. This speculative or hypothetical risk was insufficient for him to bring the lawsuit.

One unfortunate side effect of this decision is that it does little to encourage the implementation of “privacy by design” in the development of online platforms. As we have discussed before, various interests, including the federal government, have encouraged companies to develop systems in a way that only keeps data around for as long as it is needed. The federal courts’ unwillingness to recognize liability in situations where data is indeed kept around longer than necessary, even in violation of the law, does not provide an incentive for the utilization of privacy by design practices.

Braitberg v. Charter Communications, Inc., — F.3d —, 2016 WL 4698283 (8th Cir., Sep. 8, 2016)

Photo courtesy Flickr user Justin Hall under this Creative Commons license.

Facebook’s Terms of Service protect it from liability for offensive fake account

Someone set up a bogus Facebook account and posted, without consent, images and video of Plaintiff engaged in a lewd act. Facebook finally deleted the account, but not until two days had passed and Plaintiff had threatened legal action.

Plaintiff sued anyway, alleging, among other things, intrusion upon seclusion, public disclosure of private facts, and infliction of emotional distress. In his complaint, Plaintiff emphasized language from Facebook’s Terms of Service that prohibited users from posting content or taking any action that “infringes or violates someone else’s rights or otherwise would violate the law.”

Facebook moved to dismiss the claims, making two arguments: (1) that the claims contradicted Facebook’s Terms of Service, and (2) that the claims were barred by the Communications Decency Act at 47 U.S.C. 230. The court granted the motion to dismiss.

It looked to the following provision from Facebook’s Terms of Service:

Although we provide rules for user conduct, we do not control or direct users’ actions on Facebook and are not responsible for the content or information users transmit or share on Facebook. We are not responsible for any offensive, inappropriate, obscene, unlawful or otherwise objectionable content or information you may encounter on Facebook. We are not responsible for the conduct, whether online or offline, of any user of Facebook.

The court also examined the following language from the Terms of Service:

We try to keep Facebook up, bug-free, and safe, but you use it at your own risk. We are providing Facebook as is without any express or implied warranties including, but not limited to, implied warranties of merchantability, fitness for a particular purpose, and non-infringement. We do not guarantee that Facebook will always be safe, secure or error-free or that Facebook will always function without disruptions, delays or imperfections. Facebook is not responsible for the actions, content, information, or data of third parties, and you release us, our directors, officers, employees, and agents from any claims and damages, known and unknown, arising out of or in any way connected with any claims you have against any such third parties.

The court found that by looking to the Terms of Service to support his claims against Facebook, Plaintiff could not likewise disavow those portions of the Terms of Service that did not support his case. Because the Terms of Service said, among other things, that Facebook was not responsible for the content of what its users post, and that a user uses the service at his or her own risk, the court could not place responsibility on Facebook for the offensive content.

Moreover, the court held that the Communications Decency Act shielded Facebook from liability. The CDA immunizes providers of interactive computer services against liability arising from content created by third parties. The court found that Facebook was an interactive computer service as contemplated under the CDA, the information for which Plaintiff sought to hold Facebook liable was information provided by another information content provider, and the complaint sought to hold Facebook as the publisher or speaker of that information.

Caraccioli v. Facebook, 2016 WL 859863 (N.D. Cal., March 7, 2016)
