Is the Sixth Circuit willing to recognize a right to be forgotten under U.S. law?

Recent FOIA decision questions the 20-year-old notion that defendants have no interest in preventing release of booking photographs during ongoing criminal proceedings.

The Freedom of Information Act (“FOIA”) implements “a general philosophy of full agency disclosure” of government records. Since the mid-1990s, the Sixth Circuit has required law enforcement to turn over booking photographs of defendants during ongoing criminal proceedings.

Plaintiff sought the booking photos of four criminal defendants from the U.S. Marshals Service. When the Marshals Service refused to turn the photos over, plaintiff filed suit. The district court found in plaintiff’s favor, citing the Sixth Circuit case of Detroit Free Press v. United States Department of Justice, 73 F.3d 93 (6th Cir. 1996). Defendant sought review with the Sixth Circuit and, bound by the 1996 decision, a panel of the Sixth Circuit affirmed, ordering that the photos be turned over.

But the panel was far from comfortable with its holding. Although it was bound to follow the earlier Sixth Circuit precedent, it urged the full court to consider en banc whether an exception to FOIA applies to booking photographs. “In particular, we question the panel’s conclusion that defendants have no interest in preventing the public release of their booking photographs during ongoing criminal proceedings.”

The general theory behind the current requirement that booking photos be released is that the suspects have already appeared publicly in court, and the release of the photos and their names conveys no further information to implicate a protectible privacy interest. But this panel of the court noted that “[s]uch images convey an ‘unmistakable badge of criminality’ and, therefore, provide more information to the public than a person’s mere appearance.”

Something like a right to be forgotten appears in the court’s discussion of how photos can linger online: “[B]ooking photographs often remain publicly available on the Internet long after a case ends, undermining the temporal limitations presumed” by Sixth Circuit case law that calls for the release of photos during ongoing proceedings.

Detroit Free Press v. U.S. Dept. of Justice, — F.3d —, 2015 WL 4745284 (6th Cir. August 12, 2015)

Evan Brown is an attorney in Chicago helping clients manage issues involving technology and new media.

Facebook hacking victim’s CFAA and SCA claims not barred by statutes of limitation

Knowledge that email account had been hacked did not start the statutes of limitation clock ticking for Computer Fraud and Abuse Act and Stored Communications Act claims based on alleged related hacking of Facebook account occurring several months later.

Plaintiff sued her ex-boyfriend in federal court for allegedly accessing her Facebook and Aol email accounts. She brought claims under the Computer Fraud and Abuse Act, 18 U.S.C. § 1030 (“CFAA”), and the Stored Communications Act, 18 U.S.C. § 2701, et seq. (“SCA”).

Both the CFAA and the SCA have two-year statutes of limitation. Defendant moved to dismiss, arguing that the limitation periods had expired.

The district court granted the motion to dismiss, but plaintiff sought review with the Second Circuit Court of Appeals. On appeal, the court affirmed the dismissal as to the email account, but reversed and remanded as to the Facebook account.

In August 2011, plaintiff discovered that someone had altered her Aol email account password. Later that month, someone used her email account to send lewd and derogatory sexually-themed messages about her to people in her contact list. A few months later, similar things happened with her Facebook account: in February 2012 she discovered she could not log in, and in March 2012 someone publicly posted sexually-themed messages using her account. She figured out it was her (now married) ex-boyfriend and filed suit.

The district court dismissed the claims because it found plaintiff first discovered facts giving rise to the claims in August 2011, but did not file suit until more than two years later, in January 2014. The Court of Appeals agreed with the district court as to the email account. She had enough facts in 2011 to know her Aol account had been compromised, and waited too long to file suit over that. But that was not the case with the Facebook account. The district court had concluded plaintiff knew in 2011 that her “computer” had been compromised. The Court of Appeals observed that the lower court failed to properly recognize the nuance concerning which computer systems were being accessed without authorization. Unauthorized access to the Facebook server gave rise to the claims relating to the Facebook account. The 2011 knowledge about her email being hacked did not bear on whether she knew her Facebook account would be compromised. The court observed:

We take judicial notice of the fact that it is not uncommon for one person to hold several or many Internet accounts, possibly with several or many different usernames and passwords, less than all of which may be compromised at any one time. At least on the facts as alleged by the plaintiff, it does not follow from the fact that the plaintiff discovered that one such account — AOL e-mail — had been compromised that she thereby had a reasonable opportunity to discover, or should be expected to have discovered, that another of her accounts — Facebook — might similarly have become compromised.

The decision gives us an opportunity to think about how users’ interests in having their data kept secure from third-party access attach to devices and systems that may be quite remote from where the user is located. The typical victim of a hack or data breach these days is not the owner of the server that is compromised. Instead, the incident will typically involve the compromise of a system somewhere else that hosts the user’s information or communications. This decision from the Second Circuit recognizes that reality, and helps preserve a reasonable opportunity for redress in those situations.

Sewell v. Bernardin, — F.3d —, 2015 WL 4619519 (2nd Cir. August 4, 2015)

Evan Brown is an attorney in Chicago helping clients manage issues involving technology and new media.

Casual website visitor who watched videos was not protected under the Video Privacy Protection Act

A recent federal court decision from the Southern District of New York sheds light on what is required to be considered a “consumer” protected under the Video Privacy Protection Act (VPPA). The court held that a website visitor who merely visited a website once in a while to watch videos — without establishing a more “deliberate and durable” affiliation with the website — was not a “subscriber” to the website’s services, and thus the VPPA did not prohibit the alleged disclosure of information about the visitor’s viewing habits.

Defendant AMC was a television network that maintained a website offering video clips and episodes of many of its television shows. The website also incorporated Facebook’s software development kit which, among other things, let visitors log into websites using their Facebook credentials. This mechanism relied on cookies. If a person had chosen to remain logged into Facebook by checking the “keep me logged in” button on Facebook’s homepage, the relevant cookie would continue to operate, regardless of what the user did with the web browser. Plaintiff alleged that this mechanism caused AMC to transmit information to Facebook about the video clips she watched on the AMC site.

Plaintiff sued under the VPPA. Defendant moved to dismiss, arguing that plaintiff lacked standing under the statute and that she was not a protected “consumer” as required by the statute.

The court found that plaintiff had standing. It rejected defendant’s argument that a VPPA plaintiff must allege some injury in addition to asserting that defendant had violated the statute. “It is true . . . that Congress cannot erase Article III’s standing requirements by statutorily granting the right to sue to a plaintiff who would not otherwise have standing.” But Congress “can broaden the injuries that can support constitutional standing.”

The court next looked to whether plaintiff was a “consumer” protected under the statute. The VPPA defines the term “consumer” to include “any renter, purchaser, or subscriber of goods or services from a video tape service provider.” Absent any assertion that plaintiff was a renter or purchaser of AMC’s goods, the parties and the court focused on whether she was a “subscriber” (a term not defined in the statute).

Because plaintiff’s allegations failed to establish a relationship with defendant sufficient to characterize her as a subscriber of defendant’s goods or services, the court dismissed the VPPA claim with leave to amend. It observed: “Conventionally, ‘subscription’ entails an exchange between subscriber and provider whereby the subscriber imparts money and/or personal information in order to receive a future and recurrent benefit, whether that benefit comprises, for instance, periodical magazines, club membership, cable services, or email updates.” In this case, “[s]uch casual consumption of web content, without any attempt to affiliate with or connect to the provider, exhibit[ed] none of the critical characteristics of ‘subscription’ and therefore [did] not suffice to render [plaintiff] a subscriber of [defendant’s] services.”

Austin-Spearman v. AMC Network Entertainment LLC, 2015 WL 1539052 (S.D.N.Y. April 7, 2015)

Evan Brown is an attorney in Chicago helping clients manage issues involving technology and new media.

Best practices for providers of goods and services on the Internet of Things

Today the United States Federal Trade Commission issued a report detailing a number of consumer-focused issues arising from the growing Internet of Things (IoT). Companies should pay attention to the portion of the report containing the Commission’s recommendations on best practices for participants in the IoT space (such as device manufacturers and service providers).

The Commission structured its recommendations around four of the “FIPPs” – the Fair Information Practice Principles – which first appeared in the 1970s and which inform much of the world’s regulation designed to protect personal data. The recommendations focused on data security, data minimization, notice, and choice.

DATA SECURITY

IoT participants should implement reasonable data security. The Commission noted that “[o]f course, what constitutes reasonable security for a given device will depend on a number of factors, including the amount and sensitivity of data collected and the costs of remedying the security vulnerabilities.” Nonetheless, companies should:

  • Implement “security by design”
  • Ensure their personnel practices promote good security
  • Retain and oversee service providers that provide reasonable security
  • Implement a “defense-in-depth” approach where appropriate
  • Implement reasonable access control measures
  • Monitor products in the marketplace and patch vulnerabilities

Security by Design

Companies should build “security by design” into their devices at the outset, rather than as an afterthought, by:

  • Conducting a privacy or security risk assessment to consider the risks presented by the collection and retention of consumer information.
  • Incorporating “smart defaults,” such as requiring consumers to change default passwords during the set-up process (see the sketch following this list).
  • Considering how to minimize the data collected and retained.
  • Testing security measures before launching products.
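
To make the “smart defaults” bullet concrete, here is a minimal sketch in Python of a device set-up routine that refuses to finish until the factory-default password has been replaced with a reasonably strong one. The function names, the default value, and the 12-character threshold are illustrative assumptions, not anything prescribed by the FTC report.

```python
import hashlib
import os
import secrets

FACTORY_DEFAULT_PASSWORD = "admin"  # hypothetical factory default


def _hash_password(password: str, salt: bytes) -> bytes:
    # Store only a salted hash of the new password, never the plaintext.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)


def complete_setup(new_password: str) -> dict:
    """Finish device set-up only if the default password has been changed."""
    if new_password == FACTORY_DEFAULT_PASSWORD:
        raise ValueError("Set-up cannot finish until the default password is changed.")
    if len(new_password) < 12:
        raise ValueError("Choose a password of at least 12 characters.")
    salt = os.urandom(16)
    return {
        "password_salt": salt,
        "password_hash": _hash_password(new_password, salt),
        "setup_complete": True,
    }


if __name__ == "__main__":
    # Simulate a user choosing a random strong password during set-up.
    print(complete_setup(secrets.token_urlsafe(12))["setup_complete"])
```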

Personnel Practices and Good Security

Companies should ensure their personnel practices promote good security by making security an executive-level concern and training employees about good security practices. A company should not assume that the ability to write code is equivalent to an understanding of the security of an embedded device.

Retain and Oversee Service Providers That Provide Reasonable Security

The Commission urged IoT participants to retain service providers that are capable of maintaining reasonable security and to oversee those companies’ performance to ensure that they do so. On this point, the Commission specifically noted that failure to do so could result in FTC law enforcement action. It pointed to a recent (non-IoT) case in which a medical transcription company outsourced its services to independent typists in India who stored their notes in clear text on an unsecured server. Patients in the U.S. were shocked to find their confidential medical information showing up in web searches.

The “Defense-in-Depth” Approach

The Commission urged companies to take additional steps to protect particularly sensitive information (e.g., health information). For example, instead of relying on the user to ensure that data passing over his or her local wireless network is encrypted using the Wi-Fi password, companies should undertake additional efforts to ensure that data is not publicly available.
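
One hedged way to picture this layered approach in Python: encrypt a sensitive reading at the application layer before it ever leaves the device, so the data does not depend solely on the user’s Wi-Fi password for protection. The sketch below uses the Fernet primitives from the third-party cryptography package; the simplistic key handling is an assumption made for illustration, since a real device would provision and protect keys much more carefully.

```python
import json

from cryptography.fernet import Fernet  # pip install cryptography

# In practice the key would be provisioned securely per device,
# not generated ad hoc like this.
DEVICE_KEY = Fernet.generate_key()
cipher = Fernet(DEVICE_KEY)


def encrypt_reading(reading: dict) -> bytes:
    """Encrypt a sensitive reading before it leaves the device."""
    payload = json.dumps(reading).encode()
    return cipher.encrypt(payload)


def decrypt_reading(token: bytes) -> dict:
    """Decrypt on the receiving service, which holds the same key."""
    return json.loads(cipher.decrypt(token))


if __name__ == "__main__":
    token = encrypt_reading({"heart_rate": 72, "timestamp": "2015-01-27T10:00:00Z"})
    # Even if the local Wi-Fi network is open, the payload itself is ciphertext.
    print(decrypt_reading(token))
```

The design point is that protection travels with the data rather than with the network: an open or poorly secured home network does not, by itself, expose the reading in transit.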

Reasonable Access Control Measures

While tools such as strong authentication could be used to permit or restrict IoT devices from interacting with other devices or systems, the Commission noted that companies should ensure such measures do not unduly impede the usability of the device.
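
A minimal sketch of how a device might enforce access control without burdening the user: commands from a companion device are accepted only when they carry a secret established during an initial pairing step, and the check happens silently. The token, names, and pairing flow here are illustrative assumptions, not details from the report.

```python
import hmac

# A secret shared with the companion app during initial pairing (hypothetical).
PAIRED_DEVICE_TOKEN = b"example-token-set-during-pairing"


def is_authorized(presented_token: bytes) -> bool:
    """Constant-time comparison avoids leaking information via timing."""
    return hmac.compare_digest(presented_token, PAIRED_DEVICE_TOKEN)


def handle_command(presented_token: bytes, command: str) -> str:
    if not is_authorized(presented_token):
        return "rejected: unrecognized device"
    return f"executing: {command}"


if __name__ == "__main__":
    print(handle_command(PAIRED_DEVICE_TOKEN, "unlock"))
    print(handle_command(b"wrong-token", "unlock"))
```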

Monitoring of Products and Patching of Vulnerabilities

Companies may reasonably decide to limit the time during which they provide security updates and software patches, but must weigh these decisions carefully. IoT participants should also be forthright in their representations about providing ongoing security updates and software patches to consumers. Disclosing the length of time companies plan to support and release software updates for a given product line will help consumers better understand the safe “expiration dates” for their commodity internet-connected devices.

DATA MINIMIZATION

Data minimization refers to the concept that companies should limit the data they collect and retain, and dispose of it once they no longer need it. The Commission acknowledged the concern that requiring data minimization might curtail innovative uses of data. A new enterprise may not be able to reasonably foresee the types of uses it may have for information gathered in the course of providing a connected device or operating a service in conjunction with connected devices. Despite these concerns, the Commission recommended that companies consider reasonably limiting their collection and retention of consumer data.

The Commission observed how data minimization mitigates risk in two ways. First, the less information in a database, the less attractive the database is as a target for hackers. Second, having less data reduces the risk that the company providing the device or service will use the information in a way that the consumer does not expect.

The Commission provided a useful example of how data minimization might work in practice. It discussed a hypothetical startup that develops a wearable device, such as a patch, that can assess a consumer’s skin condition. The device does not need to collect precise geolocation information in order to work, but it has that capability. The device manufacturer believes that such information could be useful for a future product feature that would enable users to find treatment options in their area. The Commission observed that as part of a data minimization exercise, the company should consider whether it should wait to collect geolocation information until after it begins to offer the new product feature, at which time it could disclose the new collection and seek consent. The company should also consider whether it could offer the same feature while collecting less information, such as by collecting zip code rather than precise geolocation. If the company does decide it needs the precise geolocation information, the Commission would recommend that the company provide a prominent disclosure about its collection and use of this information, and obtain consumers’ affirmative express consent. And the company should establish reasonable retention limits for the data it does collect.
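
Read as an engineering practice, the skin-patch hypothetical amounts to a rule applied at the point of collection: drop precise coordinates, and keep at most a zip code, and only when the location-based feature is actually offered and consented to. The sketch below is a hedged illustration of that reading; the field names and payload structure are assumptions.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SkinReading:
    moisture_level: float
    zip_code: Optional[str] = None  # coarse location only, and only if needed


def minimize(raw: dict, collect_location: bool = False) -> SkinReading:
    """Keep only what the current feature set actually needs.

    Precise latitude/longitude in the raw sensor payload is dropped;
    at most a zip code is retained, and only when the location-based
    feature is offered and the user has consented.
    """
    reading = SkinReading(moisture_level=raw["moisture_level"])
    if collect_location and raw.get("zip_code"):
        reading.zip_code = raw["zip_code"]
    return reading


if __name__ == "__main__":
    raw_payload = {
        "moisture_level": 0.42,
        "latitude": 41.8781,   # captured by the sensor but not needed
        "longitude": -87.6298,
        "zip_code": "60601",
    }
    print(minimize(raw_payload))                         # location dropped
    print(minimize(raw_payload, collect_location=True))  # zip code only
```

A retention limit would sit alongside this collection rule, so that even the coarse data is disposed of once it is no longer needed.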

As an aspect of data minimization, the Commission also discussed de-identification as a “viable option in some contexts” to help minimize data and the risk of potential consumer harm. But as with any conversation about de-identification, the Commission addressed the risk of re-identification. On this note, the Commission referred to its 2012 Privacy Report, in which it said that companies should:

  • take reasonable steps to de-identify the data, including by keeping up with technological developments;
  • publicly commit not to re-identify the data; and
  • have enforceable contracts in place with any third parties with whom they share the data, requiring the third parties to commit not to re-identify the data.

This approach helps ensure that if data turns out not to be reasonably de-identified and is later re-identified, regulators can hold the company responsible.
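
The first of these steps often begins with replacing direct identifiers with keyed pseudonyms so that records can be analyzed without exposing who they describe. Here is a minimal sketch of that idea in Python; the key, the list of identifier fields, and the record layout are assumptions for illustration, and keyed hashing alone does not guarantee that data cannot be re-identified.

```python
import hashlib
import hmac

# Secret held by the company; without it, pseudonyms are hard to reverse
# or to link back to the original identifiers.
PSEUDONYM_KEY = b"rotate-and-protect-this-key"

DIRECT_IDENTIFIERS = {"name", "email", "device_serial"}


def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with keyed hashes; keep other fields."""
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            digest = hmac.new(PSEUDONYM_KEY, str(value).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()
        else:
            out[field] = value
    return out


if __name__ == "__main__":
    print(pseudonymize({
        "name": "Jane Doe",
        "email": "jane@example.com",
        "device_serial": "SN-12345",
        "avg_heart_rate": 68,
    }))
```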

NOTICE AND CHOICE

Giving consumers notice that information is being collected, and the ability to make choices about that collection, is problematic in many IoT contexts. Data is collected continuously, by many integrated devices and systems, and getting a consumer’s consent in each context might discourage use of the technology. Moreover, there is often no easy user interface through which to provide notice and offer choice.

With these concerns in mind, the Commission noted that “not every data collection requires choice.” As an alternative, the Commission acknowledged the efficacy of a use-based approach. Companies should not be compelled, for example, to provide choice before collecting and using consumer data for practices that are consistent with the context of a transaction or the company’s relationship with a consumer. By way of example, the Commission discussed a hypothetical purchaser of a “smart oven”. The company could use temperature data to recommend another of the company’s kitchen products. The consumer would expect that. But a consumer would not expect the company to disclose information to a data broker or an ad network without having been given notice of that sharing and the ability to choose whether it should occur.

Given the practical difficulty of notice and choice on the IoT, the Commission acknowledged there is no one-size-fits-all approach. But it did suggest a number of mechanisms for communications of this sort, including:

  • Choices at point of sale
  • Tutorials (like the one Facebook uses)
  • QR codes on the device
  • Choices during setup
  • Management portals or dashboards
  • Icons
  • Out-of-band notifications (e.g., via email or text)
  • User-experience approach – “learning” what the user wants, and adjusting automatically

Conclusion

The Commission’s report does not have the force of law, but is useful in a couple of ways. From a practical standpoint, it serves as a guide for how to avoid engaging in flagrant privacy and security abuses on the IoT. But it also serves to frame a larger discussion about how providers of goods and services can and should approach the innovation process for the development of the Internet of Things.

When is it okay to use social media to make fun of people?

News out of California describes a Facebook page called 530 Fatties that was created to collect photos of and poke fun at obese people. It’s a rude project, and it sets the context for discussing some intriguing legal and normative issues.

Apparently the site collects photos that are taken in public. One generally doesn’t have a privacy interest in being photographed while in public places. And that seems pretty straightforward if you stop and think about it — you’re in public after all. But should technology change that legal analysis? Mobile devices with good cameras connected to high speed broadband networks make creation, sharing and shaming much easier than it used to be. A population equipped with these means essentially turns all public space into a panopticon. Does that mean the individual should be given more of something-like-privacy when in public? If you think that’s crazy, consider it in light of what Justice Sotomayor wrote in her concurrence in the 2012 case of U.S. v. Jones: “I would ask whether people reasonably expect that their movements will be recorded and aggregated in a manner that enables [one] to ascertain, more or less at will, their political and religious beliefs, sexual habits, and so on.”

Apart from privacy harms, what else is at play here? For the same reasons that mobile cameras + social media jeopardize traditional privacy assurances, the combination can magnify the emotional harms against a person. The public shaming that modern technology occasions can inflict deeper wounds because of the greater spatial and temporal reach of the medium. One can now easily distribute a photo or other content to countless individuals, and since the web means the end of forgetting, that content may be around for much longer than the typical human memory.

Against these concerns are the free speech interests of the speaking parties. In the U.S. especially, it’s hardwired into our sensibilities that each of us has great freedom to speak and otherwise express ourselves. The traditional First Amendment analysis will protect speech — even if it offends — unless there is something truly unlawful about it. For example, there is no free speech right to defame, to distribute obscene materials, or to use “fighting words.” Certain forms of harassment also fall into the category of unprotected speech. How should we examine the role that technology plays in turning what would otherwise be playground-like bullying (like calling someone a fatty) into unlawful speech that can subject one to civil or even criminal liability? Is the impact of technology’s use even a valid issue to discuss?

Finally, we should examine the responsibility of the intermediaries here. A social media platform generally is going to be protected by the Communications Decency Act at 47 USC 230 from liability for third party content. But we should discuss the roles of the intermediary in terms other than pure legal ones. Many social media platforms are proactive in taking down otherwise lawful content that has the tendency to offend. The pervasiveness of social media underscores the power that these platforms have to shape normative values around what is appropriate behavior among individuals. This power is indeed potentially greater than any legal or governmental power to constrain the generation and distribution of content.

Evan Brown is an attorney in Chicago advising clients on matters dealing with technology, the internet and new media.

Company facing liability for accessing employee’s Twitter and Facebook accounts

While plaintiff was away from the office recovering from a serious brain injury she suffered in a work-related auto accident, some of her co-workers accessed and posted from her Twitter and Facebook accounts, allegedly without authorization. (There was some dispute as to whether those accounts were personal to plaintiff or whether they were intended to promote the company.) Plaintiff sued, alleging several theories, including violations of the Lanham Act and the Stored Communications Act. Defendants moved for summary judgment. The court dismissed the Lanham Act claim but did not dismiss the Stored Communications Act claim.

Plaintiff had asserted a Lanham Act “false endorsement” claim, which occurs when a person’s identity is connected with a product or service in such a way that consumers are likely to be misled about that person’s sponsorship or approval of the product or service. The court found that although plaintiff had a protectable interest in her “personal brand,” she had not properly put evidence before the court that she suffered the economic harm necessary for a Lanham Act violation. The record showed that plaintiff’s alleged damages related to her mental suffering, something not recoverable under the Lanham Act.

As for the Stored Communications Act claim, the court found that the question of whether defendants were authorized to access and post using plaintiff’s social media accounts should be left up to the jury (and not determined on summary judgment). Defendants had also argued that plaintiff’s Stored Communications Act claim should be thrown out because she had not shown any actual damages. But the court held plaintiff could be entitled to the $1,000 minimum statutory damages under the act even without a showing of actual harm.

Maremont v. Susan Fredman Design Group, Ltd., 2014 WL 812401 (N.D.Ill. March 3, 2014)

Massachusetts supreme court says cops should have gotten warrant before obtaining cell phone location data

Court takes a “different approach” with respect to one’s expectation of privacy

After defendant’s girlfriend was murdered in 2004, the police got a “D order” (an order authorized under 18 U.S.C. 2703(d)) from a state court to compel Sprint to turn over historical cell site location information (“CSLI”) showing where defendant placed telephone calls around the time of the girlfriend’s murder. Importantly, the government did not get a warrant for this information. After the government indicted defendant seven years later, he moved to suppress the CSLI evidence arguing a violation of his Fourth Amendment rights. The trial court granted the motion to suppress, and the government sought review with the Massachusetts supreme court. That court agreed, holding that a search warrant based on probable cause was required.

The government invoked the third party doctrine, arguing that no search in the constitutional sense occurred because CSLI was a business record of the defendant’s cellular service provider, a private third party. According to the government, the defendant could thus have no expectation of privacy in location information — i.e., information about his location when using the cell phone — that he voluntarily revealed.

The court concluded that although the CSLI at issue was a business record of the defendant’s cellular service provider, he had a reasonable expectation of privacy in it, and in the circumstances of this case — where the CSLI obtained covered a two-week period — the warrant requirement of the Massachusetts constitution applied. The court made a qualitative distinction in cell phone location records to reach its conclusion:

No cellular telephone user . . . voluntarily conveys CSLI to his or her cellular service provider in the sense that he or she first identifies a discrete item of information or data point like a telephone number (or a check or deposit slip…) … In sum, even though CSLI is business information belonging to and existing in the records of a private cellular service provider, it is substantively different from the types of information and records contemplated by [the Supreme Court’s seminal third-party doctrine cases]. These differences lead us to conclude that for purposes of considering the application of [the Massachusetts constitution] in this case, it would be inappropriate to apply the third-party doctrine to CSLI.

To get to this conclusion, the court avoided the question of whether obtaining the records constituted a “search” under the Fourth Amendment, but focused instead on the third party doctrine (and the expectation of privacy one has in information stored on a third party system) in relation to the Massachusetts constitution.

In a sense, though, the court gave the government another bite at the apple. It remanded the case to the trial court where the government could seek to establish that the affidavit submitted in support of its application for an order under 18 U.S.C. § 2703(d) demonstrated probable cause for the CSLI records at issue.

Commonwealth v. Augustine, — N.E.3d —, 2014 WL 563258 (Mass. February 18, 2014)

Is the future a trade between convenience and privacy?

This TechCrunch piece talks about how (predictably) Google wants to build the “ultimate personal assistant.” With Google collecting user preferences across platforms and applying algorithms to ascertain intentions, getting around in the world, purchasing things, and interacting with others could get a lot easier.

But at what cost? The success of any platform that becomes a personal assistant in the cloud would depend entirely on the collection of vast amounts of information about the individual. And since Google makes its fortunes on advertising, there is no reason to be confident that the information gathered will not be put to uses other than adding conveniences to the user’s life. Simply stated, the platform is privacy-destroying.

What if one wants to opt out of this utopically convenient future? Might such a person be unfairly disadvantaged by, for example, choosing to undertake tasks the “old fashioned” way, unassisted by the privacy-eviscerating tools? This points to larger questions about augmented reality. As a society, will we implement regulations to level the playing field between those who are not augmented and those who are? Questions of social justice in the future may take a different tone.

Guy faces lawsuit for using another man’s Facebook pics to send sexually explicit communications to undercover cops

Defendant emailed three pictures, thinking he was communicating with two 14-year-old girls. But he was actually communicating with a police detective. And the pictures were not of defendant, but of plaintiff — a cop in a neighboring community. The pictures were not sexually explicit, but the accompanying communications were. Defendant had copied them from plaintiff’s Facebook page.

Plaintiff and his wife sued defendant under a number of tort theories. Defendant moved to dismiss plaintiffs’ claims for false light publicity and intentional infliction of emotional distress. The court denied the motion.

It found that the false light in which defendant placed plaintiff through his conduct would be highly offensive to a reasonable person, and that defendant had knowledge of or acted in reckless disregard as to the falsity of the identity of the person in the photo, and the false light into which the plaintiff would be placed.

As for the intentional infliction of emotional distress claim, the court found that: (1) defendant intended to inflict emotional distress or that he knew or should have known that emotional distress was the likely result of his conduct; (2) that the conduct was extreme and outrageous; (3) that the conduct was the cause of plaintiff’s distress; and (4) that the emotional distress sustained by the plaintiff was severe.

Defendant argued that his conduct was not extreme and outrageous. The court addressed that argument by noting that:

[Defendant] cannot do that with a straight face. The test is whether “the recitation of the facts to an average member of the community would arouse his resentment against the actor and lead him to exclaim, Outrageous!” . . . This is such a case.

Plaintiff’s wife’s intentional infliction of emotional distress claim survived as well. This was not, as defendant argued, an allegation of bystander emotional distress, such as that of a witness to an automobile accident. Defendant’s conduct implied that plaintiff was a sexual predator. This would naturally reflect on his spouse and cause her great personal embarrassment and natural concern for her own personal health quite apart from the distress she may have experienced from observing her husband’s own travail.

Dzamko v. Dossantos, 2013 WL 5969531 (Conn.Super. October 23, 2013)
