Is storing protected information on an unencrypted server a disclosure of that information?


Back in the 1990s, Congress recognized that stalkers were aided in their crimes by using victims’ driver’s license information, and states were selling driver’s license information to marketers. So Congress passed the Driver’s Privacy Protection Act, 18 U.S.C. § 2721, et seq. (the “DPPA”). This statute makes it unlawful for any person to knowingly disclose personal information from a motor vehicle record for any use other than certain uses that the statute permits.

Defendant had more than 27 million Texas driver’s license records that it stored on an external unencrypted server. In 2020, it announced that a third party had accessed the records without authorization. As expected, the class action lawyers jumped on board and sued under the DPPA.

The lower court dismissed the DPPA claim in response to defendant’s motion to dismiss for failure to state a claim. Plaintiffs sought review with the Fifth Circuit Court of Appeals. On appeal, the court affirmed the dismissal.

It held that plaintiffs failed to plausibly allege that storing the data on an unencrypted server amounted to a “disclosure”. More specifically, although plaintiffs argued that defendant had placed the information on a server that was readily accessible to the public, that assertion was nowhere in the complaint, nor was it supported by the facts alleged in the complaint.

In finding there to be no disclosure, the court observed that the storage of the data, as alleged, did not make it visible to a digital “passer-by”. This made the case different from Senne v. Village of Palatine, Ill., 695 F.3d 597 (7th Cir. 2012), in which a police officer disclosed information by putting a traffic ticket on a windshield, which any passer-by could see. The court also looked to Enslin v. Coca-Cola Co., 136 F. Supp. 3d 654 (E.D. Pa. 2015), in which that court held there to be no disclosure under the DPPA when someone stole an unencrypted laptop containing information protected under the statute.

Allen v. Vertafore, Inc., No. 21-20404 (5th Cir., March 11, 2022)

Best practices for providers of goods and services on the Internet of Things

Today the United States Federal Trade Commission issued a report in which it detailed a number of consumer-focused issues arising from the growing Internet of Things (IoT). Companies should pay attention to the portion of the report containing the Commission’s recommendations on best practices to participants (such as device manufacturers and service providers) in the IoT space.

The Commission structured its recommendations around four of the “FIPPs” – the Fair Information Practice Principles – which first appeared in the 1970s and which inform much of the world’s regulation geared to protect personal data. The recommendations focused on data security, data minimization, notice, and choice.

DATA SECURITY

IoT participants should implement reasonable data security. The Commission noted that “[o]f course, what constitutes reasonable security for a given device will depend on a number of factors, including the amount and sensitivity of data collected and the costs of remedying the security vulnerabilities.” Nonetheless, companies should:

  • Implement “security by design”
  • Ensure their personnel practices promote good security
  • Retain and oversee service providers that provide reasonable security
  • Implement a “defense-in-depth” approach where appropriate
  • Implement reasonable access control measures
  • Monitor products in the marketplace and patch vulnerabilities

Security by Design

Companies should build “security by design” into their devices at the outset, rather than as an afterthought, by:

  • Conducting a privacy or security risk assessment to consider the risks presented by the collection and retention of consumer information.
  • Incorporating the use of “smart defaults” such as requiring consumers to change default passwords during the set-up process.
  • Considering how to minimize the data collected and retained.
  • Testing security measures before launching products.
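The “smart defaults” idea lends itself to a short sketch. Everything here beyond the idea itself (the function name, the factory default, the 12-character minimum, the hashing parameters) is an illustrative assumption, not something the Commission's report prescribes:

```python
import hashlib
import os

DEFAULT_PASSWORD = "admin"  # illustrative factory default


def complete_setup(new_password: str) -> bytes:
    """Refuse to finish device setup until the factory default is replaced."""
    if new_password == DEFAULT_PASSWORD:
        raise ValueError("the factory default password must be changed during setup")
    if len(new_password) < 12:  # minimum length is an illustrative choice
        raise ValueError("password must be at least 12 characters")
    # Store only a salted hash of the new password, never the plaintext
    salt = os.urandom(16)
    return salt + hashlib.pbkdf2_hmac("sha256", new_password.encode(), salt, 100_000)
```

The point of this design is that setup simply cannot complete while the shipped password is still in place.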

Personnel Practices and Good Security

Companies should ensure their personnel practices promote good security by making security an executive-level concern and training employees about good security practices. A company should not assume that the ability to write code is equivalent to an understanding of the security of an embedded device.

Retain and Oversee Service Providers That Provide Reasonable Security

The Commission urged IoT participants to retain service providers that are capable of maintaining reasonable security and to oversee those companies’ performance to ensure that they do so. On this point, the Commission specifically noted that failure to do so could result in FTC law enforcement action. It pointed to a recent (non-IoT) case in which a medical transcription company outsourced its services to independent typists in India who stored their notes in clear text on an unsecured server. Patients in the U.S. were shocked to find their confidential medical information showing up in web searches.

The “Defense-in-Depth” Approach

The Commission urged companies to take additional steps to protect particularly sensitive information (e.g., health information). For example, instead of relying on the user to ensure that data passing over his or her local wireless network is encrypted using the Wi-Fi password, companies should undertake additional efforts to ensure that data is not publicly available.
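One concrete way to layer defenses is for the device itself to speak TLS, so that confidentiality no longer depends on whether the consumer secured the local wireless network. A minimal sketch using Python’s standard library follows; pinning the minimum protocol version is an illustrative choice, not something the report specifies:

```python
import ssl

# Build a client-side TLS context for the device's outbound connections.
# Even on an open or weakly protected Wi-Fi network, traffic sent through
# a socket wrapped with this context is encrypted end to end.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

# create_default_context() already turns on certificate verification and
# hostname checking, so the device also resists server impersonation.
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True
```

The design choice is the layering: the application encrypts its own traffic rather than trusting whatever protection the link layer happens to provide.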

Reasonable Access Control Measures

While tools such as strong authentication can be used to permit or restrict IoT devices from interacting with other devices or systems, the Commission noted that companies should ensure such measures do not unduly impede the usability of the device.

Monitoring of Products and Patching of Vulnerabilities

Companies may reasonably decide to limit the time during which they provide security updates and software patches, but must weigh these decisions carefully. IoT participants should also be forthright in their representations about providing ongoing security updates and software patches to consumers. Disclosing the length of time companies plan to support and release software updates for a given product line will help consumers better understand the safe “expiration dates” for their commodity internet-connected devices.

DATA MINIMIZATION

Data minimization refers to the concept that companies should limit the data they collect and retain, and dispose of it once they no longer need it. The Commission acknowledged the concern that requiring data minimization might curtail innovative uses of data. A new enterprise may not be able to reasonably foresee the types of uses it may have for information gathered in the course of providing a connected device or operating a service in conjunction with connected devices. Despite these concerns, the Commission recommended that companies consider reasonably limiting their collection and retention of consumer data.

The Commission observed how data minimization mitigates risk in two ways. First, the less information in a database, the less attractive the database is as a target for hackers. Second, having less data reduces the risk that the company providing the device or service will use the information in a way that the consumer does not expect.

The Commission provided a useful example of how data minimization might work in practice. It discussed a hypothetical startup that develops a wearable device, such as a patch, that can assess a consumer’s skin condition. The device does not need to collect precise geolocation information in order to work, but it has that capability. The device manufacturer believes that such information could be useful for a future product feature that would enable users to find treatment options in their area. The Commission observed that as part of a data minimization exercise, the company should consider whether it should wait to collect geolocation information until after it begins to offer the new product feature, at which time it could disclose the new collection and seek consent. The company should also consider whether it could offer the same feature while collecting less information, such as by collecting zip code rather than precise geolocation. If the company does decide it needs the precise geolocation information, the Commission would recommend that the company provide a prominent disclosure about its collection and use of this information, and obtain consumers’ affirmative express consent. And the company should establish reasonable retention limits for the data it does collect.

As an aspect of data minimization, the Commission also discussed de-identification as a “viable option in some contexts” to help minimize data and the risk of potential consumer harm. But as with any conversation about de-identification, the Commission addressed the risks associated with the chances of re-identification. On this note, the Commission referred to its 2012 Privacy Report in which it said that companies should:

  • take reasonable steps to de-identify the data, including by keeping up with technological developments;
  • publicly commit not to re-identify the data; and
  • have enforceable contracts in place with any third parties with whom they share the data, requiring the third parties to commit not to re-identify the data.

This approach ensures that if the data is not reasonably de-identified and then is re-identified in the future, regulators can hold the company responsible.
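The first of those steps can be as simple as keyed pseudonymization: replacing direct identifiers with tokens that cannot be reversed without a secret the company holds back. The field names and the inline key below are illustrative assumptions (a real deployment would store the key separately from any shared dataset):

```python
import hashlib
import hmac

SECRET_KEY = b"keep-this-key-out-of-shared-datasets"  # illustrative only


def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, one-way token.

    Unlike a plain unsalted hash, the token cannot be brute-forced back
    to the original identifier by anyone who lacks SECRET_KEY.
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()


# A shared record keeps coarse attributes but no direct identifier.
record = {"driver_id": pseudonymize("TX-12345678"), "zip": "78701"}
```

The same input always yields the same token, so records can still be linked for analysis without exposing the underlying identifier.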

NOTICE AND CHOICE

Giving consumers notice that information is being collected, and the ability to make choices as to that collection, is problematic in many IoT contexts. Data is collected continuously, by many integrated devices and systems, and getting a consumer’s consent in each context might discourage use of the technology. Moreover, there is often no easy user interface through which to provide notice and offer choice.

With these concerns in mind, the Commission noted that “not every data collection requires choice.” As an alternative, the Commission acknowledged the efficacy of a use-based approach. Companies should not be compelled, for example, to provide choice before collecting and using consumer data for practices that are consistent with the context of a transaction or the company’s relationship with a consumer. By way of example, the Commission discussed a hypothetical purchaser of a “smart oven”. The company could use temperature data to recommend another of the company’s kitchen products. The consumer would expect that. But a consumer would not expect the company to disclose information to a data broker or an ad network without having been given notice of that sharing and the ability to choose whether it should occur.

Given the practical difficulty of notice and choice on the IoT, the Commission acknowledged there is no one-size-fits-all approach. But it did suggest a number of mechanisms for communications of this sort, including:

  • Choices at point of sale
  • Tutorials (like the one Facebook uses)
  • QR codes on the device
  • Choices during setup
  • Management portals or dashboards
  • Icons
  • Out-of-band notifications (e.g., via email or text)
  • User-experience approach – “learning” what the user wants, and adjusting automatically

Conclusion

The Commission’s report does not have the force of law, but is useful in a couple of ways. From a practical standpoint, it serves as a guide for how to avoid engaging in flagrant privacy and security abuses on the IoT. But it also serves to frame a larger discussion about how providers of goods and services can and should approach the innovation process for the development of the Internet of Things.

Why be concerned with social media estate planning?

The headline of this recent blog post by the U.S. government promises to answer the question of why you should do some social media estate planning. But the post falls short of providing a compelling reason to plan for how your social media accounts and other digital assets should be handled in the event of your demise. So I’ve come up with my own list of reasons why this might be good both for the individual and for our culture:

Security. People commit identity theft on both the living and the dead. (See, for example, the story of the Tennessee woman who collected her dead aunt’s Social Security checks for 22 years.) While the living can run credit checks and otherwise monitor the use of their personal information, the deceased are not so diligent. Ensuring that the dataset comprising a person’s social media identity is accounted for and monitored should reduce the risk of that information being used nefariously.

Avoiding sad reminders. Spammers have no qualms about commandeering a dead person’s email account. As one Virginia family knows, putting a stop to that form of “harassment” can be painful and inconvenient.

Keeping social media uncluttered. This reason lies more in the public interest than in the interest of the deceased and his or her relatives. The advertising model for social media revenue generation relies on the accuracy and effectiveness of information about the user base. The presence of a bunch of dead people’s accounts, which are orphaned, so to speak, dilutes the effectiveness of the other data points in the social graph. So it is a good thing to prune the accounts of the deceased, or otherwise see that they are properly curated.

Preserving our heritage for posterity. Think of the ways you know about your family members that came before you. Stories and oral tradition are generally annotated by photo albums, personal correspondence and other snippets of everyday life. Social media is becoming a preferred substrate for the collection of those snippets. To have that information wander off into the digital ether unaccounted for is to forsake a means of knowing about the past.

How big a deal is this, anyway? This Mashable article commenting on the U.S. government post says that last year about 500,000 Facebook users died. That’s about 0.06% of the user base. (Incidentally, Facebook users seem much less likely to die than the general population, as about 0.7% of the world’s entire population died last year. Go here if you want to do the math yourself.)
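For anyone who does want to do the math, here is the back-of-the-envelope version. The 500,000 figure comes from the article; the user-base and worldwide mortality figures are rough assumptions for illustration:

```python
fb_deaths = 500_000            # Facebook users reported to have died last year
fb_users = 845_000_000         # assumed size of the user base at the time
world_deaths = 57_000_000      # assumed worldwide deaths per year
world_pop = 7_000_000_000      # assumed world population

fb_rate = fb_deaths / fb_users * 100          # share of users who died, in percent
world_rate = world_deaths / world_pop * 100   # share of world population, in percent

# The user base does die at a much lower rate than the world at large,
# likely because Facebook's users skew young.
assert fb_rate < world_rate
```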

I say it’s kind of a big deal, but a deal that’s almost certain to get bigger.

Computer Fraud and Abuse Act case against hard drive destroying director goes forward

Deloitte & Touche LLP v. Carlson, 2011 WL 2923865 (N.D. Ill. July 18, 2011)

Defendant had risen to the level of Director of a large consulting and professional services firm. (There is some irony here – this case involves the destruction of electronic data, and defendant had been in charge of the firm’s security and privacy practice.)

After defendant left the firm to join a competitor, he returned his work-issued laptop with the old hard drive having been replaced by a new blank one. Defendant had destroyed the old hard drive because it had personal data on it such as tax returns and account information.

The firm sued, putting forth a number of claims, including violation of the Computer Fraud and Abuse Act (CFAA). Defendant moved to dismiss for failure to state a claim upon which relief can be granted. The court denied the motion.

Defendant argued that the CFAA claim should fail because plaintiff had not adequately pled that the destruction of the hard drive was done “without authorization.” The court rejected this argument.

The court looked to Int’l Airport Centers LLC v. Citrin, 440 F.3d 418 (7th Cir. 2006) for guidance on the question of whether defendant’s alleged conduct was “without authorization.” Int’l Airport Centers held that an employee acts without authorization as contemplated under the CFAA if he or she breaches a duty of loyalty to the employer prior to the alleged data destruction.

In this case, plaintiff alleged that defendant began soliciting another employee to leave before defendant left, and that defendant allegedly destroyed the data to cover his tracks. On these facts, the court found the “without authorization” element to be adequately pled.

Negligence claim allowed in laptop theft case

Ruiz v. Gap, Inc., 540 F.Supp.2d 1121 (N.D. Cal. March 24, 2008)

In 2006, Ruiz applied for a job at the Gap and was required to provide his Social Security number. A vendor hired by the Gap for recruiting stored Ruiz’s information on a laptop which, as luck would have it, was stolen.

Though he was not (at least yet) the victim of identity theft, Ruiz sued the Gap for negligence. The Gap moved for judgment on the pleadings which the court also treated as a motion to dismiss for failure to state a claim. The court denied the motion to dismiss as to negligence (and granted the motion as to claims for bailment, unfair competition and violation of the California constitutional right to privacy). But Ruiz’s standing to bring the claim was tenuous.

The Gap had argued that Ruiz lacked standing. His only alleged harm was that he was at an increased risk for identity theft. The court’s analysis of the Gap’s objection to standing focused on the first element of the Lujan test (Lujan v. Defenders of Wildlife, 504 U.S. 555 (1992)), namely, whether Ruiz’s alleged injury was “concrete and particularized.”

The Ninth Circuit has held that for allegations of future harm to confer standing, the threat must be credible, and the plaintiff must show that there is a “significant possibility” that future harm will ensue. The Lujan case (which is the leading Supreme Court authority on standing) essentially gives plaintiffs the benefit of the doubt at the pleading stage: a court is to presume that general allegations embrace those specific allegations that are necessary to show a particularized injury. Ruiz’s general allegations of the threat of future harm were thus sufficient to confer standing.

But the court warned Ruiz that the standing requirement does not apply only at the pleading stage; it is an indispensable part of a plaintiff’s case throughout the litigation. In other words, he’ll have to come up with more later to keep the case in court.

So in denying the motion to dismiss the negligence claim, the court incorporated its standing analysis. The only issue on the point of negligence was whether Ruiz had suffered an injury. Ruiz’s general allegations were sufficient.
