
Best practices for providers of goods and services on the Internet of Things

Today the United States Federal Trade Commission issued a report detailing a number of consumer-focused issues arising from the growing Internet of Things (IoT). Companies should pay attention to the portion of the report containing the Commission’s recommendations on best practices for participants in the IoT space, such as device manufacturers and service providers.

The Commission structured its recommendations around four of the “FIPPs” – the Fair Information Practice Principles – which first appeared in the 1970s and which inform much of the world’s regulation aimed at protecting personal data. The recommendations focus on data security, data minimization, notice, and choice.

DATA SECURITY

IoT participants should implement reasonable data security. The Commission noted that “[o]f course, what constitutes reasonable security for a given device will depend on a number of factors, including the amount and sensitivity of data collected and the costs of remedying the security vulnerabilities.” Nonetheless, companies should:

  • Implement “security by design”
  • Ensure their personnel practices promote good security
  • Retain and oversee service providers that provide reasonable security
  • Implement a “defense-in-depth” approach where appropriate
  • Implement reasonable access control measures
  • Monitor products in the marketplace and patch vulnerabilities

Security by Design

Companies should build “security by design” into their devices at the outset, rather than as an afterthought, by:

  • Conducting a privacy or security risk assessment to consider the risks presented by the collection and retention of consumer information.
  • Incorporating the use of “smart defaults,” such as requiring consumers to change default passwords during the set-up process (see the sketch following this list).
  • Considering how to minimize the data collected and retained.
  • Testing security measures before launching products.
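
To make the “smart defaults” point concrete, here is a minimal sketch of a setup flow that refuses to finish until the factory password has been replaced. The function names and policy values are hypothetical and are not drawn from the report.

    # Illustrative sketch only: a setup routine that enforces a "smart default"
    # by rejecting the factory password and trivially short replacements.
    import hashlib
    import os

    FACTORY_DEFAULT_PASSWORD = "admin"   # hypothetical shipped default
    MIN_LENGTH = 12

    def password_is_acceptable(candidate: str) -> bool:
        """Reject the factory default and very short passwords."""
        return candidate != FACTORY_DEFAULT_PASSWORD and len(candidate) >= MIN_LENGTH

    def hash_credential(password: str) -> bytes:
        """Store only a salted hash, never the plain-text password."""
        salt = os.urandom(16)
        return salt + hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    def complete_setup(new_password: str) -> bytes:
        """The device joins the network only after this step succeeds."""
        if not password_is_acceptable(new_password):
            raise ValueError("Pick a new password: not the factory default, 12+ characters.")
        return hash_credential(new_password)
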

Personnel Practices and Good Security

Companies should ensure their personnel practices promote good security by making security an executive-level concern and training employees about good security practices. A company should not assume that the ability to write code is equivalent to an understanding of the security of an embedded device.

Retain and Oversee Service Providers That Provide Reasonable Security

The Commission urged IoT participants to retain service providers that are capable of maintaining reasonable security and to oversee those companies’ performance to ensure that they do so. On this point, the Commission specifically noted that failure to do so could result in FTC law enforcement action. It pointed to a recent (non-IoT) case in which a medical transcription company outsourced its services to independent typists in India who stored their notes in clear text on an unsecured server. Patients in the U.S. were shocked to find their confidential medical information showing up in web searches.

The “Defense-in-Depth” Approach

The Commission urged companies to take additional steps to protect particularly sensitive information (e.g., health information). For example, instead of relying on the user to ensure that data passing over his or her local wireless network is encrypted using the Wi-Fi password, companies should undertake additional efforts to ensure that data is not publicly available.
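
One way to picture that kind of belt-and-suspenders protection: the device could encrypt sensitive readings at the application layer before they leave the device, so confidentiality does not depend on the consumer having secured the Wi-Fi network. The sketch below is illustrative only; it uses the third-party Python “cryptography” package and hypothetical field names, none of which come from the report.

    # Illustrative sketch only: encrypt a sensitive reading before transmission,
    # independent of whatever protection the local wireless network provides.
    # Requires the third-party "cryptography" package.
    import json
    from cryptography.fernet import Fernet

    # In a real device the key would be provisioned per unit and kept in
    # protected storage, not generated at runtime as it is here.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    reading = {"device_id": "patch-001", "heart_rate": 72}
    ciphertext = cipher.encrypt(json.dumps(reading).encode())

    # Only the ciphertext travels over the network; a holder of the key
    # can recover the original reading.
    assert json.loads(cipher.decrypt(ciphertext)) == reading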

Reasonable Access Control Measures

While tools such as strong authentication can be used to permit or restrict IoT devices’ interactions with other devices or systems, the Commission noted that companies should ensure these measures do not unduly impede the usability of the device.
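
A rough sketch of how that balance might look in practice: the device pairs once with a controller the owner approves, then checks a token on each command, so day-to-day use stays simple while unknown callers are rejected. The class, methods, and token scheme here are hypothetical, not taken from the report.

    # Illustrative sketch only: simple token-based access control for a device.
    import hmac
    import secrets

    class DeviceAccessControl:
        def __init__(self):
            self._paired_tokens = set()

        def pair(self) -> str:
            """One-time setup step: issue a token to a controller the owner approves."""
            token = secrets.token_urlsafe(32)
            self._paired_tokens.add(token)
            return token

        def authorize(self, presented_token: str) -> bool:
            """Check a presented token against each paired token in constant time."""
            return any(hmac.compare_digest(presented_token, t) for t in self._paired_tokens)

    acl = DeviceAccessControl()
    phone_token = acl.pair()                    # done once, during setup
    assert acl.authorize(phone_token)           # the paired phone can control the device
    assert not acl.authorize("unknown-caller")  # strangers cannot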

Monitoring of Products and Patching of Vulnerabilities

Companies may reasonably decide to limit the time during which they provide security updates and software patches, but must weigh these decisions carefully. IoT participants should also be forthright in their representations about providing ongoing security updates and software patches to consumers. Disclosing the length of time companies plan to support and release software updates for a given product line will help consumers better understand the safe “expiration dates” for their commodity internet-connected devices.

DATA MINIMIZATION

Data minimization refers to the concept that companies should limit the data they collect and retain, and dispose of it once they no longer need it. The Commission acknowledged the concern that requiring data minimization might curtail innovative uses of data: a new enterprise may not be able to reasonably foresee the uses it will have for information gathered in the course of providing a connected device or operating a service in conjunction with connected devices. Despite these concerns, the Commission recommended that companies consider reasonably limiting their collection and retention of consumer data.

The Commission observed that data minimization mitigates risk in two ways. First, the less information in a database, the less attractive the database is as a target for hackers. Second, having less data reduces the risk that the company providing the device or service will use the information in a way that the consumer does not expect.

The Commission provided a useful example of how data minimization might work in practice. It discussed a hypothetical startup that develops a wearable device, such as a patch, that can assess a consumer’s skin condition. The device does not need to collect precise geolocation information in order to work, but it has that capability. The device manufacturer believes that such information could be useful for a future product feature that would enable users to find treatment options in their area. The Commission observed that as part of a data minimization exercise, the company should consider whether it should wait to collect geolocation information until after it begins to offer the new product feature, at which time it could disclose the new collection and seek consent. The company should also consider whether it could offer the same feature while collecting less information, such as by collecting zip code rather than precise geolocation. If the company does decide it needs the precise geolocation information, the Commission would recommend that the company provide a prominent disclosure about its collection and use of this information, and obtain consumers’ affirmative express consent. And the company should establish reasonable retention limits for the data it does collect.
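
Translating the Commission’s skin-patch hypothetical into a small sketch, the device might record only a ZIP code by default and keep precise coordinates only when the consumer has given affirmative express consent for the location-based feature. The data model and field names below are hypothetical.

    # Illustrative sketch only: minimize location data at the point of collection.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SkinReading:
        device_id: str
        moisture: float
        zip_code: Optional[str] = None      # coarse location: enough for "care near you"
        latitude: Optional[float] = None    # kept only with affirmative express consent
        longitude: Optional[float] = None

    def record_reading(device_id: str, moisture: float, zip_code: str,
                       lat: float, lon: float, geo_consent: bool) -> SkinReading:
        if geo_consent:
            return SkinReading(device_id, moisture, zip_code, lat, lon)
        # Drop precise coordinates at the edge; they are never stored.
        return SkinReading(device_id, moisture, zip_code)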

As an aspect of data minimization, the Commission also discussed de-identification as a “viable option in some contexts” to help minimize data and the risk of potential consumer harm. But as with any conversation about de-identification, the Commission addressed the risks associated with the chances of re-identification. On this note, the Commission referred to its 2012 Privacy Report in which it said that companies should:

  • take reasonable steps to de-identify the data, including by keeping up with technological developments;
  • publicly commit not to re-identify the data; and
  • have enforceable contracts in place with any third parties with whom they share the data, requiring the third parties to commit not to re-identify the data.

This approach ensures that if the data is not reasonably de-identified and is then re-identified in the future, regulators can hold the company responsible.
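
As a purely technical illustration of the first bullet above, a company might strip direct identifiers and replace the account ID with a keyed pseudonym before a record is shared. A keyed hash alone is not a complete de-identification program (the public commitment and downstream contracts are policy steps, not code), and all names in this sketch are hypothetical.

    # Illustrative sketch only: remove direct identifiers and pseudonymize the
    # account ID with a keyed hash before sharing a record for analytics.
    import hashlib
    import hmac

    PSEUDONYM_KEY = b"hypothetical-secret-key"   # kept separate from the shared data

    def deidentify(record: dict) -> dict:
        pseudonym = hmac.new(PSEUDONYM_KEY, record["account_id"].encode(),
                             hashlib.sha256).hexdigest()
        return {
            "user": pseudonym,                 # stable pseudonym, not the raw identifier
            "zip3": record["zip_code"][:3],    # coarsen quasi-identifiers
            "moisture": record["moisture"],
        }

    shared = deidentify({"account_id": "acct-1234", "zip_code": "60614",
                         "moisture": 0.42, "email": "user@example.com"})
    assert "email" not in shared and "account_id" not in shared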

NOTICE AND CHOICE

Giving consumers notice that information is being collected, and the ability to make choices about that collection, is problematic in many IoT contexts. Data is collected continuously by many integrated devices and systems, and getting a consumer’s consent in each context might discourage use of the technology. Moreover, there is often no easy user interface through which to provide notice and offer choice.

With these concerns in mind, the Commission noted that “not every data collection requires choice.” As an alternative, the Commission acknowledged the efficacy of a use-based approach. Companies should not be compelled, for example, to provide choice before collecting and using consumer data for practices that are consistent with the context of a transaction or the company’s relationship with the consumer. By way of example, the Commission discussed a hypothetical purchaser of a “smart oven.” The oven’s manufacturer could use temperature data to recommend another of its kitchen products, and the consumer would expect that. But the consumer would not expect the company to disclose that information to a data broker or an ad network without having been given notice of the sharing and the ability to choose whether it should occur.

Given the practical difficulty of notice and choice on the IoT, the Commission acknowledged there is no one-size-fits-all approach. But it did suggest a number of mechanisms for communications of this sort, including:

  • Choices at point of sale
  • Tutorials (like the one Facebook uses)
  • QR codes on the device
  • Choices during setup
  • Management portals or dashboards
  • Icons
  • Out-of-band notifications (e.g., via email or text)
  • User-experience approach – “learning” what the user wants, and adjusting automatically

Conclusion

The Commission’s report does not have the force of law, but is useful in a couple of ways. From a practical standpoint, it serves as a guide for how to avoid engaging in flagrant privacy and security abuses on the IoT. But it also serves to frame a larger discussion about how providers of goods and services can and should approach the innovation process for the development of the Internet of Things.

Facebook wins against alleged advertising fraudster

Defendant set up more than 70 bogus Facebook accounts and impersonated online advertising companies (including by sending Facebook falsified bank records) to obtain an advertising credit line from Facebook. He ran more than $340,000 worth of ads for which he never paid. Facebook sued for, among other things, breach of contract, fraud, and violation of the Computer Fraud and Abuse Act (CFAA). Despite the court giving defendant several opportunities to be heard, he failed to answer the claims and the court entered a default.

The court found that Facebook had successfully pled a CFAA claim. After Facebook implemented technological measures to block defendant’s access, and after it sent him two cease-and-desist letters, defendant continued to intentionally access Facebook’s “computers and servers to obtain account credentials, Facebook credit lines, Facebook ads, and other information.” The court entered an injunction against defendant accessing or using any Facebook website or service in the future, and set the matter over for Facebook to prove up its $340,000 in damages. It also notified the U.S. Attorney’s Office.

Facebook, Inc. v. Grunin, 2015 WL 124781 (N.D. Cal. January 8, 2015)

Court orders Twitter to identify anonymous users

Defamation plaintiffs’ need for requested information outweighed any impact on Doe defendants’ free speech right to tweet anonymously.

Plaintiff company and its CEO sued several unknown defendants who tweeted that plaintiff company encouraged domestic violence and misogyny and that the CEO visited prostitutes. The court allowed plaintiffs to serve subpoenas on Twitter to seek the identity of the unknown Twitter users. Twitter would not comply with the subpoenas unless and until the court ruled on whether the production of information would violate the users’ First Amendment rights.

The court ruled in favor of the plaintiffs and ordered Twitter to turn over identifying information about the unknown users. In reaching this decision, the court applied the Ninth Circuit analysis for unmasking anonymous internet speakers set out in Perry v. Schwarzenegger, 591 F.3d 1126 (9th Cir. 2009). The court found that the requested discovery raised the possibility of “arguable first amendment infringement,” so it continued its analysis by weighing the aggrieved plaintiffs’ interests against the anonymous defendants’ free speech rights.

The Perry balancing test places a burden on the party seeking discovery to show that the information sought is rationally related to a compelling governmental interest and that the requested discovery is the least restrictive means of obtaining the desired information.

In this case, the court found that the subpoenas were narrowly tailored to plaintiffs’ need to uncover the identities of the anonymous defendants so that plaintiffs could serve process. It also found that the “nature” of defendants’ speech weighed in favor of enforcing the subpoena. The challenged speech went “beyond criticism into what appear[ed] to be pure defamation, ostensibly unrelated to normal corporate activity.”

Music Group Macao Commercial Offshore Ltd. v. Does I-IX, 2015 WL 75073 (N.D. Cal., January 6, 2015).

Domain name case under ACPA failed because trademark was not distinctive

Federal appeals court holds that plaintiff failed to satisfy all elements of the Anticybersquatting Consumer Protection Act in action against competing airline

The federal Anticybersquatting Consumer Protection Act (ACPA) [15 U.S.C. 1125(d)] is a provision in U.S. law that gives trademark owners a cause of action against one who has wrongfully registered a domain name. In general, the ACPA gives rights to owners of trademarks that are either distinctive or famous at the time the defendant registered the offending domain name.

The Eleventh Circuit Court of Appeals recently affirmed the decision of a lower court that dismissed an ACPA claim, holding that the plaintiff failed to plead that its mark was distinctive at the time of the domain name registration.

Plaintiff sued its competitor, who registered the domain name tropicoceanairways.com. Defendant moved to dismiss, and the lower court granted the motion, finding that plaintiff failed to plead that its mark TROPIC OCEAN AIRWAYS was distinctive and thus protected under the ACPA. On appeal, the Eleventh Circuit affirmed the dismissal, holding that plaintiff’s complaint failed to allege that the mark was either suggestive or had acquired secondary meaning as an indicator of source for plaintiff’s services.

Suggestive marks are considered distinctive because they require “a leap of the imagination to get from the mark to the product.” (The court provided the example of a penguin used as a mark for refrigerators.) In this case, the court found the term “tropic ocean airways” was not suggestive, as it merely “inform[ed] consumers about the service [plaintiff provided]: flying planes across the ocean to tropical locations.”

The court rejected plaintiff’s argument that a pending application at the United States Patent and Trademark Office to register the mark proved that it was suggestive. While a certificate of registration may establish a rebuttable presumption that a mark is distinctive, the court held plaintiff was not entitled to such a presumption here, where the application remained pending. Moreover, the court observed in a footnote that the presumption of distinctiveness will generally only go back to the date the application was filed. In this case, the trademark application was not filed until about a year after the domain name was registered.

As for the argument that the mark had acquired secondary meaning, the court found plaintiff’s allegations insufficient. The complaint offered only conclusory allegations about secondary meaning, which could not survive a motion to dismiss. The court held that plaintiff failed to allege the nature and extent of its advertising and promotion, and, more importantly, did not allege any facts about the extent to which the public identified the mark with plaintiff’s services.

Tropic Ocean Airways, Inc. v. Floyd, — Fed.Appx. —, 2014 WL 7373625 (11th Cir., Dec. 30, 2014)

Evan Brown is an attorney in Chicago helping clients with domain name, trademark, and other matters involving technology and intellectual property.

Forum selection clause in browsewrap agreement did not bind parties in bitcoin fraud case

We all know that clickwrap agreements are preferable to browsewrap agreements, assuming, of course, the objective is to establish binding contracts between participants in online transactions. Nonetheless, some online platforms still (try to) rely on browsewrap agreements to establish terms of service. That avoidance of best practices gives us situations like the recent case of Hussein v. Coinabul, LLC, in which a federal court in Illinois refused to enforce a forum selection clause in a “bitcoin to gold marketplace” browsewrap agreement.

Plaintiff alleged that he sent about $175,000 worth of bitcoins to defendants in June 2013, expecting to get gold in return. (Plaintiff alleges he transferred 1,644.54 BTC. The average exchange value in June 2013 was $107.82/BTC. You can get historical bitcoin price data here: http://www.coindesk.com/price) When the gold never arrived, plaintiff sued for fraud.
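
For the curious, the figures roughly check out; here is a quick back-of-the-envelope calculation using the numbers above:

    # Back-of-the-envelope check of the figures alleged in the complaint.
    btc_transferred = 1644.54
    avg_usd_per_btc_june_2013 = 107.82
    print(btc_transferred * avg_usd_per_btc_june_2013)  # about $177,000, in the ballpark of the alleged $175,000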

Defendants moved to dismiss, citing a forum selection clause contained in a browsewrap agreement found on its website. That purported agreement required all disputes to be heard in the state courts of Wyoming, and for Wyoming law to apply. The court denied the motion to dismiss, finding that the browsewrap agreement afforded plaintiff neither actual nor constructive knowledge of its terms and conditions.

The court observed that the hyperlink that directed users to defendants’ Terms of Service was listed among ten other hyperlinks at the bottom of each page. (See this Wayback Machine capture of the website from June 2013).

As for lack of actual knowledge, the court credited plaintiff’s allegations that he did not review or even know of defendants’ Terms of Service when he entered the bitcoin transaction. And there was no evidence to the contrary in the record.

And as for lack of constructive knowledge, the court found that the hyperlink, “buried at the bottom of the webpage – [was] without some additional act of notification, insufficient for the purpose of providing reasonable notice.”

Hussein v. Coinabul, LLC, No. 14-5735, 2014 WL 7261240 (N.D. Ill. December 19, 2014)

Court allows class action plaintiffs to set up social media accounts to draw in other plaintiffs

Some former interns sued Gawker Media under the Fair Labor Standards Act. The court ordered the parties to meet and confer about the content and dissemination of the proposed notice to other potential class members. Plaintiffs suggested, among other things, that they establish social media accounts (Facebook, Twitter, LinkedIn) titled “Gawker Intern Lawsuit” or “Gawker Class Action”. Gawker objected.

The court permitted the establishment of the social media accounts. It rejected Gawker’s argument that, absent evidence that any former intern used social media, notice by that means would be ineffective; the court found it “unrealistic” to assume that the former interns did not maintain social media accounts.

Gawker also argued that using social media to give notice would take control of the dissemination out of the court’s hands. Since users could comment on the posted content, Gawker argued, the court would be “deprived” of its ability to oversee the message. The court likewise rejected this argument, holding that its “role [was] to ensure the fairness and accuracy of the parties’ communications with potential plaintiffs – not to be the arbiter of all discussions not involving the parties that may take place thereafter.”

Mark v. Gawker Media LLC, No. 13-4347, 2014 WL 5557489 (S.D.N.Y. November 3, 2014)

Court denies request of plaintiffs in right of publicity suit to exhume the body of Aunt Jemima

The great-grandsons of Anna S. Harrington, whose image formed the basis for Aunt Jemima, sued Quaker Oats Company and others for $2 billion, claiming that defendants failed to pay royalties to Harrington’s estate after her death in 1955. One of the allegations in the case is that defendants played a role in Harrington’s death. Apparently, in an effort to support those allegations, plaintiffs sought an order from the US District Court for the Northern District of Illinois (where the matter is pending) allowing them to exhume the body of their great-grandmother for evidence of this malfeasance.

The court denied the request. Apart from being just a bizarre ask, it turns out the “evidence” of the defendants’ role in Aunt Jemima’s death was based on a parody article from Uncyclopedia. In denying the motion, the court found the following:

The motion is primarily based on statements purportedly made by Quaker Oats executives about the death of the woman who had been identified as “Aunt Jemima.” But the source of the information is an uncyclopedia.wikia.com article, which is a parody website of Wikipedia. Uncyclopedia proudly bills itself as “an encyclopedia full of misinformation and utter lies.” See uncyclopedia.wikia.com/wiki/Uncyclopedia:About.

The court also threatened the pro se plaintiffs: “Plaintiffs must take greater care in their submissions to the Court, or else face sanctions and, if litigation abuse continues, outright dismissal of the case.”

Hunter et al. v. PepsiCo Inc. et al., No. 1:14-cv-06011 (N.D. Ill. October 21, 2014)

BTW: Some info about Anna Harrington’s grave.

Evan Brown is an attorney in Chicago advising clients on matters dealing with technology, the internet and new media.

GitHub jeopardizes its DMCA safe harbor status by launching its new policy

GitHub has baked some feelgood into its new DMCA takedown policy. The new setup features clearer language, a refusal to automatically disable all forks of an allegedly infringing repository, and a 24-hour window in which the target of a takedown notice may make changes. The mechanics of this third point ought to cause one to consider whether GitHub is risking the protections of the DMCA safe harbor.

If a DMCA takedown notice alleges that only certain files (as opposed to the whole repository) infringe, under the new policy, GitHub “will contact the user who created the repository and give them approximately 24 hours to delete or modify the content specified in the notice.” If the user makes changes to the repository, the burden shifts back to the sender of the DMCA notice. This shifting of the burden back seems problematic under the DMCA.

GitHub’s policy says:

If the user makes changes, the copyright owner must review them and renew or revise their takedown notice if the changes are insufficient. GitHub will not take any further action unless the copyright owner contacts us to either renew the original takedown notice or submit a revised one. If the copyright owner is satisfied with the changes, they may either submit a formal retraction or else do nothing. GitHub will interpret silence longer than two weeks as an implied retraction of the takedown notice.

The DMCA protects a party in GitHub’s position so long as the party “responds expeditiously to remove, or disable access to, the material that is claimed to be infringing upon notification of claimed infringement”. Read that provision carefully — the response must be to take down, not merely take steps to work with the alleged infringer to make it right. GitHub’s new mechanism of interpreting silence as a retraction is not an expeditious removal of or disabling access to allegedly infringing material. Nothing in the DMCA requires the sender of the takedown notice to have to ask twice.

You’ve got to hand it to GitHub for trying to make the world a better place through this new policy. The intended net effect is to reduce the number of instances in which entire repositories are taken down simply because of a few allegedly infringing files. But GitHub is putting something of great value, namely, its DMCA safe harbor protection, at risk.

Many copyright plaintiffs look for every possible angle to pin liability. You can almost be certain that a copyright owner will challenge GitHub’s safe harbor status on the ground that GitHub did not respond expeditiously. It seems odd GitHub would be willing to toss a perfectly good affirmative defense. One would think the better approach would be to go ahead and take the repository down after 24 hours, rather than leaving it up and risking a finding of “non-expeditiousness”.

Related:

Microsoft letter to GitHub over DRM-free music software is not the first copyright-ironic action against an intermediary

Evan Brown is an attorney in Chicago advising clients on matters dealing with copyright, technology, the internet and new media.

YouTube has been a billion dollar boon to big media

This NBC News piece reports that since 2007, YouTube’s ContentID program has enabled copyright holders to monetize content posted to the service and get paid a billion dollars in the process. (Also included in the report is the staggering statistic that ContentID scans 400 years of content every day — we live in a content-producing world of crazy proportions!)

With this kind of cash rolling in, it’s no wonder that Viacom finally came to its senses earlier this year when it decided to discontinue its litigation against YouTube. The billion dollar figure is also interesting — that’s the very amount Viacom sought when it filed suit in March 2007.

Copyright, not privacy, motivated Reddit to take down photos of nude celebrities

This VentureBeat piece featuring Reddit CEO Yishan Wong brings up a number of interesting facts about Reddit in the wake of its additional $50 million funding round. One of them concerns Reddit’s decision to take down a subreddit devoted to the sharing of recently-leaked celebrity nude photos.

Says Wong:

If there’s any confusion: [Reddit] did not shut down /r/TheFappening due to content linking to nude celebrity photos. The subreddit was shut down because users were reposting content already taken down due to valid DMCA requests, and because spammers began posting links to the images hosted on their own pay-per-click sites, or sites intended to spread malware.

We can’t read too much into this comment, but it does suggest that the dignitary interests of the celebrities involved did not motivate Reddit to do the right thing. Instead, the risk of copyright liability (or, more precisely, the risk that DMCA safe harbor protection might be lost) was a stronger motivation.

Evan Brown is an attorney in Chicago advising clients on matters dealing with technology, the internet and new media.
