Is transparency the best norm for user privacy?

Discussions about how companies handle privacy are metadiscussions, because data use policies provide information about information, namely, how platforms collect, use and share it. It’s easy to come up with platitudes when operating in such an abstract realm. People like the catchy norm of “transparency.” It suggests that our dignitary ills are cured when we know how companies such as Facebook use the information about us that they hoard.

But transparency as a norm suffers from a hobbling flaw when put into practice: it is antithetical to the proprietary interests companies hold dear and which the law protects. At a fundamental level, the exploitation of big datasets is how most online social platforms make money. The more granular the knowledge about a user, the more precisely an ad can be targeted, and more precisely targeted ads can be sold at a premium. That a platform can collect so much information about a user is one thing. How the platform puts that information to use is another. It is the latter that transparency flies in the face of.

No company that has invested substantially in developing effective methods for exploiting its collected data has an authentic incentive to lift the hood on those methods. Trade secret protection would evaporate the moment it did.

Any reluctance toward transparency on the platform’s part betrays this misalignment of incentives between platform and user, and calling on transparency as the norm will only exacerbate the misalignment. What people are actually looking for when they call for transparency are reasons to trust. The metadiscussion needs a new pathway to trust, because the path that transparency affords is, ironically, blocked.

Photo credit: motoyen

Alleged voyeur boss cannot pursue Computer Fraud and Abuse Act claim

Bashaw v. Johnson, 2012 WL 1623483 (D.Kan. May 9, 2012)

Some employees filed suit after they learned that their boss — who required them to wear skirts to work — allegedly installed the Cam-u-flage video surveillance app on his iPhone and iPad to surreptitiously capture upskirt shots of plaintiffs at work.

The boss filed a counterclaim under the Computer Fraud and Abuse Act (CFAA), claiming that plaintiffs deleted data from his iDevices without authorization. Plaintiffs moved to dismiss this counterclaim. The court granted the motion.

The court held that the boss failed to allege the nature of his damage within the meaning of the CFAA, and that he failed to sufficiently allege a qualifying loss as defined by the statute.

As for damage, the court found that the mere allegation that data had been erased, without identifying which data, did not meet the plausibility requirement to survive a motion to dismiss. (Hmm. I wonder what data the plaintiff-employees would have wanted to delete?)

On the question of loss, the employer alleged that the calculation of his losses “would exceed” the CFAA threshold of $5,000, but he did not allege that he actually incurred losses in that amount. He did not mention any investigative or response costs, nor did he allege any lost revenues or other losses due to an interruption in service.

Photo credit: Magic Madzik

Why be concerned with social media estate planning?

The headline of this recent blog post by the U.S. government promises to answer the question of why you should do some social media estate planning. But the post falls short of providing a compelling reason to plan for how your social media accounts and other digital assets should be handled in the event of your demise. So I’ve come up with my own list of reasons why this might be good both for the individual and for our culture:

Security. People commit identity theft on both the living and the dead. (See, for example, the story of the Tennessee woman who collected her dead aunt’s Social Security checks for 22 years.) While the living can run credit checks and otherwise monitor the use of their personal information, the deceased are not so diligent. Ensuring that the dataset comprising a person’s social media identity is accounted for and monitored should reduce the risk of that information being used nefariously.

Avoiding sad reminders. Spammers have no qualms with commandeering a dead person’s email account. As one Virginia family knows, putting a stop to that form of “harassment” can be painful and inconvenient.

Keeping social media uncluttered. This reason lies more in the public interest than in the interest of the deceased and his or her relatives. The advertising model for social media revenue generation relies on the accuracy and effectiveness of information about the user base. The presence of a bunch of dead people’s accounts, which are orphaned, so to speak, dilutes the effectiveness of the other data points in the social graph. So it is a good thing to prune the accounts of the deceased, or otherwise see that they are properly curated.

Preserving our heritage for posterity. Think of the ways you know about the family members who came before you. Stories and oral tradition are generally annotated by photo albums, personal correspondence and other snippets of everyday life. Social media is becoming a preferred substrate for the collection of those snippets. To have that information wander off into the digital ether unaccounted for is to forsake a means of knowing about the past.

How big a deal is this, anyway? This Mashable article commenting on the U.S. government post says that last year about 500,000 Facebook users died. That’s about 0.06% of the user base. (Incidentally, Facebook users seem much less likely to die than the general population, as roughly 0.8% of the world’s entire population died last year. Go here if you want to do the math yourself.)
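For those who want to check the arithmetic, here is a minimal Python sketch. The 845 million user count and the 500,000 deaths come from the figures above; the world population and annual death counts are rough outside estimates I am assuming for the comparison, not numbers taken from the Mashable article.

```python
# Back-of-the-envelope death-rate comparison; all figures are approximate.
facebook_users = 845_000_000      # user count reported in Facebook's S-1
facebook_deaths = 500_000         # estimated Facebook users who died last year

world_population = 7_000_000_000  # rough 2011 world population (assumption)
world_deaths = 56_000_000         # rough annual deaths worldwide (assumption)

fb_rate = facebook_deaths / facebook_users * 100
world_rate = world_deaths / world_population * 100

print(f"Facebook users who died: {fb_rate:.2f}%")        # about 0.06%
print(f"World population that died: {world_rate:.2f}%")  # about 0.80%
```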

I say it’s kind of a big deal, but a deal that’s almost certain to get bigger.

Employer not allowed to search for porn on employee’s home computer

A former employee sued her old company for subjecting her to a sexually hostile workplace and for firing her after she reported it. She claimed that she had never looked at pornography before she saw some on the computers at work. During discovery in the lawsuit, the company requested that the employee turn over her home computer so that the company’s “forensic computer examiner” could inspect it.

The trial court compelled the employee to produce her computer so that the forensic examiner could look for pornography in her web browsing history and email attachments. The employee sought mandamus review with the court of appeals (i.e., she asked the appellate court to order the lower court not to require the production of the hardware). The appellate court held that she was entitled to relief, and that she did not have to hand over her computer.

The appellate court found that the lower court failed to consider an appropriate protective order that would limit the inspection to specifically sought information in a particular form of production. In this case, the company had merely asked for the hardware without informing the employee of the exact nature of the information sought. And the company provided no information about the qualifications of its forensic examiner. Though the trial court tried to limit the scope of the inspection with carefully chosen wording, the appellate court found that was not sufficient to protect the employee from the risks associated with a highly intrusive search.

In re Jordan, — S.W.3d —, 2012 WL 1098275 (Texas App., April 3, 2012)

Website operators not liable for third party comments

Spreadbury v. Bitterroot Public Library, 2012 WL 734163 (D. Montana, March 6, 2012)

Plaintiff was upset at some local government officials, and ended up getting arrested for allegedly trespassing at the public library. Local newspapers covered the story, including on their websites. Some online commenters said mean things about plaintiff, so plaintiff sued a whole slew of defendants, including the newspapers (as website operators).

The court threw out the claims over the online comments. It held that the Communications Decency Act at 47 U.S.C. 230 immunized the website operators from liability over the third party content.

Plaintiff argued that the websites were not protected by Section 230 because they were not “providers of interactive computer services” of the same ilk as AOL and Yahoo. The court soundly rejected that argument. It found that the websites provided a “neutral tool” and offered a “simple generic prompt” for subscribers to comment about articles. The website operators did not develop or select the comments, require or encourage readers to make defamatory statements, or edit comments to make them defamatory.

School district has to stop filtering web content

PFLAG v. Camdenton R–III School Dist., 2012 WL 510877 (W.D.Mo. Feb. 16, 2012)

Several website publishers that provide supportive resources directed at lesbian, gay, bisexual, and transgender (LGBT) youth filed a First Amendment lawsuit against a school district over the district’s use of internet filtering software. Plaintiffs asked the court for an injunction against the district’s alleged practice of preventing students’ access to websites that expressed a positive viewpoint toward LGBT individuals.

The court granted a preliminary injunction. It found that by using URL Blacklist software, the district (despite its assertions to the contrary) engaged in intentional viewpoint discrimination, in violation of the website publishers’ First Amendment rights. The URL Blacklist software, which relied in large part on dmoz.org, classified positive materials about LGBT issues within the software’s “sexuality” filter, and it put LGBT-negative materials under “religion,” a category that was not blocked.
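To make concrete why filtering of that sort amounted to viewpoint discrimination, here is a minimal sketch of how a category-based blacklist works. The site names, category assignments, and blocked list below are hypothetical illustrations, not the district’s actual URL Blacklist or dmoz.org data.

```python
# Hypothetical illustration of category-based URL filtering. A directory
# assigns each site a category; the filter blocks any request whose
# category appears on the blocked list.

url_categories = {
    "lgbt-support.example.org": "sexuality",  # supportive LGBT resource
    "anti-lgbt.example.org": "religion",      # LGBT-negative site
}

blocked_categories = {"sexuality"}  # "religion" is not on the blocked list


def is_blocked(url: str) -> bool:
    """A site is filtered when its assigned category is on the blocked list."""
    return url_categories.get(url) in blocked_categories


print(is_blocked("lgbt-support.example.org"))  # True: students cannot reach it
print(is_blocked("anti-lgbt.example.org"))     # False: site remains reachable
```

Because the blocking decision turns entirely on which category the directory assigns to a site, sites on opposite sides of the same issue can receive opposite treatment, which is exactly what the plaintiffs alleged here.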

It found that the plaintiffs had a fair chance of success on the merits of their First Amendment claims. The school district had claimed it was simply trying to comply with a federal law that required the blocking of content harmful to minors. But the court found that the chosen method of filtering was not narrowly tailored to meet that interest.

One may wonder whether Section 230 of the Communications Decency Act could have protected the school district in this lawsuit. After all, 47 U.S.C. 230(c)(2)(A) provides that:

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected. . . . (Emphasis added.)

Section 230 would probably not have been much help, because the plaintiffs were seeking injunctive relief, not money damages. An old case called Mainstream Loudoun v. Bd. of Trustees of Loudoun, 24 F. Supp. 2d 552 (E.D. Va. 1998) tells us that:

[Section] 230 provides immunity from actions for damages; it does not, however, immunize [a] defendant from an action for declaratory and injunctive relief. . . . If Congress had intended the statute to insulate Internet providers from both liability and declaratory and injunctive relief, it would have said so.

One could understand the undesirability of applying Section 230 to protect filtering of this sort even without the Mainstream Loudoun holding. If Section 230 completely immunized government-operated interactive computer service providers, allowing them to engage freely in viewpoint-based filtering, free speech would suffer in obvious ways. And it would be unfortunate to subject Section 230 to this kind of analysis, whereby it would face the severe risk of being unconstitutional as applied.

Video: This Week in Law Episode 150

Had a great time hosting This Week in Law Episode 150, which we recorded on February 24. (Thanks to Denise Howell for handing over the hosting reins while she was off for the week.) It was a really fun conversation with three very smart panelists — Mike Godwin, Greg Sergienko and Jonathan Frieden. We talked about copyright and free speech, encryption and the Fifth Amendment, and the state of internet privacy.

If you’re not a regular listener or viewer of This Week in Law, I hope you’ll add it to your media diet. I’m on just about every week (sometimes I’m even referred to as a co-host of the show). We record Fridays at 1pm Central (that’s 11am Pacific, 2pm Eastern). The live stream is at http://live.twit.tv and the page with all the past episodes and various subscription options is http://twit.tv/twil.

No restraining order against uncle posting family photos on Facebook

Court refuses to consider common law invasion of privacy tort to support restraining order under Minnesota statute.

Olson v. LaBrie, 2012 WL 426585 (Minn. App. February 13, 2012)

Appellant sought a restraining order against his uncle, saying that his uncle engaged in harassment by posting family photos of appellant (including one of him in front of a Christmas tree) and mean commentary on Facebook. The trial court denied the restraining order. Appellant sought review with the state appellate court. On appeal, the court affirmed the denial of the restraining order.

It found that the photos and the commentary were mean and disrespectful, but that they could not form the basis for harassment. The court held that whether harassment occurred depended only on a reading of the statute (which provides, among other things, that a restraining order is appropriate to guard against “substantial adverse effects” on the privacy of another). It was not appropriate, the court held, to look to tort law on privacy to determine whether the statute called for a restraining order.

Teacher fired over Facebook post gets her job back

Court invokes notion of “contextual integrity” to evaluate social media user’s online behavior.

Rubino v. City of New York, 2012 WL 373101 (N.Y. Sup. February 1, 2012)

The day after a student drowned at the beach while on a field trip, a fifth grade teacher updated her Facebook status to say:

After today, I am thinking the beach sounds like a wonderful idea for my 5th graders! I HATE THEIR GUTS! They are the devils (sic) spawn!

Three days later, she regretted saying that enough to delete the post. But the school had already found out about it and fired her. After going through the administrative channels, the teacher went to court to challenge her termination.

The court agreed that getting fired was too stiff a penalty. It found that the termination was so disproportionate to the offense, in the light of all the circumstances, that it was “shocking to one’s sense of fairness.” The teacher had an unblemished record before this incident, and what’s more, she posted the content outside of school and after school hours. And there was no evidence it affected her ability to teach.

But the court said some things about the teacher’s use of social media that were even more interesting. It drew on a notion of what scholars have called “contextual integrity” to evaluate the teacher’s online behavior:

[E]ven though petitioner should have known that her postings could become public more easily than if she had uttered them during a telephone call or over dinner, given the illusion that Facebook postings reach only Facebook friends and the fleeting nature of social media, her expectation that only her friends, all of whom are adults, would see the postings is not only apparent, but reasonable.

So while the court found the teacher’s online comments to be “repulsive,” having her lose her job over them went too far.

Six interesting technology law issues raised in the Facebook IPO

Patent trolls, open source, do not track, SOPA, PIPA and much, much more: Facebook’s IPO filing has a real zoo of issues.

The securities laws require that companies going public identify risk factors that could adversely affect the company’s stock. Facebook’s S-1 filing, which it sent to the SEC today, identified almost 40 such factors. A number of these risks are examples of technology law issues that almost any internet company would face, particularly companies whose product is the users.

(1) Advertising regulation. In providing detail about the nature of this risk, Facebook mentions “adverse legal developments relating to advertising, including legislative and regulatory developments” and “the impact of new technologies that could block or obscure the display of our ads and other commercial content.” Facebook is likely concerned about the various technological and legal restrictions on online behavioral advertising, whether in the form of mandatory opportunities for users to opt out of data collection or the more aggressive “do not track” idea. The value of the advertising is of course tied to its effectiveness, and any technological, regulatory or legislative measure to enhance user privacy is a risk to Facebook’s revenue.
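As a rough illustration of the “do not track” idea: a site honoring the DNT request header would simply decline to serve behaviorally targeted ads to users who send it, falling back to less valuable untargeted inventory. The sketch below is a hypothetical simplification (the function name and data shapes are invented for illustration), not a description of Facebook’s ad systems.

```python
# Simplified sketch: honoring a "DNT: 1" request header by skipping
# behavioral targeting and serving a generic (less valuable) ad instead.
# Hypothetical function and data shapes, not any real ad-serving API.

def choose_ad(request_headers: dict, user_profile: dict) -> str:
    if request_headers.get("DNT") == "1":
        # The user asked not to be tracked; serve untargeted inventory.
        return "generic_ad"
    # Otherwise, use profile data to pick a targeted (premium) ad.
    interests = user_profile.get("interests", [])
    return f"targeted_ad:{interests[0]}" if interests else "generic_ad"


print(choose_ad({"DNT": "1"}, {"interests": ["hiking"]}))  # generic_ad
print(choose_ad({}, {"interests": ["hiking"]}))            # targeted_ad:hiking
```

The revenue risk Facebook describes follows directly: the more users (or browsers by default, or regulators) push traffic down the first branch, the smaller the premium-priced targeted inventory becomes.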

(2) Data security. No one knows exactly how much information Facebook has about its users. Not only does it have all the content uploaded by its 845 million users, it has the information that could be gleaned from the staggering 100 billion friendships among those users. A data breach puts Facebook at risk of a PR backlash, regulatory investigations from the FTC, and civil liability to its users for negligence and other causes of action. But Facebook would not be left without remedy, having in its arsenal civil actions under the Computer Fraud and Abuse Act and the Stored Communications Act (among other laws) against the perpetrators. It is also likely the federal government would step in to enforce the criminal provisions of these acts as well.

(3) Changing laws. The section of the S-1 discussing this risk factor provides a laundry list of the various issues that online businesses face. Among them: user privacy, rights of publicity, data protection, intellectual property, electronic contracts, competition, protection of minors, consumer protection, taxation, and online payment services. Facebook is understandably concerned that changes to any of these areas of the law, anywhere in the world, could make doing business more expensive or, even worse, make parts of the service unlawful. Though not mentioned by name here, SOPA, PIPA, and do-not-track legislation are clearly on Facebook’s mind when it notes that “there have been a number of recent legislative proposals in the United States . . . that would impose new obligations in areas such as privacy and liability for copyright infringement by third parties.”

(4) Intellectual property protection. The company begins its discussion of this risk with a few obvious observations, namely, how the company may be adversely affected if it is unable to secure trademark, copyright or patent registration for its various intellectual property assets. Later in the disclosure, though, Facebook says some really interesting things about open source:

As a result of our open source contributions and the use of open source in our products, we may license or be required to license innovations that turn out to be material to our business and may also be exposed to increased litigation risk. If the protection of our proprietary rights is inadequate to prevent unauthorized use or appropriation by third parties, the value of our brand and other intangible assets may be diminished and competitors may be able to more effectively mimic our service and methods of operations.

(5) Patent troll lawsuits. Facebook notes that internet and technology companies “frequently enter into litigation based on allegations of infringement, misappropriation, or other violations of intellectual property or other rights.” But it goes on to give special attention to those “non-practicing entities” (read: patent trolls) “that own patents and other intellectual property rights,” which “often attempt to aggressively assert their rights in order to extract value from technology companies.” Facebook believes that as its profile continues to rise, especially in the glory of its IPO, it will increasingly become the target of patent trolls. For now it does not seem worried: “[W]e do not believe that the final outcome of intellectual property claims that we currently face will have a material adverse effect on our business.” Instead, defending those claims is a drain on resources: “[D]efending patent and other intellectual property claims is costly and can impose a significant burden on management and employees….” And there is also the risk that these lawsuits might turn out badly, and Facebook would have to pay judgments, get licenses, or develop workarounds.

(6) Tort liability for user-generated content. Facebook acknowledges that it faces, and will face, claims relating to information that is published or made available on the site by its users, including claims concerning defamation, intellectual property rights, rights of publicity and privacy, and personal injury torts. Though it does not specifically mention the robust immunity from liability over third party content provided by 47 U.S.C. 230, Facebook indicates a certain confidence in the protections afforded by U.S. law from tort liability. It is the international scene that gives Facebook concern here: “This risk is enhanced in certain jurisdictions outside the United States where our protection from liability for third-party actions may be unclear and where we may be less protected under local laws than we are in the United States.”

You have to hand it to the teams of professionals who have put together Facebook’s IPO filing. I suppose the billions of dollars at stake can serve as a motivation for thoroughness. In any event, the well-articulated discussion of these risks in the S-1 is an interesting read, and can serve to guide the many lesser-valued companies out there.
