Reading a non-friend’s comment on a Facebook wall was not a privacy invasion

Sumien v. CareFlite, 2012 WL 2579525 (Tex.App. July 5, 2012)

Plaintiff, an emergency medical technician, got fired after he commented on his coworker’s Facebook status update. The coworker had complained in her post about belligerent patients and the use of restraints. Here is plaintiff’s comment:

Yeah like a boot to the head…. Seriously yeah restraints and actual HELP from [the police] instead of the norm.

After getting fired, plaintiff sued his former employer for, among other things, “intrusion upon seclusion” under Texas law. That tort requires a plaintiff to show (1) an intentional intrusion, physical or otherwise, upon another’s solitude, seclusion or private affairs that (2) would be highly offensive to a reasonable person.

The trial court threw out the case on summary judgment. Plaintiff sought review with the Court of Appeals of Texas. On appeal, the court affirmed the grant of summary judgment.

The court found plaintiff failed to provide any evidence his former employer “intruded” when it encountered the offending comment. Plaintiff had presented evidence that he misunderstood his co-worker’s Facebook settings, did not know who had access to his co-worker’s Facebook Wall, and did not know how his employer was able to view the comment. But none of plaintiff’s misunderstandings transformed the former employer’s viewing of the comment into an intentional tort.

Read Professor Goldman’s post on this case.


Photo credit: Flickr user H.L.I.T. under this license.

Why be concerned with social media estate planning?

The headline of this recent blog post by the U.S. government promises to answer the question of why you should do some social media estate planning. But the post falls short of providing a compelling reason to plan for how your social media accounts and other digital assets should be handled in the event of your demise. So I’ve come up with my own list of reasons why this might be good both for the individual and for our culture:

Security. People commit identity theft on both the living and the dead. (See, for example, the story of the Tennessee woman who collected her dead aunt’s Social Security checks for 22 years.) While the living can run credit checks and otherwise monitor the use of their personal information, the deceased are not so diligent. Ensuring that the dataset comprising a person’s social media identity is accounted for and monitored should reduce the risk of that information being used nefariously.

Avoiding sad reminders. Spammers have no qualms about commandeering a dead person’s email account. As one Virginia family knows, putting a stop to that form of “harassment” can be painful and inconvenient.

Keeping social media uncluttered. This reason lies more in the public interest than in the interest of the deceased and his or her relatives. The advertising model for social media revenue generation relies on the accuracy and effectiveness of information about the user base. The presence of a bunch of dead people’s accounts, which are orphaned, so to speak, dilutes the effectiveness of the other data points in the social graph. So it is a good thing to prune the accounts of the deceased, or otherwise see that they are properly curated.

Preserving our heritage for posterity. Think of how you know about the family members who came before you. Stories and oral tradition are generally annotated by photo albums, personal correspondence and other snippets of everyday life. Social media is becoming a preferred substrate for the collection of those snippets. To have that information wander off into the digital ether unaccounted for is to forsake a means of knowing about the past.

How big a deal is this, anyway? This Mashable article commenting on the U.S. government post says that last year about 500,000 Facebook users died. That’s about 0.06% of the user base. (Incidentally, Facebook users seem much less likely to die than the general population, as roughly 0.7% of the world’s entire population died last year. Go here if you want to do the math yourself.)
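If you’d rather not do the math by hand, here is the arithmetic sketched out. The 500,000 deaths and 845 million users are the figures cited in this post; the world population and annual deaths are approximate round numbers I’ve assumed for illustration.

```python
# Back-of-the-envelope death-rate comparison.
# Facebook figures are from the post; world figures are rough estimates.
fb_deaths = 500_000            # estimated Facebook users who died last year
fb_users = 845_000_000         # Facebook's reported user base

world_deaths = 49_000_000      # approximate deaths worldwide in a year
world_pop = 7_000_000_000      # approximate world population

fb_rate = fb_deaths / fb_users * 100        # about 0.06%
world_rate = world_deaths / world_pop * 100 # about 0.7%

print(f"Facebook users who died: {fb_rate:.2f}%")
print(f"World population that died: {world_rate:.1f}%")
```

Note that expressed as percentages the rates are roughly 0.06% and 0.7%, an order of magnitude apart, which supports the intuition that Facebook’s user base skews younger than the world at large.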

I say it’s kind of a big deal, but a deal that’s almost certain to get bigger.

No restraining order against uncle posting family photos on Facebook

Court refuses to consider common law invasion of privacy tort to support restraining order under Minnesota statute.

Olson v. LaBrie, 2012 WL 426585 (Minn. App. February 13, 2012)

Appellant sought a restraining order against his uncle, saying that his uncle engaged in harassment by posting family photos of appellant (including one of him in front of a Christmas tree) and mean commentary on Facebook. The trial court denied the restraining order. Appellant sought review with the state appellate court. On appeal, the court affirmed the denial of the restraining order.

The court found that the photos and the commentary were mean and disrespectful, but that they could not form the basis for a finding of harassment. The court held that whether harassment occurred depended only on a reading of the statute (which provides, among other things, that a restraining order is appropriate to guard against “substantial adverse effects” on the privacy of another). It was not appropriate, the court held, to look to tort law on privacy to determine whether the statute called for a restraining order.

Teacher fired over Facebook post gets her job back

Court invokes notion of “contextual integrity” to evaluate social media user’s online behavior.

Rubino v. City of New York, 2012 WL 373101 (N.Y. Sup. February 1, 2012)

The day after a student drowned at the beach while on a field trip, a fifth grade teacher updated her Facebook status to say:

After today, I am thinking the beach sounds like a wonderful idea for my 5th graders! I HATE THEIR GUTS! They are the devils (sic) spawn!

Three days later, she regretted saying that enough to delete the post. But the school had already found out about it and fired her. After going through the administrative channels, the teacher went to court to challenge her termination.

The court agreed that getting fired was too stiff a penalty. It found that the termination was so disproportionate to the offense, in the light of all the circumstances, that it was “shocking to one’s sense of fairness.” The teacher had an unblemished record before this incident, and what’s more, she posted the content outside of school and after school hours. And there was no evidence it affected her ability to teach.

But the court said some things about the teacher’s use of social media that were even more interesting. It drew on a notion of what scholars have called “contextual integrity” to evaluate the teacher’s online behavior:

[E]ven though petitioner should have known that her postings could become public more easily than if she had uttered them during a telephone call or over dinner, given the illusion that Facebook postings reach only Facebook friends and the fleeting nature of social media, her expectation that only her friends, all of whom are adults, would see the postings is not only apparent, but reasonable.

So while the court found the teacher’s online comments to be “repulsive,” having her lose her job over them went too far.

Six interesting technology law issues raised in the Facebook IPO

Patent trolls, open source, do not track, SOPA, PIPA and much, much more: Facebook’s IPO filing has a real zoo of issues.

The securities laws require that companies going public identify risk factors that could adversely affect the company’s stock. Facebook’s S-1 filing, which it sent to the SEC today, identified almost 40 such factors. A number of these risks are examples of technology law issues that almost any internet company would face, particularly companies whose product is the users.

(1) Advertising regulation. In providing detail about the nature of this risk, Facebook mentions “adverse legal developments relating to advertising, including legislative and regulatory developments” and “the impact of new technologies that could block or obscure the display of our ads and other commercial content.” Facebook is likely concerned about the various technological and legal restrictions on online behavioral advertising, whether in the form of mandatory opportunities for users to opt out of data collection or the more aggressive “do not track” idea. The value of the advertising is of course tied to its effectiveness, and any technological, regulatory or legislative measure that enhances user privacy is a risk to Facebook’s revenue.

(2) Data security. No one knows exactly how much information Facebook has about its users. Not only does it have all the content uploaded by its 845 million users, it has the information that could be gleaned from the staggering 100 billion friendships among those users. [More stats] A data breach puts Facebook at risk of a PR backlash, regulatory investigations from the FTC, and civil liability to its users for negligence and other causes of action. But Facebook would not be left without remedy, having in its arsenal civil actions under the Computer Fraud and Abuse Act and the Stored Communications Act (among other laws) against the perpetrators. It is also likely the federal government would step in to enforce the criminal provisions of these acts as well.

(3) Changing laws. The section of the S-1 discussing this risk factor provides a laundry list of the various issues that online businesses face. Among them: user privacy, rights of publicity, data protection, intellectual property, electronic contracts, competition, protection of minors, consumer protection, taxation, and online payment services. Facebook is understandably concerned that changes to any of these areas of the law, anywhere in the world, could make doing business more expensive or, even worse, make parts of the service unlawful. Though not mentioned by name here, SOPA, PIPA, and do-not-track legislation are clearly on Facebook’s mind when it notes that “there have been a number of recent legislative proposals in the United States . . . that would impose new obligations in areas such as privacy and liability for copyright infringement by third parties.”

(4) Intellectual property protection. The company begins its discussion of this risk with a few obvious observations, namely, how the company may be adversely affected if it is unable to secure trademark, copyright or patent registration for its various intellectual property assets. Later in the disclosure, though, Facebook says some really interesting things about open source:

As a result of our open source contributions and the use of open source in our products, we may license or be required to license innovations that turn out to be material to our business and may also be exposed to increased litigation risk. If the protection of our proprietary rights is inadequate to prevent unauthorized use or appropriation by third parties, the value of our brand and other intangible assets may be diminished and competitors may be able to more effectively mimic our service and methods of operations.

(5) Patent troll lawsuits. Facebook notes that internet and technology companies “frequently enter into litigation based on allegations of infringement, misappropriation, or other violations of intellectual property or other rights.” But it goes on to give special attention to those “non-practicing entities” (read: patent trolls) “that own patents and other intellectual property rights,” which “often attempt to aggressively assert their rights in order to extract value from technology companies.” Facebook believes that as its profile continues to rise, especially in the glory of its IPO, it will increasingly become the target of patent trolls. For now it does not seem worried: “[W]e do not believe that the final outcome of intellectual property claims that we currently face will have a material adverse effect on our business.” Instead, those endeavors are a drain on resources: “[D]efending patent and other intellectual property claims is costly and can impose a significant burden on management and employees….” And there is also the risk that these lawsuits might turn out badly, and Facebook would have to pay judgments, get licenses, or develop workarounds.

(6) Tort liability for user-generated content. Facebook acknowledges that it faces, and will face, claims relating to information that is published or made available on the site by its users, including claims concerning defamation, intellectual property rights, rights of publicity and privacy, and personal injury torts. Though it does not specifically mention the robust immunity from liability over third party content provided by 47 U.S.C. 230, Facebook indicates a certain confidence in the protections afforded by U.S. law from tort liability. It is the international scene that gives Facebook concern here: “This risk is enhanced in certain jurisdictions outside the United States where our protection from liability for third-party actions may be unclear and where we may be less protected under local laws than we are in the United States.”

You have to hand it to the teams of professionals who have put together Facebook’s IPO filing. I suppose the billions of dollars at stake can serve as a motivation for thoroughness. In any event, the well-articulated discussion of these risks in the S-1 is an interesting read, and can serve to guide the many lesser-valued companies out there.

Megaupload takedown reminds us why website terms and conditions can be important

Kashmir Hill pointed out that at least one erstwhile file sharing service has changed its business model in response to the federal government’s action against Megaupload. She observes that:

FileSonic users can’t be too happy to have one of the main features of the site taken away. But the company must be less worried about its breach of contract with existing users than it is about the possibility of getting the Megaupload treatment, i.e., arrest, seizure of its property, and a criminal indictment.

This raises an important point. Any kind of online service that pushes the legal envelope may want to build in some mechanisms to pull back with impunity if it gets freaked out or loses its envelope-pushing courage. Said another way, that service should not make promises to its users that it cannot keep in the event the service wants to change what it is doing.

Some well known user generated content sites do this pretty well already in their terms of service. For example:

  • Dropbox: “We reserve the right to suspend or end the Services at any time, with or without cause, and with or without notice.”
  • YouTube: “YouTube reserves the right to discontinue any aspect of the Service at any time.”
  • Reddit: “We also reserve the right to discontinue the Program, or change the content or formatting of the Program, at any time without notice to you, and to require the immediate cessation of any specific use of the Program.”
  • Facebook (being kind of vague): “If you . . . create risk or possible legal exposure for us, we can stop providing all or part of Facebook to you.”

All good examples of foresight in drafting website terms and conditions that help innovative sites with damage control.

Employee’s Facebook status update was protected by the First Amendment

Mattingly v. Milligan, 2011 WL 5184283 (E.D.Ark. November 1, 2011)

Plaintiff worked in the county clerk’s office. Her old boss, whom she had supported in the election, lost. Her new boss (the newly-elected county clerk) began cleaning house and laid off some of the staff. Plaintiff survived that round of cuts, but lamented those terminations in a Facebook status update. Empathetic comments from county residents ensued.

The new boss found out about the status update and the comments. So he fired plaintiff. She sued, alleging that the termination violated her right to free speech. The boss moved for summary judgment, but the court denied the motion, sending the case to trial.

Here is some of the relevant Facebook content:

Plaintiff’s status update: So this week not going so good bad stuff all around.

Friend’s comment: Will be praying. Speak over those bad things positively.

Plaintiff’s comment: I am trying my heart goes out to the ladies in my office that were told by letter they were no longer needed…. It’s sad.

* * *

Friend’s comment: He’s making a mistake, but I knew he would, too bad….

* * *

Friend’s comment: I can’t believe a letter would be the manner of delivering such a message! I’m with the others…they will find some thing better and tell them this is an opportunity and not a closed door. Prayers for you and friends.

* * *

Friend’s comment: How could you expect anything else from [defendant], he was an…well nevermind.

Courts addressing claims by public employees who contend that they have been discharged for exercising their right to free speech must employ a two-step inquiry: First, the court must determine whether the speech may be described as “speech on a matter of public concern.” If so, the second step involves balancing the employee’s right to free speech against the interests of the public employer.

In this case, the court found the speech to be on a matter of public concern because:

  • the statements were made in a “public domain”
  • those who saw the statements (many of whom were residents of the county) understood them to be about terminations in the clerk’s office
  • some of the comments contained criticism of the termination decision
  • six constituents of the new clerk called his office to complain
  • the press and media had covered the situation

As for the second step in the analysis, namely, balancing the employee’s right to free speech against the interests of the public employer, the court did not even undertake a balancing test, as there simply was no evidence that the status update and the comments disrupted the operations of the clerk’s office.

Court requires fired social media employee to return usernames and passwords

Ardis Health, LLC v. Nankivell, 2011 WL 4965172 (S.D.N.Y. October 19, 2011)

Defendant was hired to be plaintiffs’ “video and social media producer,” with responsibilities that included maintaining social media pages in connection with the online marketing of plaintiffs’ products. After she was terminated, she refused to tell her former employers the usernames and passwords for various social media accounts. (The case doesn’t say which ones, but it’s probably safe to assume these were Facebook pages and maybe Twitter accounts.) So plaintiffs sued, and sought a preliminary injunction requiring defendant to return the login information. The court granted the motion for preliminary injunction.

The court found that plaintiffs had come forward with sufficient evidence to support a finding of irreparable harm if the login information was not returned prior to a final disposition in the case:

Plaintiffs depend heavily on their online presence to advertise their businesses, which requires the ability to continuously update their profiles and pages and react to online trends. The inability to do so unquestionably has a negative effect on plaintiffs’ reputation and ability to remain competitive, and the magnitude of that effect is difficult, if not impossible, to quantify in monetary terms. Such injury constitutes irreparable harm.

Defendant argued there would not be irreparable harm because the web content had not been updated in over two years. But the court rejected that argument, mainly because it would have been unfair to let the defendant benefit from her own failure to perform her job responsibilities:

Defendant was employed by plaintiffs for the entirety of that period, and she acknowledges that it was her responsibility to post content to those websites. Defendant cannot use her own failure to perform her duties as a defense.

Moreover, the court found that the plaintiffs would lose out by not being able to leverage new opportunities. For example, plaintiffs had recently hopped on the Groupon-copycat bandwagon by participating in “daily deal” promotions. The court noted that the success of those promotions depended heavily on tie-ins with social media. So in this way the unavailability of the social media login information also contributed to irreparable harm.

Prosecutor’s Facebook postings did not warrant overturning conviction

State v. Usee, 2011 WL 2437271 (Minn. App. June 20, 2011)

A jury convicted defendant of attempted murder and other violent crimes. He asked the court for a Schwartz hearing (which is what they call these things in Minnesota) to evaluate whether a posting by the prosecutor on her public Facebook page improperly influenced the jury. According to affidavits that defendant submitted to the court, the prosecutor made the culturally insensitive remark that she was keeping the streets safe from Somalis.

The trial court denied the motion for a Schwartz hearing. Defendant sought review. On appeal, the court affirmed the denial of the motion.

It held that there was no evidence that the Facebook posting led to any jury misconduct. The jurors had been instructed not to research the case. (And we all know that jurors take those instructions seriously, right?) Any harm to defendant’s interests, the court found, would merely be speculative.

Court dismisses unfair competition claim against Facebook over alleged privacy violation

This is a post by Sierra Falter.  Sierra is a third-year law student at DePaul University College of Law in Chicago focusing on intellectual property law.  You can reach her by email at sierrafalter [at] gmail dot com or follow her on Twitter (@lawsierra).  Bio: www.sierrafalter.com.

In re Facebook Privacy Litigation, 2011 WL 2039995 (N.D.Cal. May 12, 2011)

Plaintiff Facebook users sued defendant Facebook for violation of California’s Unfair Competition Law (“UCL”), Cal. Bus. & Prof. Code §§ 17200, et seq., alleging that Facebook intentionally and knowingly transmitted personal information about plaintiffs to third-party advertisers without plaintiffs’ consent.  Facebook moved to dismiss the UCL claim.  The court granted the motion.

Defendant argued that plaintiffs failed to state a claim because they lacked standing under the UCL, since they did not allege they lost money or property.  Defendant asserted there was no such loss because plaintiffs’ “personal information” did not constitute property under the UCL.

Plaintiffs, for their part, had alleged that defendant unlawfully shared their “personally identifiable information” with third-party advertisers. But the court distinguished their claim from Doe 1 v. AOL, LLC, 719 F.Supp.2d 1102 (N.D. Cal. 2010), in which the plaintiffs’ personal and financial information had been distributed to the public after those plaintiffs signed up and paid fees for AOL’s service. Because plaintiffs here alleged they received defendant’s services for free, they could not show the loss of money or property required to state a UCL claim.
