
Can the Fifth Amendment protect you from having to turn over your personal email account if you’re fired?

 

Can the Fifth Amendment protect you from having to turn over your personal laptop and email account when you are fired? Maybe not.

In New Jersey, an ex-employee allegedly sent over a hundred emails filled with confidential data to his personal Gmail account. This was frowned upon, especially since he had signed a confidentiality agreement.

When the company demanded he hand over his devices and accounts for inspection, he played the Fifth Amendment card, saying it protected him from self-incrimination. But the court wasn’t having any of it.

Why? Because during the HR investigation, he admitted in writing there was incriminating evidence waiting to be found, so the court held he waived his right to Fifth Amendment protection.

So even if turning over the devices could get the employee arrested, the court held that was okay in this situation and even benefitted the public interest.

Be careful out there with your side hustle.

Game developer prevails in action over bogus DMCA takedown notices


Defendant posted some videos on YouTube about the game Destiny 2. The videos stayed online for eight years with no issues until Bungie, the game’s developer and publisher, sent a Digital Millennium Copyright Act (“DMCA”) takedown notice to YouTube asserting that defendant’s videos violated Bungie’s intellectual property policy. That policy encouraged Destiny 2 enthusiasts to create and post Destiny 2 content so long as the content conformed with the policy. Defendant felt wronged by Bungie’s DMCA takedown notice and, seeking to highlight flaws in the DMCA takedown process, posed as a Bungie employee and submitted 96 fraudulent takedown requests targeting other Destiny 2 content, including videos on Bungie’s own channel.

Bungie sued under Section 512(f) of the DMCA, which provides that one may be liable for sending a takedown notification that knowingly and materially misrepresents that the complained-of material is infringing. To be liable, a defendant must lack a subjective, good faith belief that the material targeted by the takedown notification is infringing. Bungie moved for summary judgment on its own claim, and defendant did not oppose the motion, even though he had sat for a deposition and otherwise participated in the litigation. The court granted the motion.

In his deposition, defendant had admitted he was “oblivious to the reprehensible damages [he] was causing to the community” and to Bungie in issuing the fraudulent takedown notices, and that he caused financial and emotional damage to several Destiny 2 fans whose videos were subject to the fraudulent notices he had sent. The court determined that defendant lacked a good faith belief in the infringing nature of the targeted content, which supported his liability under the statute. Bungie demonstrated that the material did not violate its intellectual property policy and that defendant had no authority to issue the DMCA notices. As a result of defendant’s actions, Bungie suffered reputational damage and incurred significant costs in addressing the issue. Consequently, the court granted summary judgment in favor of Bungie, recognizing the intentional nature of defendant’s violations and the resulting harm.

Bungie, Inc. v. Minor, 2024 WL 965010 (W.D. Wash. March 6, 2024)

Lawyers and AI: Key takeaways from being on a panel at a legal ethics conference

Earlier today I was on a panel at Hinshaw & Culbertson’s LMRM Conference in Chicago. This was the 23rd annual LMRM Conference, and the event has become the gold standard for events that focus on the “law of lawyering.”

Our session was titled How Soon is Now—Generative AI, How It Works, How to Use it Now, How to Use it Ethically. Preparing for and participating in the event gave me the opportunity to seriously consider some of the key issues relating to how lawyers are using generative AI and the promise that wider future adoption of these technologies in the legal industry holds.

Here are a few key takeaways:

    • Effective use. Lawyers are already using generative AI in ways that aid efficiency. The technology can summarize complex texts during legal research, allowing the attorney to quickly assess whether the content addresses her specific interests, is factually relevant, and aligns with desired legal outcomes. With a carefully crafted and detailed prompt, an attorney can generate a pretty good first draft of many types of correspondence (e.g., cease and desist letters). Tools such as ChatGPT can also aid in brainstorming by generating a variety of ideas on a given topic, helping lawyers consider possible outcomes in a situation. (A minimal sketch of what this kind of workflow can look like appears after this list.)

 

    • Access to justice. It is not clear how generative AI adoption will affect access to justice. While it is possible that something like “legal chatbots” could bring formerly unavailable legal help to parties without sufficient resources to hire expensive lawyers, the building and adoption of sophisticated tools by the most elite firms will come at a cost that is passed on to clients, making premium services even more expensive, thereby increasing the divide that already exists.

 

    • Confidentiality and privacy. Care must be taken to reduce the risk of unauthorized disclosure of information when law firms adopt generative AI tools. Data privacy concerns arise regardless of the industry in which generative AI is used. But lawyers have the additional obligation to preserve their clients’ confidential information in accordance with the rules governing the attorney-client relationship. This duty of confidentiality complicates the ways in which a law firm’s “enterprise knowledge” can be used to train a large language model. And lawyers must consider whether and how to let their clients know that the client’s information may be used to train the model.

 

    • Exposing lawyering problems. Cases such as Mata v. Avianca, Park v. Kim and Kruse v. Karlen, wherein lawyers or litigants used AI to generate documents submitted to the court containing non-existent case citations (hallucinations), tend to be used to critique these kinds of tools and tend to discourage lawyers from adopting them. But if one looks at these cases carefully, it is apparent that the problem is not so much with the technology, but instead with lawyering that lacks the appropriate competence and diligence.

    • AI and the standard of the practice. There is plenty of data suggesting that most knowledge work jobs will be drastically affected by AI in the near term. Regardless of whether a lawyer or law firm wants to adopt generative AI in the practice of law, attorneys will not be able to ignore how the use of AI is changing norms and expectations, because clients will be using these technologies effectively and innovating in the space.
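
To make the “effective use” point above more concrete, here is a minimal sketch of what a summarization pass might look like in Python. It assumes the openai package (v1.x API) and an OPENAI_API_KEY environment variable; the model name, prompt wording, and placeholder text are illustrative assumptions, not a recommendation of any particular vendor or configuration.

    # Minimal sketch: asking a general-purpose model to summarize a passage
    # encountered during legal research. Assumes `pip install openai` and an
    # OPENAI_API_KEY set in the environment; model name is an assumption.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    case_excerpt = "[paste the passage to be summarized here]"

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are assisting a lawyer. Summarize the passage in three "
                    "sentences and note whether it discusses trade secrets."
                ),
            },
            {"role": "user", "content": case_excerpt},
        ],
    )

    print(response.choices[0].message.content)

Whatever comes back is a first draft at best: as the cases mentioned in the bullet above make clear, the attorney still has to verify every citation and factual claim before relying on the output.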

Thank you to Barry MacEntee for inviting me to be on his panel. Barry, you did an exemplary job of preparation and execution, which is exactly how you roll. Great to meet my co-panelist Andrew Sutton. Andrew, your insights and commentary on both the legal and technical aspects of the use of AI in the practice of law were terrific.

Fifth Amendment did not save former employee from having to turn over his Gmail account


Plaintiff biotech company sued a former employee for allegedly emailing proprietary information to his personal Gmail account and discussing employment with competitors. Plaintiff’s investigations revealed defendant had sent over a hundred emails with confidential data to his Gmail account, in violation of a confidentiality agreement defendant had signed when he was hired. Plaintiff sued defendant alleging misappropriation of trade secrets under both federal and state law. Plaintiff sought a temporary restraining order that required defendant to turn over his devices and online accounts for inspection. The court granted the motion.

Injunctive relief warranted

The court found that plaintiff had shown a reasonable probability of success in the litigation. It had successfully alleged ownership of trade secrets and had described specific instances (e.g., sending emails to a private Gmail account) that would be considered misappropriation.

Defendant could not be trusted

As for the likelihood of irreparable harm if the injunction were not granted, the court considered plaintiff’s assertion that defendant “could not be trusted” based on his alleged conduct, and its argument that the continued presence of unsecured confidential information on defendant’s devices and accounts would cause irreparable harm.

No Fifth Amendment Protection

Defendant argued under the “balancing of the equities” test that requiring him to turn over his devices and accounts would violate his Fifth Amendment privilege against self-incrimination. The court rejected this argument, however, observing that in the course of plaintiff’s investigation of defendant’s conduct, defendant had knowingly, intelligently and voluntarily signed a document admitting there was incriminating evidence to be found. Because of this, the court found defendant had waived his Fifth Amendment rights.

Injunction favored the public interest

The court also found that entry of the injunction requiring defendant to turn over the devices and accounts would benefit the public interest. It noted that there is a generalized public interest in upholding the inviolability of trade secrets and enforceability of confidentiality agreements. It mentioned the general interest in preserving Fifth Amendment rights but reiterated that in these circumstances, because of defendant’s waiver, the Fifth Amendment did not shield defendant.

Legend Biotech USA v. Liu, 2024 WL 919082 (D.N.J. March 4, 2024)


Website operator not liable under Wiretap Act for allowing Meta to intercept visitor communications

Plaintiffs asserted that defendant healthcare organization inadequately protected the personal and health information of visitors to its website. In particular, plaintiffs alleged that unauthorized third parties – including Meta – could intercept user interactions through tracking technologies such as the Meta Pixel and Conversions API. According to plaintiffs, these tools collected sensitive health information and sent it to Meta. And despite defendant’s privacy policy claiming to protect user privacy and information, plaintiffs alleged that using defendant’s website caused them to receive unsolicited advertisements on their Facebook accounts.
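
For context on the technology at issue: the Meta Pixel is a snippet of JavaScript that runs in the visitor’s browser, while the Conversions API sends similar event data from the website operator’s own servers to Meta. Below is a minimal, hypothetical Python sketch of that server-side pattern. The pixel ID, access token, API version, URL, and email address are placeholder assumptions for illustration only and are not drawn from the case; the point is simply that the page a visitor views, along with hashed identifiers, can be transmitted to Meta without anything visible happening in the browser.

    # Hypothetical sketch of a Conversions-API-style server-side event post.
    # All identifiers below are placeholders, not values from the litigation.
    import hashlib
    import time

    import requests  # third-party HTTP client

    PIXEL_ID = "1234567890"            # placeholder
    ACCESS_TOKEN = "EAAB-placeholder"  # placeholder
    ENDPOINT = f"https://graph.facebook.com/v18.0/{PIXEL_ID}/events"

    def hash_identifier(value: str) -> str:
        # Identifiers such as email addresses are typically normalized and
        # SHA-256 hashed before being sent.
        return hashlib.sha256(value.strip().lower().encode()).hexdigest()

    event = {
        "event_name": "PageView",
        "event_time": int(time.time()),
        "action_source": "website",
        # The URL itself can reveal sensitive context, e.g. a condition-specific page.
        "event_source_url": "https://hospital.example/find-a-doctor?specialty=oncology",
        "user_data": {"em": [hash_identifier("visitor@example.com")]},
    }

    response = requests.post(
        ENDPOINT,
        params={"access_token": ACCESS_TOKEN},
        json={"data": [event]},
        timeout=10,
    )
    print(response.status_code)

The dispute in this case was not about whether such transmissions are technically possible, but about whether the website operator’s role in enabling them violated the wiretap statutes.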

Plaintiffs sued, asserting a number of claims, including under the federal Electronic Communications Privacy Act (“ECPA”) and the California Invasion of Privacy Act (“CIPA”). Defendant moved to dismiss these claims. The court granted the motion.

To establish an ECPA claim, a plaintiff must demonstrate that defendant intentionally intercepted or attempted to intercept electronic communications using a device. CIPA similarly prohibits using electronic means to understand the contents of a communication without consent. Both laws have a “party exception” allowing a person who is a party to the communication to intercept it, provided the interception is not for a criminal or tortious purpose. In other words, there is an exception to the exception.

In this case, defendant argued it was a legitimate party to plaintiffs’ communications on its website, thus invoking the party exception. Plaintiffs countered that the exception should not apply due to defendant’s alleged tortious intent (making the information available to Facebook without disclosure to plaintiffs). But the court found that plaintiffs did not allege sufficient facts to show that defendant’s actions were for an illegal or actionable purpose beyond the act of interception itself. Following the guidance of Pena v. GameStop, Inc., 2023 WL 3170047 (S.D. Cal. April 27, 2023), which requires a plaintiff to plead sufficient facts to support an inference that the offender intercepted the communication for the purpose of a tortious or criminal act independent of the intentional act of recording or interception itself, the court concluded there was no separate tortious conduct involved in the interception and dismissed the claims.

B.K. v. Eisenhower Medical Center, 2024 WL 878100 (February 29, 2024)


VIDEO: Elon Musk / OpenAI lawsuit – What’s it all about?

 

So Elon Musk has sued OpenAI. What’s this all about?

The lawsuit centers on the alleged breach of a founding agreement and OpenAI’s shift from non-profit to for-profit through partnerships with companies like Microsoft. It was filed in state court in California and discusses the risks of artificial general intelligence (or AGI). It recounts how Musk worked with Sam Altman back in 2015 to form OpenAI for the public good. That was the so-called “founding agreement,” which also got written into the company’s certificate of incorporation. One of the most intriguing things about the lawsuit is that Musk is asking the court to determine that OpenAI’s technology constitutes artificial general intelligence and that OpenAI has thereby gone outside the initial scope of the founding agreement.

Stealing data: Ninth Circuit examines whether cellular data can be subject to a conversion claim


Plaintiffs sued Google, alleging that it improperly used plaintiffs’ cellular data without consent, constituting conversion under California law. The lower court dismissed the case for failure to state a claim. Plaintiffs sought review with the Ninth Circuit. On appeal, the court reversed the lower court’s decision concerning the conversion claim, finding that cellular data can be the subject of a conversion claim.

The court observed that a successful conversion plaintiff must plead and prove (1) ownership or rightful possession of property, (2) defendant’s use of the property in violation of plaintiff’s rights, and (3) resulting damages. The court found that plaintiffs sufficiently alleged that cellular data is a form of personal property subject to conversion, given its definable nature, the potential for exclusive control, and plaintiffs’ legitimate expectations based on their data plans.

Moreover, the court concluded that plaintiffs’ allegations against Google met the criteria for conversion, demonstrating unauthorized use of their cellular data that interfered with their property interests and resulted in quantifiable damages. By equating Google’s actions to a “forced sale” of plaintiffs’ data, the court underscored the tangible impact of intangible property loss.

Taylor v. Google, 2024 WL 837044 (9th Cir. February 28, 2024)


Nvidia forces consumer lawsuit into arbitration  


Plaintiffs filed a class action suit against Nvidia alleging that Nvidia falsely advertised a game streaming feature for its Shield line of devices, a feature that was later disabled, thus depriving consumers of a paid feature and devaluing their devices. The suit included claims of trespass to chattels, breach of implied warranty, and violations of various consumer protection laws.

Nvidia filed a motion to compel arbitration, citing an agreement that users ostensibly accepted during the device setup process. This agreement provided that disputes would be resolved through binding arbitration in accordance with Delaware laws and that any arbitration would be conducted by an arbitrator in California.

The court looked to the Federal Arbitration Act, which upholds arbitration agreements unless general contract defenses such as fraud or unconscionability apply. Nvidia emphasized the initial setup process for Shield devices, during which users were required to agree to certain terms of use that included the arbitration provision. In light of Nvidia’s claim that this constituted clear consent to arbitrate disputes, the court examined whether the agreement was unconscionable and whether it indeed covered plaintiffs’ claims.

The court found the arbitration agreement enforceable, rejecting plaintiffs’ claims of both procedural and substantive unconscionability. The court concluded that the setup process provided sufficient notice to users about the arbitration agreement, and the terms of the agreement were not so one-sided as to be deemed unconscionable. Furthermore, the court determined that plaintiffs’ claims fell within the scope of the arbitration agreement, leading to a decision to stay the action pending arbitration in accordance with the agreement’s terms.

Davenport v. Nvidia Corporation, — F.Supp.3d —, 2024 WL 832387 (N.D. Cal. February 28, 2024)


ChatGPT was “utterly and unusually unpersuasive” in case involving recovery of attorney’s fees


In a recent federal case in New York under the Individuals with Disabilities Education Act, plaintiff prevailed on her claims and sought an award of attorney’s fees under the statute. Though the court ended up awarding plaintiff’s attorneys some of their requested fees, it lambasted counsel in the process for using information obtained from ChatGPT to support the claimed hourly rates.

Plaintiff’s firm used ChatGPT-4 as a “cross-check” against other sources in confirming what should be a reasonable hourly rate for the attorneys on the case. The court found this reliance on ChatGPT-4 to be “utterly and unusually unpersuasive” for determining reasonable billing rates for legal services. The court criticized the firm’s use of ChatGPT-4 for not adequately considering the complexity and specificity required in legal billing, especially given the tool’s inability to discern between real and fictitious legal citations, as demonstrated in recent cases within the Second Circuit.

In Mata v. Avianca, Inc., 2023 WL 4114965 (S.D.N.Y. June 22, 2023), the district court judge sanctioned lawyers for submitting fictitious judicial opinions generated by ChatGPT, and in Park v. Kim, — F.4th —, 2024 WL 332478 (2d Cir. January 30, 2024), an attorney was referred to the Circuit’s Grievance Panel for citing non-existent authority from ChatGPT in a brief. For the court, these examples underscored the tool’s limitations in legal contexts and raised concerns about its reliability and appropriateness for legal tasks.

J.G. v. New York City Dept. of Education, 2024 WL 728626 (February 22, 2024)

