
What does the “bill that could ban TikTok” actually say?

In addition to causing free speech concerns, the bill is troubling in the way it gives unchecked power to the Executive Branch.

Earlier this week the United States House of Representatives passed a bill that is being characterized as one that could ban TikTok. Styled as the Protecting Americans from Foreign Adversary Controlled Applications Act, the text of the bill calls TikTok and its owner ByteDance Ltd. by name and seeks to “protect the national security of the United States from the threat posed by foreign adversary controlled applications.”

What conduct would be prohibited?

The Act would make it unlawful for anyone to “distribute, maintain, or update” a “foreign adversary controlled application” within the United States. The Act specifically prohibits anyone from “carrying out” any such distribution, maintenance or updating via a “marketplace” (e.g., any app store) or by providing hosting services that would enable distribution, maintenance or updating of such an app. Interestingly, the ban does not directly prohibit ByteDance from making TikTok available so much as it would make entities such as Apple and Google liable for making the app available for others to access, maintain and update.

What apps would be banned?

There are two ways an app could find itself being a “foreign adversary controlled application” and thereby prohibited.

  • The first is simply by being TikTok or any app provided by ByteDance or its successors.
  • The second way – and perhaps the more concerning one, because it grants great power to one person – is by being an application controlled by a foreign adversary that is “determined by the President to present a significant threat to the national security of the United States.” Though the President must first provide the public with notice of such determination and make a report to Congress on the specific national security concerns, there is ultimately no check on the President’s power to make this determination. For example, there is no provision in the statute saying that Congress could override the President’s determination.

Relatively insignificant apps, or apps with no social media component, would not be covered by the ban. For example, to be a “covered company” under the statute, the app has to have more than one million monthly users in at least two of the three months prior to the time the President determines the app should be banned. And the statute specifically says that any site having a “primary purpose” of allowing users to post reviews is exempt from the ban.
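To make that threshold concrete, here is a minimal, hypothetical sketch of the monthly-user test in Python. The one-million figure and the two-of-three-months window come from the discussion above; the function name and its inputs are invented for illustration, not drawn from the statute.

```python
# Hypothetical sketch of the user threshold described above: more than
# 1,000,000 monthly users in at least two of the three months preceding
# the President's determination.

def meets_user_threshold(monthly_users_last_three_months: list[int]) -> bool:
    """Returns True if the app crosses the threshold in at least two months."""
    months_over = sum(
        1 for users in monthly_users_last_three_months if users > 1_000_000
    )
    return months_over >= 2

print(meets_user_threshold([900_000, 1_200_000, 1_500_000]))  # True
print(meets_user_threshold([900_000, 950_000, 1_500_000]))    # False
```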

When would the ban take effect?

TikTok would be banned 180 days after the date the President signs the bill. For any other app that the President would later decide to be a “foreign adversary controlled application,” it would be banned 180 days after the date the President makes that determination. The date of that determination would be after the public notice period and report to Congress discussed above.

What could TikTok do to avoid being banned?

It could undertake a “qualified divestiture” before the ban takes effect, i.e., within 180 days after the President signs the bill. Here is another point where one may be concerned about the great power given to the Executive Branch. A “qualified divestiture” would be a situation in which the owner of the app sells off that portion of the business *and* the President determines two things: (1) that the app is no longer being controlled by a foreign adversary, and (2) that there is no “operational relationship” between the United States operations of the company and the old company located in the foreign adversary country. In other words, the app could not avoid the ban by being owned by a United States entity while still sharing data with the foreign company and having the foreign company handle the algorithm.

What about users who would lose all their data?

The Act provides that the app being prohibited must provide users with “all the available data related to the account of such user,” if the user requests it, prior to the time the app becomes prohibited. That data would include all posts, photos and videos.

What penalties apply for violating the law?

The Attorney General is responsible for enforcing the law. (An individual could not sue and recover damages.) Anyone (most likely an app store) that violates the ban on distributing, maintaining or updating the app would face penalties of $5,000 x the number of users determined to access, maintain or update the app. Those damages could be astronomical – TikTok currently has 170 million users, so the damages would be $850,000,000,000. An app’s failure to provide data portability prior to being banned would cause it to be liable for $500 x the number of affected users.
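For a rough sense of scale, here is a back-of-the-envelope sketch in Python of the exposure described above. The per-user amounts and the 170 million user figure come from the discussion above; everything else is illustrative.

```python
# Back-of-the-envelope calculation of the civil penalty exposure described
# above: $5,000 per user for unlawful distribution, maintenance or updating,
# and $500 per user for failing to provide data portability, applied to
# roughly 170 million U.S. TikTok users.

DISTRIBUTION_PENALTY_PER_USER = 5_000
PORTABILITY_PENALTY_PER_USER = 500
ESTIMATED_US_USERS = 170_000_000

distribution_exposure = DISTRIBUTION_PENALTY_PER_USER * ESTIMATED_US_USERS
portability_exposure = PORTABILITY_PENALTY_PER_USER * ESTIMATED_US_USERS

print(f"Distribution/maintenance/updating exposure: ${distribution_exposure:,}")
# Distribution/maintenance/updating exposure: $850,000,000,000
print(f"Data portability exposure: ${portability_exposure:,}")
# Data portability exposure: $85,000,000,000
```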

Key takeaways from the USPTO and Copyright Office joint report to Congress on NFTs

On March 12, 2024, the United States Patent and Trademark Office and the Copyright Office released a joint report from a study they did exploring the impact of NFTs on Intellectual Property law. The study aimed to assess how innovations in digital art ownership and authenticity verification align with existing intellectual property frameworks.

The report emphasized that NFTs present novel opportunities for intellectual property owners, possibly enhancing licensing avenues and offering creators greater control over their works and a larger share of the resulting revenues. On the other hand, the immutable and decentralized nature of blockchain, the technology underlying NFTs, introduces significant challenges in enforcing intellectual property rights, amplifying concerns around online piracy and counterfeiting.

A significant issue that the report highlighted is the widespread confusion around the scope of rights obtained in an NFT transaction, often leading to misconceptions about owning intellectual property rights in the associated digital assets. Despite these challenges, the study found that current intellectual property laws are generally adequate to address the complexities introduced by NFTs, with both the Copyright Office and the USPTO favoring educational initiatives over legislative changes to clarify the nature of rights involved in NFT transactions. So it is not likely we will see new NFT legislation – at least the Copyright Office and the USPTO are not pushing for it.

To conclude the report, the USPTO and the Copyright Office committed to further exploring the use of emerging technologies to improve agency operations and to ongoing engagement with stakeholders to enhance understanding of NFT-related intellectual property issues.

All in all, there is nothing too surprising or revealing in the report, but it does provide a great summary of the various issues. Below is the full text of the report.

Joint-USPTO-USCO-Report-on-NFTs-and-Intellectual-Property

X avoids much of music industry copyright lawsuit

Plaintiffs sued X for copyright infringement arising from X’s users posting tweets that contained copyright-protected music. Plaintiffs accused X of “trying to generate the kind of revenue that one would expect as a lawful purveyor of music and other media, without incurring the cost of actually paying for the licenses.” For example, plaintiffs highlighted a feature within the X platform whereby one could seek out tweets that include audiovisual media. And they pointed out infringing content surrounded by “promoted” content on the platform that generated revenue for X. The parties disputed the extent to which X actively encouraged infringing conduct. Plaintiffs sent many DMCA takedown notices to X but complained that the company took too long to respond to those notices. And plaintiffs asserted that X did not have an appropriate procedure in place to terminate users engaged in repeated acts of copyright infringement.

The complaint alleged three counts – direct, contributory and vicarious infringement. X moved to dismiss the complaint for failure to state a claim. The court granted the motion for the most part, except as to certain practices relevant to contributory liability, namely, being more lenient toward verified users, failing to act quickly on DMCA takedown notices, and failing to take steps in response to severe serial infringers.

No direct infringement liability

The court found that plaintiffs had not successfully alleged direct infringement liability because their claims did not align with the required notion of “transmission” as defined in the Copyright Act and interpreted in the Supreme Court case of American Broadcasting Companies, Inc. v. Aereo, Inc., 573 US 431 (2014). The court distinguished X’s actions from the defendant in the Aereo case by noting that X merely provided the platform for third-party transmissions, rather than actively participating in the transmission of copyrighted material. Therefore, the court concluded that X’s role was more akin to a passive carrier, similar to a telegraph system or telephone company, making its actions more suitable for consideration under theories of secondary liability rather than direct infringement.

Some possible contributory liability

The court found that certain portions of plaintiffs’ claims for contributory infringement liability survived because they plausibly alleged that X engaged in actions that could materially contribute to infringement on the X platform. These actions included failing to promptly respond to valid takedown notices, allowing users to pay for less stringent copyright policy enforcement, and not taking meaningful steps against severe serial infringers. Consequently, the court dismissed the broader claim of general liability across the X platform but allowed the plaintiffs to proceed with their claims related to these specific practices.

No vicarious liability

Finally, the court determined that plaintiffs had not successfully pled vicarious liability for copyright infringement because their allegations did not establish that X had the requisite level of control over the infringing activities on X. The court found that simply providing a service that users might exploit for infringement did not equate to the direct control or supervisory capacity typically required for vicarious liability, as seen in traditional employer-employee or principal-agent relationships. Consequently, the court rejected the application of vicarious liability in this context, emphasizing that contributory infringement, rather than vicarious liability, was the more appropriate legal framework for the plaintiffs’ claims.

Concord Music Group, Inc. v. X Corp., 2024 WL 945325 (M.D. Tenn. March 5, 2024)


Print-on-demand platform avoids liability for illustrator’s copyright claims

Plaintiff illustrator (known for his works involving fish) sued defendant print-on-demand online platform operators for copyright infringement. Defendants’ platform enabled third parties to upload designs that could be printed on items such as t-shirts, mugs and tumblers. Plaintiff alleged that four of his works had been uploaded to the platform and had been printed on goods without plaintiff’s authorization.

Defendants moved to dismiss, arguing that defendants were neither directly nor secondarily liable for any alleged infringement. The magistrate judge submitted a report and recommendation that the matter be dismissed. Plaintiff objected to the magistrate judge’s report and recommendation. The district court overruled the objections and granted the motion to dismiss.

No liability for direct copyright infringement

The court examined the question of defendants’ alleged volitional conduct and its relation to a claim for direct liability for infringement. The magistrate judge had found that plaintiff failed to adequately allege that defendants had engaged in volitional conduct required to pin liability on defendants for infringements occasioned by defendants’ platform’s third party users. The court agreed that an allegation that defendants merely displayed plaintiff’s copyright-protected works did not plausibly suggest that defendants knew the work was protected by copyright. Moreover, plaintiff did not, for example, allege that defendants designed, manufactured or even selected the products on their website.

No liability for secondary copyright infringement

As for secondary liability for copyright infringement, plaintiff had objected to the magistrate judge’s determination of the question at the motion to dismiss stage. But the court rejected this objection to the magistrate judge’s report and recommendation. The court agreed with the magistrate judge’s determination that plaintiff failed to state a valid contributory infringement claim because he did not allege that defendants induced the third-party infringers; and he failed to state a valid secondary liability claim because he did not allege defendants “declined” to stop or limit third parties from infringing. It appears plaintiff sought to limit any application of these secondary liability elements to questions arising under the safe harbors of the Digital Millennium Copyright Act (“DMCA”). But the court found that plaintiff conflated the DMCA and general theories of copyright infringement liability.

Tomelleri v. Sunfrog, LLC, 2024 WL 940238 (E.D. Michigan, March 5, 2024)


Can the Fifth Amendment protect you from having to turn over your personal email account if you’re fired?


Can the Fifth Amendment protect you from having to turn over your personal laptop and email account when you are fired? Maybe not.

In New Jersey, an ex-employee allegedly sent over a hundred emails filled with confidential data to his personal Gmail account. This was frowned upon, especially since he had signed a confidentiality agreement.

When the company demanded he hand over his devices and accounts for inspection, he played the Fifth Amendment card, saying it protected him from self-incrimination. But the court wasn’t having any of it.

Why? Because during the HR investigation, he admitted in writing there was incriminating evidence waiting to be found, so the court held he waived his right to Fifth Amendment protection.

So even if turning over the devices could get the employee arrested, the court held that was okay in this situation and even benefitted the public interest.

Be careful out there with your side hustle.

Game developer prevails in action over bogus DMCA takedown notices


Defendant posted some videos on YouTube about the game Destiny 2. The videos stayed online for eight years with no issues until Bungie, the game’s developer and publisher, sent a Digital Millennium Copyright Act (“DMCA”) takedown notice to YouTube because defendant’s video violated Bungie’s intellectual property policy. This policy encouraged Destiny 2 enthusiasts to create and post Destiny 2 content so long as the content conformed with the policy. Defendant felt wronged by Bungie’s DMCA takedown notice and, seeking to highlight flaws in the DMCA takedown process, posed as a Bungie employee and submitted 96 fraudulent takedown requests targeting other Destiny 2 content, including videos on Bungie’s own channel.

Bungie sued under Section 512(f) of the DMCA, which provides that one may be liable for sending a takedown notification that knowingly materially misrepresents that the complained-of material is infringing. To be liable, a defendant must lack a subjective, good faith belief that the material targeted by the takedown notification is infringing. Bungie moved for summary judgment on its own claim and defendant did not oppose the motion, even though he had sat for a deposition and otherwise participated in the litigation. The court granted the summary judgment motion in favor of Bungie.

In his deposition, defendant had admitted he was “oblivious to the reprehensible damages [he] was causing to the community” and Bungie in issuing the fraudulent takedown notices, and that he caused financial and emotional damage to several Destiny 2 fans whose videos were targeted by the notices he had sent. The court determined that defendant lacked a good faith belief that the targeted content was infringing, which supported his liability under the statute. Bungie demonstrated that the material did not violate its intellectual property policy and that defendant had no authority to issue the DMCA notices. As a result of defendant’s actions, Bungie faced reputational damage and incurred significant costs in addressing the issue. Consequently, the court granted summary judgment in favor of Bungie, recognizing the intentional nature of defendant’s violations and the resultant harm to Bungie.

Bungie, Inc. v. Minor, 2024 WL 965010 (W.D. Washington, March 6, 2024)

Lawyers and AI: Key takeaways from being on a panel at a legal ethics conference

Earlier today I was on a panel at Hinshaw & Culbertson’s LMRM Conference in Chicago. This was the 23rd annual LMRM Conference, and the event has become the gold standard for events that focus on the “law of lawyering.”

Our session was titled How Soon is Now—Generative AI, How It Works, How to Use it Now, How to Use it Ethically. Preparing for and participating in the event gave me the opportunity to seriously consider some of the key issues relating to how lawyers are using generative AI and the promise that wider future adoption of these technologies in the legal industry holds.

Here are a few key takeaways:

    • Effective use. Lawyers are already using generative AI in ways that aid efficiency. The technology can summarize complex texts during legal research, allowing the attorney to quickly assess if the content addresses her specific interests, is factually relevant, and aligns with desired legal outcomes. With a carefully crafted and detailed prompt, an attorney can generate a pretty good first draft of many types of correspondence (e.g., cease and desist letters). Tools such as ChatGPT can aid in brainstorming by generating a variety of ideas on a given topic, helping lawyers consider possible outcomes in a situation.


    • Access to justice. It is not clear how generative AI adoption will affect access to justice. While it is possible that something like “legal chatbots” could bring formerly unavailable legal help to parties without sufficient resources to hire expensive lawyers, the building and adoption of sophisticated tools by the most elite firms will come at a cost that is passed on to clients, making premium services even more expensive, thereby increasing the divide that already exists.


    • Confidentiality and privacy. Care must be taken to reduce the risk of unauthorized disclosure of information when law firms adopt generative AI tools. Data privacy concerns arise regardless of the industry in which generative AI is used. But lawyers have the additional obligation to preserve their clients’ confidential information in accordance with the rules governing the attorney-client relationship. This duty of confidentiality complicates the ways in which a law firm’s “enterprise knowledge” can be used to train a large language model. And lawyers must consider whether and how to let their clients know that the client’s information may be used to train the model.


    • Exposing lawyering problems. Cases such as Mata v. Avianca, Park v. Kim and Kruse v. Karlen – wherein lawyers or litigants used AI to generate documents submitted to the court containing non-existent case citations (hallucinations) – tend to be used to critique these kinds of tools and to discourage lawyers from adopting them. But if one looks at these cases carefully, it is apparent that the problem is not so much with the technology, but instead with lawyering that lacks the appropriate competence and diligence.

    • AI and the standard of the practice. There is plenty of data suggesting that most knowledge work jobs will be drastically impacted by the use of AI in the near term. Regardless of whether a lawyer or law firm wants to adopt generative AI in the practice of law, attorneys will not be able to avoid learning how the use of AI will change norms and expectations, because clients will be using these technologies effectively and innovating in the space.

Thank you to Barry MacEntee for inviting me to be on his panel. Barry, you did an exemplary job of preparation and execution, which is exactly how you roll. Great to meet my co-panelist Andrew Sutton. Andrew, your insights and commentary on both the legal and technical aspects of the use of AI in the practice of law were terrific.

Fifth Amendment did not save former employee from having to turn over his Gmail account


Plaintiff biotech company sued a former employee for allegedly emailing proprietary information to his personal Gmail account and discussing employment with competitors. Plaintiff’s investigations revealed defendant had sent over a hundred emails with confidential data to his Gmail account, in violation of a confidentiality agreement defendant had signed when he was hired. Plaintiff sued defendant alleging misappropriation of trade secrets under both federal and state law. Plaintiff sought a temporary restraining order that required defendant to turn over his devices and online accounts for inspection. The court granted the motion.

Injunctive relief warranted

The court found that plaintiff had shown a reasonable probability of success in the litigation. It had successfully alleged ownership of trade secrets and had described specific instances (e.g., sending emails to a private Gmail account) that would be considered misappropriation.

Defendant could not be trusted

As for the likelihood of irreparable harm plaintiff would suffer if the injunction were not granted, the court considered plaintiff’s assertion that defendant “could not be trusted” based on his alleged conduct, and that plaintiff would suffer irreparable harm because of the continued presence of unsecured confidential information on defendant’s devices and accounts.

No Fifth Amendment Protection

Defendant argued under the “balancing of the equities” test that requiring him to turn over his devices and accounts would violate his Fifth Amendment rights against self-incrimination. The court rejected this argument, however, observing that in the course of plaintiff’s investigation of defendant’s conduct, defendant had knowingly, intelligently and voluntarily signed a document admitting there was incriminating evidence to be found. Because of this, the court found defendant had waived his Fifth Amendment rights.

Injunction favored the public interest

The court also found that entry of the injunction requiring defendant to turn over the devices and accounts would benefit the public interest. It noted that there is a generalized public interest in upholding the inviolability of trade secrets and enforceability of confidentiality agreements. It mentioned the general interest in preserving Fifth Amendment rights but reiterated that in these circumstances, because of defendant’s waiver, the Fifth Amendment did not shield defendant.

Legend Biotech USA v. Liu, 2024 WL 919082 (D.N.J. March 4, 2024)


Website operator not liable under Wiretap Act for allowing Meta to intercept visitor communications

Plaintiffs asserted that defendant healthcare organization inadequately protected the personal and health information of visitors to defendant’s website. In particular, plaintiffs alleged that unauthorized third parties – including Meta – could intercept user interactions through the use of tracking technologies such as the Meta Pixel and Conversions API. According to plaintiffs, these tools collected sensitive health information and sent it to Meta. Despite defendant’s privacy policy claiming to protect user privacy and information, plaintiffs alleged that using defendant’s website caused plaintiffs to receive unsolicited advertisements on their Facebook accounts.
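To illustrate the kind of technology at issue: a tracking pixel generally works by having the visitor’s browser send a request to a third-party server that encodes the page being viewed along with other metadata. The snippet below is a simplified, hypothetical sketch of that pattern in Python; it is not Meta’s actual Pixel or Conversions API code, and the endpoint and parameter names are invented for illustration.

```python
# Hypothetical illustration of how a pixel-style tracker transmits page
# context to a third party. The endpoint and parameter names are invented;
# this is not the actual Meta Pixel or Conversions API.
from urllib.parse import urlencode

def build_tracking_request(page_url: str, page_title: str, visitor_id: str) -> str:
    """Builds the URL a browser would request when a tracking pixel fires."""
    params = {
        "event": "PageView",
        "page_url": page_url,      # e.g., a page about a specific medical condition
        "page_title": page_title,  # page titles can themselves reveal health details
        "visitor_id": visitor_id,  # identifier that could be matched to a social media account
    }
    return "https://tracker.example.com/collect?" + urlencode(params)

# When a pixel fires on a healthcare provider's page, the third party
# receives the page URL and title along with an identifier:
print(build_tracking_request(
    "https://provider.example.org/find-a-doctor/oncology",
    "Find an Oncologist",
    "visitor-1234567890",
))
```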

Plaintiffs sued, asserting a number of claims, including under the federal Electronic Communications Privacy Act (“ECPA”) and the California Invasion of Privacy Act (“CIPA”). Defendant moved to dismiss these claims. The court granted the motion.

To establish an ECPA claim, a plaintiff must demonstrate that defendant intentionally intercepted or attempted to intercept electronic communications using a device. CIPA similarly prohibits using electronic means to understand the contents of a communication without consent. Both laws have a “party exception” allowing a person who is a party to the communication to intercept it, provided the interception is not for a criminal or tortious purpose. In other words, there is an exception to the exception.

In this case, defendant argued it was a legitimate party to plaintiffs’ communications on its website, thus invoking the party exception. Plaintiffs countered that the exception should not apply due to defendant’s alleged tortious intent (making the information available to Facebook without disclosure to plaintiffs). But the court found that plaintiffs did not sufficiently allege that defendant’s actions were for an illegal or actionable purpose beyond the act of interception itself. Following the guidance of Pena v. GameStop, Inc., 2023 WL 3170047 (S.D. Cal. April 27, 2023) – which requires a plaintiff to plead sufficient facts to support an inference that the offender intercepted the communication for the purpose of a tortious or criminal act that is independent of the intentional act of recording or interception itself – the court concluded there was no separate tortious conduct involved in the interception and dismissed the claims.

B.K. v. Eisenhower Medical Center, 2024 WL 878100 (February 29, 2024)

