Executive order to clarify Section 230: a summary


Late yesterday President Trump took steps to make good on his promise to regulate online platforms like Twitter and Facebook, releasing a draft executive order to that end. Here is a summary of the key points. The draft order:

  • States that it is the policy of the U.S. to foster clear, nondiscriminatory ground rules promoting free and open debate on the Internet. It is the policy of the U.S. that the scope of Section 230 immunity should be clarified.
  • Argues that a platform becomes a “publisher or speaker” of content, and therefore not subject to Section 230 immunity, when it does not act in good faith to restrict access to content (in accordance with Section 230(c)(2)) that it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.” The executive order argues that Section 230 “does not extend to deceptive or pretextual actions restricting online content or actions inconsistent with an online platform’s terms of service.”
  • Orders the Secretary of Commerce to petition the FCC, requesting that the FCC propose regulations to clarify the conditions around a platform’s “good faith” when restricting access to or availability of content. In particular, the requested rules would examine whether the action was, among other things, deceptive, pretextual, inconsistent with the provider’s terms of service, the product of unreasoned explanation, or without meaningful opportunity to be heard.
  • Directs each federal executive department and agency to review its advertising and marketing spending on online platforms. Each is to provide a report in 30 days on: amount spent, which platforms supported, any viewpoint-based restrictions of the platform, assessment whether the platform is appropriate, and statutory authority available to restrict advertising on platforms not deemed appropriate.
  • States that it is the policy of the U.S. that “large social media platforms, such as Twitter and Facebook, as the functional equivalent of a traditional public forum, should not infringe on protected speech”.
  • Re-establishes the White House “Tech Bias Reporting Tool” that allows Americans to report incidents of online censorship. These complaints are to be forwarded to the DoJ and the FTC.
  • Directs the FTC to “consider” taking action against entities covered by Section 230 that restrict speech in ways inconsistent with those entities’ public representations about those practices.
  • Directs the FTC to develop a publicly available report describing complaints about activity of Twitter and other “large internet platforms” that may violate the law in ways that implicate the policy that these platforms are public fora and should not infringe on protected speech.
  • Establishes a working group with states’ attorneys general regarding enforcement of state statutes prohibiting online platforms from engaging in unfair and deceptive acts and practices. 
  • This working group is also to collect publicly available information for the creation and monitoring of user watch lists, based on users’ interactions with content and other users (likes, follows, time spent). The working group is also to monitor users based on their activity “off the platform.” (It is not clear whether that means off the internet entirely or simply on other online services.)

Influencer agreements: what needs to be in them

If you are a social media influencer, or are a brand looking to engage an influencer, you may need to enter into an influencer agreement. Here are five key things that should be in the contract between the influencer and the brand: 

  • Obligations 
  • Payment 
  • Content ownership 
  • Publicity rights 
  • Endorsement guidelines compliance 

Obligations under the influencer agreement.

The main thing that a brand wants from an influencer is for the influencer to say certain things about the brand’s products, in a certain way, and at certain times. What kind of content? Photos? Video? Which platforms? What hashtags? When? How many posts? The agreement should spell all these things out.

Payment.

Influencers are compensated in a number of ways. In addition to getting free products, they may be paid a flat fee upfront or from time to time. And it’s also common to see a revenue share arrangement. That is, the influencer will get a certain percentage based on sales of the products she is endorsing. These sales may be tracked by a promo code. The contract should identify all these amounts and percentages, and the timing for payment.

So what about content ownership? 

The main work of an influencer is to generate content. This could be pictures posted to Instagram, tweets, or video posted to her story. All that content is covered by copyright. Unless the contract says otherwise, the influencer will own the copyright. If the brand wants to do more with that content outside of social media, that needs to be addressed in the influencer agreement.

And then there are rights of publicity. 

Individuals have the right to determine how their image and name are used for commercial purposes. If the brand is going to feature the influencer on the brand’s own platform, then there needs to be language that specifies the limits on that use. That’s key to an influencer who wants to control her personal brand and reputation. 

Finally, endorsement guidelines and the influencer agreement. 

The federal government wants to make sure the consuming public gets clear information about products. So there are guidelines that influencers have to follow. You have to know what these guidelines are to stay out of trouble. And the contract should address what happens if these guidelines aren’t followed.

See also: When is it okay to use social media to make fun of people?

About the author: Evan Brown is an attorney helping individuals and businesses with a wide variety of agreements involving social media, intellectual property and technology. Call him at (630) 362-7237 or send email to ebrown@internetcases.com. 

Twitter account hacked, chaos ensued, but no legal claims stuck

Plaintiff owned a Twitter account and operated a related blog. Plaintiff used the Twitter account to drive traffic and revenue to the blog, conduct business via direct messages, and promote its brand, for which it claimed common law trademark rights.

Unknown hackers took control of plaintiff’s Twitter account by changing the email address associated with it. This locked plaintiff out. Plaintiff contacted Twitter several times to report the hack, but Twitter declined to take action because the complaints did not come from the email address then associated with the account. Plaintiff had not enabled Twitter’s two-factor authentication feature and claimed Twitter failed to adequately inform users about it.

While in control of the account, the hackers used plaintiff’s credit card without permission to purchase 93,000 promoted tweets and posted spam messages, including ones falsely advertising that the account was for sale and offering free iPhones. Plaintiff submitted refund requests to Twitter, but claimed Twitter’s process was broken. Twitter did not restore plaintiff’s access until after the lawsuit was filed.

So let’s sue Twitter

Plaintiff sued Twitter for multiple claims, including contributory trademark infringement, breach of contract, negligence, breach of bailment, and unfair competition. The court granted Twitter’s motion to dismiss all claims, though some were dismissed with leave for plaintiff to amend.

How the court ruled

Plaintiff claimed contributory trademark infringement, arguing that Twitter allowed the hackers to control plaintiff’s account after being notified of the hack, which led to misuse of plaintiff’s mark. The court dismissed this claim because plaintiff did not allege that Twitter had actual or constructive knowledge that trademark infringement was occurring. Simply receiving reports of a hack was not enough to show that Twitter knew or should have known the account’s use was infringing a trademark.

For the breach of contract claim, plaintiff pointed to Twitter’s Terms of Service (TOS), asserting that Twitter had obligations to maintain access and protect content. The court found that the TOS provisions cited did not amount to enforceable promises about uninterrupted access or account security. The court also rejected plaintiff’s claim that Twitter had breached an implied contract, finding no facts showing Twitter made any implied promises. Finally, the court dismissed the breach of the implied covenant of good faith and fair dealing because it was based on the same allegations as the breach of contract claim and added nothing new.

In its negligence and recklessness claim, plaintiff argued that Twitter failed to use reasonable care in securing accounts and responding to the hack. The court rejected this claim for three reasons. First, plaintiff failed to show that Twitter owed a legal duty separate from the contract. Second, the only damages alleged were economic losses, which are barred in negligence cases unless accompanied by personal or property harm. Third, the negligence allegations were nearly identical to the contract claims, making the tort claim impermissibly duplicative.

Plaintiff also alleged a breach of the duty of bailment, claiming Twitter had custody of its credit card information and private messages. The court rejected this claim, stating that digital content and payment information did not qualify as personal property that could be “delivered” and then returned, as required for a bailment. The court also noted this claim was duplicative of the breach of contract and negligence claims.

Under California’s unfair competition law, plaintiff asserted that Twitter engaged in unlawful and unfair practices. The court dismissed this claim because it was entirely derivative of the other claims. Since none of those underlying claims were properly pled, the unfair competition claim also failed.

Finally, the court dismissed plaintiff’s request for declaratory judgment, which asked for a declaration of ownership over its blog, domain, and Twitter account. The court explained that declaratory judgment is not a standalone legal claim and requires a viable underlying claim, which plaintiff had not presented.

Worldwide Media, Inc. v. Twitter, Inc., 2018 WL 5304852 (N.D. Cal., October 24, 2018)

Police not required to publicly disclose how they monitor social media accounts in investigations

In the same week that news broke about how Amazon is assisting police departments with facial recognition technology, here is a decision from a Pennsylvania court that held police do not have to turn over details to the public about how they monitor social media accounts in investigations.

The ACLU sought, under Pennsylvania’s Right-to-Know Law, a copy of the Pennsylvania State Police (PSP) policies and procedures for personnel using social media monitoring software. The PSP produced a redacted copy, and after the ACLU challenged the redactions, the state’s Office of Open Records ordered that the full document be provided. The PSP sought review in state court, and that court reversed the Office of Open Records order. The court found that disclosure of the record would be reasonably likely to threaten public safety or a public protection activity.

The court found in particular that disclosure would: (i) allow individuals to know when the PSP can monitor their activities using “open sources” and allow them to conceal their activities; (ii) expose the specific investigative method used; (iii) provide criminals with tactics the PSP uses when conducting undercover investigations; (iv) reveal how the PSP conducts its investigations; and (v) provide insight into how the PSP conducts an investigation and what sources and methods it would use. Additionally, the court credited the PSP’s affidavit which explained that disclosure would jeopardize the PSP’s ability to hire suitable candidates – troopers in particular – because disclosure would reveal the specific information that may be reviewed as part of a background check to determine whether candidates are suitable for employment.

Pennsylvania State Police v. American Civil Liberties Union of Pennsylvania, 2018 WL 2272597 (Commonwealth Court of Pennsylvania, May 18, 2018)

About the Author: Evan Brown is a Chicago technology and intellectual property attorney. Call Evan at (630) 362-7237, send email to ebrown [at] internetcases.com, or follow him on Twitter @internetcases. Read Evan’s other blog, UDRP Tracker, for information about domain name disputes.

Ninth Circuit upholds decision in favor of Twitter in terrorism case

Tamara Fields and Heather Creach, representing the estates of their late husbands and joined by Creach’s two minor children, sued Twitter, Inc. Plaintiffs alleged that the platform knowingly provided material support to ISIS, enabling the terrorist organization to carry out the 2015 attack in Jordan that killed their loved ones. The lawsuit sought damages under the Anti-Terrorism Act (ATA), which allows U.S. nationals injured by terrorism to seek compensation.

Plaintiffs alleged that defendant knowingly and recklessly provided ISIS with access to its platform, including tools such as direct messaging. Plaintiffs argued that these services allowed ISIS to spread propaganda, recruit followers, raise funds, and coordinate operations, ultimately contributing to the attack. Defendant moved to dismiss the case, arguing that plaintiffs failed to show a direct connection between its actions and the attack. Defendant also invoked Section 230 of the Communications Decency Act, which shields platforms from liability for content created by users.

The district court agreed with defendant and dismissed the case, finding that plaintiffs had not established proximate causation under the ATA. Plaintiffs appealed, but the Ninth Circuit upheld the dismissal. The appellate court ruled that plaintiffs failed to demonstrate a direct link between defendant’s alleged support and the attack. While plaintiffs showed that ISIS used defendant’s platform for various purposes, the court found no evidence connecting those activities to the specific attack in Jordan. The court emphasized that the ATA requires a clear, direct relationship between defendant’s conduct and the harm suffered.

The court did not address defendant’s arguments under Section 230, as the lack of proximate causation was sufficient to resolve the case. Accordingly, this decision helped clarify the legal limits of liability for platforms under the ATA and highlighted the challenges of holding technology companies accountable for how their services are used by third parties.

Three reasons why this case matters:

  • Sets the Bar for Proximate Cause: The ruling established that a direct causal link is essential for liability under the Anti-Terrorism Act.
  • Limits Platform Liability: The decision underscores the difficulty of holding online platforms accountable for misuse of their services by bad actors.
  • Reinforces Section 230’s Role: Although not directly addressed, the case highlights the protections Section 230 offers to tech companies.

Fields v. Twitter, Inc., 881 F.3d 739 (9th Cir. 2018)

Pastor’s First Amendment rights affected by parole conditions barring social media use

Plaintiff – a Baptist minister on parole in California – sued several parole officials, arguing that conditions placed on his parole violated plaintiff’s First Amendment rights. Among the contested restrictions was a prohibition on plaintiff accessing social media. Plaintiff claimed this restriction infringed on both his right to free speech and his right to freely exercise his religion. Plaintiff asked the court for a preliminary injunction to stop the enforcement of this condition. The court ultimately sided with plaintiff, ruling that the social media ban was unconstitutional.

The Free Speech challenge

Plaintiff argued that the parole condition prevented him from sharing his religious message online. As a preacher, he relied on platforms such as Facebook and Twitter to post sermons, connect with congregants who could not attend services, and expand his ministry by engaging with other pastors. The social media ban, plaintiff claimed, silenced him in a space essential for modern communication.

The court agreed, citing the U.S. Supreme Court’s ruling in Packingham v. North Carolina, which struck down a law barring registered sex offenders from using social media. In Packingham, the Court emphasized that social media platforms are akin to a modern public square and are vital for exercising free speech rights. Similarly, the court in this case found that the blanket prohibition on social media access imposed by the parole conditions was overly broad and not narrowly tailored to address specific risks or concerns.

The court noted that plaintiff’s past offenses, which occurred decades earlier, did not involve social media or the internet, undermining the justification for such a sweeping restriction. While public safety was a legitimate concern, the court emphasized that parole conditions must be carefully tailored to avoid unnecessary burdens on constitutional rights.

The Free Exercise challenge

Plaintiff also argued that the social media ban interfered with his ability to practice his religion. He asserted that posting sermons online and engaging with his congregation through social media were integral parts of his ministry. By prohibiting social media use, the parole condition restricted his ability to preach and share his faith beyond the physical boundaries of his church.

The court found this argument compelling. Religious practice is not confined to in-person settings, and plaintiff demonstrated that social media was a vital tool for his ministry. The court noted that barring a preacher from using a key means of sharing religious teachings imposed a unique burden on religious activity. Drawing on principles from prior Free Exercise Clause cases, the court held that the parole condition was not narrowly tailored to serve a compelling government interest, as it broadly prohibited access to all social media regardless of its religious purpose.

The court’s decision

The court granted plaintiff’s request for a preliminary injunction, concluding that he was likely to succeed on his claims under both the Free Speech Clause and the Free Exercise Clause of the First Amendment. The ruling allowed plaintiff to use social media during the litigation, while acknowledging the government’s legitimate interest in monitoring parolees. The court encouraged less restrictive alternatives, such as targeted supervision or limiting access to specific sites that posed risks, rather than a blanket ban.

Three reasons why this case matters:

  • Intersection of Speech and Religion: The case highlights how digital tools are essential for both free speech and the practice of religion, especially for individuals sharing messages with broader communities.
  • Limits on Blanket Restrictions: The ruling reaffirms that government-imposed conditions, such as parole rules, must be narrowly tailored to avoid infringing constitutional rights.
  • Modern Application of First Amendment Rights: By referencing Packingham, the court acknowledged the evolving role of social media as a platform for public discourse and religious expression.

Manning v. Powers, 281 F. Supp. 3d 953 (C.D. Cal. Dec. 13, 2017)

Ownership of domain names and social media accounts a key issue in case

Plaintiff sued defendant for unauthorized use of domain names and social media accounts. Plaintiff asked the court to declare its rights to these digital assets and to hold defendant accountable for trademark infringement and other claims. The court decided to allow some claims to proceed while dismissing others based on New York law’s treatment of intangible property.

Plaintiff, a luxury grooming and fragrance company operating under the name MiN New York, hired defendant, Mindy Yang, through her company Superego Management LLC, to manage marketing and social media efforts. After the business relationship ended, plaintiff alleged that defendant retained control of website domains and social media accounts. Defendant allegedly redirected these assets to promote its new business, even using plaintiff’s accounts to advertise its own events.

Defendant argued that the claims for replevin, conversion, and trespass should be dismissed because domain names and social media accounts are intangible and not considered property under New York law. Defendant also sought dismissal of the breach of fiduciary duty claim, asserting that as an independent contractor, it did not owe fiduciary obligations to plaintiff.

The court partially agreed with defendant. It dismissed the trespass claim, finding that plaintiff failed to show harm to the online assets themselves. However, the court allowed plaintiff’s claims for replevin and conversion to proceed, ruling that domain names and social media accounts can qualify as property under New York law. The court recognized that these assets were crucial to plaintiff’s business and plausibly alleged to have been wrongfully controlled by defendant.

On the claim for breach of fiduciary duty, the court ruled in plaintiff’s favor. The court held that plaintiff sufficiently alleged that defendant, by accessing sensitive accounts, using a corporate credit card, and managing key aspects of plaintiff’s marketing, owed fiduciary duties despite being an independent contractor. This established that defendant had a responsibility to act in plaintiff’s best interests.

Three reasons why this case matters:

  • Addresses rights to digital assets: The court’s decision tends to confirm that domain names and social media accounts can be considered property under New York law.
  • Defines fiduciary duties for contractors: The ruling clarifies that independent contractors can owe fiduciary obligations when entrusted with significant responsibilities.
  • Offers a blueprint for online disputes: This case sets important standards for businesses seeking to reclaim control over misappropriated digital assets.

Salonclick LLC v. Superego Management LLC, 2017 WL 239379 (S.D.N.Y. Jan. 18, 2017).

Twitter avoids liability in terrorism lawsuit

Update 1/31/2018: The Ninth Circuit upheld the court’s decision discussed below.

The families of two U.S. contractors killed in Jordan sued Twitter, accusing the platform of providing material support to the terrorist organization ISIS. Plaintiffs alleged that by allowing ISIS to create and maintain Twitter accounts, the company violated the Anti-Terrorism Act (ATA). Plaintiffs further claimed this support enabled ISIS to recruit, fundraise, and promote extremist propaganda, ultimately leading to the deaths of the contractors. The lawsuit aimed to hold Twitter responsible for the actions of ISIS and to penalize it for facilitating the organization’s digital presence.

Twitter moved to dismiss, arguing that the claims were barred under the Communications Decency Act (CDA) at 47 U.S.C. §230. Section 230 provides immunity to internet platforms from being treated as the publisher or speaker of content posted by third parties. The court had to decide whether Twitter’s role in allowing ISIS to use its platform made it liable for the consequences of ISIS’s acts.

The court dismissed the case, finding that Section 230 shielded Twitter from liability. The court ruled that plaintiffs’ claims attempted to treat Twitter as the publisher of content created by ISIS, which is precisely the type of liability Section 230 was designed to prevent. The court also concluded that plaintiffs failed to establish a plausible connection, or proximate causation, between Twitter’s actions and the deaths. Importantly, in the court’s view, plaintiffs could not demonstrate that ISIS’s use of Twitter directly caused the attack in Jordan or that the shooter had interacted with ISIS content on the platform.

The court further addressed plaintiffs’ argument regarding private messages sent through Twitter’s direct messaging feature. It ruled that these private communications were also protected under Section 230, as the law applies to all publishing activities, whether public or private.

Three reasons why this case matters:

  • Expanding the scope of Section 230: The case reinforced the broad immunity provided to tech companies under Section 230, including their handling of controversial or harmful content.
  • Clarifying proximate causation in ATA claims: The ruling highlighted the challenges of proving a direct causal link between a platform’s operations and acts of terrorism.
  • Balancing tech innovation and accountability: The decision underscored the ongoing debate about how to balance the benefits of open platforms with the need for accountability in preventing misuse.

Fields v. Twitter, Inc., 200 F. Supp. 3d 964 (N.D. Cal., August 10, 2016).

internetcases turns 10 years old today

Ten years ago today, somewhat on a whim, yet to fulfill a need I saw for discussion about the law of the internet in the “blogosphere” (a term we loved dearly back then), I launched internetcases.

What started as a one-page handwritten pamphlet that I would mimeograph in the basement of my one-bedroom apartment and then foist upon unsuspecting people on street corners has in ten years turned into a billion dollar conglomerate and network. internetcases is now translated into 7 languages daily and employs a staff of thousands to do the Lord’s work fighting Ebola and terrorism on 4 continents. Or it’s a WordPress install on some cheap GoDaddy space and I write when I can.

All seriousness aside, on this 10th anniversary, I want to sincerely thank my loyal readers and followers. Writing this blog has been the single most satisfying thing I’ve done in my professional life, and I am immensely grateful for the knowledge it has helped me develop and the opportunities for personal brand development it has given (speaking, press, media). But most of all, I’m grateful for the hundreds of people it has enabled me to connect with and get to know.

Blogging (and the web in general) has changed a lot in 10 years. And the legal issues arising from the internet continue to challenge us to stretch our thinking and amp up our powers of analysis. It’s great to have a platform on the web from which to share news and thoughts about the role that technology plays in shaping our legal rules and our culture.

Thanks all.

Court orders Twitter to identify anonymous users

Defamation plaintiffs’ need for requested information outweighed any impact on Doe defendants’ free speech right to tweet anonymously.

Plaintiff company and its CEO sued several unknown defendants who tweeted that plaintiff company encouraged domestic violence and misogyny and that the CEO visited prostitutes. The court allowed plaintiffs to serve subpoenas on Twitter to seek the identity of the unknown Twitter users. Twitter would not comply with the subpoenas unless and until the court ruled on whether the production of information would violate the users’ First Amendment rights.

The court ruled in favor of the plaintiffs and ordered Twitter to turn over identifying information about the unknown users. In reaching this decision, the court applied the Ninth Circuit analysis for unmasking anonymous internet speakers set out in Perry v. Schwarzenegger, 591 F.3d 1126 (9th Cir. 2009). The court found that the requested discovery raised the possibility of “arguable first amendment infringement,” so it went on to balance the aggrieved plaintiffs’ interests against the anonymous defendants’ free speech rights.

The Perry balancing test places a burden on the party seeking discovery to show that the information sought is rationally related to a compelling governmental interest and that the requested discovery is the least restrictive means of obtaining the desired information.

In this case, the court found that the subpoenas were narrowly tailored to plaintiffs’ need to uncover the identities of the anonymous defendants so that plaintiffs could serve process. It also found that the “nature” of defendants’ speech weighed in favor of enforcing the subpoena. The challenged speech went “beyond criticism into what appear[ed] to be pure defamation, ostensibly unrelated to normal corporate activity.”

Music Group Macao Commercial Offshore Ltd. v. Does I-IX, 2015 WL 75073 (N.D. Cal., January 6, 2015).
