
Software reseller not entitled to preliminary injunction to protect customer relationships

Plaintiff CD appointed defendant SST to be the exclusive reseller to certain customers of CD’s software development platform. CD sued SST for breach, and SST filed counterclaims for breach of contract and fraudulent inducement. SST also sought a preliminary injunction against CD, asking the court to prohibit CD from unilaterally terminating the reseller agreement.

SST asserted, among other things, that it would suffer irreparable harm from this termination, citing potential loss of solicited clients and reputational damage. CD argued, however, that these asserted harms could be remedied monetarily, and thus did not qualify as irreparable.

The court agreed with CD, finding SST’s arguments regarding reputational damage and loss of client relationships to be speculative and unsupported by concrete evidence. As such, these claims did not meet the stringent criteria for irreparable harm, which requires a clear, immediate threat of injury that monetary compensation could not redress.

Further undermining SST’s claim of irreparable harm was the notion that any potential financial losses due to CD’s actions, including the costs associated with resolving issues with target accounts or transitioning to alternative software solutions, were quantifiable and thus recoverable in monetary terms. The court noted that SST’s reluctance to make additional payments to CD for resolving software access issues did not constitute irreparable harm, as those could be recouped in resolution of the contract dispute. Moreover, the court pointed out that SST’s concerns about CD not restoring access post-payment were speculative and lacked evidentiary support, given the record showing ongoing negotiations and concrete offers from CD.

Citizen Developer, LLC v. System Soft Tech., Inc., 2024 WL 554140 (M.D. Pa. February 12, 2024)


Kids Online Safety Act: Quick Facts

What is KOSA?

Senators Blackburn and Blumenthal have introduced a new version of the Kids Online Safety Act (KOSA), which seeks to protect minors from online harms by requiring social media companies to prioritize children’s safety in product design and to offer more robust parental control tools. Garnering bipartisan support with 62 Senate cosponsors in the wake of a significant hearing with Big Tech CEOs, the bill emphasizes accountability for tech companies, transparency in algorithms, and enhanced safety measures. The legislation has been refined following extensive discussions with various stakeholders, including tech companies, advocacy groups, and parents, to ensure its effectiveness and alignment with the goal of safeguarding young internet users from bullying, harassment, and other online risks.

Critics of the bill argue that KOSA, despite amendments, remains a threat to constitutional rights, effectively censoring online content and empowering state officials to target disfavored services and speech. See, e.g., the EFF’s blog post about the bill. They contend that KOSA mandates extensive filtering and blocking of legal speech across numerous websites, apps, and platforms, likely leading to age verification requirements. Critics also raise concerns about the potential harm to minors’ access to important information, particularly for groups such as LGBTQ+ youth, those seeking health and reproductive information, and activists. The modifications in the 2024 version, including the removal of state attorneys general’s authority to sue for non-compliance with the “duty of care” provision, are seen as insufficient to address the core issues of free speech and censorship. Critics urge opposition to KOSA, highlighting its impact not just on minors but on all internet users, who could be subjected to a “second-class internet” due to restricted access to information.

What does the proposed law actually say? Below are some key facts about the contents of the legislation:

Who would be subject to the law:

The statute would place various obligations on “covered platforms”:

  • A “covered platform” encompasses online platforms, video games, messaging applications, and video streaming services accessible via the internet and used or likely to be used by minors.
  • Exclusions from the definition of “covered platform” include common carrier services, broadband internet access services, email services, specific teleconferencing or video conferencing services, and direct wireless messaging services not linked to an online platform.
  • Entities not for profit, educational institutions, libraries, news or sports news websites/apps with specific criteria, business-to-business software, and cloud services not functioning as online platforms are also excluded.
  • Virtual private networks and similar services that solely route internet traffic are not considered “covered platforms.”

Design and Implementation Requirements

  • Covered platforms are required to exercise reasonable care in designing and implementing features to prevent and mitigate harms to minors, including mental health disorders, addiction-like behaviors, physical violence, bullying, harassment, sexual exploitation, and certain types of harmful marketing.
  • The prevention of harm includes addressing issues such as anxiety, depression, eating disorders, substance abuse, suicidal behaviors, online bullying, sexual abuse, and the promotion of narcotics, tobacco, gambling, and alcohol to minors.
  • Despite these protections, platforms are not required to block minors from intentionally seeking content or from accessing resources aimed at preventing or mitigating these harms, including providing evidence-informed information and clinical resources.

Required Safeguards for Minors

  • Covered platforms must provide minors with safeguards to limit communication from others, restrict access to their personal data, control compulsive platform usage features, manage personalized recommendation systems, and protect their geolocation data. (One has to consider whether these would pass First Amendment scrutiny, particularly in light of recent decisions such as the one in NetChoice v. Yost).
  • Platforms are required to offer options for minors to delete their accounts and personal data, and limit their time on the platform, with the most protective privacy and safety settings enabled by default for minors.
  • Parental tools must be accessible and easy-to-use, allowing parents to manage their child’s privacy, account settings, and platform usage, including the ability to restrict purchases and view and limit time spent on the platform.
  • A reporting mechanism for harms to minors must be established, with platforms required to respond substantively within specified time frames, and immediate action required for reports involving imminent threats to minors’ safety.
  • Advertising of illegal products such as narcotics, tobacco, gambling, and alcohol to minors is strictly prohibited.
  • Safeguards and parental tools must be clear, accessible, and designed without “dark patterns” that could impair user autonomy or choice, with considerations for uninterrupted gameplay and offline device or account updates.

Disclosure Requirements

  • Before a minor registers or purchases on a platform, clear notices about data policies, safeguards for minors, and risks associated with certain features must be provided.
  • Platforms must inform parents about safeguards and parental tools for their children and obtain verifiable parental consent before a child uses the platform.
  • Platforms may consolidate notice and consent processes with existing obligations under the Children’s Online Privacy Protection Act (COPPA). (Like COPPA, a “child” under the act is one under 13 years of age.)
  • Platforms using personalized recommendation systems must clearly explain their operation, including data usage, and offer opt-out options for minors or their parents.
  • Advertising targeted at minors must be clearly labeled, explaining why ads are shown to them and distinguishing between content and commercial endorsements.
  • Platforms are required to provide accessible information to minors and parents about data policies and access to safeguards, ensuring resources are available in relevant languages.

Reporting Requirements

  • Covered platforms must annually publish a report, based on an independent audit, detailing the risks of harm to minors and the effectiveness of prevention and mitigation measures. (Providing these audit services is no doubt a good business opportunity for firms with such capabilities; unfortunately this will increase the cost of operating a covered platform.)
  • This requirement applies to platforms with over 10 million active monthly users in the U.S. that primarily host user-generated content and discussions, such as social media and virtual environments.
  • Reports must assess platform accessibility by minors, describe commercial interests related to minor usage, and provide data on minor users’ engagement, including time spent and content accessed.
  • The reports should identify foreseeable risks of harm to minors, evaluate the platform’s design features that could affect minor usage, and detail the personal data of minors collected or processed.
  • Platforms are required to describe safeguards and parental tools, interventions for potential harms, and plans for addressing identified risks and circumvention of safeguards.
  • Independent auditors conducting the risk assessment must consult with parents, youth experts, and consider research and industry best practices, ensuring privacy safeguards are in place for the reported data.

Keep an eye out to see if Congress passes this legislation in the spirit of “for the children.”

How did Ohio’s efforts to regulate children’s access to social media violate the Constitution?


Ohio passed a law called the Parental Notification by Social Media Operators Act which sought to require certain categories of online services to obtain parental consent before allowing any unemancipated child under the age of sixteen to register or create accounts with the service.

Plaintiff internet trade association – representing platforms including Google, Meta, X, Nextdoor, and Pinterest – sought a preliminary injunction that would prohibit the State’s attorney general from enforcing the law. Finding the law to be unconstitutional, the court granted the preliminary injunction.

Likelihood of success on the merits: First Amendment Free Speech

The court found that plaintiff was likely to succeed on its constitutional claims. Rejecting the State’s argument that the law sought only to regulate commerce (i.e., the contracts governing use of social media platforms) and not speech, it held that the statute was a restriction on speech, implicating the First Amendment. It held that the law was a content-based restriction because the social media features the statute singled out in defining which platforms were subject to the law – e.g., the ability to interact socially with others – were “inextricable from the content produced by those features.” And the law violated the rights of minors living in Ohio because it infringed on minors’ rights to both access and produce First Amendment protected speech.

Given these attributes of the law, the court applied strict scrutiny to the statute. The court held that the statute failed to pass strict scrutiny for several reasons. First, the Act was not narrowly tailored to address the specific harms identified by the State, such as protecting minors from oppressive contract terms with social media platforms. Instead of targeting the contract terms directly, the Act broadly regulated access to and dissemination of speech, making it under-inclusive in addressing the specific issue of contract terms and over-inclusive by imposing sweeping restrictions on speech. Second, while the State aimed to protect minors from mental health issues and sexual predation related to social media use, the Act’s approach of requiring parental consent for minors under sixteen to access all covered websites was an untargeted and blunt instrument, failing to directly address the nuanced risks posed by specific features of social media platforms. Finally, in attempting to bolster parental authority, the Act mirrored previously rejected arguments that imposing speech restrictions, subject to parental veto, was a legitimate means of aiding parental control, making it over-inclusive by enforcing broad speech restrictions rather than focusing on the interests of genuinely concerned parents.

Likelihood of success on the merits: Fourteenth Amendment Due Process

The statute violated the Due Process Clause of the Fourteenth Amendment because its vague language failed to provide clear notice to operators of online services about the conduct that was forbidden or required. The Act’s broad and undefined criteria for determining applicable websites, such as targeting children or being reasonably anticipated to be accessed by children, left operators uncertain about their legal obligations. The inclusion of an eleven-factor list intended to clarify applicability, which contained vague and subjective elements like “design elements” and “language,” further contributed to the lack of precise guidance. The Act’s exception for “established” and “widely recognized” media outlets, without clear definitions for these terms, introduced additional ambiguity, risking arbitrary enforcement. Despite the State highlighting less vague aspects of the Act and drawing parallels with the federal Children’s Online Privacy Protection Act of 1998 (COPPA), these did not alleviate the overall vagueness, particularly given the Act’s broad and subjective exceptions.

Irreparable harm and balancing of the equities

The court found that plaintiff’s members would face irreparable harm through non-recoverable compliance costs and the potential for civil liability if the Act were enforced, as these monetary harms could not be fully compensated. Moreover, the Act’s infringement on constitutional rights, including those protected under the First Amendment, constituted irreparable harm since the loss of such freedoms, even for short durations, is considered significant.

The balance of equities and the public interest did not favor enforcing a statute that potentially violated constitutional principles, as the enforcement of unconstitutional laws serves no legitimate public interest. The argument that the Act aimed to protect minors did not outweigh the importance of upholding constitutional rights, especially when the statute’s measures were not narrowly tailored to address specific harms. Therefore, the potential harm to plaintiff’s members and the broader implications for constitutional rights underscored the lack of public interest in enforcing this statute.

NetChoice, LLC v. Yost, 2024 WL 55904 (S.D. Ohio February 12, 2024)


Using AI generated fake cases in court brief gets pro se litigant fined $10K


Plaintiff sued defendant and won on summary judgment. Defendant sought review with the Missouri Court of Appeals. On appeal, the court dismissed the appeal and awarded damages to plaintiff/respondent because of the frivolousness of the appeal.

“Due to numerous fatal briefing deficiencies under the Rules of Appellate Procedure that prevent us from engaging in meaningful review, including the submission of fictitious cases generated by [AI], we dismiss the appeal.” With this, the court began its roast of the pro se appellant’s conduct.

The court detailed appellant’s numerous violations of the applicable Rules of Appellate Procedure. The appellate brief was unsigned, had no required appendix, and contained an inadequate statement of facts. It also failed to provide points relied on and a detailed table of cases, statutes, and other authorities.

But the court made the biggest deal about how “the overwhelming majority of the [brief’s] citations are not only inaccurate but entirely fictitious.” Only two out of the twenty-four case citations in the brief were genuine.

Though appellant apologized for the fake cases in his reply brief, the court was not moved, because “the deed had been done.” It characterized the conduct as “a flagrant violation of the duties of candor” appellant owed to the court, and an “abuse of the judicial system.”

Because appellant “substantially failed to comply with court rules,” the court dismissed the appeal and ordered appellant to pay $10,000 in damages for filing a frivolous appeal.

Kruse v. Karlen, — S.W.3d —, 2024 WL 559497 (Mo. Ct. App. February 13, 2024)


GenAI and copyright: Court dismisses almost all claims against OpenAI in authors’ suit


Plaintiff authors sued large language model provider OpenAI and related entities for copyright infringement, alleging that plaintiffs’ books were used to train ChatGPT. Plaintiffs asserted six causes of action against various OpenAI entities: (1) direct copyright infringement, (2) vicarious infringement, (3) violation of Section 1202(b) of the Digital Millennium Copyright Act (“DMCA”), (4) unfair competition under Cal. Bus. & Prof. Code Section 17200, (5) negligence, and (6) unjust enrichment.

OpenAI moved to dismiss all of these claims except the direct copyright infringement claim. The court granted the motion as to almost all of them.

Vicarious liability claim dismissed

The court dismissed the claim for vicarious liability because plaintiffs did not successfully plead that direct copying occurs from use of the software. Citing A&M Recs., Inc. v. Napster, Inc., 239 F.3d 1004, 1013 n.2 (9th Cir. 2001), aff’d, 284 F.3d 1091 (2002), the court noted that “[s]econdary liability for copyright infringement does not exist in the absence of direct infringement by a third party.” More specifically, the court dismissed the claim because plaintiffs had not alleged direct copying when the outputs are generated, nor had they alleged “substantial similarity” between the ChatGPT outputs and plaintiffs’ works.

DMCA claims dismissed

The DMCA – at 17 U.S.C. 1202(b) – requires a defendant’s knowledge or “reasonable grounds to know” that the defendant’s removal of copyright management information (“CMI”) would “induce, enable, facilitate, or conceal an infringement.” Plaintiffs alleged that “by design” OpenAI removed CMI from the copyrighted books during the training process. But the court found that plaintiffs provided no factual support for that assertion. Moreover, the court found that even if plaintiffs had successfully asserted such facts, they had not provided any facts showing how the omitted CMI would induce, enable, facilitate, or conceal infringement.

The other portion of the DMCA relevant to the lawsuit – Section 1202(b)(3) – prohibits the distribution of a plaintiff’s work without the plaintiff’s CMI included. In rejecting plaintiffs’ assertions that defendants violated this provision, the court looked to the plain language of the statute. It noted that liability requires distributing the original “works” or “copies of [the] works.” Plaintiffs had not alleged that defendants distributed their books or copies of their books. Instead, they alleged that “every output from the OpenAI Language Models is an infringing derivative work” without providing any indication as to what such outputs entail – i.e., whether they were the copyrighted books or copies of the books.

Unfair competition claim survived

Plaintiffs asserted that defendants had violated California’s unfair competition statute based on “unlawful,” “fraudulent,” and “unfair” practices. The unlawful and fraudulent practices theories relied on the DMCA claims, which the court had already dismissed, so the unfair competition claim could not move forward on those grounds. But the court did find that plaintiffs had alleged sufficient facts to support the claim that it was “unfair” to use plaintiffs’ works without compensation to train the ChatGPT model.

Negligence claim dismissed

Plaintiffs alleged that defendants owed them a duty of care based on the control of plaintiffs’ information in their possession and breached their duty by “negligently, carelessly, and recklessly collecting, maintaining, and controlling systems – including ChatGPT – which are trained on Plaintiffs’ [copyrighted] works.” The court dismissed this claim, finding that there were insufficient facts showing that defendants owed plaintiffs a duty in this situation.

Unjust enrichment claim dismissed

Plaintiffs alleged that defendants were unjustly enriched by using plaintiffs’ copyright protected works to train the large language model. The court dismissed this claim because plaintiffs had not alleged sufficient facts to show that they had conferred any benefit on OpenAI through “mistake, fraud, or coercion.”

Tremblay v. OpenAI, Inc., 2024 WL 557720 (N.D. Cal. February 12, 2024)


When can you serve a lawsuit by email?


One of the biggest challenges brand owners face in enforcing their intellectual property rights online is tracking down the infringer – often located in a foreign country – so that a lawsuit can be served on the infringer. Generally, for due process reasons, the Federal Rules of Civil Procedure require that a complaint and summons be served personally – that is, by handing the papers to the person directly. But there are exceptions, particularly in situations involving overseas defendants, in which “alternative service” may be available. A recent case from federal court in Washington provides an example: Amazon and certain sellers were permitted to serve a lawsuit on overseas defendants via email.

Learning about the counterfeiters

Plaintiffs sued defendants, accusing defendants of selling counterfeit goods on Amazon. Plaintiffs alleged that defendants resided in Ukraine. Even after working with a private investigator and seeking third party discovery from defendants’ virtual bank account providers, plaintiffs could not find any valid physical addresses for defendants. So plaintiffs asked the court to permit service of the complaint and summons via defendants’ email addresses registered with their Amazon selling accounts. They believed those email addresses were valid because test messages generated no error notices or bounce backs indicating that the messages failed to deliver.

What is required

The court looked at Federal Rule of Civil Procedure 4(f), which allows for service of process on individuals in foreign countries through several methods, including (1) internationally agreed methods such as those authorized by the Hague Convention, (2) according to the foreign country’s law if no international agreement exists, or (3) by other means not prohibited by international agreements, as the court orders. A plaintiff must show that the specific circumstances require court intervention. Furthermore, any method of service must align with constitutional due process, meaning it must be designed to effectively inform interested parties of the ongoing action and give them a chance to object, ensuring fairness and the opportunity for defense.

The court said okay

The court found that plaintiffs had shown court intervention was necessary because plaintiffs could not find valid physical addresses but could show that the email addresses apparently were valid. As for the Hague Convention, no method it provided was available without valid physical addresses. Moreover, the court observed that whether or not the Hague Convention applied, email service on individuals in Ukraine was not prohibited by the Convention nor by any other international agreement.

And the court found email service comported with constitutional due process. Defendants conducted business through these email accounts and tests confirmed their functionality. Although defendants’ Amazon accounts were blocked, evidence suggested these email addresses were still active. The court thus concluded that email service met due process standards by being “reasonably calculated” to notify defendants, allowing them the chance to present objections.

Amazon.com Inc. v. Ananchenko, 2024 WL 492283 (W.D. Wash. February 7, 2024)


DMCA subpoena to “mere conduit” ISP was improper


Because the ISP acted as a conduit for the transmission of material that allegedly infringed copyright, it fell under the DMCA safe harbor in 17 U.S.C. § 512(a), and therefore § 512(h) did not authorize the subpoena issued in the case.

Some copyright owners needed to find out who was anonymously infringing their works, so they issued a subpoena to the users’ internet service provider (Cox Communications) under the Digital Millennium Copyright Act (“DMCA”), 17 U.S.C. § 512(h). After the ISP notified one of the anonymous users – referred to as John Doe in the case – of the subpoena, Doe filed a motion to quash. The magistrate judge recommended the subpoena be quashed, and the district judge accepted that recommendation.

Contours of the Safe Harbor

The court explained how the DMCA enables copyright owners to send subpoenas for the identification of alleged infringers, contingent upon providing a notification that meets specific criteria outlined in the DMCA. However, the DMCA also establishes safe harbors for Internet Service Providers (ISPs), notably exempting those acting as “mere conduits” of information, like in peer-to-peer (P2P) filesharing, from liability and thus from the obligations of the notice and takedown provisions found in other parts of the DMCA. This distinction has led courts, including the Eighth and D.C. Circuits, to conclude that subpoenas under § 512(h) cannot be used to compel ISPs, which do not store or directly handle the infringing material but merely transmit it, to reveal the identities of P2P infringers.

Who is in?

The copyright owners raised a number of objections to quashing the subpoena. Their primary concerns were with the court’s interpretation of the ISP’s role as merely a “conduit” in the alleged infringement, arguing that the ISP’s assignment of IP addresses constituted a form of linking to infringing material, thus meeting the DMCA’s notice requirements. They also disputed the court’s conclusion that the material in question could not be removed or access disabled by the ISP due to its nature of transmission, and they took issue with certain factual conclusions drawn without input from the parties involved. Additionally, the petitioners objected to the directive to return or destroy any information obtained through the subpoena, requesting that such measures apply only to the information related to the specific subscriber John Doe.

Conduits are.

Notwithstanding these various arguments, the court upheld the magistrate judge’s recommendation, agreeing that the subpoena issued to the ISP was invalid due to non-compliance with the notice provisions required by 17 U.S.C. § 512(c)(3)(A). The petitioners’ arguments, suggesting that the ISP’s assignment of IP addresses to users constituted a form of linking to infringing material under § 512(d), were rejected. The court clarified that in the context of P2P file sharing, IP addresses do not serve as “information location tools” as defined under § 512(d) and that the ISP’s role was limited to providing internet connectivity, aligning with the “mere conduit” provision under § 512(a). The court also dismissed the petitioners’ suggestion that the ISP could disable access to infringing material by null routing, emphasizing the distinction between disabling access to material and terminating a subscriber’s account, with the latter being a more severe action than what the DMCA authorizes. The court suggested that the petitioners could pursue the infringer’s identity through other legal avenues, such as a John Doe lawsuit, despite potential challenges highlighted by the petitioners.

In re: Subpoena of Internet Subscribers of Cox Communications, LLC and Coxcom LLC, 2024 WL 341069 (D. Haw. January 30, 2024)

 

Not fair use after all – Fourth Circuit reverses lower court’s decision in online copyright infringement case


Plaintiff photographer sued defendant news website for copyright infringement over a photo of Ted Nugent that defendant used in an online article. The lower court granted summary judgment for defendant, finding that its use of the photo was fair use. Plaintiff sought review with the Fourth Circuit. On review, the court reversed, applying the factors set out in 17 U.S.C. § 107 in finding the use of the photo was not fair use.

For the first fair use factor, the court found that defendant’s use of the photo was not transformative and was commercial. This caused the factor to weigh against fair use. It looked to the recent Supreme Court case of Andy Warhol Found. for the Visual Arts, Inc. v. Goldsmith, 598 U.S. 508 (2023). The court noted that in a manner similar to what happened in the Warhol case, plaintiff took the photo of Ted Nugent to capture a “portrait” of him, and defendant used the photo to “depict” the musician. Accordingly, the two uses “shared substantially the same purpose.” The court actually found that defendant in this case had “less of a case” for transformative use than the Andy Warhol Foundation did, because unlike in Warhol, defendant did not alter or add new expression beyond cropping negative space. And though the article in which the photo appeared did not generate much revenue for defendant, the relevant question to determine commercial use was whether defendant stood to profit, not whether it actually did profit.

Though the district court did not address the second and third fair use factors, the appellate court looked at those factors, and found they did not support fair use. Looking at the nature of plaintiff’s photo, the court observed that plaintiff made several creative choices in capturing the photo, including angle of photography, exposure, composition, framing, location, and exact moment of creation. As for the third factor, the court found it to weigh against fair use because defendant copied a significant percentage of the photo, only cropping out negative space while keeping the photo’s expressive features.

Finally, the court also found that the fourth fair use factor weighed against fair use. The court concluded that if defendant’s unauthorized use became “uninterrupted and widespread,” it would adversely affect the potential market for the photo. It emphasized the notion of the potential market. In this case, there was at least one instance where plaintiff had allowed use of his photo for free, and he also made it available for free subject to a Creative Commons license, requiring attribution in return. But that did not change the outcome, given that he customarily licensed it for either money or attribution.

Philpot v. Independent Journal Review, — F.4th —, 2024 WL 442066 (4th Cir. February 6, 2024)


Ted Nugent photo by Republic Country Club, under this Creative Commons license. Changes made: frame expanded, image color modified and background enhanced using generative AI.

Apple’s civil hacking lawsuit against software maker moves forward


Apple sued defendant NSO, accusing it of, among other things, violating the Computer Fraud and Abuse Act, 18 U.S.C. § 1030 (the “CFAA”). The case dealt with NSO’s creation and distribution of “Pegasus,” a piece of software Apple claimed was capable of covertly extracting information from virtually any mobile device.

Apple alleged NSO fabricated Apple IDs to gain access to Apple’s servers and launch attacks on consumer devices through a method known as “FORCEDENTRY.” This exploit, characterized as a “zero-click” attack, allowed NSO or its clients to infiltrate devices without the device owners’ knowledge or action. The repercussions for Apple were significant, as the company reportedly faced considerable expenses and damages in its efforts to counteract NSO’s activities. These efforts included the development and deployment of security measures and patches, as well as increased legal exposure.

Defendant moved to dismiss the claims. The court denied the motion.

In finding that Apple had properly pled the CFAA claim, the court noted that Apple’s allegations aligned with the anti-hacking intent of the CFAA. Despite NSO’s contention that the devices in question were not owned by Apple and thus not protected under the CFAA, the court observed that Apple’s claims extended to the exploitation of its own servers and services, fitting within the statute’s scope.

Apple Inc. v. NSO Group Technologies Ltd., 2024 WL 251448 (N.D. Cal. January 23, 2024)

 

Online retailer’s browsewrap agreement was not enforceable


Plaintiff sued defendant Urban Outfitters under California law over the way the retailer routed messages sent through its website. Defendant moved to compel arbitration, arguing that the terms and conditions on its website required plaintiff to submit to arbitration instead of going to court. The court denied the motion.

The key issue in the case was whether plaintiff, by completing her purchases on defendant’s website, was sufficiently notified of and thus agreed to the arbitration agreement embedded via hyperlinks on the checkout page. Defendant maintained that the language and placement of the hyperlinks on the order page were adequate to inform plaintiff of the arbitration terms, which she implicitly agreed to by finalizing her purchases. Plaintiff argued that the hyperlinks were not conspicuous enough to alert her to the arbitration terms, thus negating her consent to them.

The court looked at the nature of the online agreement and whether plaintiff had adequate notice of the arbitration agreement, thereby consenting to its terms. The court’s discussion touched upon the differences between “clickwrap” and “browsewrap” agreements, emphasizing that the latter, which defendant’s website purportedly used, often fails to meet the threshold for constructive notice due to the lack of explicit acknowledgment required from the user.

The court examined the specifics of what constitutes sufficient notice, pointing out that for a user to be on inquiry notice, the terms must be presented in a way that a reasonable person would notice and understand that their actions (such as clicking a button) indicate agreement to those terms. The court found that defendant’s method of presenting the arbitration terms – through hyperlinks in small, grey font that were not sufficiently set apart from surrounding text – did not meet this standard.

Rocha v. Urban Outfitters, 2024 WL 393486 (N.D. Cal. February 1, 2024)

