
FAFO in federal court: Hacker who bragged in Hulu documentary slammed with liability under federal law


Plaintiff sued defendant for unlawfully accessing plaintiff’s email account and publishing more than sixty private emails on social media. Defendant had repeatedly claimed credit for the hack in a Hulu documentary, on social media, and in podcast appearances. Plaintiff brought several claims in federal court, including claims under the Stored Communications Act, the Computer Fraud and Abuse Act, and invasion of privacy under Tennessee common law.

Plaintiff asked the court to enter summary judgment on liability, arguing that defendant’s own public statements confirmed every essential element of the Stored Communications Act and invasion of privacy tort claims.

The court ruled that defendant was liable under the Stored Communications Act and for public disclosure of private facts. It denied summary judgment on the Computer Fraud and Abuse Act claim because plaintiff had not presented sufficient evidence of economic loss; that issue remains open for trial.

The court ruled this way because it found that defendant gave repeated, detailed accounts of how he accessed plaintiff’s email account, changed the password, and took control. Plaintiff submitted additional evidence showing loss of access to the account during the same period. The court held that this conduct met the elements of unauthorized access under the Stored Communications Act and that the publication of dozens of personal emails, including intimate messages and communications from family members, qualified as highly offensive under Tennessee law.

McKamey v. Yerace, No. 3:21-CV-00132, 2024 WL 7147987 (M.D. Tenn. January 15, 2026)

Did anti-ICE church protesters in Minnesota violate federal law?

A brazen and disruptive intrusion by anti-ICE activists during a Sunday worship service at Cities Church in St. Paul, Minnesota, has rightly drawn national outrage and sparked a federal investigation into potential violations of civil rights laws. The protesters, organized by various left-wing groups, stormed the sanctuary, chanting slogans and effectively shutting down the service. This understandably left congregants, including children, shaken and distressed. The U.S. Department of Justice’s Civil Rights Division is now examining whether these actions violated the federal Freedom of Access to Clinic Entrances Act (FACE Act).

At first glance, applying the FACE Act (passed in 1994 to combat violent blockades and threats at abortion clinics) might seem unexpected. But the law’s text is clear and broad. Congress deliberately extended protections to places of religious worship, recognizing that the same aggressive tactics used against medical facilities could be weaponized against houses of worship. The law exists to prevent raucous interference and other hostilities toward religious exercise.

What the FACE Act actually prohibits

Under 18 U.S.C. § 248(a)(2), it is unlawful for anyone to:

by force or threat of force or by physical obstruction, intentionally injure, intimidate or interfere with or attempt to injure, intimidate or interfere with any person lawfully exercising or seeking to exercise the First Amendment right of religious freedom at a place of religious worship.

This provision exists precisely because disruptions like the one in Minnesota threaten the fundamental right to peaceful worship. The activists did not simply express disagreement outside. They invaded the sanctuary mid-service, chanting demands and accusations. They turned a sacred space into a stage for their political theater.

Why the definitions are critical (and why this conduct looks troubling)

The FACE Act does not criminalize all protest or even offensive speech. It targets specific, harmful conduct with narrow definitions. Here are the key definitions for this situation:

  • To “interfere with” means restricting a person’s freedom of movement.
  • To “intimidate” means to place a person in reasonable apprehension of bodily harm to him- or herself or to another.
  • A “physical obstruction” means rendering ingress to or egress from a place of religious worship impassable, or unreasonably difficult or hazardous.

Reports and video evidence suggest the protesters crowded the sanctuary, positioned themselves in the middle of it during the sermon, and left congregants visibly distressed. This was not peaceful picketing. It was a calculated invasion that terrified families and halted a Christian service. Given the current political climate and instances of violence, it seems the government should be able to prove that worshippers – who were clearly exercising a First Amendment right – were placed in reasonable apprehension that either they or their loved ones would be harmed.

Federal enforcement, civil options, and state inaction

The FACE Act allows not only criminal prosecution but also civil suits, including by state attorneys general. Minnesota’s AG could theoretically act to defend residents’ religious freedom. However, given the state’s political leadership (often sympathetic to disruptive protests), we have no reason to hold our breath awaiting enforcement from local or state officials.

Infringement case against OpenAI failed because there was no copyright registration


Thinking about suing an AI company for copyright infringement? Do not overlook the basics. Before any court will consider the merits of an infringement claim, the plaintiff needs to have an actual copyright registration in hand, not just a pending application.

That notion was confirmed in a recent unsuccessful lawsuit against OpenAI in federal court in California. Plaintiff sued OpenAI, alleging that OpenAI infringed the copyright in several artificial intelligence models and content that plaintiff claimed to have developed, and that OpenAI then destroyed evidence to conceal the alleged infringement.

Plaintiff asked the court to issue a temporary restraining order preventing defendant from deleting or altering data and documents related to the alleged infringement while the case proceeded. The court denied the request for a temporary restraining order and dismissed the complaint.

The court ruled this way because the Copyright Act bars any civil infringement action until copyright registration has been made, and courts interpret that requirement to mean the Copyright Office must have issued a registration certificate, not merely received an application. This left plaintiff unable to show a likelihood of success on the merits.

Gholami v. OpenAI, Inc., No. 26-cv-00174, 2026 WL 61359 (N.D. Cal., January 8, 2026).


Does the DMCA safe harbor cover infringing images in an email?


Plaintiff photographer sued Pinterest for copyright infringement, alleging Pinterest displayed his and other photographers’ copyrighted images in notifications sent outside of the Pinterest website. Pinterest moved for summary judgment, arguing it was protected under the safe harbor provisions of Section 512(c) of the Digital Millennium Copyright Act (“DMCA”). The court granted Pinterest’s motion and dismissed the case.

Pinterest is a familiar and massive social media platform where individuals upload and share image-based “Pins” that function as visual bookmarks. The platform displays Pins in personalized, algorithmically curated feeds that contain advertisements labeled as “promoted.” Pinterest also delivers content through notifications, including emails, in-app alerts, and push notifications, which contain hyperlinks that trigger the display of images hosted on its servers. One such notification that plaintiff received included his copyrighted photograph, prompting him to file suit six days later.

The court found that Pinterest’s actions fell within the DMCA’s Section 512(c) safe harbor, which shields service providers from copyright liability for content stored at the direction of users. Because Pinterest raised this as an affirmative defense, it had the burden to prove every element of the safe harbor criteria, and the court concluded it had met both the statutory threshold and all required conditions.

Statutory threshold requirements under the DMCA

To qualify for the DMCA safe harbor, Pinterest had to meet several threshold statutory requirements that are found in Sections 512(c) and (i): it had to be a service provider, maintain a designated agent, implement a repeat infringer policy, and accommodate standard technical measures. The court found that Pinterest satisfied all four. As “one of the largest social media platforms in the world,” it qualified as a “service provider” as defined by the statute. The evidence showed that Pinterest maintained a registered agent with the Copyright Office and that it enforced a strike-based policy for repeat infringers. And the court found that Pinterest did not interfere with any recognized standard technical measures that plaintiff implemented with his works. (Plaintiff had asserted that he embedded certain metadata in his photographs, but he did not argue that this metadata qualified as a “standard technical measure” under the DMCA, nor did he claim that Pinterest interfered with it — in fact, he alleged that Pinterest preserved the metadata on its servers.)

How Pinterest met the required conditions

After finding that Pinterest satisfied the DMCA’s threshold requirements, the court turned to whether Pinterest’s practice of sending copyright-protected images in off-platform notifications was protected under Section 512(c). To qualify, Pinterest had to show three things:

  • the alleged infringement occurred due to user-directed storage;
  • Pinterest lacked actual or red flag knowledge of the infringement; and
  • Pinterest either had no right and ability to control the activity or did not receive a direct financial benefit from it.

The court evaluated each element in turn.

By reason of storage at the direction of a user

The court concluded that Pinterest met the first requirement for DMCA safe harbor protection: the alleged infringement occurred “by reason of the storage at the direction of a user.” It emphasized that the image at issue was not embedded in the notification itself but was instead hosted on Pinterest’s servers and accessed via a hyperlink contained in the notification. When a user opened the message, their software triggered a request to Pinterest’s server to retrieve and display the image, just as it would when accessing content directly through the platform. Because this method merely facilitated access to user-uploaded content without altering it, the court found the display was within the statutory definition.
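To make that mechanism concrete, the sketch below (in Python, using only the standard library’s email module) contrasts a notification that merely references a server-hosted image by hyperlink with one that embeds the image bytes in the message itself. The addresses, the image URL, and the image data are hypothetical placeholders for illustration; nothing here comes from the case record or from Pinterest’s actual systems.

    from email.message import EmailMessage

    # Hypothetical values for illustration only; not taken from the case record.
    HOSTED_IMAGE_URL = "https://images.example-cdn.com/pins/12345.jpg"

    # A notification that merely *references* a server-hosted image. The image
    # bytes are not in the message; the recipient's email client requests them
    # from the provider's server when the message is rendered. This mirrors the
    # arrangement the court described as display "by reason of the storage at
    # the direction of a user."
    linked = EmailMessage()
    linked["Subject"] = "A Pin you might like"
    linked["From"] = "notifications@example.com"
    linked["To"] = "user@example.com"
    linked.set_content("View this Pin in your browser.")  # plain-text fallback
    linked.add_alternative(
        f'<p>A Pin you might like:</p><img src="{HOSTED_IMAGE_URL}" alt="Pin">',
        subtype="html",
    )

    # By contrast, a message that *embeds* the image carries the bytes itself.
    fake_jpeg = b"\xff\xd8\xff\xe0" + b"\x00" * 64  # stand-in for real image data
    embedded = EmailMessage()
    embedded["Subject"] = "A Pin you might like"
    embedded["From"] = "notifications@example.com"
    embedded["To"] = "user@example.com"
    embedded.set_content("See the attached image.")
    embedded.add_attachment(fake_jpeg, maintype="image", subtype="jpeg", filename="pin.jpg")

    print(len(bytes(linked)), "bytes: hyperlink only, no image data in the message")
    print(len(bytes(embedded)), "bytes: the image data travels with the message")

The size difference illustrates the court’s point: in the linked version, the copyrighted image never leaves the platform’s servers until the recipient’s software requests it.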

No knowledge of infringement

The court found that Pinterest satisfied the second requirement for DMCA safe harbor protection by showing it lacked actual or red flag knowledge of the alleged infringement. Critically, plaintiff never sent Pinterest a DMCA takedown notice or otherwise identified the allegedly infringing material before filing suit. The DMCA operates on a notice-and-takedown system: platforms are not required to proactively monitor user content but must respond once they receive proper notice. Because plaintiff gave no such notice and offered no evidence that Pinterest otherwise knew about the specific image at issue, the court concluded there was no genuine dispute as to Pinterest’s lack of knowledge.

Control and financial benefit

The court found that Pinterest met the third and final requirement for DMCA safe harbor by showing it neither had the right and ability to control the alleged infringement nor received a financial benefit directly attributable to it. While Pinterest used algorithms to curate content and monetize its platform generally, the court held that this did not amount to the kind of “substantial influence” over user activity that would disqualify it under the DMCA. Pinterest did not direct users to upload specific content, nor did it participate in any purposeful conduct related to the display of plaintiff’s photo.

The court also rejected plaintiff’s claim that Pinterest profited directly from the infringement. Pinterest presented evidence that its notifications did not contain advertisements and that it earned no revenue specifically tied to the image in question. Plaintiff’s counter-evidence failed to show otherwise. Even if ads had appeared near the image, the law requires a direct connection between the infringing display and revenue, which was absent here. Therefore, Pinterest satisfied this final element of the DMCA safe harbor defense.

Harrington et al. v. Pinterest, Inc., No. 20-CV-5290, 2026 WL 25880 (N.D. Cal., January 5, 2026)

Ninth Circuit declines to impose broad injunction against California’s social media law for minors


NetChoice, an internet trade association representing companies such as Google, Meta, and X, sued the State of California over its Protecting Our Kids from Social Media Addiction Act, claiming that the law violates the First Amendment. The Act restricts how social media platforms interact with minors, particularly limiting access to algorithmic feeds, requiring certain default settings, and mandating age-verification procedures.

Plaintiff asked the court to block enforcement of several provisions of the law through a preliminary injunction, focusing on its claims that aspects of the Act unlawfully restrict speech and are unconstitutionally vague. The lower court declined to issue the injunction. Plaintiff then appealed to the Ninth Circuit.

On appeal, the Ninth Circuit largely affirmed the district court’s refusal to issue a broad injunction but ruled that the provision of the law requiring platforms to hide like and share counts by default for minors is unconstitutional. It reversed the lower court on that point and instructed it to modify its injunction to prevent enforcement of that specific provision.

The court ruled this way because it found the like-count requirement to be content-based and therefore subject to strict scrutiny under the First Amendment. The government failed to show that hiding like counts was the least restrictive means to achieve its goal of protecting minors’ mental health. Other provisions, including those governing private-mode settings and age verification, either survived scrutiny or were deemed unripe for review.

NetChoice LLC v. Bonta, — F.4th —, 2025 WL 2600007 (9th Cir. Sept. 9, 2025)

Claims against porn sites dismissed because of Section 230 immunity

Plaintiffs sued several internet pornography companies after they discovered that videos secretly recorded of them while changing in a college locker room had been uploaded online.

Plaintiffs asked the court to hold the defendants liable under several theories, including civil conspiracy, negligent monitoring, and violations of the Trafficking Victims Protection Reauthorization Act (TVPRA).

The court granted summary judgment in favor of defendants.

The court held that Section 230 of the Communications Decency Act shielded the defendants from liability for user-generated content, and plaintiffs failed to show that any of the defendants materially contributed to the illegal aspects of the videos. The court also found no evidence of a conspiracy or that defendants met the requirements to be considered beneficiaries of a sex trafficking venture under the TVPRA. Claims against defendants who merely licensed trademarks or placed ads were also rejected due to lack of personal jurisdiction or insufficient evidence of wrongdoing.

Jane Does 1–9 v. Collins Murphy, et al., No. 7:20-cv-00947-DCC, 2025 WL 2533961 (D.S.C. Sept. 3, 2025).

AI fake cases crisis reaches Illinois Appellate Court for the first time


The Illinois Appellate Court for the Fourth District issued a decision that marks the first time an Illinois court at this level has addressed attorney misuse of artificial intelligence. While the sad underlying matter involved the termination of parental rights, the appellate decision is noteworthy for another reason: the use of fictitious legal citations generated by AI.

The Appeal

Respondent appealed from a trial court’s order terminating her parental rights to her two minor children. The appeal raised several arguments, including challenges to the trial court’s findings on unfitness and best interests, a due process claim regarding self-representation, and a claim of ineffective assistance of counsel. The appellate court ultimately affirmed the trial court’s decision. However, the focus of the opinion shifted when the court found irregularities in the appellate briefs that respondent’s appointed counsel had filed.

The Court’s Review of the Briefs

After reviewing the briefs, the court identified eight cited cases that it could not verify. The court ordered counsel to explain the source of these citations and to appear in person. Counsel responded by acknowledging that five of the cited cases were fictitious. He explained that he had used AI to assist in drafting the brief and did not independently verify the citations it produced. As for the remaining three cases, which were real, the court found that their actual content did not support the legal arguments presented in the brief.

The Role of AI and Legal Responsibility

The court’s opinion noted that AI tools such as generative chatbots can assist legal professionals but must be used with caution. Citing recent guidance from the American Bar Association and the Illinois Supreme Court’s new AI policy, the court emphasized that attorneys are responsible for reviewing and verifying all material submitted to a court, regardless of how it was generated.

The court concluded that the attorney had not reviewed the AI-generated citations and that this lack of verification resulted in inaccurate filings. It clarified that while use of AI is not prohibited, reliance on unverified outputs can compromise the integrity of legal proceedings.

Sanctions Imposed

Rather than striking the briefs, the court chose to address the appeal on the merits but imposed monetary sanctions under Illinois Supreme Court Rule 375. It ordered the attorney to (1) return the $6,925.62 he had been paid by Sangamon County for his representation, (2) pay an additional $1,000 fine to the appellate clerk, and (3) submit a copy of the opinion to the Illinois Attorney Registration and Disciplinary Commission.

The court noted that these measures were intended to ensure accountability and to reinforce the expectations surrounding the use of AI in court proceedings.

In re Baby Boy, — N.E.3d —, 2025 WL 2046315 (Ill. App. Ct. 4th Dist. July 21, 2025)

Tesla awarded sanctions where opposing party cited AI-generated cases in discovery briefing

Plaintiff sued Tesla in federal court in the Southern District of Florida. During discovery, plaintiff filed multiple motions that cited fake case law hallucinated by artificial intelligence. Defendant moved to strike the filings and asked the court to award attorneys’ fees for the time spent addressing the fake citations and related motions.

Defendant argued that its attorneys spent more than five hours reviewing the false citations, drafting a motion to strike, and responding to a motion to compel contact information. Defendant requested $1,096 in fees. Plaintiff objected, claiming the hours were excessive and that defendant had not properly conferred before seeking fees. Plaintiff had offered to pay only one dollar as a “symbolic” resolution.

The court rejected plaintiff’s arguments. It found that plaintiff did not confer in good faith and was again wasting the court’s time. Although plaintiff was not a lawyer, the court held that pro se litigants must still follow the rules and act professionally. The court emphasized that submitting hallucinated cases, even unintentionally, undermines the judicial process.

The court reduced the requested amount slightly, finding that some billing entries were duplicative, as both attorneys reviewed the same documents. Plaintiff was ordered to pay the reduced amount and defendant was required to notify the court whether payment was received.

Crespo v. Tesla, Inc., 2025 WL 1921903 (S.D. Fla. July 11, 2025)

 
