Federal judge optimistic about the future of courts’ use of AI


Eleventh Circuit concurrence could be a watershed moment for discourse on the judiciary’s future use of AI.

In a thought-provoking recent ruling, Judge Kevin Newsom of the Eleventh Circuit Court of Appeals discussed the potential use of AI-powered large language models (LLMs) in legal text interpretation. Judge Newsom is known for his commitment to textualism and plain-language interpretation; before President Trump appointed him to the bench in 2017, he served as Solicitor General of Alabama and clerked for Justice David Souter of the U.S. Supreme Court. His concurrence in Snell v. United Specialty Insurance Company, — F.4th —, 2024 WL 2717700 (11th Cir. May 28, 2024), unusual in its approach, aimed to pull back the curtain on how legal professionals can leverage modern technology to enhance judicial processes. It takes an optimistic, hopeful tone about how LLMs could improve judges’ decision making, particularly when examining the meaning of words.

Background of the Case

The underlying case involved a plaintiff (Snell) who installed an in-ground trampoline for a customer. Snell was sued over the work, and the defendant insurance company refused to pick up the tab. One of the key questions in the litigation was whether this work fell under the term “landscaping” as used in the insurance policy. The parties and the courts had anguished over the ordinary meaning of the word “landscaping,” relying heavily on traditional methods such as consulting dictionaries. Ultimately, the court resolved the issue based on a unique aspect of Alabama law and Snell’s insurance application, which explicitly disclaimed any recreational or playground equipment work. But the definitional debate highlighted the complexities of interpreting legal texts and inspired Judge Newsom’s proposal to consider AI’s role in this process.

Judge Newsom’s Proposal

Judge Newsom’s proposal is both provocative and forward-thinking, discussing how to effectively incorporate AI-powered LLMs such as ChatGPT, Gemini, and Claude into the interpretive analysis of legal texts. He acknowledged that this suggestion may initially seem “heretical” to some but believed it was worth exploring. The basic rationale is that LLMs, trained on vast amounts of data reflecting everyday language use, could provide valuable insights into the ordinary meanings of words and phrases.

The concurrence reads very differently – in its solicitous treatment of AI – from many other early cases dealing with litigants’ use of AI, such as J.G. v. New York City Dept. of Education, 2024 WL 728626 (S.D.N.Y. February 22, 2024). In that case, the court found ChatGPT to be “utterly and unusually unpersuasive.” The present case takes an entirely different attitude toward AI.

Strengths of Using LLMs for Ordinary Language Determinations

Judge Newsom systematically examined the various benefits of judges’ use of LLMs. He commented on the following issues and aspects:

  • Reflecting Ordinary Language: LLMs are trained on extensive datasets from the internet, encompassing a broad spectrum of language use, from academic papers to casual conversations. This training allows LLMs to offer predictions about how ordinary people use language in everyday life, potentially providing a more accurate reflection of common speech than traditional dictionaries.
  • Contextual Understanding: Modern LLMs can discern context and differentiate between various meanings of the same word based on usage patterns. This capability could be particularly useful in legal interpretation, where context is crucial.
  • Accessibility and Transparency: LLMs are increasingly accessible to judges, lawyers, and the general public, offering an inexpensive and transparent research tool. Unlike the opaque processes behind some dictionary definitions, LLMs can provide clear insights into their training data and predictive mechanisms.
  • Empirical Advantages: Compared to traditional empirical methods such as surveys and “corpus linguistics”, LLMs are more practical and less susceptible to manipulation. They offer a scalable solution that can quickly adapt to new data.

Challenges and Considerations

But the judge’s take was not entirely rosy. He acknowledged certain potential downsides and vulnerabilities in the use of AI for making legal determinations. Even in this critique, though, his approach remained optimistic:

  • Hallucinations: LLMs can generate incorrect or fictional information. However, Judge Newsom argued that this issue is not unique to AI and that human lawyers also make mistakes or manipulate facts.
  • Representation: LLMs may not fully capture offline speech, potentially underrepresenting certain populations. This concern needs addressing, but Judge Newsom stated it does not fundamentally undermine the utility of LLMs.
  • Manipulation Risks: There is a risk of strategic manipulation of LLM outputs. However, this risk exists with traditional methods as well, and transparency in querying multiple models can mitigate it.
  • Dystopian Fears: Judge Newsom emphasized that LLMs should not replace human judgment but serve as one of many tools in the interpretive toolkit.

Future Directions

Judge Newsom concluded by suggesting further exploration into the proper querying of LLMs, refining the outputs, and ensuring that LLMs can handle temporal considerations for interpreting historical texts (i.e., a word must be given the meaning it had when it was written). These steps could maximize the utility of AI in legal interpretation, ensuring it complements rather than replaces traditional methods.
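
By way of illustration only – the concurrence itself contains no code – a “query multiple models and compare” exercise of the kind Judge Newsom describes might look something like the minimal Python sketch below. It assumes the official OpenAI and Anthropic Python SDKs with API keys set in the environment; the model names and the prompt are placeholders, not anything the court actually used.

    # Illustrative sketch only: pose the same ordinary-meaning question to two
    # different LLMs so their answers can be compared side by side.
    from openai import OpenAI
    import anthropic

    PROMPT = ("In ordinary, everyday usage, is installing an in-ground "
              "trampoline 'landscaping'? Answer briefly and explain.")

    def ask_openai(prompt: str) -> str:
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    def ask_anthropic(prompt: str) -> str:
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        response = client.messages.create(
            model="claude-3-5-sonnet-latest",  # placeholder model name
            max_tokens=500,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text

    for name, answer in [("OpenAI", ask_openai(PROMPT)), ("Anthropic", ask_anthropic(PROMPT))]:
        print(f"--- {name} ---\n{answer}\n")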

Snell v. United Specialty Insurance Company, — F.4th —, 2024 WL 2717700 (11th Cir. May 28, 2024)

 

Can the Fifth Amendment protect you from having to turn over your personal email account if you’re fired?

 

Can the Fifth Amendment protect you from having to turn over your personal laptop and email account when you are fired? Maybe not.

In New Jersey, an ex-employee allegedly sent over a hundred emails filled with confidential data to his personal Gmail account. This was frowned upon, especially since he had signed a confidentiality agreement.

When the company demanded he hand over his devices and accounts for inspection, he played the Fifth Amendment card, saying it protected him from self-incrimination. But the court wasn’t having any of it.

Why? Because during the HR investigation, he admitted in writing there was incriminating evidence waiting to be found, so the court held he waived his right to Fifth Amendment protection.

So even if turning over the devices could get the employee arrested, the court held that was okay in this situation and even benefitted the public interest.

Be careful out there with your side hustle.

When can you serve a lawsuit by email?


One of the biggest challenges brand owners face in enforcing their intellectual property rights online is tracking down the infringer – often located in a foreign country – so that a lawsuit can be served. Generally, for due process reasons, the Federal Rules of Civil Procedure require that a complaint and summons be served personally – that is, by handing the papers to the person directly. But there are exceptions, particularly in situations involving overseas defendants, in which “alternative service” may be available. A recent case from federal court in the state of Washington provides an example in which Amazon and certain sellers were able to serve a lawsuit on overseas defendants via email.

Learning about the counterfeiters

Plaintiffs sued defendants, accusing them of selling counterfeit goods on Amazon. Plaintiffs alleged that defendants resided in Ukraine. Even after working with a private investigator and seeking third-party discovery from defendants’ virtual bank account providers, plaintiffs could not find any valid physical addresses for defendants. So plaintiffs asked the court to permit service of the complaint and summons at the email addresses defendants had registered with their Amazon selling accounts. Plaintiffs knew those email addresses must be valid because test messages did not generate any error notices or bounce-backs indicating the messages had failed to deliver.
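
As a purely illustrative aside – not something described in the court’s opinion – a “test message” of the sort plaintiffs relied on might look like the minimal Python sketch below, which uses only the standard library’s smtplib and treats an immediate SMTP rejection as a sign the address is bad. The SMTP host, port, and credentials are placeholders, and delayed bounce-back messages (which arrive later in the sender’s inbox) are outside the sketch’s scope.

    # Illustrative sketch only: send a test message and treat an immediate
    # SMTP rejection as evidence the recipient address is invalid.
    # Host, port, and credentials below are placeholders.
    import smtplib
    from email.message import EmailMessage

    def send_test_message(recipient: str) -> bool:
        msg = EmailMessage()
        msg["From"] = "sender@example.com"
        msg["To"] = recipient
        msg["Subject"] = "Test message"
        msg.set_content("This is a test message.")
        try:
            with smtplib.SMTP("smtp.example.com", 587) as server:
                server.starttls()
                server.login("sender@example.com", "password-placeholder")
                server.send_message(msg)
            return True   # accepted for delivery; no immediate rejection
        except smtplib.SMTPRecipientsRefused:
            return False  # the server refused the recipient address outright

    print(send_test_message("seller@example.com"))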

What is required

The court looked at Federal Rule of Civil Procedure 4(f), which allows for service of process on individuals in foreign countries through several methods, including (1) internationally agreed methods such as those authorized by the Hague Convention, (2) according to the foreign country’s law if no international agreement exists, or (3) by other means not prohibited by international agreements as the court orders. A plaintiff must show that the specific circumstances require court intervention. Furthermore, any method of service must align with constitutional due process, meaning it must be designed to effectively inform interested parties of the ongoing action and give them a chance to object, ensuring fairness and the opportunity for defense.

The court said okay

The court found that plaintiffs had shown court intervention was necessary because plaintiffs could not find valid physical addresses but could show that the email addresses apparently were valid. As for the Hague Convention, no method it provided was available without valid physical addresses. Moreover, the court observed that whether or not the Hague Convention applied, email service on individuals in Ukraine was not prohibited by the Convention nor by any other international agreement.

And the court found email service comported with constitutional due process. Defendants conducted business through these email accounts and tests confirmed their functionality. Although defendants’ Amazon accounts were blocked, evidence suggested these email addresses were still active. The court thus concluded that email service met due process standards by being “reasonably calculated” to notify defendants, allowing them the chance to present objections.

Amazon.com Inc. v. Ananchenko, 2024 WL 492283 (W.D. Washington, February 7, 2024)


No breach of contract claim against Twitter for account suspension


The issues in the case of Yuksel v. Twitter were whether Twitter, by terminating plaintiff’s account, (1) breached its contract with plaintiff, and (2) violated the Racketeer Influenced and Corrupt Organizations Act (“RICO”). Twitter moved to dismiss the breach of contract and RICO claims. The court granted the motion because Section 230 barred the claims and because plaintiff failed to plausibly allege the claims.

The gist of plaintiff’s claims centered on allegations that Twitter suspended plaintiff’s account in deference to the Turkish government. He claimed that by being cut off from his 142,000 followers and by having seven years’ worth of “intellectual content” destroyed, he was damaged to the tune of $142 million.

Section 230 immunity

The court held that Twitter was immune from plaintiff’s claims, under 47 U.S.C. §230(c)(1). That provision states that:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

It found that Twitter was a provider of an interactive computer service, and that plaintiff sought to hold Twitter liable for decisions regarding information provided by another information content provider. In this situation, that other information content provider was plaintiff himself. The court also found that plaintiff sought to treat Twitter as a publisher in connection with its decision to suspend his account. But the decision to suspend an account falls within the scope of “traditional publishing functions.” Quoting the well-known case of Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1102 (9th Cir. 2009), the court noted that “removing content is something publishers do.”

Plaintiff tried to get around Section 230 immunity by asserting that his RICO claim fit into the statute’s carveout for federal criminal prosecution (Section 230(e)(1) provides that “[n]othing in this section shall be construed to impair the enforcement of . . . any . . . Federal criminal statute.”). The court rejected this argument, concluding that the carveout from immunity extends only to criminal prosecutions and not to civil actions based on criminal statutes.

Failure to state a claim

The court held that even without Section 230 immunity, the breach of contract and RICO claims would fail. Plaintiff had alleged breach of contract in the complaint, but in responding to the motion to dismiss stated that he did not claim breach of contract between himself and Twitter, but rather asked the court to find Twitter’s terms of service unenforceable because they permit the arbitrary and reckless deletion of user accounts. The court noted that it was constrained not to look beyond the complaint to allegations plaintiff made only in response to Twitter’s motion to dismiss.

On the RICO claim, the court found that plaintiff failed to identify the “enterprise” underlying the RICO claim, or any purported “pattern,” “racketeering activity,” or “injury” to “business or property” caused by it. Moreover, according to the court, plaintiff’s conclusory allegations that Twitter was somehow in cahoots with foreign dictators failed to meet basic pleading standards in federal court.

Yuksel v. Twitter, Inc., 2022 WL 16748612 (N.D. California, November 7, 2022)


What must social media platforms do to comply with Florida’s Senate Bill 7072?

The media has been covering Florida’s new law (Senate Bill 7072) targeting social media platforms, which Governor DeSantis signed today. The law is relatively complex and imposes a number of new obligations on social media platforms. It is likely to face First Amendment challenges. And then there is the Section 230 problem. In any event, through all the political noise surrounding the law’s passage, it is worth taking a careful look at what the statute actually says.

Findings

The bill starts with some findings of the legislature that give context to what the law is about. There are some interesting findings that reflect an evolved view of the internet and social media as being critical spaces for information exchange, in the nature of public utilities and common carriers:

  • Social media platforms have transformed into the new public town square.
  • Social media platforms have become as important for conveying public opinion as public utilities are for supporting modern society.
  • Social media platforms hold a unique place in preserving first amendment protections for all Floridians and should be treated similarly to common carriers.
  • The state has a substantial interest in protecting its residents from inconsistent and unfair actions by social media.

Important definitions

The statute gives some precise and interesting definitions to important terms:

A “social media platform” is any information service, system, Internet search engine, or access software provider that: provides or enables computer access by multiple users to a computer server, including an Internet platform or a social media site; operates as a sole proprietorship, partnership, limited liability company, corporation, association, or other legal entity; does business in Florida; and has either annual gross revenues in excess of $100 million or at least 100 million monthly individual platform participants globally.

Interestingly, the statute appears to clarify that Disney World and other major players are not part of what constitutes a “social media platform”:

The term does not include any information service, system, Internet search engine, or access software provider operated by a company that owns and operates a theme park or entertainment complex as defined elsewhere in Florida law.

Some other definitions:

To “censor” is for a social media platform to delete, regulate, restrict, edit, alter, inhibit the publication or republication of, suspend a right to post, remove, or post an addendum to any content or material posted by a user. The term also includes actions to inhibit the ability of a user to be viewable by or to interact with another user of the social media platform.

“Deplatforming” means the action or practice by a social media platform to permanently delete or ban a user or to temporarily delete or ban a user from the social media platform for more than 14 days.

A “shadow ban” is action by a social media platform, through any means, whether the action is determined by a natural person or an algorithm, to limit or eliminate the exposure of a user or content or material posted by a user to other users of the social media platform. This term includes acts of shadow banning by a social media platform which are not readily apparent to a user.

“Post-prioritization” means action by a social media platform to place, feature, or prioritize certain content or material ahead of, below, or in a more or less prominent position than others in a newsfeed, a feed, a view, or in search results. The term does not include post-prioritization of content and material of a third party, including other users, based on payments by that third party, to the social media platform. 

Protections for political candidates

The first substantive part of the statute seeks to protect political candidates from being taken offline:

A social media platform may not willfully deplatform a candidate for office who is known by the social media platform to be a candidate, beginning on the date of qualification and ending on the date of the election or the date the candidate ceases to be a candidate. A social media platform must provide each user a method by which the user may be identified as a qualified candidate and which provides sufficient information to allow the social media platform to confirm the user’s qualification by reviewing the website of the Division of Elections or the website of the local supervisor of elections.

If the Florida Elections Commission finds that a social media platform has violated the above provision, it can fine the platform $250,000 per day for a candidate for statewide office and $25,000 per day for a candidate for any other office.

Social media platforms’ required activity

The statute spells out certain things that social media platforms must and must not do. For example, social media platforms:

  • Must publish the standards, including detailed definitions, it uses or has used for determining how to censor, deplatform, and shadow ban.
  • Must apply censorship, deplatforming, and shadow banning standards in a consistent manner among its users on the platform.
  • May not censor or shadow ban a user’s content or material or deplatform a user from the social media platform without notifying the user who posted or attempted to post the content or material (unless the content is obscene). (This notice must be in writing and must be delivered via electronic mail or direct electronic notification to the user within 7 days after the censoring action. It must include a thorough rationale explaining the reason that the social media platform censored the user. It must also include a precise and thorough explanation of how the social media platform became aware of the censored content or material, including a thorough explanation of the algorithms used, if any, to identify or flag the user’s content or material as objectionable.)
  • Must, if a user is deplatformed, allow that user to access or retrieve all of the user’s information, content, material, and data for at least 60 days after the user receives the required notice.
  • Must provide a mechanism that allows a user to request the number of other individual platform participants who were provided or shown the user’s content or posts and provide, upon request, a user with the number of other individual platform participants who were provided or shown content or posts.
  • Must categorize algorithms used for post-prioritization and shadow banning, and must allow a user to opt out of post-prioritization and shadow banning algorithm categories to allow sequential or chronological posts and content.
  • Must provide users with an annual notice on the use of algorithms for post-prioritization and shadow banning and reoffer annually the opt-out opportunity provided in the statute. 
  • May not apply or use post-prioritization or shadow banning algorithms for content and material posted by or about a user who is known by the social media platform to be a political candidate as defined under the law, beginning on the date of qualification and ending on the date of the election or the date the candidate ceases to be a candidate.
  • Must provide each user a method by which the user may be identified as a qualified candidate and which provides sufficient information to allow the social media platform to confirm the user’s qualification by reviewing the website of Florida’s Division of Elections or the website of the local supervisor of elections.
  • May not take any action to censor, deplatform, or shadow ban a journalistic enterprise based on the content of its publication or broadcast (unless the content is obscene as defined under Florida law). 

What happens if there is a violation?

A social media platform that violates the statute could face legal action from the government, or from private citizens who sue under the statute. 

Government action:

If the Florida Department of Legal Affairs, by its own inquiry or as a result of a complaint, suspects that a social media platform’s violation of the statute is imminent, is occurring, or has occurred, it may investigate the suspected violation. Based on its investigation, the department may bring a civil or administrative action. The department can issue subpoenas to learn about the algorithms related to any alleged violation.

The ability of a private individual to bring an action under the statute is not as broad as the government’s ability to enforce the law. A private individual may sue only if:

  • the social media platform fails to apply censorship, deplatforming, and shadow banning standards in a consistent manner among its users on the platform, or
  • the social media platform censors or shadow bans a user’s content or material or deplatforms the user from the social media platform without the required notice.

Remedies

The court may award the following remedies to the user who proves a violation of the statute:

  • Up to $100,000 in statutory damages per proven claim.
  • Actual damages.
  • If aggravating factors are present, punitive damages.
  • Other forms of equitable relief, including injunctive relief.
  • If the user was deplatformed in circumstances where the social media platform failed to apply censorship, deplatforming, and shadow banning standards in a consistent manner among its users on the platform, the user can recover its costs and reasonable attorney fees.

When does the law take effect?

July 1, 2021. 

 

Does Section 230 apply to claims under the Fair Credit Reporting Act?

Plaintiffs sued defendants, claiming that defendants violated the Fair Credit Reporting Act (FCRA) by including inaccurate criminal information on background check reports defendants produced and sold. Defendants moved for judgment on the pleadings (a form of motion to dismiss), arguing that 47 U.S.C. §230 provided them immunity. Specifically, defendants argued that they were an interactive computer service, and that plaintiffs’ claims treated defendants as the publisher of third-party content. The court agreed with defendants and granted their motion.


Defendants’ website

Defendants operate the website found at publicdata.com. The website allows customers to search through various databases available via the site. Defendants can pull this information into a report. Plaintiffs asserted that defendants purchase, collect, and assemble public record information into reports, which employers then buy from defendants via the website.

The FCRA claims

The FCRA places a number of requirements on “consumer reporting agencies,” and plaintiffs asserted that defendants did not meet these requirements. Each of the three plaintiffs – who wished to represent an entire class of plaintiffs – claimed that reports obtained by prospective employers contained inaccurate information about criminal charges against plaintiffs, and that this resulted in plaintiffs not getting jobs they sought.

Section 230’s exceptions did not apply

The court began by noting that none of Section 230’s exceptions (i.e., situations where immunity does not apply) precluded immunity from applying to an FCRA claim. Congress enumerated five exceptions to immunity, expressly stating that Section 230 cannot have any effect on any “[f]ederal criminal statute,” “intellectual property law,” “[s]tate law that is consistent with this section,” “the Electronic Communications Privacy Act,” or “sex trafficking law.” Applying the canon of statutory construction of expressio unius est exclusio alterius, the court determined that where Congress explicitly enumerates certain exceptions to a general prohibition, additional exceptions are not to be implied, in the absence of evidence of a contrary legislative intent. The court held that since Congress plainly chose five exceptions to Section 230 immunity, and did not include the FCRA among them, by its plain language, Section 230 can apply to FCRA claims.

Immunity applied

Citing the well-known Fourth Circuit case of Nemet Chevrolet, Ltd. v. Consumeraffairs.com, Inc., 591 F.3d 250 (4th Cir. 2009), the court looked to the three requirements for successfully asserting Section 230 immunity: (1) the defendant is a provider of an interactive computer service; (2) the content at issue was created by another information content provider; and (3) the defendant is not itself alleged to be the creator of that content. In this case, all three elements were met.

Finding that the defendants’ website was an interactive computer service, the court observed that Section 230 immunity covers information that the defendant does not create as an information content provider, and that such immunity is not lost when the interactive service provider pays a third party for the content at issue, and essentially becomes a “distributor” of the content.

On the second element, the court found that plaintiffs clearly stated that defendants did not create the content, but that they obtained it “from vendors, state agencies, and courthouses.” It was those entities that created the records defendants uploaded to their website and collected into reports.

And in the court’s mind there was no doubt that defendants did not create the content. It found that plaintiffs admitted in their complaint that the convictions and other information included on defendants’ reports were derived from other information content providers such as courts and other repositories of this information. Although plaintiffs alleged that defendants manipulated and sorted the content in a background check report, there was no explicit allegation that defendants materially contributed to or created the content themselves.

Henderson v. The Source For Public Data, 2021 WL 2003550 (E.D. Va. May 19, 2021)

Smartphone user gave consent by website registration to receive texts


Plaintiff sued defendant alleging defendant sent unsolicited text messages that violated the Telephone Consumer Protection Act (TCPA). Defendant moved for summary judgment. The court granted the motion. It found that plaintiff expressly gave consent by website registration to receive the text messages when she signed up for defendant’s services. This was a significant win for defendant because the TCPA provides for stiff penalties. 

Consent by website registration

Plaintiff used her smartphone to sign up for defendant’s food delivery services. When she registered, she left a pre-checked box checked, opting in to receive text messages. She also provided her cell phone number.

Defense of express consent

Defendant raised express consent as an affirmative defense. Plaintiff claimed the signup process did not include a clear and conspicuous disclosure that she would receive text messages. The court rejected plaintiff’s arguments, finding that the website disclosure was clear and conspicuous.

Pre-checked box was okay

And the court rejected plaintiff’s argument that the pre-checked opt-in box was not a valid electronic signature. The court recognized, among other things, that smartphones are a pervasive part of daily life and that a significant majority of American adults own smartphones. It also noted that it should consider the perspective of a reasonably prudent smartphone user. On these facts, the court found that the disclosure on the phone reasonably conveyed that registering an account with the phone communications box checked would indicate consent to phone communications regarding advertising.

Lundbom v. Schwans Home Service, Inc., 2020 WL 2736419 (D. Oregon, May 26, 2020)

See also: Browsewrap enforceable: hyperlinked terms on defendant’s website gave reasonable notice

About the author: Evan Brown is a technology and intellectual property attorney helping clients with a variety of online issues. Call him at (630) 362-7237 or send email to ebrown@internetcases.com. Follow on Twitter and Instagram

Identifying unknown online copyright infringers: guidance


A recent case addressed the problem of identifying unknown online copyright infringers. Plaintiff sued some unknown “John Doe” defendants who infringed plaintiff’s copyrights. To keep the lawsuit moving forward, plaintiff needed to serve the complaint on the defendants. But this presented a challenge, since plaintiff did not know to whom it should deliver the documents. So plaintiff filed a motion with the court, asking for permission to send interrogatories and to take depositions that would help unmask the anonymous infringers. Plaintiff sought this information from parties including PayPal, Cloudflare, and various domain name registrars. The court’s response provides guidance to parties seeking to learn the identities of unknown parties.

To identify unknown online copyright infringers: early discovery

The rules of procedure in federal court do not permit discovery requests until the parties have had an initial conference with each other. But they cannot have that conference if the defendant is unknown. So the plaintiff needs to send discovery requests earlier than what the rules generally allow. It needs the court’s permission to do so.

A court will not permit early discovery in every instance. But courts have made exceptions, permitting limited discovery after a plaintiff files the complaint to permit the plaintiff to learn the identifying facts necessary to permit service on the defendant. Courts allow these requests upon a showing of good cause.

What constitutes good cause for early discovery?

This court applied the three-part test for good cause set out more than 20 years ago in Columbia Ins. Co. v. Seescandy.com, 185 F.R.D. 573 (N.D. Cal. 1999). The party seeking early discovery should be able to:

  • Identify the missing party with sufficient specificity such that the court can determine that the defendant is a real person or entity who could be sued in federal court;
  • Identify all previous steps taken to locate the elusive defendant; and
  • Establish to the court’s satisfaction that the suit against defendant could withstand a motion to dismiss.

Early discovery was appropriate in this case

Under the first prong of the test, the court found that plaintiff identified the missing parties with as much clarity as possible. Plaintiff stated that those missing parties were persons or entities, and that those parties had been observed and documented as infringing on plaintiff’s copyrights. Thus, as real persons or entities, those Doe parties could be sued in federal court.

As for the second prong, the only information plaintiff had regarding the defendants was the existence of accounts relating to the operations of the defendants’ websites. Therefore, there were no other measures plaintiff could take to identify the defendants other than to obtain their identifying information from the parties from whom it was sought.

Finally, on the third prong, the court found that plaintiff had pleaded the required elements of direct and contributory copyright infringement. Plaintiff claimed (1) it owned and had registered the copyrighted work at issue in the case; (2) defendants knew of the infringing activity and were conscious of their infringement; and (3) defendants actively participated in this infringement by inducing, causing, and contributing to the infringement of plaintiff’s copyrighted work. Since plaintiff had alleged each of these elements properly, this cause of action could withstand a motion to dismiss.

MG Premium Ltd. v. Does, 2020 WL 1675741 (W.D. Wash. April 6, 2020)

