Kids Online Safety Act: Quick Facts

What is KOSA?

Senators Blackburn and Blumenthal have introduced a new version of KOSA – the Kids Online Safety Act, which seeks to protect minors from online harms by requiring social media companies to prioritize children’s safety in product design and offer more robust parental control tools. Garnering bipartisan support with 62 Senate cosponsors in the wake of a significant hearing with Big Tech CEOs, the bill emphasizes accountability for tech companies, transparency in algorithms, and enhanced safety measures. The legislation has been refined following extensive discussions with various stakeholders, including tech companies, advocacy groups, and parents, to ensure its effectiveness and alignment with the goal of safeguarding young internet users from bullying, harassment, and other online risks.

Critics of the bill argue that KOSA, despite amendments, remains a threat to constitutional rights, effectively censoring online content and empowering state officials to target undesirable services and speech. See, e.g., the EFF’s blog post about the bill. They contend that KOSA mandates extensive filtering and blocking of legal speech across numerous websites, apps, and platforms, likely leading to age verification requirements. Concerns are raised about the potential harm to minors’ access to important information, particularly for groups such as LGBTQ+ youth, those seeking health and reproductive information, and activists. The modifications in the 2024 version, including the removal of the authority for state attorneys general to sue for non-compliance with the “duty of care” provision, are seen as insufficient to address the core issues related to free speech and censorship. Critics urge opposition to KOSA, highlighting its impact not just on minors but on all internet users, who could be subjected to a “second-class internet” due to restricted access to information.

What does the proposed law actually say? Below are some key facts about the contents of the legislation:

Who would be subject to the law:

The bill would place various obligations on “covered platforms”:

  • A “covered platform” encompasses online platforms, video games, messaging applications, and video streaming services accessible via the internet and used or likely to be used by minors.
  • Exclusions from the definition of “covered platform” include common carrier services, broadband internet access services, email services, specific teleconferencing or video conferencing services, and direct wireless messaging services not linked to an online platform.
  • Entities not for profit, educational institutions, libraries, news or sports news websites/apps with specific criteria, business-to-business software, and cloud services not functioning as online platforms are also excluded.
  • Virtual private networks and similar services that solely route internet traffic are not considered “covered platforms.”

Design and Implementation Requirements

  • Covered platforms are required to exercise reasonable care in designing and implementing features to prevent and mitigate harms to minors, including mental health disorders, addiction-like behaviors, physical violence, bullying, harassment, sexual exploitation, and certain types of harmful marketing.
  • The prevention of harm includes addressing issues such as anxiety, depression, eating disorders, substance abuse, suicidal behaviors, online bullying, sexual abuse, and the promotion of narcotics, tobacco, gambling, and alcohol to minors.
  • Despite these protections, platforms are not required to block minors from intentionally seeking content or from accessing resources aimed at preventing or mitigating these harms, including providing evidence-informed information and clinical resources.

Required Safeguards for Minors

  • Covered platforms must provide minors with safeguards to limit communication from others, restrict access to their personal data, control compulsive platform usage features, manage personalized recommendation systems, and protect their geolocation data. (One has to consider whether these would pass First Amendment scrutiny, particularly in light of recent decisions such as the one in NetChoice v. Yost).
  • Platforms are required to offer options for minors to delete their accounts and personal data, and limit their time on the platform, with the most protective privacy and safety settings enabled by default for minors.
  • Parental tools must be accessible and easy-to-use, allowing parents to manage their child’s privacy, account settings, and platform usage, including the ability to restrict purchases and view and limit time spent on the platform.
  • A reporting mechanism for harms to minors must be established, with platforms required to respond substantively within specified time frames, and immediate action required for reports involving imminent threats to minors’ safety.
  • Advertising of illegal products such as narcotics, tobacco, gambling, and alcohol to minors is strictly prohibited.
  • Safeguards and parental tools must be clear, accessible, and designed without “dark patterns” that could impair user autonomy or choice, with considerations for uninterrupted gameplay and offline device or account updates.

Disclosure Requirements

  • Before a minor registers or purchases on a platform, clear notices about data policies, safeguards for minors, and risks associated with certain features must be provided.
  • Platforms must inform parents about safeguards and parental tools for their children and obtain verifiable parental consent before a child uses the platform.
  • Platforms may consolidate notice and consent processes with existing obligations under the Children’s Online Privacy Protection Act (COPPA). (Like COPPA, a “child” under the act is one under 13 years of age.)
  • Platforms using personalized recommendation systems must clearly explain their operation, including data usage, and offer opt-out options for minors or their parents.
  • Advertising targeted at minors must be clearly labeled, explaining why ads are shown to them and distinguishing between content and commercial endorsements.
  • Platforms are required to provide accessible information to minors and parents about data policies and access to safeguards, ensuring resources are available in relevant languages.

Reporting Requirements

  • Covered platforms must annually publish a report, based on an independent audit, detailing the risks of harm to minors and the effectiveness of prevention and mitigation measures. (Providing these audit services is no doubt a good business opportunity for firms with such capabilities; unfortunately this will increase the cost of operating a covered platform.)
  • This requirement applies to platforms with over 10 million active monthly users in the U.S. that primarily host user-generated content and discussions, such as social media and virtual environments.
  • Reports must assess platform accessibility by minors, describe commercial interests related to minor usage, and provide data on minor users’ engagement, including time spent and content accessed.
  • The reports should identify foreseeable risks of harm to minors, evaluate the platform’s design features that could affect minor usage, and detail the personal data of minors collected or processed.
  • Platforms are required to describe safeguards and parental tools, interventions for potential harms, and plans for addressing identified risks and circumvention of safeguards.
  • Independent auditors conducting the risk assessment must consult with parents, youth experts, and consider research and industry best practices, ensuring privacy safeguards are in place for the reported data.

Keep an eye out to see if Congress passes this legislation in the spirit of “for the children.”

How do you sort out who owns a social media account used to promote a business?

Imagine this scenario – a well-known founder of a company sets up social media accounts that promote the company’s products. The accounts also occasionally display personal content (e.g., public happy birthday messages the founder sends to his spouse). The company fires the founder and then the company claims it owns the accounts. If the founder says he owns the accounts, how should a court resolve that dispute?

The answer to this question is helpful in resolving actual disputes such as this, and perhaps even more helpful in setting up documentation and procedures to prevent such a dispute in the first place.

In the recent case of In re: Vital Pharmaceutical, the court considered whether certain social media accounts that a company’s founder and CEO used were property of the company’s bankruptcy estate under Bankruptcy Code § 541. Though this was a bankruptcy case, the analysis is useful in other contexts to determine who owns a social media account. The court held that various social media accounts (including Twitter, Instagram and TikTok accounts the CEO used) belonged to the company.

In reaching this decision, the court recognized a “dearth” of legal guidance from other courts on how to determine account ownership when there is a dispute. It noted the case of In re CTLI, LLC, 528 B.R. 359 (Bankr. S.D. Tex. 2015), but expressed concern that this eight-year-old case did not adequately address the current state of social media account usage, particularly in light of the rise of influencer marketing.

The court fashioned a rather detailed test:

  • Are there any agreements or other documents that show who owns the account? Perhaps an employee handbook? If so, then whoever such documents say owns the account is presumed to be the owner of the account.
  • But what if there are no documents that show ownership, or such documents do not show definitively who owns the account? In those circumstances, one should consider:
    • Does one party have exclusive power to access the account?
    • Does that same party have the ability to prevent others from accessing the account?
    • Does the account enable that party to identify itself as having that exclusive power?
  • If a party establishes both that the documents show it owns the account and that it has control over the account, that ends the inquiry. But if one or both of those things are not definitively shown, one can still consider whether use of the social media account tips the scales one way or the other:
    • What name is used for the account?
    • Is the account used to promote more than one company’s products?
    • To what extent is the account used to promote the user’s persona?
    • Would any required changes fundamentally change the nature of the account?

Companies utilizing social media accounts run by influential individuals with well-known personas should take guidance from this decision. Under this court’s test, creating documentation or other evidence of account ownership would provide the clearest path forward. Absent such written indication, the parties should take care to establish clear protocols concerning account control and usage.

In re: Vital Pharmaceutical, 2023 WL 4048979 (Bankr. S.D. Fla., June 16, 2023)

Does tagging the wrong account in an Instagram post show actual confusion in trademark litigation?

In a recent trademark infringement case, the court considered whether Instagram users tagging photos of one product with the account of another company’s product was evidence of actual confusion. In this case, the court found that it was not evidence of actual confusion.

Plaintiff makes a premium tequila sold in bottles, and defendant makes an inexpensive tequila-soda product sold in cans. Plaintiff sued defendant for trademark infringement and sought a preliminary injunction. To support its assertion that it was likely to succeed on the merits of the case, plaintiff argued there was actual confusion among the consuming public. For example, on Instagram, at least 30 people had tagged photos of plaintiff’s products with defendant’s account.

The court found that in these circumstances, particularly where a marketing survey also showed that less than 10% of people were confused by defendant’s mark, the incorrect tagging did not show actual confusion.

Though the bar for showing actual confusion is low, the court noted that a showing of confusion requires more than a “fleeting mix-up of names” and that confusion must be caused by the trademark used and must “sway” consumer purchase.

In this case, the court found that plaintiff’s evidence regarding mistaken Instagram tags did not establish a likelihood of trademark confusion that would result in purchase decisions based on the mistaken belief that defendant’s tequila-soda product was affiliated with plaintiff. At best, in the court’s view, plaintiff’s evidence demonstrated a “fleeting mix-up of names,” which was not evidence of actual confusion.

The court likened this case to the recent case of Reply All Corp. v. Gimlet Media, LLC, 843 F. App’x 392 (2d Cir. 2021), wherein “instances of general mistake or inadvertence—without more—[did] not suggest that those potential consumers in any way confused [plaintiff’s] and [defendant’s] products, let alone that there was confusion that could lead to a diversion of sales, damage to goodwill, or loss of control over reputation.”

Casa Tradición S.A. de C.V. v. Casa Azul Spirits, LLC, 2022 WL 17811396 (S.D. Tex. December 19, 2022)

Can a person be liable for retweeting a defamatory tweet?

Under traditional principles of defamation law, one can be liable for repeating a defamatory statement to others. Does the same principle apply, however, on social media such as Twitter, where one can easily repeat the words of others via a retweet?

Hacking, tweet, retweet, lawsuit

A high school student hacked the server hosting the local middle school’s website and modified the web page of plaintiff, a teacher, to make it appear she was seeking inappropriate relationships. Another student tweeted a picture of the modified web page, and several people retweeted that picture.

Plaintiff sued the retweeters for defamation and reckless infliction of emotional distress. The court dismissed the case, holding that 47 U.S.C. § 230 immunized defendants from liability as “users” of an interactive computer service. Plaintiff sought review with the New Hampshire Supreme Court. On appeal, the court affirmed the dismissal.

Who is a “user” under Section 230?

Section 230 provides, in relevant part, that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”. Importantly, the statute does not define the word “user”. The lower court held that defendant retweeters fit into the category of “user” under the statute and therefore could not be liable for their retweeting, because to impose such liability would require treating them as the publisher or speaker of information provided by another.

Looking primarily at the plain language of the statute, and guided by the 2006 California case of Barrett v. Rosenthal, the state supreme court found no basis in plaintiff’s arguments that defendants were not “users” under the statute. Plaintiff had argued that “user” should be interpreted to mean libraries, colleges, computer coffee shops and others who, “at the beginning of the internet,” were primary access points for people. She also argued that because Section 230 changed common law defamation, the statute must speak directly to immunizing individual users.

The court held that it was “evident” that Section 230 abrogated the common law of defamation as applied to individual users. “That individual users are immunized from claims of defamation for retweeting content they did not create is evident from the statutory language.”

Banaian v. Bascom, — A.3d —, 2022 WL 1482521 (N.H. May 11, 2022)

Old social media posts violated trade dress infringement injunction

The parties in the case of H.I.S.C., Inc. v. Franmar are competitors, each making garden broom products. In earlier litigation, the defendant filed a counterclaim against plaintiff for trade dress infringement and successfully obtained an injunction prohibiting plaintiff from advertising brooms designed in a certain way. Defendant asked the court to find plaintiff in contempt because, among other reasons, certain social media posts that plaintiff had published before the injunction remained online after the injunction was entered. The court agreed that the continued availability of those posts was improper and found plaintiff in contempt for violating the injunction.

The court noted that the injunction prohibited “[a]dvertising, soliciting, marketing, selling, offering for sale or otherwise using in the United States the [applicable product trade dress] in connection with any garden broom products.” It observed that “[o]n the Internet and in social media, a post from days, weeks, months, or even years ago can still serve to advertise a product today.” The court cited Ariix, LLC v. NutriSearch Corp., 985 F.3d 1107, 1116 n.5, in which that court noted that one prominent influencer receives $300,000 to $500,000 for a single Instagram post endorsing a company’s product – a sum surely covering both the post itself and an agreement to keep the post visible to consumers for a substantial period of time. Interestingly, the court found that a social media post differs in nature from a television or radio advertisement, which has a fixed air date and time. Accordingly, the court found it was inappropriate for social media posts published before the injunction to stay online.

H.I.S.C., Inc. v. Franmar Int’l Importers, Ltd., 2022 WL 104730 (S.D. Cal. January 11, 2022)

Executive order to clarify Section 230: a summary

Late yesterday President Trump took steps to make good on his promise to regulate online platforms like Twitter and Facebook. He released a draft executive order to that end. You can read the actual draft executive order. Here is a summary of the key points. The draft order:

  • States that it is the policy of the U.S. to foster clear, nondiscriminatory ground rules promoting free and open debate on the Internet. It is the policy of the U.S. that the scope of Section 230 immunity should be clarified.
  • Argues that a platform becomes a “publisher or speaker” of content, and therefore not subject to Section 230 immunity, when it does not act in good faith to restrict access to content (in accordance with Section 230(c)(2)) that it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.” The executive order argues that Section 230 “does not extend to deceptive or pretextual actions restricting online content or actions inconsistent with an online platform’s terms of service.”
  • Orders the Secretary of Commerce to petition the FCC, requesting that the FCC propose regulations to clarify the conditions around a platform’s “good faith” when restricting access to or availability of content. In particular, the requested rules would examine whether the action was, among other things, deceptive, pretextual, inconsistent with the provider’s terms of service, the product of unreasoned explanation, or taken without a meaningful opportunity to be heard.
  • Directs each federal executive department and agency to review its advertising and marketing spending on online platforms. Each is to provide a report in 30 days on: amount spent, which platforms supported, any viewpoint-based restrictions of the platform, assessment whether the platform is appropriate, and statutory authority available to restrict advertising on platforms not deemed appropriate.
  • States that it is the policy of the U.S. that “large social media platforms, such as Twitter and Facebook, as the functional equivalent of a traditional public forum, should not infringe on protected speech”.
  • Re-establishes the White House “Tech Bias Reporting Tool” that allows Americans to report incidents of online censorship. These complaints are to be forwarded to the DoJ and the FTC.
  • Directs the FTC to “consider” taking action against entities covered by Section 230 who restrict speech in ways that do not align with those entities’ public representations about those practices.
  • Directs the FTC to develop a publicly-available report describing complaints of activity of Twitter and other “large internet platforms” that may violate the law in ways that implicate the policy that these are public fora and should not infringe on protected speech.
  • Establishes a working group with states’ attorneys general regarding enforcement of state statutes prohibiting online platforms from engaging in unfair and deceptive acts and practices. 
  • This working group is also to collect publicly available information for the creation and monitoring of user watch lists, based on their interactions with content and other users (likes, follows, time spent). This working group is also to monitor users based on their activity “off the platform”. (It is not clear whether that means “off the internet” or “on other online places”.)

Influencer agreements: what needs to be in them

If you are a social media influencer, or are a brand looking to engage an influencer, you may need to enter into an influencer agreement. Here are five key things that should be in the contract between the influencer and the brand: 

  • Obligations 
  • Payment 
  • Content ownership 
  • Publicity rights 
  • Endorsement guidelines compliance 

Obligations under the influencer agreement.

The main thing that a brand wants from an influencer is for the influencer to say certain things about the brand’s products, in a certain way, and at certain times. What kind of content? Photos? Video? Which platforms? What hashtags? When? How many posts? The agreement should spell all these things out.

Payment.

Influencers are compensated in a number of ways. In addition to getting free products, they may be paid a flat fee upfront or from time to time. And it’s also common to see a revenue share arrangement. That is, the influencer will get a certain percentage based on sales of the products she is endorsing. These sales may be tracked by a promo code. The contract should identify all these amounts and percentages, and the timing for payment.

So what about content ownership? 

The main work of an influencer is to generate content. This could be pictures posted to Instagram, tweets, or video posted to her story. All that content is covered by copyright. Unless the contract says otherwise, the influencer will own the copyright. If the brand wants to do more with that content outside of social media, that needs to be addressed in the influencer agreement.

And then there are rights of publicity. 

Individuals have the right to determine how their image and name are used for commercial purposes. If the brand is going to feature the influencer on the brand’s own platform, then there needs to be language that specifies the limits on that use. That’s key to an influencer who wants to control her personal brand and reputation. 

Finally, endorsement guidelines and the influencer agreement. 

The federal government wants to make sure the consuming public gets clear information about products. So the FTC has published endorsement guidelines that influencers have to follow. You have to know what these guidelines are to stay out of trouble. And the contract should address what happens if these guidelines aren’t followed.

See also: When is it okay to use social media to make fun of people?

About the author: Evan Brown is an attorney helping individuals and businesses with a wide variety of agreements involving social media, intellectual property and technology. Call him at (630) 362-7237 or send email to ebrown@internetcases.com. 

Police not required to publicly disclose how they monitor social media accounts in investigations

In the same week that news broke about Amazon assisting police departments with facial recognition technology, a Pennsylvania court held that police do not have to turn over details to the public about how they monitor social media accounts in investigations.

The ACLU sought a copy under Pennsylvania’s Right-to-Know Law of the policies and procedures of the Pennsylvania State Police (PSP) for personnel when using social media monitoring software. The PSP produced a redacted copy, and after the ACLU challenged the redaction, the state’s Office of Open Records ordered the full document be provided. The PSP sought review in state court, and that court reversed the Office of Open Records order. The court found that disclosure of the record would be reasonably likely to threaten public safety or a public protection activity.

The court found in particular that disclosure would: (i) allow individuals to know when the PSP can monitor their activities using “open sources” and allow them to conceal their activities; (ii) expose the specific investigative method used; (iii) provide criminals with tactics the PSP uses when conducting undercover investigations; (iv) reveal how the PSP conducts its investigations; and (v) provide insight into how the PSP conducts an investigation and what sources and methods it would use. Additionally, the court credited the PSP’s affidavit which explained that disclosure would jeopardize the PSP’s ability to hire suitable candidates – troopers in particular – because disclosure would reveal the specific information that may be reviewed as part of a background check to determine whether candidates are suitable for employment.

Pennsylvania State Police v. American Civil Liberties Union of Pennsylvania, 2018 WL 2272597 (Commonwealth Court of Pennsylvania, May 18, 2018)

About the Author: Evan Brown is a Chicago technology and intellectual property attorney. Call Evan at (630) 362-7237, send email to ebrown [at] internetcases.com, or follow him on Twitter @internetcases. Read Evan’s other blog, UDRP Tracker, for information about domain name disputes.

Ninth Circuit upholds decision in favor of Twitter in terrorism case

Tamara Fields and Heather Creach, representing the estates of their late husbands and joined by Creach’s two minor children, sued Twitter, Inc. Plaintiffs alleged that the platform knowingly provided material support to ISIS, enabling the terrorist organization to carry out the 2015 attack in Jordan that killed their loved ones. The lawsuit sought damages under the Anti-Terrorism Act (ATA), which allows U.S. nationals injured by terrorism to seek compensation.

Plaintiffs alleged that defendant knowingly and recklessly provided ISIS with access to its platform, including tools such as direct messaging. Plaintiffs argued that these services allowed ISIS to spread propaganda, recruit followers, raise funds, and coordinate operations, ultimately contributing to the attack. Defendant moved to dismiss the case, arguing that plaintiffs failed to show a direct connection between its actions and the attack. Defendant also invoked Section 230 of the Communications Decency Act, which shields platforms from liability for content created by users.

The district court agreed with defendant and dismissed the case, finding that plaintiffs had not established proximate causation under the ATA. Plaintiffs appealed, but the Ninth Circuit upheld the dismissal. The appellate court ruled that plaintiffs failed to demonstrate a direct link between defendant’s alleged support and the attack. While plaintiffs showed that ISIS used defendant’s platform for various purposes, the court found no evidence connecting those activities to the specific attack in Jordan. The court emphasized that the ATA requires a clear, direct relationship between defendant’s conduct and the harm suffered.

The court did not address defendant’s arguments under Section 230, as the lack of proximate causation was sufficient to resolve the case. Accordingly, this decision helped clarify the legal limits of liability for platforms under the ATA and highlighted the challenges of holding technology companies accountable for how their services are used by third parties.

Three Reasons Why This Case Matters:

  • Sets the Bar for Proximate Cause: The ruling established that a direct causal link is essential for liability under the Anti-Terrorism Act.
  • Limits Platform Liability: The decision underscores the difficulty of holding online platforms accountable for misuse of their services by bad actors.
  • Reinforces Section 230’s Role: Although not directly addressed, the case highlights the protections Section 230 offers to tech companies.

Fields v. Twitter, Inc., 881 F.3d 739 (9th Cir. 2018)

Pastor’s First Amendment rights affected by parole conditions barring social media use

Plaintiff – a Baptist minister on parole in California – sued several parole officials, arguing that conditions placed on his parole violated his First Amendment rights. Among the contested restrictions was a prohibition on plaintiff accessing social media. Plaintiff claimed this restriction infringed on both his right to free speech and his right to freely exercise his religion. Plaintiff asked the court for a preliminary injunction to stop the enforcement of this condition. The court ultimately sided with plaintiff, ruling that the social media ban was unconstitutional.

The Free Speech challenge

Plaintiff argued that the parole condition prevented him from sharing his religious message online. As a preacher, he relied on platforms such as Facebook and Twitter to post sermons, connect with congregants who could not attend services, and expand his ministry by engaging with other pastors. The social media ban, plaintiff claimed, silenced him in a space essential for modern communication.

The court agreed, citing the U.S. Supreme Court’s ruling in Packingham v. North Carolina, which struck down a law barring registered sex offenders from using social media. In Packingham, the Court emphasized that social media platforms are akin to a modern public square and are vital for exercising free speech rights. Similarly, the court in this case found that the blanket prohibition on social media access imposed by the parole conditions was overly broad and not narrowly tailored to address specific risks or concerns.

The court noted that plaintiff’s past offenses, which occurred decades earlier, did not involve social media or the internet, undermining the justification for such a sweeping restriction. While public safety was a legitimate concern, the court emphasized that parole conditions must be carefully tailored to avoid unnecessary burdens on constitutional rights.

The Free Exercise challenge

Plaintiff also argued that the social media ban interfered with his ability to practice his religion. He asserted that posting sermons online and engaging with his congregation through social media were integral parts of his ministry. By prohibiting social media use, the parole condition restricted his ability to preach and share his faith beyond the physical boundaries of his church.

The court found this argument compelling. Religious practice is not confined to in-person settings, and plaintiff demonstrated that social media was a vital tool for his ministry. The court noted that barring a preacher from using a key means of sharing religious teachings imposed a unique burden on religious activity. Drawing on principles from prior Free Exercise Clause cases, the court held that the parole condition was not narrowly tailored to serve a compelling government interest, as it broadly prohibited access to all social media regardless of its religious purpose.

The court’s decision

The court granted plaintiff’s request for a preliminary injunction, concluding that he was likely to succeed on his claims under both the Free Speech Clause and the Free Exercise Clause of the First Amendment. The ruling allowed plaintiff to use social media during the litigation, while acknowledging the government’s legitimate interest in monitoring parolees. The court encouraged less restrictive alternatives, such as targeted supervision or limiting access to specific sites that posed risks, rather than a blanket ban.

Three reasons why this case matters:

  • Intersection of Speech and Religion: The case highlights how digital tools are essential for both free speech and the practice of religion, especially for individuals sharing messages with broader communities.
  • Limits on Blanket Restrictions: The ruling reaffirms that government-imposed conditions, such as parole rules, must be narrowly tailored to avoid infringing constitutional rights.
  • Modern Application of First Amendment Rights: By referencing Packingham, the court acknowledged the evolving role of social media as a platform for public discourse and religious expression.

Manning v. Powers, 281 F. Supp. 3d 953 (C.D. Cal. Dec. 13, 2017)
