
One Pending Supreme Court Case Could Change the Internet as We Know It: Gonzalez v. Google and Tech Platforms’ Liability

November 9, 2022

The Supreme Court granted certiorari in Gonzalez v. Google, a high-stakes case appealed from the Ninth Circuit about the scope of the protection that Section 230 of the Communications Decency Act affords technology companies against liability for content posted on their platforms. The case could fundamentally change the modern landscape of the internet and online speech.

At issue is a provision of Section 230 that reads, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1). Under this law, websites like Facebook, Twitter, and YouTube, along with internet search engines, have since 1996 enjoyed broad immunity from liability for claims like defamation based on content their users post to their platforms, often defeating such claims on early motions to dismiss.

The Gonzalez family aims to hold YouTube’s parent company Google responsible for the death of their family member, Nohemi Gonzalez, who was killed in a terrorist attack in Paris in 2015. Her family’s argument centers on YouTube’s algorithm, which “recommends” additional videos for a given user to watch based on videos the user has already watched. According to Nohemi’s family, YouTube “recommended” the terrorist group ISIS’s radicalizing videos to some viewers, and these “recommendations” contributed to Nohemi’s death.

The Gonzalez family argues that Section 230(c)(1) protects only a platform’s exercise of traditional editorial functions over user-generated content, such as deciding whether to alter, withhold, publish, or withdraw the content. In their view, it does not extend to “recommending” user-generated content, because recommendations exceed traditional editorial functions: they involve communication by the platform itself and therefore go beyond “publishing” (i.e., providing the public with content supplied by others). To illustrate this argument, petitioners analogize to publishing a book: were a member of the Court to recommend a book, the Justice would not thereby become the book’s publisher.
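For readers less familiar with the technology at the center of the dispute, the sketch below illustrates, in deliberately simplified form, what a “targeted recommendation” based on watch history can look like. It is a hypothetical toy example, not YouTube’s actual system; the catalog, tags, and recommend function are invented solely for illustration. The point it captures is the one the parties contest: the platform itself selects which third-party videos to surface next for a particular user.

```python
# Hypothetical, greatly simplified sketch of a watch-history-based recommender.
# This is NOT YouTube's algorithm; it only illustrates the kind of "targeted
# recommendation" at issue in Gonzalez: the platform selects which third-party
# videos to surface next, based on what a user has already watched.

from collections import Counter

# Toy catalog of third-party videos, each tagged with topics by its uploader.
CATALOG = {
    "video_a": {"cooking", "travel"},
    "video_b": {"travel", "vlog"},
    "video_c": {"news", "politics"},
    "video_d": {"cooking", "baking"},
}

def recommend(watch_history, catalog=CATALOG, k=2):
    """Return up to k unwatched videos whose tags best overlap the user's history."""
    seen_tags = Counter()
    for vid in watch_history:
        seen_tags.update(catalog.get(vid, set()))
    scored = [
        (sum(seen_tags[t] for t in tags), vid)
        for vid, tags in catalog.items()
        if vid not in watch_history
    ]
    scored.sort(reverse=True)  # highest topical overlap with the user's history first
    return [vid for score, vid in scored[:k] if score > 0]

print(recommend(["video_a"]))  # e.g. ['video_d', 'video_b'] -- topically similar videos surface next
```

Whether that selection step is a “traditional editorial function” or a communication by the platform itself is precisely the question before the Court.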

Google, on the other hand, argues that an online service should be treated as a publisher within the meaning of Section 230 when it curates and displays third-party content of potential interest to each user, because pairing one piece of third-party content with another does not change the underlying content. Brief in Opposition, at 21. Google reasons that “actively bringing [a speaker’s] message to interested parties . . . falls within the heartland of what it means to be the ‘publisher’ of information.” Id., at 3.

Google also argues that, unlike in many cases the Supreme Court hears, there is no split of authority among the circuit courts of appeals here. Rather, both the Second and Ninth Circuits have held that Section 230 protects targeted recommendations by interactive computer services, albeit with divergent reasoning. See, respectively, Force v. Facebook, Inc., 934 F.3d 53, 68 (2d Cir. 2019); Dyroff v. Ultimate Software Group, Inc., 934 F.3d 1093, 1098 (9th Cir. 2019). In Force, the Second Circuit held that Facebook’s recommendations remain traditional editorial functions because distributing third-party information inherently connects users to content, whether recommended or not. 934 F.3d at 67. In Dyroff, the Ninth Circuit found that holding the defendant liable for making such recommendations would “inherently require the court to treat the defendant as the ‘publisher or speaker’ of content provided by another,” and would therefore run counter to the intent of Section 230. 934 F.3d at 1098. The Ninth Circuit’s ruling below in Gonzalez v. Google followed Dyroff in finding that targeted recommendations do indeed fall within the scope of Section 230’s protection. 2 F.4th 871, 895 (9th Cir. 2021). Nohemi Gonzalez’s family argues that a conflict nonetheless exists: this case, Dyroff, and Force conflict with decisions of other circuit courts of appeals holding that Section 230’s protections apply only to traditional editorial functions.

Judges in both the Second and Ninth Circuits have raised concerns that the scope of immunity granted by Section 230 needs clarification. Three judges in particular – Katzmann, Berzon, and Gould – have argued that the majority interpretations of Section 230 run counter to the law’s text and intent. Force, 934 F.3d at 77-84 (Katzmann, J., concurring in part and dissenting in part); Gonzalez, 2 F.4th at 913-17 (Berzon, J., concurring); id. at 920 (Gould, J., concurring in part and dissenting in part). This division among circuit court judges will now be resolved by the nation’s highest court.

Now, the Supreme Court, in its first-ever case interpreting Section 230, will determine whether the provision immunizes “interactive computer services” (i.e., websites like YouTube) when they make targeted recommendations of content provided by another “information content provider” (i.e., their users). At stake in Gonzalez is the very manner in which online content is delivered to users. Currently, every big-name internet platform – Facebook, Google, Twitter – uses algorithms that recommend content based on users’ online behavior. If the Court were to find that interactive computer services can be held liable for their recommendations, it could fundamentally change how those platforms operate. As Google points out, interactive computer services must make constant choices about what information to display, and how, in order to help users navigate the sheer volume of data online. Users who have become accustomed to platforms “curating” content for them may dislike having to hunt through mountains of random content to find what they want to see if these platforms stop making recommendations altogether. If platforms continue to make recommendations, they may face significantly increased costs in the form of more stringent recommendation algorithms, AI filtering of recommended content, and larger human reviewer headcounts, all aimed at preventing the recommendation of content that could expose them to liability. They will also have to increase their legal spending to fight cases on the merits that they previously could have defeated on a motion to dismiss.

But Nohemi’s family responds that recommendations take control over what information users receive away from the users themselves and place it in the hands of internet companies. Petition, at 32. The stated policy of Section 230 is to “encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the Internet and other interactive computer services.” Id. As Nohemi’s family reasons, these algorithms are influential at best – and downright dangerous at worst. Petition, at 19.

Given the huge potential ramifications for the tech industry, this case is one to watch this term.
