Supreme Court Treads Carefully in Gonzalez
<p><a href="https://www.cato.org/people/will-duffield" hreflang="und">Will Duffield</a></p>
<p>Last month the Supreme Court heard oral <a href="https://www.supremecourt.gov/oral_arguments/argument_transcripts/2022/21-1333_p8k0.pdf">arguments</a> in <em>Gonzalez v. Google</em>, a case about whether Section 230 protects platforms from liability for algorithmically recommended speech. This is the first time the Court has heard a case involving Section 230, and a bad ruling would <a href="https://www.nationalreview.com/2023/02/why-gonzalez-v-google-matters/">remake</a> the internet for the worse. Although many had feared that the justices would use the opportunity to get at Big Tech, the Court was skeptical of petitioners’ counsel Eric Schnapper’s textual arguments and mindful of algorithms’ almost universal use in sorting information online.</p>
<p>Going into <em>Gonzalez</em>, there wasn’t a circuit split about algorithmic liability. The Second Circuit’s 2019 <em>Force v. Facebook</em> decision <a href="https://www.techdirt.com/2020/10/28/another-section-230-reform-bill-dangerous-algorithms-bill-threatens-speech/">prompted</a> <a href="https://www.cato.org/blog/force-v-facebook-revisited">proposals</a> to amend Section 230 to exclude algorithmic recommendations from its protections. While the “Protecting Americans from Dangerous Algorithms Act” stalled in two consecutive Congresses, its introduction seemed to signal that the debate over algorithmic liability had moved beyond interpretations of existing law. Thus, it seemed strange that the Court took up <em>Gonzalez v. Google</em> at all.</p>
<p>As the justices discovered that Schnapper wasn’t bringing them anything new, their questions began to sound like the conclusions reached by appeals courts in earlier cases. In <em>Force</em>, the Second Circuit held that Facebook couldn’t be treated as the publisher of pro-Hamas user profiles merely for suggesting the profiles to others, because Facebook’s friend suggestion algorithm was neutral between lawful and unlawful interests. Facebook didn’t develop the user profiles’ content or recommend them in a way that contributed to their unlawfulness.</p>
<blockquote><p>The algorithms take the information provided by Facebook users and “match” it to other users—again, materially unaltered—based on objective factors applicable to any content, whether it concerns soccer, Picasso, or plumbers. Merely arranging and displaying others’ content to users of Facebook through such algorithms—even if the content is not actively sought by those users—is not enough to hold Facebook responsible as the “develop[er]” or “creat[or]” of that content. (<em>Force v. Facebook</em> at 47)</p></blockquote>
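<p>To make the kind of neutrality the Second Circuit described concrete, here is a minimal sketch, in Python, of a matcher that ranks profiles purely by overlapping user-supplied interest tags. It is my own illustration under that assumption, not Facebook’s actual system: the point is only that the function never inspects what a tag means, so soccer, Picasso, plumbers, and anything else are treated identically.</p>
<pre><code># Illustrative only: a "neutral tool" in the Force v. Facebook sense.
# The matcher scores candidates by overlap in user-supplied interest tags
# and is indifferent to what those tags actually mean.

def suggest_profiles(viewer_interests, candidates, top_n=3):
    """Rank candidate profiles by how many interests they share with the viewer."""
    scored = sorted(
        candidates.items(),
        key=lambda item: len(viewer_interests.intersection(item[1])),
        reverse=True,
    )
    return [name for name, tags in scored[:top_n]]

users = {
    "alice": {"soccer", "cooking"},
    "bob": {"picasso", "plumbing"},
    "carol": {"cooking", "pilaf"},
}
print(suggest_profiles({"cooking", "pilaf"}, users))  # ['carol', 'alice', 'bob']
</code></pre>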
<p>In response to Schnapper’s claim that platforms are the creators of video thumbnails, Justice Thomas offered an account of platform agency in recommendations, or lack thereof, very similar to the reasoning of the majority in <em>Force</em>.</p>
<blockquote><p>Justice Thomas: But the -- it's basing the thumbnails -- from what I understand, it's based upon what the algorithm suggests the user is interested in. So, if you're interested in cooking, you don't want thumbnails on light jazz. It's neutral in that sense. You're interested in cooking. Say you get interested in rice -- in pilaf from Uzbekistan. You don't want pilaf from some other place, say, Louisiana. I don't see how that is any different from what is happening in this case. And what I'm trying to get you to focus on is if -- are we talking about the neutral application of an algorithm that works generically for pilaf and it also works in a similar way for ISIS videos? Or is there something different?</p>
<p>Mr. Schnapper: No, I think that's correct, but . . .</p></blockquote>
<p>Schnapper’s further attempts to persuade the Court that some workable line could be drawn between publishing, display, and recommendation, one that would render platforms the co-creators of recommended speech, did not gain traction. Indeed, the justices expressed <a href="https://fedsoc.org/commentary/fedsoc-blog/four-things-to-watch-in-gonzalez-v-google">confusion</a> about what line Schnapper was attempting to draw no fewer than eight times. As Cato’s <a href="https://www.cato.org/legal-briefs/gonzalez-v-google">amicus brief</a> notes, a clean line cannot be drawn because “if displaying some content more prominently than others is ‘recommending,’ then recommending is inherent to the act of publishing.”</p>
<p>Justice Kavanaugh took this lack of distinction to its obvious conclusion, noting that Schnapper’s reading of the statute would render almost any method of organizing user speech an unprotected recommendation, exposing intermediaries to a broad array of lawsuits.</p>
<blockquote><p>Justice Kavanaugh: “. . . your position, I think, would mean that the very thing that makes the website an interactive computer service also means that it loses the protection of 230. And just as a textual and structural matter, we don't usually read a statute to, in essence, defeat itself.”</p></blockquote>
<p>This recognition should echo beyond the <em>Gonzalez</em> petitioner’s recommendation claims. Critics of Section 230 have proposed a host of novel readings and clever pleadings to pare back the law. However, treating Section 230 as providing mere distributor liability, casting editorial decisions as unprotected design choices, or expecting content neutrality from intermediaries all read the statute to, in effect, defeat itself.</p>
<p>While the conservative justices chose textual analysis over anti-Big Tech hobbyhorses, the Court’s liberals, with the exception of Justice Jackson, seemed wary of tackling algorithmic harms from the bench. Justice Kagan glibly observed that “we’re a court. We really don’t know about these things. You know, these are not like the nine greatest experts on the Internet.” Cognizant of the limits of their knowledge, and unable to discern the line Schnapper wanted to draw, a significant majority of the justices seem ready to rule in Google’s favor.</p>
<p>However, this doesn’t mean that everything went right. Two exchanges provoked particular concern. In the first, Google counsel Lisa Blatt seemed to endorse the <em>Henderson</em> test, a recent Fourth Circuit interpretation of Section 230 that reads the statute as applying only to claims concerning platforms’ hosting of unlawful speech, or “some improper content within their publication.” While <em>Gonzalez</em> concerns the hosting and recommendation of improper speech, some lawsuits against platforms instead fault their failure to retain content or their presentation of merely inaccurate information. While <em>Henderson</em> might protect Google here, its adoption would <a href="https://blog.ericgoldman.org/archives/2022/11/fourth-circuit-takes-a-wrecking-ball-to-zeran-and-section-230-henderson-v-public-data.htm">narrow</a> Section 230 in other contexts.</p>
<p>The other perturbing exchange concerned platform “neutrality” and “neutral tools.” At several points, Justice Gorsuch seemed to misapprehend the relevant kind of neutrality. While Gorsuch took the term to mean content neutrality or neutrality between viewpoints, earlier decisions use “neutral tools” to describe features that merely have both lawful and unlawful uses.</p>
<blockquote><p>Justice Gorsuch: “When it comes to what the Ninth Circuit did, it applied this neutral tools test, and I guess my problem with that is that language isn't anywhere in the statute, number one.”</p>
<p>“And another problem also is that it begs the question what a neutral rule is. Is an algorithm always neutral? Don't many of them seek to profit-maximize or promote their own products? Some might even prefer one point of view over another.”</p></blockquote>
<p>Justice Gorsuch is right that the neutral tools test is not part of Section 230’s statutory language. However, it helps courts determine whether litigated content is “information provided by another information content provider” or whether the platform has contributed enough to the content’s unlawfulness to become a co-author. Although the test fits tools actively employed by users better than features websites use to display content, it does not cut against YouTube’s algorithmic recommendations. The Wisconsin Supreme Court provides a succinct summary in <em>Daniel v. Armslist</em>.</p>
<blockquote><p>The concept of neutral tools provides a helpful analytical framework for figuring out whether a website's design features materially contribute to the unlawfulness of third-party content. A neutral tool in the CDA context is a feature provided by an interactive computer service provider that can be utilized for proper or improper purposes. Goddard, 640 F. Supp. 2d at 1197 (citing Roommates.com, 521 F.3d at 1172).</p></blockquote>
<p>The 2008 Ninth Circuit case <em>Fair Housing Council of San Fernando Valley v. Roommates.com, LLC</em> provides examples of both protected neutral tools and unprotected features without lawful purposes. Roommates.com required new users to create a profile, disclose their sex, sexual orientation, and familial status, and select roommate preferences along those same lines. Because the platform required users to submit unlawfully discriminatory preferences, it contributed mightily to the unlawfulness of the resultant discriminatory profiles. In contrast, Roommates.com’s “additional comments” box could be filled with any sort of roommate preference, lawful or unlawful. Thus, Section 230 shielded Roommates.com from claims about the content of profiles’ “additional comments,” but not from claims about the platform-mandated discriminatory preferences.</p>
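<p>A schematic sketch of that contrast may help; the field names and options below are my own hypothetical stand-ins, not the site’s actual form. The dropdown forces a choice from platform-authored options, so the platform helps develop any resulting discriminatory preference; the comments box is a blank field usable for lawful and unlawful speech alike.</p>
<pre><code># Hypothetical reconstruction of the two features the Ninth Circuit contrasted
# in Roommates.com. Field names and options are illustrative, not the real form.

# Platform-authored, mandatory choices: by forcing a selection from this list,
# the platform materially contributes to any discriminatory preference, so
# Section 230 offers no shield for the resulting content.
REQUIRED_PREFERENCE_OPTIONS = ["no preference", "men only", "women only", "no children"]

def build_profile(roommate_preference, additional_comments=""):
    if roommate_preference not in REQUIRED_PREFERENCE_OPTIONS:
        raise ValueError("the platform requires a choice from its own list")
    # The comments box is a neutral tool: the platform passes along whatever
    # the user writes, unaltered, contributing nothing to its lawfulness.
    return {
        "preference": roommate_preference,
        "comments": additional_comments,
    }

print(build_profile("no children", "Looking for a quiet, tidy housemate."))
</code></pre>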
<p>The neutral tools concept also helps to <a href="https://www.cato.org/policy-analysis/circumventing-section-230-product-liability-lawsuits-threaten-internet-speech#neutral-tools">illustrate</a> why Section 230 was created to protect online speech intermediaries. Many useful objects can be put to both lawful and unlawful purposes. The creators of traditional tools can’t police misuses of their creations. Speech intermediaries can, but only at significant cost to legitimate speakers. Section 230 therefore protects neutral tools even when they are misused, allowing their creators to offer digital speech tools as freely as they can offer pens, paper, or printers.</p>
<p>Although oral arguments went about as well as they could have, the internet still waits with bated breath for the Court’s opinion. The best outcome would be for the Court to dismiss <em>Gonzalez</em> as improvidently granted and decide the matter in <em>Twitter v. Taamneh</em>, a related case about the scope of the Anti-Terrorism Act. However, at arguments in <a href="https://www.supremecourt.gov/oral_arguments/argument_transcripts/2022/21-1496_4315.pdf"><em>Taamneh</em></a>, the Court did not seem to clearly favor any one conception of “substantial assistance.” A narrow <em>Gonzalez</em> opinion that reaffirms Section 230’s protection of algorithms is therefore more likely. As long as such an opinion is carefully written, it will avoid harming the online ecosystem that Section 230 has fostered.</p>