Social media pervades daily life. A recent study showed that Americans spend an average of more than 1,300 hours a year on social media, with some Gen Zers spending nine hours a day in front of a screen. The Department of Justice has acknowledged how powerful social media is as a tool for spreading extremist beliefs and violence. But should social media companies be held liable for content that their algorithms promote to users, specifically if that content contains terroristic rhetoric?
The U.S. Supreme Court case Gonzalez v. Google LLC, in which the justices heard oral arguments Feb. 21, addresses that question.
Nohemi Gonzalez, a California State University, Long Beach, student who was studying abroad, died in the November 2015 terrorist attacks in Paris. In the days following, the Islamic State claimed responsibility for the attacks in a YouTube video. This was not the first time the militant group communicated via YouTube; it posted a series of beheading videos in 2014. Gonzalez’s family and estate filed suit against YouTube owner Google in U.S. District Court in Northern California, alleging that through YouTube, Google provided material assistance to the Islamic State by knowingly permitting the organization to post hundreds of videos to radicalize and recruit potential supporters.
Under the Anti-Terrorism Act of 1990 (18 U.S.C. §2333), any U.S. national injured by an act of international terrorism may bring a civil action against the perpetrators of the act, including any person who aids and abets the act by providing substantial assistance. If successful, the plaintiffs would be entitled to triple the amount of damages caused by the terrorist act. The district court dismissed the case, and the 9th U.S. Circuit Court of Appeals affirmed the ruling on the grounds that the plaintiffs failed to show Google’s actions were a proximate cause of Gonzalez’s death and that the claim was barred by the Communications Decency Act.
Section 230(c)(1) of the CDA (47 U.S.C. §230) provides immunity for providers of interactive computer services against liability for the speech of third-party users. This means that if person A posts libel about person B on Twitter, person B cannot sue Twitter for libel; she can sue only person A.
Congress passed the Communications Decency Act in 1996 to protect the new and growing World Wide Web from being crushed by lawsuits centered on user speech. The differences between the online experience then and now are innumerable, but the 27-year-old law still controls the treatment of social media platforms in relation to their users’ speech.
The wrinkle in the case comes from Google’s use of algorithms to promote content to users. The plaintiffs asked the Supreme Court to consider whether providers are still protected from liability under the CDA when their algorithms target users and recommend other users’ content to them. The distinction between hosting the content and promoting it is important here, as promoting content can be considered an action of the platform and not the original poster. The plaintiffs reasoned that this makes the social media platform a publisher that can be held liable for the speech it publishes.
In its response, Google argued that if a social media platform can be held liable for content its algorithms promote, Section 230 would be rendered moot, as most websites use some manner of algorithm to organize or promote content. Google went on to argue that even if the YouTube algorithm were specifically pro-Islamic State, Section 230 would still provide immunity to the platform. During oral arguments, several of the justices voiced hesitation about this broad interpretation of the statute.
The court will release its decision later this year.
Nicole Ezeh is a legislative specialist in NCSL’s State-Federal Relations Program.