Should Tech Companies Be Held Accountable for Letting Terrorists on Their Sites?

Two cases before the US Supreme Court could upend immunity protections for social media companies. Photo by AP/POLITICO/Francis Chung
BU experts assess two cases before the US Supreme Court challenging immunity
Should technology companies be held accountable for harm possibly resulting from terrorists’ content on their sites? The Supreme Court wrestled with that question in two cases last week that could rewrite “the Twenty-Six Words That Created the Internet,” in the Washington Post’s words.
The 26 words are part of Section 230, the 1996 law immunizing tech companies for third-party posts. In the first of last week’s cases, Gonzalez v. Google, argued February 21, the family of the late Nohemi Gonzalez argues that Google’s YouTube “aided and abetted” the 23-year-old’s 2015 murder by the Islamic State (ISIS) because its algorithms recommended ISIS videos to users, facilitating the group’s recruitment. The following day, the justices heard Twitter v. Taamneh, which “indirectly involves Section 230,” says Woodrow Hartzog, a professor at the School of Law. In that case, survivors of a victim of a 2017 ISIS attack in Istanbul accuse Twitter, Google, and Facebook of violating the federal Anti-Terrorism Act (ATA) by permitting ISIS on their platforms.
During Gonzalez’s oral arguments, justices spanning the court’s conservative-liberal split voiced confusion about the family’s argument and fretted about undercutting congressionally legislated immunity. (Lower courts have interpreted the law as conferring that immunity.) Yet when the justices questioned lawyers in the Taamneh case, their tone shifted, sounding “less sympathetic to the tech companies,” National Public Radio reported.
All of which makes the outcomes, likely to be handed down in June, a jump ball. We asked Hartzog—who specializes in law regarding technology and the First Amendment—and T. Barton Carter, a professor of media science at the College of Communication and a communications law attorney, for some play-by-play.
The interviews have been edited for clarity and brevity.
Q&A
With T. Barton Carter and Woodrow Hartzog
BU Today: In your legal opinion, who should the court rule for, and do you expect this court to so rule given its ideological makeup?
Carter: For Google to be liable [in Gonzalez], the court would have to find that the algorithms that determine which YouTube videos are recommended to users make Google an information content provider, which is not protected by Section 230. The plaintiffs have argued that directing ISIS-related videos to certain users makes YouTube an information content provider. The basic social media business model relies on algorithms delivering the content users are most interested in. While [losing Section 230 protection] wouldn’t break the internet, as some tech companies have predicted, it would have a serious effect on social media. Whether such a change is warranted should really be decided by Congress, not the court.

There is also the possibility that holding for Twitter in the other case could render Gonzalez moot. The Twitter case involves the question of how the Anti-Terrorism Act applies to social media; because Gonzalez is also based on the Anti-Terrorism Act, a finding for Twitter could mean Google is not liable regardless of whether Section 230 protects it.
The basic question in the Twitter case is whether allowing known terrorists to use Twitter, Facebook, etc., is sufficient to qualify as “aiding and abetting” terrorism as the term is used in the Anti-Terrorism Act. The problem here is, how far could you extend aiding and abetting? There is no allegation that the social media companies had any advance knowledge of the specific terrorist attacks. The allegation is that terrorists used social media to radicalize people, some of whom then became terrorists themselves. As Justice Thomas asked, would the phone company be liable for permitting known criminals to use phones that these criminals would then use for criminal activities? The justices generally seemed skeptical that they could draw a line allowing liability here without drastically expanding the reach of the law.
I think the court should rule in favor of the tech companies, and based on oral argument, I think they will.
Hartzog: It’s hard to say, without more development of the facts, whether the plaintiffs in the Google or Twitter cases are going to have meritorious claims. I think the best possible outcome, regardless of whether the plaintiffs in the Google case triumph, is an opinion that recognizes that Section 230 is not an absolute blank check. There’s a world in which the plaintiffs lose on the facts of this particular case, but the court’s opinion recognizes there are circumstances under which these companies can design their services in a harmful, dangerous, and wrongfully discriminatory way that is actionable under law.
BU Today: Given what seems to be tech companies’ universally recognized influence on malleable minds, is limiting Section 230 protections the only realistic route to ensuring online accountability?
Carter: The biggest problem with that approach is that eliminating Section 230 would not be very effective in terms of influence on malleable minds. Social media companies are private companies with First Amendment rights. Much of the speech that concerns people, including hate speech, fake news, misinformation, etc., is protected under the First Amendment. Note that the Supreme Court has been asked to review two circuit court decisions dealing with the question of whether limiting the right of social media companies to censor or deplatform certain people violates the First Amendment.
Hartzog: The best possible thing is that Congress would step in and clarify under what circumstances the design of algorithms can give rise to liability. The world has changed a lot since Section 230 was passed, and algorithmic amplification and sorting have a more central role in our everyday experience of online services. I think the best thing the court could do is adopt a limited interpretation of Section 230, instead of tying itself into a pretzel trying to distinguish algorithmic amplification [that is] covered by Section 230 from that which is not. I think that a narrow reading of Section 230, even one where Google ended up winning on the facts of this case, would recognize that technology companies don’t have a blank check to design their services any way they want without legal accountability.
The important thing to realize is that even if Section 230 doesn’t apply [to protect companies in these cases], plaintiffs still have to prove causation; the First Amendment still looms large. This is a debate in the Twitter case, [which] is much more about the success of the claim under the ATA. I’m not an expert on the Anti-Terrorism Act and wouldn’t feel comfortable opining on the merits of the ATA in this case. But there are all sorts of claims [against Twitter] that one might think of, including violations of civil rights laws, unfair and deceptive trade practices, and other regulatory regimes implicated by culpability in system design.
BU Today: Would there be downsides to society if Google and Twitter lose?
Carter: Yes. It could open internet companies to a host of lawsuits based on third-party posts and would force social media companies to revamp their recommendation algorithms. It is not clear they could do so without diminishing the usefulness of social media.
Hartzog: Possibly. To say that the internet wouldn’t have existed without Section 230 is really to say that the internet we currently know wouldn’t exist. But if you look at the internet we have now, it’s not all roses. It’s possible that we can get a better version of the internet with a more limited scope of protections for tech companies. Being able to have a conversation about which designs are dangerous and wrongfully discriminatory is important, and I worry that that conversation gets shut off if Section 230 is read too broadly.
That being said, of course one of the worries about diminishing Section 230 is that it will result in over-moderation of content, such that meritorious speech, or at least [speech] necessary to have an open, expressive environment, will be wrongfully curtailed. That’s certainly a valid worry, depending upon how the Supreme Court rules.
It doesn’t necessarily follow that the sky is going to fall just because you get rid of 230 protections.
BU Today: Some commentators fear a decision from a court that they say historically is tech-ignorant. Fair concern?
Carter: Yes. The justices are certainly not sufficiently knowledgeable about the internet to understand the consequences that could result from their decisions, and more than one voiced concern about that. These issues really should be decided by Congress. Unfortunately, although many in Congress are unhappy with Section 230 and the power of social media and want to make changes, they do not share the same concerns.
Conservatives believe that conservative viewpoints are being limited by social media and want to limit the power of social media to silence individual users. As I mentioned before, there are two cases involving state laws that limit the power of social media to censor or remove certain speakers. Liberals are concerned about hate speech, fake news, and misinformation and want to encourage or even force social media to censor those kinds of speech.
Hartzog: I think the concern that lawmakers and judges aren’t computer whizzes is overblown. Lawmakers and judges, including the Supreme Court, regularly opine on extremely complicated matters involving science and technology. Listening to the oral arguments, it was clear to me that the justices were trying to get a sophisticated and nuanced understanding of how these technologies work. It’s not as though you need a PhD in computer science to understand the basics of how the internet works; we’ve been living with it for quite some time. The idea that justices are incompetent to adjudicate matters related to technology is nonsense. Courts have been doing it for a long time, with laws regulating nuclear reactors, airplanes, and other sophisticated technologies.