
If you want to understand anything about global Internet regulation, you’d be lucky to get Daphne Keller’s perspective on it. We’re thrilled to have the director of the Program on Platform Regulation at Stanford’s Cyber Policy Center on for a two-parter about regulating social media platforms. First off, a speed run through the Supreme Court cases that threatened to reshape the Internet as we know it. Will they? Well…
In addition to her role at the Stanford Cyber Policy Center, Daphne Keller writes briefs with the ACLU and is a fantastic blogger. She is also a regular guest on evelyn douek’s Moderated Content podcast.
Transcript
Ethan Zuckerman:
Hey everybody, welcome back to Reimagining the Internet. I’m Ethan Zuckerman, your host. I am here with one of my very favorite people to learn about the Internet and law from.
She is the director of the Program on Platform Regulation at Stanford University’s Cyber Policy Center. She is the former associate general counsel for Google, a role she held until 2015. She’s done filings with the ACLU on Gonzalez v. Google. She’s written comments to Meta on the Facebook Oversight Board.
In general, I find her to be one of the most insightful lawyers in the world on questions of platform power. She’s Daphne Keller. We are thrilled to have her here.
Daphne Keller:
Thank you, Ethan. It’s always wonderful to talk to you.
Ethan Zuckerman:
So Daphne, this is a really sort of auspicious time to get some time with you. As we just mentioned, you’ve been very involved with Gonzalez v. Google. This is a case the Supreme Court just heard. You’ve been one of the most useful commentators in general, not only on how the Supreme Court is thinking about regulating Internet platforms, but you’ve also written about things like the Digital Services Act in the European Union.
Let me start with just a massive broad question. Who’s regulating the Internet these days? Is it Elon Musk? Is it the European Union? Is it the US Supreme Court? Is it Texas and Florida, as I’m sure we’re going to talk about at some point? Who’s in charge here and who should be in charge here?
Daphne Keller:
Well, everybody’s trying to regulate the Internet and it’s a bit of a competition to see who gets there first. I think the biggest, most obvious answer is the European Union, because they have enacted now three big pieces of legislation: the GDPR for privacy, the DMA or Digital Markets Act for competition, and the Digital Services Act for content issues, the issues we know here as DMCA issues or CDA 230 issues. And that is rolling forward, coming into effect, and compliance is due soon.
But plenty of other people are working to regulate the Internet. There are problematic laws in the process of being enacted in Canada and Australia. There are really troubling laws in India with ongoing litigation about them. There’s a new draft law in Nigeria. And in the United States, we have a real sort of standoff to see who the regulators will be here.
For a long time, I think most of us have assumed this, you know, this has to be a federal question. We have to have national legislation about the responsibilities of intermediaries like YouTube or Twitter or Facebook for speech on the Internet. Congress has been in a stalemate for a number of years with Democrats and Republicans wanting pretty much opposite things. And so states have moved into the breach, passing an array of pretty crazy laws. And nobody knows if they actually have the authority to do that. And now, you know, two cases have already gone to the Supreme Court, and two more cases are likely to go where the court will tell us a lot more about whether states can do things like that.
Ethan Zuckerman:
So let’s start with those two that went to the Supreme Court, and let’s start sort of on the US territory. One of the things that you mentioned is that in many ways what would be great is if Congress could just do something about Internet platforms. The last time Congress did something about Internet platforms in a real sense was 1996 with the Communications Decency Act, most of which disappeared in a poof of unconstitutionality, leaving us with the remarkably durable Section 230. If there’s one thing that Joe Biden and Donald Trump were able to agree on, it was that Section 230 was a problem and needed to be addressed.
And now we’ve just had a pair of cases at the Supreme Court that have sort of threatened to address Section 230. I have been listening to you as closely as possible on evelyn douek’s excellent Moderated Content podcast. It sounds like what was meant to be the Super Bowl of Internet regulation turned out to be sort of a disappointing JV game.
What was actually at stake in those cases? And why did it end up not being a confrontation over 230?
Daphne Keller:
Well, I don’t think we can draw any conclusions yet about what the outcomes are going to be and whether or how the court will speak to 230. But the cases that went to the Supreme Court are a pair of cases that are very similar for legal purposes; both start with horrible tragedies.
They’re both brought by plaintiffs who had family members die in ISIS attacks. And both are claiming that platforms, Twitter in one case and YouTube in the other, contributed to that in a way that should make them liable under the Anti-Terrorism Act, a federal statute that creates a civil cause of action for people harmed by acts of terrorism. The two cases, for sort of procedural reasons, wound up teeing up separate legal questions, both of which the court agreed to review.
The Gonzalez case, Gonzalez v. Google, which has had more attention, is about whether Google has immunity under CDA 230 for these claims. And Plaintiff sort of reformulated what the claim was a lot over the course of the case to turn it into something that sort of had a hook for judicial attention. And that hook is asking whether CDA 230 immunity goes away if platforms have done ranking or recommendation or targeting of particular content in a way that, they say, introduces the harm and stands outside of 230. So that’s the Gonzalez case.
Ethan Zuckerman:
So when 230 comes about in the late 1990s, state-of-the-art is sites like the one that I’m running at that time, which is Tripod.com, which has 18 million home pages, any number of which have problematic material on them. It’s probably impossible for our little business of 50 people in Western Massachusetts to know what’s on all 18 million of those pages. We’re very worried about a libel suit, something along those lines.
230 comes in and essentially says, it’s okay for you to take on some moderation, some editorial control over those pages. It’s okay for you to take down pornography or sort of other violations of your terms of service. You will not be found to be a publisher. The publisher is still going to be the individual who put those pages in place.
YouTube’s got an added level of complexity here, which is that it’s recommending videos. It’s always saying here are the 10 videos that you should watch after watching this video. The petitioner’s argument is that this somehow changes YouTube’s role. Maybe they look more like a publisher at that point. Is that roughly correct?
Daphne Keller:
Yes, the word publisher is very loaded in 230 statutory interpretation, but the basic claim that they should assume legal responsibility because of their recommendations is the heart of the case as it’s been teed up to the Supreme Court. Although the idea that nothing like this existed in 1996 was really refuted by a brief filed by the lawmakers who drafted CDA 230, Senator Ron Wyden, who is still in Congress, and Chris Cox, who is no longer in Congress.
And they said, no, of course there were sites that did some form of recommendation or ranking, or did things other than a binary leave-up/take-down decision for user content. And we knew about them. And obviously 230, which was intended to encourage content moderation, covers sites that engage in that kind of content moderation also. And if you look at the statute, it lists the kinds of things that immunized sites can do, and it includes words like “organize.” You know, what is ranking but organizing the content? So, you know, I think it’s really, it’s a stretch.
I understand the kind of the moral claim or the idea that platforms through their ranking introduced a level of harm or a level of risk that wasn’t there from the content itself. I understand the sort of the draw of that line of reasoning, although I think it runs into a whole lot of problems in practice. But the idea that ranking or the use of algorithms somehow didn’t exist in 1996 is wrong.
Ethan Zuckerman:
And once you get into the idea that using an algorithm, using ranking, somehow increases responsibility, we’re not just going after YouTube anymore, we’re going after search engines, right? I mean, we’re really going after most of the tools on the web.
Google’s counsel paints this very bleak picture and basically says, you know, if we take away 230, if we impose responsibility in this fashion, you’re either going to have a web where there’s no moderation at all, where it’s the web of 4chan, or you’re going to have the Walt Disney web. You’re going to have this sort of sterile, carefully moderated, let’s-not-offend-anybody web. Either is very, very different from what we like about the web at the moment.
Daphne Keller:
Yeah, and this is similar to the points that we made in the brief that I filed with the ACLU, that if the features on sites like YouTube that use ranking suddenly lose immunity, then that becomes the part of the site where the incentives to greatly over-remove user content have their impact.
And so if YouTube doesn’t want to risk defamation liability, then they’re going to take down any sort of #MeToo-type content making allegations. If they are worried, you know, if local police say, oh, this footage of police brutality was unlawfully released and you need to take it down, hopefully they would leave it up, but Section 230 makes that an easy decision. Whereas without 230, the incentives to just take down anything that might possibly create legal risk would be tremendous.
And the idea that this, what the plaintiff is asking for, is something really narrow, you know, only affects features that use algorithmic ranking. Well, that’s the front page of the Internet for a lot of people. That’s a Twitter feed or a Facebook feed, and YouTube recommendations make up 70% of video views. So it’s actually tremendously—
Ethan Zuckerman:
99% of TikTok and arguably everything that comes through a search engine. Yeah, so it’s an enormous—
Daphne Keller:
I mean, Plaintiff was trying very hard to not affect search engines. He was saying, no, no, there’s a way you can rule where the kind of algorithmic ranking that YouTube is doing makes them lose immunity, but search engines are still okay.
And he came up with some really wild theories. Like, well, YouTube creates a new URL for hosting the video. And that URL is YouTube’s own speech and creates liability.
Ethan Zuckerman:
There was all sorts of nonsense about thumbnails. And some of these discussions seemed crazy. I have to say, and please let me know if I’m wrong on this, I actually came out of that discussion somewhat optimistic that the Supreme Court, which has given me almost no reasons for optimism, actually understood that they could do an awful lot of damage if they made a bad decision here.
Daphne Keller:
Yes, I agree. I was very encouraged by the hearing. We could still get all kinds of outcomes, but it was clear that every single one of the justices understood the magnitude of the question, understood that defining some kind of dividing line between a kind of algorithm that makes you liable and a kind that doesn’t was incredibly hard and nobody had come up with any kind of plausible proposal for how to do that.
They understood, or some of them at least clearly understood, the economic consequences of what Plaintiff was asking for, not just for platforms but for all kinds of other parts of the economy, and the speech consequences.
So they were taking it seriously and being thoughtful and careful and pretty nonpartisan. If you are somebody who is discouraged by a lot of other things that this court has done, this was an oral argument that made them seem like a thoughtful, responsible court.
Ethan Zuckerman:
So in the hopes that they might remain a thoughtful, responsible court, let’s talk a little bit about NetChoice.
This is going to head to the Supreme Court. It is a combination of cases based on state laws passed in both Florida and Texas. And it’s a very different set of concepts, but it really is the same set of questions about who has the ability to tell platforms what they can do with speech.
Can you give us sort of a quick outline of what’s at stake in NetChoice?
Daphne Keller:
Yes. So both Texas and Florida passed so-called must-carry laws, saying that large platforms cannot take down certain kinds of speech or cannot adopt certain kinds of policies. Both of them were explicitly the product of Republican lawmakers’ concern about anti-conservative bias on the part of platforms.
And so what the Florida law says, well, it’s very long and it says a lot of things, but most importantly, the Florida law says platforms can’t take down anything if it is posted by a political candidate, or by someone talking about a political candidate, or by journalistic outlets, very broadly defined.
And so if you have a political candidate saying pro-ISIS things, or saying, you know, maybe even saying defamatory, you know, clearly unlawful things, or saying lawful but awful, you know, speech encouraging racism or encouraging anorexia, encouraging suicide, you know, there’s just all this awful stuff that’s lawful that platforms take down now and they can’t anymore in Florida, or they couldn’t if this law actually came into effect.
The Texas law comes at it in a slightly different way, by saying that the platforms can’t discriminate on the basis of viewpoint when they do their content moderation.
So if there is anti-racist content that they leave up, then they need to also leave up the pro-racist content. If there’s anti-gun violence content that they leave up, they also have to leave up the pro-gun violence content. That one runs into similar crazy consequences in terms of what platforms are supposed to leave up. So the platforms sued over these laws, saying that the laws violated the platforms’ own First Amendment rights to set editorial policy.
And in the first instance courts, the district courts, the platforms won both of those cases. And then at the courts of appeal, they got very different outcomes for the Florida law versus the Texas law. The Florida law, or at least its must-carry provisions, was struck down by the 11th Circuit, which said, yes, of course, this violates the platforms’ First Amendment rights, which I think is a pretty straightforward answer, really, under Supreme Court precedent.
The Fifth Circuit ruling, which is wild, said, no, the censors here are the platforms, and it upheld the Texas law. And so a lot of us thought that because there is a clear circuit split on a big important question in those two rulings, the Supreme Court would take those cases and hear them this term. And instead, what they wound up doing is asking the Justice Department to file some additional comments before they decide whether to take the cases, which presumably would be heard in the fall.
Ethan Zuckerman:
So let’s talk about what platforms actually do. A platform like YouTube has cajillions of videos uploaded every day. And that platform is making at least two decisions regarding those videos.
One is whether they can be posted in the first place. Are they consistent with the terms of service of the platform? And YouTube might decide, I believe it has decided, that it doesn’t want to host nudity. It doesn’t want to host pornography. And it’s going to take down a lot of that content based on the fact that those are its rules. And then there’s the second of these decisions, which is that YouTube does recommendation.
And this was really what we were talking about in Gonzalez v. Google. Both of these functions seem to be imperiled in the NetChoice cases. Whether it’s Florida essentially saying, you’ve got to have balance in this. Whether it’s Texas saying, you’ve got to have viewpoint neutrality in all of this. Platforms take on a lot of, to use the term that you put out, lawful but awful speech.
Do these lawmakers realize what they’re doing? That they’re really threatening kind of the core of what makes these platforms usable rather than unusable in a way that a platform like 4chan often ends up being?
Daphne Keller:
I don’t think they realized that at all. I mean, I think that the Texas and Florida lawmakers may have never anticipated that platforms would ever try to comply with these laws. I think they were having a great time and engaging in really fun political theater enacting these laws. And to the extent that they thought about this issue, I think they probably fell prey to an error that we see on both sides of the aisle, which is not realizing how much really awful speech the First Amendment permits.
And so, you know, on the more liberal side, we tend to see politicians and advocates saying, “Well, if it weren’t for 230 immunity, then platforms would have to take down hate speech or medical disinformation,” which isn’t true.
The New York Times had to run a correction on this because they said, “But for 230 platforms would take down hate speech.” And then they had to say, “Oops, actually, but for the First Amendment, platforms would have to take down hate speech.” So you can’t fix that.
Ethan Zuckerman:
Right. And in fact, 230 is what allows you to take down hate speech without suddenly taking liability for everything on the server.
Daphne Keller:
Yeah, so that’s the mistake on the sort of more liberal side of the aisle. And then I think the Republicans in Texas and Florida thought they were talking about, I don’t know, Joe Rogan or maybe Dennis Prager, you know, conservative voices who might offend the liberals and how we need to make sure they’re protected. But they didn’t realize they were also mandating carriage of barely legal harassment and pornography. They didn’t realize that they were opening the door to their teenage kids going on TikTok in search of dance videos and finding pro-suicide content, or their grandmothers running into extreme racist diatribes on Twitter.
I don’t think they understood the kind of lawful but awful tide of garbage that they were inviting into their states with these mandates.
Ethan Zuckerman:
Well, what’s interesting, of course, is that the other person who didn’t realize the power of lawful but awful is, you know, the world’s most brilliant billionaire Elon Musk, who, in taking over the criticism factory, Twitter, suddenly finds himself moving from someone who positioned himself essentially as a free speech absolutist to someone who seems extremely willing to take down speech, particularly speech critical of him, but also, you know, speech that ends up being a parody, speech where someone comes in and pretends to be Eli Lilly and declares that insulin is going to be free.
How did any rational human being who had encountered the Internet before manage to convince himself that being a free speech purist was going to be a viable stance for a platform with hundreds of millions of users?
Daphne Keller:
I don’t know. I mean, long before he actually acquired Twitter, he had already moved from saying, we should only take down the illegal things, to saying, well, also we should take down bad and dangerous things. But he clearly wasn’t listening to people like Mike Masnick, who has written a ton about the difficulties of content moderation at scale, or anybody who’s done trust and safety work, who could explain how hard it is to identify what is bad and dangerous, and then operationalize that in a way that’s somehow consistent in regulating the behavior of millions of human beings.
So I’m sure he must have known well before the acquisition was completed that you can’t run a profitable business in the social media space without taking down a bunch of lawful but awful speech. Because if you leave that stuff up, not only are you going to lose users, but you’re going to lose your advertisers. You’re going to lose your source of revenue. And that happened. You know, a lot of advertisers fled Twitter.
Ethan Zuckerman:
Perhaps the most informative conversation I had about the Internet in the last 10 years was with the head of marketing and advertising for the S.C. Johnson companies.
So I happened to have been invited by the head of the S.C. Johnson companies to a discussion on civics. And we were talking about how we reach audiences, how we market things. And I had the opportunity to hear from this brilliant woman who controls billions of dollars in ad budgets. And I was just thinking about how incredibly careful she was about where those ads are appearing, what they’re appearing with, what the possibility of reputational harm associated with them is, and these incredibly high-level conversations she’s having at all times with Google, with Facebook, with Twitter.
It was quite extraordinary to think about the fact that in a very real sense, when we think about who is regulating these platforms, those VPs of advertising for those big product companies, they may be doing more to regulate these platforms than the Supreme Court, the EU, et cetera, et cetera.
Daphne Keller:
Absolutely. I think that’s so important, and the mechanics by which advertisers are very actively seeking to shape what speech you can see online are evolving. And there are coalitions like GARM, the Global Alliance for Responsible Media I think is the acronym, that are trying to set new standards for defining what speech platforms take down, and to use that to help advertisers preserve brand safety.
And it’s very understandable economically why they want to do that. And many of the things that they want platforms to do are also what I think most normal platform users want them to do, like, how about no Holocaust denial content, stuff that’s pretty widely accepted as a preference for what we want to see on social media.
But at the end of the day, thinking of governance, if we don’t want Mark Zuckerberg or platform executives making decisions about controversial speech, is it better to have Unilever do it or PepsiCo? I don’t think that’s the answer.
Ethan Zuckerman:
No, so I raise it not as an ideal, but as sort of realpolitik. I think a chunk of what Musk has discovered in this experiment is that you can be a billionaire and you’re probably less powerful than you think when what you’re actually facing is the global advertising industry and their questions about where their brand content is going to end up being safe.
But let’s talk about someone who might be a more promising regulator than the Supreme Court, which frankly seems to be taking cases that in many ways feel a little bit out of left field on this, right? The cases brought around 230, the cases brought around common carriage. I think a lot of people like you who spend a lot of time studying this space look at these and say, these are not that hard, right? These are pretty easy things to say: if we make those changes, we’re going to break things in a real way.