
77 Lawful But Awful and the Future of Social Media with Daphne Keller (part 2)

Reimagining the Internet

How do we get better moderated social media platforms without putting governments in control of who gets to say what? For part 2 of our episode with Daphne Keller, we get Daphne to tell us what the current wave of EU Internet regulation will mean for the future of social media.

Transcript

Ethan Zuckerman:

The EU has expressed a willingness to put a whole lot of new rules in place. Before we start talking about the Digital Markets Act and the Digital Services Act, we should talk about the fact that when GDPR came out, there was a real sense that, okay, now the rules have shifted, everything is going to be different forever. The truth is, I think for a lot of us, the main thing we’ve seen is we spend a lot more time agreeing to be tracked by cookies.

Is the DMA/DSA going to be a radical change of pace for the users, or is it really just a change of pace for the compliance departments of these very large online platforms, which I believe we’re now calling V-L-O-Ps? Is it V-L-O-Ps or “vlops”? What are we calling them, Daphne?

Daphne Keller:

The people in Brussels say “VLOPs” and you know they came up with the term so I think we should follow them.

Ethan Zuckerman:

VLOPs. Okay, we started the term V-S-O-P in my lab for very small online platforms, which is what we like to play with. But “vlops” for now: what’s this going to mean for the VLOPs?

Daphne Keller:

So the DSA, Digital Services Act, broadly has two big buckets of rules. One are rules that are applicable to the VLOPs and also to a whole lot of smaller platforms that are about process, the process for content moderation. Clearly posting your speech rules, clearly having a mechanism for users to report violations or report things they think are illegal, notifying the speaker of what’s been removed, giving them an appeal, transparency mechanisms.

There’s a whole lot in that sort of procedural improvement bucket or procedural excess bucket, you know, depending on who you ask, that applies to the VLOPs and to pretty much everyone else.

Then just for the VLOPs, there’s a second bucket of obligations that are more about new engagement with regulators, who can tell them what to do in a lot of much fuzzier ways.

So every year, the VLOPs have to go through an audit and they have to publish a report assessing the risks that their platform creates along a lot of dimensions: risks to free expression, risks to children, you know, every kind of risk you can think of. And then they have to publish a risk mitigation plan. And then the regulators look at that and they say whether the risk mitigation plan is good enough or not. And then there’s some kind of back and forth.

And it’s this sort of behind closed doors regulatory future that I think a lot of civil society groups are most concerned about. Because, who knows? I think the people who are going to be making these decisions in the near term are very thoughtful people. I don’t think they’re going to rush out there and come up with something diabolical or even necessarily a bad idea to ask for. But who knows what will be asked for down the road?

Similarly, there’s something called a crisis protocol where, in the case of something like a terrorist attack, there’s a mechanism to get some instructions out quickly to platforms, which could be very useful and also, you know, could be abused in ways that are really scary.

Ethan Zuckerman:

And with EU legislation, there are all these interesting questions of where the rubber meets the road, right? So we have these EU dictates, but then we have to end up with specific country law. And then, as you’re pointing out, this whole process of what an annual risk assessment is, how a regulator will decide whether it’s effective, and what the consequences of all of that are: we’re not going to know until those mechanisms come into play, correct?

Daphne Keller:

Mm-hmm, that’s right. And actually, one of the most interesting things about the DSA right now is looking at the timeline of what is going to happen when. You know, platforms need to start rolling out things like better transparency reports and appeals relatively soon. The VLOPs will have to do that, I think, four months out from when they get officially designated as VLOPs, and that designation could happen any time now.

But other things, like the provision for researcher access to data that’s held by platforms, which a lot of people are very excited about, can’t come into effect until several sort of triggering steps happen, one of which is that the individual member states have to figure out who their lead regulators will be and form a board. So there are some things that are interesting that won’t happen for a while and other things that are going to happen very fast and that I think platform compliance teams are racing to sort out.

Ethan Zuckerman:

So there’s a fantastic article on all of this, by the way, that really helped me get ready for this interview. It’s on the Verfassungsblog, and it’s called “The EU’s New Digital Services Act and the Rest of the World.”

I think it’s by Daphne Keller, and it’s really helpful in sort of getting an overview on all of this. Look, I can’t ask you to predict the future, and I also will accept that maybe my premise for this question is false. But I think for a lot of people in the US, the GDPR felt a little bit like a damp squib. Like we thought, oh, the Europeans are here, the Internet is finally going to be meaningfully regulated. There are going to be serious privacy considerations.

And it doesn’t feel like the world has changed all that much. I think Shoshana Zuboff’s surveillance capitalism still pretty much applies on all of these platforms. Is there reason to think that it’s a very different world under the DMA/DSA?

Daphne Keller:

I don’t think it is a tremendously different world. I think, you know, for a lot of individual users if they were wrongly silenced by a platform, which happens what, millions of times a day?

Ethan Zuckerman:

Absolutely.

Daphne Keller:

Even at a 1% error rate. They have a remedy now. And so that matters a lot to a lot of people.

Ethan Zuckerman:

That’s a big deal.

Daphne Keller:

Yeah. And regulators have a way to directly influence platforms, the VLOPs, the biggest ones. And so that’s a pretty big deal. Beyond that, there are certainly many other things that matter.

Ethan Zuckerman:

And researcher access, if we ever get there, is an enormous deal. Given how many years of my life I’ve spent, you know, pushing for researcher access, that’s a big deal as well.

Daphne Keller:

Yeah, absolutely. But, you know, there’s kind of a difference between the GDPR and the DSA. The GDPR was about, it wasn’t just about creating better processes for handling user data. It was about creating substantive restrictions on whether platforms could collect and use that data at all. And so, you know, there could well have been important behind the scenes changes about whether platforms are, you know, collecting data for one purpose and then going and using it for some other purpose that we can’t really see.

The DSA by contrast, for the most part, is not about the substantive outcomes. It doesn’t set any new rules for speech, rules for speech continue to be whatever the national law is in a member state or whatever terms of service rules the platforms adopt on top of that. So it really is almost exclusively setting these new procedural protections. So it’s not setting out to make the kinds of substantive changes that the GDPR did.

Ethan Zuckerman:

You pointed out in your discussion of this that these are pretty heavy lifts in some cases for platforms, that Facebook didn’t do a transparency report until it had 35,000 employees. These are not the sort of things that you casually knock off in an afternoon. They require some real work and some real effort.

One interesting thing that’s happening while all this is going on, Daphne, is that social media, at least in some cases, is getting smaller. We’ve seen at least a partial exodus from Twitter to Mastodon. Mastodon is really designed to be run around small servers. It hasn’t quite evolved that way.

Mastodon.social and some of these other very big Mastodon servers are starting to end up with a million or more users. They’re starting to look fairly substantial, but a lot of them are also quite small. And there are definitely challenges with both unlawful content and also lawful but awful content in a federated world.

I personally am terrified that we’re going to find at some point that the Fediverse has ended up hosting a great deal of child sexual abuse material; that seems inevitable. We already know that some of these Fediverse services have actually been the platforms for lawful but awful speech, given that platforms like Gab.ai were at least originally built on forks of the Mastodon code, at which point they were defederated from the network.

Do any of these approaches, the US approaches, the European approaches, does anyone help us with how we think about content in the VSOP space, the very small online platform space?

Daphne Keller:

I don’t think they help us enough. So, you know, the European lawmakers, in their defense, really did try to make rules granularly applied based on the size and function of a platform, which is already about 100 times smarter than most of the bills introduced in Congress, which tend to assume that anyone and everyone immunized by Section 230 should follow the same rules, from Cloudflare to the local preschool’s blog to Twitter.

So they at least tried, but as you know from that blog post, I think they went too far in the burden that they’re putting on much smaller platforms in a way that will forfeit competition goals in favor of content and speech regulation goals.

But I don’t think we have a draft law or a model yet, and maybe you can tell me if you know of any, that really looks to provide good rules for smaller platforms, or a good lack of rules if a lack of rules is appropriate, or that really looks at dealing with federated systems or interoperability of speech platforms. We have attempts to move in that direction, but nothing great that I know of.

And you mentioned the question about the worst of the worst content, child sexual abuse material inevitably showing up on Mastodon or in other federated systems. And I think it’s just impossible to get away from this tension between wanting centralized control to lock down whatever content we think is worst and not wanting centralized control. If it turns out that makes Facebook a nexus for just Mark Zuckerberg deciding, or just Unilever deciding, or just Modi, somebody, a government that controls access to a lucrative market, effectively being able to strong-arm a global platform into deciding what speech the whole world would see, that’s scary. When you look at that, you start not wanting centralized control. And then when you think about things like CSAM, it’s hard to deny the appeal of having some kind of centralized system to deal with it.

Ethan Zuckerman:

I’m going to just do an aside on CSAM, just because not all of our listeners are going to know the term. Anyone who takes content moderation seriously and who takes Internet law seriously uses the term child sexual abuse material. And this is because it’s really been drilled into us, in part by law enforcement: there is no such thing as child pornography, right? Pornography is a legitimate form of expression for adults. There is no legitimate use of imagery of children in sexual situations; it’s child abuse. And so you’ll hear people like Daphne and me talking very casually about CSAM, but it really is one of these ways in which you can tell whether someone has actually worked in this space or not. If you hear someone talking about child porn, generally speaking, it’s actually a sign that they have not done a lot of work around things like trust and safety.

One thing that Facebook actually deserves a great deal of credit for is that they built quite robust systems to do fingerprinting of known CSAM materials and to be able to pass material to the National Center for Missing & Exploited Children (NCMEC). One question as we start getting into the small platform web, if we do keep going in this direction of Mastodon and such, is whether we’re going to end up with middleware, whether we’re going to end up with software that would allow me to protect my Mastodon instance without having to be a company as big as Meta and building this database on my own.

Is there an argument that we should be doing things like legislating access to this database as a public good and to make it something that is available as software to help anyone big or small, try to figure out how to manage their communities?
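
[A note on the mechanics: what Ethan describes is, at its core, matching uploads against a shared database of fingerprints of known images. Below is a minimal sketch of that matching step. All names and values here are hypothetical, not NCMEC’s or PhotoDNA’s actual interfaces, and a plain cryptographic hash stands in for the perceptual hashes real systems use to survive resizing and re-encoding.]

```python
# Minimal sketch of hash-based matching against a shared database of known images.
# Hypothetical names throughout; production systems use perceptual hashing
# (e.g. PhotoDNA-style), not a plain SHA-256 digest.

import hashlib

# Hypothetical set of fingerprints of known, previously identified images.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest used as the image's fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_match(image_bytes: bytes) -> bool:
    """Check an upload against the shared database of known fingerprints."""
    return fingerprint(image_bytes) in KNOWN_HASHES

# A small server could call is_known_match() on each upload and, on a hit,
# block the post and file a report rather than publishing the content.
```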

Daphne Keller:

That’s a very complicated question. So I think, the way it stands right now, a platform that wanted access to the NCMEC database of hashes of known CSAM images could get it. I’m not sure, is that not the case?

Ethan Zuckerman:

I don’t think so. And here I’m relying on Brian Levine, who’s my colleague here at UMass Amherst, who says, “No, you can probably get it from Thorn, which is a nonprofit that is offering it for a fee. The reason that people are reluctant to make the NCMEC database available is that if you are a bad actor, if you are trying to figure out how to disseminate CSAM, you could use it to check whether the ways in which you’re altering CSAM files will get through the detectors.”

So if you’re going to start mastodon.csam.social, having access to that database is exactly what you want.

Daphne Keller:

Which was actually one of the reasons for my reluctance about having a government administered system of access to the database, because then that puts the government in the position of deciding like who’s a good actor and who’s a bad actor. And it also, you know, it creates a situation where the government is holding a list of things that need to be made invisible on the Internet. And hopefully it’s the right list of things. But I think many people would be reluctant to make that be a government job.

And then lastly, there’s this really messy question about the Fourth Amendment and surveillance, which is: if US law seems to be both requiring or compelling or nudging platforms to go out and do this proactive monitoring and sweeping and looking for bad guys or bad content, and requiring them to report that content to NCMEC or to law enforcement when they find it, then there are cases suggesting that at that point the platform is acting on behalf of the government and has become a state actor, to the point that you would need a search warrant to have them do what they do in the first place.

And this all sounds kind of vague. It’s not vague at all because the upshot is in the eventual prosecution of actual CSAM purveyors, like serious bad guys, you might not be able to introduce the evidence to convict them of their crimes.

Ethan Zuckerman:

Right, right. So this concept that there are tools out there that help us figure out how to moderate platforms, that there are things like a hash database for CSAM, they sound useful in some senses, but they very quickly turn into problematic situations. You can use them in hostile ways.

In the situation of CSAM, you might find yourself essentially becoming an actor on behalf of the government in that sense, and that really complicates how those cases can end up being brought.

At the same time, there are a lot of smart people, including you, including Frank Fukuyama, some other folks around here, talking about this idea of a middle layer of services that platforms can have access to.

Talk us through just really briefly, what is that idea behind middleware?

Daphne Keller:

Yeah, so in many ways, it’s a response to the presence of all this lawful but awful content on the Internet, stuff the government can’t prohibit, but most Internet users don’t want to see or don’t wanna see most of the time.

And it’s also a response to concerns about the tremendous concentration of power over public discourse that we see in the hands of the handful of biggest platforms today.

And in a paper that Frank Fukuyama and others at Stanford wrote on this, they referred to that power as a loaded gun sitting on the table. Whether or not you think it’s being abused now, it is sitting there waiting to be abused at some point.

And so the idea behind middleware is to devolve a lot of that power out to the margins of the network and specifically to the individual user to allow them to say, here’s my tolerance for nudity and here is my tolerance for violence, and here is my tolerance for violence against cartoon animals or whatever level of granularity they’re interested in.

There are, I’d say, two major flavors of this idea. One is more of what Mike Masnick described in his “Protocols, Not Platforms” piece and what Cory Doctorow tends to talk about as adversarial interoperability or competitive compatibility. And that’s the idea of a very federated system where there is no centralized point of control, but an individual node in the system or an individual user might be able to toggle their preferences.

The other is more of a hub-and-spoke model. You keep YouTube or Facebook with their dragon’s hoard of data and content, but you require them to open it up to third parties who can come along and offer alternate ranking systems or alternate content moderation systems for anything that’s legal in the content.

So I could subscribe to a version of YouTube that takes down anything about baseball because I hate baseball, and that prioritizes the Philadelphia Eagles because I love them (I’m making this up, I know nothing about sports), and has an overlay of an anti-gender-discrimination set of preferences from the League of Women Voters.

The idea is, and it could be something from Disney, it could be from a commercial entity, just to open up the opportunity for the end user to select from different providers of content moderation services with different speech rules.
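
[To illustrate the hub-and-spoke model Daphne describes: the platform keeps the content, and a user-chosen provider supplies the filtering and ranking rules. The sketch below uses entirely hypothetical types and names; no real platform exposes this interface.]

```python
# Minimal sketch of hub-and-spoke middleware: the platform supplies the posts,
# and a user-chosen third party decides what is visible and how it is ranked.
# All names here are hypothetical, not any real API.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Post:
    author: str
    text: str
    topics: set[str] = field(default_factory=set)

# A "middleware provider" is just a filter plus a scoring function.
@dataclass
class MiddlewareProvider:
    name: str
    allows: Callable[[Post], bool]
    score: Callable[[Post], float]

def render_feed(platform_posts: list[Post], provider: MiddlewareProvider) -> list[Post]:
    """The platform hands its posts to the user's chosen provider for filtering and ranking."""
    visible = [p for p in platform_posts if provider.allows(p)]
    return sorted(visible, key=provider.score, reverse=True)

# Example: a provider that hides baseball and boosts a favorite team.
no_baseball = MiddlewareProvider(
    name="no-baseball-eagles-fan",
    allows=lambda p: "baseball" not in p.topics,
    score=lambda p: 1.0 if "eagles" in p.topics else 0.0,
)

feed = render_feed(
    [Post("a", "Phillies win!", {"baseball"}), Post("b", "Eagles highlights", {"eagles"})],
    no_baseball,
)
# feed contains only the Eagles post, ranked first.
```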

Ethan Zuckerman:

And interestingly enough, the work that we’re doing at UMass with our project around Gobo and this sort of larger architecture that we’re proposing for a social media pluriverse is basically trying to adopt both of those, right?

We are interested both in this idea that you might have a friendly neighborhood algorithm shop that might be useful to you, not just as an end user but potentially as a Mastodon site administrator, where you might use something for anti-spam blocking or anti-CSAM blocking or something along those lines. But then, more on the “we should be able to sort what’s coming out of Facebook, what’s coming out of YouTube, what’s coming out of Reddit” side, we have this notion of a client that is loyal to you, that is trying to figure out how to use those various different algorithms and sort of create a chain for them. It’s hard.

It’s hard because it’s computationally tricky. It’s legally hard. We know that we’re going to sort of bump into places with any number of these providers who say you don’t have a right to deliver our content the way that you want to deliver it. We believe ultimately we’re actually justified by Section 230 in our ability to put forward the filters that we want to have in place. But it’s certainly complicated.

Is middleware the best hope we have at this point, Daphne? We are sort of religious on this podcast about this idea that we try to look for solutions in all of this. I feel like we’ve gone from sort of the most hopeless, with Elon Musk and the Supreme Court, into the EU, which might be moving at least in some of the right directions, and now sort of getting into some new architectures.

Is there an even better solution on the table that we should talk about before we wrap up?

Daphne Keller:

I don’t think there’s any one solution that addresses all the different concerns that people have about Internet platforms or even about just speech and content on Internet platforms.

You know, we’ve moved almost the full range of human behavior into this intermediated online space. And unsurprisingly, that raises a whole array of different problems that in the real world we solve with very different laws or very different social norms or market based responses.

So I don’t think there’s any one, you know, solution to speak to all of those different problems.

So for example, if your problem is truly illegal content on the Internet, maybe you would support a change to CDA 230 that says if there’s a court order confirming something is illegal, then you have to take it down. But if your problem is about lawful but awful speech, then there’s not much you can do with CDA 230 or with any law without running afoul of the First Amendment. And that’s where middleware is so useful, I think, as a response, because it gets away from the problem of the government regulating lawful content, where any law like that would probably be struck down. It gets away from the problem of having very centralized control in the hands of Zuckerberg or Unilever or whoever.

And it gets away, part of Fukuyama’s point is it gets away from some economic problems with breaking up the platforms.

Hopefully it leads to a path forward where you can have diversity of content moderation approaches while not having to solve the problem of user lock-in or network effects in the value of current platforms.

So I think it’s really important, in part because so many of the other ideas lead to dead ends. I also think it has problems; as you know, I have this response to Fukuyama in the Journal of Democracy raising a list of kind of operational, logistical problems, including, I think, really important privacy problems, and economic problems about the incentive to become a middleware offeror and the cost of doing the content moderation, all of which we need to solve to make this at all workable.

But at least these are problems that engineers and creative people can work on and iterate and try to solve as opposed to problems that just run into constitutional dead ends.

Ethan Zuckerman:

So I will close this conversation by pointing out that one of my favorite online search strategies, when I’m trying to figure out how I feel about something like the DSA or how I feel about lawful but awful speech, is to search for that phrase and then add “Daphne Keller” so that I can find out what Daphne thinks about it.

Daphne, I’m exhausted, but that was so helpful. It was just a pleasure to have you on.

Thank you so much for all the work you do. You are incredibly thoughtful about helping people take on the full complexity of these issues. And I’m just grateful that we have you.

Daphne Keller:

Thank you so much for having me. Thank you for the work that you do, Ethan.

Ethan Zuckerman:

Thanks, Daphne.