34 Fixing Failed Moderation with Sarita Schoenebeck

Reimagining the Internet

Moderation processes online should reduce harm, offer victims justice they find meaningful, and fix inequity in these social spaces. On all of these counts, the moderation systems implemented by big social media companies fail conclusively. Sarita Schoenebeck from the Living Online Lab at the University of Michigan joins us to talk about what moderation and harm reduction driven by the real-world experiences of victims might look like.

Sarita has published recently on accountability and repair after online harassment, reimagining social media governance, and restorative justice frameworks for online life.

Transcript

Ethan Zuckerman:

Hey, everybody, welcome to Re-imagining the Internet. I’m Ethan Zuckerman, your host. Today, I have with us Sarita Schoenebeck, she’s associate professor at the University of Michigan School of Information. She is director of the Living Online Lab, which has the wonderful acronym of LOL, and she is justly celebrated for really original and creative work on equity and justice in online spaces. Sarita, thanks so much for being with us.

Sarita Schoenebeck:

Thanks for having me.

Ethan Zuckerman:

This show is really an opportunity to talk with some of the big thinkers about the future of the internet, about giant underlying problems with the web that we have today. Can you talk to me right now about the problem that young people, in particular, are having with harassment and abuse online?

Sarita Schoenebeck:

Yeah, absolutely. It’s pervasive, as we all know. And what I’ve really been focusing on lately is the challenges of what to do about it, and what to do about it in ways that recognize not all harms are the same and not all young people have the same experiences online. And so, I’m really interested in content moderation… I kind of use that term with all the things involved with it… and where it’s working for them right now and where it’s failing them right now. And so some of the arguments that we could talk about are the ways that current platform governance and content moderation practices are maybe pretty dramatically failing many social media users, certainly including young people as well.

Ethan Zuckerman:

So let’s start there. First of all, maybe with the word “moderation”, I’m seeing a lot of people in our field wrestle with questions of whether we should be talking about moderation or about governance. I feel like moderation is usually the appropriate term when we’re talking about something being done to users rather than something being done with users. So for me, there are spaces like certain Subreddits that might be said to be governed, but most spaces are closer to being moderated. How do you feel like moderation is working with or against users on some of the platforms that are most popular with your research subjects? I know that you’ve looked at Instagram and Twitter and you may have thoughts on Snapchat and TikTok and some of the newer platforms as well.

Sarita Schoenebeck:

Yeah. And this is where I really love this idea that you have about reimagining, because it seems like there are a lot of areas where we might reimagine what content moderation is, and whether there are other ways to think about governance and other ideas. In a couple of papers now, colleagues and I have made this argument that current content moderation practices are largely focused on removing content, so content that may violate user guidelines or community guidelines, and removing accounts, whether temporarily or permanently. Those are kind of the two mechanisms for moderating content. We’ve made the argument that they rely on these criminal justice approaches, very Western punitive models that say something bad or offensive or wrong happened, and we’re going to remove it, remove the content and maybe remove the person as well. And that’s it; that seems to be how we’re governing billions and billions of conversations daily.

And when I talk about criminal justice, these seem to be modeled after physical, offline criminal justice models, which at least in the U.S. incarcerate people, do it inequitably, and disproportionately harm people of color and disabled people. So we think about those models and whether they’re working well in other contexts, like in our criminal justice system. It doesn’t seem like they’re particularly effective at helping people to change or be held accountable for their behavior, nor are they in any way a dignified or appropriate way to treat human beings, in my mind and many others’, of course. And so this argument is that social media governance and content moderation have adopted these models. They identify perpetrators and then just kind of try to remove them.

The reason I think it doesn’t work particularly well is that they overlook the needs and the interests of the people who are targets of harassment. So the individuals or communities who experience harm are completely overlooked in content moderation processes. They don’t try to address the harm or rehabilitate, or see what happened and how we can help people improve their behavior in some way, if that’s needed. They’re not transparent. So you mentioned procedural justice, and we can talk more about that. And they also perpetuate, or at least risk perpetuating, inequalities. And so a concern about content moderation processes as they’re enacted now, which we can also come back to, is that they may, again, just reinforce inequities and oppression, where, say, people of color, marginalized voices, disabled people have their content removed without an ability to say, “Hey, this was an unfair removal, a false positive,” and things like that.

Ethan Zuckerman:

It feels like there are even more parallels to the criminal justice system. One that crosses my mind is that there’s basically no attempt at rehabilitation. In theory, in criminal justice, incarceration is part of a process of rehabilitation and re-entry into society. In practice that happens far too seldom, and in fact a lot of incarceration-based justice systems have essentially given up on rehabilitation. But in the same sense here, we’re basically saying, “We’re going to lock you up for a short period of time or permanently, which is to say you can’t be on the system anymore, but we’re not considering the possibility that you’re going to return to our system and perhaps engage in the same behavior, nor, as you mentioned, are we addressing the harms that were suffered by the user who was targeted.” Help me think about what the alternatives to this system are. What does a more rehabilitative justice or restorative justice paradigm look like when applied to content moderation on social media platforms?

Sarita Schoenebeck:

It’s a great question. And I’m excited to talk about it, but before I even kind of start in on that conversation, it’s important to say that restorative justice, and other ideas like transformative justice, are really complex topics. I do think they actually can be translated into social media governance in interesting ways, but we just need to be careful about how directly we do it, in what ways we do it, and the origins of those movements. And so we want to keep in mind in these conversations that there are no simple implementations of “do it this way, and everything will be great.” But there are alternative models, and these have been proposed as alternatives to the criminal justice system. A prominent one is restorative justice, as you mentioned. And this is a set of movements, so lots of people in these movements will have different ideas about what it is and what it should look like.

But it’s the idea that instead of sanctioning or punishing people, which just reproduces more harm or creates more problems, oppression, whatever, we should instead try to think about models of accountability. So how do we encourage people to be accountable for harms that they perpetuate? And then some kinds of restorative justice talk about mediation, for example, where maybe the offender and the victim, the person who experienced harm, come together with a mediator and community members. One reason I mentioned we should be careful about these models when we take them from one context and translate them to another is that these mediation models have been used in kind of small local communities. The origins of restorative justice have typically been credited to indigenous communities in Canada, the US, and New Zealand, as an alternative pipeline away from incarceration, kind of.

And so whether those would work online is a big open question, because we don’t have these small local communities with shared values and shared commitments to the communities. But the idea of accountability, and not just going straight to punishing people for any possible offense, which is what happens now, I think that’s compelling, and there’s a lot more we could do online to move there. I think a lens that will be hard for social media companies to adopt, but that needs to be really central, is one of power. And so I think that people who have been historically oppressed and marginalized in communities and societies, in the US but also in various regions around the world, need to be most centered in thinking about the design of the solutions.

I think whether we keep existing criminal justice models or use restorative justice models, unless they’re done very carefully, they’ll just perpetuate and bake in the inequities, and we already see that happening now. We could talk about the way those current practices reinforce inequities around gender, race, and things like that rather than fixing them. And so I think restorative justice, if it was just kind of slapped onto whatever existing processes happen, would do the same thing as well. It doesn’t just magically solve social issues.

Ethan Zuckerman:

So let’s unpack what often ends up going on on social media right now. These spaces are notoriously desensitizing in the sense that people often engage in behavior online that they would be significantly less likely to engage in offline, particularly around speech, since for the most part speech is the main way we can harm each other online. People pick fights. People are abusive to one another. People are insulting at rates that are probably higher than they would end up being in the physical world. In many cases, when that speech behavior is outside of the lines of a site’s terms of service, you can report that you are being harassed and show the example of it to the moderators of a site, and they may or may not take action. Other people can report you as engaging in hostile behavior or speech that goes beyond those lines, and you may find yourself subjected to this process of moderation as well. What works in that system and what doesn’t work in that system?

Sarita Schoenebeck:

It’s such a narrow set of remedies and processes. Take some examples. It makes no sense that some 15-year-old kid who said something horribly offensive once has the same outcome as Donald Trump, for example. They’re both just banned. In what world would we say offline that these two incidents or trajectories should have the same outcome? Or take two people: maybe one uses really offensive language along with some really racist remark, and another person uses offensive language with something very anti-racist. Both would also be banned, and in what world would we be saying, “Hey, those should be treated similarly”? If we’re parents and we’re talking to our kids about this, these are not the same things. And so I think one of the things that’s so constrained right now with governance is the really narrow set of remedies.

And by remedies, I mean what happens when there’s some violation of a guideline. If we take a harm-based approach, which aligns with these alternative justice frameworks, we can think about a much wider array of remedies, whether it’s an apology and an expression of intent not to do that again, a conversation, a little educational tidbit… And for all of these, they may or may not work. The idea is just: let’s think about a much broader way of interacting and having conversations, which mirrors what we imagine when we think about healthy communities. We don’t just say someone did something wrong, let’s remove them, and be done. So I take a socio-technical approach to how we think about this, which means that there’s the social behavior on the site, and then there is the design or the technical affordances, and they shape each other.

And so I think that we can still protect important principles like speech while having a whole lot of other ways of handling behavior that is offensive or concerning or harmful, and also what to do about it, where maybe removing the content is a last resort but is not necessarily the first, second, or third, or the only route possible. So there are lots of other ways of trying to engage with people and give them opportunities to be more accountable before we get to that banning process. And I also think that will increase the legitimacy of banning if it’s reserved for the more severe cases, people who have not been receptive to changing their behavior or acknowledging harms. Then yeah, I’m still okay with banning them. I haven’t gone full restorative justice, I guess, or I think there are limits to where that framework applies here. And so I think both punitive and restorative models have a role in governance, but we should expand the repertoire and the kinds of remedies we might consider.

Ethan Zuckerman:

The idea of socio-technical systems helps us understand why the standards are applied so bluntly and stupidly in some cases: the 15-year-old venting once versus the president of the United States. In a purely technical, rules-based system, we consider what words are said. In the socio-technical system, we consider the context, we consider the positionality of the person saying things, we consider how often this has been said, who the audience for it is, and whether it’s likely to actually incite people to violence. Has it incited people to violence? But if we’re just looking at the technical rule set, we start saying speech is speech, and we’re going to treat it all the same way. Similarly, what the platforms have come up with is a purely technical intervention. We will block that account, either for a period of time or indefinitely.

Maybe it will cost you something depending on how much of your identity is tied up with the account, but maybe it costs you nothing, particularly if you had an account that no one was following and you simply go and create another one. I’m really interested as you start expanding this repertoire, and it sounds like one of the spaces that you’re most interested in is apologies. Can you talk to me about this recent paper that you’ve written, which looks really deeply at apologies and how people feel about them? What might apologies do within the systems that we’re considering?

Sarita Schoenebeck:

Yeah. Apologies can be a very important component of restorative justice frameworks and of the idea of accountability. In one of our studies with US adults, we asked about the idea of apologies, and in general people were positive about it, along with ideas like banning and removing content, so people can have a range of preferences about what to do when they experience harassment. But we also found that some groups were opposed to apologies. For example, the transgender participants in our sample were less positive about apologies, and that could be because they may feel they’re not genuine. It could be because the idea of an apology may suggest some private communication, a DM, and it could open them to more abuse. I’m excited about the idea of apologies because they can amend a lot of harms at an interpersonal level, or, you know, governments apologize too.

And that can be a really meaningful signal. But I also think that if apologies are forced or demanded, they can, again, exacerbate harm. Let’s take the example of someone who posted racist content and someone who posted anti-racist content, and they were both removed. Asking the person who posted the anti-racist comment, and maybe they were responding to something, to now apologize is just an egregious trajectory to imagine going down. So I think there are a lot of ways to move forward on this. One thing that I think is a really convincing argument is to say that people who are one-time offenders or rare offenders, and I’m using the word offenders, we should say people who perpetuate harmful content, if this is infrequent, not routine, we should give them alternative pathways to acknowledge that they messed up, that they made a mistake. They can express that they’ll try not to do it again.

In that case, we may not need to remove or ban them necessarily. The people who are repeatedly harmful, who may be engaged in networked behavior that is incredibly harmful for the victims or the people experiencing it, those can go straight to the punitive route. And that’s something I think companies can and should implement, like, now. They know who the repeat offenders are, who’s likely to be engaging in networked harmful behavior. And they know who’s probably just a person who messed up one time and just needs a different pathway to reform their behavior.

Ethan Zuckerman:

Increasingly I’m of the opinion that the problem isn’t a problem with moderation; the problem is moderation itself. Moderation implies that some group of people is going to enforce a set of rules, and your adherence to those rules is what matters, not your input into those rules or your discussion around those rules. To me, it feels like the healthiest communities are the ones that actually have robust conversations about what the rules should be and how we should deal with sanctioning, whether it’s removal, whether it’s apology, whether it’s public shaming, whether it’s any number of other things that people are involved with. So for me, the experiments I’m really interested in taking on are: can we build small communities that are operating around governance rather than around moderation? What are some of the experiments that you want to do in this space? What are the experiments that would get you closest to answering your questions about what’s wrong with moderation and what models closer to restorative justice might work in online spaces?

Sarita Schoenebeck:

Good question. And I love, or I appreciate, these shared concerns about content moderation and what it gets us and what it doesn’t. I would love to see an ability to more easily move between sort of controlled lab studies and survey studies, where you can know about the people you’re running the studies with and you can look for certain outcomes. For example, with some of the surveys I’ve been doing lately, we can ask people: tell me about yourself, what’s your gender, what’s your race, other things, and look at their preferences, so that we can catch problems where we might say on average people like apologies, but they’re really harmful for some groups. And I’d love to be able to move more seamlessly, or transition, from those kinds of studies to the field experiments, the online studies, where the concern would be that you might see a really nice result but you don’t know if it’s harming some groups in the process, even while it looks good for the site overall or whatever, and be able to kind of move between those two.

So maybe have pools of people online where you know more about them. You can look for what safeguards are in place to make sure those who may be most harmed can be especially attended to and protected, so that as soon as we realize, “Hey, these measures look really good overall, but for these groups they’re not working at all, we need to stop this,” we stop. And we need some kinds of accountability, auditing, and independence, I think, because the companies may not have much incentive to stop something that works for most people, even if it harms some people more. Some of the other powerful lessons are about different countries’ value systems.

So Jillian York has written about this, and there are a variety of colleagues, like Amna Batool, a PhD student who’s looking at women’s experiences of gender-based violence in Pakistan. In a country like Pakistan, for example, social media is such a profound kind of cultural experience. For example, women may just have a photo of them posted online and be shamed for it; there are even honor killings based on social media postings. And so with so many things about governance too, we need to think about really radically different value systems and family commitments and things that here in the US, I think, people don’t think about very often.

Ethan Zuckerman:

Yeah. I think there’s that added complexity of the fact that these rule sets that we’re working within are created within one culture but applied to all sorts of other cultures. It really raises the question of whether it’s possible to govern these spaces from that US point of view when they’re operating in such radically different places. So Sarita Schoenebeck, it’s been wonderful to have you with us today. Thanks for making some time for us.

Sarita Schoenebeck:

Thank you so much for having me. It was a great conversation.
