Deen Freelon is one of the foremost scholars on how contemporary protest movements organize on the Internet. This week Deen joins us to talk about his work on the Black Lives Matter movement, how he’s trying to understand mis- and disinformation from both the right and the left, and what fixing social media might look like when the scale of platforms like Facebook and Twitter is what makes them so exciting and so difficult to moderate.
During the interview, we talk about Deen’s papers “False Equivalencies: Online Activism from Left to Right” and “Beyond the Hashtags: #Ferguson, #Blacklivesmatter, and the Online Struggle for Offline Justice.”
Hey everybody. Welcome to Reimagining the Internet. I remain Ethan Zuckerman. I am thrilled to be here today with Deen Freelon. Deen is an associate professor of journalism at UNC. He’s a Principal Researcher at the UNC Center for Information Technology and Public Life. He is someone I point my students to as one of the very leading methodologists on understanding the relationship between social media and social change. He’s an expert on activism online, particularly on Twitter, particularly in communities around the Movement for Black Lives. Deen, it’s great to be with you.
Thank you for having me.
I wanted to have a conversation today about some recent research that you’ve done with Alice Marwick and Daniel Kreiss. You have a paper, it came out in September of 2020, a time when I’m sure none of us were paying attention to anything other than what was coming out in Science magazine. But this is a paper called “False Equivalencies: Online Activism from Left to Right.” And you made this case that the left and right in the United States do really different things with Twitter and with online media, from an activist point of view. They actually have different targets and different goals. Can you talk a little bit about how left and right are using Twitter and other platforms for activism differently?
Sure. The first thing I want to say is that the case we make in that paper comes with a pretty big asterisk, which is that we’re simultaneously trying to establish that the left and the right have very different patterns of activism and protest as reflected in the research literature, but the big asterisk is that what we see in the research literature reflects the attentions of the researchers, and there may be other very important patterns that are present that simply haven’t been studied.
So our biases may be getting in the way of our understanding of how the left and the right are using those spaces?
Yes. That’s always an issue, but we actually sketch out ways in which that’s a particularly pressing issue for this particular question and for this particular topic. So what the research literature says is that… Or, the portrayal the research literature gives is that the left primarily engages in a mode of activism that’s come to be called hashtag activism, and that is where… First of all, it’s a bit of a misnomer because it’s not only online. But I think the term hashtag activism, once you understand it, is quite apt because the hashtag becomes a key signifier for whatever the movement is.
The examples we give in the paper should be very familiar to anyone who’s been paying attention to the news over the past five to 10 years. So there’s Black Lives Matter, there’s Me Too, and then there’s Fight for $15.
So whether you’re talking about people on Twitter, Facebook, or on other social media platforms or offline, these are signifiers that you are going to see, they’re used widely in the media to refer to these movements, and that really becomes the brand and the rallying cry that folks who engage with these embrace and that is in some cases, thrust upon them by external commentators. So that’s how the left does activism today.
The portrayal on the right is primarily of something that’s come to be called a right wing media ecosystem. Rather than engaging in this sort of grassroots-y hashtag activism, you have more of an elite-driven process. The right wing media ecosystem has a periphery and it has a central component. The central component is a lot of the big players, folks that people know the names of: Fox News, Breitbart, and the rest of those people and outlets that command lots of attention. Then you’ve got fringe players that most people outside of the ecosystem will not be familiar with, where a lot of the stuff that is even too crazy for Fox percolates and doesn’t really go anywhere. But occasionally something will surface from this periphery and make it all the way to Fox. Through the Trump administration, in many cases, it got all the way to the president, and he would repeat it either in speeches or through his Twitter account.
That really is how information diffused and how the right decided its priorities in terms of activism, through this right wing media ecosystem, which in many cases was either elite driven or the elites would decide which aspects got to the forefront there.
Someone like Yochai Benkler, who’s really made a lot of the case for that right wing ecosystem through a book like Network Propaganda, would probably go even further and say that it’s been incredibly successful, in that not only does it steer Fox News, but often the New York Times will come along with Fox News for fear of not covering these issues that are getting so much discussion, and probably out of a misguided sense of false equivalence. So is this right wing strategy more successful than the left wing strategy? Are they just different? Do we have any evidence of the two crossing paths? Do we see the left trying to do the mainstream media manipulation and the right trying to do the offline activism around a single topic?
Sure. I think there’s some very important questions about the success levels of each one. First of all, I think it’s hard to measure success in absolute terms. Actually, I suppose Benkler’s colleagues attempt this a little bit, but you could look at something like the total amount of exposure, but you’d have to track that over time because a lot of this is very much event driven. So just because at any given time, one appears more prominent than the other, that doesn’t mean that that side would be more successful or should be considered more successful in general. I think that’s certainly an open question and one that really needs to be measured on a timescale of years rather than months, or within the context of any particular event.
In terms of the crossover, that’s actually one thing we do address, and those are areas in which more research needs to be done. So consider something like right wing hashtag activism: in presentations that I gave after the paper came out, I pointed out that during the Trump administration there were a number of hashtags that were basically boycott campaigns. Good examples of this were boycott Netflix and boycott Budweiser, both of which had done supposedly “woke” things. And then it’s like, oh, we’ve got to cancel them because they’re doing things that we don’t really like. There’s been very little research done on those. We don’t know, for example, whether they’re highly elite driven; we don’t know how successful they were, how far they got, and how much they actually affected the bottom lines of these companies.
So those were just anecdotes that I talked about. We know they exist, we don’t know how widespread they were. The other piece of it is that when you think about the right wing media ecosystem, disinformation is a very key ingredient there, right? This is one key theme of Benkler and Hal Roberts’ book. They try to answer the question of why disinformation is more prevalent on the right than on the left. And they really peg it to mainstream media. They say, well, mainstream media is a really good filter for this. They really kick out a lot of the crazier stuff. They’re not totally immune to it, but if you look at it in relative terms, they’re pretty good at filtering out some of the patently baseless stuff that really has a lot of cachet on the right side of the equation.
So that is one theory as to why that exists and to whatever extent that’s valid, I think that goes to explain a little bit why there is more disinformation on the right and how that becomes a strategy for activism on the right. Whereas it’s largely, or at least relatively, eschewed on the left.
Although in some ways it’s sort of a comforting thought for those of us who lean towards the left. Correct? We can say, well, we’re willing to work in the shared reality of universally agreed upon facts, and for the right to make their argument make sense, they need to put forward their own fact pattern that reality doesn’t appear to support. I mean, just in terms of self-congratulatory left-leaning, that’s a nicely confirming way of looking at that. And let me make it clear, I’m saying that as much to my friend Yochai as I am to you.
Do we have evidence of left wing disinfo? Do we have cases where we’re seeing folks out on the fringes of the left, on the left equivalent of the gateway pundits of the world, who are manufacturing facts and scenarios, as well as interpretations?
Yeah. This is something we address in the paper. As you know, it kind of puts people in a weird position if you are on the left, because we’re aware that confirmation bias exists, and, at least for me, I’m well aware of how it looks. Yes, the other side is the one that engages in all that terrible disinformation and we’re all great. We totally agree with the truth and we, you know. So yeah, it’s a self-serving argument. That doesn’t mean it’s not true, but as we point out in the paper, and as I pointed out in subsequent presentations based on the paper, if you’re going to make that claim, I think people need to work extra hard to make sure that the empirics of that are as rock solid as they can possibly be, to counter those inevitable observations. And those observations are absolutely true, about the incentives and about confirmation bias and the rest of it.
So yeah, I think the evidence is there. Actually, additional evidence has come out since the paper was published when you look at a lot of disinformation that really swelled around the 2020 election, which obviously didn’t emerge until after the election had been called. That really supports that pattern of there being disproportionate amounts of disinformation on the right as compared to the left.
Now, you asked specifically about, is there disinformation on the left? The answer to that question is yes. But when you’re looking at it in an ideological asymmetry framework, the lens is always comparative or relative. So relative to the left, the right has more. I’ll give you a couple of examples of left wing disinformation that we document in the piece.
One of them is there was a situation… Actually, this example was documented in the Network Propaganda book. During the 2016 election campaign, there was a lawsuit, or an allegation, to the effect that Trump engaged in some kind of child sex trafficking or something like this, right? There’s been no evidence to support that this is actually true, but it pops up every once in a while. It popped up during the summer of 2020, after the book had been published… This is actually a really interesting empirical note. I don’t think anybody’s explored this. So if anybody wanted to do research on this, you definitely should. About a year ago, at the end of May 2020, Anonymous, the hacktivist collective, released a trove of documents that basically purported to supply some evidence that Trump actually had been engaged in this child sex trafficking incident.
Now this information was completely ignored by mainstream media, but it really had a lot of traction on Twitter, and I actually took a look at some of the accounts that were retweeting this stuff, and they weren’t political accounts. These were accounts that were mostly into things like anime and K-pop and other sort of pop culture type topics, and then all of a sudden Anonymous with the Trump child sex accusations pops up. And it’s like, my first thought is, well, is this an automated thing? Is this some kind of bot network? Or, alternatively, could it be a situation in which people who are generally… People on the left, or people who don’t like Trump, but who also generally don’t pay that much attention to mainstream news, are now getting information from alternative networks like Anonymous, like other potential left wing sources that don’t hew to very rigorous factual standards, and spread this stuff around? And is it that much more powerful for them because they’re not engaged with mainstream news outlets that have those very strong pro-fact biases?
So again, I don’t know the answer to these questions, it’s just a very interesting empirical note at this point, but I do think it demonstrates the potential for there to be left wing disinformation distribution networks, although all the evidence, or most of the evidence, that we have to date indicates that to whatever extent those networks exist, they’re less robust and manage to spread less disinformation than their equivalents on the right.
So this is a paper from 2020. It’s coming out before the 2020 election, and it’s coming out really as a review of a lot of research within the field. It feels to me like there’s been a little bit of a seismic shift.
I guess you can’t have a little bit of a seismic shift.
There’s been a seismic shift in the space around mis- and disinformation with what people are now calling the Big Lie, the insistence that Trump won the 2020 election. We’re now at a point where more than 60% of Republicans believe that Biden is not a legitimate president. We’ve had 70% of House Republicans vote to overturn or not confirm at least part of the 2020 election.
How do we study this when the embrace of a conspiracy theory, something that as near as we can tell has no factual basis, has become so central to the Republican party? If one of our two parties is putting forward a false narrative, how do we even talk about this? What does this mean for the sort of analysis that you’re doing in this paper?
A great question, and I want to point out that this is not the first time that a large majority of either one party or even perhaps the entire Washington establishment has embraced something that is patently false. You can go back to something like the Gulf of Tonkin incident from the sixties that was based on something that was largely false. You don’t even have to go back that far. Go back to the Iraq war and WMD. We’re not even 20 years out from that, and that was something that, again, totally false. You can make your argument about whether they knew or not. I mean, I don’t think we’ll ever know who knew what at which time. Certainly the actions taken once it was revealed to be false, at least to me, were not consistent with what you should do when you find out you made a huge mistake.
But I think what’s really important to understand here is that this is not without precedent. So the fact that a lot of people believe this is not, I don’t think, any kind of obstacle to empirical study of this particular phenomenon. I think we should just keep on the way that we have been in terms of chronicling the appeal of disinformation content like this.
It’s interesting because in referring back to weapons of mass destruction and the Iraq war, going back to the Gulf of Tonkin incident, which we now know in retrospect was mostly constructed and certainly a large chunk of the administration knew that it was false. What you’re almost pointing to is a presidential power to create alternative realities. That part of what comes along with the presidency is the combination of a sufficiently powerful bully pulpit that you’re almost able to sort of fork reality as it goes. And suddenly we were living in a world in which Iraq had WMDs, even if we know that Iraq did not have WMDs. We suddenly have to actually deal with that universe.
What does that mean for Facebook? Facebook is under enormous pressure to combat mis and disinformation. Should Facebook, should Twitter be taking down all of these posts where people are insisting that Trump won the 2020 election? How should they be handling their responsibility for consensus reality, or is that a space that they shouldn’t be getting involved with at all?
That’s a great question and one that I’ve been asked by a number of reporters over the past, gosh, nine months or so. And I think that the evidence on this point is quite clear, and that is that deplatforming works.
When you take somebody who has been a widely acknowledged spreader of mis- and disinformation off of a platform, the total amount of mis- and disinformation spread unsurprisingly decreases. There was a New York Times analysis a few months ago that basically demonstrated that when Trump was deplatformed from Facebook and Twitter, the degree or the distance that his words spread decreased by something like an order of magnitude. That’s a huge, huge decrease.
There’s a big confounding variable in that, right? Which is that he’s also no longer president. Those two things happen almost simultaneously.
Right, yeah. He’s no longer president, so, right, a huge confounding variable. But one thing you can do with confounding variables is look at the size of the decrease and ask how big an impact the confounding variable would have to have to completely eliminate the effect of the deplatforming. So I think the effect size really suggests that the deplatforming probably… There are also other studies on deplatforming.
Yeah. He’s not our only example. We can go back to Alex Jones and the ways in which Infowars went from being the most important of the crazy far right to really losing traction to things like Gateway Pundit and some of the other outlets, based in part on that deplatforming.
So, a terrible open-ended question that you may want to resist: are there lessons learned from activist communities around the use of these tools that we should be taking seriously as we think about the next generation of these tools? People like me, whose labs are very actively trying to develop a next generation of tools, trying to put some other models out there. What are some of the takeaways that you and Charlton and Meredith and the other people you’ve done such great work with have found about why these tools are so important for social change, that we want to make sure that we preserve, or that we pivot towards, as we think about the next generation of the tools?
Okay. I will try my best to answer this question. I’m going to do that by putting on the Marxist hat for a second.
Please, go for it.
I’m going to be a fake critical theorist for a minute. This is not what I studied in grad school, so this could totally be wrong, but I’m going to say it anyway, because I have tenure and I’m going to flex it.
I’m going to draw on a question that I was asked in a seminar at some point over the last year. It was a grad student who was saying… And this is a question that comes up a lot, and it has come up ever since social media became a big thing, which is, why doesn’t somebody just create some super distributed system that is not controlled by anybody, but anyone can get on it? And all this, all these kinds of things. Basically the question of, can we create alternatives to the social media platforms that currently exist that would not have so many of the same problems because they’re not driven by the imperatives of capitalism, et cetera, et cetera?
My feeling is the answer to that is probably no, and the reason has to do with the political economy of social media. Most people have no idea how much money it costs to run a globally distributed social media system. Twitter has a nine-figure user base, hundreds of millions of users. Facebook has a couple billion. You need to start getting out your accounting sheets to try to figure out how much money this costs.
And so it’s very difficult for me to imagine a social media system of that scale that exists outside of a capitalist system. When you have a platform that exists within a capitalist system, then you get stuff like, oh, we have to let the disinformation people on there because they drive engagement. They were at the top of the reach charts. Or, we have to let the white nationalists on here, because they’re the ones that are paying for the ads. Their content really keeps people in the rabbit hole for longer.
So in some ways, the very system that allows activists to engage in hashtag activism is also in many ways supported by algorithmic rabbit holes that lead people to white nationalism and disinformation, that make people think that vaccines are going to harm them, and these sorts of things. From a political economy perspective, it’s very difficult for me to understand how to separate those two things.
Now, that having been said, if you want to do something small scale, yeah, you can do that outside of a capitalist system. You can get your little mesh network together and put together something that is very small. But one of the reasons why people love social media is its scale, right? So if you want to find your high school buddy from 30 years back, you can get on Facebook and probably do it. 60, 70% of American adults are on Facebook, so you have a pretty good chance of success with that. That’s unprecedented in human history, I’m pretty sure. Those network effects on a global level are, I think, one of the major attractions to these kinds of social media systems. Small scale systems simply can’t do the same thing.
This is a question that I think is incredibly difficult to reconcile. How do you maintain the advantages of a global social media platform while removing the incentives that the political economy itself creates? I don’t know the answer to that question, but I think that it absolutely is a question worth asking, and one where, even if we can’t completely separate the platform from its native political economy rooted in capitalism, we at least can try to pare down those edges and get rid of the worst aspects of it if we possibly can.
I don’t know the answer either, and in your analysis I’m very much feeling like someone mucking around with mesh networks and very small communities, because that’s certainly my hypothesis at the moment: that that’s at the very least where we’re going to have to start and learn some of the community management and some of the scaling pieces of it.
The last guest I had on the podcast was Omar Wasow. We went back in time a little bit to talk about BlackPlanet and to talk about this moment in the early 2000s where it looked like people might have different homes online. You might have a home community where you were engaged in one type of speech. You might go hang out in other spaces and engage in different types of speech. And that ends up feeling really quite foreign now in 2021, right?
There’s this sense of we’re all on Twitter, we’re all on Facebook. Why would we need that small sub community? The flip side is that that might actually be a much more positive, much more supportive view of the internet in which we had communities that were much closer to having common interests and the possibility of self-governance. It’s very hard to imagine a world in which three billion Facebook users with no common language, no common identity are able to step in and say, these should be the rules of the road, this is the conversation that we should have, but maybe it is the sort of thing where a BlackPlanet or an academic community or some smaller community could come in with a different vision.
For me, that’s the parallel track that I’m interested in. I am interested in, can we have spaces that are “and”, rather than “or”? I don’t want to get rid of Facebook. I love the ability to go back and find high school friends, but it’d be great to have a community where we as academics have our own space and our own rules of the road and so on and so forth. And I’m really interested to think about whether we could build those parallel tracks. Do you think we still end up falling down in the face of scale? Is it just impossible to be doing the stuff at a universal scale?
I’m going to disagree with you a little bit. I think that we do have parallel tracks. Facebook is its own thing, Twitter is something different, right? It’s been empirically documented that people use Facebook and Twitter, generally speaking, for different purposes. So that’s a parallel track. Instagram is another parallel track. There’s a lot less politics on there because you can’t post links, so it’s very much a visual culture. Pinterest is something else. Snapchat is something else. I mean, Reddit is a really interesting example. To me, Reddit is like the second coming of the Newsgroups. It’s Newsgroups for everybody.
Lots of parallel tracks. That’s right.
It’s super easy to do because you can make your own or you can very easily… That I think is one of the most… That’s the thing that really reminds me most of my high school experience with Newsgroups, except it’s way easier to do and way more functional.
So I think we do have some parallel tracks, and what I think makes that hard is that in many ways, the issue is that the field is so crowded. We’ve got so many different things. For academics you’ve got ResearchGate and academia.edu. Now, of course the corporate aspect of them is another question entirely, but they exist. There’s some really, really great conversations that exist on ResearchGate that I’ve been able to benefit from. If you like to code there’s Stack Exchange. I’m on there all the time trying to get my coding questions answered and not have people be too mean to me. So, yeah, absolutely. I think that there are different tracks for different kinds of things. What that does, I think, is it really raises the bar for any additional track to come in to answer the question of, what does this track do that existing tracks don’t?
And that’s a question that gets harder and harder to answer every day because people are always coming out with new stuff. Everybody’s trying to be the next whatever. Everybody’s trying to… Attention is the one limited resource. There’s only 24 hours in a day, and there’s only so many hours within that 24 hours that we have to devote to media consumption.
If I’m thinking about it from my own perspective, if there is yet another platform vying for my attention, I’m going to be paying attention to that value proposition of, what do I get out of it? Who do I get access to? How do I benefit from communicating on this platform? I’ve already got my platforms, so if I’m going to make room and space for that, the answer to that has to be very intuitive and very convincing for me to try to engage with it, certainly on any long-term basis. Even if somebody I knew, for example, was doing it, and I was inclined to give them a little bit of time at the beginning, if it was something I was going to integrate into my long-term media diet, it would really have to supply a tangible benefit for me that I can very easily understand.
Yeah. Eli Pariser and I right now are pursuing this idea that the alternative social networks that have gained the most traction so far are networks for people who’ve been deplatformed from elsewhere. That basically, people aren’t joining these new networks unless essentially they have been thrown off and have to move the conversation somewhere else. So what did we learn from the deplatformed?
The bad news about this of course is that a lot of the deplatformed are people that we probably don’t want to be taking the lessons from.
Anyway, my friend. Deen Freelon, it’s always a pleasure. Thank you for making some time, and glad to talk with you about this stuff.
Sure thing. Well, thanks for having me on. This has been great.