88. Does Facebook change your politics? Talia Stroud is leading studies to find out.

Talia Stroud
Reimagining the Internet

Our first ever guest Talia Stroud is one of the principal investigators on a slate of social science research investigating Facebook’s impacts on the 2020 elections, and we’re thrilled to welcome her back to tell us about what her team is finding when they look at the funny things algorithms do, the pervasiveness of polarized political posting, and how users decide how long they’ll engage with the platform.

Talia Stroud is a professor and the founder and director of the Center for Media Engagement at the University of Texas at Austin. She is co-lead on an effort to study political content on Facebook as part of permissioned research with Meta, which has resulted in four journal articles so far in Science and Nature, with more to come later this year.

You can read her summary of those articles on the group’s Medium page.

We discussed these studies in our last episode with Laura Edelson and a two-parter with Brendan Nyhan (part 1, part 2).

Transcript

Ethan Zuckerman:
Hey everybody, welcome back to Reimagining the Internet. I’m your host, Ethan Zuckerman.

We’ve got a real treat today. We’re here with Talia Stroud, who was the first ever guest on Reimagining the Internet back in October 2020. We’ve come a long way between then and now.

When we had Talia on last time, she was talking about work on the Civic Signals project with our friend Eli Pariser. We’re now going to talk with her about a set of papers that have come out examining social media, US politics, people’s attitudes, algorithms, all sorts of amazing stuff.

Before we get into it, Talia Stroud is a professor. She’s founder and director of the Center for Media Engagement at the University of Texas at Austin. And she co-led, with Joshua Tucker at NYU, a group of social scientists who conducted studies with data put forward by Meta, which has resulted in a set of really important and really interesting publications that have come out in Science and in Nature, with lots more to come.

Talia, welcome back. So glad to have you here.

Talia Stroud:
I am so delighted to be here again. Thank you for having me.

Ethan Zuckerman:
You have been a very busy person. You and Joshua are last authors. And for those of us who aren’t in academe, last author basically means running and overseeing the whole project, editing, making sure that this work happens even when someone else is the primary researcher on something. You and Joshua are last authors on this set of papers that have come out.

These are papers focusing on the role of social media in the 2020 US presidential elections. The results of these papers are really complex and nuanced, and they’re telling us a lot that we didn’t know previously about social media. Before we get into what we learned from this, tell me how this collaboration came about. And for you, what made you want to take on this research project?

Talia Stroud:
Wow, great questions. Well, the research project came about because there had been lots of discussions, both inside Meta and between Meta and a variety of different academics, saying: we didn’t have much information in 2016, and in hindsight, wow, we wish we had data on what took place in the 2016 election with respect to social media.

And so a lot of people were having conversations saying, let’s do something differently when we look to 2020. And so a collaboration started to form. Social Science One, which was a previous iteration of research making data available to academics, was a bit of a conduit for this.

The two heads of Social Science One approached me and approached Josh Tucker and asked us if we wanted to co-chair this effort because we had both been involved as chairs in the Social Science One project. And we agreed to undertake this. We agreed in early 2020, kind of right before the pandemic took place, which is a little bit wild to think about in hindsight. And I think the thing that made me want to take this on is because there were so many questions after 2016. We didn’t know about the role of social media, and people had significant concerns about the impact of social media. 

And so this seemed like a project well worth dedicating time to help provide some of that information to the extent that we were able.

Ethan Zuckerman:
Before we get into 2016 versus 2020, which is sort of the next question I want to ask,

Social Science One was a bit of a controversial effort. Facebook gave a bunch of what we might call permissioned data. It released data to accredited researchers in a fairly limited sense. You had to apply to be within the project.

There were some concerns that the data that Facebook released was not complete. And in fact, some academics have come in and pointed out that it didn’t meet the criteria that they needed to have a representative sample of data. And in fact, it was only through research that they figured out that the data might not be all that Facebook had represented it to be.

Were there any concerns about following on from Social Science One, a project there’s been a decent amount of hand-wringing about?

Talia Stroud:
I mean, I think that there’s an evolution in terms of how we do this research with social media. And I think Social Science One was one endeavor to try to do that, with strengths and weaknesses, many of them that you’ve shared here. And I see this as kind of a next iteration in an ever-evolving way of thinking about how it is that we do research on social media.

And I think that in this project, we really tried to think through how would we do quality assurance, for example. And, you know, with data sets getting ever more complicated and massive, I think that the potential for error does exist. And, you know, that’s why we need things like replication. That’s why we need researchers that are critically looking at the data that are produced by these sorts of projects. It’s why we’ve really made an effort in this one to make sure that the data will be available for replication and extension. 

And so I think we learned lessons and I think we tried to apply them to this. I think that the next iteration beyond us will learn lessons from both Social Science One and the effort that we undertook to do something even better in the next phase. 

So, you know, I think we appreciate where this started and tried our best to create a project that represents this next step. 

Ethan Zuckerman:
2016 is an election that was really surprising to a lot of people. It brought Donald Trump to the White House. It also, I think, really spun the field around to thinking about mis- and disinformation, not only as a critical aspect of our media ecosystem, but actually as an electoral strategy, whether implemented by actors in the United States or by international actors.

I think if we look at 2016 from the internet scholarship point of view, this question of how important was mis- and disinformation, did it sway elections, that seems to be the narrative that came out of it.

2020 was its own crazy election. It’s the only US election that has resulted in physical insurrection. But I’m not sure that it has as clear a narrative for those of us who study the internet space. I think we all understand it now in terms of January 6th. You’ve been working on this since 2020. What, for you, are some of the top-line takeaways about what we now understand about social media and the US election in 2020? What are some of the big lessons learned there?

Talia Stroud:
Yeah, so I’ll start by saying that when we started this project in early 2020, doing this kind of research requires you to do some forecasting, right, to say, okay, what do we think are going to be the main issues, so that we create surveys that center around them.

And when we started at that point, we decided to focus on four major areas.

The first was political polarization. That had been a hot topic since 2016 and indeed many, many years before that. And I do think that that is something to focus on when we think about 2020.

The second was political participation, including turnout and how people were choosing to vote.

The third was exactly where you started, which is misinformation, knowledge, and information: what people think about and believe.

And then the fourth and final one that I think really speaks to the events of January 6th is democratic norms and confidence in democratic institutions.

Those were the four anchors for the studies that we conducted. What we really wanted to understand is: okay, what happens when we change things about the algorithm, or what happens when we look at the type of political news URLs that are shared on the platform?

And so I think across all of those, I’d say that the high-level findings of what we looked at are, first, that algorithms are just extremely powerful in people’s on-platform experiences. And in some ways that’s obvious: of course we know algorithms are powerful. But I think the research that we did was able to offer more insight in terms of how algorithms are powerful, what goes on inside that black box.

I think that the second important finding is how significant ideological segregation is on these platforms, on Facebook specifically, which is where we analyzed this in terms of political news exposure.

And then I think the third finding is an interesting one, which is we examined a number of popular proposals for how you might change social media. And we did this for a period of three months around the election. And we found that even though we did create significant change in the content that people saw and their on-platform experiences, these changes did not sway people’s political attitudes.

And so I think that combination of findings represents some new information about what happened with social media in the context of the 2020 election.

Ethan Zuckerman:
So it’s really interesting that these findings are so much about the algorithms, because I actually think they take us in many ways to a question that’s pretty central for me, which I’m gonna phrase in terms of permissioned and unpermissioned research.

And I’ll show my cards here. I have been doing unpermissioned research for years, which is to say, I collect data sets whether the platforms want it or not. And my major project, Media Cloud, doesn’t ask permission. It indexes huge amounts of media and lets people draw conclusions from it.

What you’re doing is permissioned research. It has to be done with cooperation from Meta. And precisely that condition, being able to manipulate the algorithms so that they work in different ways, is something for which you absolutely need the platform to participate.

Why is it so important to be able to do that form of permissioned research? Are there trade-offs in doing that?

Talia Stroud:
It’s a great question. 

The work you’re doing with Media Cloud, let me just give a huge amount of thanks on behalf of so many of us that have made use of that.

In this research, if we want to understand causal relationships, if we want to understand that this change in the algorithm produces this effect, the way to do that methodologically is to do an experiment: to change some part of the algorithm and then measure what effect it has.

And as you said, researchers don’t have any access to that unless you’re collaborating with the platforms. So if we want to understand causal relationships, this is the way that we can do it with platforms like Meta, like Facebook and Instagram. And, you know, there are some trade-offs here. We were given a large amount of access to data and to envisioning the studies in this instance.

The criteria that we had were that it had to comply with legal obligations that the company had, it had to protect user privacy (you know, these are not surprising things), it had to be feasible, and it had to be within the context of the 2020 election. So we weren’t looking at just anything on the platforms, but we actually had quite a bit of latitude to choose what sorts of topics and things we studied.

But in the broader landscape of things, this is one company, out of so many different social media companies, that elected to do this in one election, in one country. So I think that one of the conditions here, or the drawbacks of permissioned research, is that this sort of research relies upon companies deciding that they want to do it. And we have lots more examples of companies deciding not to do it than we have of companies deciding to do this.

Ethan Zuckerman:
Right. And I’ll actually take a moment here, because Facebook bashing is a pretty popular pastime. But I will note that for those of us researching in this space right now, the company formerly known as Twitter has become the new bête noire because it has shut off its API unless you pay extortionate rates, is blocking scraping, and is really not providing ways to study it.

I don’t love the way that Facebook makes it possible to study, but I’m incredibly grateful that there are people like you who are willing to jump through some of these hoops and sort of work within it. And it’s clear to me that what you found is quite fascinating. 

Let’s drill first into this question of just how powerful the algorithm is. One of the first papers that came out looked at an experiment where some subset of users got what many of us advocate for, which is the reverse chronological feed. Don’t give me the algorithm, just give me everybody that I’ve subscribed to. Don’t screw around with what I’m paying attention to; just give me what I said I wanted when I signed up.

What happens when we turn off the algorithm and give people quote unquote default Facebook? 

Talia Stroud:
Yeah, it’s fascinating, because this is a proposal that many people have suggested: that it would be great if we didn’t have algorithms that prioritize content, that we instead just allowed people to see the most recent content in their feeds first.

When we do this, the amount of time that people spend on the platforms, both Facebook and Instagram, goes down. So you see people spend less time. They’re less engaged with the content that they do encounter. And it leads people to different places.

So we actually are able to detect substitution effects, by which we mean people substitute the time that they used to spend on Facebook and Instagram with something else. And we were able to do this because a subset of our participants gave us permission to also look at their web browsing behavior at the same time. So that allowed us to find out that when people saw the chronological feed on Instagram, mobile users then used TikTok and YouTube more, and Facebook mobile users turned more to Instagram. So you see people migrating to different social media platforms as a consequence of switching to this chronological feed.

And then the chronological feed really did change what content people saw in their feeds.

It increased the amount of political content people saw. It also increased the amount of content people saw from untrustworthy sources. So it had changes in terms of what content people saw on both Facebook and Instagram.

And then, as I mentioned before, in terms of attitudes, we didn’t see significant shifts in people’s levels of polarization, in how knowledgeable they were, or in their self-reported political participation. Even though the chronological feed changed a lot of what they saw in their feeds and changed how much time they were spending on Facebook and Instagram, this didn’t translate into attitudinal effects.

Ethan Zuckerman:
So let’s dive into that question of attitudinal effects, because there’s been some pushback on this work. I’m gonna summarize an argument; it is not my argument, but I’m going to summarize it as an argument.

This research in some ways is a dream for Meta because it says, yes, we’re very powerful, yes, we’re capturing a lot of attention, but we are not shaping political opinion. So please lay off and don’t blame the crumbling of democracy on us.

One response is to sort of say, it’s three months. We know that changing people’s political attitudes is very, very hard to do. We know, as you just said, that we’re bathing in this stuff. And most of us have been bathing in this stuff for at least 10 years and on multiple different platforms.

Did you expect that you would see attitudinal changes on a single platform over the course of three months?

Talia Stroud:
We did. I mean, this was an empirical question, to try to figure out whether this would have effects, right? And even in the research literature to date, there are a lot of mixed findings. There are some studies where they expose people to very short broadcasts on television, and that results in a change in their attitudes. So I think it really was an open intellectual question, an academic question, to find out whether or not this was the case.

And as you’ll see in the paper, we hypothesized that there would be some effects here. But we want to be super upfront about the caveats as part of this research. Yeah, this was only done for three months. And although by social science standards three months is kind of a long period of time, compared to the studies that are done in the span of half an hour in a lab or online, it still is only three months. So we don’t know whether this would have produced changes over a longer period of time.

I think it’s important to take into account that this was conducted during a heated electoral period. So that’s just another thing to be mindful of. And Facebook and Instagram have been around for a long time. So we’re looking at what happened in the moment in 2020.

We can’t do the study where we look at what would happen if people never, ever had access to Facebook and Instagram. It would be a fascinating study. I wish we could do that, but we can’t do that.

And then I think the other thing to just keep in mind is what you intimated, which is that this is one source of the many, many sources that people access and use. And I think that this shows that that did not have an impact on people’s political attitudes.

But I also want to mention that there were other findings. So for instance, when we were looking at ideological segregation, I think that paints a picture that, you know, there are some aspects of the way that the platforms facilitate that cocooning, or I don’t know the right word, but they give people a way to look at news content that’s primarily used by other people who share their political ideology.

And so I wouldn’t characterize the findings across the four studies as being one-sided in any way, even though it is the case that we didn’t find these effects on political attitudes.

Ethan Zuckerman:
Let’s dig in a little bit to that question of ideological segregation. 

So one of the big findings across all of this is that there are enormous amounts of ideological homophily. If you are on the left, you are mostly encountering content from the left. If you are on the right, in particular, you are mostly encountering content from the right. What sort of cross-ideological content consumption did you find? And how much was that affected by algorithms? Are the algorithms pushing us further into echo chambers, or are they trying to pull us out of echo chambers?

Talia Stroud:
Yeah, it’s a great question. So what we were able to analyze is all the political news URLs that were on the platform that were posted at least 100 times on Facebook. And from that we find exactly what you say: many political news URLs were seen and engaged with primarily by conservatives or by liberals, but not both.

So the story here is one of the poles, not one of a great center that’s bringing everyone together; that’s not what we found in the data. And I think it’s hard to say from this analysis whether this is all algorithm or whether some of this is user behavior, because users are, of course, selecting who their friends are. But I think the study paints a really interesting picture, because we find higher levels of ideological segregation associated with pages and groups. One of the really important components of this research is that we’re able to look separately at different surfaces on Facebook, and there’s higher segregation on pages and groups than in content posted by users. So I think that some of the aspects of Facebook are leading people to more ideologically segregated spaces.

And that could be algorithmic, right? There are algorithms that suggest what groups you might follow and what pages you might like. There also might be some personal selection there. You are also deciding what pages you’re going to choose to follow and what groups. So I think that it’s a mix of the two.

And in the paper, the analysis looks at: what could you potentially have seen? What did you actually see? And what did you engage with? And there is a trend of increasing segregation as you go through those levels, especially for those with high political interest. So I think that shows a mix of algorithm and user.

But I think it shows that there’s something about segregation on this platform that is in excess of what we’ve seen in prior data sets looking at web browsing, for example. Segregation in web browsing is around 0.1, and the segregation levels that we find here are from around 0.3 to 0.55. So this is quite a substantial increase in segregation.

Ethan Zuckerman:
Which is interesting because there’s a way in which the way that Facebook is structured and the social structure of Facebook actually encourages some cross-ideological ties. You know, a decade or so ago, I was working on my book, Rewire, about this idea of could you retune a social network to help people out of their filter bubbles?

And my dear friend and colleague, Judith Donath, said, “You know, my Facebook is actually probably my best set of politically diverse content because I went to high school with a whole bunch of people who have gone to the far right, and even though I’m on the far left, I have social reasons to stay in touch with them.”

I end up referring to this as my Uncle Bill effect. You know, I’m going to follow my Uncle Bill on Facebook, whether or not I agree with anything he posts because he’s my uncle, and even if he’s very ideologically far from me. 

So if I’m getting you right, it sounds like people may have a large universe of ideologically diverse content they might encounter, but they actually encounter content closer to their own ideology, and when they choose what to engage with, they engage with an even smaller ideological universe. You would say that that’s a combination of choice and algorithm.

Is there any way to pull apart those two factors there?

Talia Stroud:
It’s tough to pull apart, and we definitely aren’t able to do it conclusively in this paper.

I think that it’s fair to say that there’s something about the algorithm there, and there’s probably something about personal choice as well. The other thing that I’ll point out is that in that study we really did focus on these political news URLs, but in another study that we did, we were looking at like-minded content on the platform.

There we were looking at whether the people you are friends with are politically like-minded, and then we were looking at the groups and pages. And in that study, we see that around 50% of the content that is in people’s feeds comes from like-minded sources, and only around 15% is from cross-cutting sources. So I think that gives a little bit more context to the ideological segregation findings: there is an Uncle Bill out there for a lot of people, but the preponderance of things that people are seeing on the platform does come from like-minded sources, whether that’s users, groups, or pages.

Ethan Zuckerman:
Did that change in cases where people were suddenly viewing in reverse chronological order? It’s clear that in reverse chronological order, they got more news content. In many cases, they got more unreliable news content, which is really an interesting finding, ’cause it does suggest that work that’s been done on tuning algorithms to protect against quote-unquote fake news may be effective.

Was the algorithm breaking ideological segregation or is there no evidence on that?

Talia Stroud:
So we’re able to do this analysis on Facebook but not on Instagram, because we aren’t able to categorize ideology in the same way on Instagram.

So for Facebook, when we switched people to the chronological feed, they saw more content from moderate friends and sources with ideologically mixed audiences. So that was under the chronological feed condition, not the ranked algorithm.

Ethan Zuckerman:
So we’re not able, because of the conditions of the study to essentially say—I know that we’re not able to say on Instagram—but on Facebook, we’re not able to say that the algorithm is challenging echo chambers.

Talia Stroud:
We’re able to say that when you have a chronological newsfeed, you see more content from moderate friends and ideologically mixed sources.

Ethan Zuckerman:
Got it, okay. Are we able to say anything about extremes? I mean, I guess the answer is if we’re seeing more moderate friends, then we’re seeing fewer extremes, is that right?

Talia Stroud:
And I just urge caution with the idea of extremes, because we’re just looking at left and right. And there could be people who are on the left or the right who aren’t very extreme. And we didn’t get into that level of detail when we did this analysis. But certainly there’s something about the chronological feed that’s leading to more content from moderate friends, and the inverse is also the case: when you’re in the ranked algorithm, you’re seeing less content from moderate friends and sources with ideologically mixed audiences compared to the chronological condition.

Ethan Zuckerman:
I would say the armchair hypothesis that many people have about these algorithms is that they are optimized for engagement. And you find some support for that. It turns out that people engage quite a bit more, they spend more time on the platform, they’re less likely to leave and go to TikTok or YouTube when they are in the algorithmic feed. 

It seems like one possibility there might be that moderate content is not actually all that interesting, whereas content that is less moderate, shall we say, may be more interesting and more engaging, which might give the algorithm an incentive to put more of that forward.

Talia Stroud:
Totally possible. I think it’s worth thinking about how many things change there, though, because it’s not only moderation that changes; we also see more political content when we switch to the chronological feed. So this could also be that people are like, “Oh, politics. I don’t want to see that.” So I think it’s a mix of things, which highlights how complex these sorts of changes are. Just changing to chronological has all sorts of implications for untrustworthy sources, for the type of partisanship you see in your feed, for political content overall.

And I think that this research shows the complexity of solutions that have many, many effects on the content that people see and frankly have effects in terms of the people that are putting the content there, right? Because it can have dramatic effects on those producers of content.

Ethan Zuckerman:
One thing you and I have both seen over the last decade or so is that as researchers try to understand what platforms are doing, we often see the platforms in turn responding and changing behavior. So there was a wave of interest in the rabbit hole hypothesis on YouTube, this idea that, as Zeynep Tufekci put it, “If I search for running, it will push me towards ultramarathons. If I search for Hillary Clinton, it will try to get me to enroll in the Communist Party.”

That, I think I understand in retrospect, was probably a real phenomenon for a while. And I think YouTube changed its algorithm at some point. And so we see evidence of it in early studies, and then we see less evidence in later studies. It’s possible that this research, which let me just say is incredibly well done, really thoughtfully written up, is being published in some of the best journals out there, is going to influence how Facebook, Instagram, and other Meta properties present their work. 

One real possibility from this, by the way, is that Meta may decide that it doesn’t want to be in the news business. We’ve already seen Meta do that in Australia and in Canada in response to local legislation. We’ve heard figures suggesting news represents less than 5% of its revenue.

Do you have a sense for if you were within Meta, what are policy changes you might contemplate coming off of this research that’s been published?

Talia Stroud:
It’s a really great question, and I should just be super clear about this: to this point, I’m not aware of what Meta is doing in response to this research. I haven’t had conversations with them. They haven’t shared anything with me at all.

I hope that with this research they’re really thinking about what exactly it is that their products are doing. I hope that they’re thinking about the benefits of doing research like this in the first place, because it gives information, and it does so in a public way so that people have access to it, whether they’re academics or policymakers or practitioners. So I hope that it has that effect of leading them to think internally about the importance of sharing data publicly. And I hope that’s an industry-wide effect, not just within Meta.

But I think some of the findings here might lead to some thinking, right? So there are levels of ideological segregation here. How is that happening? What are the products that lead to that? Is that problematic in some way? Are there ways that that could be changed?

So I hope that these studies at least lead to some contemplation inside the platforms of what’s happening there and things that could be done differently.

Ethan Zuckerman:
Can you give us any sort of sense of coming attractions as far as work that is yet to come out? I realize that there are probably all sorts of constraints around what you can and can’t say. But is there anything that you’re particularly excited about that you can talk about, or is that firmly in the cannot-talk-about-it space?

Talia Stroud:
No, it is not. We pre-registered over 12 different studies. And I should actually say something about pre-registration, which is a really important feature of this research.

We tried to be really thoughtful about what it meant to be academics working with an industry partner. And so that meant that we pre-specified what we were going to analyze before we had any data. We uploaded these to date- and time-stamped websites so that then, when we’re publishing, we’re making that available too. So everyone can say, “Oh, look, they decided they were going to analyze this before they did anything. Now here’s what they analyzed, and they did all the things they said before anyone knew what the answers were going to be.”

And so we pre-registered over a dozen studies, and of the ones that are coming out, I’m excited about all of them. But one that has been kind of interesting is one where we paid people to deactivate from the platforms, either from Facebook or from Instagram. These people were asked not to access these platforms, and indeed, we disabled the account, with their permission, for the period of the election. And then we were examining what changed.

Did their behaviors change? Did their attitudes change? So that’s an interesting one. And that one I always kind of smile about, because in the course of doing this study, when we were first soliciting, asking people if they wanted to participate, you’d see some Instagram posting about it: you got an email telling you they’d pay you to be off the platform, which is an odd—forgive the use of the word meta here—but a meta moment of seeing your research being discussed in—

Ethan Zuckerman:
I’m really curious to see whether you ask people questions about subjective well-being and whether people report happiness being off the platform or so on and so forth. Is that gonna be within that data set?

Talia Stroud:
We were confined to looking at the election, so this was more of a political study. There are some questions that kind of touch on that, but I think the Allcott et al. study that was published a few years back, where they did some deactivation on Facebook, at a different scale and with some different details than what we did here, did find some increases in subjective well-being.

I can’t remember the exact way they labeled the variable, but it was something along those lines when people deactivated. Our focus, though, is far more on the political consequences of deactivation, but that study, I think, is one to reference.

Ethan Zuckerman:
We’re heading into 2024. It is likely to be an election with immense amounts of mis- and disinformation because several of the candidates on both sides of the aisle are serial fabricators. 

We are heading into an election where many of the tools for unpermissioned research have been disabled. We no longer have the Twitter API. It’s gotten much harder to study Reddit. CrowdTangle is not as robust a tool as it used to be.

We are also entering an environment where researchers are coming under suit, where we’ve seen platforms sue researchers over the research they’ve done. We are also seeing researchers receiving what I’m sure are completely well-meaning congressional inquiries about the work that they’re doing. It is creating a really scary and really chilling environment.

Two questions here. What are you as a social scientist asking about social media in the 2024 election? What do you wanna know? What do you wanna find out in this coming election? And how are some of these changes in the environment changing how you think about doing research in this field?

Talia Stroud:
So I think as we look to 2024, as much as possible, we need to be doing similar types of research to what we did with Meta. I’m not aware, though, of whether that’s going to be taking place on any platform, which should lead all of us to ask: what is happening here?

We need to have some sort of way that data access is made available to the research community, so we can at least understand what’s happening on these platforms. I’m concerned about that. So I encourage anyone listening to think about how it is that you can do some of this research, and to gather the data that you’re able to gather.

So, you know, I think 2024 presents all sorts of new challenges. We haven’t mentioned AI yet, which is such an elephant in the room right now, because AI could be used to do so many things at such a scale. And I’m very nervous about that.

I’ve followed pretty closely some of the things that Katie Harbath has been saying about all the elections taking place in 2024. It’s an unprecedented number of elections. And I think that it just makes me nervous. Are we prepared for all of the threats and possible turns in an environment in which we have no access to data on some of these platforms, based on their terms of service and based on the way in which they’re structuring data access? So I’m nervous.

I think we need this sort of research. I think if we don’t have it, we will be very disappointed that we didn’t get that sort of access.

Ethan Zuckerman:
It really is an amazing and fascinating moment. And I think part of what’s so interesting about it for me is that, as we are now heading into 2024, you’ve given us this really rich set of research and questions that we really want to be answering about the next election. Whether or not we’re able to do that, whether or not there’s another opportunity like the one that you’ve had in cooperation with Meta, it’s not clear that we’re going to get there. And even if we do, it’s also clear that there are other platforms that we’re simply not going to be able to research going forward, which is problematic.

Is there a particular research question that, coming out of this experience, having done all this work with all this data, you find yourself saying, I really want to know this now?

Talia Stroud:
Yeah, there are two that we’ve really been thinking about. 

One is: how is it that content about the legitimacy of elections and other democratic institutions gets transmitted, and how is it that that has become such a defining issue? The questions and the things we’ve been looking at there can extend beyond social media. I think this is something that’s become part of a lot of our media ecosystem: when these institutions are seen as legitimate and when they aren’t. And to me—this is going to sound like an overstatement, but I think I might actually believe it to this extent—I think that democracy kind of hinges on understanding that and figuring out how to repair what’s happening right now. So that’s one.

And then the second one is that we’ve become really interested in thinking about, and you’ll actually really like this one, how connection happens. So how is it that people can say things in digital forums that bring people together rather than divide them? And are there ways to detect that? Because if I look through the research to date on what it is that we’re developing algorithms for, a lot of it is to detect the bad stuff. And that’s really good. It’s super important that we’re able to detect misinformation, that we’re able to look at hateful content.

But I think that there is also some utility to understanding where it is that we see really rich, productive, good conversation happening. And so we have a project that we’re calling Connective Posts, where we’re really trying to figure out: what does content that brings people together look like? Where is it? Can we understand why it occurs in some places, at some times, and among some people more than others?

So those are the two I think that I’m really thinking about of late.

Ethan Zuckerman:
This question of “can we not just look at the bad stuff? Can we also look at the good stuff?” feels like a wonderful place to end on all of this. 

Dr. Talia Stroud, what a pleasure. Thank you so much for this overview of this wonderful and rich data set. We’re gonna be talking with a bunch of other people about these studies. It’s fascinating what we’re learning from them, fascinating how they’re getting conducted, fascinating to think about where we’re going, heading into 2024.

Thank you so much and thank you for the work that you do.

Talia Stroud:
Thank you so much for having me on. Such a pleasure to be here.