
We’re always told algorithms are going to change our world. And they do, but it always seems to be for the worse. Do we have any alternative to simply breaking the machines that have run afoul of our values and needs? We’re thrilled to welcome Ben Tarnoff back on the show to talk about his call for a “third wave of algorithmic accountability.”
In this episode, we discuss Ben’s recent keynote talk “Democratizing the Internet: Platforms, Pipes, Possibilities,” which he summarized in a recent Substack post, “The technologies of all dead generations.”
Ben joined us on the show last year to talk about his book “Internet for the People.”
Transcript
Ethan Zuckerman:
Hey everybody, welcome back to Reimagining the Internet. I’m your host, Ethan Zuckerman. I am thrilled to have back with us on the program, Ben Tarnoff.
Ben is a man of many talents. He is co-author with Moira Weigel of 2020’s Voices from the Valley. He’s the author of the excellent Internet for the People—which came out in 2022—and co-founder of Logic Magazine.
Somehow writing and technology critique is not actually what he does for a living. Ben is a tech worker and actually makes his living in the field day to day. But he’s also one of the very smartest, most thoughtful people in the space of technology criticism and thinking about how technology intersects with our society.
Ben, welcome back to the show.
Ben Tarnoff:
Hey, and thanks so much for having me. And thanks for that very kind introduction.
Ethan Zuckerman:
Well, Ben, I’ve wanted to have this conversation with you for several weeks now. You and I were at a conference at the Annenberg School at UPenn, and you gave a closing keynote there that I haven’t been able to stop thinking about.
It was a talk called “The Technologies of All Dead Generations,” which is a reworking of a quote from Marx.
And if I understand correctly, you think that there is a shift going on right now in how we talk about algorithms specifically, and maybe technology more broadly, and it’s a shift about waves of technology criticism.
Can you talk to us about this idea of first, second, and third waves?
Ben Tarnoff:
So the law professor Frank Pasquale has a piece from a few years ago in which he presents what he calls the two waves of algorithmic accountability. And I think anyone who has followed in any fashion the evolving culture around tech critique and tech reform will recognize the patterns that Frank is discussing here.
The first wave involves identifying problems with software and seeking to mitigate them. So an example of this might be the analysis that came out of the efforts of various activists that showed that facial recognition software didn’t do a very good job with folks with darker skin. And a mitigation that advocates of the first wave might propose is to improve the software so that facial recognition programs could better recognize faces with dark skin.
Ethan Zuckerman:
And that is, in fact, quite literally what happened. One of the people working on that was Joy Buolamwini, who was doing her doctorate with me. She released a paper called “Gender Shades” that documented exactly those problems.
And in “Gender Shades,” she and Dr. Timnit Gebru came up with a new benchmark, which had more darker-skinned faces and more female faces, and was a better way of testing algorithms to make sure that they did well with darker-skinned faces.
Ben Tarnoff:
Exactly. So in contrast or in some cases as a complement, the second wave of algorithmic accountability is more interested in asking the question of whether these systems should exist in the first place.
So you might think of the second wave as being animated by an abolitionist impulse. And here a good example would be the efforts in municipalities around the country to pass bans on the use of facial recognition software by police agencies. These campaigns have been successful in a number of different places.
And so here, again, the emphasis is not so much on improving the software so that it can work better, but actually abolishing software systems or at least severely constraining the spheres in which they can operate.
Ethan Zuckerman:
And once again, the fun example here is that Dr. Buolamwini now runs the Algorithmic Justice League, which is in fact very much involved with refusal-based stances, abolition-based stances, on a number of uses of AI technologies by law enforcement.
And we should probably mention that this is a critique that builds on a huge amount of work done around the carceral space, in particular the carceral tech space, and on this idea that some of these systems simply can’t operate ethically, and that improving them is a way of further entrenching them rather than challenging their fundamental injustice.
Ben Tarnoff:
Precisely. So there’s a chronology here, and I think Frank is keen to point this out as a kind of periodization: both first and second waves are still with us in some form, but the second wave really flows out of the first. There is a kind of evolutionary aspect here.
So in the talk I proposed provisionally what I have seen emerge as a third wave of algorithmic accountability. And the third wave in my view would be about building alternatives.
If you think of the first wave as harm reduction and the second wave as abolition, the third wave is trying to build completely new kinds of systems. And this is an impulse that is visible in a number of different arenas.
You could think of Mastodon, which has generated quite a lot of interest since Elon Musk’s acquisition of Twitter, as being in many ways the paradigmatic third wave project, an effort to create a new kind of social media. But again, this is an impulse that is visible in a number of other domains as well.
Ethan Zuckerman:
So let’s embrace Mastodon as third wave, at least just for a moment here. One of the things that you said in the talk is just such a nicely crafted sentence, and such a nicely summarized idea, that I’m going to quote it directly.
“Critique is how we make the map of the terrain the creativity of the third wave must traverse.”
So can you unpack that a little bit? These first two waves of critique, the first harm reduction, the second refusal or abolition, and now this third wave of alternatives: how are those first two waves of criticism around social media mapping the territory that Mastodon now has to traverse as a third-wave project in this space?
Ben Tarnoff:
Maybe the best way to start is to point out that there is usually an emotional component to people’s participation in the third wave, in the sense that there is a feeling of relief in getting to build things, in getting to go create projects, create alternatives, in getting past critique instead of just critiquing, which we’ve been doing for quite a while.
In fact, I think there is, you know, a quite understandable level of exhaustion with tech critique, which seems, in many ways, to kind of circle the same territory over and over. We can break out of that circle and finally start writing code, start designing, start building real alternatives.
So my starting point was sharing that feeling of relief, but then also being suspicious of it, wanting to be a bit cautious about where that feeling came from. It satisfies this very familiar American obsession with being constructive, with being practical, with not just pointing out problems, but trying to come up with solutions. And that appeals to me.
I grew up in the United States. I work in an industry that is very focused on solving problems, so to speak, although of course it creates problems of its own. Nonetheless, I felt a certain apprehensiveness about that. And that’s where I felt we needed to bring critique back in.
And what I mean by this somewhat gnomic sentence about critique being the way we make the map of the territory that creativity must traverse is that as we sit down to build our Mastodons or our platform cooperatives or our cooperatively owned broadband networks, whatever these new projects are that model some different set of values we’re trying to impose upon the internet.
When we sit down to do that, we are never working from a blank slate. We are always working within the constraints of our technological inheritance. And that impinges on the range of potential that is available to us. And so critique is a way of kind of processing those constraints and highlighting them so that we understand the medium in which we’re operating.
Ethan Zuckerman:
And that inheritance, right, that set of technical constraints that comes before us, is where we get the Marx quote that you are repurposing for the title of this talk: that sort of weight of history that weighs on our shoulders, except you’re talking about the weight of previous technologies.
Let’s get concrete. You have some examples of how Mastodon, as what you identify in some ways as a paradigmatic third wave project, is both very cognizant of some of those limitations of previous projects and also constrained by those previous projects.
Ben Tarnoff:
So Mastodon, on the one hand, as those who have used it will be familiar, is quite a conscious effort to model a different kind of social media. Its creator says this in interviews, that he wants to inject a different set of values into our experience of social media.
So what are some of these values? Well, one of them certainly is anti-commercialism. There’s no advertising. Most of the moderators of Mastodon instances are volunteers. The open source project itself relies on contributions.
And another important value is what we might call devolved content moderation, which is probably the best known feature of Mastodon. Its decentralized nature makes it possible for instances to establish their own rules for how content will be moderated.
And in some cases, there have been some quite exciting experiments with democratically managed content moderation.
Now, on the one hand, that sounds pretty promising. On the other hand, if you’ve ever used Mastodon, you know that it looks a lot like Twitter, to the point that concepts are directly borrowed and renamed: “toots” instead of “tweets.” I’ve now forgotten what they call retweets. There are all of these funny terms.
Ethan Zuckerman:
Although a funny thing, Ben, is that one of the things that is very fundamental to Twitter’s vocabulary is the quote tweet, where you retweet someone and then comment on it.
You know, Eugen Rochko has basically said, I’m not going to let you do that. I think that’s bad for a number of reasons. And therefore, I’m not going to have it.
So it’s this very complex inheritance. Mastodon is very clearly aping Twitter, but it’s aping Twitter with some obvious critique attached: we’re going to allow some of the behaviors and not others. We’re going to make certain changes that people have asked for, like giving people a lot more space to comment.
Something you say very explicitly within the talk is that the third wave recognizes not only that technology has politics, but that it is going to embed its own politics within its technology. Can we talk about, you know, what Langdon Winner might say about Mastodon with that in mind?
Ben Tarnoff:
So the theorist Langdon Winner has this famous formulation of technical artifacts having politics. And what he means by that is that a technology by virtue of its technical composition, not solely the social relations in which it’s embedded, but actually how it’s constructed as an artifact can alter the distribution of social power in quite direct ways.
So an example that Winner uses, which is I think a broadly familiar one, is the case of Robert Moses and the overpasses he built on the Long Island parkways being too low for buses to pass under.
And this is an anecdote that appears in Robert Caro’s biography, and has in fact subsequently been challenged and complicated somewhat, but it nonetheless serves, I think, as a useful illustration of Winner’s point, which is that in this case a fairly crude artifact, like an overpass, can embody and enact a political project of exclusion.
Now, the question for us, I think, becomes more complicated, because Winner is concerned with how technologies embody and enact politics through their technical composition. And in the third wave of algorithmic accountability, we face this constructive task of trying to actually build technologies of what we might call liberation.
But we are forced to do so within a technological inheritance of domination. And this is the question: you are coming up against the constraints of the medium, and how do you work against the grain?
Ethan Zuckerman:
And one of the things that you talk about in working against the grain, one of the ways that you talk about working against the grain, is the ways in which whole paradigms, whole systems of technologies, are responsible for some very complex social changes.
One of your key examples here, one that I hadn’t heard discussed in quite this way, was how the steam engine changed the geography of industry in England. Can you talk about that, and about the carbon drift that comes out of it?
Ben Tarnoff:
I think the steam engine is a useful example for illustrating both what it means for a technology to have politics.
And then what are some of the limitations or frictions, let’s say, that technological politics faces—what Winner calls “drift.”
So to drill down with a bit more detail, the steam engine is invented at a time when the primary source of power for cotton mills in the United Kingdom was rivers. You had water wheels that were powered by rivers.
And this was in fact a quite efficient technology, but it had a problem, which is that workers were not in abundance in the areas of the countryside where rivers tended to be. So workers had to be brought in from the more settled areas and housed in what were called colonies.
What the steam engine made possible was a portable form of power so that mills could be sited where the workers lived. And thus you could draw on a much deeper labor pool and drive labor costs down. Now, this is analysis that is presented in wonderful detail by Andreas Malm in his book, Fossil Capital.
And it’s a good example of how not only do technologies have politics, but the politics of a technology is deeply involved in determining that technology’s viability. Because as Malm points out, the steam engine does not represent a more efficient form of power, but it does enable a new balance of class forces between the owners of the cotton mills and the workforce.
Ethan Zuckerman:
Right. There are a number of ways in which the steam engine is significantly inferior to the water wheel. You have to go and get fuel to power it. It has this tendency to explode.
There are some real, serious drawbacks associated with it, but it ends up catching on because the positives are so significant in terms of the labor aspects: the fact that you can now hire anybody, rather than having to pay a premium for local workers or find a way to bring people out to where your mills are located.
Ben Tarnoff:
Precisely. And this also gives us a good example of what Langdon Winner calls drift. And by drift, he means a set of unforeseen requirements that a technology can impose. In the case of the steam engine, as you pointed out, you need fuel.
In order to have portable power, you need a portable source of fuel, which of course is at first coal. And this initiates the hydrocarbon dependency that eventually rewires the entire economy and has the quite significant ecological consequences we are facing now.
So drift is what you might think of as the entropy in the politics of technology. It’s what I call the quantum of chaos, this element of unpredictability.
And on the one hand, that is demoralizing for third wavers of algorithmic accountability, because as difficult as it already is for them to try to do politics through technology while facing this hostile technological inheritance that they have to work within, now they also face drift.
Even if they manage to embed their values in a technical artifact, drift may intervene and produce a set of unforeseen consequences that erode or perhaps even obviate their initial political vision.
But it’s important to point out, and I think that this was the substance of a conversation that you and I had, Ethan, in Philadelphia, that drift also presents us with an opportunity. Because if drift affects our projects, it also affects the technological projects that we are trying to undo. It also affects this technological inheritance.
So it creates cracks in this inheritance that we can enter into and use as points of subversion.
Ethan Zuckerman:
So let’s return to that conversation.
I think what I ended up saying to you when I heard you present this idea, if I remember correctly, is that Winner’s key example of drift in many ways is nuclear power and a drift towards bureaucracy. Essentially, it’s the recognition that creating a power source that is so powerful but potentially so dangerous, that can be repurposed for military purposes, for terrorist purposes, and that even used correctly can be an incredibly deadly technology, tends to put us inside these incredibly bureaucratic, protective, hierarchical systems. Even if you didn’t have the bureaucratic state, you would sort of need a bureaucratic state to be able to produce nuclear power and do it in a way that is safe.
And so drift might be essentially a drift in organizational structure that a technology demands, in the same way that the steam engine drifts towards this particular dependency on hydrocarbons and ultimately on the production of carbon dioxide.
I then wondered whether the space of drift for something like Mastodon is attention: the ways in which human attention is commodified, packaged, and passed around to advertisers.
And I’m wondering if there’s a way that Mastodon can kind of find a way to escape from that logic and perhaps from that drift associated with it.
Ben Tarnoff:
It’s a really interesting question, and I think a very difficult one to answer in a speculative way. You know, I think again, a lot of the answers here are going to be produced through experiment, through the experience of building alternatives and, frankly, seeing what happens.
But I think you’re articulating a reservation I have about the attempt to build better social media, however we define that: decentralized, democratic, and so on.
The reservation is that the social media paradigm, which of course in its contemporary iteration has been generated through these large corporate platforms, may itself be beyond saving. And this is something I struggle with, because I don’t know what the better paradigm might be.
And I also acknowledge, I think something again we discussed in Philadelphia, Ethan, that developing a completely different way for people to connect online isn’t going to happen overnight. This necessarily has to be an iterative process. And frankly, the fact that Mastodon looks like Twitter is what makes it possible for people who are accustomed to Twitter to try something else.
If I came up with some really interesting but completely out-there way of connecting online, it might not generate as much interest as something like Mastodon. So I don’t mean to suggest a kind of maximalism that doesn’t allow for the type of iterative, experimental process that is called for here.
But on the other hand, I worry that we are imprisoned within an enemy paradigm and that we’re tinkering at the margins, not breaking out in the way we need to.
Ethan Zuckerman:
So let’s go to the big question there, because I think maybe that’s the big question as far as these three waves.
Wave one essentially says: here’s a critique; we’re going to try to figure out how to make this tool better. And as you’ve pointed out, you and I are both tech industry veterans. We’re very used to looking at things that don’t quite work and trying to fix them to one extent or another.
The second wave says, wait a second, it’s much bigger than that. It’s not just a matter of taking this potentially harmful system and reducing the harms associated with it. Sometimes the moral, the ethical thing to do is to refuse. It’s to say, we’re not going to do this thing at all.
The third wave says, okay, we’re going to do it, but we’re going to do it informed by that critique and informed by that refusal. I agree with you, it feels good. It’s literally what I’m doing. My lab is literally looking at social media and saying, can we do a version of this that is deeply conscious of these critiques?
Are we fooling ourselves with this, Ben? I mean, are we being technosolutionist? Are we sort of letting ourselves off the hook of that second wave critique too easily?
What I take from that second wave critique a lot of the time is that if you really want to fix these systems, you need to fix giant social systems. If you really wanted to fix a computational system that makes it more likely that a person of color is going to be wrongfully arrested, you’d actually need to go fix criminal justice in the United States, which is pretty massive.
Is it possible that the third wave is just a retreat to the first wave because it’s what we want to do? How do we make sure that the third wave is really the synthesis of a dialectic, not just turning away from that much harder critique?
Ben Tarnoff:
I think that’s a great question, very well put, Ethan. And I think this begins with the acknowledgement that there is always going to be a very strong temptation for people with a technical frame of mind to go build it.
You and I share that background, and I think we take pleasure in that sort of thing. It is fun to build things that work and to see them work. There’s a kind of craftsman-like pride in that as well, which I respect.
So I think we need to acknowledge that, among the types of people who have the predisposition and the background that would incline them to build these alternatives, there is going to be a temptation to just build, to not think about the difficult issues, to not let the frictions of critique bear on our work. So that’s perhaps the first step.
Another element that I heard you raise is this question of, well, how much of the work that needs to be done is technical versus, let’s say, political.
And this is a way of thinking that I associate with that second wave of algorithmic accountability, which is rather than going in and improving the facial recognition program, let’s pull on the policy levers and get it banned.
The real innovation, so to speak, that we need is at the level of politics, whether in the realm of official politics or the many spaces of politics that operate in our daily lives, and not in sitting down and writing some more code. And again, I respect that impulse, but I think it’s incomplete.
I think it’s bending the stick a bit too far because politics also operates within the code.
And that was my hope with this talk and with bringing Langdon Winner back into it, is that when we talk about the political work that needs to be done, it’s not just pulling the policy levers. It’s not just engaging in various forms of social struggle. It’s also struggling over the political projects that are embedded at the level of software.
So I think there is an opportunity here for a synthesis, a synthesis that acknowledges that we need both creativity and critique, and that when we talk about the space of struggle, the space of politics, we’re talking about not just the streets, not just the ballot box, not just, you know, the various spaces in which we put our bodies, but also code, also the technical inheritance.
Ethan Zuckerman:
So maybe it’s a recognition that we can start with the problem and look for what looks like a shallow technological fix, then realize that the shallow technological fix doesn’t work and that what we actually need is deep political engagement.
We actually need transformation of systems. But once we acknowledge that, and, I think critically, commit to that work and agree to engage with it, there’s a recognition that in that transformed world, we’re probably still going to build a technical system.
We would hope to build a much better technical system, a much more sensitive and aware technical system, but a technical system that’s trying to address this new world. And it’s a new world that’s probably going to have a technology associated with it.
Ben Tarnoff:
Joseph Weizenbaum, who was an early AI pioneer, if we can use that term, and became a leading critic of AI, is a figure who’s had a lot of influence on my thinking. And Weizenbaum used to say that optimism and pessimism are about probability: an optimist believes that there’s a better than 50% chance that something good happens; a pessimist believes the opposite.
But hope is about possibility, the possibility that the possible may occur even if it’s not probable. And I like that framing, because I think what we need to do is to retain our hope in the possibility of social miracles. That sounds a bit theistic, but I think ultimately a kind of faith is required.
And by social miracles, I mean these occasions throughout history in which masses of people come together to create a better world. We’ve seen that again and again. And the reality, of course, is that in normal times, so-called normal times, these miracles don’t occur.
But every now and then a window of possibility opens and we’re in this miraculous period. I think we saw miraculous things happen in 2020 with the George Floyd uprising.
Certainly, we’ve seen it at various junctures during my lifetime, although not as many as I would have liked. I hope that’s not too sentimental.
Ethan Zuckerman:
No, there’s also a really intriguing possibility in there that social miracles could work at different scales.
I’ve got a lot of research in my lab right now about Reddit. And one of the things that I love so much about Reddit is that it’s not one thing. It’s tens of thousands of things. And I think some of those communities have accomplished social miracles on really interesting small scales. There are cases of transformation where the combination of a community and the technical choices they’ve made has led to really different behaviors.
And so I find myself wondering whether a social miracle always requires us to get 10 million people pointing in the same direction, or whether there’s the possibility of thousands of smaller miracles that collectively lead us in some positive directions, if we can figure out how to communicate the lessons learned from them in the process.
But I hope that’s in consonance with the spirit of hope over probability that you’re offering.
Ben Tarnoff:
I think it is. And the existence of those little everyday solidarities is what makes these broader visions of social transformation credible.
Stuart Hall has a wonderful line where he talks about how the idea of socialism was plausible to industrial workers of, let’s say, the late 19th century because the strategies for survival that they had developed, what today we would often call mutual aid, modeled in miniature the values of the future socialist society that they hoped to achieve.
In other words, simply by throwing rent parties, sharing food with their neighbors, developing these forms of cooperation, which were matters of everyday survival, they created a sense that a different world was possible: that if you could scale up those daily solidarities into something society-wide, you would end up with a different world.
And I think it’s not too much of an exaggeration to say that we see those everyday solidarities even on our very degraded internet today, even within the market-saturated corporate platform space that we all have to live in.
We see these everyday glimmers of a different way of organizing ourselves online.
Ethan Zuckerman:
Friends, he’s Ben Tarnoff. He’s an incredibly insightful, thoughtful, and optimistic voice about possible futures for the Internet.
We’re talking about a wonderful essay he has online, which is the talk that was given at Penn Annenberg. It’s called “The Technologies of All Dead Generations.” Ben, thank you so much for being with us on Reimagining the Internet.
Ben Tarnoff:
Thanks so much, Ethan. It’s always a pleasure.