80 *Slaps Roof of Algorithm* You Can Fit so Much Taste in This Thing with Nick Seaver

Photo of Nick Seaver next to his book Computing Taste: Algorithms and the Makers of Music Recommendation
Reimagining the Internet

Do Spotify’s algorithms make a listener’s music taste, or does taste make the algorithm? Nick Seaver embedded himself as an ethnographer at a music recommendation software firm to learn about the very real way very specific people influence the algorithms that power our automated world.

Nick Seaver directs the program in Science, Technology, and Society at Tufts University, and his fantastic book Computing Taste: Algorithms and the Makers of Music Recommendation is out now on University of Chicago Press.


Mike Sugarman:

Hey everybody, welcome back to Reimagining the Internet. I am your producer, Mike Sugarman, and I am joined today by Nick Seaver. Nick is an anthropologist. He is a professor in the Department of Anthropology at Tufts, and he directs the program in Science, Technology, and Society there. He just wrote a really fascinating book, Computing Taste: Algorithms and the Makers of Music Recommendation. It’s an ethnography of the people who build music recommendation systems. So that’s, for example, when Spotify puts a track after the track that you just listened to: someone made that. A lot of people made that, and Nick embedded himself with them. Nick, it’s a pleasure to have you.

Nick Seaver:
Thanks for having me on, it’s great to be here.

Mike Sugarman:

So from what I understand, you basically showed up at a lot of people’s workplaces and watched them build music recommendation systems. But I think I caught an implication somewhere in there that you also interned at a company as a resident ethnographer. So tell us more about this.

Nick Seaver:

Yeah. So this was basically my PhD dissertation for an anthropology degree. I went into that program thinking, I’m interested in music and automation, in the relationship between technology and culture. I want to see how people working at that intersection think about what they’re doing somewhere in the world. And I thought, okay, music recommendation, this is about circa 2010, music recommendation sounds like an interesting place to do that. That was as much of the idea as I had. And to pursue that question, what I ended up doing was basically trying to observe people who were working in this domain.

Now, one thing you will discover if you try to observe people working in settings like tech companies is that they don’t always want to be observed. Or more specifically, the lawyers at their companies don’t want them to be observed, because they’re worried about what might get out into the world from this kind of work. So it took a really long time for me to get anyone at a company to agree to let me be there for any protracted period of time. So for a long time, what I was doing actually was interviewing people. I was often in their offices, but not for long stretches, interviewing people regularly. I interviewed some of the same people over a period of several years, actually, to hear how things were changing for them and, you know, to earn their confidence, as we say.

And I was going to a lot of research conferences, venues like the Association for Computing Machinery’s Recommender Systems conference, which is the kind of annual conference for people building recommender systems. So I was going to conferences like that, hanging out with people, interviewing them, talking to them. And then eventually, yes, I did do a three-month internship at a music recommendation company that I call Whisper in the book, where I got to see things, quote unquote, from the inside for a bit.

Nick Seaver:

I think it had just come out in the US. So I think it had existed in Sweden. My general timeline for this in my head, and I don’t remember the specific dates of anything anymore, but I feel like circa 2010, or a year or two before that, is when you get this batch of on-demand, large-catalog music streaming services that operate in the US and actually have deals with most of the major record labels.

So this is the kind of moment when it starts to feel plausible to people that this is actually happening. You’ve got people for a decade-plus before that point trying to make this work, but it’s usually one label or some weird catalog. And this is where we really start to see the Spotify model, as people usually think about it now, coming into existence. So there was a Spotify. I don’t think I had a Spotify account when I started, but it was just starting then. So it was an interesting moment to be doing it.

And it was also interesting because algorithms, the way that we talk about them now, were not really an object of public discussion the way that they are now, right? Nobody talked about algorithms. I didn’t think I was doing a project about algorithms.

Mike Sugarman:

One thing that I think was emerging around that time, I remember there was a lot of conversation in the early teens about something you briefly mention in this book, a company called Echo Nest, which I think comes out of the MIT Media Lab as an early startup that basically built an algorithm for finding similarities between different sorts of music, based on, I believe, a variety of attributes, including things like rhythm, tonality, tempo, instrumentation. And then I’m sure there was a bunch of metadata stuff in there also, right? Such as year, record label, geographic location.

Echo Nest, I remember, at the time was something people thought was really interesting and really cool, because it’s like, wow, with enough data you can draw all these associations. This is kind of around the same time that Pandora was mildly popular, if it ever actually became popular. Pandora, which basically, you started listening to a song and it would make a whole radio station based on that song. But Echo Nest, you know, ultimately gets acquired by Spotify, I believe.

Nick Seaver:

That’s right. 

Mike Sugarman:

Or at least the two guys who started Echo Nest got hired by Spotify, which is maybe functionally the same thing.

Nick Seaver:

The whole company got acquired, for sure. 

Mike Sugarman:

Okay, the whole company got acquired. And, you know, I think that marks the beginning of Spotify as we know it today, where Spotify is not just a place you can find a bunch of music, but where you might be discovering music with the help of these algorithmic recommendations. But let me actually ask you: how did you find music online at the beginning of your research? Like, before all of these algorithmic recommendations really took hold. You were a PhD student; how were you finding out about what you wanted to listen to?

Nick Seaver:

Oh, that’s a great question. I think you’re getting at some really interesting things here in all these levels of questions, which is that it’s hard to do the history of these systems for a lot of people. It’s hard to remember what things were like before the system that feels so naturalized now, right? Like, oh, what was it like before Spotify? One concrete example, sorry, this is not answering your question yet. But there’s this idea people have that a recommender system is intrinsically a creature of the large on-demand catalog. The idea that the Spotify model of all 40 million songs, or whatever it is now, and a recommender system go together feels so natural to people now.

And it’s hard for them to remember that recommender systems existed before then. The first modern recommender system for music comes out around 1994, and nobody’s got on-demand streaming services. It’s a recommender to tell you what CD to buy at the store. And it’s wild to me, because it feels so natural now to think, of course, the reason we have recommender systems is because, you know, 40 million songs, what are you going to do about it? And so on and so forth.

How did I listen to music? I was a hoarder of digital music at the time. I came of age at a time when it was easy to acquire MP3 files, and I did that. And so I had a lot of them. Until one horrible summer day, I remember it vividly still, my external hard drive crashed. And I had not backed them up, because, you know, easy come, easy go. And they were all gone. All of it. All of it gone. I still have somewhere a PDF printout of the iTunes library, like the text of it. So I can see, frozen in time, what that library was in my glory days. I believe it’s called itunesglorydays.pdf somewhere on my computer.

But yeah, I was a hoarder of digital music, and, I don’t know, I really started to enjoy the on-demand streaming stuff in part because it let me replicate my hoarder tendencies in another venue. Except the problem was I was also a hoarder of, like, weird music, and it was always upsetting to me when I couldn’t find something that had previously been in my hoarder’s catalog on the cleaned-up Spotify catalog.

Mike Sugarman:

Yeah, and there’s a whole other conversation that we could probably have about the subtle politics of Spotify, which is: if it’s not on Spotify, it doesn’t really exist. That’s, I think, the implicit threat that Spotify likes to make to up-and-coming musicians and artists: you have to be on here if you actually want to do this. Though that is maybe a different conversation. I think a conversation that would be more pertinent to your book is, you know, what you describe as this kind of digital hoarding, downloading all these MP3s. I mean, that was me too. It was a lot of people I knew. It was a lot of people I knew who were into music, honestly most of my friends. It was a lot of the people I interacted with on the internet at the time.

And again, there’s another conversation to be had about whether it was good or not for musicians. I would refer listeners to our episode with Damon Krukowski of Galaxie 500, where he says, “Hey, I love when people buy our music. They would usually buy our CDs because they got to try it first.” There’s an argument to be made for that. But, you know, hoarding music suggests that there was a lot of music online at the time. You might call this a kind of information overload, but I don’t think that’s anything that you’re describing. I don’t think it’s really anything I experienced.

If anything, it seems like access to all of that stuff was a way to kind of make sense of the vast amount of music that has been recorded over the course of the past hundred years and exists in the world. But you describe in this book that a lot of the drive to build these recommendation systems, going back to the ’90s, is this idea of information overload. The internet has just put too much information in front of us; people’s brains, they’re just not built for it. It’s kind of this weird pseudo-neuroscience stuff. So I would love for you to give me that, like, 60-second history, but I’d also like to hear your take on the information overload stuff. Whether it’s valid, whether the people you did ethnography with even really believed in it at the end of the day.

Nick Seaver:

Oh my gosh, yeah, okay. So I have a lot of opinions about this. This actually didn’t come up that much for me in the dissertation version; this is the thing that really grew as I was revising the dissertation into a book. I was thinking about this question of why recommender systems exist. Because I think a lot of people who are critical of recommender systems, which is a reasonable thing to be, have this idea that they’re unnecessary, right? We shouldn’t have these, we don’t need them.

And I was thinking, okay, well, just concretely, why do the people who make them think that they’re worth making? And there is one answer that you will get. You’ll get a lot of answers eventually, but the one answer you will get first, I can almost guarantee it, from basically everyone, is this information overload reason, right? There is too much music, there’s too much whatever.

So I got really into this question, and I’m just kind of a contrary person. So I had this approach to it, which was: is that really true? Is that really a problem, the too much music? And if you look, there’s a really interesting literature on the history of information overload. And we’re talking here about two literatures. One is the history of information as a concept, which is not that old, right? A hundred years-ish in its modern sense.

And the other is this history that goes beyond the concept of information to find examples of things like, you know, scholars writing in the early modern period, after the invention of the printing press, saying, “Oh my god, there are so many books. What are we going to do about all these books?” And it’s really interesting to see this same concern get repeated over a pretty long historical time span. Not because it means that the people in the past were wrong. I think there’s a tendency to show a picture of people reading the newspaper on the Metro circa 1900 and say, look, these people are absorbed in media just like we are on our phones, and that means that every worry about media was wrong. That’s not necessarily true, right? The guy who was worried about there being too many books in 1600: maybe there were, and maybe there still are. Maybe there still are too many books. He could totally be right.

But what I was interested in was the way that this information overload idea functioned as what we would call, in anthropology, a myth. And this is where it gets a little bit arch. But in anthropology, when we call something a myth, we don’t mean that it’s a lie or a fake thing that needs to be debunked. What we mean is it’s a kind of story about the way the world is. And it does not matter whether that story is true or false. There is no fundamental truth or falsity to this claim.

And I started to think about claims of information overload not as literal factual claims, like, there’s actually too much stuff, but rather as a kind of story about what the world is like. And if you look, you’ll find really interesting versions of this story where people are like, yeah, we exist in a world that’s fundamentally overwhelming, and humans have developed all sorts of techniques to try to help us cope. And those include algorithmic recommender systems, but also, like, society itself. The division of labor is a way for us to let other people do things on our behalf, because, oh my God, what would you even do if you had to do everything yourself?

The answer is you would do fewer things, and you would do them for yourself. And so I think that was useful and interesting for me to pursue, because at the end of the day, why would we think of having access to 40 million songs on Spotify as a kind of problem to be solved, rather than as a resource that you can use or not use?

Mike Sugarman:

This is something the journalist Joshua Minsoo Kim talks about a fair amount: that what Spotify opens up, a real boon, is access to all of this international music that we would never really have known existed before.

So it’s actually a great gateway into, like, global pop music. You know, you can really go deep and learn a lot about Afrobeats, K-pop, you name it. I guess my question for you, Nick, is: what came out of people building these tools with the assumption that there was a problem to solve, and what did they end up creating with these music recommendation algorithms based on those assumptions?

Nick Seaver:

That’s a great question. So I think the point that I end up at with this information overload idea is that it’s not enough to say that a certain amount of material is overwhelming, right? Usually stories of overload are like, “There are 40 million songs on Spotify, and that’s the problem.” Those numbers don’t mean anything, right? What matters in overload is this relationship, right, between an archive and some sort of overloaded subject. And this is where I really pick up from this history-of-overload literature, where some people in that field call this a kind of, what’s the line that I quoted in the book?

It’s like an uneasy balance between desire and anxiety, right? You want the stuff, you kind of want all of it, and you’re a little nervous about all of it. Without that kind of desire to listen to everything, you don’t have that problem, right? Without that sense of, like, oh, I wish that I could have already listened to everything, it’s fine. And so you need that. You need what I call in the book this overloaded subject. You need this idea that the user is like that.

And so what does that mean in practice? Well, I should say that recommender systems can be designed a lot of different ways and with different kinds of users in mind, and they certainly are now. But I would say that the bulk of them are designed with the idea of a user who is too overwhelmed, with whatever it is, maybe it’s their life, to put a lot of effort into finding things.

So the assumption is that the typical user of a recommender system is someone who is not going to engage to a great degree with the interface. They’re not going to do the stuff that you were just describing, going in and hunting down some specific regional artist themselves. But, to their credit, they might be interested in that. They might want to go there. They just may not have the energy to do so, or may not realize they want to do so. And so often the goal of the recommender is described as that: helping people who don’t have that much interest in exploring the catalog find things in the catalog that they might not have otherwise found. That’s the sort of idealized mission of it. But I should say it’s a shift in the mission of recommender systems.

These early ones, like the one I mentioned from around 1994, that’s the Ringo system, which also came out of the MIT Media Lab, from Pattie Maes’s Software Agents Group, were really explicitly aimed at music enthusiasts, people who are really into music. And for them, it was figured as this kind of aid to exploration. You had to do kind of a lot of work to build your profile, your taste profile, we would call it now. And then the idea was, it’s worth doing that because you want it to help you find new stuff.

And importantly, the idea was not that you were going to replace the way that you found stuff with this, right? You were going to use this also. This was a fun thing you could do in addition to the other stuff you were already doing. And I do think, actually, that’s a significant point, because a lot of what we see as the negative externalities of recommender systems now, I think, derives from the everything-through-them-ness of recommendation. When they were being designed, even as recently as 10 years ago, they were not being designed to be the funnel through which all content flows. And everything changes once all content flows through the recommender, I think.

Mike Sugarman:

Yeah, that’s an interesting point. So something that I find myself thinking about weirdly a lot is this website that, you’d be surprised, some people who listen to this podcast still use, and they’re probably the younger listeners: Last.fm. So Last.fm, yeah, you laugh, right? You probably used it a lot because you had a big digital music library, and it was a great way to keep track of what you were listening to when you were, like, downloading.

Nick Seaver:

I love Last.fm. Yeah. 

Mike Sugarman:

Last.fm, as you may remember, had two features that don’t exist anymore. I don’t know if you know why those features don’t exist, but I can tell you real fast. One was something called Shoutboxes. Let’s say you were on the Animal Collective page of Last.fm. There was effectively an Animal Collective-devoted social network there, a group where you could talk about Animal Collective and anything else. I talked to someone a few months ago who met a long-term girlfriend through the Animal Collective Shoutbox, because they both figured out that they were at NYU.

Last.fm doesn’t have that anymore, I think because they didn’t really have the funds to hire moderators to run it. Last.fm also had a radio service. So again, Last.fm keeps track of everything you listen to. That’s a lot of good data about not only what you listen to, but what you like.

Nick Seaver:

Last.fm is so interesting, though, right? Because one of the things about Last.fm is that you have this scrobbling, this recording of all the music that you listen to, that was kind of cross-platform. And that was something that I thought was really unique about Last.fm: that you would have a play history that exists outside of the place where you’re doing the playing. And that is interesting to me primarily because of something weird about the very early designs of recommender systems.

So one of the most famous papers in the field of recommender systems, which, along with a few other papers, basically invents what we think of as collaborative filtering today, the sort of typical “users like you liked items like this” kind of recommendation, is about a system called GroupLens, out of the University of Minnesota in collaboration with an MIT researcher at the time. They had this recommender system for Usenet news, is what it was. And in their vision of this, they do a bunch of interesting things.

One is they talk about the possibility of what they call Balkanization, I believe, in that paper. So this worry about what we’d now call filter bubbles, this idea that recommender systems might isolate people into interest groups. This is in 1994, I think, an early paper, and they’re anticipating it. They also talk about the rise of implicit ratings. That’s what we see today as behavioral data: treating, for instance, how long you read an article as an expression of interest in the article, or how many times you listen to a song as interest in an artist, rather than asking you to, you know, rate this artist zero to five stars or give them a thumbs up or something.

And the other thing that they say, so those two things are very perceptive, those happened, is that they imagine that these recommender systems could be run by third-party organizations that they call Better Bit Bureaus, which is a very cute name. The idea is that you might actually have your recommender profile exist outside of the company that is providing the stuff that’s going to be recommended to you.

And Last.fm is kind of the closest thing that we ever got to that, which was, like, all they had, until they did some of the streaming stuff, was your taste. They had a record of your real listening history. And you could do fun things with Last.fm, like see another user and see that percentage match, right? To see how much your music was like their music. And if you saw a big match, then you could go in on your own and explore and find the things that they listened to that you don’t. Because if you have a 95% match, that remaining 5% might be really appealing to you. That is the same logic that’s in play in a basic collaborative filtering recommender system.
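That “95% match, explore the remaining 5%” logic can be sketched in a few lines. This is a minimal illustration of the idea, not Last.fm’s or any company’s actual algorithm; the similarity measure (Jaccard overlap) and all the user and artist names are hypothetical stand-ins.

```python
# Compare listening libraries, find the most similar user,
# and surface the part of their library you haven't heard yet.

def similarity(a: set, b: set) -> float:
    """Jaccard overlap between two listening libraries, from 0.0 to 1.0."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def recommend(my_library: set, others: dict) -> list:
    """Recommend the unheard artists from the most similar user's library."""
    best = max(others, key=lambda user: similarity(my_library, others[user]))
    return sorted(others[best] - my_library)

libraries = {
    "alice": {"Cocteau Twins", "Slowdive", "Galaxie 500", "Low"},
    "bob": {"Cocteau Twins", "Slowdive", "Animal Collective"},
}
mine = {"Cocteau Twins", "Slowdive"}
print(recommend(mine, libraries))  # ['Animal Collective']
```

Here “bob” overlaps with you more than “alice” does (2 of 3 shared artists versus 2 of 4), so the one artist in bob’s library you haven’t heard is what gets surfaced, which is exactly the user-user collaborative filtering intuition Seaver describes.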

But for a certain kind of music listener, there’s something much more appealing about going in and doing it yourself, and certainly about that kind of social network aspect. So when I was in the field, many people had worked at Last.fm, I should say, in these places, and many people were very nostalgic for Last.fm. And even for Napster, the original iteration of Napster. Not because, oh, music piracy was the best thing, but because it had this kind of social network quality, which was that if you found a song that you wanted to download in someone’s collection, you could then go and look at the rest of their collection, and you felt like you were navigating music via people in a kind of direct way. But people had a lot of nostalgia for Last.fm at the time. I think it got bought; there’s a series of acquisitions that kind of let it languish, and I think that was probably to blame for any weird accumulation and disaccumulation of features.

Mike Sugarman:

I think something really interesting that you are alluding to here is that Spotify is a place, and Spotify is not the only streamer that’s like this, Apple Music too, TIDAL, where all you’re supposed to do is live there to listen to and find your music. And weirdly enough, there’s no social interaction. I say weirdly enough because 10 years ago, it seemed like everything was supposed to be social media, right?

And I think part of the shift is Silicon Valley culture, where it’s not that everything on the internet is supposed to connect you to other people; everything on the internet is supposed to help you navigate the complex, demanding world, right? So just like how recommendation algorithms are supposed to deal with information overload, something like Instacart is supposed to help you deal with the fact that you don’t have time to shop for groceries.

So for the people you studied: what was the world they found themselves creating, and how did they end up feeling about that world?

Nick Seaver:

When I think about the changes that have happened in this sector since I started doing this work: people talk about algorithms now. There’s a lot of concern about algorithms. Nobody really talked about algorithms, certainly not in public discourse, when I started. And people think these systems work, which is also really different, right? So 10 years ago, people would laugh at you if you were like, “Oh, music recommendation. They know your taste and they give you music.” They’d be like, “Oh, shut up. They don’t actually work. They’re just kind of gimmicks.” But now, people really talk about recommender systems like they’re so good that the so-goodness of them is a problem, right? A lot of these filter bubble concerns are predicated on systems that work. And if the system doesn’t work, if it is noisy, if it’s constantly throwing you stuff that doesn’t really fit what you need, that’s actually better in some sense for some of these filter bubble kinds of concerns, right? You’re going to get broken out of a filter bubble a little bit, because the system is not good enough to keep you in it. It’s going to keep giving you stuff that doesn’t quite fit.

And so that’s a real change, I think, in the way people think about these systems now. And I think it’s forced the people building them, who, ironically, you would think would be the big proponents of, yeah, these systems are good, they’re objectively good, to change the way they think about their systems. Because I think, historically, a lot of the people who’ve built these systems think of them as being, for lack of a better word, neat. They’re just cool things. Like, isn’t this fun? Look, you can do this. They don’t have panopticon goals of saying, I know all of your taste deeper than you do. Not really. I think they just think, hey, that’s cool. That works.

But now they work in a sector that’s getting suffused with this: we know everything, this system knows you better than you know yourself. And it’s a different game. It’s a different game to play versus the one that’s like, I don’t know, show me something weird, I’m looking for a new way to find something that isn’t just listening to the radio or hearing a recommendation from my friend. But it’s not like that anymore.

Mike Sugarman:

You actually describe a lot of situations where personal taste within the office, even personal taste as a kind of bias in how you build technology, influences what people see. And we know for a fact that there are curators behind the scenes at Spotify. We know for a fact that there are certainly PR people who interact with Spotify and make the case that their artist’s music should be featured in whatever playlists, and kind of internally embedded within the algorithm itself. And I do think it’s worth asking, when you’re talking about recommendation algorithms and the people whose taste informed these things, the kind of human decisions that go into them: how much does that influence recommendation as we know it today?

That is absolutely something I think stretches to conversations about TikTok, which is eerily accurate algorithmic television. It seems to know exactly what you want to watch.

Nick Seaver:

Yeah, so, insofar as I have a contribution to the kind of academic critical study of algorithms, it’s primarily this one. We used to talk about, and I should say we don’t do this very much anymore, thank God, or maybe we’ve just displaced it onto AI or something, but we used to talk about the algorithm as a non-human kind of intelligence that was going to make things happen in the world. And the problem with algorithmic recommendation was that it was algorithmic, right? An algorithm could never understand taste. It could never really understand that human stuff, because it wasn’t human.

And I had an issue with this, which had sort of two sides. One was that I think that overstates our special human uniqueness, because I think that humans have taste and make music with all sorts of technologies, and the idea that technology and culture are intrinsically opposed is a goofy one in either direction.

But the other is that if you look at the development of algorithmic recommender systems, as I did, there are humans all over the place. It is clear that this is a very human enterprise. And so let’s talk about mechanisms, like how this could work.

Back in 2014, when I did the bulk of my work, certainly at Whisper, where I was, but also at most of the companies where I talked to people, there were not very lively user research groups. By which I mean, user research groups were doing things like interface design questions, right? Like, where should we put the buttons, that kind of thing. The recommender system development itself, you know, how do we decide what signals to analyze to do what, was not really the province of any user research. That was engineers interpreting data from user interactions.

And so I talk about this in the book, that this gave engineers a kind of privilege to say what users are like, because they had this kind of objectivity of data, right? Like, we know; we don’t need user researchers to go ask people questions; I can see what you did. There’s this very strong ideology of objectivity in data. In any case, here’s what happened when these engineers would work. Say you want to change your playlisting algorithm. You would change the playlisting algorithm, and you could check it immediately, right? You wouldn’t necessarily go to users. You would put in your favorite artist and see if what came out made sense to you. They would call these sniff tests or smoke tests sometimes. And you can’t do that little check with music you’re not familiar with, right? You have to use something that you’re familiar with. And I wouldn’t say this is the end of testing in these settings.

Of course, there’s other testing that happens. But this is the pervasive form of testing, right? If I am designing a system and I’m trying to fix it and I’m kicking the tires on it, I’m going to do that kind of thing where I put in an artist that I know and see if the output makes sense. I’m going to do that a hundred times compared to the one time that we do some test with a user. And so there’s this pervasive way that people have to rely on their own kind of cultural intuitions to check their work, because there’s no objectively good recommender, right? There’s no objective criterion that you’re meeting. There’s no point where you know the recommender is like, “Yeah, yeah, that’s a correct recommendation.” That doesn’t exist. So people are using their own stuff to figure out what works.

So I will say that’s something that I am very confident was the case in 2014. I don’t know now. I think that there is more collaboration with user researchers today. But I wouldn’t be surprised if there was still a kind of ubiquitous sniff testing culture going on. 

Mike Sugarman:

OK, so I think you’re saying something really simple that I think is so important to keep in mind about, I mean, sure, it’s about how technology is built. It’s also just about how our society is built, right? There are certain people who end up having a position of power over something, and they start to build their little corner of the world in their vision.

So one of these is, you know, I like Cocteau Twins. I think this algorithm should probably feed me Slowdive. I’m going to make sure it does. I’m building an algorithm that does that. Another is, I’m an engineer. I think there’s way too much information in the world. I think we need ways of sorting through this information. And the way I’m going to solve this problem is by getting more and more and more information about users, right? Like you were saying, which songs they listen to for longer, maybe where they are and what the weather is that day, that sort of stuff. And since you mentioned AI, I think that’s kind of the defining feature of this AI moment that we have, that not only is there so much data, but there’s always more data to find. And there are always models that you could make from that data. It’s this weird kind of feedback loop situation. And indeed, it’s created a sense that social media is in a place where there’s too much to pay attention to. There is too much demanding your attention at all times. That’s, I think, a dominant stream in how at least one strand of social scientists, especially in Europe, tends to think about social media: the attention economy.

Where does the information overload thinking that starts about 30 years ago bring us today? How are we seeing this?

Nick Seaver:

Yeah, so that’s another great question. I think I would trace some of this information overload stuff. I mean, obviously, you can trace it older. I think the 30 years ago sort of recommender system version of this is a very compelling one. I think you go a little bit farther back and you find this sort of cybernetic information science version where I think this is where you really see the contemporary sense of this come together. 

The attention economy idea is usually traced to Herbert Simon, this 1971 piece where he says, “We’re not in an information economy. Economics is the allocation of scarce resources, and information is not scarce. What’s made scarce by information? Attention is the scarce thing. Therefore, we should do an economics of attention.” Now everyone’s talking about attention as an economic object.

Yeah, I think it’s an interesting question. And this is where I’ve been going in my research since the book. I’m now working on another book somehow already. I don’t know why I decided to do that, but here I am. It’s about attention nowadays, about the sort of symbol of attention and the way that people use the idea of attention to make sense of the world. And I think you can see that in Herbert Simon, this idea where he says, okay, well, what’s the obverse side of information? It’s attention, sure, why not? It sounds very plausible. And I would imagine that most people who are in a position to listen to this will find themselves recognizing that they live in a world where people describe all sorts of stuff in terms of attention, right? Problems, goals, solutions, ideals, values, virtues. Attention is all of these things.

When we talk about attention, we usually pick up either this psychological mode, which is, you know, oh, it’s about our brains, it’s about what it’s doing to us mentally, this sort of classic, like, what’s happening to my niece who’s using TikTok too much. Or we pick up this economic one, right, which is like, oh, companies are incentivized to do this. Attention is subject to these economic pressures in a way that’s bad for it. And it’s this kind of wicked pairing of psychology and economics. What we’re not usually doing when we talk about attention nowadays is talking about what attention means to people.

By which I mean, what does that word mean? What does that symbol refer to? How does it fit in a broader system of values? Why are we talking about everything in terms of attention? And so right now I’m trying to take the anthropologist’s privilege of stepping out of some of that discourse where I’m worried about my attention to blah, blah, blah, to say, “Well, what even is attention anyway? How does attention work?”

And so I’m working on this project where I’m interviewing people who are doing all sorts of things, research in relation to attention, and in particular, trying to come up with different techniques for measuring attention. Because if you look at the variety of ways that people try to measure what attention is, you will see that they are not measuring the same thing as each other. They are doing very different things.

And if you look very closely at a measure like dwell time, for instance, which is a common attentional measure for how long you’re on a website, that measures attention very differently than a measure like eye tracking, for instance, where we might use a camera mounted onto your computer, or onto glasses that you wear in a lab, to see where you look. Where you’re looking with your eyes and how long you spend on a website, those are clearly different from each other. But we think of them both as attention.

And so I’m really interested in how many different things there are right now that go by the name attention, and how attention appears to have emerged as this really crucial object to defend, especially in sort of humane technology critiques of social media, but has a very poor definition. It’s not a very solid definition at all. That’s not a problem for me. I’m used to the idea that symbols have multiple meanings and people don’t agree about the whatever. It does seem like it’s a problem for psychology fairly often, though.

So I have an article that’s called “Seeing Like an Infrastructure,” which is about this in relation to music recommender systems, which says, you know, you want to understand what users are like, and you’re going to look through all the data that you have, but your data is infrastructure-shaped, right?

Your data is shaped like what you already had. You know what people listen to because you have to send them the music for them to hear it, and you have to keep a record of it because you have to pay people for it. Right, there’s a system there, it’s sort of built into the log already. And so to do anything else, you have to change it. And so I think for me, that’s the big question around a lot of these systems, right? And I mean, arguably the concept behind my first book altogether, Computing Taste, is that we often think about these systems as though they are trying to apprehend some object from outside of them, right? And in that read, Computing Taste is like, okay, computers are over here, taste is over there, the computers are going to go and eat up taste and spit it out in some gross form. But another read of this, which I kind of prefer, is to say that these systems are inventing the things that they are claiming to measure. They are helping to create these quote unquote “outsides.” So in that sense, computing taste means taste is being remade by computers.

Computing is producing taste. And I think what we see now is that what it means to have taste in a world full of recommender systems is different than it was before. And I wouldn’t go so far as to say that that’s bad, necessarily. It could be bad in certain aspects; it might be undesirable. But it’s not bad just because it’s influence, right? It’s inevitable that our taste is going to be shaped by the technologies that we use to apprehend music.

If you flash back in time 50 years, your taste in music is largely what records you are going to pick at the record store, which is a whole technology for distributing music. And it would be a different kind of thing than it would have been if you flash back 150 years, before there’s any recorded music, right? What’s your taste in music then? It is absolutely not a selection among a bunch of equally easy to access records. It’s a different thing. So it makes sense that taste will change over time. I think it makes sense that our concepts of attention will change as well, in relation to the techniques that are used to operationalize and materialize attention in technologies. And I think one of my privileges as an anthropologist is that I get to kind of look at that and say, “Okay, well, how is this happening now?” and to briefly set aside the question of, “Oh my gosh, what’s happening to my own attention span?”

Mike Sugarman:

Nick, that was, I think, a really wonderful way to sum up a lot of the complex things that you’ve talked about during the course of this interview. Thank you for that. Nick Seaver, author of Computing Taste: Algorithms and the Makers of Music Recommendation. It’s out now on University of Chicago Press. I highly recommend you read it. It’s super fascinating. Thank you so much.

Nick Seaver:

Thanks for having me.