64 Forgetful Advertising with Chand Rajendra-Nicolucci

How could we curtail one of the most ambitious surveillance operations deployed in human history? This week on Reimagining, our very own Chand Rajendra-Nicolucci explains his new paper, co-authored with Ethan, outlining a new model for online advertising that eschews invasive data collection.

Chand and Ethan’s paper “Forgetful Advertising: Imagining a More Responsible Digital Ad System” was published in a special issue of the Yale Journal of Law and Technology called “A Healthy Digital Public Sphere.”

Transcript

Mike Sugarman:

Hi everybody. Welcome back to Reimagining the Internet. This is your producer, Mike Sugarman. I’m filling in for Ethan today. The reason I’m filling in for Ethan is because we’re talking to one of our colleagues here at iDPI, Chand Rajendra-Nicolucci. He’s a research fellow. Chand is the co-author or arguably the main author on the Illustrated Field Guide to Social Media that we put out with the Knight First Amendment Institute last year. If you haven’t read it yet, you should go check it out on Knight’s website. It’s a really awesome PDF full of illustrations of birds and these great articles mapping the different types of social media online. I think it will become a print book soon. Chand, welcome to the show.

Chand Rajendra-Nicolucci:

Thanks, Mike.

Mike Sugarman:

So I actually have you on today to talk about a paper you wrote with Ethan. And Ethan is not interviewing you because you two have been working on this for the past year, and I’m confident that your conversation would be so high level that you might not even explain it to listeners very well. So I’m here to ask all of the dumb questions about this wonderful paper you just wrote. It’s called Forgetful Advertising: Imagining a More Responsible Digital Ad System. That was just published in the Yale Journal of Law and Technology in a special issue called A Healthy Digital Public Sphere. So, forgetful advertising. Chand, what is that?

Chand Rajendra-Nicolucci:

Yeah, so basically what we’re trying to do with forgetful advertising is offer an alternative to surveillance advertising and kind of the dominant models that Google and Apple in particular are basically advocating for and starting to impose through their control over various infrastructures online.

And so, in short, what forgetful advertising says is: let’s still be able to target ads online using information about users, but that information can only be gleaned from a single interaction between a user and a website.

So that means, for example, if you went to a newspaper’s website, they could target ads to you based on the article you’re reading or your location. Maybe they have some information based on your profile that they’re able to use, say your age or education level, or maybe they can infer that from your behavior in this interaction.

But the key thing that we’re saying here with forgetful advertising is that it can’t use anything from any previous interactions. So advertisers can’t remember or store any of your previous behavior or interactions online for the purposes of advertising. And we think that strikes a really nice balance between protecting people’s privacy and agency while also making it so ads are reasonably effective and take advantage of the things the internet makes possible.

So that’s kind of a high level summary of what we are trying to get at with forgetful advertising.
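To make that constraint concrete, here is a minimal sketch of what a forgetful ad selector might look like. This is our own illustration rather than code from the paper, and the select_ad function and inventory names are hypothetical; the essential property is that selection reads only the current interaction, and nothing about the user is ever persisted.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interaction:
    """Everything the ad system may see: this single page view."""
    article_topic: str               # e.g. the topic of the page being read
    location: str                    # coarse location of this request
    age_range: Optional[str] = None  # only if inferable from this interaction

def select_ad(interaction: Interaction, inventory: dict) -> str:
    # Target on the current context only: no user ID, no history lookup,
    # and nothing here writes to any per-user store.
    candidates = inventory.get(interaction.article_topic, inventory["generic"])
    return candidates[0]

ad = select_ad(
    Interaction(article_topic="travel", location="Chicago"),
    {"travel": ["airline_ad"], "generic": ["brand_ad"]},
)
print(ad)  # airline_ad; the Interaction is then discarded, never stored
```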

Mike Sugarman:

Let’s talk about surveillance advertising for a moment. I’m actually going to summarize something that was covered really well in a much earlier episode of Reimagining the Internet. We interviewed Tim Hwang, it feels like a long time ago. Honestly, it could have been last year. Time moves so weird. But at the time Tim joined us to tell us about his book, Subprime Attention Crisis, which I still think is one of the absolute most entertaining popular books about technology that’s come out in the past several years.

It’s a really interesting account of how the surveillance advertising model actually sucks. And it’s not just bad for users; it doesn’t just mean that Google has this huge record of everything you’ve done online when you’re signed into your Google account and is using that information all the time to sell you things. Frankly, surveillance advertising is also bad for the people buying the ads.

What Tim basically says in the book is that even though they’re using all of this data, targeted ads are not really targeted properly. And on top of that, Google basically uses the fact that it has all of this data to create these bidding wars that happen in a matter of milliseconds between various ad providers. And that’s really kind of the main reason Google keeps all this data. It’s basically saying: we have this precious resource, we can drive this bidding war, we can be in control of the ad auction, the ad marketplace.
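For anyone who hasn’t seen real-time bidding up close, here is a minimal sketch of the per-impression auction mechanic being described, written as a generalized second-price auction. The bidder names and bids are made up, and this isn’t any exchange’s actual API.

```python
def run_auction(bids: dict) -> tuple:
    """bids maps bidder -> bid in dollars; returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # Second-price rule: the winner pays the runner-up's bid.
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# One of these auctions resolves for every single ad impression served.
print(run_auction({"dsp_a": 2.50, "dsp_b": 2.10, "dsp_c": 0.75}))
# ('dsp_a', 2.1): dsp_a wins but pays the second-highest bid
```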

That’s really just one of the major issues with surveillance advertising. But you could imagine there’s some other stuff. Obviously it’s not very nice to be surveilled. But what does it mean when surveillance advertising doesn’t forget? Can you kind of walk us through the issues with that?

Chand Rajendra-Nicolucci:

Yeah, so we kind of bucket that into maybe two different categories. So one is your standard harms of surveillance, particularly related to people’s privacy. So there have been a number of really interesting articles written in the popular press, particularly from the New York Times. They had this whole series called The Privacy Project, where they did a really good job of illustrating what this kind of data that you can get from advertisers and the advertising ecosystem means for people’s privacy, and what the implications are.

And so they showed that, for example, they could track individuals at protests using this data. They could de-anonymize people online using this data. Some really serious stuff with implications for our civil liberties and our participation in a democratic society. The government is able to purchase a lot of data from digital advertisers, and because they’re purchasing data rather than collecting it themselves, there are a lot of loopholes in terms of what they’re able to do there, and that’s certainly a danger. But also, these private companies have access to these huge amounts of data, and that presents its own surveillance concerns.

So that’s kind of a standard narrative. People know that surveillance is bad. It has a lot of harmful effects, particularly when we’re talking about living in a free society.

But the second category of harms that we think come from surveillance advertising are harms to people’s agency. So when advertisers are using your past interactions and past behavior to inform the things they show you and target you with in the future, it has this kind of corrosive effect of trying to influence you to conform to those past behaviors, or whatever they think your previous behaviors and patterns were.

And so one example we brought up in the paper is somebody who is a recovering alcoholic and is trying to stay sober. Because of their past behavior, perhaps searching for alcohol online or even seeking out various groups or online communities, advertisers continue to target them with alcohol advertising based on that past behavior. And that makes it harder for them to become a new person, to change their behavior going forward, to become sober. You have this buildup of previous interactions that advertisers are using that really affects what they’re showing you, in ways that can have direct impacts on people’s lives, and in particular, the choices they make and how their life plays out.

Mike Sugarman:

We know from some really important work published by ProPublica and The Markup, some really important research done by [inaudible 00:09:22], that there’s this specific thing that algorithms are in fact really good at. Algorithms are in fact really good at reinforcing a lot of the structural issues that create discrimination in our society. It’s another way of saying it’s easy to make a racist algorithm. An algorithm that helps decide what bail should be set at, or whether bail should be granted, is going to look at the fact that our justice system disproportionately arrests and prosecutes Black people. So what the algorithm is going to say is: if this is a Black person, they are less likely to be offered bail, simply because Black people are less likely to be offered bail in general. It kind of creates this feedback loop.

Similarly, I think part of what you’re alluding to here is this really important reporting done by ProPublica a few years ago that looked into how Facebook ads were targeted, and found that it was in fact really easy to keep ads for available jobs from being shown to certain people based on a Facebook user’s zip code.

So if that’s a redlined zip code, a zip code that is historically the result of some sort of housing discrimination, where a certain group of people were pushed to move to a certain area of the city because they weren’t allowed to get mortgages in a whiter, more affluent part of that city, that zip code likely still has the same kind of segregation today that it did 30, 40, 50 years ago. And Facebook can look at that and basically give someone who is trying to advertise a job post the ability to say, “I don’t want to target those people.”

That’s how racism plays into algorithms. That’s how racism gets embedded. And that’s something that I think the forgetful advertising paradigm you two are suggesting takes into account. It says, look, advertising doesn’t need to run on this giant record of all the data about somebody. There are a lot of risks that come with that, and a lot of potential bias that plays into it. A paradigm for advertising that doesn’t have that huge compendium of where you’ve lived, what your habits have been, and who you are is potentially one way around those issues.

We know surveillance advertising can be discriminatory. Can forgetful advertising be discriminatory?

Chand Rajendra-Nicolucci:

Yeah. So I wanted to bring some nuance to some of what you were saying. In the example you gave about Facebook, yeah, they were allowing advertisers to target based on protected characteristics, including race. But also, like you said, there are proxies like zip codes. And we don’t necessarily address that head on in forgetful advertising. We are actually saying, hey, you can use people’s zip codes to target them as long as you don’t remember their zip codes over time. The distinction being: you can’t remember that I was in Chicago last week when you target me this week. You have to target me based on the location I’m in right now.

But I think a more interesting finding about Facebook’s ad system, one that relates to and draws out what forgetful advertising brings to the table, is this paper that found that even when those self-serve targeting characteristics and options are fairly neutral, so that people can’t target directly based on race or other discriminatory demographic characteristics, Facebook’s optimization algorithm leads to a lot of the same discriminatory outcomes.

Basically that means that Facebook has this algorithm that tries to match ads with people and optimize that process. And what they found was that even if an advertiser picks relatively neutral characteristics, that optimization algorithm, because it’s trained on a bunch of prior behavior, leads to discriminatory outcomes anyway, regardless of the selections the advertiser has made.
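A toy sketch of that mechanism, with entirely hypothetical numbers: even when the advertiser’s eligibility choices are neutral across groups, an optimizer that ranks by click rates remembered from skewed historical delivery keeps reproducing the skew.

```python
# Hypothetical historical delivery data: group_a was historically shown
# more of this kind of ad, so the model has more positive signal for it.
history = {
    "group_a": {"impressions": 1000, "clicks": 50},
    "group_b": {"impressions": 1000, "clicks": 10},
}

def predicted_ctr(group: str) -> float:
    s = history[group]
    return s["clicks"] / s["impressions"]

# "Neutral" targeting: the advertiser makes both groups equally eligible.
eligible = ["group_a", "group_b"]

# But the delivery optimizer still ranks by remembered past behavior,
# so group_a wins the impression every time and the skew persists.
print(max(eligible, key=predicted_ctr))  # group_a
```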

So that kind of points to what we’re trying to get at with forgetful ads, which is saying remembering all this stuff about people’s past behavior is the critical issue here. It’s what leads to all these problems.

And going back to the question about can forgetful ads be discriminatory, I think yes, because again, we’re presenting a higher level framework saying essentially you can use whatever you want to target ads as long as it’s gleaned from a particular interaction.

And so maybe that means that people discriminate based on the gender they infer from a single interaction, and that’s not great. And obviously we would want to think about ways to perhaps mitigate that, but we also don’t think that that’s necessarily the role for forgetful ads, and it wasn’t really what we were trying to get at with this paper, because a lot of that comes down to the choices that advertisers are going to make.

But one thing that forgetful advertising does do is it means that advertisers can’t build up over time all this demographic information about you to build these really sophisticated and constraining data sets that affect your privacy and your agency online.

Mike Sugarman:

So there’s something I think it’s worth pointing out here. There are a few different ways to enact change. There’s regulation. So for example, because of European regulation, websites now have to ask if you consent to cookies being stored. It’s why there’s an extra dialog on the majority of websites we use today that says, “Do you allow or deny the cookies?” You get to select which cookies you want. I think I, like a lot of other people, probably don’t think a lot about it and just hit deny, or for some reason select allow, but definitely don’t go through and check all the boxes. That’s a regulation example. Then there are norms. If, in some sort of bottom-up sense, people suddenly decide that they don’t want to see ads anymore, that’s something advertisers suddenly have to deal with. It’s happened, and we have popup blockers.

What you and Ethan are proposing here is a market intervention. What you’re assuming is that there could be a new product that’s developed, the forgetful advertising model, and a type of market actor that becomes interested in it, either because they view it as more responsible, or because they view it as more accurate, or because, like Google, they think: oh wow, we are phasing out cookies, and since you can’t store cookies in Chrome anymore, we’re going to need a new model.

What is the incentive for people to adopt the forgetful advertising model for that market intervention to be effective? And is it something that you hope will be widely adopted, or do you think it doesn’t have to be widely adopted in order to do things like signal a shift in norms, or suggest to governments, hey, these are things that you can do to regulate the advertising market better? What’s your ideal version of how this pans out?

Chand Rajendra-Nicolucci:

So in the paper, we explicitly compared it to the fair trade coffee movement. The way we’re thinking about this is that there are these organizations that we call values-led organizations: for example, public broadcast organizations, mission-driven companies, places like the New York Times that have explicit values they seek to uphold. We found in conversations with, for example, European public broadcasters that they really want alternatives to using Google’s ad system or Apple’s advertising system, because it quite literally conflicts with their charters. They have explicit protections in their charters about upholding their members’ privacy and autonomy, and it’s quite clear that surveillance advertising conflicts with that directly.

And so the way we’re thinking about it is that forgetful advertising could be a paradigm that could be taken up by organizations like that. It also serves as a useful provocation to the dominant model that is out there right now, perhaps showing that, hey, we can do digital advertising a different way, a way that respects people’s privacy and autonomy, and it’s reasonably effective and is being used by this list of organizations. And it’s a way to push back against the narrative that there’s only one way to do digital advertising online, that doing it any other way would be catastrophic for companies and advertisers online. So yeah, that’s kind of how we’re thinking about the theory of change here.

Mike Sugarman:

So that is your theory of change about how forgetful advertising would work. I would say there are some other market actors that have a theory of change.

So as I alluded to, Google has basically decided: we’re not going to use the cookie anymore. Chrome, at some point in the relatively near future, will not store third-party cookies anymore. Google, I think, has some leverage in deciding whether or not we use cookies, since a lot of advertising goes through the Google AdSense marketplace.

Apple, potentially in a hostile move against Facebook specifically, but one clearly affecting the overall data landscape, has decided that people who have iPhones and other Apple devices should get to explicitly consent to any data trackers that pick up what they’re doing on their iPhone just by using an app. So if you do something within Facebook’s app that would create some sort of record of who you are and what you do online, and would therefore become a data point that Facebook collects on you, Apple gives you the opportunity to at least find out about what data is being collected and also to opt out of it.

So should we take any comfort in these other interventions by these market actors and these other developments? I mean, cookies disappearing, Apple giving you control over what pieces of data are being stored by Facebook, those seem like positive developments. Whether or not we like the fact that those companies have so much control over advertising and surveillance and all of that stuff, it seems positive. But Chand, what should we make of that?

Chand Rajendra-Nicolucci:

Yeah, so I think we agree. I think we agree that certainly some of the changes they’re making are positive. But it’s important to home in on the kinds of changes they’re making and the implications of those changes.

And so essentially what the changes Google and Apple are proposing and implementing do, is they really restrict third party collection and targeting for digital advertising. And so that means they’re kind of going after the data brokers who compile data from all these different sources and bring it together, and then they can trade that or sell it to different advertisers to use as they target.

And that’s really important. A lot of the really significant privacy harms and surveillance harms have come from these third parties who a lot of the time are kind of shady, having access to these huge combined stores of data that the previous system through cookies and the mobile advertising identifier enabled.

So Google’s getting rid of cookies, Apple’s getting rid of the mobile advertising identifier. And so those are both positive developments in terms of people’s privacy and their agency, particularly related to, again, these third parties who kind of collated and merged all these disparate sources of data.

But it’s important to recognize that the model Google and Apple are proposing and implementing encourages and protects first party collection and first party surveillance for the purposes of advertising. And that’s convenient for both of them, because both of them have access to some of the biggest first party data stores in the world. Google is home to the vast majority of internet searches, and most people use Chrome as their web browser. Apple makes the premium smartphone out there, and a ton of people trust Apple; they believe Apple is out there to protect their privacy. And so this really has the effect of entrenching their advantages for the purposes of advertising.

Apple, for a long time, hasn’t really been in the business of advertising, but as they make these changes, they’re rolling out more and more advertising features and systems on their products, with the idea that it doesn’t qualify as tracking because it’s all first party: Apple is the one who made your device and is who you’re interacting with, so they can store all your data and that’s okay.

What we’re saying is that’s actually not okay. Google having all of your data and using it for advertising is actually bad. Yes, third parties being able to merge all this data in various ways was dangerous, but so is first parties having all of your data and using it for the purpose of advertising.

The harms we talked about in relation to your privacy don’t disappear when it’s just Google or just your favorite app storing all your data, instead of it being combined with data from five different websites to target you. It’s still dangerous, for your privacy and again for your autonomy, for them to be tracking all of your behaviors, storing them, and using them to target you.

And so what we’re bringing to the table is saying hey, yes, attacking the third party digital advertising surveillance ecosystem is great, but you’re conveniently ignoring the harms that come from the first party ecosystem. And those harms are significant and important, and we should be worried about them. And forgetful advertising says first party or third party, it doesn’t matter. You should not be remembering any of people’s previous interactions to inform your digital advertising targeting.

Mike Sugarman:

And I think what you’re driving at here is that it’s important to propose a new model of how targeted advertising can work, that falls outside of the surveillance ecosystem model.

I think you’re also honestly driving at something else, which is that we are going to need culturally, and probably on a regulation level, consent models that are universal for what anyone collecting data for the purpose of advertising can and can’t do. And that’s not just for third party data collection. It probably also has to be for first party.

And that’s actually something that’s kind of interesting about Google phasing out the cookie. In theory, websites won’t have to ask you anymore if you consent to cookies being stored. So they’re actually ultimately removing one of the forms of consent that actually has come out of a regulation of the digital advertising space. And I think that forgetful advertising is going to be a piece of the puzzle.

I don’t think you and Ethan are saying, hey, forgetful advertising is something that will replace all other advertising. I think the fair trade coffee thing is a good example. What you’re trying to say is that this is something that could be available to people who feel passionately about it. And the other thing you say about fair trade coffee in the paper is that it did help drive market demand for fair trade coffee overall, right? It wasn’t just that Starbucks decided, hey, let’s use fair trade coffee. Other people said, hey, that’s a pretty good idea too.

Hopefully that is something that signals to companies, users, and regulators: look, there can be an alternative to the surveillance stuff. Here’s one alternative. Yes, forgetful advertising would probably be something that would be nice to mandate as a law. But I also think it could be a useful way to alert people to the fact that even when tracking by third party apps goes away, there are other ways tracking can happen.

So on that note, everybody listening, if you haven’t read this paper, go check it out. It’s available on the Yale Journal of Law and Technology’s website; I will link it in the show notes. And Chand, thank you so much. Hopefully we talk to you again soon.

Chand Rajendra-Nicolucci:

Yeah, thanks Mike. Appreciate it.