Alondra Nelson is arguably the most important sociologist of science in America. She isn’t just a brilliant researcher of how race and racism have shaped public health in America, nor just a thoughtful, savvy tech policymaker. She is also someone with a gift for communicating research and ideas on these huge, important matters in plain terms. That’s the long way of saying that this week on Reimagining, we welcome Professor Alondra Nelson for one of our best-ever episodes.
Alondra Nelson is co-author of Auditing AI, forthcoming from MIT Press, and author of The Social Life of DNA: Race, Reparations, and Reconciliation After the Genome (Beacon Press, 2016) and Body and Soul: The Black Panther Party and the Fight Against Medical Discrimination (University of Minnesota Press, 2011).
Professor Nelson is currently the Harold F. Linder Professor at the Institute for Advanced Study. She was acting director of the White House Office of Science and Technology Policy under President Joe Biden, where she architected the administration’s “Blueprint for an AI Bill of Rights.”
Mentioned in this episode are her “Three Fallacies” remarks at the AI Action Summit and Arvind Narayanan, who appeared on episode 63 of this show.
Transcript
ETHAN ZUCKERMAN:
Hey everybody, welcome back to Reimagining the Internet. This is Ethan Zuckerman, your erstwhile host.
I am here with just an absolute hero of the world of science and technology studies, public interest technology, technology and good. I am here with professor Alondra Nelson from the Institute for Advanced Study.
Alondra is the Harold F. Linder Professor, leads the Science, Technology and Social Values Lab. And my lord, it is going to take me a moment or two and a breath to get through Alondra’s bio because she has done a lot.
Alondra was deputy assistant to President Joe Biden, and acting director and principal deputy director for science and society of the White House Office of Science and Technology Policy, better known as OSTP. In that role, she led the development of the White House Blueprint for an AI Bill of Rights, which was a cornerstone of President Biden’s executive order on safe, secure, and trustworthy development and use of artificial intelligence. That may not in fact be how the current White House is approaching AI, but it was super important.
Alondra was previously the president and CEO of the Social Science Research Council, one of the most important organizations in the social sciences.
She’s written two really brilliant books that we will talk about and is right now bringing to press a new book for MIT Press called Auditing AI.
Alondra, what a treat, I am so happy to have you here.
ALONDRA NELSON:
It’s so great to see you, Ethan, thanks for having me.
ETHAN ZUCKERMAN:
So you have been deep in the trenches of helping the world figure out AI and policy. And pretty soon I’m going to ask about an amazing speech that you gave at the Paris AI Summit, but I actually want to start asking about this new book, Auditing AI. What does it mean to audit an AI and why is it so important that we go through the practice of doing that?
ALONDRA NELSON:
Yeah, so this is a project that I did with, if you can believe it, I think 11 other authors. We call ourselves the Marquand House Collective because we started our first draft of this book at a place in Princeton called Marquand House, where we just holed up for a week and wrote together. My collaborators on this book are computer scientists. They are people who’ve worked in industry, people who’ve worked in cybersecurity, people who, on both the commercial side and the academic side, have worked actively in auditing. We have journalists and people who are adept at science communication, and then a couple of social scientists like myself and a historian of technology as well.
And so the book is not new research. The book is in the MIT Press Essential Knowledge series, which I think you’re probably familiar with. And the purpose of the book is to be a primer, to help people understand what auditing is and why it’s important.
And we wanted to do this because we’re reaching a place and I think how we’re thinking about governance of AI and algorithmic systems in which lots of different people from different perspectives, outside of politics, inside of politics are pointing to auditing, evaluation, assessment, kind of as a thing, you know, work that you and I have worked on for a very long time, trying to get people to understand.
But now it’s time to give people the tools to know what that means: what’s its genealogy, how have we thought about it, how has it worked, how has it not worked? And if we’re going to use it as one of the tools we need to ensure that we have better and more responsible algorithmic systems, how do we do that? So it’s a book for broad audiences.
ETHAN ZUCKERMAN:
And auditing is hard, right? Let’s be really clear about this. I had the amazing experience quite a few years ago now, advising Joy Buolamwini when she was a master’s and then a doctoral student over at MIT. Her master’s thesis was an audit of a number of computer vision systems that tried to make gender determinations. So she was able to demonstrate along with Dr. Timnit Gebru that many of these systems had difficulties distinguishing the gender of people with darker complexions.
The reason I am stating that also carefully is that what was being audited is so specific and so complicated. And once you start talking about it, it very quickly turns into computer vision is racist. And there’s a pretty good argument that computer vision is racist, but auditing is this super specific set of questions and the data and the processes sort of associated with them.
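For instance, a disaggregated error-rate check of the kind Ethan describes can be sketched in a few lines. This is an editor’s illustrative sketch, not Buolamwini and Gebru’s actual pipeline; the subgroup names and toy predictions are hypothetical stand-ins:

```python
from collections import defaultdict

def disaggregated_error_rates(records):
    """Error rate per subgroup, from (subgroup, true_label, predicted_label) triples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for subgroup, label, prediction in records:
        totals[subgroup] += 1
        if prediction != label:
            errors[subgroup] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy stand-in for a gender classifier's outputs on a benchmark set.
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),    # misclassified
    ("darker-skinned female", "female", "female"),
]
print(disaggregated_error_rates(records))
# {'lighter-skinned male': 0.0, 'darker-skinned female': 0.5}
```

The audit’s specificity lives in exactly these choices: which subgroups, which labels, which benchmark set, which error metric. That is why “computer vision is racist” is a much coarser claim than what the audit itself measured.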
What’s hardest about writing that general interest book for MIT Press and also knowing really deeply the complexities of what we can and can’t do in auditing these systems?
ALONDRA NELSON:
That’s exactly right. I mean, the book is called Auditing AI. There’s a kind of parsimony in the title, but I think it also could have been called something like auditing as a way of life, right? Because part of why it’s so hard is that it never stops. It is an ongoing practice and often the systems that one is auditing are dynamic, right? The training sets are changing, the parameters are changing, sometimes by the day, sometimes by the moment, if you’re talking about generative AI, that’s sort of got this iterative feedback loop that’s bringing data back into it.
So it’s much more like something like cybersecurity where you’re kind of constantly—you’ve got this kind of dynamic ecosystem. So it’s hard because of that.
It’s also hard because of the question of what outcome you want, right? So we throw technologies into social ecosystems and we have a vague idea that we want the use of a particular algorithmic system or AI model to make things better, make them efficient, et cetera.
And we’re often not clear about what that benchmark is. And so part of what auditing does is try to crystallize that question. An auditing team will come and say, what is it that you’re trying to do? Are you trying to get 2X optimization out of this model? Are you trying to have more people served by this product? Are you trying to have less fraud in this system? And sometimes it’s all of those things together. So that makes it hard as well.
And then it’s hard on just a basic level of how you do it. You mentioned my time at the Social Science Research Council; a project there, one that you certainly helped to offer guidance on, was this proposition that we could get Facebook data to do research on social media and its implications for democracy. And we can talk about how that went, but auditing is also about creating an expectation and benchmarks around what data and what information you need to even be able to pose the right questions about whether or not systems are operating the way they should: functionally, is it doing the functional thing, and then is it operating the way it should socially. And so these are sociotechnical questions.
ETHAN ZUCKERMAN:
So there’s 19 things in there that I want to pursue. So I’m going to see if I can hit three of them and then end up with a question on one of them.
So first of all, auditing is hard. It’s hard to get the data. It’s hard to get platforms to understand why they might want to give us the data. It’s dynamic. The data that you might have at one moment in time might not tell you anything about the current moment.
The analogy to cybersecurity is spot on because it’s an evolving threat environment. And particularly as you find something in an audit and you air it, things end up changing.
You referred, without using it, to a very fancy term in this space, which is fairness: these questions of what are you optimizing for, what are you seeking. Our friend Arvind Narayanan over at Princeton has a wonderful talk about the 21 different definitions of fairness. All of these things require an incredible amount of precision associated with them.
Here’s the one that I want to play with, which is that auditing historically is an adversarial process, right? If you go through a financial audit, you are basically asking accountants to make your life miserable. When public interest lawyers started audits of, oh, let’s say, New York real estate developers, for whether people of color could live in their housing, those were adversarial audits where white and black couples would go out and look for disparate impacts.
Who are the adversaries here and how do we think about protecting that ability to do adversarial work at this moment in time?
ALONDRA NELSON:
I love this question. So one of the things that we do in Auditing AI is we talk about the public interest lawyers, but we also talk about the local community activists. So it’s not only who are the adversaries, but who are the partners? Who are the people for whom auditing is a tool, right?
And so by going back to that history from the late 20th century and talking about local communities in places like Long Island and the broader New York City area, it’s not just sort of an expert domain or an expert activity. It’s an activity—and again, I said the book is for broad publics—that local communities can understand themselves to ask for, to potentially be engaged in. And so I think that for us is important.
But it doesn’t have to be adversarial, right? I mean, as I think I said to you, auditing is a way of life. We could also think about auditing like going to the gym. If you went to the gym this morning or worked out at all, it’s adversarial in the sense of, did you want to bench that weight or did you want to get on the treadmill? But it really is for a kind of broader, healthier you, your broader biological ecosystem.
So certainly it can be very adversarial, but we also want to, and are trying to, frame it in the book as a kind of normative best practice for companies and for communities to ask for, something that is sometimes adversarial but not necessarily adversarial.
ETHAN ZUCKERMAN:
I love the community turn that you’re taking it into and at some point I want to tie this back to your amazing book, Body and Soul that looks at the Black Panther Party and fights for medical justice and medical equality. And I think you and I are both fascinated by the Panthers as an example of an incredibly effective community organization.
I love this idea of community organizations being able to take responsibility for this. It feels like if we want community organizations to be able to hold AI systems responsible, they have to understand what AI is and what it isn’t.
ALONDRA NELSON:
One hundred percent.
ETHAN ZUCKERMAN:
And you did such an amazing job in raising three fallacies of AI in your remarks at the Paris AI Summit. It’s just the best that I have seen anyone do.
So the first is that AI is for efficiency and scale and instead you say, no, it’s for people. You then say there’s a fallacy that AI requires trade-offs and the answer is that it requires governance, which is hard. And then the third is that AI is necessarily going to benefit humanity and you end up essentially saying it is, but only if it’s governed in the public interest.
ALONDRA NELSON:
So let’s go back. I mean, these were brief remarks that I gave at the third international AI summit. So we had the Bletchley Park AI Safety Summit in November of 2023, and in the spring of 2024 we had, in South Korea, the AI Seoul Summit. And then the French did this AI Action Summit. And I had the great honor of being asked to attend dinner at the Élysée Palace, the home of President Macron, where there were lots of heads of state present, including Vice President Vance and the leaders of Germany, the EU, and other places.
And so I was given three to five minutes, I think I was given three minutes and the question for me was sort of, what do I want to say to these people? So if I am one of the few academic, civil society people and I was one of a handful, what do I want to say?
And for me, I think my project broadly around artificial intelligence and AI governance has been to ask people, to sort of prompt people or encourage people to have a different choice architecture than the one that they’re given. Like, so we are told that these things are inevitable, that this is how they must happen, that this is what the present is and this is what the future is going to look like and get on board and just go with that.
And so I wanted to take this opportunity with these “Three Fallacies” remarks to offer a kind of reframing. And I knew people were going to be eating, they were going to be drinking, and so I wanted to be able to hit it and hit it hard.
And so the very first fallacy is, I think, the thing that people believe most, the thing that policymakers are told is going to help them balance their budget and get economic growth: that the purpose of AI is efficiency and scale. And so my argument is that that might be the function. And I think even that as a function of AI is arguable.
But the purpose, if we’re thinking about why we’re all here, has to be for people, to help people do things better. People who are doing dangerous work: make the work less dangerous. People who need healthcare: help them get it, et cetera. And to help folks identify that we are confusing function for purpose. That we do that all the time with AI, and that the purpose is the people.
ETHAN ZUCKERMAN:
And also just to say that at a moment where the current version of AI requires billions and billions of dollars of investment, it is designed for five or ten companies to try to sell to the 500 largest companies in the world. And the goal seems to be to eliminate human jobs.
ALONDRA NELSON:
Yes.
ETHAN ZUCKERMAN:
It’s an incredibly effective corrective that right now we seem to be in a world where AI is for capital. And it would be really nice if AI were for people. So that’s the first fallacy; what are the second and the third?
ALONDRA NELSON:
The second fallacy is that AI requires these kinds of radical trade-offs that we could never possibly make. And I think that for me, this fallacy goes back to a kind of narrative framing of AI that says there’s nothing like it in the world. Like we’ve never had to make a trade-off like this before, because we’ve never had a technology this shiny, this glossy, this amazing.
The fact is that we make trade-offs all the time, and good governance is something that we can do. It’s something that’s admirable. We can adjust; sometimes we get it right and sometimes we get it wrong. But we don’t have to choose between safety and economic productivity. We don’t have to trade off innovation for safety, for example. So that’s the second fallacy: that there are these kinds of radical trade-offs that can’t possibly be reconciled, or that we can’t possibly work in the space of that tension to build something better for more people.
The third fallacy is that there’s something inherent to the technology itself. So many of our colleagues have written about this as tech solutionism and have used other kinds of phrases, so I was bringing that tradition into the room with me. But it’s the sense that AI is somehow going to automatically benefit society, benefit humanity; that there’s something inherent to the technology itself that is going to produce all the goods that all the VCs and all the CEOs tell us are going to come from it.
And this is not true. If you sort of open up a chatbot and you say healthcare, healthcare is not improved, right? If you put a box of software, I’m still using old school, like it’s the 90s, you take software, you say software is going to make health records better. It’s like the technology itself doesn’t do it.
And so my point was that any of the positive outcomes that we hope for from the technology actually have to be stewarded. There’s this conversation that any kind of regulation, or any kind of desire or governance to steward or give shape to what the technologies will be and do, is somehow inherently bad or problematic. Whereas my point was, if we want to have these good outcomes, we actually have to make them possible. They don’t just emerge out of the cloud when you open it up to use it.
ETHAN ZUCKERMAN:
What I found so wonderful, so first of all nothing makes me more scared than a three-minute talk. Like, we’re academics, right?
But part of what’s wonderful about this sometimes is that a very short statement forces you to the essence of things. And this notion that AI isn’t for efficiency, it’s for people and for making people’s lives better. That these are not in fact unprecedented trade-offs. These are not waters we have never sailed in before. You sort of want to take some of these AI people and say, “let me introduce you to the world of bioethics.” It turns out there are really hard questions we’ve been dealing with in technology for years and years.
But also this idea of AI benefiting humanity, which almost seems like an article of religious faith. And this is one of the things that’s been really interesting in the technological world lately. We’ve got two movements going on right now, the sort of blockchain cryptocurrency movement and the AI movement, which both seem to require a leap of faith to be sort of a true believer within them.
You are coming very much from this global policy, responsibility perspective. How do you deal with that balance between people who I think sincerely believe that these technologies are unprecedented, are changing the world in ways that we cannot possibly even understand. And the wisdom that you’re bringing to the table as a scholar of technological change and society over many, many years.
ALONDRA NELSON:
Yeah, so I would offer maybe two reactions to that fascinating question. The first would be that as a scholar, as a graduate student, my first published work was on Afrofuturism, right? And so for me, it’s partly that the futures we’re being offered aren’t big enough. They’re presented to us as these kind of amazing cosmologies of what the future can be, what innovation can be, glossy, oh my goodness, we can’t wait to get to this future. As opposed to imagining futures that actually have more equality, more justice, more for more people.
ETHAN ZUCKERMAN:
Sam Altman isn’t nearly as interesting as Sun Ra.
ALONDRA NELSON:
Right. Although I think Sam Altman is pretty interesting and I hope there will be some more work on Sam. He’s a fascinating figure.
And then I would say, as a scholar, you just know it’s not true. So as a scholar of technology, the second work that I published was a book called Technicolor, the subtitle of which was Race, Technology, and Everyday Life. This was in 2001 or 2002, a million years ago.
But part of that was that there have been these other historical moments, many, many, many of them, in which we’ve said this technology is going to change everything for everyone forever. And so I think to bring a scholarly, sort of gimlet eye to this is to say, well, these technologies are cool, but even thinking about the history of Silicon Valley… You know, we’ve got the WELL in San Francisco. We’ve got the Californian ideology. We have all of these other kinds of narratives.
And so, you know, part as a scholar, I think, you know, part of what’s amusing and sometimes bemusing is that you’re being told that there are these kind of new narratives and visions, but if you study the space, even over the last say, 80 years, let alone 180 years, you know that this is the kind of changing same, this kind of, you know, there’s a kind of iteration to it.
ETHAN ZUCKERMAN:
We can hand all these folks a Fred Turner book and ask them to start from there.
ALONDRA NELSON:
Yes, precisely.
ETHAN ZUCKERMAN:
Yeah, so you’ve had opportunities to do this work, bringing in these historical, these sociological perspectives at the very highest level. You’re doing this within the White House. You are captaining a team that’s doing very, very thoughtful work on AI policy.
Basically what you’re doing with the Office of Science and Technology Policy under Biden is this model in which elite experts advise the government on very complex topics. And this is something that comes out of the progressive era. This is the moment at which we sort of look at the world and say, wow, this is a lot for legislators or presidents to deal with. We need the Congressional Research Service. We need government-sponsored think tanks. We need elites to come into government and political science starts to emerge as a field.
We’re now living through a really rapid counter reaction to that moment—what is starting to look like a war on expertise in government. What’s happening, Alondra, and what’s at stake from the perspective of someone who’s really done this at the highest levels of the game?
ALONDRA NELSON:
Yeah, so I think there was an inkling of this, of the kind of moment that we’re in. And I think that there were, I think, decades of concern leading up to this. But certainly when we went into the Biden administration, it was very clear that the work of OSTP and of science and technology policy was going to have to change, right?
One or two days before we start—I was a Day One Biden appointee—Ron Klain, who was to become chief of staff, sends out this memo about the four crises. It’s a kind of short email to all of us, reminding us that we’re in the middle of a kind of world-historical pandemic, that we’re in the middle of an economic crisis, that there’s a kind of racial crisis, and that we’ve got concerns about trust and other issues.
And so if we’re actually listening to that context, that means you can’t just go in with all of your expertise and do science and technology policy. It means that you have to respond, try to be responsive to an American public that’s saying, what’s going on with these vaccines? Things are maybe going a little fast and loose. And you also have an American public that’s saying, people in my community are dying at rates disproportionate to other people all over my city and all over the world. Like what is going on?
And so we went into office with a real commitment to think about how you change that progressive-era, expertise-only model. And we had a couple of hypotheses, and I think history will have to be the judge of whether we were right, but certainly the Blueprint for an AI Bill of Rights comes out of this posture and this instinct, which is that we needed better science communication.
So we hired, I think maybe for the first time, actual professional science communicators on the OSTP team. We did a lot more communication. So there was just a lot more blog posts and there was just a lot more of us out in the world talking about what we were doing.
For my part and the part of my team, I think I tried to encourage people to take low trust as just where you have to do your work. So I think there was a lot of conversation that was: how do we get the trust back? And that there was something that you should be doing in government to get the trust back.
My response was we have low trust. And I come from as a black woman out of a community that’s traditionally historically had low trust in government and low trust in expertise. And so you sometimes just have to meet people where they are.
Then the other thing I would say is that I’m someone who had done work as a scholar that was critical of expertise, right? That it didn’t include enough voices; that sometimes expertise made trade-offs in which people were experimented on or abused; that products were released too soon; or that we didn’t care that they were damaging people even though they might benefit a few others.
And so I came out of a perspective that was deeply appreciative and of the privilege and honor of what it meant to be in leadership at OSTP, but also that both because of the historical context of my work and the contemporary context that we were coming into the office under, understood that there was a lot that had to change. So I think we didn’t talk about it this way, but I think there was very much a kind of reform agenda around how you think about and do science and technology policy in that moment.
ETHAN ZUCKERMAN:
That’s incredibly helpful because I, so as you spell it out, that all makes an enormous amount of sense to me, right? In 2021, as you’re entering the administration, I’m publishing a book called Mistrust, basically arguing that mistrust is the central characteristic of our political moment.
But you caught me, right? I was looking at this in terms of the Trump administration leading a war on expertise, and you’re absolutely right. There’s been a fall in trust in expertise probably since the Nixon administration, and certainly very aggressively since the Obama administration.
And some of that fall is justified. Elites have failed in many, many cases, and communities that feel marginalized, that feel put upon in one fashion or another, often have completely legitimate bases for the mistrust. Of course, now mistrust has become a political stance of the people in power, right? It’s gone from being a weapon of the weak to being wielded by literally the most powerful person in the world. So obviously some of this is we need to be better science communicators, and we need to understand mistrust as a baseline.
But I want to bring you back to this question, which is what is it that we might lose if we start losing this perspective of expertise within policy and policymaking?
ALONDRA NELSON:
We’re losing it. We’re like living at a moment where we’re literally losing it, right? Thousands of people who bring a particular kind of, also very narrow kind of wonkish expertise that few people in the world have, have been cast out of their jobs in government. It is devastating and it is tragic. And so I don’t know, Ethan.
I think the worst outcome for the United States, as someone who lives their life here and was born in the United States, is that expertise goes elsewhere, right? So, a historical perspective: I sit here in my office at the Institute for Advanced Study. Some of the first faculty members here were émigrés from Europe, right? And they came here because they could no longer do science where they were; they were coming from not only a deeply discriminatory, antisemitic, racist world and worldview, but also a worldview that had weaponized knowledge and expertise, such that you literally couldn’t do your work, right?
So I think a worst case scenario is that all of that goes elsewhere. That the United States just becomes a place where you can’t have facts, expertise as the basis for governance and as the basis for kind of reasonable policy debate. And that would be truly tragic, but I think that might be the case.
I also think—I have been struck as I’ve been watching the coverage of, we’re entering hurricane season here. There’ve been these terrible floods, seasonal floods, but much more devastating than in years past because of climate change, obviously.
I’m also struck that when it comes to capital and money, people are well aware of facts. So insurance companies, for example, have been changing premiums, raising premiums; there are things they will or will not insure. So can you get fire insurance if you live in parts of California? Can you get flood insurance if you live in low-lying parts of Louisiana or Texas? I think in some cases you cannot; the answer is no. Or if you can, you’ve got actuarial people making these calculations.
And so even as mistrust exactly as you say has been weaponized, in the world of capital, there is a realm of facts that people are very clear about that I think are always going to have to be reckoned with. So I would offer that as well.
ETHAN ZUCKERMAN:
Although, and this is hilarious, I am working on a paper right now with Raul Castro Fernandez at University of Chicago where we’re trying to understand why climate information is not moving the real estate market. So Americans continue moving to markets where their properties are extremely high risk.
And hilariously, we have a pair of young data scientists both from South Korea working on this. And so we’ve sort of had them going through FEMA data and housing price data and migration data. And one of the students comes to me and says, “Professor, we have found confirmation of your theory. It is a place called Tampa.”
And I just cracked up because I mean, of course it is. People are flooding into Tampa despite the fact that Tampa is an incredibly dangerous market to be in.
So this is another one of these cases where, yeah, some people are benefiting from that information. Reinsurance companies are all over that information. Individual people making housing choices may not in fact be benefiting from that information and trying to figure out what that is and what’s going on with that. That’s sort of the mystery of that paper.
But this is another topic that’s getting harder and harder to study, right? So EPA is evidently floating a policy that’s going to take them out of the space of regulating greenhouse gases. One of the things that we’re very worried about in this research is that the data comes from FEMA. It’s FEMA data about what communities and what individuals are harmed by different natural disasters that people use to build their models.
You have worked on some of the topics that seem most likely to be hard to study in the United States. There’s a particular animus in this government for work on diversity. Your book Body and Soul was about the Black Panther Party and the fight against medical discrimination; The Social Life of DNA looks at how the study of ancestry is both revealing painful moments of African-American history and helping African Americans discover important and erased aspects of heritage.
This is the sort of work that seems to be attracting the most negative attention in the current climate. What’s at risk when NIH and NSF won’t fund work focused on the experiences of a specific group of Americans?
ALONDRA NELSON:
Well, I mean, it means that you don’t have legitimate research, because you don’t have the full picture. So in the history of medicine, oh gosh, I’m trying to think, I think it was my former Yale colleague John Harley Warner who writes about, oh gosh, he has a wonderful phrase for it, but it’s something about the single male individual: that the research subject for generations was a single white male of a certain age, right? And then we look up and say, oh, well, it works differently for Europeans from the North than from the South, or for women, or for other people.
And so we have vastly modernized and improved biomedical research, right? By not saying there is one standard model human—Steven Epstein actually, I think, writes about this as well—on which all of medicine should be based.
And so it’s bad for all of us if we don’t understand the sort of broader ecosystem. I think it’s also just morally wrong. Some of the impetus for some of this research is data, which we now want to stop collecting so we don’t have to reckon with it, showing that women were disproportionately excluded from some professions or specialties in medical and scientific research. Or, we talked a little bit about the pandemic, the health disparities that meant that African Americans were dying at disproportionately high rates from the pandemic.
Why is that? Should we care about that? How do we understand that? What do we need to do to understand it? And what do we need to do to mitigate it? So, that is ultimately, I think what work on equity and on inclusion is trying to do.
Certainly in the Biden administration, and I will just say this as a corrective, because I think one of the frustrations—I’m not a person who ever thought I would work in politics at all, and so I don’t have particular investments in my persona as a person who worked in politics, right? As opposed to, and someday I will publish this, I’m writing a little bit about it, as opposed to this magical world of people I came to work among who only ever wanted to work in politics their whole lives. These were the sort of six-year-olds who wore suits and wanted to be student body president in elementary school and all of that.
So, I don’t have a lot of investments in sort of defending government when it’s wrong and I’m happy to critique it when it’s necessary. But it is also the case that like, what they are dismantling and tearing down is a perspective on equity and on the role of government as being in support of the common good that included veterans, that included rural communities, tribal communities. I mean, honestly, if you look demographically, I think included the vast majority of the American public who were in some ways being underserved, right? Or they’re not having their needs addressed in a kind of equitable manner by government.
So when I say, you know, the way that we think about research subjects historically affects all of us, I mean there’s a way in which the decimation of these programs is literally, demographically affecting a large swath of the American public in a negative way. And I don’t, you know…
ETHAN ZUCKERMAN:
I was just thinking of this in terms of climate change. One of the first findings we’ve seen going through the FEMA data is, FEMA looks at annual incidents and sort of annual risk in terms of risk to agriculture, risk to buildings and risk to human life. And they put a price tag on human life. They run a human being at $11 million, which, you know, seems like a compromise in all of this. You start looking at certain disasters, and disasters that we pay a huge amount of attention to, like wildfires, do enormous amounts of property damage. They actually rarely kill people. People usually are able to get out of them.
The disaster that kills people is heat. It’s heat waves. And it does almost no property damage, but it kills a lot of people. And those people are poor and unhoused. And that becomes another group of people who end up invisible. And my guess is also end up being made more invisible.
This idea that there is not a default human, that we actually have to look at the diversity of humanity—that seems like one of the things being most fought over right now. That’s gotta feel really weird as a social scientist at this moment in time.
ALONDRA NELSON:
I mean, it also just feels terrible as a woman and as a black woman. I mean, you know, you literally have a narrative frame that’s being offered by some in the Trump administration and supporters of this perspective, that there is literally no way that someone who looks like me could ever be qualified for any grant or any job or any role, right?
So, you know, you just think, it’s an actual premise that other humans don’t matter, that other humans aren’t equal, at a kind of basic level. Whether that comes out of moral philosophy or out of a religious tradition or whatever, they’re trying to erode a sense that there is a fundamental human equality, right?
And so, you know, we’ve got to all fight that battle from whatever perspective it offends you, and it’s offensive on a lot of levels. It might be religiously offensive. It might be philosophically offensive, et cetera. But we can’t go back to a time in which we say some lives matter and some lives don’t matter. And that’s really what’s at stake at a high level, even as we’re dealing with all of the fires that we’re putting out every day.
The other thing I would say that’s interesting, as you were talking about heat waves and fires, I was thinking that we are clearly careening into a time of just increasing disasters: natural disasters, unnatural disasters.
And the Trump administration response has been, it’s probably not the right phrase, but a kind of neoliberal response. If we think about the turn to neoliberal governance as one in which things that were collective responsibilities or obligations, things that were government responsibilities or obligations, get cast onto the individual: if you don’t have healthcare, your fault, your problem. You see a similar thing happening with the dismantling of the EPA, FEMA, et cetera, right? And there are kind of two stages. At a mezzo level, you see it being sort of cast onto the states.
And then I think, Ethan, this is another moment where you know that despite all of the political performance, people in the Trump administration are quite aware of the facts, right? Because it is a big proposition to think about how government is going to deal with all of the increasing consequences and outcomes of climate change, right? And if you include in that fire, and you include in that heat, et cetera, all of that cluster of things. So you can either have a strategy for it, as many governments, the United Nations, the COP conferences have tried to do, or you can kind of say, not our problem.
ETHAN ZUCKERMAN:
Not just say, not our problem, but construct an information environment.
ALONDRA NELSON:
Correct.
ETHAN ZUCKERMAN:
So, my fall class at UMass is called “Defending Democracy in a Digital World.” And the first thing I have to say is, I did not retitle it this year. I’ve actually been teaching it under that name for four years. But it does feel like an interesting moment to be talking about democracy and information, right? It’s basically a history of news and information and discussion as essential to democracy from the Greeks through the American experiment, so on and so forth. And we are at just a crazy moment, right?
We have a president and one of his closest advisors, each of whom own social media networks where they are putting forth propaganda and now possibly warring propaganda. There is an AI chatbot that periodically veers into praising Hitler as well as trying to construct an interactive vision of knowledge to essentially deny this dialogue about diversity of human experience. Mainstream journalism is threatened by the advent of AI-generated search systems that don’t direct traffic to the news sites, cutting off maybe the last bit of revenue.
How am I supposed to teach this, Alondra? What are the big things you want these 150 undergrads at UMass to get out of this discussion at this very unique moment?
ALONDRA NELSON:
Well, I know that you’re not going to do this, but what I don’t want, and I hear too often is everything’s fractured and we used to have three television stations and oh, the halcyon days, et cetera, et cetera, et cetera.
ETHAN ZUCKERMAN:
The halcyon days when the only people who were in positions of power at television stations looked like me and not like you.
ALONDRA NELSON:
Correct, correct, right? And so we do have three television stations now, or whatever the equivalent is; they’re called memes, right? I mean, I think there is this kind of interesting thing about memeification and virality as a way that social communities are making choices, for good and for not, about the importance of information.
So even if you think about silly things, like what was the ice bucket challenge or whatever, I mean, that was the equivalent of everyone watching Fonzie on Happy Days, right? I mean, there are these moments that sort of capture attention.
ETHAN ZUCKERMAN:
Cultural touchstones that end up being the building blocks of our contemporary culture.
ALONDRA NELSON:
Absolutely, and they’re the building blocks of our kind of political consciousness, these kind of schemas for how we think the world can be and should be, so they’re very powerful sources of narrative building. So I think there’s a political economy piece. I would want your students to understand the economics of it, the difference between the sort of Murdoch mogul empire and what influencers do. You might have a moment in which an influencer has a viral moment and is able to capture a news cycle or national or international attention for a moment, but that anchor in a particular political economy means that the Murdoch empire is going to win every time.
And so I think young people need to, your students need to understand that even if someone who I’m a big fan of, like Hasan Piker, is making millions of dollars a year and is very successful and has a big audience, you’re still talking about very much a David and Goliath situation. And so for them to understand that fundamental kind of inequality and what the implications of that inequality are for their lives, even if all of them are very successful influencers, I think is really key.
ETHAN ZUCKERMAN:
I think also understanding that any of those influencers can be destroyed on any given day by platforms deciding they’re no longer going to have the power that they once had. One of the reasons why CBS was so powerful was not just the content that they were creating, but the broadcast facilities, the ownership of the spectrum, et cetera, et cetera. There’s a very real sense in which these new tech overlords are extremely powerful around this.
I think maybe one of the things we’re seeing is the extent to which that crossover between tech and media and political power is sort of blurring together, obviously in a president who has a lot of experience with reality television.
ALONDRA NELSON:
Certainly, yeah.
ETHAN ZUCKERMAN:
Alondra, what gives you hope right now? What’s something that when you wake up and you do the hard and important work that you’re doing every day, what’s something that you find yourself thinking about that makes you want to keep doing this work?
ALONDRA NELSON:
So I’ll go back to the Blueprint for an AI Bill of Rights. We realized we had to do things differently. It was not a statutory document.
It did not become law, but it has become model legislation for a few states, and they’ve sort of bank-shotted off of it to create other legislation. Sometimes the language was taken up explicitly in places like California, Colorado, Connecticut even. So the project lives on, not just as a thing, but as a process that included not only lobbyists and industry executives and academic experts, but members of the broad public. It is a document that distills best practices and aspirations from as wide a swath of people as we were able to reach.
And so it also, I think for me, is a model of a way in which you can address the public and engage the public in a conversation about artificial intelligence. And so I have hope, because it’s become not just model legislation; it’s been used in school curricula at the K-12 level, at the high school level. It’s starting a conversation.
One of my hopes, and we started with talking about the Auditing AI book, is that it will be a similar kind of tool, that people will understand that they don’t have to know the difference between model weights and parameters, that they don’t have to have a PhD in computer science to dive in and understand and challenge the sort of narratives that we’re being told about not only AI, but the whole kind of technological ecosystem that’s being built around us.
So I have hope that we can continue to expand the set of people who feel that they are constituents in that conversation. And I also have hope because of the survey data and the polling data, which suggest to me that people are exhausted by the social media era, exhausted by what they’ve seen didn’t work. They are also expecting much more from this moment. The data suggest that people are very concerned, they’re distrustful about AI, they’re worried about what it’s going to mean for their jobs, they’re worried about what it means for their children.
So to the extent these sort of large rococo narratives of the future are being embroidered all around us, there is a large, growing, and bipartisan public that sort of says, “I don’t know about that.” And I think that is a tremendous political opportunity and tremendous leverage against very powerful players who think they can run the ball down the field without including anybody else. And so that gives me hope.
ETHAN ZUCKERMAN:
Alondra, I think you’re doing some of the most important work out there, connecting these questions about technology and possible futures and fairness and justice, and doing it in concrete ways that are really bringing people into these conversations and having impact in the world. I’m so grateful for your time today. She’s Professor Alondra Nelson, I’m Ethan Zuckerman. This is Reimagining the Internet. Thank you all so much.
