Ifeoma Ajunwa

95. What’s the answer when workplace surveillance creeps into the home? Ifeoma Ajunwa says civil rights and organized labor

Reimagining the Internet

Ifeoma Ajunwa wrote the definitive book about how data is used to surveil and attempt to automate away workers. This week on Reimagining, Dr. Ajunwa tells us how a history rooted in eugenics and Henry Ford sending private detectives to workers’ homes led us to this moment when software is used as a cover for discriminatory hiring practices and genetic testing is now part of workplace wellness programs.

Ifeoma Ajunwa recently published The Quantified Worker: Law and Technology in the Modern Workplace with Cambridge University Press. She is the AI.humanity Professor of Law and Ethics at Emory University.

Transcript

Ethan Zuckerman:

Hey everybody, welcome back to Reimagining the Internet. I’m your host, Ethan Zuckerman. I’m with Dr. Ifeoma Ajunwa, who is the AI.humanity Professor of Law and Ethics and the founding director of the AI and the Law program at Emory University. Ifeoma has done some amazing writing for The New York Times, The Atlantic, and The Washington Post.

She has a relatively new book out that we’re gonna talk about today called The Quantified Worker, which came out with Cambridge University Press. It’s building on some work that’s been recognized in the past. She won the NSF CAREER Award in 2019 for her project, “The Development, Design, and Ethical Issues of Algorithmic Hiring Tools.” We have something in common, which is that we’re both lucky enough to be faculty associates over at the Berkman Klein Center at Harvard University. And she’s someone whose work I’m following very closely. Ifeoma, it’s wonderful to have you with us.

Ifeoma Ajunwa:

Thanks so much for having me, Ethan. It’s a pleasure to be here. 

Ethan Zuckerman:

So this is just a wonderful new book. It is deservedly getting some attention. It’s a really broad portrait of what it means for workers to become quantified. I wanted to ask you to maybe go back to the introduction of the book and walk us through a day in the life of a quantified worker? What does it look like as we’re facing work in a quantified workplace?

Ifeoma Ajunwa:

Right, so the day in the life of the quantified worker that I portray in the introduction to the book is really to show how everyday life for the quantified worker is reduced to numbers, it’s reduced to metrics, and it’s also about surveillance. And a lot of it is self-surveillance, right? Self-surveillance for the benefit of others. 

So the quantified worker, she wakes up and she’s already been surveilled in her sleep. And then she goes about her day with all these metrics tracking what she’s doing. There are metrics to track what she’s eating. There are metrics for tracking how much movement or exercise she’s getting. Everything is being tracked. And so she’s sort of reduced to a slate of numbers, right? She’s reduced to data points. In the book, I refer to it as like, she basically becomes, she has a digital double, which is something that Gina Neff and others have also written about. 

Ethan Zuckerman:

Give us a sense of some of the companies that are sort of on the cutting edge of this, maybe in sort of scary ways. I know that, for instance, delivery drivers have become massively quantified. How is quantification playing into that job, and what does that job feel like?

Ifeoma Ajunwa:

Right, so for delivery drivers, I think the person that has written the most about this is probably Karen Levy, looking at trucking. 

But for delivery drivers specifically, there’s quantification in terms of, are they wearing their seatbelt? How fast are they going? Even down to what turns they are taking. Are they taking a left turn or a right turn? Some companies actually prohibit their drivers from taking any left turns. And then they are also being timed in terms of how long they’re taking to deliver packages and how long they’re spending at each house.

Ethan Zuckerman:

And you write about other workplaces like Amazon, where quantification seems to almost run smack into the ways in which a worker might try to optimize their own work. Is the quantified workplace happening because workers are lazy? Or is the quantified workplace happening because management is obsessed with control? Or is there another dynamic that’s sort of taking place here? 

Ifeoma Ajunwa:

Ultimately, it really is a breakdown of the employer-employee or employer-worker relationship in that, you know, there isn’t that relationship of trust that both parties are going to do their best for each other. And so then you get these quantification measures. 

I think also something that’s driving greater worker quantification is that you now have these technologies that enable it. So I think in the book, I refer to it as a Chekhov’s gun, right? Once you have a gun introduced in a play, you have to use it at some point. And that’s somewhat what’s happening with these technologies. Once management or corporations know that there are technologies out there that allow for greater surveillance or greater quantification, they feel they have to use them. They feel that they would fall behind, you know, in terms of technological innovation by not using the latest surveillance tool.

So there is this bandwagon effect: once, you know, once a company starts using the latest surveillance tools, other companies also want to use them.

And there is sometimes a lack of, you know, deliberation, of thinking: do we need the surveillance tool? Will the surveillance and the amount of quantification that we’re trying to achieve actually accomplish our ends, right? Which is greater productivity.

And the research actually shows no, right? Greater surveillance impedes worker autonomy and actually can make workers less creative and therefore, sometimes, less productive in accomplishing what the employers need to have accomplished.

Ethan Zuckerman:

Well, that idea that surveillance can actually decrease productivity was one of the things that I found most fascinating in your work. I found myself very much thinking about Taylorism and the scientific management revolution. 

Can you help us sort of understand a little bit of the history of how we’ve gotten here? When does workplace quantification start? And is this sort of more of the same, or is there something really substantively different at this moment in time? 

Ifeoma Ajunwa:

Right. So, I would refer to quantification as an iteration or evolution of scientific management, which is Taylorism, right? So Taylor’s idea was really that workers and managers can work together to figure out the best way to accomplish job tasks in the most efficient way possible for greater productivity and for the prosperity of all. So that was his big thing. 

And yes, you know, he did have a view towards avoiding “soldiering,” which is, you know, when workers are sort of just doing the minimum or, you know, not really doing much work but appearing to do work. But ultimately, his idea was, you know, I want workers to be able to work efficiently, more productively, and therefore, his focus was really on the job task itself, not on the worker.

So now Taylorism or scientific management starts to evolve with Fordism. When Ford introduced his modified scientific management in his factories, the focus actually shifted to shine on the worker. So Ford instituted something called the Sociological Department, which had detectives that followed workers and went into their homes to check on them, in terms of what they were eating, to see if they were drinking alcohol.

The detectives followed them on their day off to see if they were gambling or, you know, engaging in what he considered immoral behavior. And, you know, Henry Ford was actually explicit about this, that his intent was basically to create the ideal American. He wasn’t just interested in productivity per se, but in creating the ideal worker, essentially. So much so that he actually had a lawsuit brought against him by shareholders of his company, accusing him of not caring enough about profits and, you know, being too focused on his social engineering project.

Ethan Zuckerman:

How do companies assert that right to pay attention to you outside of the workplace? What is that conceptual shift that says a company has a right to know something about you beyond the eight or, let’s be honest, ten or twelve hours a day that many of us are within the workplace?

Ifeoma Ajunwa:

Right. So, I think a big part of that was actually the rise of neoliberalism and this idea of individual liberty as paramount and separate from social welfare. So there was just sort of this downgrading of the state as being in charge of social welfare and uplifting of the idea of, you know, it’s called rugged individualism. 

It’s like, you know, the idea that Americans, you know, believe in their own manifest destiny, their own individual manifest destiny and, you know, don’t necessarily believe in the idea of solidarity. And this is borne out in the sort of systems that we have, including the fact that we have employer-funded health care. We don’t have universal health care or single-payer health care. 

So the fact that we now have employer-funded health care is actually a lot of what is driving employers to say, well, we’re going to fund you if you are ill. So, we need to ensure that you are taking all the steps to make sure that you are going to be healthy. And that’s really how you saw the rise of workplace wellness programs. 

So, an interesting tidbit is that workplace wellness programs actually came out of the eugenics movement, which is also how life insurance came about. In the book, I describe how there was a meeting at MetLife, right, and they’re discussing this idea of creating life insurance policies for their workers. And really, they were discussing that this could then be a Trojan horse to having the workers do all these tests, medical tests, to figure out who was fit to get the life insurance, but also who was not fit to get the life insurance. And then, frankly, who should be fired because they’re, you know, not fit eugenically, right?

So there is a continuity there in terms of ideology, right? So there’s this idea of the ideal worker, the ideal human who doesn’t have all these human frailties of disease and et cetera. And, you know, that the workplace could be a way of enforcing, right, this ideal human, an ideal worker ideology, and also uplifting them, right? And at the same time excluding the people who did not match that. 

Ethan Zuckerman:

But that, in some ways, drives me back to what I think got you interested in this in the first place, which is carceral technologies and what’s happening with surveillance with people who are transitioning between incarceration and the workforce. Can you talk about that work and sort of how that led you to the questions that you’re asking now?

Ifeoma Ajunwa:

Right. So, you know, I actually came to this book, as you sort of alluded to, in a very orthogonal way, which is that I was researching formerly incarcerated people and their efforts to reenter society and, you know, gain employment.

And as I was interviewing these formerly incarcerated men and women, there was a refrain, and the refrain was, I hate computers. And, you know, to be quite honest, my first thought was, oh, ok, you’re not computer literate. You have issues because, you know, you’ve been imprisoned since the 1990s, so you sort of missed the, you know, Internet Revolution, et cetera. But no, actually, once I probed further, I realized that’s not what they were saying.

They were saying that the biggest barrier that they were encountering to even getting an interview for a job was the fact that they had to apply on an automated hiring platform. So they would, you know, go to, like, let’s say your local retailer, your local fast food place. So, like, you know, semi-skilled or low-skilled work. And they would request an application, or they would request to speak to the manager, and the manager would say, oh, I can’t help you. You need to go and apply online. Or, you know, you need to apply on the computer. You need to go to a library and apply, you know, use the Internet there. And then they would go, they would get help, they would do the application, and then they would not hear anything back at all.

So they were getting quite frustrated, because as part of the reentry programs, you know, they were taking classes on anger management. They would take classes on how to interview well. They would take classes on how to be professional. All these things, right, that maybe if they did get an interview might help. But the fact was that their applications were just getting trashed. They were not even being seen by a human, because automated hiring systems had been set up to look for gaps in employment, or had been set up such that if they checked a box saying they had been convicted of a crime, or even, in some instances, just saying I have been arrested before, that was enough to never get an interview. So it just seemed to me that these automated hiring systems were not really democratizing work in the way that I had thought. They were actually doing something very different. They were actually serving as culling mechanisms to separate people thought fit for work from people thought unfit for work, period.

So, let’s really start from the beginning. A co-author, Daniel Greene, and I actually did an empirical study looking at the development of automated hiring systems. And what we found was that in the beginning, the impetus for automated hiring systems wasn’t to diversify the workplace. It was actually to find more of the same kind of people that you already had. And that was actually the tagline. The original tagline was “clone your best worker.” So, the worker that you already had, right?

Ethan Zuckerman:

There’s nothing ominous about that, nothing ominous about cloning your best worker. I can’t imagine that going wrong. Your best worker, right?

Ifeoma Ajunwa:

Nothing, nothing, you know, exclusionary about that. Right. So yeah, that was really the impetus of the automated hiring system. The idea is that you can replicate your workplace as it is already, because you’ve already found the best worker. You are already confident that you already have the best workers. So you’re not necessarily trying to find other kinds of workers. So that was how they started.

Now, as obviously we move into the modern age, and more into, you know, people thinking more about inclusion and diversity, they started being marketed as an anti-bias intervention. As in, oh, we know human managers or human interviewers have bias, and they can actually unleash that bias on applicants in even an unconscious way. So why not just remove the human and turn to the machine, right? Turn to automation as an anti-bias intervention. The problem with that, of course, is that’s not how these machines were created. So, trying to use them as an anti-bias intervention does not work if that’s not really what they were created for. You would still actually need to do more work to have them work as anti-bias interventions.

Ethan Zuckerman:

Right. This ties into some of the examples that we’ve heard of Amazon discovering that their hiring system discriminated against women, in part because Amazon may well have had a workplace culture where women were set up to fail, and therefore, by looking at the most successful employees, they were only hiring men.

You have an example about, I believe it’s a law firm, that ends up being biased towards lacrosse players, which we understand to be encoding a bias towards white men, and particularly privileged white men, but which the system can claim is independent. It just happens to be a characteristic that co-occurs.

Do you see people working on systems that actually try to de-bias hiring? What would that look like? Are we seeing people working in that space?

Ifeoma Ajunwa:

So, first of all, what it would look like is basically having guardrails in place to check for bias. So, the automated hiring systems we have now do not check for bias. They’re not even set up that way. And this is something that actually first came to my attention reading Cathy O’Neil’s book, Weapons of Math Destruction, when she talks about hiring, how basically, generally the way it works is you have people who are red-lighted, the people who are yellow-lighted and the people who are green-lighted. Red-lighted, meaning like basically your application is never seen again. 

So you apply on the automated hiring system. If you’re red-lit, you’re gone. No human gets to see your application. And there’s not even a record of your application. Yellow-lighted means, ok, you know, you’re still on the cusp. So maybe a human might see your application. And green-lighted, those are the people who for sure will have their application seen. But all the red-lighted people are gone, and there’s no record. Like they disappear into the ether.

So, of course, what can happen there is that a lot of the red-lighted people could actually all be minorities or could all be women, right? And you would not know this even as a corporation unless you were keeping a record to then audit, right? And check for this bias. So that’s, I think, what’s missing. 

Until we start creating, and that’s what I advocate for in my book, we need to start creating automated hiring systems that actually have built-in auditing mechanisms. Right. So they will keep a record, not just of the yellow-lighted and green-lighted, but even the red-lighted just to be able to do that audit. 
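The built-in auditing mechanism described above can be sketched in a few lines. This is an illustrative sketch, not the design of any real hiring system: the function name, group labels, and decision-log format are all hypothetical. The only assumption baked in is the one made in the conversation — that rejected ("red-lighted") applicants are logged rather than discarded — and the 80% comparison follows the EEOC's four-fifths rule of thumb for spotting possible adverse impact.

```python
from collections import defaultdict

def audit_hiring_log(decisions):
    """Compute per-group selection rates from a complete decision log.

    decisions: list of (group, outcome) pairs, where outcome is
    "red", "yellow", or "green". Crucially, "red" (rejected)
    applicants are in the log too -- without them, no audit is possible.
    """
    totals = defaultdict(int)    # applicants per group
    advanced = defaultdict(int)  # applicants a human might actually see
    for group, outcome in decisions:
        totals[group] += 1
        if outcome in ("yellow", "green"):
            advanced[group] += 1

    # Selection rate for each group.
    rates = {g: advanced[g] / totals[g] for g in totals}

    # Four-fifths rule of thumb: flag any group whose selection rate
    # falls below 80% of the best-performing group's rate.
    best = max(rates.values())
    flagged = [g for g, r in rates.items() if r < 0.8 * best]
    return rates, flagged
```

Run over a toy log where group A advances two of three applicants and group B only one of three, the audit flags group B, which is exactly the check that is impossible when red-lighted applications "disappear into the ether."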

Ethan Zuckerman:

And this is, in part, what I really love about this book and why I wanted to have you on this podcast, which, you know, really tries to emphasize this idea that critique often can be followed by moving in the right direction and possibly finding ways to deal with this. 

One of the terms you use in the book is “captured capital,” which sent me back to an experience that I had maybe ten years ago. I was teaching at MIT, and I was brought in to talk about algorithmic auditing with a group of union leaders from the United Auto Workers. And one of the things that I’ve learned from union leaders over time is that these guys are really smart. They may have come up through a very different educational path than I have, but they’re often going to ask questions that I never would have thought of.

And about halfway into this conversation, someone says, look, we understand the companies are videotaping our movements. They are trying to learn from our bodies the best possible ways to do this job. And we know that what they’re going to do with that imagery is train robots to do what the best of us are able to do. What I want to know is, do we have a right to the intellectual property of the movements that we’re creating? And I found myself going, I have no idea. What an amazing idea. What an amazing concept. 

And then you’re talking about that. You’re talking about captured capital potentially being something that could be an asset for workers to access either in negotiation or in a more real sense. Walk us through what that might look like as far as sort of equalizing these dynamics. 

Ifeoma Ajunwa:

Right. So, you know, what’s happening here is that this capital is being captured from workers, not for their benefit, right? But for the sole benefit of the employer, in that that capital is then used to automate the workplace, therefore displacing the worker. And there’s a question of, ok, the worker is displaced. If the worker does not have a vehicle to employment or, you know, a livelihood, who should bear the cost of that? Right. And so for me, captured capital is actually a way of conceptualizing who bears the cost of automation. It shouldn’t be the worker, because ultimately that means it’s all of us, right, because then the worker will need public assistance.

So my belief is that as the employer is using worker data, you know, whether it’s biometric data in terms of body movements, et cetera, or whether it’s, you know, the savoir faire, the knowledge of how the work is done, then there should be some of that capital going back to the worker. So, we can think of it as maybe the basis for funding a universal basic income that goes back to the worker.

Ethan Zuckerman:

Oh, wow. 

Ifeoma Ajunwa:

For example, right. So, like, you can think of some kind of tax, right, on corporations: for each worker you automate, you still have to pay a certain amount of money, right, to the universal basic income fund that will then go back to the workers. So Jaron Lanier, for example, has, you know, he has argued for this sort of individual right to sell your data. I am loath to advocate for that framework because I think the power imbalance is too big, right, such that the very disadvantaged, right, could be talked into selling their data for a dollar, right.

You know, let’s say you’re a homeless person, and then somebody, you know, just wants to get all your data for like, you know, five dollars, ten dollars. You may not even realize the true value of your data, or you might be desperate enough to sell it for much less than it’s worth. So I don’t advocate for that regime of just, you know, anybody selling their data, but I do see a potential for a sort of regulatory scheme, right, on corporate entities such that that capital, the captured capital, goes back to workers also, some of it at least. Yeah. 

Ethan Zuckerman:

What are the other sort of worker rights frameworks that help us think about this? And just a reminder on that, we’ve been talking about delivery drivers, we’ve been talking about factory workers, but we saw, particularly during the pandemic, we saw quantified work come into almost every workplace. 

We’re starting to see people who are engaged in creative professions, who are graphic designers, suddenly being monitored by systems that are looking to make sure that their mouse moves every few seconds. And, of course, you see all these creative things with people wiring little robots so that their mouse moves periodically, so that they have time to go take a break. So if we don’t find a way to defend worker autonomy and defend worker rights, this is coming for all of us, because the economic forces seem to be pushing us in that direction. We counterbalance economic forces with regulatory forces. We counterbalance with assertions of rights. Is there a regulatory framework or a rights framework that you, as a law professor, find yourself advocating for when you’re looking at these complex issues?

Ifeoma Ajunwa:

Right. So I mean, one big part of that is some framework changes to how we allow people to sue for employment discrimination. We have had Title VII of the Civil Rights Act of 1964 as really the anti-discrimination framework. And, you know, that’s from 1964. Right. At that time, a lot of the technologies we have now were not even conceptualized. So to think that that law, as it stands, is adequate, I think is misguided.

It doesn’t mean we need to start all over. I’m not saying that. It does mean that we need to perhaps reinterpret the law. So, the way we have interpreted Title VII is that it allows for two causes of action: disparate treatment and disparate impact. And I’m actually arguing that you could think of a third cause of action, which is discrimination per se, when you are dealing with automated hiring systems. Which is that, you know, a worker who sees something that’s egregious in the way the automated hiring system is behaving should be able to bring suit, and then put the burden of proof on the actual employer to prove that that system is not discriminatory. Because as it stands, disparate impact as a cause of action has too high a burden of statistical proof that the plaintiff has to meet, and it’s just not possible. It’s frankly a losing game for the plaintiff. So that’s, you know, one idea, right? Having this third cause of action added.

Another idea is, of course, the auditing mechanism as a mandated feature of automated hiring. So it’s the idea that the government can say, ok, you want to have automated hiring, fine, but you must audit your automated hiring system. So just making that a mandated feature of having an automated hiring system, I think, is so important.

Also, you know, even at the agency level, right, the FTC, the Federal Trade Commission could get into the business of what claims these automated hiring systems are making. A lot of them, as I mentioned, are claiming to be anti-bias interventions without necessarily any proof of that. So regulating what claims the automated hiring systems can make can actually help educate employers that, ok, just because you adopt an automated hiring system, it doesn’t mean you’re scot-free, and you don’t have to worry about discrimination. You actually will still need to audit and check for that. 

And then finally, you know, another thing I’ve advocated for in my congressional hearing testimony was this idea of a Worker Bill of Rights, really letting workers know that they have rights in terms of how their data is being collected and how their data is being used. Because right now there’s a feeling that you just have to acquiesce to anything in the workplace. And, you know, there’s a feeling that workers are essentially powerless, and they don’t have agency over their own data. And, you know, I think it could be a Worker Bill of Rights that delineates limits in terms of, like, worker surveillance, like how far you can go with surveillance. Because, frankly, some of the surveillance can violate the labor organization rules, the NLRA, right, that says that workers have to be able to organize if they choose to do so. But obviously, if you’re surveilling them to such a pervasive extent, it can actually interrupt or interfere with organization. So, yeah, just a Worker Bill of Rights, educating workers as to their rights, delineating for employers the limits of their surveillance. I think that’s really important.

Ethan Zuckerman:

Maybe there’s a hopeful take from this on a labor solidarity point of view, which is that the rights of auto workers can feel very far away for those of us who are lucky enough to have desk jobs. But then I find myself wondering how long I’m going to have a column in a magazine, or when my editor is just going to go to ChatGPT to do it.

You know, of course, I think we’re both aware that law is often listed as one of the fields where jobs are most likely to disappear, because a huge number of those low-level associate jobs are essentially paper generation jobs that are pretty easily automated. So maybe some form of solidarity forms between different surveilled and quantified workers here.

Is there an optimistic version of a quantified future in which, through a different rights framework and a different economic framework, quantification turns out to be a good thing? Is that just unrealistic and too hopeful? Or is there some version where, if we get the rights and the regulations right, we could have some sort of positive Taylorism in this space?

Ifeoma Ajunwa:

Right. So let me kind of tease apart your question, right? I think there is a version of a data-enabled future that could be positive for workers, right? There is a version of a future where we have the same technologies that we have now in terms of the data that they can generate. But that data is used in a much different way, which is used to empower and enable workers to better do their jobs. For me, the word quantification is inherently negative, because it is the idea of reducing human beings to numbers. 

I think we can have a data-enabled workplace where we don’t quantify the worker themselves, where we have an acceptance of human frailty and the idea that, yes, some workers will get sick, and yes, we can all join together and help the people who get sick. And even that some people will get sick and still be able to work, and we can enable them to continue to work, even in ill health or less than ideal health. Right. So just, you know, a workplace that actually is accepting of our humanity.

Ethan Zuckerman:

That idea of quantifying the work and not the worker takes us back to Ford, and takes us back to that idea that maybe it’s ok to watch people on the assembly line with a stopwatch, but when you follow them out of the factory door, you know, that might be too far. Dr. Ifeoma Ajunwa, this has just been fascinating. The book is called The Quantified Worker. It’s out there right now. There are some other great interviews and talks you’ve given around this. What a pleasure. Thank you so much for being with us on Reimagining the Internet!

Ifeoma Ajunwa:

Oh, thank you so much, Ethan. It’s also been my pleasure. Thank you.

