
Should governments regulate how Facebook moderates speech? Can you sanction an automated smart contract that’s used for international money laundering? Was it a coincidence that every social media platform banned Donald Trump at the same time? In the first part of our 4-part miniseries looking at trust online, we welcome evelyn douek, host of the excellent podcast Moderated Content, and Primavera de Filippi, one of the foremost legal experts on blockchain, to answer a deceptively simple question: what does trust really mean on the Internet?
evelyn douek is an assistant professor of law at Stanford Law School and the host of Moderated Content alongside Alex Stamos.
Primavera de Filippi is Director of Research at the French National Center for Scientific Research (CNRS) in Paris, faculty associate at the Berkman Klein Center for Internet & Society at Harvard University, and co-author of the 2018 book Blockchain and the Law.
Transcript
Mike Sugarman:
Hi everybody, welcome back to Reimagining the Internet. I’m your producer, Mike Sugarman, and I am very excited to welcome a special guest today: our host, Ethan Zuckerman. Ethan, welcome to the show.
Ethan Zuckerman:
Hey Mike, it’s great to be here on the other side of the mic. Glad to have you.
Mike Sugarman:
So we’re trying something a little different for the next few episodes. We’re doing a series about trust. Ethan, you wrote a book that came out in 2021 called Mistrust. It’s something you think a lot about. And trust comes up again and again, in a lot of different ways, online today. Do we trust platforms? Do we trust cryptocurrencies, which are supposed to be a trustless technology? Do we trust fellow community members online? Do we trust Elon Musk to run Twitter? Even: what is trust?
Ethan Zuckerman:
Trust is the human… Oh, that’s a good one. Trust is a very basic unit of human interaction. It’s the decision that you are going to let someone else act on your behalf. And they’re going to act with your best interests at heart. If I trust you to hold my wallet, I believe you’re not going to open my wallet, take the money out of it and run out of the restaurant. Trust basically says, I’m going to ask you to be an agent on my behalf, and I believe you’re going to take my well-being into consideration and make it the primary consideration in the decision that you’re making.
Mike Sugarman:
I like that definition and I think what we’re going to realize during the course of this series is that being on the internet is actually pretty similar to what my parents thought it was 25 years ago when we first got a home computer, which is you’re dealing with a lot of people who you don’t know and you have to decide whether or not you trust them, right?
Ethan Zuckerman:
So we have to decide if we trust corporations, we have to decide if we trust other users, we have to decide if we trust ourselves to judge what we’re looking at. It’s a huge set of issues. When you start looking at the internet through the lens of trust, you end up realizing you’re trusting people all the way down. You’re trusting networks to send your bits where they’re trying to go. You’re trusting that the search engine isn’t compiling a dossier about you and selling it to the highest bidder. And of course, they probably are. You’re trusting the transaction providers not to empty your bank account when you go and buy that book from Amazon. Trust is involved with almost every aspect of the internet. Sometimes when we look at these new sorts of bleeding-edge examples, we can see how much trust carries through the internet, even the parts of it that we use every day and rarely think about.
Mike Sugarman:
So again, you published a book called Mistrust in 2021. It came out in… remind me of the month.
Ethan Zuckerman:
Yeah, January 2021.
Mike Sugarman:
That’s interesting. I have a very foggy memory of that period of time. What else happened in January of 2021?
Ethan Zuckerman:
Oh, it was just a great time to be writing a book about American democracy. There was this little insurrection. You might remember it. So I wrote Mistrust as a reaction to spending nine years at MIT working in a research group called the Center for Civic Media.
At Center for Civic Media, and Mike, of course, you’re an alum of Civic as I am, we spent a lot of time talking about how young people get involved with social change. I got really interested in this idea that young people, in many cases, are really skeptical of government institutions, and sometimes institutions as a whole.
So I wrote Mistrust about a very specific kind of trust. It’s about trust in institutions. And the truth is, in the United States, trust in institutions of all sorts has been dropping since the 1970s. If you ask the average American, “Do you trust the government to do the right thing all or most of the time?” In 1964, 77% of Americans would say, “Yes, yes, I trust the government all or most of the time.” If you ask an American the same question right now, fewer than 15 percent, one-five, will say that they trust the government.
The interesting thing is it’s not just government. It’s all sorts of institutions. It’s banks. It’s the medical profession. It’s newspapers. It’s the police. And in almost every case, you can find a moment of crisis where an institution loses the public’s trust. Americans lost an enormous amount of trust in religion after it became clear that the Catholic Church had been protecting abusers for decades. But across the board, this is not a pretty picture about trust in institutions.
Mike Sugarman:
If we don’t trust our institutions, it’s hard to trust in, for lack of a better way of putting it, our democracy, right? Our democracy functions because we get to have some say in how institutions work. If we don’t trust them to even be functional, where does that leave us?
Ethan Zuckerman:
So let’s look at a really practical version of this. If you don’t believe that elections are being carried out fairly, maybe you’re not going to participate in those elections. Maybe you’re not going to accept the results of those elections. Maybe you’re going to participate in a violent insurrection to try to overturn the results of those elections.
You could argue that the insurrection on January 6 was literally the direct result of mistrust reaching levels in the United States where it is becoming increasingly difficult to govern this country. We watched the Arizona Senate hold a six-month, $6 million recount of the Maricopa County vote. The recount found that the vote was completely solid; it actually found 210 more votes for Biden. But it got carried out because these levels of mistrust were so high that the Republican Party faithful demanded it, and mainstream Republicans felt like they had to play to that base. This attitude that no one is to be trusted, that everything should be challenged, is now a mainstream and quite popular political attitude, and it seems to have particular strength on the right.
Mike Sugarman:
And I would say it’s not just the right in the United States. This is a classic tactic in authoritarian Russia, right? This is something that played out in Brazil during the Bolsonaro years. It’s played out in India with Modi and the BJP. These are, for lack of a better way of putting it, authoritarian right-wing leaders. Why do people trust authoritarians when they don’t even trust, I don’t know, post offices, schools, hospitals?
Ethan Zuckerman:
One of the best rules of thumb about what constitutes an institution is whether or not it has a face. Institutions don’t have faces. They’ve got rules instead. You might trust Joe Biden and mistrust the institution of the American presidency. The trick authoritarians use is to say, “Don’t worry about the institution, just worry about me, and you know you can trust me.”
And the truth is it’s probably easier for us to trust people than it is to trust institutions. We go through life trusting people. We trust our partners, we trust our parents, we trust our coworkers. We’re engaged in these complex webs of trust all the time. We’re used to trusting people. We may or may not be used to trusting institutions. If what we get out of our interactions with institutions isn’t positive, if it’s institutions that we blame for our financial insecurity or hardship or for other aspects of our lives that aren’t going well, we might be oriented towards mistrust in general.
So what happens is the authoritarian basically says, “You can’t trust this government, but you can trust me.” And in many cases, people believe that. Trust is a negotiation. It is much more a verb than it is a noun.
Mike Sugarman:
How do we get out of this trap? Mistrust just seems to breed more mistrust. That seems to make institutions deteriorate. All of this just snowballs.
Ethan Zuckerman:
There are two basic ways out of the mistrust trap. If you’re an institution, the best thing you can do is invite people to become part of your institution. Once people feel like they’re part of something, they’re much more sympathetic to it. So this is a possible benefit to the skepticism of elections in 2020. Some people responded to that skepticism by becoming poll monitors. In the process of monitoring the polls, they ended up discovering it’s really hard to rig elections in the United States. We actually have a whole lot of safeguards in place. It would be really hard to undermine them on a big level. And beyond that, people who have participated in that work of monitoring polls and actually making those polls run tend to buy into the institution. You end up with people saying, “Well, I’m sure fraud happens somewhere, but it certainly didn’t happen at the polling place that I personally was working at.” So if you’re an institution, you can open yourself up and find ways to let people become part of that institution too. The second thing you can do is you can be a revolutionary. You can say, “This institution is not working very well. We need new institutions to solve this problem.”
And I think in a lot of cases, our enthusiasm for the tech companies has had to do with them positioning themselves as revolutionaries. There was a lot of rhetoric 10 or 15 years ago about unseating CNN, about taking down the gatekeepers, being able to communicate directly with one another, not having anyone try to shape our public opinion. I think that allowed Facebook or Twitter to position themselves as revolutionary. And I think there was a certain amount of trust built into being those outsiders, being those revolutionaries. One thing that’s really tough for revolutionaries is that if your revolution succeeds, you probably become an institution.
And after the revolution ends, you start to calcify. You go from being the outsider to being the insider, and you’re suddenly subject to all of these mistrust dynamics.
Mike Sugarman:
We’re in a situation where Facebook kind of acts like an institution, but there’s something weird about that, right? It’s built on this ethos, as a lot of tech companies are, of move fast and break things. But institutions don’t move fast, and ideally, they don’t break things. So how do we learn to create tech that people can actually trust, given all that?
Ethan Zuckerman:
So Facebook moved fast for a long time, and it broke a lot of things. I think what finally made this clear to Facebook was when people drew a connection between the violence in Myanmar and Facebook’s apparent inability to control speech on the platform in Burmese. There was pretty clearly incitement to violence being delivered in Burmese on the platform. At that point, Facebook finally went out and commissioned a human rights report on what it had done right and what it had done wrong in those circumstances. Subsequently, there have been independent reports on the situation as well. It’s the kind of issue that I think forces an organization to take seriously just how much power it has, and how there might need to be oversight.
And Facebook has gone ahead and created an external oversight board. Depending on who you ask, this is anything from a good idea with some shortcomings to a total farce. The good idea version of this says, actually, you want to have someone to have oversight over content moderation. You want there to be public discussion of these decisions. Having it act as an appeals court, having a rotating panel of judges, having an assurance that one of those judges will know the appropriate cultural context. All of those things are smart.
Mike Sugarman:
The cynical side looks at this and says, “These judges are all paid for by Facebook. Are they really going to find that Facebook has done things terribly wrong?”
Ethan Zuckerman:
The interesting thing is that in some cases they’ve been quite critical of Facebook. But people are right to be suspicious. People are right to be critical. Facebook is not the only platform doing this. Twitter historically was quite good about opening its API to researchers. That’s changed under Elon Musk, and it’s not clear that it’s going to change back. But Twitter historically has released research that has been extremely unflattering. Twitter released a study that said, “We think our core algorithm favors right-wing political figures over left-wing ones, and we’re not quite sure why.” I thought that was a really good sign. I think these are still baby steps for companies that are facing massive, complicated trust problems.
evelyn douek:
The idea that the more speech the better, and that the remedy for bad speech is more speech, good speech: it’s a uniquely American view. It’s American First Amendment exceptionalism.
Mike Sugarman:
That’s evelyn douek, an assistant professor of law at Stanford Law School and one of the leading voices on what it means to leave questions of free speech up to major social media platforms.
Here she’s talking about the salad days of platforms in 2011, when it seemed like Facebook and Twitter were fueling the Arab Spring by allowing so many people to post so much on their platforms. It seemed like a good, even revolutionary, thing that everyone from high-profile activists to everyday people could register their dissent online and then flood the streets.
Ethan Zuckerman:
About a decade ago, social media platforms gained collective trust by letting us post whatever we wanted, no matter the effect on politics and governing. They positioned themselves as defenders of speech, and for the most part, people felt like the platforms were on their side.
I asked evelyn: what changed?
evelyn douek:
And then 2016 happened, the Russian interference, and then even more so the pandemic, and the disinformation around this literal public health crisis caused them to change tack. There is something different when people are dying all around you and there’s evidence of information on your platforms that could be contributing to that. They took a different stance, and they took this approach that maybe not all speech could be cured by counter-speech. There was lots of research showing that mis- and disinformation spread much further than a little fact check that says, “Hold on, this may not be true.”
And so they took a much more interventionist approach. They balanced the right of free expression against these other interests and said, “Look, there is a point at which the right of free speech is outweighed by these safety concerns that we have.” But then once they started doing that, it turns out that’s not easy. They came in at the beginning of the pandemic and said, “Well, we can do this because we’re going to point to the authoritative sources to tell us when we should take down mis- and disinformation.”
And it turns out the authoritative sources sometimes get things wrong. The most famous example here is mask ad bans. At the beginning, because the WHO was saying things like we shouldn’t wear masks, the platforms were banning advertising for masks. And the research moved far faster than the WHO did in changing its guidance on that. And so the platforms were stuck in this difficult position of having some authorities on one side and some authorities on the other side.
So it turns out that that creates this massive trust problem. As soon as you start to draw lines, the question is, “Who are you to draw these lines? How are you drawing these lines? Who are you listening to? Are you getting all the relevant perspectives?” And so a lot of the law is premised on this idea that the process helps, that process is the way that you get the relevant voices into the room, the way that you make people feel heard, the way that you ensure that decisions are accountable and well thought through. Even the process of having to explain yourself will make you more thoughtful before you make the decision, right? You know you’re going to be called on it. You know you’re going to have to rationalize your choice. Knowing that is going to have to happen can make you just be a little bit more thoughtful about what you’re doing.
And so that is where we are. But then it turns out, now we’re in this process phase, and it’s like: what process? What process makes your decision-making more accountable and better? What actually increases that accountability and that trust? And it turns out that’s not so simple. Maybe the judicial model that a lot of people hung their hat on is not going to be the most effective when you’re doing things at the scale that platforms are doing them and when you’re using the technology that the platforms are using. It’s this fundamentally different environment. And so now we’re at this third point of, “Oh, no. We thought we’d solved it, but we have a whole batch of new problems.”
Ethan Zuckerman:
Talk a little bit about the Facebook Oversight Board. You have written very, very thoughtfully about the oversight board. Should we trust platforms like Facebook that are building quasi-independent oversight boards? Should we trust them more than companies that don’t seem to be directly reckoning with these questions of trust?
evelyn douek:
So I think the jury’s still out. Excuse the pun, given it’s a [inaudible 00:10:23] institute. The jury’s still out on whether the oversight board is working, whether it means that we should trust platforms more. But that was certainly the intent. The intent was not that the oversight board would fix content moderation, or if it was, that was foolish. We talked about the scale of content moderation, and the idea that a single oversight board could correct all of the mistakes on Facebook was always illusory.
But the idea was that if you had this independent body that was reviewing Facebook’s decisions and coming up with rules that weren’t ostensibly driven by Facebook’s profit motives or things like that, that that would build a more trustworthy environment. The idea was that its content moderation rules were being reviewed for whether they were promoting free expression, or promoting user rights, rather than just, as the trope goes, trying to keep people on site for longer.
So that was the idea. And it comes out of this historical research, or prior research, from people like Tom Tyler at Yale, who have shown that if you give people more process, more opportunity to be heard, more rights of appeal, more participation in decisions that you make against them, they believe that the process is more legitimate and they trust it even when the outcome comes out against them. So the idea being… And a lot of this work is done in the criminal justice context. If you give someone the opportunity to be heard, the right to present their case, understandably they’re going to trust that more, even if the outcome comes out against them, because they feel like they’ve had a right to be heard.
And so that’s the motivating force behind this project, this experiment, I think. But it’s an entirely different context from the criminal justice context. We don’t have a lot of research. Do people even know that the oversight board exists? I think if you went out on the street and asked any random person, any random Facebook user, “Did you know that an oversight board exists?” The answer would probably be no. I’d be very surprised if you could find one that said yes. And so the idea that it might be building trust across the population in those circumstances is illusory, but maybe that’s not the audience. Maybe the audience is you, me, European legislatures, Congress, those kinds of people. Reporters. The idea would be maybe they’re going to trust it more given that there’s these very prestigious, very esteemed people overseeing Facebook and keeping an eye on what they’re doing.
Ethan Zuckerman:
I asked evelyn for her thoughts on one of the more cynical explanations for why the Oversight Board exists: was Facebook trying to create its own set of people who resembled regulators to avoid actual government regulation?
Mike Sugarman:
That’s effectively what Hollywood did with the so-called Hays Code from the 1930s through the 1960s, when it created a set of rules for what filmmakers working with major studios could and could not put in movies. Hollywood figured if it didn’t make movies where bank robbers got off scot-free and couples had to keep their clothes on, the government wouldn’t step in and censor movies for them.
Ethan Zuckerman:
And it worked.
evelyn douek:
If you look at Mark Zuckerberg, he wrote this op-ed in the Washington Post calling for regulation. He said, “Please regulate us. Here are four realms in which we would like to be regulated.” And one of them was content moderation. I think Facebook would love to be told what to do by governments so that they can get the heat off them. So I don’t think it was necessarily an attempt to avoid regulation so much as to direct regulation, perhaps.
And you have seen some of that. There is a new EU package of legislation. It incorporates this idea of having appeals to an independent third-party auditor. So I think there’s this attempt to shape the way that governments view content moderation and to shape the idea of what would make these systems more trustworthy, but I don’t think it was an attempt to make governments back off entirely, because I think it’s the big guys, it’s the Facebooks of the world, that are going to be able to comply with regulations. And it’s the little guys that are going to really struggle with the idea that they need to allow appeals to third-party bodies, because that’s going to be very resource intensive. So I don’t think it was intended necessarily to be effective in that way.
Ethan Zuckerman:
So let’s dig into that question of bias because I think that’s the other piece of this that’s absolutely fascinating. I think you are right to say that almost no one trusts Facebook, trusts Twitter, trusts these major platforms, to have the consumer interest at heart. I would say that people on the political right seem to mistrust these platforms even more thoroughly than people on the political left. And people on the political right often have a sense that these platforms have a bias against right leaning content and perhaps there’s no better example of that than the deplatforming of President Trump. What did removing President Trump across social media platforms… And of course you’ve had this very thoughtful argument about the ways in which content companies tend to move in tandem almost acting as a cartel. But how did this mass deplatforming of Trump change the trust relationship between the political right and these platforms?
evelyn douek:
The storm was brewing well before they deplatformed Trump. We had seen this starting with the way that they policed disinformation in the pandemic, the general way in which they approached election disinformation, and the labeling of Republicans. We saw it in the way that they treated the Hunter Biden laptop story that came out. But certainly it’s fair to say that the deplatforming of Trump really turbocharged this. And it really became… I mean, it’s such a salient, tangible political talking point. And it’s effective. And there is something really unsatisfactory about the idea that all of these platforms that ostensibly have all of these different kinds of rules, that have all of these different decision makers, somehow at exactly the same moment decided that Trump had just stepped over the line.
And of course, January 6th was a really catalyzing event. But there’s many examples prior to January 6th of things that he said that arguably went over the line. I mean, the looting/shooting post in the middle of the Black Lives Matter protests is a great example of something that quite clearly could be seen to be crossing a line and indeed platforms did label that post or hide it… Some of them.
But this idea that you had all of these different platforms, different service providers… AWS deplatformed Parler. You had this flood of decision making all at the same time. It does feel like that idea of process that we were talking about before, the idea that process can build trust… There wasn’t a lot of process around that. They all just moved in tandem. There was no real right to appeal. There was no public conversation. Very few of them gave any public explanation. I am still waiting for YouTube to tell us why it made the decision that it did. And also Susan Wojcicki has said that they will reverse it. They will reinstate Trump when they have decided that the danger has subsided. And I don’t know when that is. How are they adjudicating that? What does that mean?
Mike Sugarman:
Ok, easy, we’ve got to regulate the living hell out of these platforms. They make decisions about free speech and politics all the time, and we have no idea what their rationale is.
Ethan Zuckerman:
Well, Mike, evelyn might agree with you and she might disagree with you. It depends on what you mean by regulation. Do you mean that the government regulates speech on these platforms, or that the government sets up clear guidance for how the platforms should tell us what they’re doing?
evelyn douek:
As to whether governments should regulate, it’s a really difficult question. So one of the main motivating forces behind a right to free speech, the reason why we have a right to freedom of expression, is that we don’t want the government regulating what you and I can say in politics. The government is obviously very interested in, or could have its own interests in, making sure that political debates are circumscribed in a way that is advantageous to it. Governments want to get reelected. And we’ve seen a lot that the first sign of authoritarianism is often shutting down freedom of expression. So generally we really don’t want governments being involved in regulating speech.
On the other hand, these companies are huge, have their own interests at stake, their own commercial interests, which I don’t trust. I don’t know whether a profit motive is a better motive for regulating speech. And we have no visibility at all. So I guess my instinct is no, we don’t want governments involved in substantive decisions about speech, what you and I can say, where the appropriate lines of mis and disinformation lie. I think that that would be too dangerous in political debate.
But I do think there’s a lot of work to be done around building these systems of trust that we’ve been talking about and getting that transparency that we said was necessary to get insight into the systemic failures or the processes that these platforms have. Because you can’t fix problems that you don’t understand. And at the moment, we just have no idea what’s going on in these platforms. And so the first step has to be using regulatory tools to get the insight that platforms are just not going to give us voluntarily, and then using that to inform more productive, perhaps more coercive, and more extensive legislative responses.
Mike Sugarman:
So, if I’m getting this right, evelyn is saying that regulating speech is such a sticky matter in the US that in the situations where it’s necessary, we need to do it with the utmost care. And that kind of care is probably only possible if tech companies are mandated to show their work, so to speak. So, we need to do all that just to get some idea about how to regulate these companies properly within a US legal framework.
I mean it’s fair, but that’s a pretty complicated solution.
Ethan Zuckerman:
Oh, if you think that’s complicated then we need to talk about regulating blockchain.
Mike Sugarman:
Oh man, we’re already talking about blockchain?
Ethan Zuckerman:
We have to, Mike, because…
Mike Sugarman:
Because this is a series about trust and cryptocurrencies using the blockchain are supposed to be trustless, right?
Ethan Zuckerman:
Yup, exactly.
Mike Sugarman:
Okay well, you might as well explain Bitcoin.
Ethan Zuckerman:
So, picture yourself in 2008. In the United States, you were watching a presidential election play out between an arch-institutionalist and a charismatic junior senator whose campaign poster was just his face with the word “Hope” under it. That was Barack Obama, and people really needed some hope. And what happens in September of that year, just before the presidential election?
Mike Sugarman:
Lehman Brothers collapses. And it sets off this chain reaction in investment banks here and in Europe. Total chaos.
Ethan Zuckerman:
That’s right. Needless to say, there were a lot of people who were pretty pessimistic about the global financial system. Do you know what happened on Halloween of that year, Mike? The mysterious entity known as Satoshi Nakamoto published the whitepaper that established Bitcoin. And what he outlined there was how this technology called the blockchain could act as a decentralized ledger of financial transactions that anyone could check, eliminating the need for a central bank that controls a currency. We wouldn’t have to worry about relying on those flimsy banks or unaccountable governments for financial transactions anymore.
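To make that concrete, here is a minimal Python sketch, ours and not Bitcoin’s actual code, of why a hash-chained ledger is something “anyone can check”: each block commits to the hash of the block before it, so tampering with history anywhere breaks every link after it.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents in a canonical (sorted-key) JSON form.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    # Each new block commits to the hash of the previous block.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify_chain(chain: list) -> bool:
    # Anyone can recompute every link; a tampered block breaks the chain.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, ["alice pays bob 5"])
append_block(chain, ["bob pays carol 2"])
assert verify_chain(chain)

chain[0]["transactions"] = ["alice pays mallory 500"]  # tamper with history
assert not verify_chain(chain)  # every later link now fails to match
```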
Mike Sugarman:
And there are a lot of really specific, kind of classic right-wing libertarian ideas that show up in that white paper. We’ll cover those in a future episode in this series. But for right now, we’re interested in this weird case where cryptocurrency is supposed to be trustless. We just witnessed a huge wave in 2021 and 2022 of regular people investing in NFTs and various cryptocurrencies because they figured they might get a huge financial return. Sounds to me like that requires a lot of trust.
Ethan Zuckerman:
That’s right. It’s been about a decade and a half since Bitcoin launched, and now we have a whole bunch of other technologies like Ethereum, NFTs, smart contracts, Decentralized Autonomous Organizations – and these are currently unregulated. We have to figure out how the law is going to deal with them.
Primavera de Filippi:
Similarly, in the case of blockchain, we have cases like DAOs, where a DAO is actually not recognized as a legal entity by the law. And so if the law wanted to expand its scope in order to regulate DAOs, it would need, to some extent, to recognize some type of legal capacity, some legal personality, in those entities, which at the current moment are not recognizable as legal entities, because they are just smart contracts. There are no people, there is no individuality. So that means it is possible for the law to choose to expand its scope in order to cover those new activities or assets, but that will require a reformulation of the overarching system of the law.
Ethan Zuckerman:
That’s our friend from the Berkman Klein Center at Harvard and the French National Center for Scientific Research, Primavera de Filippi. She literally wrote the book on blockchain and the law, and it’s called Blockchain and the Law. I interviewed Primavera to figure out exactly why so many people could trust this so-called trustless thing we call cryptocurrency.
There’s a wonderful line in one of Satoshi Nakamoto’s early writings about the blockchain where he says the problem with conventional currency is all the trust that’s involved with it, and goes on to explain this need to trust central banks not to devalue the currency, to trust banks not to steal money from your accounts. It feels like in many ways, those misgivings about central banking and what now gets called fiat currency made a lot of early cryptocurrency investors interested in these trustless systems. Is it correct to think of something like Bitcoin as trustless? What are you trusting when you’re using a system like that?
Primavera de Filippi:
In my personal opinion, I don’t think we yet have a system that is fully trustless. Actually, a system that is operated by people I don’t think can ever be fully trustless. In the case of Bitcoin, there are two specific dynamics. One is that they implement a particular protocol, and because that protocol relies on proofs and cryptography and mathematics and whatnot, the protocol replaces the need for trust with confidence. So we call it the confidence machine instead of the trustless system. So it’s actually generating confidence. And this is true, but this is only true to the extent that you actually can trust the underlying governance of the system, because all the technological guarantees that Bitcoin or any other blockchain system provides are dependent on the fact that no one can do, for instance, a 51% attack.
So if you just look at the system and say, “Oh, this is trustless, this is pure confidence enabling,” it is because you are actually not looking at the larger picture, at the actual underlying governance of that system. And the trust is essentially distributed amongst a lot of actors. So you don’t need to trust one single actor like a central bank, but you need to trust a lot of actors. And of course, that means it’s harder for those actors to collude, but as we have seen, because of the economic incentives and because of the economies of scale, actually today we just have those few mining pools that control a large majority of the Bitcoin network and the Ethereum network and so forth. And so trust actually persists. It is distributed, but it’s not so distributed as to be trustless.
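Primavera’s caveat about the 51% attack can be seen in a toy simulation. This is a deliberately crude model of proof-of-work, not real consensus code: treat mining as a weighted lottery in which, each round, one side extends its chain in proportion to its hash power. Below a majority, the attacker’s fork falls ever further behind; above it, the attacker can outpace the honest chain and rewrite history.

```python
import random

def attacker_lead(attacker_share: float, rounds: int = 100_000) -> float:
    """Toy model of proof-of-work as a weighted lottery: each round, one
    party wins the right to extend its own chain in proportion to its
    share of hash power. Returns the attacker's average lead in blocks
    per round over the honest chain."""
    attacker, honest = 0, 0
    for _ in range(rounds):
        if random.random() < attacker_share:
            attacker += 1
        else:
            honest += 1
    return (attacker - honest) / rounds

# Below 50%, the attacker's chain loses ground; above it, it wins.
print(attacker_lead(0.30))  # roughly -0.40: the attacker's fork dies
print(attacker_lead(0.51))  # roughly +0.02: the attacker slowly outpaces everyone
```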
Ethan Zuckerman:
So first of all, I just want to note this idea of the confidence machine. You are suggesting that the ways in which transactions are cryptographically signed and checked ensure that an invalid transaction posted to the blockchain doesn’t remain for very long, and that it’s the valid transactions that stick there. That is a confidence machine. You see confidence machines as a positive in this framing, yes?
Primavera de Filippi:
Yes, exactly. So when you say something is trustless, you’re not saying much, except that it does not require trust. But the question then is, what does it do? What is it about? Is it trustless, and then what? And so the positive aspect, what it is actually contributing, is increasing the degree of confidence in the system. And you can look at trust and confidence like this: you have a bucket, and you need the bucket to be sufficiently full in order to be willing to interact with a particular system. You can fill it with trust, or you can fill it with confidence, or with some trust and some confidence. And the more trust you have, the less confidence you need, and the more confidence you have, the less trust you need.
But you still need to reach a particular level in that bucket in order to be willing to interact with the system. And so if you have a bucket and you just remove trust completely, you are still not going to want to interact with that system unless you actually add confidence to compensate. Once I have sufficient reliability and predictability, or I feel that I can predict enough that I’m willing to engage in this uncertain activity, then I’m engaging in a relationship of trust. When I don’t even question the uncertainty, because I assume that things will work exactly as expected, then I’m entering into a confidence relationship. And the thing is that there is no such system that is only one or the other. It’s always a combination of the two. But some systems have more requirement for trust, as in the case of central banks, and some provide higher degrees of confidence. You cannot completely eliminate the trust, though; the trust is just underneath, at the governance of the underlying network.
Ethan Zuckerman:
I want to ask, what about trust in the code? It sounds like there’s also the possibility that maybe not the core transaction code, but the wallet code, some of these other systems that people end up dealing with, could be compromised, and that could have severely negative implications. Are people putting too much trust in technological systems that most of them are not qualified to evaluate?
Primavera de Filippi:
Yeah, I mean, in that case, I wouldn’t even say it’s trusting the code, it’s trusting the people that are auditing the code, which is still people. But yeah, there have been, and I think there will be, a lot of cases in which you have smart contract code which is actually vulnerable, flawed in various ways, and you felt you could be confident in the way it works, and then all of a sudden, everything is broken. I would say that, at least until now, empirically and statistically, most of the problems are usually based on the front end. It’s the interface, it’s the web interface to the blockchain, that is actually flawed.
There have been very few cases in which the smart contract code itself was flawed. Of course there have been some, and they were quite dramatic, and probably there will be many more in the future, but that is, of course, a problem of auditing, in some way. I don’t think it has ever really been a hack in the sense of a real hack. It’s really just a very stupid vulnerability that someone didn’t pay attention to. And the problem is that, as opposed to centralized software, where as soon as you find the vulnerability you can fix it, if you find a vulnerability in a smart contract, there’s no way to fix it. So you’re pretty much stuck with that vulnerability, which requires even more trust in the auditors and in the developers.
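A concrete example of the kind of “stupid vulnerability” she describes is reentrancy, widely cited as the bug behind the 2016 hack of The DAO. Below is a hedged Python simulation of the pattern; real contracts are written in languages like Solidity, and the classes and names here are invented for illustration. The vault pays out before updating its books, so a malicious recipient can call back in and get paid repeatedly.

```python
class VulnerableVault:
    """Simulated contract that sends funds BEFORE updating balances."""
    def __init__(self):
        self.balances = {}
        self.total = 0

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.total += amount

    def withdraw(self, who, receive_callback):
        amount = self.balances.get(who, 0)
        if amount > 0 and self.total >= amount:
            receive_callback(amount)   # BUG: external call happens first...
            self.balances[who] = 0     # ...the books are updated only after.
            self.total -= amount
            # The fix is to zero the balance before paying out; but once a
            # flawed contract is deployed on-chain, it can't be patched.

vault = VulnerableVault()
vault.deposit("honest_user", 10)
vault.deposit("attacker", 1)

stolen, depth = 0, 0
def reenter(amount):
    # The attacker's receive hook calls withdraw() again before the
    # vault has zeroed the attacker's balance.
    global stolen, depth
    stolen += amount
    depth += 1
    if depth < 5:
        vault.withdraw("attacker", reenter)

vault.withdraw("attacker", reenter)
print(stolen)  # 5: deposited 1, but paid out 5 times before the balance hit 0
```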
Ethan Zuckerman:
This really seems like the kind of system that software developers would know not to trust. I think anyone who’s ever developed software believes that everything has a bug, and that when you’re convinced that it doesn’t have a bug, that’s when you’re most likely to have a bug. Were we simply wrong to persuade ourselves that smart contracts were something that could be relied on? Are smart contracts ultimately a good idea?
Primavera de Filippi:
I mean, good or bad idea, I don’t know. They are a very interesting type of application. I think if you make a simple smart contract by now, you might be able to be fairly confident in the fact that it works. And I think auditors are also getting better, because as you start knowing the bugs, then you start also recognizing them. I think it’s the same as the early days of e-commerce, which was flawed everywhere with vulnerabilities, because people still didn’t really have a clue about what could be done or not. And I think now we’ve reached a point in which, yes, of course there are still some hacks, but those are very marginal, because we’ve pretty much figured them out. All those bugs have been found, and now you know how to protect your contract.
So I think many applications today are actually quite decent and resilient. And again, most of the problems usually come from the user interface or from centralization. And I think the interesting thing that we’ve seen recently with DeFi, where there have been quite a few issues emerging, is that actually most of them were not caused by the smart contract or by the blockchain, but by interfaces that were not properly coded, or by the fact that there was actually a centralized entity in the middle which was being trusted and should not have been trusted. So in some way, of course, we will never be able to eliminate the bugs in smart contracts. And I think that’s also why now a large majority of smart contracts are upgradeable, because people know either that there might be issues, or simply, it’s not even a matter of issues, but a matter of knowing that contingencies will change, and that there is no system that works today that will still be the best system tomorrow.
Now again, when you add upgradeability to a smart contract, you are reducing the confidence, because now you cannot be confident that it’ll always work exactly as planned, because you don’t know how it’s going to be upgraded, and who has the power to upgrade it is also an important question. So again, there’s always this very interesting interplay. Actually, if you have a system that is completely immutable, it’s probably a very bad system, because what about states of exception? What about changes in the environment which might require changes in the code? But the more flexibility you want to inject into the contract, the more you’re moving away from confidence and into trust. And the problem is that you do actually want something that is flexible, and that’s why you cannot completely eliminate trust. It’s always about finding the right mixture, the right cocktail, of trust and confidence, and the degrees that you want of each, given the particular application that you want.
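Upgradeability is commonly implemented with a proxy pattern: a fixed entry point that forwards every call to whichever implementation an admin key currently points at. This hypothetical Python sketch (the classes and fee numbers are invented for illustration) shows why “who has the power to upgrade it” is the crucial question: the same address can start enforcing new rules overnight.

```python
class ProxyContract:
    """Sketch of a proxy-style upgradeable contract: the proxy's identity
    stays fixed, but its behavior is whatever the current implementation
    says, and only the admin can change that."""
    def __init__(self, admin, implementation):
        self.admin = admin
        self.implementation = implementation

    def upgrade(self, caller, new_implementation):
        # Upgradeability reintroduces trust: users must trust the admin.
        if caller != self.admin:
            raise PermissionError("only the admin can upgrade")
        self.implementation = new_implementation

    def call(self, method, *args):
        # Every call is forwarded to the current implementation.
        return getattr(self.implementation, method)(*args)

class V1:
    def fee(self, amount):
        return amount * 0.01  # the 1% fee users signed up for

class V2:
    def fee(self, amount):
        return amount * 0.50  # a hostile upgrade users can't opt out of

proxy = ProxyContract(admin="alice", implementation=V1())
print(proxy.call("fee", 100))  # 1.0
proxy.upgrade("alice", V2())
print(proxy.call("fee", 100))  # 50.0: same contract address, new rules
```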
Ethan Zuckerman:
Primavera has lately been writing on something that might sound kind of strange coming from a legal scholar: a concept she refers to as the alegality of smart contracts.
Mike Sugarman:
So if legality is all the stuff the law covers – traffic violations, building codes, violent crimes, food safety standards – then alegality would be anything that falls outside of the scope of the law.
Ethan Zuckerman:
Exactly. And the thing about smart contracts is that they’re supposed to just run forever once they’re implemented. A smart contract is really hard to take legal action against because, in theory, there’s no one person or corporation responsible for it, but instead an entire organization. It’s an organization that you can’t locate in a specific place, because it’s decentralized, and that you can’t really influence, because it’s autonomous.
Mike Sugarman:
A decentralized autonomous organization. A DAO. Have there been any attempts to apply the law to a rogue smart contract?
Ethan Zuckerman:
So, I asked Primavera to walk us through one situation: a piece of software called Tornado Cash, which is operated by several smart contracts and has recently been sanctioned by the US Department of the Treasury’s Office of Foreign Assets Control.
Primavera de Filippi:
Tornado Cash is a mixer. So it’s a fairly popular smart contract: you can send ether to it, it anonymizes it, and then you can receive back your ETH, and it’s practically impossible to find out where that ETH comes from. So, especially given a blockchain which is very traceable, it’s a way to protect your financial privacy by anonymizing your transactions. Of course, it’s quite obviously also a tool that might be used for money laundering, by terrorist organizations and whatnot. But that’s not a company. This is an actual smart contract. This is a pure technical device. And a few weeks ago, OFAC actually added to the list of sanctioned entities a list of smart contract addresses which relate to Tornado Cash, meaning that all of a sudden, it is now criminal for a US person to interact with Tornado Cash. And it’s very fascinating. It’s a fascinating case, because it is the first time that sanctions have been applied to a non-legal entity, to a technological thing.
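For intuition only, here is a drastically simplified sketch of the mixer idea: everyone deposits the same fixed amount against a hash commitment and withdraws to a fresh address, so amounts and addresses can’t be matched up on the ledger. The real Tornado Cash uses zero-knowledge proofs so that not even the contract learns which deposit a withdrawal corresponds to; this naive version would leak that link the moment the note is revealed.

```python
import hashlib
import secrets

class ToyMixer:
    DENOMINATION = 1  # fixed-size deposits, so amounts can't link parties

    def __init__(self):
        self.commitments = set()

    def deposit(self) -> bytes:
        # The depositor keeps the secret note; only its hash goes on-chain.
        note = secrets.token_bytes(32)
        self.commitments.add(hashlib.sha256(note).hexdigest())
        return note

    def withdraw(self, note: bytes, fresh_address: str) -> bool:
        # NAIVE: revealing the note lets an observer match it to the
        # deposit. Tornado Cash instead proves knowledge of a valid note
        # in zero knowledge, never revealing which one.
        digest = hashlib.sha256(note).hexdigest()
        if digest not in self.commitments:
            return False
        self.commitments.remove(digest)
        print(f"pay {self.DENOMINATION} ETH to {fresh_address}")
        return True

mixer = ToyMixer()
note = mixer.deposit()           # deposit from a known address
mixer.withdraw(note, "0xFRESH")  # later, withdraw to an unlinked one
```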
Ethan Zuckerman:
And my understanding of this is the US government asserts that Tornado Cash has been used by, amongst others, the Lazarus Group, which is associated with North Korea, allegedly to launder significant amounts of stolen funds?
Primavera de Filippi:
Exactly.
Ethan Zuckerman:
But what you’re saying is that the sanctions against Tornado Cash and against Blender.io essentially are preventing US persons who might have a legitimate reason for using these services from doing so, and they cannot actually harm the contract. Is there anyone who’s benefiting from the smart contract? Is there someone who takes something off the top for running Tornado Cash?
Primavera de Filippi:
I mean, there are the Tornado token holders, which is a lot of people all over the world, who benefit partially from the operation of Tornado, but they’re not operating it, they’re not administrators; they are more, I guess, dividend shareholders, if we need to find an analogy. And that’s different with Blender, because Blender is actually operated as a company, whereas Tornado Cash is not. There is no company, there is no administrator, and there is no operator. So again, it’s different when you sanction a company; it makes sense, because that’s what sanctions are usually used for. Whereas here, you are literally sanctioning a technical thing.
And this is very interesting with regard to the alegality question. It is fascinating, and it also shows how, all of a sudden, while it is said that a government cannot influence or stop Tornado Cash (no one can stop the operation of Tornado Cash, certainly not one government), maybe you don’t need to, because you just need to sanction it. If you sanction it, all of a sudden it becomes criminal to interact with that smart contract. Then even if you don’t stop it, and other countries might still use it, at least you have literally banned it from your home jurisdiction. And if many countries do the same, then it is banned.
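In practice, that ban is enforced at the edges rather than inside the contract: exchanges, wallets, and website front-ends screen counterparties against OFAC’s published address list. A hypothetical sketch of that kind of compliance check follows; the addresses are placeholders, not real SDN entries.

```python
# Hypothetical screening of the kind a US exchange or wallet front-end
# might run; the addresses below are placeholders, not actual OFAC
# SDN list entries.
SANCTIONED_ADDRESSES = {
    "0xSANCTIONED_MIXER_CONTRACT_1",
    "0xSANCTIONED_MIXER_CONTRACT_2",
}

def screen_transaction(sender: str, recipient: str) -> None:
    # The contract itself keeps running; what changes is that regulated
    # intermediaries refuse to process transactions that touch it.
    for party in (sender, recipient):
        if party in SANCTIONED_ADDRESSES:
            raise PermissionError(f"blocked: {party} is on the sanctions list")

screen_transaction("0xALICE", "0xBOB")  # fine
try:
    screen_transaction("0xALICE", "0xSANCTIONED_MIXER_CONTRACT_1")
except PermissionError as e:
    print(e)  # the ban works by cutting off access, not by stopping the code
```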
Ethan Zuckerman:
So you don’t actually need to stop it. You might stop enough people from using it that it functionally becomes obsolete or unused by virtue of that fact. Okay. All right.
Primavera de Filippi:
I mean, assuming that you have enough governments. I think the fact that the US government has sanctioned Tornado Cash will probably not kill Tornado Cash at all. But it might scare people away, because at the same time, you never know. Yesterday you had people, US persons, who had sent money to Tornado Cash and now cannot take it out. So you might think, “Okay, well, the US has done it; other countries are going to also intervene.” And so people become a bit scared of putting money into Tornado, because they never know if they will be able to take it out if their government decides to also create some regulation about this.
Mike Sugarman:
So Ethan, what do you make of all of this? It almost sounds like Tornado Cash doesn’t even engage questions of trust, but questions of accountability. But obviously the people who use the platform have to trust that it works, and that they won’t get penalized by the US government for using it.
Ethan Zuckerman:
Well, there are a few implications. If you’re a user, you’re trusting Tornado Cash to tumble your coins correctly, so that there’s no trace of you making some future transaction. But, there’s no there there if something goes wrong.
Maybe the key to retaining trust is to ensure you cannot become an institution that loses trust? Or maybe the point of institutions is that they have to be something we can appeal to when something goes wrong. The sheer alegality of Tornado makes it scary as hell. In Tornado Cash’s ideal world, the software is institution-proof.
Mike Sugarman:
Trust online is getting so tied up with crypto that we’re going to dig deeper into blockchain in the next couple of episodes. We’re talking to Finn Brunton, a historian of cryptocurrency, in our next episode, alongside friend of the podcast Molly White, about some cases where crypto looks like it should act like a bank but in truth acts really differently.
Ethan Zuckerman:
Thanks for joining us for the first episode of Trust, and we’re really excited about what we’ve got coming up.