Dan Saltman is the founder of Redact, one of the very few tools that lets you delete data across many of your social media accounts. So why are there so few projects that put users in control of their data, like Dan’s? This week on Reimagining, we talk about the legal, technical, and market obstacles to building tools that give users power over platforms.
Dan Saltman is the founder of Redact and host of the podcast Anything Else?
On this episode, Ethan mentions Tracy Chou, who we’ve had on the show twice: episodes 39 and 82. Ethan and Dan also discuss Unfollow Everything, which is the basis of Ethan’s lawsuit against Meta.
Lab scholar Isaac Brickman also recently published a comprehensive survey of the current middleware landscape right here on our site.
Transcription
Ethan Zuckerman:
Hey everybody. Welcome back to Reimagining the Internet. I’m Ethan Zuckerman, your erstwhile host.
I am here today with Dan Saltman, serial entrepreneur, founder of Redact. And we’re going to be talking about the wonderful world of middleware. Dan, it’s wonderful to be with you today.
Dan Saltman:
It’s fantastic to be here. And yeah, I’m just hoping I can have something to add to the conversation.
Ethan Zuckerman:
Well, let’s start with what you’re doing now. You have a company called Redact, which fills a really interesting niche in the social media space. Talk to us about what Redact does and how you ended up coming up with the idea.
Dan Saltman:
So Redact is a locally installed software suite that lets users manage and delete their content across many different platforms. It works kind of like an email program: if you go in there, click select all, and click delete, you've told your email program to go and delete stuff. We've extended that functionality to anything you can imagine as far as social networks go: mass deleting your posts on various social networking sites, or deleting instant messages.
Skype, for instance, is a great example; it's actually why I created this.
A few years ago, I hadn't used Skype for a while, because most people had transitioned off it. We'd moved to other options, whether Discord or Telegram or whatever.
But I had to log in because I was dealing with some overseas contractor and I was shocked to find that all of my messages that I’d ever typed were still there. And to me, this was insane because I was thinking if someone managed to get into my account for whatever reason, you know, it could be an evil Microsoft employee that just reset the password himself or it could be someone that, you know, managed to guess the password by sheer luck of a million monkeys with a million typewriters, doesn’t matter. If they get into my account, they can now read every message that I ever sent on Skype.
Ethan Zuckerman:
We find ourselves leaving digital traces. We sort of assume that they are brief and fleeting. In many cases, they end up being a permanent record in one fashion or another. And you had this real direct experience with this coming up on Skype. How did that turn from that sort of moment into a product? Like how many other people find themselves thinking, you know, I really want to clean up those digital traces. Why is that enough of an idea that you’ve been able to build a business around it?
Dan Saltman:
So the danger is real. And I realized it back then. I think originally, when I saw that, I was scared because of how much confidential information was in there. I've worked with public companies before, and we would talk over Skype. That was kind of the norm. It wasn't anything unusual.
So I was thinking about all of the passwords and usernames and things like that that were there. And I was like, okay, well, this kind of sucks, because I don't want to go and delete my Skype account. I still might need it in the future. But what are my options here? You want me to manually click delete 500,000 times, on every message I've ever sent? That's unreasonable.
And beyond my own initial desire to do that for my own security, we've found that a lot of people realize there's not much good or benefit in keeping your data out there for an extended period of time. It really only harms you.
There's a joke in our company that if someone is going through your tweets from two years ago, it's not for any good reason. They're not there because they're just so interested in your stuff. They're looking for something that can be used to hack you, something that can be held against you, something that can be used to exploit you.
Ethan Zuckerman:
So we’ve just constructed a couple of scenarios that make perfect sense, right? So one scenario is a legal scenario where you’re someone who works with sensitive information. A lot of us end up working with sensitive information even if it’s in the space of doing counseling or helping people with personal issues. There’s stuff that we might not want saved forever.
In an earlier conversation, Dan, you were talking a little bit about the scenario of someone who goes from being not very visible to highly visible, perhaps an athlete who is drafted onto a major league sports team. Can you go through that scenario real quick?
Dan Saltman:
Sure. I think that there's this major perspective shift between when you are a public creator or influencer and when you are a private person.
Take your neighbor, who doesn't really have much of a presence besides a Facebook account where he cheers on a sports team. Over the years, he may have said things, very personal opinions, that were not intended for public consumption. They were intended for his inner circle, if you will: his friends, his family, the people he follows on Facebook or Twitter, that specific group of people.
And all of a sudden, this guy is thrust into the limelight because now he's a professional sports player, and he never knew that was going to happen. And now all of these comments and things that he said are going to be looked at in a different light.
So if you went in there and said, "I hate Bernie Sanders, he's destroying everything," or "I hate Trump," or "I hate Biden," you were saying that to a very specific group. And now it's, "Hold on guys, wait a second, listen, that was just me talking to my friends. I don't actually hate any politicians."
And of course it can get more extreme than that, or less.
Ethan Zuckerman:
So, okay, so lots of situations we can imagine where it might make sense to clean up one’s social media history, to get rid of one’s instant messages on a platform like Skype. Lots of situations where we can imagine needing this functionality and you just described a scenario that is very difficult and very awkward, which is going into Skype and manually deleting all of these messages.
Why doesn’t Skype provide that functionality? Why doesn’t Skype have a big button that says, clear my message history?
Dan Saltman:
So there's a few reasons for this. I mean, Skype specifically is kind of an abandoned product, so it's not the best example, but we can use more modern platforms that are in current use to get an idea.
Take Reddit, for instance. They've made a decision that when you click delete on your account, it doesn't delete all of your old comments and posts. Instead, it just removes your username from those things.
But in a lot of cases, that doesn't solve the problem. If you're very active in a specific subreddit, it's going to be very easy to tell who you were. So it's not really removing your data. And then there are the replies: even if the comment now shows "deleted" as the author, if the person replying to it says, "Dan, stop talking. You don't know what you're going on about," well, now you know that Dan wrote the post.
So as to why companies don't do this, I think there are a few reasons. The first is that data inherently has value to a lot of these companies. For someone like Reddit, the value is that the more words on the page, the more likely it is that Google will send someone there for something a user has said. So there's that.
The second is that it's a better user experience for people who come to the website. This is kind of a joke in our company, that we're ruining the internet a little bit.
One of the most annoying things is when you search for a computer problem, like, oh, error 315 is happening here, and you think, oh my God, I found it. Google gives you the result, and it's "error 315, how do I fix this?" And then at the top of the thread it says "this post has been removed by Redact," and the reply right under it says, "thank you so much, this was the solution, thank you."
And obviously that is a horrible experience. But at the end of the day, we had to make the decision that a user's privacy trumps someone else's convenience. So yes, it sucks that you wanted to get this answer, and someone had the answer and put it there, and now it's gone. And you're like, where was it? Please give it to me. But the user's privacy has to always come first.
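To make the mass-deletion mechanics concrete, here is a minimal sketch in Python using PRAW, a third-party Reddit API wrapper. The credentials are placeholders, and the overwrite-before-delete step is an illustrative convention for the problem Dan describes (Reddit's delete only dissociates your username); this is not Redact's actual implementation, which drives a browser rather than the API.

```python
# Minimal sketch: bulk-delete your own Reddit comments with PRAW.
# All credentials below are hypothetical placeholders.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_USERNAME",
    password="YOUR_PASSWORD",
    user_agent="bulk-delete-sketch/0.1",
)

for comment in reddit.user.me().comments.new(limit=None):
    # Overwrite the body first: Reddit's own "delete" removes the username
    # but leaves the text, so replies quoting you can still identify you.
    comment.edit(body="[removed by author]")
    comment.delete()
```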
Ethan Zuckerman:
So there's a funny bit of history on this. The WELL, legendarily the site of so much writing and thinking about internet community, basically had a norm where you couldn't delete your words, because they were part of other people's conversations. So you could stop using the WELL, but your words were going to stay behind.
I think I'm on your side on this. I agree with the idea that people change, and there are things we've said in the past that we'd like to take back. And I'd like to go somewhere with that.
One of the ways that I’ve started thinking about middleware is essentially that it’s a feature request. So you can imagine a world in which your response to needing to manually delete, you know, half a million Skype messages was to find a way to sort of put a feature request, you know, in a queue somewhere with the hope that a programmer was eventually going to get to it. Now, it’s not a particularly satisfying way to do it because we know that feature requests don’t always get honored.
But there is this interesting way in which middleware is a signal. It’s a signal to a platform that at least some subset of users would like to use your tool in this particular specific way. And then there’s this really interesting question, which is maybe the platform’s not going to get around to doing it, maybe the platform doesn’t want to do it.
I often use Tracy Chou's Block Party as an example of this. Tracy was facing quite severe harassment on Twitter. She really needed blocking to have much fuller functionality than Twitter strictly wanted it to have. And the truth is, maybe only 1%, 0.1% of people needed the functionality that Tracy created. So maybe it's the kind of feature request where Twitter would look at it and say, well, we're not going to bother doing that. Most people don't need that. It's a little too powerful.
Deleting all of your past messages feels well within the realm of something sites might want to offer. Do you think... I mean, obviously you think that we have a right to add these features.
How do you think about that dialogue with the platform? If the platform can look at your software and say, we didn't want to allow all this mass deletion, and Redact is allowing it, how does that play out from the platform's point of view?
Dan Saltman:
So it’s complicated. So middleware in general is bound by the law and there are things that are okay and are spelled out as a matter of public policy. And then there are things that are not okay.
Let's take a video game, for instance. Okay. I want to have an unfair advantage against you, to see through walls and have my bullets automatically go to your head, so that I always win every game. Technically, that's a middleware application. It's a feature request: something I want so that I can be better than everyone else, so I can cheat.
And we as a society have decided that that's not good. Right? We don't want cheaters in video games, because it ruins the experience for everyone else. And at this point in time, there's no legal standing for people to create cheats. So even though that middleware is a feature request, someone saying, you know, I want this, we've still decided that doesn't matter. You're not allowed to do that.
So I do say that people who create apps have some authority over the way their content is accessed, and through what mechanism. That said, there are some exceptions.
So I gave you an example of a bad thing that we don't want to have happen. But what's a good thing that we do want to have happen?
We want our antivirus program to be able to get rid of viruses on our computer, okay? So imagine a situation where I've made a really cool video game.
And in the terms of service, I say: because we offer this for free, we reserve the right to download all of your browser history, all of your usernames and passwords, and all that. That's why this is free. And it's legitimate: they put it in there, it's clickwrap, and they make it apparent why this is the case.
It would still be acceptable for an antivirus program to scan that game and say, oh, this is a virus, we're removing this from your computer, even though the terms disclosed it. So just because someone has put something in a terms of service saying something isn't allowed, in certain cases, as a matter of public policy, we have said that doesn't matter.
Ethan Zuckerman:
So before we plunge into 230 (I know it's a section of law that you enjoy talking about; it's a section of law that I enjoy talking about too), I want to stay more on the conceptual level. And I'm interested in this idea that companies cannot necessarily do exactly what they want to do and have us agree to all those terms of service.
So you just gave a theoretical version of this. Here’s this great game. Your price for playing this great game is you’re going to give us enormous amounts of information, including information that could be used to compromise your account, so on and so forth. You might well have an agent acting on your behalf, an antivirus software that flags that and essentially says, look, don’t get into this.
A much simpler one, the much-dreaded pop-up ad, right? Any website would like to serve you a pop-up ad. And they want to serve you a pop-up ad because they have a higher click-through rate than a banner ad. They can probably charge more money for them. They’re a good way to get people’s attention. And the flip side is we hate them and the bastard who created them. That’s a joke, Dan, I’m the bastard.
Dan Saltman:
I know you created that. I’m old enough to know all this lore.
Ethan Zuckerman:
However, almost all of our web browsers at this point have software in them where we can click a setting that essentially says, I'm going to refuse pop-ups. And for the most part, you don't ever see them. So the server wants to give you something, but you, on the browser side, end up saying, no thank you, I'm not going to have that. We can extend that a little further to ad blocking.
Ad blocking has been litigated to death in the EU, in the United States. Axel Springer hired an army of lawyers to try to make the case that ad blocking is not something people can do. For the most part, courts have decided, no, you are free to go ahead and put this extension in your browser and have control over it, even if it is very much not what the platforms want you to do.
And to be perfectly clear, in some cases it is quite harmful to the platforms; it is taking revenue away from them. So we have found, in a number of circumstances, that so long as it's happening in the browser, we have quite a bit of opportunity to shape our user experience. Where this seems to get trickier is when we start interfacing with the server.
So you potentially have this problem with Redact: you are potentially going to Reddit and hitting delete on a post thousands of times. I potentially have this problem with Unfollow Everything: I am potentially going to Facebook and unfollowing each of my friends, one by one.
This is all something that’s completely legal for us to do on these services, but we are tripping over this moment of automation. Is automation the issue, or is the issue really that we’re essentially renegotiating the contract between the service and the user?
Dan Saltman:
So the issue is that Section 230 is very old, and it's very short. There are only a few specific things it specifically allows to happen, and they apply in almost every case, with a few outliers: intellectual property claims are not covered, for instance, but that never really comes up.
The issue I see is that it was written so long ago, it should be brought into the modern age. I think there should probably be certain services and functions that are treated differently than others.
Right now, Section 230 treats everything the same, for the most part: whether you're a video game, whether you're Facebook, whether you're an instant messaging program or an email service or just a website about cats, all of it is the same.
So I think Section 230 should probably be expanded to have more specific rules for specific categories of websites and services. I personally think it should be adapted for social networking, for instant messaging, for anything for communication, for a place where people go, with a list of rules to determine which category a service falls into. And in those cases, you could expand protections, but also rein in things like misinformation.
So I don't know. It's a challenge: where does 230 go, when right now it's so limited in the protections it has? And as I said, there are only a few things carved out, right?
Ethan Zuckerman:
I mean, the truth of almost every law is that we can read the source code, but it doesn't get compiled and tested until it makes its way into court. And there's certainly been an enormous amount of effort applied around 230(c)(1), the famous 26 words that created the internet.
You and I, in particular, are very interested in 230(b). And this is a whole chunk of language that essentially says it's the policy of the United States to preserve a vibrant and competitive free market, and to encourage the development of technologies which maximize user control over what information is received by individuals, families, and schools who use the internet and other interactive computer services.
That control over filtering and blocking certainly leaves me feeling very comfortable about what I’m trying to do with Unfollow. It feels quite comfortable to be able to say, I don’t want to be interacting with a lot of junk that a business is putting into my feed. I want to be able to interact with a very limited subset of things. I don’t think it requires a huge stretch to make this work for Redact and to essentially say, I want to have control over my information. I have changed my mind about what I want to share. I want some utility about it.
I agree with you. 230 does not have good language about automated access. It does not have good language about whether platforms shall, you know, provide a good API to support this.
Dan Saltman:
What I'll say about Section 230(c)(2)(B) is that it's important that it exists. And every company knows that, because every company, large and small (well, for the most part not the small ones, but the large ones), has security teams, and those security teams' job is constantly analyzing threats and software. And when they're doing that, they're not respecting the terms of service of whatever they're analyzing, right?
So if I think that dnscatsite.com is hosting malware and I'm on the operations team at Reddit, what do I get to do? Do I get to ignore their terms of service because I'm trying to see if they're hosting malware? Do I get to poke around and look for holes and things like that? And the answer that we kind of have right now is yes, because you're acting as, essentially, an antivirus program, or an extension of one, filtering out bad content.
So the big companies will do this themselves. They will go and ignore other companies' terms of service as a result. But essentially, the challenge of all of this is that there are features you can have and features you can't have, and you have to walk a very tight rope to make sure that what you're doing has public policy protection.
I think the state of Section 230 is very challenging. But the one thing you can rely on is the fact that everyone hates spam.
And that's a great thing to point to. Oh, sorry, do you do any spam filtering on your site? Oh, you do? That's weird. So you're saying that you filter stuff from people, decide you don't like it without any input from me, and delete it? Cool, cool. That seems a little bit weird. I didn't give you authorization to do that. Why are you doing that? That's interesting, right? And no one objects, because spam is this universal thing that everyone hates.
So filtering is in an interesting spot, with respect to what you're trying to do there and everything else.
Ethan Zuckerman:
There’s an interesting historical precedent sort of on the spam argument and it involves going far back in time. It involves going pre-web and getting to the world of Usenet, the sort of text-based bulletin board that a lot of the geeky conversations were happening on in the 1980s and early 1990s.
This is a conversation that's basically being stored and forwarded. You're posting to one computer, and it's sending copies of your message everywhere else. And it would seem that in a network like that, the easiest thing to do would be to block spam messages at the server level.
Usenet for a whole bunch of reasons decided to put very powerful tools at the end user level. And so you would have things that were pretty obviously spam, not things that people wanted. The classic example of this is a guy who wrote a computer program that every time the word “Turkey” was mentioned, would post a screed denying the Armenian genocide.
And maybe this would be appropriate in context, but it would show up in articles about what to do with Thanksgiving turkey leftovers and you would get Armenian genocide screeds. And pretty much everyone would say, can we please just get rid of this guy? This is terrible, this is not helpful.
Usenet’s approach was to say, look, we’ll show you how to use the kill filter. We’ll show you how to get rid of this person. We’re not going to do any central filtering, but we’re going to put amazing tools at the edge.
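As a flavor of what those edge tools did, here is a minimal sketch of a kill filter in Python. It is written as plain code rather than in any real newsreader's killfile syntax; the Article type and the killfile format are invented for illustration.

```python
# Minimal sketch: an edge-side "kill filter" in the Usenet spirit.
# Each line of the (hypothetical) killfile is a regex; any article whose
# sender or subject matches gets dropped before the user ever sees it.
import re
from dataclasses import dataclass


@dataclass
class Article:
    sender: str
    subject: str
    body: str


def load_killfile(path: str) -> list:
    """Read one regex per non-empty line of a user-maintained killfile."""
    with open(path) as f:
        return [re.compile(line.strip(), re.IGNORECASE)
                for line in f if line.strip()]


def visible(articles, kill_patterns):
    """Yield only the articles the user's own killfile lets through."""
    for article in articles:
        if any(p.search(article.sender) or p.search(article.subject)
               for p in kill_patterns):
            continue  # killed at the edge; no central filtering involved
        yield article
```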
My point would be that you have to have those tools somewhere, and having the ability to have them both potentially in the center and at the edge seems fairer than assuming that it's only the server that has the ability to filter or block. Obviously, servers are going to filter and block. Although maybe not obviously, right?
We've had the cases in both Texas and Florida where the state attorneys general have come in and essentially said, if you are filtering or blocking with a political bias associated with it, we're going to stop you. And the Supreme Court so far seems inclined to say no, that's a chaotic step that we do not want to take.
But another solution to these quite complex first amendment problems is to try to put more control into the user’s hands.
Dan Saltman:
Well, you know, there is some of this. The problem is that as these companies advance and become more and more valuable, they want to push themselves into more of a walled garden.
But on some of these things, it's still the developer's choice, in my mind, how they want their program to be used under the current law. Now, that said, I do think the law should be amended. But unfortunately, under the current law, I think a developer has great leeway to say what can or can't be done.
One of the things they can't waive away, though, is your right to filter that data. That's it. For instance, if I really like Craigslist but I'm thinking, why does this site look like garbage? Okay, I don't get to go create my own Craigslist app and put it on the app store, because it's specifically targeting Craigslist. Do I get to make my own browser that presents every website in a new UI? That, I think, you could possibly get away with. But when you specifically create a piece of software that gives an alternative front end for Craigslist, it becomes much more challenging in today's legal landscape, until we have more guidance and more updates to what 230 is or is not.
And currently, no one is finding in your favor. These guys are all losing their cases, unfortunately.
Ethan Zuckerman:
I mean, arguably that's more or less what PadMapper does, right? PadMapper is this real estate finding site that scrapes Craigslist and a number of other sources and provides a very straightforward graphical interface that would be very, very hard to get out of Craigslist itself, but turns out to be extremely helpful for people. And thus far...
Dan Saltman:
Well, see, I don't know if that's something that they have done. I don't want to say. That's also a very different case, because we're talking about scraping law here, which is very ambiguous about what is legal versus what isn't. And they might just have a relationship with Craigslist and Zillow and Trulia and all these other guys, saying, hey, listen, we're going to scrape all your data and put it in one spot, but we'll send traffic back to you with this code so you can track it, and you'll get access to all our data, and that's why it's allowed. I have no idea.
But scraping law is almost not worth talking about, because the rulings are probably the most all over the place, the most bipolar, I guess is the best way of saying it, of any area: you'll constantly have circuit splits over what is legal versus not, even the Supreme Court reversing a lower court and the lower court then saying, this is fine. There are a lot of very crazy cases that come with it. Scraping is always going to be a very dangerous thing to do, in my mind, on that front.
Ethan Zuckerman:
So, it will not surprise you that we spend a lot of time scraping for research purposes. It’s an enormous amount of what many academics who study social media do and we do huge amounts of that around YouTube and TikTok.
There is some indication in Sandvig v. Barr that scraping is not by itself inherently a violation of terms of service, or rather, that violating the terms of service in order to scrape is not necessarily a violation of the CFAA.
Dan Saltman:
I've heard that this is the new thing, that scraping stuff that's publicly available is now okay, or something along those lines. But it's always one, you know, Texas ruling away, in whatever specific district they like, from no longer being the case. They bring a case against you there, and you lose, and that's it.
So circuit splits are kind of the bane of this. But yes, I'm familiar with the idea that you should be able to do something with publicly accessible data. But then comes the question: what can you do with that data? Can you package and resell it? Can you make inferences off of it? Or can you just have it? Those are the unknowns that change a lot of the outcomes as well. Scraping LinkedIn might be legal, but then taking that information and doing something with it might be illegal, if that makes sense.
Ethan Zuckerman:
Are there things that you want to do with Redact, or with other middleware projects that you've thought about, that you haven't done or are reluctant to do because of legal uncertainty?
Dan Saltman:
Well, a long time ago, there was a tool called Power.com, from Power Ventures. And the idea was very simple. It was just one piece of software that let you bring all of your stuff into one place, so you didn't have to check 10 sites; you just had this one site. And there have been a lot of other people who have done it over the years. Some just shut down on their own, and some didn't.
But as far as Redact goes, to date we've been focused exclusively on filtering and deletion of data.
There are things I think would be great to offer as far as analytical information. And maybe there are also social things. For instance, on Reddit, it might be cool if, whenever someone messages you, you could automatically respond. Some people might find that useful as a tool.
But would we build that? We might want to, but we can't, because that falls outside the scope of the very limited understanding of what 230(c)(2)(B) covers today. And there's so little case law on (c)(2)(B) that we have to be extra careful with that.
So that would be a cool feature to have. It’s a social media power feature, right? Oh, you can use Redact and now you can easily go and do this.
So that's a challenging part. I think there's a lot of functionality there, because we are a browser at the end of the day. So we can do anything a browser can do. We are acting on the user's behalf. It's not an automated request; we're simply a browser doing exactly what the user has told us to do. But we play it very safe.
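A minimal sketch of that "we are literally a browser" approach, using the Playwright automation library in Python to drive Chromium. The site URL and CSS selectors here are hypothetical, and this is not Redact's actual code; it just illustrates a real browser performing the same clicks the user would make by hand, on the user's instruction.

```python
# Minimal sketch: drive a real Chromium browser to click "delete" on each
# message, the way a user would by hand. Site URL and selectors are made up.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)  # a real, visible browser
    page = browser.new_page()
    page.goto("https://social.example/messages")  # hypothetical site
    page.wait_for_selector("button.delete-message")  # user has logged in

    # Click delete buttons one at a time, re-querying after each click so
    # we never hold a stale handle to a row that has already disappeared.
    while True:
        button = page.query_selector("button.delete-message")
        if button is None:
            break
        button.click()
        page.wait_for_timeout(250)  # give the page a beat to update

    browser.close()
```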
We can't even really know what is specifically allowed versus not. Some sites, for instance Microsoft and Skype, might have hundreds of different terms of service, depending on what device you're on and what country you're in. And it's not feasible or possible for us to review each one.
So the only thing that we can say is, if you’re using our software, you agree that you’ve read their terms of service and that you’re allowed to do this. And that’s because I can’t know.
So the best I can do is put that in our terms of service: if you're using our software, you agree to this. There are a lot of protective mechanisms we put into what we do to stay on the right side of the law, and again, with counsel, you know, attorneys and whatnot.
But that is a part of it, even though we feel we have complete authority to do what we do under 230(c)(2)(B) across the board, just like an antivirus program would. And I've actually spoken with a lot of antivirus legal counsel; they're very interested in what we're doing on that front.
It doesn't hurt to look at a few of these. For instance, I've looked at the Unfollow Everything 1.0 legal complaint. Okay. And the claims in there are always the same for everyone: you're interfering with the contract with our users, you're using our trademarks without authorization, and you're accessing us in an automated fashion. Those are usually the three things they hit you with.
Ethan Zuckerman:
That nobody’s complained about.
Dan Saltman:
Right? And first off, 230(c)(2)(B) tells them, you know, go screw yourself. It doesn't matter; those rules literally don't apply, as a matter of law. Otherwise, I could just write a virus and put a term in its terms of service saying you're not allowed to scan it with your antivirus program. It's ridiculous, right? Public policy has decided that filtering and removing data trumps anything you can put in your terms of service, outside of intellectual property. Okay?
It doesn't matter what you write in your terms of service; you cannot disclaim away the law. You can't put in your terms of service that Black people are not allowed to use it. That would be against the law, and it wouldn't matter what you wrote, because we have a law that says otherwise.
But you know what they can do? There is one thing they can do, and it's totally legal: they can do their best to block you technically. That is absolutely within their rights. But that's it, legally speaking, as far as the options they have at their disposal.
Ethan Zuckerman:
So, thinking big picture: in many ways, you're involved with building a successful piece of middleware, and I am involved with litigation about whether we will be allowed to return a piece of middleware to the marketplace.
For me, that work is part of a larger vision. And that larger vision is: it would be great to do some of what Power.com did, to make it possible to look at a lot of social media through the same interface and have increased control over it.
There are some very complicated and ethically complicated pieces of software out there like Grayjay that are now trying to do this in the video content market that are essentially trying to say, “Let’s aggregate everything in the same place.” Grayjay also seems to be challenging some of the business models associated with this. My interest is less to challenge the business models, more to sort of increase the user control.
Do you see Redact as part of a larger vision of how you think social media should work? Is this part of a sort of a bigger plan or is this something you needed, saw a market for and have sort of gone ahead and made widely available?
Dan Saltman:
I don’t think that we would ever move into a space of content consumption being done through our application. We’re very much a utility that is focused exclusively on user privacy and user security. We are kind of a modern antivirus tool that’s focused on a brand new threat that no one’s thought of before.
So for what you're speaking of, the problem is that I think Section 230 needs to be amended to make it legal. And the reason is that what is and is not a browser is a question that I do not feel has been answered in a way that makes it safe for you to do this, because browsers, when they're created, are typically website agnostic. Okay, there are certain rules we could look at: they follow web standards, they parse the markup in a similar way.
But there are certain things happening differently on Brave, which is a web browser that automatically blocks ads and other things right out of the gate. That's a thing.
So the question comes down to this: when I start writing specific rules for a specific website, as opposed to a broader piece that just works with anything, where is the line drawn? Unfortunately, I think most cases right now have come down on the website's side when you're creating site-specific rules that aren't based around filtering.
Again, take Grayjay. I think it's a fantastic piece of software. But most likely, because they've created a piece of software that specifically has rules for each site, the sites are going to say, you're interfering with our contract, and X, Y, and Z. Now, there may be ways around that legally. I'm not an attorney.
But that’s why I think 230 needs to be updated to be more like that. Because as it stands right now, no middleware is allowed at all, nothing at all. But a lot of sites decide to let certain things slide.
When you ask, what is middleware? Is a browser middleware? Right? Where is the specific thing inside Reddit's terms of service that says you may only use Firefox or Chrome or Edge? Is Lynx allowed? I mean, that's text only; your ads aren't going to show up there. Is that okay? Am I allowed to use a browser which prefetches?
A lot of browsers will prefetch results before you click them. So if you go on a page and there are five links, browsers behind the scenes have an option to go ahead and automatically fetch each one of those links, so that when you click on one, it's ready for you. Okay, is it okay if I do that times a thousand instead of times five? Where is that ruling? Where is that logic in place?
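A stdlib-only Python sketch of what that prefetching amounts to: fetch a page, collect its links, then fetch each target before any click. The page URL is a placeholder; real browsers do this internally, but the mechanics, and the "times five versus times a thousand" question, look like this.

```python
# Minimal sketch: collect every link on a page and "prefetch" each target.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkCollector(HTMLParser):
    """Accumulate the href of every <a> tag seen while parsing."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)


base = "https://example.com/"  # placeholder page
collector = LinkCollector()
collector.feed(urlopen(base).read().decode("utf-8", errors="replace"))

cache = {}
for href in collector.links[:1000]:  # times five? times a thousand?
    url = urljoin(base, href)
    cache[url] = urlopen(url).read()  # fetched before any click happens
```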
And the answer is that it hasn't been defined. People have just said, you're a browser, or you're not a browser. So that's one of the big reasons we chose not to be a plugin and not to be a web service. We are literally a browser: we're Chromium, a custom browser. Anything else was just too scary, especially being a web service.
Everything we do is technically possible to do other ways. If you went to redact.dev and signed in and gave us your information and we did it on your behalf, all of that's possible, but it was too risky. We don't want any access to user data. We don't want any complaints about us doing this. And there are a lot of companies that now have mandates from the FTC such that, if they think you have access to user data, they just have to go and sue you. They don't even have an option. So there are things along those lines at issue here as well. So middleware is in a really bad spot until we get updates to 230.
But the good news, and I don't know how this is happening because both sides hate each other, is that I think both Republicans and Democrats want changes to 230 for different reasons, but the result is actually going to be the same. Republicans want changes to 230 because they don't like the idea of big tech censoring ideas that they don't like. Okay, fine, whatever.
And Democrats want changes to 230 because they don't like the idea of certain accounts being able to spread misinformation. So you have different motivations. Okay.
I feel like, if both sides agree there's an issue and you want a solution here, there should be a bipartisan way for them to come together and make changes that specifically affect specific types of websites. Okay, let's target social media websites.
You have these sets of rules, and this standard test decides if you're a social media website. Then this test decides if you're a game, and this test decides if you're an instant messaging application. And based on those, there are different rules and different protections, and there should be, okay?
An instant messenger, which is one-to-one private communication, should have different rules and different legal protections than a social media site where your data is exposed to everyone. There's a different expectation of privacy and a different mechanism of use.
Ethan Zuckerman:
And we are seeing these distinctions being made in European law. We are seeing distinctions where there's the classification of Very Large Online Platforms that have certain restrictions and certain transparency mandates put upon them. We've seen interoperability requirements put into place on instant messaging, affecting a certain number of instant messaging systems.
In many cases, they're using user base as the threshold. So, amusingly enough, the only instant messaging systems required to be interoperable are all run by Meta, so they're interoperable to start with. So it has not had an enormous impact, but it is trying to get a little further, as you're suggesting, into defining what the different components of this are.
I get that you very much want to be a utility, that you don't see yourself moving into the content consumption side of things. I guess I'm just asking you to take the biggest picture possible, assuming you get the sort of 230 reform that you're thinking about.
What would you be interested in doing at a point where middleware is more widely accepted and has a clear legal status?
Dan Saltman:
So let's just say the Supreme Court says all middleware, broadly speaking, is legal, and that applies to social networks and instant messengers and whatever. What I would do in that case is probably get some of my personal pain points answered.
Why do I not have one mechanism to ask: there was a restaurant Ethan recommended to me, what was the name of that restaurant? I can't do one search that spreads across multiple platforms and get that answered. So that's something I think would be solved by an application with the level of access that we have.
Another thing you could do is get analytical data about your accounts in general, usage-based data about how you use each service. And now that all of your data is being brought in locally, it's much easier to manage that data, which I guess is an interesting thing you can do.
So I see us in that case of building out probably software that’s still utility-based, but more so based around making your experience better for your security and for your ease of use.
So the search thing is probably the biggest one; I really like the idea of being able to do a live search. We can do a search now of anything you've already downloaded to your computer, but being able to search anything across every platform would be an amazing experience to have. If you think about it, it's really the only thing that companies like Google or Microsoft can't do at this point. They can do really good searches, but for the most part, they're not going to be that tailored to you.
If I do a Google search for best restaurant, Miami, Google is going to try to be like, okay, Yelp says this, Yelp says that. You know what would be really cool at the top? Ethan said three weeks ago, oh my God, the best food I've ever had in Miami, you have to check this out, is Bodega Taqueria, right?
That should be in my search results, and none of them can do it currently, because they're all at war with each other, and it's, we're not sharing our data here, and you're not sharing your data there. So that's the last step of making search good: personalization over really internal data, and none of these guys share with each other, right? And they never will. So that would be the ultimate search engine, right?
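A minimal sketch of what the first half of that, searching data you've already downloaded, can look like, using SQLite's FTS5 full-text index in Python. It assumes an FTS5-enabled sqlite3 build (most are), and the sample rows are invented; live, cross-platform search is the part today's legal landscape makes hard.

```python
# Minimal sketch: one full-text search across locally archived posts from
# several platforms, using SQLite's FTS5 index. Sample rows are invented.
import sqlite3

con = sqlite3.connect("my_archive.db")
con.execute(
    "CREATE VIRTUAL TABLE IF NOT EXISTS posts USING fts5(platform, author, body)"
)
con.executemany(
    "INSERT INTO posts VALUES (?, ?, ?)",
    [
        ("twitter", "ethan", "Best food I've ever had in Miami, check this place out"),
        ("discord", "dan", "Fixed error 315 by reinstalling the driver"),
    ],
)
con.commit()

# One query spanning every platform archived locally, best matches first.
for platform, author, body in con.execute(
    "SELECT platform, author, body FROM posts WHERE posts MATCH ? ORDER BY rank",
    ("miami AND food",),
):
    print(f"[{platform}] {author}: {body}")
```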
Ethan Zuckerman:
Dan Saltman, thank you so much for being with us. We really love what you’re doing with Redact. Really a wonderful chance to talk through middleware. Thanks for giving us some time.
Dan Saltman:
My pleasure. I was happy to be here and good luck with all your stuff.
