Taming the Outrage Machine

Tobias Rose-Stockwell
Author, designer, and media researcher

How does technology amplify outrage and disrupt democracy? What can we learn from the history of media? How might we intervene to restore a healthier information ecosystem?

Show Notes

This episode comes alongside the launch of Tobias Rose-Stockwell's book Outrage Machine: How Tech Amplifies Discontent, Disrupts Democracy--And What We Can Do About It. Tobias is an author, designer, and media researcher who has been working at the forefront of this topic for many years.

By investigating media's role in information flows throughout history, Tobias's book brings clarity and fresh insights to the role social media plays in society today. In this conversation Jenny and Tobias distill the book's most salient points for the Denizen audience.

Outline of the discussion:

  • The role of outrage in democratic governance [4:00]
  • Technological disruptions and the dark valley [10:04]
  • Martin Luther and the printing press [10:52]
  • The impact of the first advertising based newspaper [14:04]
  • The advent of modern journalism [18:16]
  • The relationship between distribution costs and editorial incentives [22:10]
  • Television and the impact of repealing the Fairness Doctrine [23:45]
  • The role of human psychology and cognitive biases [27:17]
  • The three design features that changed everything: the algorithmic feed [31:09]
  • The three design features that changed everything: social metrics [33:54]
  • The three design features that changed everything: the one click share [37:13]
  • Economic incentives and multi-polar traps [41:47]
  • The role of government and design vs. content level regulation [43:41]
  • Hope from a historical example: radio [48:54]
  • Intervention points: the individual [53:52]
  • Intervention points: social media platforms [56:06]
  • Intervention points: policy [59:57]
  • A word of caution about decentralized social media [1:01:41]
  • The speed of technological progress vs. the speed of regulatory response [1:03:10]

Transcript

Tobias Rose-Stockwell (TRS): “The one-click share, the single click share, which is a very simple feature. It was deployed across the board by 2012 on most people's phones, and that process of just making it incredibly easy to pass on information from one person to another, essentially made our emotions viral for the first time. The moment that I'm outraged by something that I'm served, I can pass that on to anyone, anywhere. That one simple process of just making it that much easier to take a feeling that I have and passing it on to my entire audience, that propelled us into this level of virality of our content and our emotions that just did not exist previously in our species.”

[INTRODUCTION]

[0:00:49] Jenny Stefanotti (JS): That's Tobias Rose-Stockwell, author of Outrage Machine: How Tech Amplifies Discontent, Disrupts Democracy--And What We Can Do About It. This is the Denizen Podcast. I'm your host and curator, Jenny Stefanotti. In this episode, we're talking about Tobias's book, which touches on one of the most critical issues of our time: how social media has influenced the information ecosystem in ways that disrupt democracy and society writ large.

Tobias's book is a really important contribution to the conversation, because he doesn't just look at social media and the features and incentives of the technology companies, but he looks at media broadly. He actually takes us back through the history of media, from the advent of the printing press to radio, to television, and shows how with each new technology that changed the way that information propagated, there were disruptions to society and subsequent corrections, and he sees this moment with social media as analogous to these moments of the past.

As always, you can find a transcript and show notes on our website, www.becomingdenizen.com. There you can sign up for our bi-weekly newsletter, where we send the latest content, along with announcements from our partners. This is a really important conversation. I really appreciate how broad we go. It is the insights from that breadth that really help us understand the problem in a meaningful way and start to put us in a position to talk about solutions. I hope you enjoy it.

[INTERVIEW]

[0:02:14] JS: Tobias, one of the reasons that you are so near and dear to my heart is that you were the very first person I did not know who joined one of these conversations, back in 2020, when the topic was content moderation on social media. It was the first inkling that perhaps people beyond my immediate social network might be interested in these conversations, so I'm so excited to support you in this book. I know it's a culmination of so many years of work.

[0:02:48] TRS: Thanks, Jenny.

[0:02:49] JS: The Outrage Machine. Yeah.

[0:02:51] TRS: It was a great conversation, a wonderful intro, and just a wonderful way to get connected to your community.

[0:02:58] JS: What we want to talk about today is I want to draw out the key insights from your book. Obviously, the role that social media plays in society is such a critical part of the ongoing conversation that we have here. We've touched on it many times, but one of the things that I so deeply appreciate about your book is that it widens the lens beyond just social media and looks at journalism and the press and their history, but more critically, it helps us develop a systems view of the role that information plays in society writ large, both in terms of our political institutions and in terms of culture.

When we can start to draw insights from history about the velocity at which information spreads and the ways in which we can verify the veracity of that information, it's just enormously helpful in assessing the current moment and what we might do about it. I'm very excited to draw out the really key insights from the book.

I know it's called The Outrage Machine, and when you talk about The Outrage Machine, you're specifically talking about how social media is geared towards outrage, and we'll get to that. Actually, in the latter half of the book, you talk about a different type of machine, which is government, and the role that outrage plays in the functioning of the government. I think that that framing is actually really valuable to hold as we go through the entire conversation, so I want to start there.

[0:04:27] TRS: Great. Yeah. The metaphor of a machine is a useful one when talking about big system-level issues. In the latter part of the book, I open up with this wider aperture of thinking about government in general as being like a machine that operates much like an outrage machine, insofar as outrages are not bad for society. Outrages are actually quite good for society, if they are quantifying the right problems that we're facing, right? If we see something that's broken in the world, then we want to be able to effectively address it. It's a big coordination problem for us, collectively. If we see a problem, we want to fix it.

The democratic process is built much like a machine that takes outrages and then, through a very specific set of steps, turns them into policy, which then potentially solves those problems. Thomas Jefferson famously thought that we didn't actually need government as long as we had newspapers. You could just get newspapers to write about what was wrong, and then citizens could get together and collectively solve these problems on their own. They could form a small council, address the issues, and knock them out. It turns out that that's not quite as easy as it sounds, and there are a lot of failure points for traditional media and the way it operates, which the book goes into pretty in-depth.

Yeah, overall, if you blur your eyes for a second and look at government and the way it operates, it's like: we have democracies, so we can federate the process of addressing the many issues that we face together. We can vote for people that potentially take care of those issues, or don't take care of those issues, and then they legislate against them. If they don't take care of those issues, we will vote again and get someone else in to actually fix the problem. You can really think about society a bit like a big outrage machine that turns outrages into policy to actually solve those issues. I think that's an important frame for it.

A key piece of it is the speed at which those outrages spread, and the validity of those outrages. A lot of the time when people are pissed off about an issue, when I'm pissed off about something, I won't necessarily be the most level-headed about the actual solution. I might do something rash, or draconian. The founders dealt with this in their writing. In the Federalist Papers, Hamilton wrote something along the lines of: if every Athenian citizen had been Socrates, Athens would still have been a mob. Even with great intentions as individuals, collectively we make bad decisions a lot of the time. You need to have a good system of letting people cool off when they're making decisions about how to address these problems, and let them orient themselves towards solutions with a level head.

You can see this when we get extremely angry about something collectively: a series of rash policies can be enacted that don't actually solve the problem at hand, right? If the outrage moves too fast, then sometimes we actually make it worse. We make worse problems as a result. Yeah, I go through that in the book. I tried to think about democracy as this algorithm that we're running collectively, one that resembles a branch and bound algorithm: we have a system for approaching a problem, and we sort through potential solutions until we get to the right one, through this process of observing problems, voting, and then seeing if those problems are actually solved.
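To make the metaphor concrete, here is a minimal sketch of a branch and bound search, framed as choosing a bundle of policies under a budget. The policies, numbers, and scoring below are invented for illustration; nothing here comes from the book.

```python
# Toy branch-and-bound: pick a subset of "policies" (benefit, cost)
# maximizing total benefit within a budget, pruning any branch whose
# best conceivable outcome can't beat the best solution found so far.

POLICIES = [(7, 3), (5, 2), (4, 4), (3, 1)]  # (benefit, cost), invented
BUDGET = 6

def search(i, benefit, cost, best):
    if cost > BUDGET:
        return best                                # infeasible branch: prune
    if i == len(POLICIES):
        return max(best, benefit)                  # leaf: a complete proposal
    bound = benefit + sum(b for b, _ in POLICIES[i:])
    if bound <= best:
        return best                                # bound: can't beat incumbent
    best = search(i + 1, benefit + POLICIES[i][0], cost + POLICIES[i][1], best)
    return search(i + 1, benefit, cost, best)

print(search(0, 0, 0, 0))  # -> 15: adopt policies 0, 1, and 3
```

The analogy is loose, but the shape matches: propose, test against a constraint, discard branches that can't work, and keep the best solution found so far.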

Yeah, it's a pretty macroscopic view, but I think it's helpful for thinking about the role of outrage in society. I think we're all a bit overwhelmed by the quantity of problems that we're facing right now. It starts to feel a little bit desperate and dark when we're exposed to so many problems on a regular basis. I do think it's possible to calibrate our inputs in a way that can make the system a little bit more effective and run more smoothly.

[0:08:30] JS: Yeah, I appreciate that. I wanted us to have that orientation as we start to talk about the history, because I think it's so interesting that you took us through centuries. You basically said, “Hey, everybody. This is not something that we haven't seen before.” The magnitude of it is something we haven't seen before, but there have been many points in history when there was an innovation with respect to information and the way that information propagated.

There were actors who took advantage of that disruption. Then there's a correction, and you talk about the dark valley that society enters before the correction happens. Some of those examples are so interesting. I want to talk about them. The first one that you give is actually about the printing press.

[0:09:14] TRS: Right. Yeah, we go far back.

[0:09:16] JS: I appreciate how far back, right? Because you said, oh, how did it used to be? Where do we start? We started in small groups and the information was largely held in groups, right? Let's go back to the printing press and insights from that moment.

[0:09:28] TRS: Sure. Yeah, absolutely. When the printing press came along, it wasn't entirely new; there were presses before Gutenberg, generally hand-cranked ones that were just a little bit slower. That process of just increasing the speed at which people could produce information, the velocity of content being created, was enough to really dramatically change the entire structure of Europe. You could argue that that one particular invention was the most violent invention to hit continental Europe in history up to that point, because it caused 100 years of civil war as a result.

I try to frame these disruptions in this historical context. Specifically, every time we're exposed to a new media technology that is a real sea change in how we process information, there is this period I call a dark valley. We are extremely euphoric about its usage in the beginning. We're like, “This is amazing.” Then as mass adoption comes, there are all these harms that are basically hidden from view, because people are so euphoric about its use. That adoption curve intersects with the curve of the species' decreasing thriving, and you end up with this obscured period of net decrease in thriving that comes as a result.

With the printing press, specifically, you can look at the time: the Catholic Church was actually really excited about the printing press, and they would use it to print these little slips of paper called indulgences, which they would sell to the wider public as, basically, hall passes to heaven. They would let you sin if you were willing to pay for it. It was like a little ticket to absolve you of your sins. This rotund and cranky priest in Germany got upset about this. He wrote what, if you look at it today, is essentially a Twitter thread –

[0:11:32] JS: I love that you referred to it as that.

[0:11:34] TRS: - the 95 Theses. The way they're delimited, each one is an outrage, something he's pissed off about. Each one is a point: I cannot believe the church would do this. They shouldn't be doing this on this day of the week. This is crazy, this is blasphemy. He just goes through each of these individual points of outrage that he has.

Martin Luther came along at this very specific moment that intersected with the explosion in availability of the printing press. That availability both saved his life and dramatically changed the entire architecture of the power structures of Europe at the time. He was not a media guy, but the print shops operating at the time were just trying to make money, so they were printing indulgences here and there, and then they found his theses, his Twitter thread, and they started reprinting that. People found it to be totally crazy and outrageous. It basically went viral at that moment in time, much the same way a really outrageous Twitter thread goes viral now.

It showed how, in this process of intersecting with a new type of viral platform, a specific set of outrages could expand and change everyone's minds over a period of time. We tend to think about the democratization of information as being really good, and no one would have gone back to the way things were before the printing press, but it was a deeply violent period, as people were accessing information.

Suddenly, these outrages were visible to people in a way they weren't before. They could see and hear them. They're like, “Oh, yeah. Maybe it is actually problematic that the church is doing these things. Maybe it's actually not okay that they're just making money by absolving people's sins with a little printed piece of paper.” There was this lighting up of people's moral matrices in a way that didn't exist before that point. That was hugely, hugely influential for the continent, but also very, very violent. I think it's important to remember that.

[0:13:34] JS: Yeah. I just found that fascinating, right? Because if that information could not have propagated to the masses in the way that it did because of the printing press, he would have just been killed. The document was not even created for the masses. It was created for the clergy, in his opposition to them, right? That's such a striking example of the social disruption of the printing press. Then again, you go on to talk about how much violence occurred over an extended period of time because of that.

Okay, let's fast forward to how advertising created newspapers, because this is when we start to talk about news and journalism and the advent of that. Tell us what newspapers were before, all of a sudden, someone came up with the idea to say, “Oh, what if I put ads in these newspapers and sell them to people who previously could not afford news?” Because this was a sea change in the information environment as it relates to our cultural and political institutions.

[0:14:30] TRS: Totally. Yeah.

[0:14:32] JS: We’re in 1833.

[0:14:33] TRS: 1833. That's right. Yeah. News before advertising was not what we think of as news. It was political broadsheets. If you wanted to actually read the news, you would pay the equivalent of 20, 25 bucks, which was out of the range of normal people, for what was basically a piece of political propaganda. It would have some news in it, and there were mercantile presses, too. There was, like, shipping news and stuff like that. It wasn't interesting, and it was very slanted. It was extremely political.

If you were a Republican in a state, you would send papers out franked. Franking was basically the process of sending out, through the postmaster, a free copy of your political party's newspaper to your audience. If your postmaster was of the same political party, they would make sure that it went to all of your constituents in that particular area. They were very expensive papers, and there was this level of political patronage that was just so present. People didn't really read the news the same way we think about it today. It was much more politically slanted.

When advertising came along, there was this guy named Benjamin Day, who basically decided that rather than just selling the paper, he would sell people's eyeballs along with the paper. He went out and found advertisers, and he would sell the papers for a penny apiece, in what we would come to call the penny press. He would put advertising all over the paper.

In order to get people interested in it and increase his numbers, he would cover the most salacious stuff, like crime reporting. There was this famous New York prostitute who was killed with a hatchet, which was gruesome and dark. Before that point, newspapers did not cover that kind of thing in the same way. He just zeroed in on the salacious, the grotesque, the gossipy, the stuff we think of as tabloid journalism today, and he focused on it as directly as possible. The result was that there was suddenly a market of people who were extremely fascinated by all this weird, grotesque stuff that historically just wasn't covered by newspapers.

In that process, he showed that there was a business model available to subsidize the news with advertising. That was a really unique thing. There was an explosion of different newspapers that started doing that, and there were no real journalistic safeguards, no boundaries to what you could or could not print. Libel laws were notoriously loose. People would print fake news. They would print scandalous stuff on a regular basis, and there was an explosion of small print shops selling advertising along with news for the first time.

That process of advertising coupled with news became the greatest subsidizer in history of our collective organ of news consumption, of sense-making. It did not exist before that point. We've seen these polemics about advertising being the core problem with the news industry and with social media. I think there's definitely something there for sure, but I think it's really important to recognize that advertising is a huge piece of the puzzle for how and why we are able to access information cheaply in the world.

[0:17:49] JS: This was the turning point where average people had access to news. Where's the funding from, and what's the motivation? Before, the motivation was political power, and the funding that was there was basically propaganda, right? Then the motivation shifted to the market. But there is something that isn't profit-oriented about this thing that is fundamental to society, which is consuming information so that democracy functions the way that it should, right?

It's very interesting, and you talked about how the advent of ad-based newspapers brought this salacious content, but that was necessary to get the reach to sell the ads. But there was also a market for real news. There was a reputational risk to misinformation. The market had an inherent correction mechanism towards truth. Then you talked about how what was originally the New York Daily Times was really the advent of modern journalism. As we know, it's serving this really critical function in society. Can you speak to that briefly?

[0:18:50] TRS: Yeah. It's really important to think about it in terms of this dilemma that your average news consumer has. Every day, or every week, you go out and you get a paper, and you have this body of different options. You look at all the different newspapers that are in front of you and you're like, “This has something salacious on it,” so you buy that paper that week. It turns out that week, they did a story about animals escaping from the New York Zoo. It was actually a fake story. So scary was it that people actually went out with weapons to try to find the animals that had escaped the zoo, because they thought they might be marauding through the streets. It was a fake story, but it was printed, because it would sell papers, right?

The next week, when those same news consumers come out and they hear that this was a fake story to sell papers, they're going to look at their body of papers and go, “You know what? Actually, that paper, I don't trust it anymore. I'm going to go with this other paper.” Week over week, month over month, there's a reputational risk to the news producers if they stray too far from the truth, right? Every week, every month, this machine is turning along, trying to help inform people; hypothetically, primarily trying to help sell papers. People's sentiment shifted towards actually reliable news sources, week over week and month over month, and that process is core to a sense-making apparatus that is a market-driven enterprise, right?

You don't want a government edict saying, this is what the news is, you have to trust us, nothing else. You do need to have reputational risk for the parties involved. That's actually where we want to draw a line to social media here. I think one of the biggest issues that we're facing right now is that there's not much reputational risk for a lot of news producers online. A news site will spin up for a week or a month and just say salacious stuff, and there isn't necessarily a correcting mechanism, a place for people to verify that that thing was false. There's this consistent process of churning up misinformation, or moral outrage, in a way that isn't correcting over time. That's part of the issue that we're facing.
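To make that feedback loop concrete, here is a toy simulation of the reputational correction being described, plus the online failure mode. The names, numbers, and update rule are all invented for illustration:

```python
# Toy model of reputational risk: readers shift trust away from papers
# caught fabricating stories, and slowly restore trust in ones that don't.

trust = {"reliable paper": 1.0, "tabloid": 1.0}
LIE_PENALTY, RECOVERY = 0.5, 1.05  # arbitrary illustrative rates

def run_week(caught_lying):
    for paper, lied in caught_lying.items():
        if lied:
            trust[paper] *= LIE_PENALTY                       # hoax exposed
        else:
            trust[paper] = min(1.0, trust[paper] * RECOVERY)  # slow repair

for _ in range(10):  # the tabloid fabricates a story every week
    run_week({"reliable paper": False, "tabloid": True})
print(trust)  # {'reliable paper': 1.0, 'tabloid': ~0.001}

# The online failure mode described above: a site caught lying can
# respawn under a new name, resetting its reputation and escaping
# the correction entirely.
trust["tabloid 2.0"] = 1.0
```

The last line is the whole problem with fly-by-night sites: the correction only works when reputations persist.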

Yeah, so this happened here in New York. There's one little place called Newspaper Row. There was this explosion of startups on this one street in the city. There was The Sun. There was The Enquirer. There were a whole bunch of these. One of them was the New York Daily Times. Week over week, month over month, over the course of 70 years, the newspapers that were less reliable started to die out. This was also the advent of yellow journalism, which came along around the end of the 1800s. There was tremendous outrage and tremendous salaciousness that was reported on, but there was this process of professionalization that happened.

The New York Daily Times became the New York Times. People started to trust it above other newspapers. There was a standardization and professionalization of the news that happened as a result, where people were like, “Oh, no. This is how we do it, right? We need to have reliable sources.” Because newspapers call each other out, right? That's another piece of it that's really important: you have multiple newspapers, multiple robust news agencies, that are trying to actually tell the truth.

This is a competitive market. They're looking to call another party out and say, that was actually false. This other paper over here, they lied about those animals in the park. They lied about these bat people on the moon, which is another real story that was actually printed at the time.

[0:22:01] JS: Yeah. I remember that one.

[0:22:02] TRS: Which is crazy. There is this competitive process of trying to give us information that is really important. A big portion of that is reputational risk.

[0:22:10] JS: Yeah, yeah. No, I appreciate that. Then you also made a really interesting point about how it was so expensive to print newspapers, that there needed to be a certain reach to justify that, which created an editorial incentive to appeal to a broader audience, which meant that you weren't overly partisan in your reporting.

[0:22:29] TRS: Right. There was this strange thing that happened when you got the steam-powered printing presses, where it just got very expensive to run this stuff. If you wanted to maintain the reach of your audience and maintain this very expensive infrastructure, you needed to actually appeal to the widest audience possible. Partisanship actually reduced the appeal of the paper to a lot of people. There was this editorial process of stripping out partisanship to reach the widest geographic audience, so as to sell more papers.

That fundamental market incentive is really interesting, in contrast to today, where we actually have a market incentive for partisanship, because of the ability to hyper-target on the internet. That is one of the core differentiators between then and now: it was so expensive to produce and distribute news before, and now it's much cheaper and much easier to push it out there.

[0:23:21] JS: Yeah. But I think there's also the important point that there is just this inherent tension in journalism between sense-making, the role that it plays in information and feedback loops around democracy, and its need to propagate and compete and survive, which also translates into the more salacious type of reporting.

[0:23:43] TRS: Absolutely. Yeah.

[0:23:45] JS: Okay, let's fast forward to 1987. We've talked about this a lot, right? Because again, when we look at what's happening with social media, it's really just putting a lot of gas on a fire that already existed, right? We're seeing the breakdown, but I want us to understand that the mechanics were there beforehand. That's why I'm taking us through history. Because we used to get our news on TV from a couple of sources. There was a shared understanding of reality. Then something happened in 1987 that changed that substantially. What was it? How did the model change?

[0:24:23] TRS: Yeah. We all hazily remember, maybe, what this was like to some degree, or at least we've heard stories about what this was like. Back in the day, you went to one of your three networks to get your news. I found it really fascinating just unpacking how – this is the water we're swimming in, right? Everyone alive today was born into an information environment that was traditionally mostly oriented towards a shared reality, towards shared facts. In the 70s and 80s, society was exposed to really angering stuff. The Vietnam War was a tremendous outrage that was present. But everyone was actually on the same page, because the news media had started to orient itself towards making those outrages visible to people while keeping everyone on the same page.

In 1987, the FCC repealed the Fairness Doctrine, which was a policy that was put in place to help manage political perspectives on the news and make sure that there was, basically, fair and balanced reporting in the presentation of political issues. This was also just in advance of cable coming online. Just after the Fairness Doctrine was repealed, there was an explosion of very, very partisan news. It started with Rush Limbaugh, who was one of the first impressively successful people to exploit this new media landscape. He started The Rush Limbaugh Show, which became this huge mega-hit sensation, oriented towards conservative audiences.

There was this new model of actually being able to profit off of a hyper-partisan political perspective that didn't exist before. Fox News came along very soon after that, taking a lot of cues from Rush, knowing that this is a possible way of making a lot of money. Since then it has, of course, become this enormous cultural beast.

I want to qualify: there is actually a newsroom at Fox that is doing decent, traditional news reporting. It's the opinion component of it that is so substantially different. They were the beginning of this carve-out of, “Oh, no. We can actually sell opinion, and sell extremely partisan opinion, in a way that will make a lot of money. We can do that in a way that gets people very, very angry.” Since then, there have been a lot of moves in that direction on the left and the center-left as well. Opinion journalism in general has become much more common.

I think people actually don't realize that there is this hard line between the editorial side of news, which is opinion, and straight news. The reporters at Fox News that are in the newsroom don't interact with the opinion side in the way that we think they do, right? They're actually not covering those issues in the same way. There is a fundamental straight news department there that is responsible for sourcing accurate facts. They were the ones that called the 2020 election. That part of the organization is actually like a traditional news organization. They're doing the same work that CNN does, or anyone else does. It's just that they've gotten so overshadowed by the opinion side of things. Yeah.

[0:27:17] JS: I think this is an important point to make before we get to the now of social media. Why is it that salacious stories sell? Why is it that we tune into increasingly narrow news orientations? Because if we're trying to address what's happening systemically, we have to understand the human psychology of it. I want to talk about that piece.

Okay, so now we've got the arc of media. We've got a sense for some of the economic incentives. It's very important to look at how there was a very important regulation in place that, when repealed, gave way to this very partisan orientation in how we consume news. This speaks immediately to the need for some market regulation to deliver the outcomes that we care about for society in an information ecology that is governed by economic incentives, right? Let's pin that.

Now let's understand human psychology, because we have to understand human psychology before we start to get into the solutions, and even before we get into, okay, now let's look at this outrage machine, the big title of the book. Because this is so absolutely essential. How can you think about interventions if you don't understand human psychology and decision-making and why the outcomes are what they are? The reason why we tune into the channels that align with our political beliefs is because we have a bias to confirm our beliefs.

[0:28:49] TRS: A hundred percent.

[0:28:49] JS: We are far more likely to believe information that aligns with our beliefs and distrust information that doesn't. This is a way that our brain works. This is why we tune into what we do, this is important to understand, right? Because if you're trying to design a system that helps us be more objective, you have to understand there's that tendency. We have these narratives in our head and we have these traumas from our past and we see everything through that lens and we just amplify some pieces of it and we dismiss other pieces of it, and so it's just a very complex psychology that's at play in the systemic outcomes.

[0:29:28] TRS: Yeah. Confirmation bias is absolutely one of the most core biases. We don't like disconfirming information. We have a set of very basic heuristics that we use to parse our relationship with the world, and we will actually go out of our way to avoid disconfirming information. This is tremendously supercharged in the context of information that goes against the psychology and the beliefs of our groups. We will spend a huge amount of effort lying to ourselves and lying to others to confirm the beliefs of our in-group and our identity group.

I think this is illustrated best with this quote: “Social death is worse than actual death.” If our community believes something, being ostracized by that community is worse than being literally killed, which makes sense in the context of our history, right? We don't think about it that way when we say, “Oh, I'm going to kill you,” versus “You're going to hold on to this belief.” But that's absolutely where our fundamental impulses lie when it comes to processing disconfirming information. In the context of COVID, this was a huge, huge fundamental issue: a lot of people would never get a vaccine if they were part of a community that was anti-vaccine. You just end up in these bubbles of confirming belief that shape our perception of the world dramatically.

Yeah, it is interesting to think about that in the broader context of news as well, because right now, we're trading in these narratives that are insulated inside of ideological bubbles. It's very difficult to find narratives that disconfirm that in our news sources also. That's a big piece of it.

[0:31:09] JS: Okay. All right. Now, let's talk about three design features that changed everything.

[0:31:17] TRS: Okay. All right. Cool. I was going to speak to how these particular design features also influence our psychology as well.

[0:31:25] JS: Perfect.

[0:31:25] TRS: Here are just the fundamentals of it. Yeah, so there are three feature sets that were launched rather quietly at social media companies between 2009 and 2012. We both, I think, know a bunch of the people that worked on these features. They were all launched with good intentions, and they were extremely reasonable in terms of why they were deployed and why they were given to us.

The first one is algorithmic feeds, which is, as we all know, the ranking of content in our feeds, ordered for engagement, trying to maximize our time on site, which makes sense. We don't want to lose the important pieces of information when we log on to our social media feeds later in the day. It's the non-chronological sorting of information that we've become used to. We open up our feeds, and we want to see the most important stuff on top.

[0:32:12] JS: It just makes sense if you're a designer sitting in Facebook, the things that you engage on must be indicative of the things that you're more interested in.

[0:32:20] TRS: Totally. Totally. Meaningful social interactions. You want the meaningfulness in your feeds. That makes a lot of sense. But there are some fundamental problems with that. There are some issues that we have come to understand: we will actually engage with a lot of stuff, a lot of borderline content, that is actually really problematic. We respond to car crashes. When we're driving on the highway, we will rubberneck and look at a car crash. If an algorithm is tracking what we're looking at, it will serve us more car crashes.

[0:32:47] JS: Well, this point is so fascinating. I want to punctuate it. I'm so glad that you referred to that Zuck 2018 post about content moderation in the book, because it was such an insightful post when I read it, and it gave me a lot of appreciation for what Facebook does. What it showed was that Facebook has a policy around what content is not allowed, and engagement goes up exponentially as content gets closer to that line. There is something about our psychology that makes us want to engage with this, what they call borderline content. It's deeply wired into us that we want to, for some reason, engage with things that are bad for us beyond some threshold. Violence, pornography, etc. This is in our psyche. Yeah.

[0:33:30] TRS: Our attention naturally gravitates towards the extreme. That's problematic. Certainly, it is problematic, and it was much more problematic before this phenomenon was really identified. I do appreciate how Zuck was trying to address that problem in 2018 when they figured that out, which was to start demoting content that was reaching towards the borderline.
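A minimal sketch of the dynamic being discussed here, assuming a made-up exponential engagement curve and a simple linear demotion rule; this illustrates the idea, not Facebook's actual ranking model:

```python
import math

def predicted_engagement(distance_from_line):
    # Engagement climbs steeply as content approaches the policy line
    # (distance -> 0), mirroring the curve in Zuckerberg's 2018 post.
    return math.exp(-3.0 * distance_from_line)  # illustrative shape

def demotion_factor(distance_from_line):
    # The corrective inversion: the closer to borderline, the harder
    # the demotion, so effective distribution falls toward zero.
    return distance_from_line  # 0 at the line, 1 far from it

for d in (1.0, 0.5, 0.1, 0.01):
    raw = predicted_engagement(d)
    ranked = raw * demotion_factor(d)
    print(f"distance={d:5} raw={raw:.3f} ranked={ranked:.4f}")
```

The printout shows the point: raw engagement is highest right at the line, but after demotion, the most borderline content gets the least distribution.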

[0:33:54] JS: They basically inverted that graph to go to zero, instead of infinity. Okay, so we have the algorithm. What were the other two design features?

[0:34:03] TRS: Yeah. One of the other ones is social metrics. We know them as the visible comments, likes, and shares, the numbers that are beneath all of our content. It makes sense that they would have deployed this. A like is an easy thing to give away, right? It's a cheap thing to give to your friends, and it's very meaningful when you receive a like from your friend. It's this nice little concept of a currency that goes between two parties. It's very inexpensive for me, and it feels really nice when you receive it.

But there's something that was stumbled upon when likes were deployed. When we release content to our friends online, when we post something, we're suddenly in this strange little game, and we're all familiar with this now: we're trying to figure out if what we offer is going to get the likes, the comments, the shares, if it's going to go viral, if it's going to get this response from our community.

There's a fundamental process there called intermittent variable rewards, which was discovered by the psychologist B.F. Skinner. He put animals into a box. A light would flash, and if they pressed a button, they would get a food pellet. But they wouldn't get a food pellet consistently. If you actually add some randomness to the process of pressing the button and getting the food pellet, the animals go a little crazy and start pressing it all the time, right? There's this randomness in the process, and they keep trying to figure out the logic of actually getting the food pellet. That's called a Skinner box. It's an operant conditioning chamber.

What it does is it trains the animals to obsess over the button. They obsess about what the pattern is behind getting the food. That is exactly what we've inadvertently developed with social metrics: a system that trains us to essentially press the button of posting, to try to get the maximum number of likes on a regular basis. There's a whole industry that figured this out well before social media, which is the industry of gambling and slot machines.

You put people into a room and you give them a button to press, and they'll get a payout, or they won't, and they will obsess over pressing that button over and over and over again, spending thousands of dollars and thousands of hours just trying to press a button in order to get that response. That is a very core part of human nature. The reason why we have it is because for our ancestors, foragers in the wild trying to find a nice tasty morsel, it makes sense to have a reward mechanism that is not consistent. You're going to go to this bush and look for berries, and you don't find berries there. You look at another bush, and you don't find them there either. Then you finally find the berries, and it's great. It makes sense that we have this fundamental desire to essentially farm for items. You can see this exploited in video games all the time with rewards. It's a big piece of our psychology there as well.

That's the second one: we've actually inadvertently trained ourselves into this Skinner box relationship with our friends, in which we post online and we're not sure how many likes we're going to get. “Okay, cool. I got to post again. I got to figure out what it is that I'm going to get the most likes on.” We're getting caught in these loops.
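A minimal simulation of the variable-ratio schedule described above; the 30% payout probability is an arbitrary stand-in for however likes actually arrive:

```python
import random

random.seed(42)

def press(p_reward=0.3):
    """One post (or lever press): it pays off unpredictably."""
    return random.random() < p_reward

# Variable-ratio schedule: rewards arrive on average every 1/p presses,
# but never on a pattern the subject can learn, which is the hook.
presses, rewards = 0, 0
while rewards < 10:            # keep posting until ten "likes" land
    presses += 1
    rewards += press()
print(f"{presses} presses for {rewards} rewards")
```

The fixed average with unpredictable timing is exactly what slot machines tune; on a fixed schedule, subjects stop pressing once they learn the pattern.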

The third feature is the one-click share, the single-click share, which is a very simple feature. It was deployed across the board by 2012 on most people's phones. That process of just making it incredibly easy to pass on information from one person to another, essentially made our emotions viral for the first time.

In the moment that I'm outraged by something that I'm served, I can pass that on to anyone, anywhere. That one simple process of just making it that much easier to take a feeling that I have and pass it on to my entire audience propelled us into this level of virality of our content and our emotions that just did not exist previously in our species. We talked about how disruptive it was when the printing press went from a few dozen sheets to a thousand sheets per day. This is the same level of transition in terms of the speed and spread of our information.
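One way to see the arithmetic of that transition is to treat a share as a simple branching process, where each viewer reshares with some probability; the audience size and probabilities below are invented for illustration:

```python
def expected_reach(audience=200, p_reshare=0.01, generations=6):
    """Expected views if each viewer reshares to `audience` people
    with probability `p_reshare` (a simple branching process)."""
    total, frontier = 0, audience        # generation 0: your own followers
    for _ in range(generations):
        total += frontier
        frontier *= audience * p_reshare # expected audience of the reshares
    return int(total)

# Lowering friction raises the reshare probability, and reach explodes:
print(expected_reach(p_reshare=0.001))  # 249: the cascade fizzles out
print(expected_reach(p_reshare=0.01))   # 12600: each share spawns two more
```

A tenfold drop in friction doesn't produce ten times the reach; it flips the cascade from dying out to compounding, which is the discontinuity being described.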

The result of that was this crazy period, right? Which I think we all started feeling around 2012, 2013. We're like, “Whoa. This is a new flavor of information. This is a new matrix of stimulus that we're being exposed to, that we haven't been exposed to before.” These three features together started cascading a set of changes to our culture that have, I think, fundamentally shifted how we look at the world.

[0:38:33] JS: Yeah. I appreciate it, too, just looking at it from a systems perspective. You can see how the combination of the like, the retweet, and the algorithmic feed creates this reinforcing feedback loop, where increasingly outrageous content gets increasingly more engagement. Again, I mean, literally the graph is there in Zuck's post. It goes up exponentially. You can see the system has a loop in that direction, right?

One of the things we talked about was the velocity at which information propagates and the veracity of that information. We had this incredible up-throttling of velocity, where misinformation was actually far more likely to propagate, because it was more outrageous. Then again, that was coupled with confirmation bias, with specifically refuting anything that doesn't conform to our beliefs, and with increasing polarization. This information ecosystem, where we can see all the dynamics we've been talking about in the history leading up to this, just got turbocharged by these three features, to the point that it is breaking down the fundamental epistemics of society.

[0:39:42] TRS: It's weird to think about, but a viral post today will actually get more reach than a network broadcast did in our parents' generation. If you think about that, right? The network news would spend days or weeks researching a story, and then present it and push it out to their widest possible audience. A single viral post of questionable information integrity can reach the same audience in less than a day. That's the fundamental difference between the way that we used to process information and the way we process it now.

[0:40:13] JS: Right, right. Now, we've talked about how obviously, the ad-based model for social media platforms makes this as bad as it is, because the incentive is to maximize engagement on site. But even if you strip that away, you're still competing for attention, for survival. You're still trying to get more people to pay attention to the thing that you're creating. You still have all of these cognitive biases that we talked about.

We get the role that information plays. We get how media has evolved over history. Thank you very much, Tobias Rose-Stockwell. Here we are; we understand the breakdown in the mechanics of something that's fundamental to human socio-economic systems throughout history. Where do we go from here?

[0:41:01] TRS: Yeah. I think the general angle of looking at the system overall is really helpful for trying to think about solutions. I think drilling down to specifics is even more important than the macroscopic view, just because we got here with a very specific set of feature changes that have turned us into this strange society of cacophonous outrage. I think that if we look back at these individual systems, we can actually start to see solutions. If you look at the internal documents at Facebook and a lot of the research out of Twitter's trust and safety work, you can see the way that these problems can potentially be turned down, if you are willing to take some of the hit in terms of your net profit margin.

[0:41:47] JS: Well, and this is where we have to introduce just the system dynamic around multi-polar traps.

[0:41:52] TRS: Totally. Totally. Yeah.

[0:41:55] JS: Let's talk about Moloch.

[0:41:56] TRS: Yeah, let's talk about Moloch. Let's talk about one of the specific features, say, putting friction in place between you and your single-click viral share. You might be familiar with the ethnic cleansing that happened in Myanmar as a result of virally spreading WhatsApp posts about the Muslim minority there. Many people were killed, and many thousands of people were driven from their homes, because of false rumors that were spread about that minority. It ended up being a real, tremendous atrocity that happened in Myanmar.

What Facebook did, as a result of that, was try to reduce the spread of that information to a wider audience. Rather than being able to send a WhatsApp message to 250 of the groups that you're in, you can only send it to five. Rather than sharing with all of your friends, you can only share it with a smaller number of friends. Just throttling down that misinformation is helpful.
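A minimal sketch of the kind of forwarding cap being described; the function names, the exact limit, and the silent-truncation behavior are assumptions for illustration, not WhatsApp's actual implementation:

```python
FORWARD_LIMIT = 5  # down from ~250 chats: the throttle described above

def deliver(message, chat):
    print(f"delivered to {chat}")        # stand-in for the real send

def forward(message, targets, already_forwarded=0):
    """Cap how many chats a single message can be forwarded to."""
    allowed = max(0, FORWARD_LIMIT - already_forwarded)
    for chat in targets[:allowed]:       # everything past the cap is dropped
        deliver(message, chat)
    return already_forwarded + min(len(targets), allowed)

sent = forward("breaking rumor...", [f"group {i}" for i in range(250)])
print(sent)  # 5: the other 245 forwards never happen
```

In the branching-process terms from earlier, a cap like this directly lowers the effective reshare rate, which is enough to flip a cascade from compounding back to fizzling.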

In the context of Moloch, which is the much bigger issue here, think about a competitive market where there are many potential social media platforms. You can see this actually with Telegram right now, right? Telegram has become this hugely influential media platform, and they have almost no moderation, very, very little moderation. As a result, you could see people migrating from a tool like WhatsApp to a tool like Telegram, because there's less friction in Telegram. As Facebook is deploying this throttling, this friction, to try to reduce the spread of rumors, Telegram potentially comes in, eats up part of that market share, gets people on there, and then they can pass things on. There's a coordination problem among these larger-scale entities, among social media companies, right?

[0:43:36] JS: If I do the thing that's better for society, I'm just going to lose my competitive position relative to someone else.

[0:43:41] TRS: Right. Right. That's the multi-polar trap problem that we're facing right now, collectively across the board. It does require an entity to help the parties coordinate, right? It does require some level of enforcement at a higher level to keep these systems from being their worst selves, to keep the market dynamics from driving us into these really problematic traps. I do think that there's a role for the government in that process. You need to have governments in place. I don't think that content-level regulation is necessarily the right move. I do think that, potentially, design-level regulation is the right move.

What is the optimal level of sharing? What is the optimal number of people that you should be able to share a post with instantly? What are the design incentives? What are the design models that we should deploy to make sure that the right number of people are getting good information on a regular basis, right? It's not talking about what information is good. It's actually just talking about what the right design of these tools is to make sure we're maximizing good information to the widest number of people. Does that make sense?

[0:44:44] JS: Yeah. I mean, where you have these multi-polar traps, the system in itself with the actors will not yield the optimal social outcome. There's just a case for regulation around certain attributes. I think that's right. I mean, an obvious example is if you just look at what's happening with AI right now. Everybody's releasing this technology. They don't understand it. They don't understand the implications of it, but everybody else is doing it, so they've got to do it, too. Everyone's ringing the alarm bells.

There's not going to be coordination among large public companies that would risk their competitive advantage in an enormous emergent market. The only way to really address that is through some sort of regulation.

There's also just more competition. In some cases, more competition makes sense, right? You have more options with different content moderation. You see this with the proliferation of web3 and blockchain and other social media platforms, and the ability to have a federation of social media platforms, so that there's not as much friction to leave them. There are things happening in a competitive ecosystem that enable better outcomes, as well as things that are required on the regulatory side to yield better outcomes.

[0:45:58] TRS: Right, right. Yeah. I mean, it's really important to note that trust and safety teams do actually know what good information looks like. We know what the model is of high-quality information. We can see it. Facebook, now Meta, actually knows what good information looks like inside the platform. They can tell. Building incentives internally that actually prioritize good information over bad information is really important. Then having a federation of companies that also agree upon this, having strong trust and safety collaboration across these platforms, is really important. And I do think that our regulatory bodies should be working to try to help mitigate these things.

I am actually optimistic that we can potentially get there if there's the right political will. I think this is truly the problem of our time: making sure we have good access to good information. Again, I just want to double-click on this: it's not about content-level censorship. It's about the design of these platforms being more epistemically sound. It's not about demoting certain types of content, or more tightly defining hate speech, necessarily. It is actually about making sure that the optimal design of these platforms is what we're using, so that we're more prone to seeing a real piece of information versus a rumor.

[0:47:11] JS: One of the things I think is very thorny and complex about this problem, and I've raised this repeatedly in many of our conversations, is that when we talk about all of the biases that we have, there is a tension between the things that we will decide to do with our attention and what is good for society. A simple example is, it's better for me if I don't eat ice cream. But if you put ice cream in front of me, I'm going to eat it. If I were a kid, I would appreciate that my parents would not let me eat sugar all day long, because they know that it's not good for me.

[0:47:46] TRS: Retrospectively, you feel that way, right? Yeah.

[0:47:49] JS: Well, there's just a deep complexity here: designing a system that yields the behaviors for optimal social outcomes is inherently paternalistic. How do we get around it? With a nudge? We're like, “Okay. Well, we're still giving people choice. We're just designing things so that they choose the things that are best for society at the end of the day.”

[0:48:12] TRS: Totally. Yeah.

[0:48:14] JS: How do we design things that help us sense make and see opposing viewpoints when our confirmation bias leads us towards increasing polarization? I understand that this is bad for society writ large, and we need algorithms that give us the thing that's healthier for us, even though I really just want to eat the ice cream. I don't know how you reconcile that. That's really at the threshold of this issue. I haven't heard any really good answers, because I think it's really thorny.

[0:48:46] TRS: Look, I think it's less thorny than we think. I'll use just this example from –

[0:48:51] JS: Great. I love it when I'm wrong.

[0:48:54] TRS: I'll use the example from the book of the dark valley of radio, this one particular chapter, which just shows, I think, how we've been thinking about this problem from the wrong perspective. When radio came along, it was this amazing new technology, right? It was the first time you had the voice of someone in your living room, speaking to you and your family in this deeply intimate way, right? It was this powerful force for distributing information instantly to a huge audience, one that didn't exist before, and much quicker than anything before it.

If you were to turn on your radio in the 1930s, on any given Sunday, you were likely to hear the voice of a Catholic priest who would explain macroeconomics to you. He would rail against the KKK. He would try to give you some life advice, and then he would also tell you about a subversive plot to take over the government that was orchestrated by a secret cabal of Jews. His name was Father Coughlin. He was a vicious anti-Semite, and he reached millions of American households weekly. He was one of the most famous broadcasters in the country in the 1930s.

It's shocking to think about how popular he was at the time. We tend to have a rose-colored lens when thinking back to America's role in World War II, but he was a tremendous voice of anti-Semitism. He was supported by, basically, a proto-NPR that he built, where people would send in a couple of bucks. They would literally mail him money to run his own radio station, which he developed himself, with this tower that he built himself. It was very expensive to operate, but he was so popular that people would just send him, Patreon-style, a couple of bucks to keep listening to his anti-Semitic rants.

He also translated, verbatim, a speech by the Nazi Party for the United States in the 1930s. He was actually exercising his freedom of speech rights in this very clear way, in the lead-up to World War II. He was a Nazi apologist. He was extremely persuasive. Again, he reached millions of Americans. Eventually, he was de-platformed, and I think this is just important to think about. Someone somewhere is going to make the determination as to what is acceptable speech and what is not acceptable speech. That decision is always made. There's not just a universal spread of information everywhere and anywhere without repercussions. Someone somewhere makes a decision as to how information is spread.

Right now, that decision is based on social media companies' design decisions. In the 1930s, in the lead-up to World War II, this guy was a legitimate threat to national security. He was doing something that was fully legal, 100% legal all the time. There was no problem with him just spewing straight-up Nazi propaganda to a huge portion of American households. The way he was de-platformed was that the government nationalized the airwaves. He was using this new thing called the airwaves, which didn't exist before that point, and the government said: actually, this is a public good. This is a public asset. The airwaves are a public asset. We are going to nationalize these, we are going to give out licenses, and we are going to refuse to give him a license.

This is one of the origination points of the FCC, that process of forcibly de-platforming a Nazi sympathizer who, if unchecked, if let run his course, could have fundamentally shifted American policy away from engaging in World War II. The world would be fundamentally different if that hadn't happened. I just think that it's important to put a little spotlight on the history of how we share information. We go through these periods of time in which we have these questions about freedom of speech. Freedom of speech does have outcomes. Someone somewhere needs to make the determination as to what is allowed and what is not allowed.

Father Coughlin was eventually de-platformed, and as a result, I think we are better off collectively. We could have been in a much worse position. The attitude that we all deserve any information, always, no matter whether it's false, is a very contemporary view that is not reflective of the struggles of history, the struggles we've already gone through to make sure that we have access to good information and not totally bogus propaganda. That is a really important point to make here. We're swimming in the water of a libertarian ideal that I totally understand and totally respect. But when it comes down to it, someone somewhere is going to make these determinations. If it's propagandists, then we're actually ceding some of our epistemic landscape to people who have really messed-up agendas. That is going to affect us in a meaningful way.

[0:53:52] JS: You mentioned that you are going to publish a piece, following the book, that speaks a little bit more to the solution side. I know we've touched on it a little bit already, but can you give us a preview of what's to come?

[0:54:05] TRS: Yeah. The subtitle of the book is How Tech Amplifies Discontent, Disrupts Democracy--And What We Can Do About It. At the end, I do touch pretty deeply on the broader themes that I think will help point us in the right direction. But there is also a hunger for more specific design interventions that I think can help us here. I'm working on a piece right now that will be up on www.outragemachine.org.

[0:54:30] JS: Okay. Can you tell us what those are, or do we have to wait until you publish it?

[0:54:36] TRS: Yeah. I mean, I'm happy to speak to some of them. I think what it comes down to is three different areas: things that governments can do, things that individuals can do, and things that platforms can do. In the book, I speak to some of the individual heuristics we can use to help parse better information. I think regret is a really important emotion for understanding our relationship with these tools. Regret is a System 2 process. It's not necessarily a System 1 reaction. In the context of fast thinking versus slow thinking, regret is this really important emotion.

A lot of the time, people will spend a huge amount of time on social media, and they'll come off of it like, "Oh, that was probably not a good use of my time." But we're habituated to it, so we come back to it, and we stay in this loop of extreme attentional obsession, then regret when we come out of it. I think it's really important to focus on those moments of regret and try to calibrate your entire social media diet, and your news diet in general, around them. Because this is not just social media. News is a big part of this issue, too. News and social media together are the outrage machine.

Yeah, calibrating your diet based on regret, and also looking to sources that aren't partisan, that tend to be much more focused on straight news. You can find these. The AP and Reuters are two fantastic sources where you won't find opinion journalism and analysis in the same way you will on other platforms. You'll find straight news about what's actually happening in the world. If you want information rather than opinion, that's a great place to go.

On the platform side, like we mentioned, there's a whole bunch of categories of frictions that can be deployed across platforms to reduce the spread of bad information, to mitigate and deprioritize that stuff. Again, not necessarily at the content-specific level, but helping bad information travel more slowly, if that makes sense. An important distinction here is the one between velocity and virality.

Virality is not inherently bad, right? A good book that we read, that we pass on to our friends, and they pass on to their friends, that's actually great. That's what you'd call slow virality, and I think it's a good thing in general, like word of mouth about a movie. High-velocity virality tends to be bad. It tends to be the type of information that is emotional, lacking context, oftentimes false, and very reactive. I think that prioritizing slow virality over fast virality is a really key piece of this. There's a whole bunch of specific interventions that companies can deploy to improve that slow-virality structure, along the lines of the sketch below.
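[Editor's note: to make the velocity-versus-virality distinction concrete, here is a minimal sketch of how a feed might dampen fast-spiking posts. Every name and number is a hypothetical illustration, not any real platform's API or ranking system.]

```python
# A minimal sketch of "slow virality over fast virality" ranking friction.
# Everything here is a hypothetical illustration, not a real platform's API.

from dataclasses import dataclass

@dataclass
class Post:
    base_score: float       # whatever relevance score the feed already assigns
    total_shares: int       # cumulative shares over the post's lifetime
    shares_last_hour: int   # how fast it is spreading right now

def velocity(post: Post) -> float:
    """Fraction of all shares that happened in the last hour (0.0 to 1.0)."""
    if post.total_shares == 0:
        return 0.0
    return post.shares_last_hour / post.total_shares

def rank_score(post: Post, friction: float = 0.5) -> float:
    """Dampen fast-spiking posts; slowly accumulating, word-of-mouth-style
    posts keep most of their score."""
    return post.base_score * (1.0 - friction * velocity(post))

# A slow-burn post outranks an equally scored post that spiked in one hour.
slow = Post(base_score=1.0, total_shares=1000, shares_last_hour=20)
fast = Post(base_score=1.0, total_shares=1000, shares_last_hour=900)
assert rank_score(slow) > rank_score(fast)
```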

Again, to go back to structure here, we know the structure of good information. We know what good information looks like. It is stuff that actually comes from sources. If you think about the history of the internet, just to zoom out a little, there used to be this whole industry of people who would spend their entire lives trying to find a handful of good facts. Their whole job was to go out and find a handful of good facts, put them into a tome, and sell that tome. It was called an encyclopedia.

You'd buy one encyclopedia set for your family, and it would sit on the wall as this huge line of books. When that industry met the internet, we got Wikipedia. Wikipedia is so much better than a traditional encyclopedia, right? It's free. It is many thousands of times bigger than a traditional encyclopedia, and it is an incredible asset for people. We all use it. It's this huge asset for our species. Similarly, if you had a question about a very specific research topic, you used to go to the library, talk to the librarian, use the Dewey decimal system to find a book, and read the book until you found your answer. It took a day to find a particular piece of information you were looking for.

When libraries met the internet, we got Google, right? Think about this relay race of traditional informational industries, or informational enterprises, meeting the internet and becoming much better. But when newspapers met the internet, we got Buzzfeed, we got clickbait. There wasn't actually a handoff there. In the relay race, the old industry meets the internet and becomes a new, better industry, and the handoff from traditional news to the modern internet was supposed to be social media.

That handoff happened without the specific design structures that professional news has in place: the way the newsroom works, sourcing specific information, making sure that you have one to three verified sources before a story reaches a larger audience. That same transition should have happened in social media. We can employ some of these same principles of professional journalism. I just want to note, both encyclopedias and Wikipedia have a really powerful sourcing mechanism in place. Google is entirely based on sourcing and social proof. That's why we use it. We look to it because it has this powerful engine, PageRank, which is based on a process of sourcing information. Again, we know what good information looks like. Even good, current information, good viral information, good news, we know what that looks like. It's just that social media has not embedded the same principles from the industry of journalism into its design. I think that we can do that.
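[Editor's note: for readers curious what "ranking by sourcing" means mechanically, here is a minimal power-iteration sketch of the classic PageRank formulation, where a page's score flows in from the pages that cite it. This is the textbook algorithm, not Google's production system, and the example graph is invented for illustration.]

```python
# Textbook PageRank via power iteration: a page's score is built from the
# scores of the pages that link to it, i.e. ranking by who sources you.

def pagerank(links: dict[str, list[str]],
             damping: float = 0.85, iters: int = 50) -> dict[str, float]:
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}            # start with uniform scores
    for _ in range(iters):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                       # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share      # endorsement flows along links
        rank = new_rank
    return rank

# Example: C is cited by both A and B, so it ends up with the highest score.
print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))
```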

Then finally, and this is a lot, but finally, in the bucket of solutions on the government side, I think that reforming Section 230 is really important. I think increasing the liability that platforms hold for the type of information they spread is critical to actually solving this problem. If you increase the liability, I think they will immediately snap to attention and start to, again, not police content on a case-by-case basis, but actually focus on the design structures that could improve the type of information we're given on a regular basis.

I don't think we should think about social media companies as platforms in the way we do today. I think they're much closer, much more akin, to media, and we have regulations around media, right? More people are watching TikTok than TV now. That's how it works. They're much more like media companies in a lot of ways.

[1:00:43] JS: Well, that's what's actually really interesting. There's actually a report that was prepared for Congress on regulating social media; I summarized it on the Denizen website for those who want to go down the rabbit hole. It defines three different regulatory models to consider: one being the public square, another being special industries, which may include broadcast, and the third being a publisher, which has full protection under the First Amendment. What you're pointing to is seeing it not as a publisher with full protection, but as a special industry, where more regulation would be justified.

[1:01:24] TRS: Yeah. I think that's the way to go. I do think that what we're facing here is just a dark valley of hidden harm. We did not know that we would find ourselves in this particular position when it came to social media. I do think that we can find our way out. I just think it's a process of coordinating correctly and climbing up.

[1:01:41] JS: Well, again, it's also that with the advent of web3 and regulations that support increasing competition, you can get a proliferation of social networks and portability between them. You don't have the network effects that lead to just a couple of platforms dominating, where you have a lot of these pernicious outcomes. I think that's one positive trend, coupled with the need for regulation. You're nodding your head like, "I'm not so sure I buy that," but it's going to be very interesting here. Okay, tell me why.

[1:02:10] TRS: Yeah. I mean, there are a couple of examples of portability actually not necessarily improving outcomes. There are a number of web3 protocols in place that, without the right regulatory framework, are essentially becoming echo chambers of particular political dispositions, right? We're in the age of bespoke social networks. Truth Social, for example.

[1:02:37] JS: Yeah. So then, it's like the social networks become your confirmation bias pocket in the same way that the –

[1:02:41] TRS: Exactly.

[1:02:42] JS: Yeah, sure. You saw that happen with Parler.

[1:02:45] TRS: Yeah, exactly. No, and I think it's happening across the board. I think Twitter, in a way, is actually starting to default to a very particular ideology as well.

[1:02:50] JS: Yeah, that's an interesting point. That's a really interesting point.

[1:02:54] TRS: I wish it were as simple as giving people a choice, but I think something much more structural needs to happen.

[1:03:00] JS: It's both-and, right? More competition will be valuable in some ways, but competition in the face of confirmation bias is a really important caveat. I guess, one last question that I'll ask. I know we've been going longer than we intended to, but again, you're covering such an important, fundamental topic to the inquiry in the Denizen conversation. How do you feel about the complexity associated with the pace of technological change relative to the ability for these corrections to happen?

[1:03:31] TRS: Right. Yeah, great question. This is why I think looking at these disruptions as a cycle is really important, right? Zooming out and thinking about it as this consistent pattern. It's like a fitness landscape in biology, where in order to reach a higher peak, you need to descend for a period first. You need to go down before you can go up. I think you can think of these dark valleys of hidden harms with new technologies as that descent into a lower valley before you get to a higher peak.

Recognize that every new media technology is going to have this, and also, AI is a huge new media technology. I just want to be clear, AI is massive, right? Every new media technology is going to have this period of exploding unintended consequences, a period of decline in which people are confused, they're upset, they're misinformed, they're potentially outraged. There's a period of chaos that comes with every new media technology. I think it's important not to try to avoid those entirely, because you can't avoid them, right? There are going to be unintended outcomes with every major adoption of a new tool.

Instead, anticipate the fact that they're going to come, that they are inevitable, and try to reduce the depth of these valleys and reduce their length, the duration before we figure out a way out. I think that's a really key piece of this, because the rate of technological change and the introduction of new tools is only increasing. It's going up exponentially. We're going to be hitting these cycles of disruption faster and faster.

Just recognize that it's inevitable that we're going to have some harms associated with this, get ahead of them as quickly as possible, and try to reduce them, reduce the length of time before we figure out our way out. I think you're starting to see that already with AI. The conversation around AI is much more advanced than it was around social media a decade ago. We're already saying, this is likely going to be a problem, so let's not just leave it to the market to figure out. Let's think about this in advance and approach it as best we can, without it blowing things up first.

[1:05:38] JS: Okay. Well, thank you for this very meaningful contribution to one of the most important essential topics of our day.

[1:05:46] TRS: Thanks, Jenny. Thanks for your great questions, and thanks for Denizen. The community has been super helpful, and our conversations have been super helpful in contributing to this book and this whole project.

[END OF INTERVIEW]

[1:05:58] JS: Thank you so much for listening. Thanks to Scott Hansen, also known as Tycho, for our musical signature. In addition to this podcast, you can find resources for each episode on our website, www.becomingdenizen.com, including transcripts and background materials. For our most essential topics, like universal basic income, decentralized social media, and long-term capitalism, we also have posts summarizing our research, which make it easy for listeners to quickly get an overview of these particularly important and foundational topics.

On our website, you can also sign up for our newsletter, where we bring our weekly podcast to your inbox, alongside other relevant Denizen information. Subscribers are invited to join our podcast recordings and engage with the Denizen community in our online home, The Den. We're partnered with some incredible organizations at the forefront of the change that we talk about. We share announcements from them in our newsletter as well.

Finally, this podcast is made possible by support from the Denizen community and listeners like you. Denizen's content will always be free. Offering Denizen as a gift models a relational rather than a transactional economy, enabling Denizen to embody the change we talk about on this podcast. It's through the reciprocity of listeners like you that we are able to continue producing this content. You can support us, or learn more about our gift model, on our website. Again, that's www.becomingdenizen.com. Thanks again for listening, and I hope you'll join us next time.

[END]
