Social Media Governance

Sudhir Venkatesh
Sociologist
Tracey Meares
Legal Scholar and Author

How are social media platforms governed? How might we design products and processes to revitalize community and cultivate values that are essential to civic discourse? What can we learn from the fields of sociology and criminal justice?

Show Notes

In this episode we discuss governance on social media platforms, an important topic given the enormous social consequences of our reliance on them.

Our guests are Columbia sociology professor Sudhir Venkatesh and Yale law professor Tracey Meares. Sudhir and Tracey co-direct Yale's Social Media Governance Initiative, which is leading research in this field.

In this conversation, Jenny, Sudhir, and Tracey explore:

  • What is governance? [2:50]
  • Governance as a performance [5:28]
  • Establishing norms [7:39]
  • Where authority comes from [10:02]
  • Parallels between criminal justice and social media governance [12:57]
  • How to internalize rule following [13:45]
  • Bad actors and Facebook [16:05]
  • Designing for pro-social engagement [19:29]
  • Challenges establishing channels of communication in tech companies [23:05]
  • Pro-social objectives and incentive misalignment [30:00]
  • Who are tech platforms accountable to? [37:48]
  • International governance and local cultural norms [43:21]
Transcript

Sudhir Venkatesh (SV): “I think feedback is a key mechanism because at the heart of this kind of governance approach that Tracey is talking about is communication with users and having them have a voice, feeling dignity, and being able to appeal and tell you things that aren't working.”

[INTRO]

[00:00:21] Jenny Stefanotti (JS): That's Sudhir Venkatesh, scholar and tech industry veteran who works at the frontier of social media and how it's governed. This is the Becoming Denizen Podcast. I'm your host and curator, Jenny Stefanotti.

In this episode, we discuss social media governance, that is how we govern our online spaces, particularly the large social media platforms that have enormous effects on our social fabric. Of course, this relates to how governments regulate technology platforms. But here, we're focused on how tech companies govern the online spaces they create and, specifically, how they might do so in a way that has positive spillovers for society writ large, as opposed to the negative externalities that dominate today. 

Our guests for this episode are Sudhir Venkatesh and Tracey Meares. Sudhir is a professor of sociology and African American Studies at Columbia University. He's also worked in industry – he was at Facebook, where he worked on bullying and misinformation as Head of Research and Community Integrity. He also served as the Lead Social Scientist for initiatives to improve the online health of Twitter.

Tracey Meares is a professor of law at Yale University. Her research has largely been focused on criminal justice, with expertise on policing in urban communities. So it's really interesting to see how she brings this view into looking at governance on social media platforms. She also worked with Sudhir at Facebook. So they have this amazing perspective that brings in two very different lenses, sociology and criminal justice, coupled with their direct experience working with some of the biggest social media platforms in the world. 

They are co-directors of the Social Media Governance Initiative at Yale. There, they are combining thought leadership with practice. They work with tech companies, policymakers, and communities, with the goal of enabling various actors to create an online environment that's good for wider society. They're particularly interested in creating pro-social governance, where rule following is internalized, versus the dominant mode, which focuses on punishing those who break the rules. 

In this episode, we talk about the parallels between criminal justice and social media governance, the roles that we assume in online spaces and how they can be seen as a performance, how product design informs culture and behaviors on platforms, where authority comes from in social media spaces, the importance of user feedback in designing products that meet people's needs, challenges with insularity and burnout in the tech industry, and issues with incentives for companies that are ultimately governed by quarterly business expectations. There's a lot here, and I hope you enjoy the conversation.

[INTERVIEW]

[00:02:50] JS: I wanted to start us off with this question of what is governance because there's a lot of ambiguity in what that means. In fact, a lot of people have slightly different definitions. So how do you define governance? And specifically, what do you mean by social media governance?

[00:03:04] SV: Tracey, you want to take that?

[00:03:06] Tracey Meares (TM): Well, it's interesting. I was curious to know what you would think about governance, Sudhir, because I actually think talking about social media governance specifically is a way to understand the general term of governance. So I think I would flip the question a little bit. Because when you talk about governance in the social media space, most people think about taking what I would call a hands-off approach from the perspective of the state, right? That's what a lot of these fights and disputes are about with respect to whether we're enhancing or promoting First Amendment values or whether Congress is going to extend certain aspects of FCC regulation to social media platforms. 

I think, for many people, governance is coincident with what the state does to regulate, and then an extension of that idea is that regulation necessarily means punishment, right? Fines, some kind of punitive action. We instead think about governance in terms of relationships between groups and authorities, whoever that authoritative figure may be. You're an authority in your house, for example. The way in which you organize your life and are able to promote your goals and projects, I would say, is a governance project.

Our idea is that there's a lot one can do in the context of governance, even in terms of promoting rules and certain organizational processes that don't really have anything to do with the state, right? It still is governance, and we like to work with theories that promote understanding of how people come to conclusions about trust in authorities, whether those authorities are state actors – that's the work in criminal justice – or the leaders of groups on social media platforms.

[00:05:25] JS: Sudhir, do you want to add anything to that?

[00:05:28] SV: So I will take the sociologist’s angle and say that just to touch on one aspect of it, which is that governance, for me, offline and online is a performance. I mean that in a very kind of strict sense that a lot of the way that we experience life with other people is through these roles that we have. We may think of ourselves as Jenny, Sudhir, etc. with a highly individualized role, individualized personality and history, etc. But we come into these particular places, a grocery store, Facebook group, whatever, with a particular role and a profile, and it's very limited, and we perform. 

One of the things we know about performance is that there are these rules, and there's a working consensus. So I need to know a little bit about what the rules are in this place. I need to know a little bit about what your expectations are, what I can say, what I can't say. I'm trying to read your cues. Sometimes, this happens very quickly. Sometimes, it happens through explicit rules, and sometimes just implicitly. So the idea of performance can be useful because a lot of what we find on social media sites is that the way that the platforms perform and the way that the users perform are often just not in sync with each other.

A very silly example can be used to demonstrate this. In a lot of the early work of the research team that I was a part of, and then was fortunate enough to direct for a while, what we did with bad actors and people who were breaking the rules was just getting a sense for what they were thinking at the time. One thing that jumped out was that people just couldn't find the rules. People didn't understand what the expectations were. Or, as we would say in sociology, there was no consensus in that Facebook group. So they were just acting, but there was nobody ever telling them, “This is how you're supposed to behave. These are the expectations.”

A lot of it is just miscommunication. So we perform, we get a cue. Oops, that wasn't good. Then we often don't have a chance to recover. So a lot of the way that we think about users is: can we help them perform and put on a mask in the right way and in a healthy way online?

[00:07:39] JS: What you've said raises two questions for me, which is who is the authority, and how does that authority get conferred, right? So obviously, the people who run the platforms. But within the users, there's an authoritative role between them. How does that get established, and what does that imply for governance? 

Then related to what you were saying, Sudhir, how do those norms get established? What does the platform do to establish those norms, versus those norms being emergent within the community?

[00:08:08] SV: I'll go first this time. One aspect of it, just to be brief and to give a sense of something that's often ignored, is that we do really think a lot about the platform. What are the expectations that we have of people who create it, the policy team, the designers? Are they using a design-centered approach that cultivates a healthy interaction? We think about the general expectations of the account holder. So we do have some understanding, but there's often something we don't think about. For example, bystanders and how much power bystanders have in determining what your platform will be like. 

Just as an example, in the 1950s and 1960s, people used to do these experiments. They would stage an experiment with people waiting in a long line in a public place, and a professor would send a student to cut in line. One of the things that they found was that if I cut in front of you, you're about 90% likely to say to me, “That's not nice,” or “Get out,” or whatever – you're going to acknowledge it. For the person in back of you, immediately it goes down to about 5%. They're about 5% likely to say, “Hey, that wasn't really nice. You're cutting in front of all of us.”

Now, if you can get that person to say something and increase it from 5%, you will be able to resolve conflicts. You will be able to bring a sense of cohesiveness to all of those people waiting in line, or to a group, etc. You'll elevate the expectations. You'll start creating norms. You'll do all these things just by getting bystanders, or what some people would call a silent majority, to come into being. That's a silly example, but I use it because when I work with platforms, so much of the attention is, “Are our rules written the right way? Do people understand them?” It's regulation-based, when sometimes it's just about thinking through how we want the community to be with each other.

[00:10:02] TM: So here's where I want to jump in. It's not only that we think about whether we have written the rules in the right way, but then the goal is to ensure that everybody – Well, I was going to say the goal is actually not to ensure that everybody follows the rules. So the goal is to try to figure out who's breaking the rules and what to do with those people. There's always a focus on the relatively small group of people who break the rules more than once. 

As Sudhir mentioned, in many cases, people break rules because they're unaware of the rules. They don't even know that it was a rule. The reason why I hesitated to say “break rules” is that it's an odd phrase to use if someone didn't even know that the rule existed. So we've taken a page out of the social psychologist's book, looking at why people obey rules. Outside of platforms, in the criminal justice world, it is a common conclusion that people obey the law, that they follow rules, because they fear the consequences of failing to do so. If that's what you believe, if that's your theoretical construct, then it makes sense that you will try really hard to identify people who break rules and punish them, to try to persuade them not to break those rules.

But social psychologists have shown for decades, actually, that most people follow rules, or follow the law, most of the time, and they do so not because they fear the consequences of failing to do so, but because they think that rules ought to be followed, that an authority has the right to dictate proper behavior to them. So that's your question, Jenny – how do we know who the authority is, and so on. We also know that there are certain things that encourage people to trust authorities and rules. It's kind of an unfortunate term, but it's the social psychology of procedural justice. So we can look to four factors that support internalization of rule following. That way, you're not relying on detection and punishment to ensure that people follow rules. Instead, you encourage internalization of rules.

So then the task is to identify people who are willing to engage in the kinds of acts that Sudhir was discussing, and then to figure out what platforms can do – and we've actually encouraged some of them to do this – to teach and train the folks who are going to lead groups how to become effective moderators, how to intervene, how to say, “Hey, you just cut the line, and that's not the kind of thing that we do in this space,” and so on.

[00:12:57] JS: I'm really curious, Tracey, about your background in criminal justice and the government-to-citizen interaction, and how that relates to tech platforms and their users. Can you tell us a little bit more about that?

[00:13:12] TM: Yeah. So a lot of the work that I've done is on policing, and these ideas that I was just talking about flow from that. Why is it that people obey the law? Why do you follow what a police officer asks you to do? Like posing that hypothetical in today's world is a little difficult. But the research shows that most people actually follow the law, just because they happen to agree with it, right? It's just consistent with their moral values. 

But there are a lot of rules and laws that people follow, even when they don't agree or they think it's silly. So another example that I use sometimes is in New Haven, if you're driving around at two o'clock in the morning, that's where I am, New Haven, Connecticut, there aren't really that many people around. So if I happen to be driving downtown and I come upon a red light, there's not going to be anybody around. There's not going to be a cop. There's not going to be anyone. There's very rarely someone in the car with me, if I'm riding at two o'clock in the morning because I've been working late. My grandmother is not standing on the corner, telling me that I better stop at the red light. So why do I stop at the red light when I think it's stupid and inefficient and I don't want to? 

I do it because I think it's appropriate that we have a law that says I have to do it, even when it doesn't necessarily make sense to me. That's sort of the essence of legitimacy. That idea we imported directly into thinking about online groups, where content moderation was very much focused on identifying and punishing wrongdoers, rather than thinking about ways to internalize rule following: first, by telling people what the rules are, and then by trying to enhance trust through procedural justice. That means, first, voice – giving people an opportunity to tell their side of the story when they've been told that they have broken a rule. Second, treatment with dignity and respect – having an authority listen to your complaints when you think you've been treated unfairly in this process. Third, fair decision making. Fourth, trying to create a perception of trustworthy motives by doing some combination of the first three things.

One of the research projects that I worked on with Sudhir, my colleague Tom Tyler, and a researcher who used to work at Facebook, Matt Katsaros, was to show that when Facebook adopted these kinds of approaches to addressing rule breakers, it actually had an impact that was more beneficial than the typical punish-the-rule-breaker approach the company had taken before.

[00:16:05] JS: So many questions. I'm so curious, what exactly did Facebook do to adopt that? I'm particularly curious about this question of how much harder is it to course correct when a certain norm has taken hold, versus establishing it from the get go? Because it sounds like Facebook was successful in doing this, even once it was quite scaled up. 

[00:16:26] TM: That's a Sudhir question.

[00:16:28] SV: When I think back on the evolution of Facebook over the past five years or so, the pace has been so quick in that area they call integrity, which covers something like 30 different abuse types, from misinformation to bullying and spam. It's constantly changing. So we brought in some of Tracey's work with Tom from the offline space, helping police departments. If you think about it – talk about a challenge – trying to help the nation's police departments restore their relationships with the community. For us, if they could do that, then certainly it had something to offer our company.

I just want to tell a story about an early conversation I had with a product manager that exemplifies this. When we were sharing some of Tracey and Tom's work, I asked this person, who was responsible for rules and enforcement, “How do we currently think about our approach to rules? What goes on in your head when you're building products?” His answer was very simple: punish them hard, and punish them harder, and keep punishing them until they behave. He said it not with any kind of authoritarian impulse but in a very sweet way. Like, that's how people are going to learn to follow the rules. You just have to keep punishing them and increase the punishment, and eventually it works.

So I said, “Okay. Well, that's an interesting model. Have you ever gotten a traffic ticket?” He looked at me like I was crazy and said, “Yeah. What's that have to do with this conversation?” I said, “Well, what happened when you got a traffic ticket?” He said, “Well, they wrote me a ticket. They told me where I could go if I wanted to appeal, and they told me what rule I broke. They told me how this is going to be this many points. And if I get to these six points or whatever it is in California, I'm going to lose my license. So I understood the consequences.” I said, “Okay, it sounds like it was very clear. It was transparent. You had some voice in it.” 

So I said, “This is procedural justice. Why don't we adopt that model? It seems to be working for you – you seem to want to follow the rules – and for a lot of other drivers. What's wrong with that model?” I remember he thought about it for five seconds, and he said, “Nah, that's never going to work.” There was just this sense that bad actors are bad actors. They're over there. There are those people. So a lot of the early work was helping them to understand who these people are. I think they had this view that these are the people who are doing the most heinous things in the world. Yes, you want to keep those people, wherever they are, off the platform.

But a lot of it was early education to do what Tracey was saying – to do basic research and help people understand. A lot of people didn't know where to find the rules. Facebook wasn't even communicating and telling you what rule you broke. You're off the platform for eight days. Okay, why? So just basic kinds of communication were some of the muscle that we had to grow at the company at that time. Then more great things happened. But it was a very basic set of work that we had to do in the beginning.

[00:19:21] JS: You had mentioned that there were some early findings from the work that you've been doing at the Social Media Governance Initiative. Can you tell us more about the work that you're doing and what you're finding?

[00:19:29] TM: Well, let me start by saying that one of the things we're finding is, again, something very basic: the platforms that we're working with are interested in this question of pro-social engagement, as opposed to just identifying wrongdoers, especially in the COVID moment. Many platforms are struggling, thinking about what kind of model they are going to have for interaction, especially when their model of interaction has an outcome that is, as my kids would say, an “in real life” experience, right? So what does it mean to be interacting on Tinder or Airbnb when people aren't traveling or aren't going on the kinds of dates that they were going on?

So one way to motivate that transition of moving from a focus on content moderation/just making sure that people obey the rules to thinking about designing your platform or products for pro-social engagement is to think about a car, right? I use this analogy that content moderation is like imagining that cars are just dangerous. So cars would be the bad words that people can't say or the nudity that they can't post or whatever. 

There was a time in history when many people just wanted to ban the car because it was too dangerous. They didn't want to think about the possibility of the good things, the benefits, that one could get from using a car. The way I like to think about the pro-social governance initiative, and the kinds of activities, strategies, and products that platforms can engage in, is this: instead of banning the car from the road, or even putting the brakes on the car with speed limits and such, think about the environment that the car is operating in.

It's very common for all of us to be driving our cars on four-lane highways that are relatively straight and very well-lit and have rumble strips and all of these things, right? But that's relatively recent. I'm old enough to remember the highways with the dotted lines, where you have to pull into the oncoming lane to pass. You're careening around, and it's very, very dangerous, right? It's the same car traveling in these two different contexts, but one is about designing a platform so that the car can travel safely and do things that are helpful to the people who drive it.

The other one is just sort of letting the car do whatever it wants, and then maybe halting it or identifying it or banning it from the road when it does something poor or harmful, right? So that's, I think, the way that we try to think about it. When you do, you see the possibility of all sorts of things, like equipping group moderators to help people engage in positive ways. Or you can think about messages that pop up – “Are you sure you want to say that?” – or other things that give people guidance. It's a different kind of environment, then, one designed to ensure that the interactions you have with folks are helpful and pro-social.

[00:23:01] SV: Jenny, can I just jump in here to say one other thing?

[00:23:04] JS: Of course. 

[00:23:05] SV: There are three questions for me that keep coming up when I think about the work that we do with platforms. Many of them do one or two of these things well; sometimes it's hard to do all three. The first is: what kind of communication are you having, as Tracey was saying?

So much can be gained by paying attention to the very fine-grained qualities of communication. There's a famous set of experiments that have been done – we've all faced this. We've all gone into a hotel room and heard that message that tells us, “If you don't want your towels washed, leave them hanging,” or some version of that. There's research on the different kinds of messaging. One message might be, “95% of the people who have stayed in this room decided to leave their towels hanging and save water.” Another might be, “95% of the people from your zip code have chosen this.” So there are different ways of helping you feel like you're part of a bigger community with norms.

That communication is critical, and most platforms often just don't get to it. I think of something like Clubhouse – I imagine how busy they are. They might just not have time to be thinking about this. They're in startup mode, or some other mode; even at well-established companies, there are just so many things going on. The second question is: what kind of teams are you building internally? That actually has a lot to do with governance of users, because how that alignment occurs internally can determine the ways in which knowledge gets transferred from outside the organization to inside.

Another way of saying it is: what kind of feedback are you getting from users, and how is that feedback incorporated into your teams and then into your messaging? So teams, communication, and feedback – just doing these very basic things can often be helpful. But in a lot of our experience in the social media world, and the world of digital companies in general, it becomes hard to do all of them at the same time for a number of reasons: cultural ones, lack of resources.

The one thing I discovered when I was in that world for a period of time was this: when I walked in, I said, “Well, everyone here is in a bubble, and I don't get it. Why don't they get out of the bubble?” After three years, I realized, wow, I'm in the same bubble, and I really understand it. I'm busy, I'm overworked, I'm exhausted all the time, I'm burned out, and no one in the world understands all these challenges that we face. So it's a kind of self-reinforcing insularity that can sometimes make it hard to create those open pathways with the community that you're serving – take that signal in, adjust your teams, communicate back out in a transparent way, etc.

[00:25:37] JS: Sorry, I want to make sure I got it. It's: how are you communicating? What type of team are you building? And the third one, just the process for understanding the user?

[00:25:44] SV: I think it's explicitly thinking about the feedback that you want users to give you and that you are seeking. That's not just the TripAdvisor model of rating something. There are a lot of other ways that you can get really useful feedback from your community. Do you really want that? It's sometimes hard for organizations to manage. But I think feedback is a key mechanism because at the heart of this kind of governance approach that Tracey is talking about is communication with users and having them have a voice, feeling dignity, and being able to appeal and tell you things that aren't working.

[00:26:22] TM: I was just going to say – and Sudhir would know this much better than I – I've worked with folks at a few platforms now, and one of the things I've noticed is that there's a communication problem in the other direction. I'm probably not going to get them all right because I don't really live in this world the way Sudhir did, but there's a policy team, and then there's the product team, and then there's the research team. They're all, from my perspective, interestingly siloed, right? The policy folks, the rules folks, were dictating a bunch of things in a fashion that seemed to me a little bit upside down, if one's goal is to create an environment, infrastructure, and architecture for the pro-social world that we imagined.

Everybody needed to sort of – in my view, there needed to be much more level setting about what the overall goal would be. If the relevant teams are all operating under different theories of compliance, let's say, or engagement, and the folks who are actually doing the building, the engineers, don't really have an explicit theory, they're going to operate on an implicit, intuitive theory that is probably not borne out by research. Hence Sudhir's example of the person he was interacting with, who couldn't make the jump from his own experience with traffic tickets to the world he was living in and creating.

[00:28:06] JS: There's so much here to comment on, a couple of thoughts. One, I just thought it was so fascinating, Tracey, when you were talking about how you design the product itself to promote the types of behaviors that you want and then thinking about the interplay between the product, the policy, and the users, and how you can create a self-reinforcing environment. I think it's particularly important to think about this as a product is scaling and scaling quickly. 

Second thing that you said that was so interesting to me was when you're talking about voice and, again, back to this parallel in governance, in terms of citizens and their government. Voice is such a critical first component of that process and just understanding what the desires and preferences are of the constituents that you're serving. Then how that translates into policies, whether that be company level policies or, in the tech platform case, product design as well. Then what mechanisms are in place for accountability and feedback to iterate on those things. I found those parallels really, really interesting. I had never thought about voice in the policy and government realm as analogous to good design and user-driven design and iteration in the product world. I think that's really fascinating. 

Then the third thing that you mentioned that really struck me was the silos, and how some of the major tech platforms were just built without any understanding or appreciation of the enormous social externalities that could occur at scale. Given the interdisciplinary nature of the problem, that expertise was just entirely absent from the product development process. The lack of diversity within the teams, and the lack of awareness of the negative flip side of the coin because of this blind optimism that it would all be so good for the world, also strike me as part of explaining how we got here.

I want to zoom out because we have been having this conversation about social media and democracy and society writ large. One of the things you see on the website is that the Social Media Governance Initiative is addressing two critical questions. What might be done to make social media revitalize community? I think we've talked about that quite a bit today. And the second question: can social media platforms cultivate values that are essential to successful civic discourse and democratic governance, and if so, how?

There's a quote from your website that says, “Many tech corporations have been criticized for their actions or inaction regarding the governance of their platforms. As a result, negative narration has taken shape about the benefits of social media for the wider society. Our goal is to enable various actors to create an online environment that is good for wider society.” To some extent, I feel like the conversation has been very optimistic about pro-social governance rectifying the problem or making significant headway in rectifying the problem. 

So I want to ask some questions about how you think about the fundamental incentive misalignment between tech platforms and these objectives of revitalized community and successful civic discourse and democratic governance. Think about the ad-based business model, for example. I've been thinking a lot about how you have a tremendous amount of content at a scale where you have to filter algorithmically, and the algorithm is going to take engagement as a proxy for what you want to see. We tend to engage with things that are sensationalized – Tristan Harris talks about the race to the bottom of the brainstem.

The algorithm then creates a feedback loop toward polarization, fake news, etc., right? A lot of these things are outside the scope of the companies. So I'm curious how you're thinking about that wider lens of how we can address the problem as a society.

[00:31:52] SV: I'll let Tracey talk about some of the aspects of that work that I think are critical, which maybe even come from learnings from working with police departments and her work in Baltimore right now, because I feel like there are a lot of ways in which we are learning from attempts to make institutions work better in society – the courts, the police, community-based organizations. There's a real need to import a lot of that learning. I think there's a lot of that world that we can bring into this discussion about tech and its relationship to society.

Because part of the misalignment, for me, is that it feels very dyadic. It feels like we are creating a fictionalized world in which there is a series of mostly white, mostly male, mostly entitled people, who live in a particular worldview and have a particular set of things that they're trying to create. Then there are the others, the community that they serve. I don't know if that's a good way of understanding how information and these technologies work in the context of people's daily lives. So that's part of what I think needs to happen – the misalignment feels like it's very binary.

The other part for me is that it just reinforces a deep kind of hubris, which I often wonder about, and that is that the same people that are telling us that they broke the damn thing are now going to fix the damn thing. Then they're going to break it again and fix it again and so on. It’s a very internal discourse. That's also part of the problem. But again, when you just think about the ways in which these platforms and these new modes of information and connectivity function in the course of people's lives, it's much more complicated and nuanced and much more textured. 

But I get nervous in this industry, of which I'm a part, that we have an overinflated sense of what it is that we're doing, what it is that we can do, and what our relationship is to these movements on the ground. A lot of what's happening is political and economic. We're not really at the heart of that. We're playing a role in it, but there are fundamental things happening that I don't think are really rooted in the tech sector but are all about the government, the economy, the nature of politics, etc., and it's connected.

That's the thing that jumps out for me is just how much of an internalized discourse it is. Tracey, I don't know if you want to add anything. 

[00:34:35] TM: Yeah. I mean, that's a really, really big question, and we could have had an entire hour on that. I guess I want to say two things, maybe three. Sudhir mentioned the hubris in the internal conversation – the kind of idea that, in many cases, comes from the folks running these platforms, who are very concentrated demographically. They also underestimate their power to do things and to do good, right?

So one of the things that we think about in the Social Media Governance Initiative, especially given the fact that we tend to work with product teams more than the policy folks, is to encourage the folks working on these products to understand the ways in which they can just help people and their communities achieve their goals and projects, right? I mean, that's why I think some of the work is focused on the “help teams.” There are all sorts of ways that these platforms can help people do things in real life and, in so doing, create different ways for people to not only interact with each other, but interact with platforms in ways that create trust, enhance trust, and therefore, enhance legitimacy. 

Those are some of the kinds of conversations that we've been having with a platform called Nextdoor. It's an interesting platform. It's online but geographically based, so the entire premise is about the ways in which people are interacting with their neighbors while also having this other relationship in a separate space. It has the good touches of a purely online space like Facebook or Twitter, but it also has a lot of the disadvantages, a lot of the problems in the ways that people talk to each other. And because it's geographically based, there's going to be this focus on the kind of problem that you mentioned, Jenny – what's salient for people. What is the dog whistle?

There's a big issue in how people warn their neighbors about potential criminal offenders in the neighborhood and how you address that problem – where people are just getting on the platform and saying, “Hey, there's a black guy in the neighborhood. Be careful,” right? So the platform instituted processes to keep people from engaging in biased behavior on the platform, which – we don't know for sure, we want to do some research on that – could very well have the beneficial consequence of impacting how people act on biases off the platform, right?

That's the kind of approach that we are really interested in, just the generality of the ways in which your interactions online definitely have an impact on your interactions in other spaces.

[00:37:48] JS: Yeah. I mean, I think that this is such a critical piece of the puzzle that I struggle so much with. My master's is in economics. I’m very interested in industrial policy and regulation and where markets fail. What you're seeing in all of these platforms are these externalities, and it theoretically is the case that the government regulates to address those externalities. You're looking at platforms that are global in nature and asking them to play a role that the government historically has played in regulating the misalignment between the natural market outcomes and the social outcomes. 

I'm curious – this is such a tough one. I mean, specifically, if you look at Facebook: 2.7 billion users, and a company developing policy for a platform that clearly has large social externalities, trying to figure out how to navigate with misaligned incentives. It also raises some fundamental questions about who Facebook is accountable to. This will be the last thing – let's look at Facebook in particular, or Twitter, or the other big ones, but especially Facebook. Who is Facebook accountable to?

There's a recent incident I found so interesting, where Twitter labeled Trump's tweets and Facebook didn't. The employees stepped up; Zuck didn't back down. Then CZI scientists, and then advertisers, weighed in, and Zuck backed down and said, “Okay. Well, let me reconsider the policies now.” On one hand, you can think about the role that users, employees, advertisers, investors, and other constituents can play in holding a company like Facebook accountable to our social objectives. I think there are some very interesting mechanisms in play.

At the same time, you have this insane reality where one person controls a company that has such profound effects on humanity. That seems like something that is just categorically not okay. I don't know if you have any thoughts on this practice of dual-class shares, where companies go public and then the founders retain control. What we're left with at Facebook just seems crazy for society, and it's much bigger than the pro-social governance question.

But if you're looking through the lens of “How can tech platforms be beneficial to society and government?”, I don't understand how you can address that without having this wider lens around who they are accountable to, what the mechanisms for accountability are, and how you think about the dual-class share outcome.

[00:40:02] SV: I am far less conversant with the second part of that. 

[00:40:06] TM: Yeah, me too. 

[00:40:07] SV: I feel like, Tracey, if you want to give it a shot, you should. But –

[00:40:10] JS: The second part, like the dual class share part?

[00:40:12] SV: Well, just, I mean, there are a lot of basic, fundamental questions about how corporate governance is structured and how we think about a corporation. Some of that is moral, ethical, philosophical, etc. That has nothing to do with Facebook, and that has everything to do with Facebook, right? It has as much to do with Exxon as with Facebook. I'm only saying I don't have as much expertise in thinking through corporate governance issues in that way and whether there's some way in which – I think it's a great question.

But the first part of it struck me as critical when you were asking it. It goes back to what I was feeling earlier, or tend to feel, when I think about the discourse around these companies: there are plenty of historical parallels in technologies that have had profound effects on the organization of societies, from the printing press to the telephone.

I still feel like we're asking too much. Sometimes, the public discourse is overly simplistic about the role of these platforms in the lives of people.

[00:41:24] JS: Yeah. I agree with you 100%. Yeah. 

[00:41:26] SV: I just don't – as a user of and a researcher in these companies for five years, and as someone who's watched from the outside, I think we're inflating the impact and the role that they play in the daily lives of most people in terms of how they consume information and make decisions. The more that we do that, and the more that our expectations of Mark and Jack and folks like that get outsized, the more we will, I think, isolate ourselves and fail to take into account really important sea-change dynamics that are occurring in the daily lives of most people, dynamics that are generating all sorts of inequalities and all sorts of problems.

It feels like we're looking at the bright, shiny object and ignoring a lot of what's happening internally, at the level of communities and society in general.

[00:42:16] JS: I would absolutely agree with you that there's an oversimplification of the problem. A lot of it comes from the media incentives that sit beneath social media, from how the human brain fundamentally works, which I mentioned earlier, and from the challenge of intermediating content at that kind of scale, which would exist even without the ad-based model. So I do believe there's an oversimplification, but the platforms are kind of the gasoline on the fire. Tracey?

[00:42:41] TM: No, that's right. I don't have much to say about the corporate governance structure. But with respect to the ad piece, my colleague, Jack Balkin, has suggested legal mechanisms for separating the entities that control the ads from the platforms themselves. It's a version of using antitrust law. I take it that his proposal along those lines leaves the corporate governance structure intact but separates corporate control of the main revenue generator in certain ways. I suppose you could try it that way too. Anyway –

[00:43:21] JS: No. I thought that the oversight board was a very interesting, creative mechanism to move certain decisions into a body outside of the company. Sudhir, when you and Tracey were talking about the norms around obeying the law and why you obey the rules, I was thinking about how that differs across cultures around the world. How do you see behaviors vary – same platform, same guidelines, same product – based on cultural conditions? Even in my household, my husband and I have very different orientations around playing by the rules. I don't know if you have any thoughts about that.

[00:43:55] SV: Yeah. It's a great point, and it's something that we can easily forget – the power of that kind of cross-cultural difference and the need to understand how different cultures think about some of these issues. These challenges are obviously not specific to Facebook; think of other global organizations like the UN. The UN struggles mightily around the world with something like human rights. That concept is a western concept. It has a whole set of assumptions built in about how people think about themselves, about what kinds of freedoms we believe people can have, should have, and think that they do have.

A lot of the UN's struggles when it goes into another part of the world involve people who might have kin-based, religious, or ethnic ways of thinking about their identity that just don't accord well with abstracted notions of a citizen who has these particular kinds of rights. At Facebook, I remember how often this would come up as a challenge – putting aside creating one global policy for the whole world, just in thinking about the ways in which people experience the platform.

I think of examples like India, where the user research team was discovering just how challenging it was, and how much of a form of harassment it was, when photos of women were shared online – and that we were not taking the social construction and experience of gender in that society into account when we were publicizing products or asking people to be more engaged. That's just India. We can go to Europe, we can go all over the world, and find cases in which the way the platforms are designed tramples upon a locally accepted practice and puts people in real danger or jeopardy, or increases that risk, because they aren't sensitive to some of those culturally specific norms and ways of being.

[00:45:56] JS: Thank you so much. It's just been such a pleasure to have you today.

[00:45:59] SV: Absolutely. This is great, Jenny. 

[END OF INTERVIEW]

[00:46:01] JS: Thank you so much for listening. And thanks to Scott Hansen, also known as Tycho, for our musical signature. In addition to this podcast, you can find resources for each episode on our website, www.becomingdenizen.com, including transcripts and background materials for our most essential topics like universal basic income, decentralized social media, and long-term capitalism. We also have posts summarizing our research, which make it easy for listeners to very quickly get an overview of these particularly important and foundational topics.

On our website, you can also sign up for our newsletter, where we bring our weekly podcast to your inbox, alongside other relevant Denizen information. Subscribers are invited to join our podcast recordings and engage with the Denizen community in our online home, the Den. We’re partnering with some incredible organizations at the forefront of the change that we talk about. We share announcements from them in our newsletter as well. 

Finally, this podcast is made possible by support from the Denizen community and listeners like you. Denizen's content will always be free. Offering Denizen as a gift models a relational rather than a transactional economy, enabling Denizen to embody the change that we talk about on this podcast. It's through the reciprocity of listeners like you that we're able to continue producing this content. You can support us or learn more about our gift model on our website. Again, that's www.becomingdenizen.com. Thanks again for listening, and I hope you'll join us next time.

[END]
