Forrest Landry
Philosopher, Inventor, Social Architect

What is governance? What are the attributes of the governance processes necessary to address the challenges humanity faces today? How might we acknowledge the limitations of human cognition while enabling collective action at scale?

Show Notes

In this episode we address a foundational topic: governance. Governance refers to how we make decisions and act in groups, whether that be within a nation-state, a corporation, a community group, or a household. Today's challenges require collective action at a global scale. What factors should we keep in mind as we architect governance processes in various contexts?

Our guest for this episode is Forrest Landry, philosopher, writer, engineer, inventor, and entrepreneur. Forrest worked closely with Daniel Schmachtenberger and Jordan Hall in the Game B movement, where he focused on questions of governance. This is a core topic that informs many upcoming episodes, including social media governance, delegative democracy, corporate governance, and DAOs.

In this conversation, Jenny and Forrest discuss:

  • What is governance [4:39]
  • Tribal size and Dunbar's number [7:50]
  • Challenges with governance at scale [12:42]
  • Group vs. individual intelligence [16:36]
  • Embodied vs. abstract knowledge [18:05]
  • Bias towards action [25:52]
  • The epistemic process [31:41]
  • Knowledge as process [33:24]
  • Information, scale, and technology [36:45]
  • Limitations of information [40:13]
  • Multi-polar traps [42:39]
  • Dynamics of capitalism [48:35]
  • Arrow's theorem [57:58]
  • Ephemeral group process [59:28]
  • Phase parallax and the importance of diversity [1:03:48]
  • Right size for group process [1:09:02]
  • Governance and identity [1:17:22]

"Forrest Landry (FL): The problem is actually deeper and more profound than most people appreciate. Now, the silver bullet, quick fix is just not going to be enough to really think about it unless we actually understand our inner nature and the nature of community process itself at a fundamental enough level, so we can actually address things like necessary, sufficient and complete solutions."

[00:00:25] Jenny Stefanotti (JS): That's Forrest Landry, philosopher, writer, researcher, scientist, engineer and entrepreneur. And this is the Becoming Denizen podcast. I'm your host and curator, Jenny Stefanotti. 

In this episode, we're talking about an incredibly important and foundational topic: governance. Our guest, Forrest Landry, has been a close colleague of Daniel Schmachtenberger and Jordan Hall. And in their work together, Forrest focused on questions of governance. 

When we talk about governance, it's a matter of how people make decisions and act in groups, whether that be governing nation states, companies, community organizations or households. We have upcoming episodes on topics ranging from corporate governance, to social media governance, to delegative democracy, to DAO's, all of which relate back to the foundational ideas that are covered in this episode. 

There's a lot of food for thought in this one. We'd love to hear your reflections. You can join the Denizen community in our Discourse and sign up for our newsletter at With that, I hope you enjoy this episode. 


[00:01:20] JS: We talk about huge issues like climate change. But arguably the meta crisis is our underlying challenge: to collectively make sense of the world, decide what the right action is, and then take collective action. 

We've talked about, in the context of social media, the ways in which our ability to make sense of the world has significantly degraded because of polarization and the propagation of misinformation and even misleading and erroneous narratives. 

But as we'll see in the conversation today, even long before social media, there were fundamental challenges in the governance processes and structures we have in place, i.e., democracy and capitalism, to really achieve the goals that we want. All of our problems are arguably either directly caused by issues of human coordinated action or they need effective human coordinated action to address them. 

And to date, our problem-solving processes are actually mostly making problems worse. It's fascinating to think about how much of this is associated with this question of what is the right scope to even understand the problem to begin with. Are we asking the right questions? And this really ties to the conversation we had about Buckminster Fuller and how much he lamented specialization and that we just couldn't see things comprehensively and fully. 

We talked about it needing to be both comprehensive and anticipatory. And so, when we talk about design science, a lot of it is about starting with the whole system and then narrowing it down. And the reason you start with the whole system and narrow it down is to make sure you didn't miss something. 

With anticipatory, we’re looking at the unintended consequences. A lot of the challenges are that we're solving things in too narrow of a scope. And so, we don't realize the second and third order effects associated with those decisions. 

And then the question is, "Okay. Well, let's say we can scope the problem and know that we're asking the right questions. Then, how do we go about making sufficient sense of it to know that we can then start to think about what the right solutions are? And what is the design process with respect to both understanding the problem and then discerning the solutions?" 

And when we talk about what the solutions might be, that also has to do with whatever design constraints might exist depending on the things that we care about. And what's fascinating about the process that Forrest has put in place is it really considers how we can extract through the process what those collective values are to inform the design process and decision making. 

Now, also fascinating is just how so much of our current thinking results in a sense of there needing to be trade-offs. And if we can widen the lens enough, we can discern that there are actually "win-win solutions" where we don't have to make trade-offs and we don't have this fundamental competitive dynamic that persists in the current institutional environment. 

And then once we can define the problem and figure out what the solutions are, then there's this question of collective action. How we do that, and how we define success. How we might instantiate a process that learns and iterates. And what we're facing today is that the challenges are far too complex for small groups. And we don't have good processes in place for sensemaking, collective intelligence, and collective action at the scale that's required to address the challenges that we face. That's why we say this is the most fundamental challenge that we face. And why it's so important to talk about it. And why, as soon as I met Forrest, I wanted to bring that into the inquiry so that we all have this foundation moving forward as we then start to talk about, "Okay. Well, how do we reform capitalism? And what does that look like?" 

And so, what we'll explore today is why the current systems are inadequate, and Forrest's thinking, really from first principles, about how we might use the technology that's available today to instantiate a very novel way of doing governance. And it's really fascinating because it takes into account, it has to take into account, fundamental things about humans and the limitations of our cognition and our social needs and interactions. And Forrest just has this extraordinary breadth and depth of knowledge that he brings to addressing this most fundamental question. 

I'm so thrilled to have you here and to have this conversation, Forrest. And so, first, I just want to thank you for being here. And I want to start with the question, what is governance? How do you define governance? 

[00:05:40] FL: That's a great place to start. In the broadest sense, when we're talking about governance, it comes from the word governor, which is actually - it's not just a person. I mean, it could be thought of as that as well. But basically, a device that was on a steam engine that would prevent the steam engine from going too fast and flying apart. Because it would just basically go so fast that it couldn't keep together. 

This device would respond to the speed of the engine and then it would basically, as it sped up beyond a certain point, it would apply a kind of braking pressure and the steam engine would slow down. 

When we think about governance in the broadest social sense, we're looking at the process of lots of people making choices day-to-day. And in occasions where those choices in effect have large-scale implications, then there needs to be some way in which the will of the people can influence choices that are being made. Or that there's some way to move from individual choice to collective choice. 

To the degree that there's a kind of process of large groups of people making choices, then in effect, governance would be the way in which that feedback is created. It's like the connectivity between, say, the brake pedal and the actual brakes and the slowing-down action of the car, or the gas pedal and the delivery of more fuel and speeding up. Then anywhere we're looking at large-scale collective action, either in the sense of perceiving what's going on or in the sense of making changes in the world, setting policies and things like that, governance is the notion. It's the label that we apply to the totality of that. 

And so, by that, we're not referring to just democracy, or to a socialist system or anything like that. Really, those are models of how we can do group choices. But those aren't the only examples. There can be other ways of making choices as groups.
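Forrest's steam-engine governor is a classic negative-feedback loop. As a rough illustration (everything here, the function names, gain, and target numbers, is our own invention, not from the episode), a minimal proportional governor in Python:

```python
def governor(speed, target=100.0, gain=0.5):
    """Braking pressure proportional to how far the engine runs over target."""
    return max(0.0, gain * (speed - target))  # never brake below zero

# Simulate: steam pushes the engine faster each step; the governor damps it.
speed = 0.0
for _ in range(100):
    speed += 10.0             # acceleration from the steam
    speed -= governor(speed)  # feedback: brake only when over target
```

Without the `governor(speed)` call, the speed grows without bound, the engine "flying apart." With it, the speed settles at a bounded value slightly above the target (a pure proportional controller leaves a steady-state offset under constant push), which is exactly the self-limiting behavior the metaphor is pointing at.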

[00:07:31] JS: And you said something really important that we're going to come to again and again today, which is this question of scale and where governance currently breaks down at scale. But governance works for small groups and groups up to a certain number. I want to talk about that first.

[00:07:50] FL: There's this phenomenon of sort of a tribal size. It was kind of like a group size that is generally seen when we're looking at human behavior. And it's basically defined by how many people an individual can keep track of. 

For instance, if I assume that I have a certain amount of time in a day, there's a limit to how many people I can keep track of in the sense of having conversations, learning what their experience has been. Sort of knowing them as people and having a way of sort of keeping track of the relationships. 

In the literature, this is called Dunbar's number. It's essentially the largest group size at which, if you had effectively perfect social coherency, everybody could know everybody else. Not necessarily super well, but well enough to feel that there was some friendship or some capacity to rely on them if you needed help or something like that.

And so, just given that our human brains are finite and that we have only so many hours in a day, there are some natural limits as to how many relationships we can maintain in an active engaged sense. In this specific sense, there's a kind of evolutionary capacity that has been created in us over the long history of humanity going back well before recorded history. Think 50,000 to 100,000 years. 

And in that sort of process of people connecting to one another in tribal settings, we developed a sort of skill and facility for how to be social in that scale of phenomena between say a family-sized group of people and a tribal-sized group of people. 

In that sense, we have kind of this capacity built in to be able to do sense making in groups of people up to about 150 or so. And then once we get to a group larger than that, we can't really rely on the sort of built-in evolutionary tool set that has been given to us. We now need to start coming up with more formal techniques or other ways of essentially creating trust between people and solving free-rider problems and actually coordinating choices which are responsive to changes in the environment and things like that. 
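The "finite brains, finite hours" point has a simple combinatorial face: the number of pairwise relationships in a group grows quadratically, so tracking everyone gets expensive fast. A back-of-the-envelope sketch (the variable names are our own illustration, not from the episode):

```python
def pairwise_relationships(n):
    """Distinct one-to-one relationships among n people: n choose 2."""
    return n * (n - 1) // 2

# Around Dunbar's number there are ~11,000 pairs across the whole group;
# 10x the people means roughly 100x the pairs to keep coherent.
small_tribe = pairwise_relationships(150)    # 11,175
city_block = pairwise_relationships(1500)    # 1,124,250
```

This is one way to see why the "built-in evolutionary tool set" stops being enough past roughly 150 people: the relational bookkeeping scales as n², while each person's attention stays constant.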

[00:10:06] JS: Before we get into big groups, I want to talk a little bit more about this, because this is such a fascinating critical concept that keeps coming up in our conversations, which is this notion of the Dunbar number, or the limit of 150. And there are a couple of things that I just want to bring here and also things that I've thought about historically in terms of why this number is what it is. 

You mentioned just the sort of ability to maintain relationships. My master's degree is in economics. And I've thought a lot about informal institutions, which are essentially the governance structures that hold in a sustainable way, and deliver the outcomes that you're interested in, at a number like 150. And that mechanism works, to your point, because everyone can know everyone and you can hold those relationships. 

And so, one of the things we talk about is game theory, which I know many of you are probably not familiar with – but the notion of a repeated game in game theory is sort of: what choice do I make given various outcomes for my possible choices? A single game is just me versus you. I can make decision A or B. And these are the payoffs. 

But a repeated game is one where how I choose in one instance affects what happens in subsequent instances. And so, the repeated game dynamics that happen in these smaller groups are part of what allows cooperation. Because if I screw you over, there's a social cost the next time we interact. And so, that's part of the reason why these dynamics work in these smaller groups. 
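The repeated-game point can be made concrete with the iterated prisoner's dilemma. The sketch below uses the standard textbook payoffs and strategy names (none of the specifics come from the episode); it shows how repetition imposes a social cost on defection:

```python
# Payoffs for the classic prisoner's dilemma, from the row player's view.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # mutual defection
}

def total_payoff(strategy_a, strategy_b, rounds=10):
    """Play repeated rounds; each strategy sees the opponent's previous move."""
    score_a = score_b = 0
    last_a = last_b = "C"  # assume both open cooperatively
    for _ in range(rounds):
        a, b = strategy_a(last_b), strategy_b(last_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        last_a, last_b = a, b
    return score_a, score_b

tit_for_tat = lambda opp_last: opp_last   # mirror the opponent's last move
always_defect = lambda opp_last: "D"      # "screw you over" every round
```

In a single round, defecting against a cooperator pays best (5 vs. 3). Over ten rounds, two tit-for-tat players earn 30 each, while a defector against tit-for-tat gets one round of temptation and then mutual punishment, ending at 14 vs. 9 – the social cost of defecting in a group where you will meet the same people again.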

And the second point that I thought was so interesting was that in a town hall, you can reasonably have that number of people express their opinions and wants before people kind of run out of steam. And so, from a governance perspective, beyond that number people just might say, "My opinions and values are not considered. Screw this. I'm going to go for my smaller group." 

What has historically happened is that once it hits 150, tribes would just break into smaller groups. And then one more point and then we'll move on, which is that I've also thought a lot about the transition from informal to formal institutions. Once you've moved past these numbers where these informal institutional dynamics deliver the outcomes you care about, we put formal institutions in place. And this is where we talk about things like laws and ways that you can enforce the social behaviors that you care about in larger groups. You start to need those formal institutional dynamics. It's such an important concept. I wanted to make sure we spent a little bit more time on it. Did you want to add anything to that, Forrest, before we move on? 

[00:12:40] FL: No. I'm good with that. You can continue.

[00:12:42] JS: So, then I want to just talk about why this breaks down at larger scales? I mean, the two primary processes that we have in place today for doing this at scale are democracy and markets. And can we talk about why they fail?

[00:12:57] FL: There's several reasons that become important pretty quickly. I mean, one of them is of course trust and how do we deal with the capacity of people to essentially rely on one another when they don't know one another? For instance, you're now talking about relationships, which are needing to have some sort of trust for cooperation to occur. What are the means and methods by which that relationship is essentially maintained? 

Another thing that comes up as to why this becomes essentially a very difficult problem is that when you're scaling up in numbers of people, you're also scaling up in terms of time. In effect, it's not just that we're looking at a larger volume of space that we are concerned with or a larger number of people that we're concerned with. We're also concerned with a larger possibility space. There's a lot more that can happen. Things that occur relatively infrequently, or that are relatively unusual, are more likely to happen if you consider a longer period of time, just because there's more time involved. 

When we're dealing with situations that basically emerge over really, really long periods of time – don't think years, think decades or centuries – or we're dealing with things that are complex enough that a single person, even spending all of their lifetime working on them, couldn't really encompass them; it would require, say, two or three lifetimes. 

And so, when thinking about things that are occurring over long periods of time, chronic issues with high complexity. Or even when thinking about things that are emerging really, really fast. For instance, when looking at technological change, for example, it's occasionally the case that the influence of technology on the world is just happening much, much quicker than governments can even understand. I mean, it's like the technology is introduced. And then later on we find out what that means. And then we decide, "Okay. Well, how do we respond to that?" 

But of course, for some of these kinds of things, it may be too late. Take the pandemic, for example. Let's say we had a virus that was spreading even more quickly and was even harder to track. This particular one has kind of been a warning shot. I mean, on the scale of existential risk, this isn't even a blip. 

So, there's far more serious things that could have happened and fortunately didn't. And so, in a sense, when we're thinking about what are the capacities of a group process to make sense of the world, does that capacity extend into the spaces of fast-moving or really slow-moving? Does it extend into the spaces of being able to consider highly unlikely but also very, very impactful events? So, things that don't occur very often but are devastating when they do. Think volcanoes, or really large hurricanes, or things like earthquakes. And so, in effect, this thing is literally outside of the span of the kinds of things that we're normally called to think about. 

And so, in effect, when we're trying to plan for situations that occur and emerge over 50 to 100-year time scales, or for some of these things, thousand-year time scales, it's not something that's going to be able to show up in terms of quarterly profits or a four-year election cycle. 

In fact, some of these things won't show up in a single person's lifetime. And so, it's very, very hard to get people today to commit to the welfare of somebody that they won't even know or some future civilization that is barely imaginable, let alone something that we can relate to the same way we could relate to our friends.

[00:16:36] JS: On that question, how do you think about just fundamental limitations due to human psychology or human lifespan when you think about designing governance processes that account for long-term?

[00:16:49] FL: Well, the main thing is to effectively create groups that are more intelligent than even the smartest person in the group. In other words, to have a kind of embodied knowledge and a sort of memory or retention that allows a community of people to effectively hold in the matrix of the community greater level of capacity than would be ascribable to any particular individual or even small subgroup. 

When we're trying to basically deal with things of this nature – in other words, issues that have really broad scales in time, or possibility, or space – the aim is to effectively create an increased visibility, an increased range of imagination, an increased degree of memory or creative capacity: to basically extend the capacity of the community in these directions such that the community itself, as a community, has the capacity to be responsive. 

We think about it as an intelligence at the level of the group that is more than just the sum of the parts. And so, in effect, it's a little bit like the inverse question of say artificial intelligence. We're actually talking about human intelligence or embodied intelligence. But not human in the individual sense but human in the collective sense.

[00:18:05] JS: Can you elaborate on what embodied intelligence? 

[00:18:09] FL: Well, there's – First of all, we want to distinguish between what we mean by embodied versus virtualized. When we say embodied, I'm basically saying things like proximate to the here and now. Has a physical nature that can be understood in terms of atoms. Would be something that people could point at that you can see, whereas virtualized is going – or abstract is going to be things that are more transplantable in a way that doesn't depend upon moving atoms around. Patterns, concepts, thoughts, language, stuff like that. 

Culture, for example, has kind of this mixed nature. There are elements of it that show up in the communications that people have day-to-day – the particular language or the narratives that they use, the symbolisms – whereas in other respects you can talk about the notion of language as being abstract, or particular ideas as not necessarily being connected to a specific group of people or a particular time and place. 

In this sense, having a lot of knowledge written down in books, for example, isn't necessarily going to be the kind of knowledge that the community is able to use if, for example, they needed to respond to something relatively quickly. There just isn't going to be enough time for them to locate the book, find the section in the book that's going to tell them how to respond to this particular situation, and then disseminate that information. Say there's a council in a community setting, and the people in the community need to know how to think about the particular issues, or the alternatives that are possible, or what things have been tried before and shown to work, and what other things have been tried and, although it might not be expected, don't actually work. 

In effect, there's this sort of reciprocity that needs to happen between abstract knowledge as it occurs in books and embodied knowledge as actually held within the living culture of the time. In effect, there's this real need to think about the exchange of information between say one generation and the next or between a past generation and a distant future one. 

These sorts of things have, as far as human history is concerned, as far as civilization design has been concerned up to this point, mostly been kind of emergent. You know, they happened by accident. They haven't actually been something that has been really studied or looked at for its own sake, to really think about how do we, for example, transfer an embodied knowledge that exists in a community today to some other community that lives on the other side of the world, or that lives at some future point, say, a hundred years from now. 

In effect, we're looking at this transform of moving information from an embodied way into an abstract pattern. And then from that abstract pattern, back into an embodiment. And these are actually very difficult challenges. I mean, what I'm describing, although on a conceptual level, seems very straightforward. The actual practice of this, as you may guess, is not easy. I mean, how do you come up with good educational models, for example, and really bring people into an awareness as to why they would want to know? 

There's a sense here of who are you? What's your story? And why should I care? You want to make it so that it's relatable, and that it's connectable and that there's actual relevance to the future community of the information that is available to them. 

[00:21:31] JS: A couple things I just want to underscore that I think are really important. Because part of what I'm hearing you say is what are the attributes that must exist for governance that works the way we want it to work for humanity at the scales that we want it to work? And one of the things that you had mentioned, the collective intelligence is greater than the individual intelligence. 

[00:21:54] FL: This is a requirement. It's a threshold – 

[00:21:56] JS: Yeah, exactly. This long-term orientation. And I think there are more of them that'll come out, right? And then what are the things that it needs to be able to do? And the one that you just mentioned is a really critical one, which is this question of how does it store knowledge and transmit knowledge? And how does it access that knowledge in the right ways at the right times? 

[00:22:16] FL: Yes. In effect, process. The collaboration that has led to some of this thinking. At first, it was looking at the nature of the problems and trying to basically characterize. The nature of the problems that are in the world. We have things like climate change. We have pollution. We have places where food is running out because the fisheries are not being harvested in a sustainable way. Top soil erosion. Clean water availability. Clean air. All these kinds of things, right? 

And so, in effect, as we start to really look at the nature of the challenges that we're faced with as communities and as a species, there's a need for us at a certain point to basically step back and to really ensure that the characterizations that we're working with, the questions that we're asking, are commensurate to the scope of the problems. 

On one hand, we're interested in the characterizations of the problem not just because we want to know what we're trying to solve, but also because we want to know that when we propose a solution, we have some way of checking our work. In effect, if we're going to, say, propose a solution for healing the Amazon rainforest, or restoring the ozone hole, or making it so that some economy has more vitality to it – these are tricky things. And if we're going to invest resources, and in a lot of these cases we're talking substantial resources, we want to know before we do so that it's going to work. That it's going to actually have the outcomes that we are hoping for. That the dreams of the people that are investing are fulfilled and the community is essentially benefited. Are you doing things such that the people that you're ostensibly helping are going to know, and feel, and trust that they are actually helped?

[00:24:00] JS: And this is the essential starting point, which is just defining and scoping the inquiry to begin with. Can you elaborate on why we're not good at it now?

[00:24:11] FL: Well, to some extent there's certain biological processes that are built in. In other words, it isn't that we're not good at it because there's some failing of individual people, or there's some failing of culture, or of a particular leadership or anything like that. But can manifest in those things. 

But ultimately, we've never needed to be good at this before, right? Look at, say, the long term of ecological history or civilization. Again, not looking at just recorded history but going way back beyond that. And basically saying, "Okay, we were living in tribes, and we had the skills of how to live in tribes and relate to one another in a tribal setting. And now we're living in cities and we have this world-spanning culture with these fantastically powerful technologies. Things that are well beyond the imagination of even people 100 years ago." The capacities that we are effectively needing to be responsive to – the nature of how to live in a city – this is a new skill. I mean, it's a skill that for the most part no species on earth has had to develop in the same way that we're being called to do it. 

To some extent, there are these biological tendencies that sort of encourage us to relate to the world in a tribalized way. But at the same time, the world we're in isn't that way at all. It's actually quite different. And so, the sort of biases to action and propensities to certain heuristic models and so on were very well-adapted to the circumstances we were in even a few hundred years ago and are actually quite poorly adapted to the circumstances most of us find ourselves in today.

[00:25:52] JS: This bias to action is so important. I want to make sure that we punctuate this one because I found this so fascinating. The fact that we want to move to solutions prematurely before we've adequately even made sure we're asking the right questions, let alone understand the questions well enough to come up with the right solutions. But there's a biological basis for this. Can you elaborate on that? 

[00:26:13] FL: Yes. For example, say you're walking in the woods and at some early time. We're thinking like caveman sort of time period. And you hear a rustling somewhat behind you and over in a bush over there. And at that moment, you really don't want to not respond because it could be a tiger. And if it's a tiger and it's about to leap on you, your best course of action is to immediately take defensive posture and to basically prepare yourself to either run, or to freeze, or to yell, or something. There's very little chance that responding aggressively in that situation is a bad idea. Because even if it turns out not to be a tiger, maybe it was something else, or a branch just happened to fall at that moment, you could be wrong 99 times out of 100 and live to see the next day because there's no consequences to the action if you mistakenly overact. But the one time that it is a tiger, your action saves your life. In that sense, there's a sort of built-in run first and ask questions later. 

[00:27:18] JS: Can I ask you a question about that? Because I'm so curious as to what's happening in the brain in various circumstances. Because that's a specific example where you go into fight or flight and you're dominant in a specific part of your brain. Whereas when we're collectively trying to decide what to do, we're more up in our neocortex. But there's still that bias towards action. And so, that's one question. Then the second question is I find it so interesting that our bias towards action is an impediment to choice making in the way that we want to be making choices. Because as a student of design thinking, one of the key mindsets of design thinking is bias towards action. I mean, perhaps you can reconcile this because that action is taken with a lens towards experimentation and learning and not certainty that you're doing the right thing. But I thought about the world as being overly-oriented towards trying to make sure they were doing the right things and not incorporating a process of doing as a part of sense making. 

[00:28:16] FL: Well, this is – I'm basically outlining just the first most basic example. But there's like five or six layers of almost completely unrelated as if they were different phenomena that all have the same tendency. For instance, again, from a tribal bias, people that act decisively are considered to be leaders. 

And so, a person that shows leadership capacities in the, "I know what I'm doing." They have this sort of masculine firmness and they create a sense of safety and of comfort in other people because, "Well, I don't know what's going on. But they do. So, I'm going to basically follow along with them." 

And that has tremendous advantages in a situation where you're trying to coordinate people to basically respond to a situation. Or the person that is acting decisively is looking for prestige or status with respect to the social group. As you mentioned also, as almost a completely unrelated phenomenon, that in order for us to engage with the world, a lot of times we need to experiment. We need to play. We need to try different things out. Have them be somewhat non-consequential. 

I mean, obviously, you want to have the room to explore without it being you only get one chance. Then that's it. It's in a lot of cases you want people to be active and engaged. Because without first-hand engagement, they don't develop the skills necessary. I don't develop the skills necessary unless I actually encounter the problem. 

For example, if I'm trying to solve something but I don't really know the language of the problem because I haven't encountered it firsthand, I'm not necessarily familiar with the kinds of issues that come up because I'd just never been there before, then to some extent the action bias is going to give me the capacity to engage in that situation so that I can recognize a realistic solution when the opportunity comes up. 

Maybe there's people talking about it. And because I'm familiar with the problem, because of that firsthand experience, somebody makes a suggestion. All of a sudden, I look over and I say, "That's it. That's going to work." Because I had the knowledge of the situation from the engagement. The engagement was created by this bias. 

There are a lot of good things to talk about here. In effect, we could enumerate lots of reasons why there is an action bias to select the first good solution rather than the best solution. For many problems, satisficing is actually good enough. If a problem has maybe 90 ways to do it that will absolutely fail, but 10 ways that are right, maybe it doesn't matter that much which of the 10 you pick.

The first one you encounter is probably good enough. Whereas with some kinds of problems, particularly things that are engendered by technology, or that involve highly critical, sensitive situations, or things that are biological in nature, the stakes are different. If you experiment with viruses and one gets out of the lab, it's going to be really consequential and you don't get to undo it.

In that sense, there's a real need for us to get the best answer. Not just the first answer. Not just the first thing that could potentially work, but something we can depend on, where we know we can't do better and that this is the right thing to do. And we can trust that because we've really thought it through carefully and engaged with it in a way that's appropriate to the severity of the situation.
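The "first good solution" versus "best solution" distinction can be sketched in a few lines. The numbers below are assumptions for illustration, echoing the 90-failing / 10-acceptable example: taking the first acceptable candidate examines far fewer options than exhaustively searching for the best one.

```python
# A toy sketch of satisficing vs. optimizing over candidate solutions.
# Scores are assumed for illustration: 100 candidates, exactly 10 of
# which clear the "good enough" bar.
ACCEPTABLE = 0.9
# A fixed scrambled ordering of the scores 0.00 .. 0.99.
candidates = [((i * 37) % 100) / 100 for i in range(100)]

def satisfice(options):
    """Return the first acceptable option and how many options were examined."""
    for examined, value in enumerate(options, start=1):
        if value >= ACCEPTABLE:
            return value, examined
    return None, len(options)

def optimize(options):
    """Return the best option; always examines every option."""
    return max(options), len(options)

good, cost_sat = satisfice(candidates)
best, cost_opt = optimize(candidates)
print(good, cost_sat)   # an acceptable answer, found after examining only a few candidates
print(best, cost_opt)   # the best answer, at the cost of examining all 100
```

When many answers are good enough, the cheap strategy wins; when only the best will do, you pay for the full search.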

[00:31:41] JS: Mm-hmm. And I think that's a really important thing: understanding how much certainty we need, which is a function of urgency and of the consequences of making the wrong decision.

I'm curious, the first-order question is: how do we know that we're asking the right questions, or scoping the problem correctly? And, very closely related, how do we know that we've made adequate sense of it to then move towards solutions? How do we know that we're not missing something?

[00:32:12] FL: This is basically the question of what is the epistemic process? You'll hear me mention this – 

[00:32:17] JS: Can you define epistemic process? Because that's maybe not language people understand.

[00:32:23] FL: Not common language. Yeah, I'd be glad to. I was literally just trying to get there. The first thing, of course, is when we say how do we know something? We're concerned with knowledge. Well, the word epistemic or epistemology is how do we know? It's the question of how do we know anything? And what is the nature of knowledge? 

Those questions of how do we know and what is the nature of knowledge are wrapped up in this box. And that box has got a label on it. And that's called epistemology. And when we think about knowing, we know it as a kind of process. It's not – it's almost a pun. But basically, the idea here is that the engagement itself is a medium, or a mechanism, or a methodology by which we come to know something. 

Going from unknown to known is a process. And once we know something, we can represent that information. We can write it down. Or we can tell other people. Or we could basically just remember it in our minds and hope that when we need that information it's available. 

The idea here is that we're thinking about knowledge acquisition as a process. And we're thinking about knowledge representation or knowledge retention as a process. It's not just a static thing that once it's in a book that's the end of it. You still need to teach the future generation how to read or the book just ends up being a bunch of ink on paper and lots of pretty patterns but doesn't mean anything to anybody. 

We think of the notion of knowledge, and the notion of meaningfulness, and the notion of relevance and such as being established and considered in terms of processes. This is part of the reason why when we started this conversation, we were thinking about the general notion of process and the general notion of choice and how those two interact with one another as basically being a constellation of concepts, which we refer to when we use the word governance. 

In this particular case, I can basically say that the question of how we know we're asking the right questions is an example, a particular element or subclass, of the notion of epistemic process, of how we know anything at all. And so, in effect, in different situations we might have different epistemic processes.

For instance, if I'm trying to figure out who's most popular, it's very important for me to ask other people. Going and consulting nature isn't going to tell me anything at all about that. On the other hand, if I'm trying to find out, say, what is the boiling point of water? I'm going to need to consult nature. Because if I consult other people, well, I can't necessarily know whether they know the answer to that question. And even if they told me what they thought the boiling point of water was, I still don't have any way of knowing whether or not they're right. 

In effect, there are some places where we need to go out to other people to find out information. There are other places where we need to go into nature to find information. And thirdly, there are places where we need to go into ourselves to find information. 

For instance, if we look at, say, spiritual traditions, there's a great lineage associated with "know thyself." The idea is that, by coming into awareness of things like psychological bias, we can begin to forgive ourselves for having these tendencies, first of all, because they're there for a reason. Nature wasn't going to anticipate that we would have a technological society with Game Boys, automobiles, aircraft and such. So there's a whole sense of "we need to figure this out ourselves." But we need to know ourselves as the creature that is doing the figuring out.

So, in effect, if we have built-in tendencies that are leading us astray from being able to notice that we haven't been asking the right question, then our epistemic process wants to be upgraded. We want to be able to compensate for the kinds of things that would prevent us from knowing whether or not we're asking the right questions. There's a kind of capacity building that needs to occur. 

In effect, when we look at things like governance process, or the kinds of dynamics that would enable groups of people to be more intelligent about the circumstances they're in, and about what actions they need to take in response to larger social climates, other groups, health issues, or whatnot, what we're trying to do is make sure that we have adequate capacity to do the thinking necessary, to do the feeling and the engagement necessary, to do the communication. All the things that are necessary and sufficient to enable the group of people to be the group of people that can solve the problem.

[00:36:45] JS: This is so fascinating because it's making me think, as someone who was at Google for years, of the profundity and loftiness of its mission: to organize the world's information and make it universally accessible and useful. I always thought how profound the implications of being able to access information were. It's such a fundamental thing that humans do. And obviously, it ties into these questions of governance.

[00:37:12] FL: It's funny you say that, because as you were describing the mission of Google, I was thinking to myself, "They weren't shooting nearly high enough." That goal you just described, lofty in one sense, is actually too small.

[00:37:25] JS: I really have to ask this question about scale, because a big insight I've had through the conversations we've been having about the epistemic crisis stemming from social media is this: we point our finger at the ad model of social media, but really I see the issue as large amounts of information that need to be intermediated by an algorithm, because of what we've surfaced in this conversation, these limitations on human attention and time. Do you think it's possible to tune the algorithm to deliver the right information in the right ways for our collective goals?

[00:38:04] FL: I do believe that it would be. But I don't think that it would matter.

[00:38:07] JS: Why? 

[00:38:08] FL: Because there's more to it than just that. For example, if we look at business systems, you have a kind of two-masters problem. On one hand, you could talk about the mission of Google to organize the world's knowledge. But they're not organizing the world's knowledge because they need the world's knowledge to be organized. They're doing it because it's a way to make a profit as a company.

And there's a situation here where we end up with this means-ends paradox, unless we account for things like the dynamics of how inequality emerges in culture in the first place. And it's not because you've got some one percent who are horrible people and decide to take advantage of everybody else. I mean, it could be modeled that way.

But that doesn't account for how it came to be that way in the first place. And without really deeply understanding those dynamics, you might shuffle off this group of people only to have them replaced by some other group of people, say, 50 years into the future.

And so, it isn't just that if you pull off a coup, for example, the succeeding government is often worse than the government it replaced. It's that there's a fundamental dynamic in the process itself which, without accounting for it, pretty much means we're just going to be doing the same thing over and over again and expecting different results, which is a kind of insanity.

At this moment, there's very, very little deep thinking about these types of issues. And so, I think what ends up happening is that we use the same toolkit. We use financial instruments. We use institutional design. We use nation-state-level governments. Or we think that markets will solve everything, or that democracy will fix all, or that it might not be great but it's better than anything else and nothing better is possible.

And I think that those kinds of ideas, for the most part, just close off the door to even looking at anything else that could be better. They also don't really have a good notion of what goodness is in the first place. For example, having the world's knowledge organized in a beautiful way. Let's just say we accomplished that. Okay?

So, now you've got this perfect, gleaming library with diamond hallways and books inscribed in gold and all the rest of that sort of stuff. Okay? Let's say you have a super quantum computer that you enter any query and it can almost read your mind to answer the question. None of that's going to make any difference if you're asking the wrong question. It's not going to make any difference if the information that it's offering to you isn't relevant to the needs that you actually have. 

In effect, there's a situation here where knowledge in an abstract sense isn't really connecting back to the embodied sense of actual cultures, with real people dealing with real situations on the ground. So, in this particular sense, there's a scope issue. On one hand, we could say, "Yes, it's scale. We could try to do sensemaking on a planetary level." But if we ever do sensemaking on a planetary level, it won't be because we have better systems of information. It will be because we have better community processes for engaging with that information.

Because even if we had perfect information but couldn't act on it in a coordinated way, if I can't trust that my neighbor, or that group over there, or that nation on the other side of the world is going to act in a way that's coherent with respect to the common good, then multi-polar traps, free-rider problems, game theory and all the rest of it are going to hobble the situation. I'm basically suggesting that the problem is deeper and more profound than most people appreciate.

Now, the silver-bullet quick fix is just not going to be enough. We can't really think this through unless we actually understand our inner nature and the nature of community process itself at a fundamental enough level, so that we can actually address things like necessary, sufficient and complete solutions.

[00:41:59] JS: I mean, this is why I have been wanting, through these conversations, to piece together the best thinking on evolving the systems we have in place to date: democracy, the market, and how they interrelate. If we put all the best thinking that's out there together and add it up, does it get us to where we want to go? And my gut feeling is the answer is no. Right? And that gets to the thinking that you're doing, which is just –

[00:42:28] FL: Well, you're probably asking the most opinionated person on that particular topic.

[00:42:32] JS: I know. Well, we're going to bring you back when I feel – 

[00:42:34] FL: It's almost unfair. I mean, for me that's like shooting fish in a barrel. I mean, come on.

[00:42:39] JS: I know. I'm so glad you're here, Forrest. But this notion of multi-polar traps, which probably doesn't mean anything to most people in and of itself, is so important to what is broken with markets and democracy as we know them. I want to make sure we spend a little more time on it. Can you explain what that means? What that is?

[00:43:02] FL: Yeah, I can. It's a little involved. It'll take a few minutes. I mean, I hope there's patience for that. Is that okay? 

[00:43:09] JS: I think it's really important. So, I do. Yeah.

[00:43:11] FL: All right. Say, for example, the first thing that people usually think about is this thing called the prisoner's dilemma. A couple of guys rob a bank, and the police catch them and separate them. But they don't have enough evidence to incriminate them unless they can convince one to rat out the other.

They set up this sort of deal. They speak to each one of them independently, and neither knows what the cops are saying to the other, but each presumes the other is hearing the same deal. They basically say, "If you rat him out and he stays silent, you'll get one year and he'll get ten. If he rats you out and you stay silent, you'll get ten years and he'll get one. If neither of you says anything, you'll both get two years. And if you both rat each other out, you'll both get five."

In effect, you set up this sort of game, and each person is trying to figure out what the other person is going to do. Because if I say nothing, the situation is not great, but it's not horrible. But on the other hand, if he rats me out while I stay silent, things are going to be really bad for me.

Now I have to trust what the other guy is going to do. And he, by the way, is going through the same thinking process, evaluating what I'm going to do. I'm evaluating whether he trusts me well enough to know that I'm not going to rat him out. Or does he not trust me, and think I am going to rat him out, so that he's better off ratting me out? So that even though we both rat each other out, at least, from his point of view, it wasn't as bad as him staying silent while I ratted him out.
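The deal can be written down as a small payoff table. The sentence lengths here are illustrative assumptions (lower is better); the point is the structure: whatever the other prisoner does, ratting is the better individual reply, which is why both defect even though mutual silence would serve them both better.

```python
# The prisoner's dilemma as a payoff table.
# payoffs[(my_move, his_move)] = my sentence in years (illustrative numbers; lower is better)
payoffs = {
    ("silent", "silent"): 2,   # neither talks: both get a short sentence
    ("silent", "rat"):   10,   # I stay silent, he rats me out: I get the long sentence
    ("rat",    "silent"): 1,   # I rat him out, he stays silent: I get off lightly
    ("rat",    "rat"):    5,   # we both rat: worse than mutual silence, better than being the sucker
}

def best_response(his_move):
    """My sentence-minimizing move, given what the other prisoner does."""
    return min(["silent", "rat"], key=lambda my_move: payoffs[(my_move, his_move)])

# Whatever he does, my best reply is to rat, so mutual defection is
# the stable outcome, even though it is worse for both than mutual silence.
print(best_response("silent"))  # rat
print(best_response("rat"))     # rat
```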

In effect, the idea here is that there are these pernicious situations that emerge, in actual market processes and in the world at large, not just in jurisprudence, where you end up with multiple actors. The prisoner's dilemma is just two people, but the notion of the multi-polar trap is essentially a generalization of the prisoner's dilemma. It's effectively saying: okay, we have lots of people engaging with respect to an ecosystem. If they all cooperate, things will go reasonably well, but they have to actually invest effort in that. If any one of them defects, that person benefits differentially against the entire rest of the group. And because every single one of them knows that any other one could defect, they all defect. As a result, you end up with a tragedy of the commons, a race to the bottom, an everybody-suffers situation.

The usual examples would be things like keeping up with the Joneses, or fisheries where you have lots of boats all fishing the same stock. They're polluting the waters, and if they fish too much, they end up hurting everybody. Or places where you're cutting down too many trees. Anywhere you have a commons and individual actors extracting resources from it while competing with one another.

In a certain sense, coordinated action solves multi-polar trap issues. But it's very, very difficult to create the level of trust that would be necessary for that coordinated action to actually cohere. Arms races are a good example of this. We think the other country is building weapons in secret, so we'll build them in secret. Although we sign treaties to the effect that we're not going to build a doomsday machine, we do it anyway, because we know that they're going to do it, and if we don't do it before they do, we won't have a deterrent against them using it, and so on.
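The N-player structure can be sketched as a toy commons game. All the numbers here are made up for illustration, loosely modeled on the shared fishery: each actor gains individually by defecting no matter what the others do, yet universal defection leaves everyone far worse off than universal cooperation.

```python
# A toy multi-polar trap: N actors sharing a commons (illustrative numbers).
N = 10                 # actors sharing the commons
COMMONS_VALUE = 100.0  # total yield if everyone cooperates, split evenly
DEFECT_BONUS = 5.0     # extra yield a defector grabs for themselves
DEFECT_DAMAGE = 15.0   # total yield destroyed per defector

def payoff(i_defect, others_defecting):
    """One actor's payoff, given their own move and how many others defect."""
    defectors = others_defecting + (1 if i_defect else 0)
    pool = max(COMMONS_VALUE - DEFECT_DAMAGE * defectors, 0.0)
    return pool / N + (DEFECT_BONUS if i_defect else 0.0)

# No matter how many others defect, defecting pays at least as well for me...
for others in range(N):
    assert payoff(True, others) >= payoff(False, others)

# ...yet universal defection destroys the commons.
print(payoff(False, 0))      # everyone cooperates: 10.0 each
print(payoff(True, N - 1))   # everyone defects: pool destroyed, 5.0 each
```

The individual incentive points every actor toward defection, and the trap closes exactly as described: everyone defects and everyone suffers.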

The idea here is that once you understand the concept of the multi-polar trap deeply, and you look out into the world and start to see the way market systems and governance systems have developed and so on, it becomes very, very apparent that this phenomenon of a multi-polar trap is occurring everywhere.

There are so many instances of it. And so we're presented with the problem of how to deal with these situations, and nobody has developed an adequate toolset for that: one that is commonly deployable, easily used, and can solve these problems on a systematic and general level.

In effect, there's a need for us to understand the game theory, so that we understand the nature of the problem rightly. Without understanding those aspects, without being able to recognize those situations, we could very easily put lots of resources into something and just end up creating worse issues, because we didn't really understand the nature of the problem in the first place.

[00:47:29] JS: Yeah. I mean, the arms race is an important example of where global governance fails to deliver –

[00:47:35] FL: Well, it's also where community fails. For instance, if you look at social media and a lot of what's going wrong there, you have an arms race between the people who are basically just trying to have a conversation and the trolls, who are effectively trying to get some sort of notoriety, or to have some social influence, or make something happen, or just get entertainment. And the trolls are actually competing against one another, because they have their own social group. And people don't even necessarily know whether or not someone is a troll.

There are all these interacting phenomena. And in effect, the platforms are competing with one another to draw attention and make more advertising revenue. Governments are competing against one another in all the ways they normally do, and they're using social media as a platform to advance their nationalist interests and so on.

These factors are fundamental to having a good conception of the nature of the issue, of why sensemaking is breaking down around these kinds of problems.

[00:48:35] JS: But this also raises a very profound question about something that is fundamental to capitalism, which is competition. Competition means that you are incentivized to cut corners. Whether that's labor conditions in developing countries. Or we talked about psychedelics and commercializing psychedelics. What if you're just cutting corners around the protocol in terms of treatment? Now, your product is less expensive. Maybe you gain market share because there's information asymmetries between various constituents in the market. Is the competitive aspect of capitalism – does that fundamentally lead to multi-polar traps? And if so, does that have to be discarded in whatever new system we might envision? 

[00:49:21] FL: Well, actually, the answer to that question is a little more nuanced than it would seem. In effect, the first thing that needs to be just really recognized is that market systems are not just competitive. There's actually this undercurrent of tremendous cooperation. For instance, we have constitutional law, and contractual law and an entire legal system. And all those things, for the most part, are kind of a framework that allows for a market to exist. 

In one sense, we're really looking at the balance of how much competition is occurring versus how much cooperation is occurring. The cooperation is kind of invisible. It's like the database administrator: nobody knows he's even there until the thing breaks. And at that point, his phone is ringing, right?

In effect, the competition shows up as a thing. It's obvious. We can see it. It's describable, something you can point at, because it's an event. Whereas cooperation is like the invisible context in which the event happens. It's the wall behind the picture. We don't see the wall very much. We just notice it out of the corner of our eye, and it influences and colors how we experience the figure.

When we're looking at things like marketplaces, it isn't the case that we're trying to say, "Okay, we've got to discard all of this and replace it with something else." What we're really looking at is: what are the natures of the cooperation that are there? And are they receding relative to the level of competition?

And the balance between those things is actually quite fine. It's very subtle. For instance, imagine a graph where 0 is 100% competition and 0% cooperation, and 100 is all coherency and cooperation with no competition whatsoever.

Now, if you look at actual systems, organic natural ecologies, market systems, human communication and things like that, you notice that most of these systems sit right around the 50% mark. And the difference between a system that is moving into a chaotic state, where you end up with a phase change into anarchy, civilization dissolving, and lots of trauma for lots of people, versus one where you've got a healthy, thriving economy, is defined by a very small number of percentage points on either side of that 50% mark.

There's only a little bit needed to move a system into a phase change where you can go from an ordered democracy to a totalitarian regime. So, in effect, there's a lot of sensitivity in the nuances of some of these things, which aren't necessarily calling for wholesale changes but are actually calling for fairly precise moves at a very particular level of discernment, basically.

Go ahead?

[00:52:15] JS: I'm just curious, what are your thoughts – I mean, let me give you a very specific example. And so, yeah, it's very interesting to think about what is the optimal balance between cooperation and competition. If you look at the non-profit sector, the sort of lack of competition leads to a lot of redundancy and inefficiency, right? But I'm also thinking in terms of let's say intellectual property. The importance of innovation and creating the right incentives for innovation. And the whole notion of intellectual property rights is such that you come – 

[00:52:43] FL: Well, this is where I actually go the other way. Now we're moving on to something like incentive process, particularly around things like intellectual property. And in this particular case, I actually go all the way to the other end. I say very much that we are actually held back by intellectual property and incentive systems around it. That it's actually – 

[00:53:01] JS: Yeah, that's – Yeah. 

[00:53:03] FL: It was very good in the early part of its history. For instance, for the first, say, 200 years of patent law, or 100 years, I think it's only been 100 years actually, it did some really good things. But since about 1995 or so, it has started to yield diminishing returns and become more friction than it was worth.

And so, at this particular point, we're actually losing the capacity to be innovative and to respond to the world's challenges, because we're approaching the problem of creative process in a very inhibited way. We're thinking about it as essentially a feedback mechanism or incentive system, something driven by, or driving, market process. Whereas actually we really need creativity not so much in relation to markets but in relation to communities, to the question of how we have a relationship between man, machine and nature that actually works.

And so, in effect, there's a whole other level of a process here that at this particular point, honestly, the country that wants to win in the world would actually probably do so simply by just throwing out all intellectual property altogether and just basically saying, "At this particular point, we're going to have – everybody knows everything about anything that anybody invents." Because out of that creative matrix will emerge things that today we couldn't even imagine. What got us here won't get us there. We need to change things like how we think about intellectual property law pretty systematically if we're really going to be able to advance in these spaces.

[00:54:32] JS: Yeah. And just to finish, making sure people understand how we're taking this up: the notion of intellectual property is that you need an incentive to do the next innovative thing, so your idea becomes your property and you can monetize it, at least up to a point. Otherwise, as soon as you come up with the idea, everybody else just steals it and monetizes it, and the incentive to innovate isn't there. But the problem then becomes that knowledge isn't shared and our understanding isn't able to move forward.

[00:55:01] FL: Well, it rests on the notion that an individual actor can benefit from the invention. And that used to be true, but it's not so much anymore. For instance, anybody who's trying to innovate in, say, the space of quantum computing isn't going to go into their garage and build a quantum computer. They're going to need the resources of, well, effectively a pretty large company and quite a bit of investment.

In effect, all of the really innovative research and development involves teams of people and larger groups. And so, the idea that I'm going to lose out because my invention doesn't profit me just isn't a realistic way of thinking about it anymore.

At this particular point, the only real way that I can have my ideas realized in the world is essentially to broadcast them as widely as possible, hope as many people as possible steal them and try to benefit themselves, and then, by some sort of accident, coordinate with one another to produce something better than anything one of them could have done alone anyway.

But if we're going to go that far, we might as well go up a level: "Well, let's just actually make this open source and cooperate with the community." Build a situation where people understand that this is actually part of the commons, and know how to relate to commons resources in a much more wholesome way than we've currently been able to.

This is the place where I would think about market systems as maybe being a hammer that's trying to hit too many nails. Some things in the world are screws. Other things are circuits. And if you hit them with hammers, they just don't respond well. 

In effect, there's a phenomenon here where we're using hierarchical, democratic, or market-based methodologies to try to solve all the world's problems. And that toolset just isn't adequate for the kinds of problems we're looking at. We actually need to think about things in a much broader way than we currently are.

[00:56:52] JS: But there are some problems that can be well addressed by markets.

[00:56:55] FL: Oh, sure. And there are certainly problems well addressed by current governance. It's just not all problems. Say, for example, in a given year you have 100 problems that a community or a nation-state needs to solve. And the toolset is pretty good, so it solves 95 of them. Then you have these five left over that it can't deal with. Well, okay, that's not so bad. But next year, you get a new set of 100 problems, and it can only do 95 of those. And you've still got the five left over from the previous year plus the five from this year. Now there are 10.

And so, in effect, what happens is that a few years go by, and all of a sudden you have as many problems that you can't solve as your tools can solve in a year. We end up with this accumulation of chronic, critical issues that the toolsets we're currently using are just inadequate to deal with.
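That accumulation arithmetic can be sketched directly, using the numbers from the example: 100 new problems a year, 95 solved, and the unsolvable five carrying over each year.

```python
# The chronic-backlog arithmetic: 100 new problems a year, tools solve 95,
# and the leftover problems are exactly the ones the tools can't handle.
NEW_PER_YEAR = 100
SOLVED_PER_YEAR = 95

backlog = 0
years = 0
# Count the years until the unsolved backlog matches a full year's solving capacity.
while backlog < SOLVED_PER_YEAR:
    backlog += NEW_PER_YEAR - SOLVED_PER_YEAR  # 5 chronic problems carry over each year
    years += 1

print(years, backlog)  # 19 95
```

Even a 95%-effective toolset, left unchanged, piles up a backlog as large as its own yearly capacity within a couple of decades.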

And a lot of people keep trying to retrofit: "Okay, well, let's go to liquid democracy. Or let's tweak the voting schema a little bit." And they don't know anything about, say, Arrow's theorem. So, as a result, they're not even on the right page as far as –

[00:57:58] JS: I don't know anything about Arrow's theorem. Or maybe I did at some point. But I've forgotten. Can you tell us what that means? 

[00:58:04] FL: Well, it's a mathematical result. Basically, it concerns any voting system that you want to have reflect the will of the people: you're trying to take individual preference rankings and have the will of the community emerge from them through the voting system.

It turns out that for any system of voting with the characteristics most people would regard as fair and reliable, the basic characteristics we would want all voting systems to have, there are certain kinds of preference rankings for which this is mathematically impossible. It's not just a question of, "Oh, we don't have the right voting system." It's that voting as a methodology can't solve the problem, because of the nature of how voting works.

So, in one sense it's this obscure proof. But on the other hand, it has really profound implications. It basically means that when you're looking at how to get to the point where people are doing sensemaking, then doing design, then implementing the design, if you try to fit that through a voting schema that's set up improperly, you're going to hobble your capacity to do anything at all. So there's a need for those of us thinking about this to understand these issues at least well enough not to get caught by such simple problems.
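Arrow's theorem itself is a statement about aggregation axioms, but the classic Condorcet cycle gives the flavor of the difficulty: three voters, each with a perfectly coherent individual ranking, for whom pairwise majority vote produces no coherent group ranking at all.

```python
# The Condorcet cycle: coherent individual preferences, incoherent majority preference.
from itertools import combinations

voters = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    return sum(v.index(x) < v.index(y) for v in voters) > len(voters) / 2

for x, y in combinations("ABC", 2):
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")

# A beats B, B beats C, and yet C beats A: the group "preference" is a cycle,
# so no single ranking of A, B, C reflects the majority view.
assert majority_prefers("A", "B") and majority_prefers("B", "C") and majority_prefers("C", "A")
```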

[00:59:28] JS: I want to get into your thinking from first principles on how we can enable at least the first part, which is the sense making piece of it. You call this ephemeral group process. The first principle considers how you might surface the knowledge and preferences of the group in order to know that you're asking the right questions. It also considers and utilizes technology to enable this to happen in groups at the scale we want it to happen at.

And fascinatingly – and this is something we'll get into around integrating economics with ecology – there's this notion of biomimicry. How can we learn from nature how things work and implement those design principles in the human systems we're interested in? And the biomimicry of what happens in the brain is part of your thinking here. Maybe we can start to talk about what ephemeral group process is.

[01:00:23] FL: I'll try to be really brief. There's a bunch of notes about it. I have a website. It's got a very minimal sort of presentation. But there's a lot of text there that describes this. People can read about it. But the essential principles have to do with things like having people in smaller groups so that they can actually talk to one another. 

If you put everybody in a huge room, you're going to end up with a podium or a stage and a microphone. And only the person that's really speaking into the microphone is going to be able to be heard. Pretty much everyone else is not going to really be able to communicate with one another. To just increase the bandwidth of the communication in the group, we actually need to think about a distributed communication process. 

The other thing that is a key principle here is how people talk to one another. If I basically look at what language does – well, I can make statements, I can ask people to do things, or I can ask questions. And when we look at the implications that these different styles of communication have –

[01:01:23] JS: This is so fascinating, Forrest. I'm just priming everyone to pay close attention because what he's about to say is really fascinating.

[01:01:30] FL: Well, I hope it's worth it. Anyways, the idea here is that if I say something, and I basically am making a claim, then a little bit of my identity is tied up in that. And if another person disagrees, it's not just that they're disagreeing with my idea. There's a sense in which they're disagreeing with me. And it's hard for me to disentangle my identity from that. 

If I'm making an ask, if I'm issuing an injunction, or I'm telling somebody to do something, or directive of some sort, then there's even more ego involved. There's a sense of does this person recognize my authority or something like that? But when we're asking questions, and they're genuine questions, we're not just thinking about, "Can I manipulate someone with the questions?" I'm actually saying, "Hey, I have a question because I don't know the answer. And I think that if we both ask this question maybe between the two of us we can figure it out." 

Rather than two people facing each other fist-to-fist, you have two people standing next to one another looking out into the world trying to figure out what's going on. And so, there's, first of all, this notion that nobody owns a question. There's a kind of sense of disidentification. Like, I don't own the question. The question doesn't belong to me. It's something I'm working with to try to figure something out. In effect, there's already less egoic identification going on. 

The second piece that is truly crucial is that when we have multiple perspectives looking at a situation. So, we're standing side-by-side. We're perceiving it from different points of view. But the capacity to have the conversation, again, back to the small group, means that to some extent we can start to take in the other person's perspective. And from doing that, we can gain insight into that situation. 

The idea is that I want to hold my perspective and the perspective of the other person at the same time. I'm not giving up one for the other. I'm not trading off. I'm not having some sort of debate as to whether one's right or wrong. I'm basically just looking for what's true about what they're seeing that's also true about what I'm seeing. From the fact of those things both being true, I can learn something about what we're looking at that neither of us could have learned by ourselves, because our perspectives individually just didn't have the capacity to see into that dimension. There's a phenomenon called Phase Parallax. This isn't something I'm making up. It shows up in nature and it shows up in technology.

Like, say, I had a radio telescope and I want to look at stars. And the degree to which that telescope can see things is based upon the size of the dish or the size of the lens through which the telescope is looking at starlight. Now, if I wanted to make a bigger telescope, obviously, I could invest in that. But on the other hand, if I take two telescopes and I put them some distance from one another and then I combine the images, I combine the signals from the two radio telescopes using this math, I can basically create the effect as if I had built a telescope that was the size of the distance between them. 

Rather than building a telescope that was, say, six miles across, I built two telescopes and placed them six miles apart from one another. And then from this synthesis, I have relatively the same effect as if I had created a telescope that was six miles in size. And that's a huge win. If you think about it, you can extend this technique.

For example, take a telescope and wait 12 hours. The planet turns around. Now you synthesize the signal from the previous measurements with the second measurements 12 hours later, and you can effectively start to get resolution on stuff as if you had built a telescope that was the size of the earth.

Or actually wait six months. And now you basically have something that's the size of the orbit of the earth around the sun. You've just got a telescope that has a resolving dish that's literally 180 million miles across. Now, this isn't a perfect thing. I mean, there's some stipulations. There's some things that are important to understand about this phenomenon. But the idea here is that insight itself depends upon the capacity to have diversity and to integrate that diversity. We're not changing diversity into non-diversity. We are temporarily holding a perspective that encompasses both points of view. 
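The gain Forrest describes follows from the standard diffraction-limit approximation, where angular resolution scales as wavelength over aperture (or baseline) size, roughly theta ≈ lambda / D. A rough sketch – the 21 cm wavelength and the 100 m dish size are illustrative assumptions, not figures from the conversation:

```python
import math

WAVELENGTH = 0.21  # meters: the 21 cm hydrogen line, a common radio band (illustrative choice)

def resolution_arcsec(baseline_m):
    """Diffraction-limited angular resolution, theta ~ lambda / D, in arcseconds."""
    theta_rad = WAVELENGTH / baseline_m
    return math.degrees(theta_rad) * 3600

MILE = 1609.34
for label, baseline_m in [
    ("single 100 m dish", 100),
    ("two dishes six miles apart", 6 * MILE),
    ("12-hour wait (Earth-diameter baseline)", 12_742_000),
    ("6-month wait (Earth-orbit baseline)", 2 * 149.6e9),
]:
    print(f"{label}: {resolution_arcsec(baseline_m):.2e} arcsec")
```

Each step up in baseline shrinks the resolvable angle proportionally – the same synthesis that lets two separated perspectives resolve detail neither could see alone.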

And through that capacity to hold multiple points of view, we level up in our capacity to deal with situations, to notice things that are important. We can find solutions to problems that we couldn't otherwise find. This is like if somebody asked you, "Hey, I lost my keys. Can you help me look for my keys?" And it's at night. The lamp posts have this light cone that they shine, and people can look around for the keys where the light is. But in the places that are not near the lamp posts, it's dark. And so, if the keys happen to be in one of those locations, nobody's going to find them.

But on the other hand, if for example we were to detach the light post from the ground and basically hand it to a person – this is called a flashlight – now they can take the light with them and search everywhere. And the chances of them finding the keys, if they're anywhere on the ground, are going to be very much greater than they would otherwise have been, simply because the range of what we have insight into has been increased.

Anywhere we can ask better questions – anywhere the questions themselves transform the conversation – we can find perspectives that are much more generative for solving problems than we would have found with the questions we started with, which weren't necessarily oriented towards the problems as powerfully as they could be. In effect, by allowing a little bit of diversity and synthesis, we can achieve much more as a community than we could through literally any other technique.

[01:07:32] JS: There's one thing here that I find so fascinating: the point you made that your identity is tied to a request, your identity is tied to a statement, and your identity is much less tied to a question. I find that so fascinating because I have at various times attempted to put different perspectives into a conversation at the same time in the interest of dialectical thinking, which is just an exploration of the truth – not holding on to one's opinion.

And there have been instances – some of you may have been in the particular instance I'm thinking of – where I was just stunned at someone's complete obstinacy around their view. And in retrospect I came to realize that that view was so tied to the person's identity that there was no evolution of the thinking, because it would threaten the identity itself. I just found it so interesting that you said that.

The other thing I just find fascinating – you haven't gotten to this yet, but let's get to it now – is inquiry being such a core component. And the Phase Parallax that occurs when you put diverse perspectives together underscores the need for diversity, which has been something we've been talking about a lot lately too.

But there's a right number here, in the same sense that there's a maximum number we've talked about with respect to governance, at 150. There's a right number for this process of diversity of inquiry leading to an outcome that's greater than any of the parts. So, let's talk about what that right number is.

[01:09:02] FL: From our observations of humans and what people do in groups, it seems to be the case that between five and seven people is the right size for a group: the best combination of each person getting a chance to speak up and to get to know one another a little bit, while at the same time still having enough diversity of perspective to really enable that group to move forward and develop insights.

And of course, one of the things – and this is another aspect of what an ephemeral group process is, is that people don't just stay in the same group. They go from group to group. They consider multiple questions from multiple perspectives and so on. And that there's a kind of emergent dynamic that occurs over time that effectively allows not only individual people to substantially upgrade their own capacities but also to have the community be able to upgrade its own capacity. There's a lot of good things that happen as a result. 

In effect, we recommend that people meet in groups of five, and then we essentially assist in the process of having the kinds of dialogues that are focused around a question-generation or question-asking process as a methodological technique – a kind of epistemic process that allows us to build the capacities that would enable us to answer the questions that matter.

In effect, there's this notion that we could train people to be inquiry coaches and to develop kind of facilitative techniques that help people to develop the skills to move from just making statements or trying to answer questions to actually asking questions. And this turns out to be – we've done some experiments. We've held these sessions a few times. We actually do these processes and we learn from that. We get better at what we're doing as a result of these trials. And so, in effect, there's this phenomenon that we've noticed about how to create a capacity in a group of people to really be able to engage in this dialectic of inquiry. 

That would be the sixth person. And then the next role that's kind of important: we want the key questions from those conversations to basically be made available to the community. Now you have a kind of archivist function that provides a means by which the really good, insightful questions can become available to other people, so that those other people in the same community – who are in different meetings, probably meeting at the same time – can form new groups around these new questions and go further with this process.

Go ahead.

[01:11:37] JS: Yeah. No. Just to be very clear about what the process looks like. You take a large group of people and you sort them into groups of optimal numbers, around five, based on a certain preferred diversity and potentially based on some other sorts of criteria. And then you take them through this process with an inquiry expert to surface all of these questions. 

And primarily, it's surfacing two things at the same time that are distinct, and I want to make sure they're clear – or tell me if I'm wrong in my understanding. One is the intelligence that resides within the group. But it's also the things that the group cares about, i.e., what the group values. And then that intelligence and those values go into however it is that you codify this information for the larger group. And then you can kind of iterate on this process of putting new groups of people together to do this. But it's important because it's not just collective sense making by virtue of extracting the intelligence. It's also informing the design process by extracting the values and the design constraints. Am I understanding that correctly?

[01:12:45] FL: You are. I wouldn't necessarily use the word extraction so much because it's that the community itself learns this about itself. So, they move from a kind of tacit unconsciousness to a kind of consciousness collectively. 

Now, the dynamics of that – and again, there's some subtleties here. But the idea here is that when we are really concerned about something. Like, let's say I have a concern and I want to know that the people that are making choices in this particular space. So, whoever those people may be are considering this particular value. 

Say, for example, I care about the spotted owl. I'm just picking that because it's a thing people mention from time to time in this way. There's a phenomenon here of, "Are they considering this?" And people can ask, "Are you inclusive in this way? Are you considering the fisheries? Are you considering the fact that the paint that you're using on the bridge may have lead in it, and therefore you need to be sure that it's a safe paint?" Maybe you're thinking about construction, and you might say, "Well, have you considered earthquakes? This is California. Is this going to be an issue?" Or if you're in Maine, have you considered frost? Because that's a phenomenon that happens up there, and you need to account for it.

There's a lot of different things that people could be concerned with that they would want to be sure were part of any choice-making process in the general area. In effect, the inquiry gets codified. And this is a somewhat awkward thing to express, but there's a real essential point here, and I'm hoping people get what I'm referring to: what question, if I knew that the people making the choice would ask it, would reassure me that they had considered the thing that I needed considered?

In other words, if I could figure out a way to take my concern and phrase it as a question, and know that that question would actually be asked by the people making the choice, then I could say, "Okay, since I know they've considered that thing, I might not necessarily know their answer or even agree with it, but at least it was thought of. And as a result, the value was included in the decision-making process."

[01:15:08] JS: But how does this decision-making process take into account that? Ostensibly, there's an acceptance of the decisions of the group because – 

[01:15:18] FL: We haven't made any decisions yet. The thing about the ephemeral group process is that it's just a component. It's not the whole deal.

[01:15:24] JS: Sure. Sure. Of course. 

[01:15:26] FL: In effect, it's not trying to answer questions. It's just trying to ask them. But it's trying to find out what questions are the questions that, if asked, represent genuine participation? In effect, if people are going to be engaged in the civic process, they want to know that the things that they care about are becoming part of the civic process. They're showing up and they're wanting to be sure that their showing up actually results in real questions that matter being surfaced in the process of whatever is going to happen next. 

In this particular sense, I haven't even really tried to describe anything subsequent to the ephemeral group process. But I have at least tried to set up the conditions through which people's participation begins to matter in a deeper way than casting a ballot could possibly do.

[01:16:15] JS: But does the ephemeral group process get into answering this question of, "Is the inquiry the right one?"

[01:16:20] FL: It does. In effect, the question itself, "Is this the right inquiry?" is an inquiry that has resolution. In other words, we can explore what would we need to know in order to know that we were asking the right question? In effect, we can start to basically dig into the epistemic process itself. And fortunately, the nature of that particular kind of epistemic process is the very nature of the process itself. So, you end up with this kind of recursion that does actually stabilize. It's sort of a kind of transcendental stability. 

[01:16:53] JS: That's what I was about to ask. It has to have a recursive nature to it to integrate new information as it becomes available.

[01:17:01] FL: That's why we call it process. I think the other thing is – And this gets to a point you made earlier about identity. Having smaller groups for one thing makes it a little easier for people to relate to one another. Obviously, it doesn't solve all problems. But it does go farther than would be the case if you had a large group of people and then people get triggered by one another. 

There is a thing to recognize, too, which is that sometimes people aren't holding an identity just because they want to define themselves in a certain way, but also because to some extent it has real implications with respect to their relationships with groups of other people.

For example, a person may depend upon a community of people. Maybe it's a religious group or something, and they actually need the resources of that community in order to survive, because that's the place they live and nobody can do everything by themselves. And so, as a result, there's a real need for them to have membership in the community and be accepted in that group.

Say, for example, that group says it has certain values. And so, this person basically says, "Well, if I do something that shows really strongly that I have those values, then that group has to accept me because I've demonstrated that I have those values as well as anybody. And so, therefore I'm definitely one of them." And so, in effect there's this kind of calculus of what things do I need to do in order to make sure that my children get fed? 

And in a certain sense, we can't judge people too harshly on that, because that's a realistic situation for them to be making those kinds of decisions in. In some respects, in the long term, we want to move to a situation where people's fundamental needs are handled, so that there's more freedom to engage in things that aren't going to be defined by popularity contests contingent on your survival.

In this particular sense, there's a real need for us to not just think about things like ephemeral group process as a technique, but really to consider them in the larger process of: What would be a community? What would be a governance process? What would be healthy relationships between, say, man, machine and nature? What would be necessary and sufficient to address things like existential risk or civilization collapse?

[01:19:13] JS: Really amazing and fascinating, Forrest. I am so grateful for this conversation. This is all extraordinarily complex. But I feel like we've made a good dent in these fundamentals with you today. Thank you again, Forrest. 

[01:19:31] FL: Glad to be here. Thank you for the time and the attention.


[01:19:35] JS: Thank you so much for listening. In addition to this podcast, you can find additional resources on our website, including transcripts and links to additional background materials for each conversation.

For our most essential topics, like universal basic income, decentralized social media and stakeholder capitalism, we also have outlines summarizing our research, which make it easy for listeners to very quickly get an overview of these really essential and important subjects. 

On our website, you can also sign up for our newsletter where we bring our weekly podcast to your inbox alongside other relevant Denizen announcements. We're partnered with a lot of really incredible organizations on the forefront of the change that we talk about. And so, we're often bringing you announcements from them. 

And finally, this podcast is made feasible by the Denizen community and listeners like you through our gift model. Denizen's content will always be free and accessible. We believe that it's just much more aligned with the future that we seek to embrace this gift model. It's through the reciprocity of listeners like you that we are able to continue producing this content. You can support us or learn more about our gift model on our website.

Thanks again for listening and we hope you'll join us again next time. 

