No Way Out
No Way Out: The #1 Podcast on John Boyd’s OODA Loop, The Flow System, and Navigating Uncertainty

Sponsored by AGLX — a global network powering adaptive leadership, enterprise agility, and resilient teams in complex, high-stakes environments.

Home to the deepest explorations of Colonel John R. Boyd’s OODA Loop (Observe–Orient–Decide–Act), Destruction and Creation, and Patterns of Conflict — and the official voice of The Flow System, the modern evolution of Boyd’s ideas into complex adaptive systems, team-of-teams design, and achieving unbreakable flow.

140+ episodes | New episodes weekly

We show how Boyd’s work, The Flow System, and AGLX’s real-world experience enable leaders, startups, militaries, and organizations to out-think, out-adapt, and out-maneuver in today’s chaotic VUCA world — from business strategy and cybersecurity to agile leadership, trading, sports, safety, mental health, and personal decision-making.

Subscribe now for the clearest OODA Loop explanations, John Boyd breakdowns, and practical tools for navigating uncertainty available anywhere in 2025.
The Whirl of Reorientation (Substack): https://thewhirlofreorientation.substack.com
The Flow System: https://www.theflowsystem.com
AGLX Global Network: https://www.aglx.com
#OODALoop #JohnBoyd #TheFlowSystem #Flow #NavigatingUncertainty #AdaptiveLeadership #VUCA
Karl Friston Decodes the Real OODA Loop: Active Inference and What Boyd Got Right
Karl Friston, FRS, is the architect of the Free Energy Principle and Active Inference, the formal mathematics that describes how living systems perceive, predict, and engage with their environments under uncertainty. In this conversation, Brian "Ponch" Rivera and Mark McGrath walk Friston through John Boyd’s actual OODA sketch, not the linearized four-box version, and the result is the closest thing yet to a formal validation of what Boyd was reaching for in 1995.
The conversation builds the framework from first principles: the irreducible features of the environment, the Markov blanket as the boundary defining a self, the generative model as the cognitive operating system Boyd called orientation, and the two implicit pathways Boyd drew but few have explained. Friston then formalizes what Wall Street intuits: risk as a KL divergence between prior preferences and predicted outcomes, ambiguity as volatility, expected free energy as the mathematics of risk under uncertainty.
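The claim that risk is "a KL divergence between prior preferences and predicted outcomes" can be made concrete. In the active inference literature, the risk term of expected free energy is the KL divergence between the outcome distribution predicted under a candidate policy and the agent's prior preferences; a minimal numerical sketch, with all distributions invented for illustration:

```python
import numpy as np

def kl_divergence(q, p):
    """KL(q || p) between two discrete probability distributions, in nats."""
    q, p = np.asarray(q, dtype=float), np.asarray(p, dtype=float)
    return float(np.sum(q * np.log(q / p)))

# Outcomes the agent predicts it will observe under a candidate policy
q_predicted = np.array([0.7, 0.2, 0.1])
# Prior preferences: the outcomes the agent "expects" to find itself in
c_preferred = np.array([0.9, 0.05, 0.05])

risk = kl_divergence(q_predicted, c_preferred)  # larger = riskier policy
```

Risk vanishes only when predicted outcomes exactly match preferences; the other term of expected free energy, ambiguity, would add the expected conditional entropy of observations given hidden states.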
The episode also covers perceptual control theory, ecological psychology, predictive versus prospective control, flow states, the Rebus model and psychedelic-assisted therapy, autism and consciousness, the platonic space, and why current large language models cannot orient: they do not encode uncertainty, cannot plan over counterfactual futures, and therefore cannot truly act.
This is the episode the show has been building toward for 163 conversations.
KARL FRISTON, FRS
- UCL Profile: https://profiles.ucl.ac.uk/2747-karl-friston
- Wikipedia: https://en.wikipedia.org/wiki/Karl_J._Friston
- Google Scholar: https://scholar.google.com/citations?user=q_4u0aoAAAAJ
John R. Boyd's Conceptual Spiral was originally titled No Way Out. In his own words:
“There is no way out unless we can eliminate the features just cited. Since we don’t know how to do this, we must continue the whirl of reorientation…”
A promotional message for Ember Health: safe and effective IV ketamine care for individuals seeking relief from depression. Ember Health's evidence-based, partner-oriented, and patient-centered care model has an 84% treatment success rate, with 44% of patients reaching depression remission. The message also notes their experience with over 40,000 infusions and more than 2,500 patients treated, including veterans, first responders, and individuals with anxiety and PTSD.
Stay connected with No Way Out and The Whirl Of ReOrientation
X: @NoWayOutcast · @PonchAGLX · @NoWayOutMoose
Substack: The Whirl Of ReOrientation - www.thewhirl.substack.com
Welcome And Why Friston Matters
Brian "Ponch" RiveraHey, Moose, we are 160 episodes plus into our journey. If you remember, we started with a controlled hallucination. The first podcast episode, we did a tritone thing. We were trying out some new things to say that uh the way we perceive reality is actually uh a controlled hallucination. And of course, we brought on guests to help us understand it um in different light. And some of those guests include uh Dr. Hippolyto, we had Bobby Azarian on, uh, we've had uh folks from the Active Inference Institute on, we've had a lot of neuroscientists on to help us understand how we actually engage with reality, the the nature of creativity. Our guest today is uh Carl Friston, Professor Carl Friston. Um we've been talking about his work for a long time. I want to welcome you to the show and thank you for being here today. Uh and we can call you Carl, correct? Yes, please do. And thank you very much for having me. Uh, this is gonna be great. So let me give you some background um on how I came across your work. And it really begins in the psychedelic assisted therapy community, PTSD TBI veteran community. And if you remember years ago, and I'm holding up a book for our listeners, uh, this is a book by Michael Pollen. It's How to Change Your Mind. Uh, I'm reading this about four or five years ago, maybe six years ago now, and I come across this idea called Rebus and Entropic Brain Hypothesis. And it's from Robin Carthart Harris. So I start searching for that in Google Scholar, break it down some more, and I start seeing parallels to what Moose and I are looking at with uh the body of work of John Boyd, and it's how living systems engage with the environment. So I keep pulling that thread. Um, and then I have everybody in the AGLX community, all of the partners went through active inference training with uh Daniel Friedman over at the Active Inference Institute. And of course, uh that led us to like Bobby Azarian and a few others. 
We came across this gentleman, Patrick Schotanus, and the market mind hypothesis. We're going to get to this a little later on when we talk about markets and where we're going with today's conversation; we'll potentially talk about portfolio management and other things. And of course, I don't know if you know this guy. This is Andy Clark. Have you ever heard of him, Karl? Yes. I hear a lot about him. You know a lot about him, right? There's a lot that he's written about; The Experience Machine is one of the books that I've read. But do you mind sharing how you know Andy Clark?
Karl FristonOh, multiple ways. In fact, one of my ex-colleagues at UCL moved to Edinburgh and fell in love with Andy, and they subsequently got married a few years ago on Brighton Beach. Completely independently of that, our convergent interests, him from the side of philosophy and me from the side of computational neuroscience, meant we met in the middle. So I consider him to be a partner in crime. He's been promoting the predictive processing angle from a philosophical perspective, and I from a more technical, mathematical perspective.
Brian "Ponch" RiveraOkay. So you have a slightly differing view on um predictive processing, is that correct?
Karl FristonNo, well, the notion of predictive processing, that phrase, I think actually arose in Andy's landmark paper, "Whatever Next?" That was a very famous BBS (Behavioral and Brain Sciences) paper that brought the predictive processing framework to the foreground among the philosophy of science community, but also a large section of the cognitive neuroscience community. My memory is fading now, and I may be making things up, but I do remember discussing why he didn't call it predictive coding and why he didn't call it active inference. And there are principled reasons for that. As I say, I may be making this up, so you'll have to ask Andy. (Well, we make everything up on the show.) He didn't want to limit the scope or the compass of predictive processing just to perception. He wanted to emphasize the enactive aspects, what would have been, and still is to a certain extent, known as active sensing and active vision, reflecting his commitment to embodiment and to the 4E paradigm. On the other hand, he didn't want to go quite as far as active inference, because active inference is an application of the free energy principle, and that's the physics of sense-making and the physics of decision-making, which has some particular commitments to first-principles accounts. As a philosopher, he didn't want to attach himself to some of the very technical and stringent commitments that the free energy principle entails, and that you're implicitly using whenever you're applying the free energy principle in the form of active inference. That way he could just focus on the good stuff, which was the predictive processing stuff.
So we have, I think, completely convergent and consilient views on the nature of sense-making and the nature of an embodied and situated engagement with the world. However, he is distinguished by the fact that he writes very beautifully and coherently, and I don't. So from the perspective of describing these ideas, there's a marked difference between the philosopher and the physicist.
Brian "Ponch" RiveraI appreciate that. His book is excellent, by the way, the The Experience Machine. Uh, we've asked him to join us on our show as well. Uh I want to bring up another book. I got a few more, and then we'll get airborne here. Um I think you know this guy, Anil Seth, right? Oh, EMU. Very popular book. In fact, I think you have uh an event coming up with him. Uh and my point behind the book is uh it goes through active inference, free energy principle as well. Um but you have an event coming up with uh uh uh Anil, is that correct?
Karl FristonWe usually do. Yeah, he's another very close colleague and friend. And interestingly, he works almost next door to Andy; they're both at the University of Sussex. I have known Anil for a long time. He actually didn't replace me, but was, a few years later, a research fellow at the Neurosciences Institute with Gerald Edelman, which is where I spent some of my formative training years looking at value-dependent learning. So we have a common intellectual heritage there, with lots of common friends, and again, a very consilient view. His perspective is much more the beast machine perspective and interoceptive inference. But yeah, I think we share a common commitment to predictive processing and active inference as applied to the body.
Brian "Ponch" RiveraOkay, no, I appreciate that too. And then we uh we have uh this this famous book here, Active Inference. I think uh your name's on it. Um it's a great book, it's it's a great textbook to run through and uh really dive into what we're gonna talk about here in a little bit. And uh this new book, I'm holding up a book called The World Appears by Michael Pollan. Uh, goes into uh uh the conversations on consciousness. And again, as I'm reading it, and my wife was reading it, she's like, Brian, you know, here's the free energy principle and active inference all over throughout this book. And like, okay, this there's something going on here. Uh I our point behind all this is we've been following what you've been doing. We we see how it connects to things and artificial intelligence. We believe it connects to strategy, teamwork, leadership. Um, it definitely connects to the markets uh through the market mind hypothesis, and we think even more. We've been following your work with Michael Poland, excuse me, uh Michael, Michael Levin on the platonic uh space, which is fascinating, and we might might get into that today as well. Um, and of course, Stephen Cotler, Stephen Cotler writes about uh Mihai, G said Mihai's work in flow, um, flow states, peak performance. And again, I think there's a connection to uh the platonic space or something else that's out there that I think is starting to emerge. Um, but you have a couple papers with Stephen Cotler, one on intuition, which we might touch on today. And I think you have a new one coming up on uh the body keep score, which you might uh touch on today. But anyway, my point behind all that again is uh uh hate to say we're huge fans, but we like the work. 
We think it connects well with the work of the person we're following, John Boyd, what he was trying to explain in the '80s and '90s, and the gap between what people perceive that to be today, a linear decision-making process, when in fact we think it is more aligned with perceptual control theory, the perception-action loop and ecological psychology, the free energy principle, and active inference. So we'd like to have that discussion with you today before we get airborne, or when we get airborne here in a second. Before we share slides, I want to share with our audience that we have a structure today, and hopefully it'll make sense to our listeners as to why we did it this way. It might be a complete train wreck; we'll find out here in a little bit. And before we get airborne with that, I'm going to turn it over to Karl Friston to see if he has any questions for us that we could answer before we get going.
Karl FristonWell, no, you seem to know more about my work and collaborators than I do. It's very impressive. Just by coincidence, that body-keeps-the-score paper was literally accepted today; I just got an email from Michael and the co-author. So you're very current.
What The Environment Demands Of Us
Brian "Ponch" RiveraWell, I appreciate that. Well, I'm gonna share a screen here. Um this is a uh this is our rendering of the the free energy principle and active inference. I believe it is uh where we have a boundary, we have uh external states, internal states, uh, we have uh sensory states and active states. Um we're not gonna talk about this because there's math involved, right? There's maths involved in all this, and there's a lot of symbols on there I truly don't understand. Um I've been learning about them, by the way. Uh instead, we're gonna look at a simpler approach where we dive into uh, again, external states, internal states, active states, sensory states. And the way we're gonna do this is I'm gonna start off with the environment. Uh and then I'm gonna ask Carl to join in and just kind of um paint a picture for us. So when we talk about environment, we talk about affordances, we talk about attractors, um, we talk about fitness landscapes. And for those of that listen to our show, they should know that the name or the title of No Way Out comes from a brief called Conceptual Spiral. And in that brief, John Boyd identifies features of the world that we just cannot eliminate. There's no way out of that, right? We we we have to live with these. And in front of you right now, um, Carl, are some of those features of the world. Uh, one of them happens to be entropy. We have ambiguity, novelty, numerical imprecision. Is there anything else we could add to this list, or you might add to this list, to help explain the environment, the external world?
Karl FristonI think you've got all the key elements; they all relate to the itinerancy of the environment, read as the states that are external to me, just to link it to the previous slide. One thing, which perhaps is picked up by the notion of novelty, is that the way one conceives of the environment from the point of view of the free energy principle is that there are a certain number of states the environment can be in, and of all the possible states it could be in, there are a vast number it never occupies. That's just an intuitive way of trying to express the fact that the environment has characteristic states. There are certain states the environment will never be in, and that tells you something quite important. All of these aspects of the environment speak to the fact that the environment is always on the move, and from the point of view of a physicist this would be thought of in terms of moving through some abstract state space. But the movement is important, because if you're moving and yet you have to be restricted to a certain number of states, that tells you immediately you have to revisit states, or the neighborhood of states, that you were once in. So there's a certain kind of closure to the dynamics, which have this entropic, uncertain, imprecise, ambiguous aspect in virtue of the complexity of that itinerancy on the one hand, and the random fluctuations with which the world is equipped on the other. From the point of view of characterizing the environment, which is important, all these fundamental, foundational aspects of the way the environment behaves are going to be installed in your head, because you have to resonate with the environment; you have to synchronize with the environment.
But that synchronization does entail what's technically known as non-dissipative or solenoidal flow. It just means that there are cycles, there are biorhythms out there, there are oscillations of particular kinds at every scale, whether we're talking about the motion of the heavenly bodies through to very, very fast oscillations in my brain, for example. So that's, I think, an important aspect: the real world, the environment, has this particular kind of itinerancy, which means it's unpredictable. However, it is unpredictable in a very constrained way.
Brian "Ponch" RiveraDoes the law of requisite variety connect to what you just shared?
Markov Blankets As Self Boundaries
Karl FristonYes, I think it does, absolutely. The law of requisite variety is one of the pillars of early cybernetics, championed in the UK by people like Ross Ashby. From what I remember, it's basically this statement: if I'm engaged with the world, and, referring back to your Markov blanket slide, my internal states are engaged with the external states vicariously through the inputs and outputs, through the sensory and active states, then there is an implicit cycle of circular causality. I'm influencing the environment through my active states, and the environment is influencing me through the sensory states. That introduces a complete asymmetry, but it also tells you something quite important. It tells you that I can change the world, I can control the world. This would be the cybernetic perspective. What it means is that if I'm in sync, and I'm using "in sync" as a stand-in for what mathematically would be known as generalized synchronization, or synchronization of chaos, which is the free-energy-minimizing solution to any interaction or exchange with the world, then on the inside I have to have as many degrees of freedom as there are controllable aspects of that world. And of course, if the world is extremely itinerant and has all of these stochastic or probabilistic aspects, that means it has high degrees of freedom, which puts a lot of pressure on me as a good regulator, as a good engager, as a good model of what's going on on the outside.
That requires me to have an equivalently high number of degrees of freedom, which effectively means that as I engage with more and more complex worlds, possibly the lived world as we know it, but also worlds such as financial markets, as the world gets more and more complicated, with more degrees of freedom, there's an arms race: my brain and I have to become more and more complex in order to provide an accurate account of, and predict, what's going on, and more crucially, of course, predict what I'm going to do in order to maintain that synchronization.
Brian "Ponch" RiveraWell, let's get to uh Markov Blanket here. You brought it up. Uh we'll get to sensory states and active states in a moment. Um so we have, well, for our listeners, if if you can't see this at the moment, we just added a boundary uh onto the environment, and uh it's it's a I guess it's a porous boundary, meaning there is no real boundary. Uh but can you walk us through what is meant by Markov blanket and then uh talk a little bit about the generative model that's inside of it?
Karl FristonYeah, sure. This is a key pillar of the free energy principle, which basically starts by asking: what is the nature of self-organization? As soon as you ask that kind of question, you start to say, well, what is a self? How do you define the self? How do you disambiguate, individuate, separate the self from the rest of the environment? This is where the Markov blanket comes into play. And as you say, I think quite astutely, it is a boundary, but it is a boundary that, almost paradoxically, glues you to the environment. It's not something that divorces you or sequesters you from the environment. It is an interface with the environment that has this bidirectional traffic over it, which could be thought of as a porous boundary, but that boundary is there to enable you to exist, in some sense, as separable from the environment with which you are engaged or enmeshed vicariously through these boundary states. The term Markov blanket inherits from the work of Judea Pearl, who constructed these mathematical objects on the basis of statistical dependencies, which can arise from causal links of a particular kind. The particular kind we're interested in here is this: if there is a way of separating me from everything else, then I must possess blanket states that can be divided into sensory and active states, which we'll get to in a second, as you say. They are very simply defined, in the sense that the external states influence my sensory states, but my internal states do not influence the sensory states. And in an exactly symmetrical way, the active states are influenced by my internal states and influence the external states, but the external states do not influence the active states. Now, that's just a very grand way of defining an input and an output.
So if you're a systems theorist, all I've just said is that for a system to be identified or separated from the larger environment in which it exists, it can be uniquely defined operationally in terms of its inputs and outputs. And then you just think about what it means to have an input and an output. Well, an input is something that you can't change; it comes from the outside. And an output is something the outside can't change, because it comes from the inside.
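Friston's definition above, external states drive sensory states, internal states drive active states, and never the reverse, can be written down as a coupling pattern in which each state's rate of change depends only on the states permitted to influence it. A toy sketch (the linear dynamics and coefficients are purely illustrative, not any published model):

```python
import numpy as np

rng = np.random.default_rng(0)

def step(external, sensory, internal, active, dt=0.01):
    """One Euler step of a toy system respecting Markov-blanket sparsity."""
    d_sensory  = external - sensory   # driven by external states, never by internal states
    d_internal = sensory - internal   # internal states see the world only via sensation
    d_active   = internal - active    # driven from inside, never directly by external states
    d_external = active - external + rng.normal(scale=0.1)  # world responds to action, plus noise
    return (external + dt * d_external,
            sensory  + dt * d_sensory,
            internal + dt * d_internal,
            active   + dt * d_active)

states = (1.0, 0.0, 0.0, 0.0)  # (external, sensory, internal, active)
for _ in range(1000):
    states = step(*states)
```

The sparsity, not the particular equations, is the point: adding a direct internal-to-sensory or external-to-active coupling would destroy the blanket condition.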
Brian "Ponch" RiveraAll right, so if we put an input on the left side of this image at the moment and an output on the right side, um we spend some time with uh uh Adrian Bajon, Professor Bajon, who came up with the constructal law. So we look at understanding flow systems. What you just described is once you put that in inlet or outlet, you actually have a flow system. Is is that would you agree with that?
Karl FristonYes, I would. I'm pausing because the use of the word flow here is again interesting, because much of the mathematics is based upon dynamical flow in these abstract state spaces. The other reason I was pausing is that I think it may be slightly misleading to think of this as a flow-through system, in the sense that that really does speak to something which, in my world, is somewhat outdated: a sort of sandwich model of cognition, for example, in which there are sensory inputs and they are somehow converted on the inside into motor outputs. That's not really how people currently conceive of things. The better notion, I think, is that there's a circular flow around, or through, the Markov blanket, and then on the inside there's a whole hierarchy of circular flows that are informing and constraining the circular flows at the boundary itself.
Generative Models And Hidden Causes
Brian "Ponch" RiveraOkay, we we might come back to that because it's uh uh uh again, there's a design aspect we haven't built up yet, and we're just looking at the the boundary and a generative model. Uh can we talk a little bit about the generative model, uh what what that is inside of an agent or a uh in this case an organism?
Karl FristonSure. Again, that's a really important question, because it is the second move that you get with the free energy principle. If there exists a way of, in this statistical fashion, separating self from non-self, what one can then do, or show, is that the action of the self on the world, on the environment, can always be described as if it is acting under a generative model. So what does a generative model mean? Well, a generative model in this setting is simply a probabilistic description of the cause-effect structure that is producing your sensory inputs. You now have a model that can generate predictions of what would be sensed if you knew the underlying cause. I use the words cause-effect structure in the spirit of: there are causes of my sensory impressions on my Markov blanket out there. They can never be observed; they're always hidden behind the Markov blanket. In machine learning and artificial intelligence research, these are known as hidden or latent states or causes. And then there are observable consequences, there are observations. On the basis of the observations, I can infer what caused them, but I'll never know. You mentioned hallucinations earlier on, and I presume Anil Seth will probably get a look in; he loves that notion. These hallucinations, these fantasies, are what Anil, in more serious moments, would call inference to the best explanation. They are just inferences, because you're never in direct contact with those states or causes out there; you're separated from them by your Markov blanket. So that inference basically proceeds under this model that you have on the inside that could explain what could have caused these particular sensory impressions upon your Markov blanket.
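The "inference to the best explanation" Friston describes can be sketched as ordinary Bayesian inversion of a tiny generative model. Here the hidden cause and the observation each take two values, and all probabilities are invented for illustration:

```python
import numpy as np

# Generative model: a prior over hidden causes, P(cause), and a likelihood
# mapping causes to observations, P(observation | cause).
prior = np.array([0.5, 0.5])            # two hidden causes, equally likely a priori
likelihood = np.array([[0.8, 0.1],      # row o=0: P(o=0 | cause 0), P(o=0 | cause 1)
                       [0.2, 0.9]])     # row o=1: P(o=1 | cause 0), P(o=1 | cause 1)

def infer(observation):
    """Posterior over the hidden cause: the 'best explanation' of what was sensed."""
    joint = likelihood[observation] * prior   # P(o | cause) * P(cause)
    return joint / joint.sum()                # normalize to get P(cause | o)

posterior = infer(1)  # observing o=1 makes cause 1 the more probable explanation
```

The agent never sees the cause itself, only the observation arriving at its blanket; the posterior is the hypothesis the conversation elsewhere calls a controlled hallucination.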
Brian "Ponch" RiveraAaron Ross Powell Is it safe to say that all living organisms have a generative model or an internal model of the external world, including uh cells uh or or uh uh what is it a fractal approach or is this how does that work?
Karl FristonI think it would be safe to say it if you said it to the right people. The safest way of saying it is that anything that can be individuated from its environment, and thereby must possess a Markov blanket, can be read as, or described as if it was, making inferences, perceiving and acting under some kind of generative model. Having said that, I think it's important to acknowledge that there are different kinds of generative models, which means that there are different natural kinds. By natural kind, I just mean some system that has a Markov blanket. And there are certain limiting cases that you would recognize intuitively. One limiting case would be that the whole thing is just one Markov blanket and there are no internal states; we're talking about things that don't have active states, they just have sensory states. So we're talking about rocks, inert things. But if you equip certain other natural kinds with a set of outputs or active states, if you equip that Markov blanket with an active sector, then the thing can act upon the world and it can move. Now we move to animate kinds of things. And then we move to systems which I think are really interesting, in which there are actually Markov blankets on the inside, inducing a kind of heterarchical or hierarchical structure. That means certain parts of the inside of your animate thing don't have direct access to the active states. This introduces a really interesting description of these kinds of things, and I'm talking about things like you and me. Things like you and me can always be described as inferring the causes of their sensations, but now I've just said that we can no longer observe our own actions, so our actions become a hidden cause of our own sensations.
So now I can always describe you as inferring your own behavior, and this is known as planning. This is part of the complexity arms race I was talking about before: as we have to deal with more and more complicated environments or worlds, we move up from inanimate things to animate, autonomous things, and from there we move up to what have been called strange things, for particular reasons, which are now equipped with a generative model of their own behavior. So you get a certain recursion, a self-modelling of greater or lesser sophistication, which I think you could apply to multicellular organisms, but probably not to single-celled organisms. Why? Simply because the single-celled organism doesn't really have those nested Markov blankets on the inside. Now, some people might argue that mitochondria could constitute this, and there could be some sense in which the mitochondria are doing planning, but a single cell would probably not have the capacity, or could not be described as having a sufficiently deep or hierarchically structured generative model, of the sort that you would need to be described as engaging in intentional behavior. And I use intentional behavior in the sense that to plan towards something in the future means you have some intended outcome in the future. So I don't think a single cell could have intentional behavior, but it could certainly have very complicated and very functional behavior, of the kind that you'd expect a really smart thermostat to have, for example.
UnknownRight?
Karl FristonMulticellular organisms, sorry.
Brian "Ponch" RiveraThat's great. Uh it's it's good for us to understand because we talk about the fractal nature of this work uh and and it has limitations. And uh we would like to think that when we look at a team of people or an organization, you could put a boundary around them and and think of them as having uh sensory inputs or sensory states and actions and and maybe a similar generative model. Is that true? Is it possible?
Karl FristonCan I just pick up on your notion of a fractal organization, because I didn't speak to it. I think that's a really important observation as well. I would read fractal here as a statement that there is some self-similarity that comes for free with the scale invariance that you get; literally, it becomes scale-free in a network. And that is exactly the sort of deep structure I was talking about. The fractal, self-similar, and scale-invariant aspects of the organization of something, be it an institution or a person, or possibly even a cell, are, I think, a really important feature which would determine the nature of your engagement with the world.
Sensory States Versus Active States
Brian "Ponch" RiveraThank you. That's huge. So for our listeners: on the boundary of the rectangle, I've added sensory states on the left side and active states on the right side. Please walk us through what these are, Karl.
Karl FristonRight. So the sensory states, mathematically, are just defined in terms of their causal power in relation to the internal and external states. They're the kinds of states that can change and influence the dynamics of the internal states, but that influence is not reciprocated. It's this lack of influence, this sparsity, sometimes called loose coupling in dynamical systems theory, which renders them sensory states. In an exactly symmetrical way, the active states reach out into the external states and have causal power in terms of changing their dynamics: my moving things, for example. But again, that is a unidirectional influence that is not reciprocated. The outside world cannot change the way in which you move; it is you that is doing the moving. To give some concrete examples, we could take a single cell and ask: what are the sensory states of a single cell? Well, they're basically the cell surface. Or you could be more restrictive and assign the sensory receptors on the cell surface as being the sensory states. And in single-celled organisms, interestingly, the active states actually lie just underneath the surface and push the surface, the sensory states, into the environment. Because they're hiding behind the sensory states, the environment, the outside world, cannot affect them. They thereby fulfil the Markov blanket, or Markov boundary, condition. And then the internal states are those which influence the active states that push the sensory states out into the environment, while the sensory states register what's going on in the local external milieu.
That's not the kind of arrangement that you and I have, though. For us, the active states would be all our actuators and our autonomic reflexes. By actuators, I mean our muscles. So we're equipped with a relatively limited number of ways of changing our world; we're only animate in a very restricted way. If you think about it, the only way you can actually change the universe is by moving or secreting something. That's all you can do. Which means you're just looking at your effector organs that are going to contract a muscle or secrete something.
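The sparse coupling Friston describes (external states drive sensory states, sensory drive internal, internal drive active, and active drive external, with no influence running the other way across each link) can be sketched as a toy simulation. The coupling pattern is from the conversation; the coefficients, noise level, and variables below are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(external, sensory, internal, active, noise=0.01):
    """One tick of a toy Markov-blanket system with one-way couplings."""
    # Sensory states are driven by external states, never by internal states directly.
    new_sensory = 0.9 * sensory + 0.1 * external + noise * rng.standard_normal()
    # Internal states see the world only through the sensory states.
    new_internal = 0.9 * internal + 0.1 * new_sensory
    # Active states are set by internal states; the outside cannot change them.
    new_active = 0.9 * active + 0.1 * new_internal
    # External states feel the agent only through its active states.
    new_external = 0.9 * external + 0.1 * new_active + noise * rng.standard_normal()
    return new_external, new_sensory, new_internal, new_active

states = (1.0, 0.0, 0.0, 0.0)  # (external, sensory, internal, active)
for _ in range(200):
    states = step(*states)
```

Deleting any one of the four update rules, or adding a back-edge (say, external states writing directly into internal states), breaks the conditional-independence structure that makes the blanket a blanket.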
Active Vision And Choosing Data
Brian "Ponch" RiveraLet me ask you this. On sensory states: your eyes, your ears, your skin, your mouth, all of those could be sensory organs, right? But when you observe something, don't you change what you're observing?
Karl FristonWell, yeah, another great question. That is, of course, active vision, and it's a splendid example of active inference. So just to reiterate what you were saying: all of our five-plus senses are understood in terms of sensory states being the sensory epithelia with which we are equipped at birth. A really important example of those sensory epithelia are the retinae that line the backs of our eyes and register the light that's coming in. And then we try to make sense of what could have caused that. Is it a face, is it a bird, is it a leaf? But what you've brought to the table with that question is the crucial closure of that circular causality across the Markov blanket. It matters where my sensory epithelium is pointing. I've got little muscles in my eye, the oculomotor system, that can now become a really potent cause of what I am sensing with my eyes. This leads you into a really important part of cognitive neuroscience, which is visual search. How do we survey a scene? How do we use our eyes, our vision, to make sense of the environment in which we find ourselves? One way of expressing that is: to see is to look. To look is to actually position your eyes and attend to the right part of the visual field in a very active way. So the sensory epithelia, the sensory states, register the consequences or the influences of the world. You can think of this as data: they harvest data. But because you can act, you can now choose what data to sample that's going to be most efficacious in inferring the cause of what's going on.
So, for example, if I see some flickering in the periphery of my vision, out of the corner of my eye, I am actively going to look over there by moving my eyes in order to resolve uncertainty about what I was actually looking at. This is part of that action-perception cycle, the loop I was referring to before, which depends upon this fast exchange across the Markov blanket. From the point of view of FinTech, or of modeling financial markets, what that would mean is you don't need big data, you need smart data. You need the data that will resolve the uncertainty you have at the moment about what caused this particular state of affairs out there, the state of the market, for example. So being equipped with active inference, and active vision in the context of visual search, means that you are now in a position to use your sensory epithelium, your sensory states, very selectively and actively, sampling the world in the right kind of way to build a generative model of that world that is as parsimonious and as efficient as it can be.
Prediction Errors And Bayesian Updating
Brian "Ponch" RiveraWe'll probably come back to that financial market example in a little bit. I want to move on to adding what emerges out of the sensory states and moves towards the generative model, and what moves out of the generative model: in this case, going back towards the sensory states. It's still internal to the agent, within the boundary. We have a light blue line there; I think it's surprise, or we call them sensory signals. Can you talk a little bit about what emerges, or what is sent, from our sensory organs or sensory states to our generative model? What would you call that?
Karl FristonThere are a number of ways of describing that. Probably the most intuitive one is that afforded by a predictive coding formulation. What that simply means is that if you make some simplifying assumptions about the nature of the entropy, the randomness, and the other aspects of itinerancy of the first kind that you had on the first slide, if you assume they're well behaved and Gaussian, you can now formulate the message passing implicit in the cyan and blue arrows in terms of broadcasting prediction errors that drive updates in your generative model, which reciprocates with particular predictions. So in this view of active inference, which, I repeat, has a particular commitment to a predictive coding formulation, what you're saying is that all the sensory flux or flow through the sensory sector of my Markov blanket provides me with news about the states of affairs out there that are hidden from me and that I have to infer. (Did you say news, or new information?) News. I'm using an Andy Clark sort of intuition here. So then the question is: what is the newsworthy information? Well, the newsworthy information is that which I didn't predict. That's why we watch the news on our social media, or the news at ten on television: it's stuff that we can't predict. So that's the prediction error. Technically, it's simply the difference between what we're actually sensing and what we predicted we would sense in this moment.
And that's the newsworthy part, and that is the part that's passed back into the generative model, into what is usually a hierarchy of internal states that then plays the role of a generative model. Because if you think through what I've just said, in order to create a prediction error, I have to have a prediction. My generative model has to furnish a prediction of what I would see. So the generative model effectively holds in mind, or can be described as holding in mind, a putative explanation, a hallucination, a fantasy, a hypothesis about the cause of my sensations. If my generative model can then generate a prediction of what I would see if my hypothesis were correct (and that's basically the blue line you've got here), I can subtract that prediction from the sensations, leaving just what I didn't predict: the prediction error. And then I'm going to use that prediction error, which would be the cyan line, to drive my hypotheses towards a better explanation. Technically, this is known as Bayesian belief updating. So that would be another example of this recurrent message passing, these internal loops that are driving, in this instance, just making sense of the sensory states. I'm just wondering: you mentioned before that you wanted to talk about perceptual control theory, and this might be a nice point to bring that in. (Yeah, we can talk about that.) So at the moment your active states are sort of hanging outside this purely perceptual loop, and the perceptual loop we're talking about now is the recurrent message passing on the inside, with these prediction errors going deep into the internal states, deep into the generative model.
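The loop Friston just described can be sketched as a few lines of code. This is our own toy, assuming the Gaussian simplifications he mentions: the internal state furnishes a top-down prediction (the blue line), only the mismatch is passed back (the cyan line), and that prediction error drives the belief update.

```python
import numpy as np

rng = np.random.default_rng(1)
true_cause = 3.0        # hidden external state the agent never sees directly
mu = 0.0                # internal state: the agent's belief about the cause
learning_rate = 0.1     # stands in for a precision-weighted step size

for _ in range(500):
    sensation = true_cause + 0.1 * rng.standard_normal()  # noisy sensory state
    prediction = mu                                       # top-down prediction (blue line)
    prediction_error = sensation - prediction             # the "newsworthy" part (cyan line)
    mu += learning_rate * prediction_error                # Bayesian-style belief update

# mu now hovers near the hidden cause (about 3.0): surprise has been minimized
```

Each update is a tiny gradient step that shrinks the prediction error, which is the sense in which the hypothesis is "driven to a better explanation."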
Brian "Ponch" RiveraLet's do this, Karl: let's come back to that in a moment. We'll come to perception shortly, and I think it'll make more sense alongside perception-action. We've had Warren Mansell on the podcast to talk about that, and I think what we're drawing or sketching here connects to it as well. So let's put a pin in that and we'll come back to it. But I do have a couple of questions on what we're looking at now. You brought up Bayesian updating, and we talk about the Bayesian brain. When we talk about predictive processing and the Bayesian brain, is it just this part, or is there more we need to explore before we can say Bayesian updating is happening in the brain?
Karl FristonIf we're just looking at the way the brain uses sensory states to make sense of the world, I think it is perfectly sufficient to say yes, this is covered by the Bayesian brain hypothesis. But you'd have to qualify that, because there's no action yet. (So we haven't seen action, right?) Yeah. We're just talking about sense-making and perception. And I think it is perfectly fair to say that predictive coding is one particular implementation of the Bayesian brain when deployed to understand perception and sense-making. That just requires this recurrent exchange of predictions and prediction errors between the internal states that entail the generative model and the sensory states, exactly as you've drawn it here. As I said before, these prediction errors, the newsworthy part of the sensory information, change your mind by driving your hypotheses or expectations to provide better predictions, eliminating the prediction error and reducing the surprise. That drive, mathematically, is called Bayesian belief updating: literally driving or changing your hypotheses to better fit the sensory data.
Action Planning And Set Points
Brian "Ponch" RiveraI think our friends on Wall Street are going to love this conversation, because they use a lot of Bayesian updating in their models. Let's move on to this and get to the active part. I did skip over variational free energy in the interest of time; we may come back to that. The way we understand it is, and you just alluded to it, that prior to this slide there was no action, right? So we have an internal action loop and an external action loop, external to the boundary, to the Markov blanket. Can we talk about the orange pathway, which in this case goes from the active states and points back to the sensory states? It's still internal to the system. Can you tell us what's behind that?
Karl FristonRight. So we're now moving into the more interesting world where we're searching for the right sensations and deploying our sensory epithelia, our sense organs, in the right kind of way. We're doing smart data mining, if you like. But to do that we have to move our eyes, so we have to think about the mechanics of how we actually engage our active states, our actuators. Perhaps the simplest way of thinking about this is from the point of view of homeostasis, or more simply, of a thermostat. Suppose you want a thermostat to do something like switch a heater on or off, with the agenda of keeping the sensory states within viable bounds (the sensory state here being a thermoreceptor of some kind). Then all you need to do is tell the system its desired point, its set point. If you can specify the set point for a thermostat, you can just compare the desired set-point temperature with the actual sensed temperature, and when there is a deviation in one direction or the other, you engage your actuators, the red line here, to turn the radiator on or off. I'm phrasing that in a way which should be sympathetic to perceptual control theory. What we're saying is that if we can specify an active engagement with the world in terms of the preferred or predicted consequences of that action, we can then just use our motor reflexes, like a knee-jerk reflex, or, on the inside, autonomic reflexes, to make that prediction come true. And this is how we actually work.
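Friston's thermostat can be sketched as a tiny control loop. The numbers below are our own toy assumptions: specify the preferred sensation as a set point, and let the "reflex" act to cancel the deviation.

```python
# Hypothetical set-point sketch: the system "predicts" (prefers) 21 C, and its
# only job is to act so that the sensed temperature fulfils that prediction.

set_point = 21.0      # preferred / predicted sensory state
temperature = 15.0    # actual sensed temperature

for _ in range(100):
    deviation = set_point - temperature   # sensed prediction error
    heater_output = 0.5 * deviation       # active state: engage the actuator
    temperature += 0.2 * heater_output    # the room warms in proportion

# temperature has now converged on the set point
```

Note that nothing here plans ahead: the loop needs only the current deviation, which is why, as Friston says later, this kind of reflexive control gets by without expected free energy.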
Brian "Ponch" RiveraWell, how about this? Two questions. First question: for an individual doing some mental rehearsal, some simulation, would that be what this pathway is here, the orange pathway? Some type of planning, some type of simulation?
Karl FristonCertainly, if you mean rehearsal in the sense of simulating an imagined, counterfactual future. Absolutely, yes. That is a gift of very complicated things. I don't think that mitochondria or single cells would do that, but certainly you and I do, as do most higher life forms.
Brian "Ponch" RiveraSo people in peak performance, thinking through their complicated world of performance, may actually run that simulation. It could also be dreams, right? That low-fidelity dream being projected internally when I'm sleeping: could that be represented by the orange line?
Karl FristonYeah, absolutely. I think both of those things speak to a really interesting but slightly orthogonal issue, which is rendering your generative model as efficient as possible. The mental rehearsal of an elite athlete, spending much of their lived day physically doing what they do so expertly but also rehearsing it in their head, is an example of maximizing the efficiency of the model. Mathematically, this would be minimizing the free energy, or maximizing the marginal likelihood, the evidence for your model, which entails minimizing the complexity required to perform accurately. Things like sleep, introspection, and mental rehearsal of this sort are thought by some, indeed by most people in my field, to be the process by which we maximize the efficiency of our generative models by removing spurious or redundant aspects, so that we become habitized and very, very efficient, pursuing, mathematically, paths of least action. Then, when we actually deploy that generative model to predict how we're going to move or behave, it's much more efficient, and incidentally it will also generalize to slightly novel or new situations.
Brian "Ponch" RiveraAnother scenario. The three of us are portfolio managers working on the same team. We can kind of think of this orange pathway as risk management; I believe you call it expected free energy, and we're trying to reduce it in the future. So as long as we don't take any external action, if we talk about what we want to do in the market tomorrow or next week, we're actually rehearsing something that is within our blanket, if you put a blanket around the three of us, right? So can we think of minimizing expected free energy as a way of doing risk management?
Karl FristonYeah, literally, absolutely. You've jumped straight into expected free energy, which was courageous of you. We haven't even done variational free energy, but perhaps we don't need to. If you just look at the physics of this, it is the case that these self-organizing systems that have this deep structure, these deep generative models (literally in the spirit of the deep learning you see in things like transformer architectures and large language models), if they have this deep structure and the capacity to infer the consequences of their action, that tells you something really important. The consequences of the action are in the future, because they haven't yet happened, which immediately tells you we're talking about a very special kind of generative model that is future-pointing. So anything you generate in the context of simulating the consequences of your action is exactly what you were describing: it is rehearsal on the inside, because the consequences have not yet happened. This can be described as planning, as optimizing your policy, as strategizing, as selecting the best path into the future. And what's the best path into the future? It's the one that minimizes the expected prediction error if I did this or did that. Another way of reading that is that it minimizes the expected surprise. And that expected surprise comes in two flavors. It can be read as uncertainty: I'm always going to act, move, and indeed rehearse all the possible ways of strategizing or optimizing my policy in a way that maximizes the information gain, or minimizes my uncertainty. This would be the example from before: I would look over there to see what caused the flickering in the corner of my eye.
That would be driven purely by the aspect of expected free energy which is the expected information gain. There's another kind of surprise, which is the cost of some outcome that would be very surprising for me: being at a very low temperature, being very poor, having a massive drawdown if I were a portfolio manager. Those two aspects of surprise minimization in the future underwrite the expected free energy. But crucially, you can rearrange the terms of the expected information gain and the expected cost (the negative expected utility) to read as risk and ambiguity. So choosing the rehearsed or counterfactual policy that minimizes the expected free energy is mathematically the same as choosing the one that minimizes risk and also minimizes ambiguity. And the risk here is, mathematically, what's called a KL divergence, or relative entropy: long words for something which is actually very simple and easy to compute. It's just the difference between what I expect to happen if I do this (if I make this investment, say) and what I would consider the most likely kind of outcome for the kind of thing that I am, sometimes written down as your prior preferences. So if I think I'm a portfolio manager that is going to have a very reasonable rate of return, with a Sharpe ratio of, say, two, and I am not going to incur a negative rate of return or a drawdown on my investment, then I have some very precise priors about the kind of thing I am. And then I can say: well, if I did this, probabilistically, under my generative model, what would happen? And is that close to what I think I am?
And that degree of closeness, the measure of the distance between what I anticipate will happen if I did that and what I would prefer to happen, is the risk. So risk minimization, literally in the economic sense, now becomes installed into this expected surprise, or expected free energy. Another way of stating that is that a first-principles account of self-organization naturally surfaces risk as one of the key imperatives for sustainable behavior.
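Friston's "risk as a KL divergence" can be made concrete with toy numbers of our own (the policy names, outcome bins, and probabilities below are invented for illustration): risk scores how far each policy's predicted outcome distribution sits from the prior preferences, and the least divergent policy wins, which need not be the least volatile one.

```python
import numpy as np

def kl(p, q):
    """KL divergence D(p || q) between two discrete distributions (all entries > 0)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

# Outcome bins: [big loss, small loss, small gain, big gain]
preferences = np.array([0.01, 0.09, 0.60, 0.30])   # the outcomes this agent expects of itself

# Predicted outcome distributions under two candidate policies
policies = {
    "hold_cash":  np.array([0.01, 0.29, 0.65, 0.05]),  # low volatility, misses the gains
    "buy_equity": np.array([0.10, 0.20, 0.40, 0.30]),  # more volatile, matches preferences better
}

risk = {name: kl(pred, preferences) for name, pred in policies.items()}
best = min(risk, key=risk.get)  # policy whose predicted outcomes best match preferences
```

With these made-up numbers the more volatile policy comes out less risky, because it keeps the preferred big gains in reach, which echoes Mark's point below that risk is not the same thing as volatility.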
Mark McGrathThat's a much better definition than the widely used one of risk as just volatility. Yeah.
Karl FristonBut it's interesting, isn't it, that volatility induces uncertainty, which again pushes apart these probabilistic beliefs. So volatility is a really important aspect of what gets into this risk assessment, effectively, that you're doing during the rehearsals. Yeah.
Curiosity And Maximizing Bayesian Surprise
Brian "Ponch" RiveraSo there might be a special case here. I want to bring it up, and I'll do this slowly. The red pathway commits to external action; that action perturbs the environment and generates new sensory signals. If you think about some of the most adaptive agents, maybe in combat, markets, and surgery, they may deliberately choose an action that will generate maximum surprise. So we're talking about novelty seeking here. Is there a special case for this in the active inference and FEP world?
Karl FristonAgain, a great question. Yeah, absolutely. It allows me to dispel a particular myth about surprise. Very often you say all your sense-making (remember the cyan and blue arrows of the previous slide) is in the service of minimizing prediction errors, minimizing surprise. That's sensory surprise, and when you minimize that, that's the variational free energy. But when it comes to action, you want to maximize surprise. This type of surprise is called Bayesian surprise, and it is this that scores the information gain. It is exactly this kind of surprise that drives behavior, that explains information-seeking, curious, novelty-seeking behavior. So you want to minimize sensory surprise in perception, but when planning your next move, choosing what to do (predicting your active states, effectively), you want to maximize Bayesian surprise. You want to maximize that information gain. Remember, I was trying to say before that you can rearrange the terms. You can write this down mathematically with three terms in play, and you can arrange them in two ways. You can either bunch two of them together so that one grouping becomes risk and the other becomes ambiguity, or you can bunch them differently, so that one grouping becomes the expected information gain, which minimizes uncertainty and drives curiosity, and the term that's left is the expected utility, or negative expected cost, that you get again in economics. So economics has exactly the right sort of terminology and rhetoric, mapping beautifully onto the various decompositions of this expected free energy.
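The information-gain side of this can also be sketched numerically. This is our own toy, not Friston's formalism: expected information gain is the mutual information between the hidden state and the observation each candidate "look" would yield, and a curious agent samples the most informative one.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution, in nats."""
    p = np.asarray(p, float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def expected_info_gain(prior, likelihood):
    """Mutual information I(state; outcome) for one candidate observation.

    likelihood[s, o] = P(outcome o | state s)."""
    joint = prior[:, None] * likelihood
    marginal = joint.sum(axis=0)               # P(outcome)
    post_entropy = 0.0                         # E_o[ H(posterior | o) ]
    for o, p_o in enumerate(marginal):
        if p_o > 0:
            post_entropy += p_o * entropy(joint[:, o] / p_o)
    return entropy(prior) - post_entropy       # expected reduction in uncertainty

prior = np.array([0.5, 0.5])                   # two hypotheses about the world

look_at_flicker = np.array([[0.9, 0.1],        # outcome strongly depends on the state
                            [0.1, 0.9]])
stare_at_wall   = np.array([[0.5, 0.5],        # outcome tells you nothing
                            [0.5, 0.5]])

gains = {"look_at_flicker": expected_info_gain(prior, look_at_flicker),
         "stare_at_wall":   expected_info_gain(prior, stare_at_wall)}
best_look = max(gains, key=gains.get)          # the Bayesian-surprise-maximizing action
```

Looking at the flicker carries positive expected information gain while staring at the wall carries none, which is the formal sense in which curiosity-driven action maximizes Bayesian surprise while perception minimizes sensory surprise.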
Flow States And Fast Control
Brian "Ponch" RiveraSo I'm starting to see the real value of this on Wall Street, and we'll come back to that in a minute. If the math is there and we have a process we can use (and we'll talk about a process here in a moment), this changes how things could work on Wall Street. I'm going to move on to two things. We've talked about ecological psychology, ecological dynamics, and the constraints-led approach; we've had folks on the show talk about these things, and there's a place I'm going with this. In this image, at the moment, I have a pink arrow moving out of the generative model back towards the sensory states, and that represents perception. That's Anil Seth's approach: perception is top-down, inside out, a controlled hallucination. And I have a green arrow that moves in the opposite direction, towards the active states. All of these are internal to the boundary. So basically what I have here is a perception-action loop, which we talked about earlier. Can you talk a little bit about perception, and also about your work with Kotler and Mannino on intuition? I think you talked about type-zero, type-one cognition.
UnknownYes. Yeah, there are a number of ways that we could take this.
Karl FristonI think probably the most useful dialectic, or distinction, pertains to the distinction between reflexive behavior and intentional behavior. I know it's not quite what you were driving at, and you'll have to guide me back to it to speak to what you were talking about. But suppose you have in mind reflexive behavior of the kind that Warren Mansell would be very fluent talking about in the context of perceptual control theory, where I just need, in a quick, fast, and frugal way, as people like Gigerenzer would say, to equip my actuators (the things I actually use to move around) with the right set points, or more specifically, to equip my perceptual apparatus with the right set points and then let the reflexes do the job of making that come true. Then you're talking about something that doesn't need to do the planning we've been discussing. You don't need expected free energy. You can do all of this with the variational free energy, fast and reflexively, in a very efficient way. That is what a skilled athlete would be driving towards: they don't have to think about it. So it's system one versus system two thinking, basically. On the other hand, we've just been talking about deliberative thinking: planning, rehearsing into the future, evaluating different policies, and looking at the different ways of evaluating them, in terms of expected utility, risk, ambiguity, and expected information gain, all different perspectives on the same underlying quantity. But that's a very different kind of thinking.
Now imagine that you've got the opportunity to do this sort of long-range forecasting in your head, contingent upon one of a small number of policies, or possibly a large number, and you're picking the policy that conforms to risk minimization, ambiguity minimization, or maximum information gain and expected utility. Imagine now that you can contextualize how far into the future you go. You can decide whether you actually need to think far out about this, or whether you know exactly what to do in the next moment, by shrinking that time horizon down and just responding reflexively. And I'm thinking here of the work on things like flow. Being in the flow means you're not planning into the distant future. If you're a skilled mountain climber, being in the flow means you are not thinking about which restaurant you're going to when you get off the mountain. You are in the moment. That basically means the future is no longer part of your generative model; you're just choosing the very next thing to do, very much like a thermostat, or very much like a small insect would: acutely fast, frugal, knowing exactly what it is and exactly what to do next, without having to worry about the future. There's no uncertainty about the long-term future. So perhaps one way of reading the distinction between the pink and green arrows would be to go back to Kahneman's system one, system two distinction, but read in this instance as something much more reflexive. Some people call this "mere" active inference. You're not now talking about the planning aspect, the risk and ambiguity ways of scoring the consequences of a plan or policy; you're immediately responding in the way that you have always responded.
And interestingly, what tends to happen if you simulate these things is that initially you start off with the green arrow, with deliberative thinking: thinking your way through the problem (what would happen if I did this, and is this the best thing to do?) and then choosing the most likely thing to do, given the kind of thing you are. Then you see yourself doing this time and time again, which brings us back to the elite athlete rehearsing over and over. If you see yourself doing exactly the same thing in exactly the same situation, you will slowly habitize and learn it. So you'll actually go from a system two kind of thinking to a system one kind of immediate, fast, frugal response. And I imagine that's probably another perspective on the importance, not so much of dreaming or introspection in this instance, but of just repeatedly doing the same thing time and time again until you habitize it, effectively.
Predictive Versus Prospective Control
Brian "Ponch" RiveraI want to go to perception in a moment, but first I want to stay on the green pathway, what we're calling perception, intuition, and process. The reason I'm calling it a process is that if we scale up to a team, and the three of us have a good shared understanding of a process for how to plan, meaning we reduce the energy required to figure out how to go through a planning process before simulation, then that process feeds back down into here, right? So we can actually run the simulation. It doesn't go external to the environment right away; if we put the boundary around us, we run an effective approach to planning, I'll just say that. Another way I look at it is that we can run that simulation internally. And going back to your elite athlete: they have a reflexive approach to the triple jump, or to running a play, or whatever it may be. I think they call it habits of mind; we don't call it muscle memory anymore, I think that's gone. They have this reflexive capability, and they can simulate it internally to improve future performance. So that's kind of what we're doing here, both at the individual level and at the team level: we haven't done anything out in the environment yet; we're keeping it internal. And that's how I'm thinking this scales up to a team. Now I'll switch over to perception, and I'll let you comment on both here in a moment. Perception, the way we understand it, emerges out of something, and it actually controls what we sense, right? Or I think it does. It may filter what we allow into our system, and I'm talking about us as humans at the moment. Can you comment on that and help us understand it a little better? Making sure we're not lying to anybody.
Karl FristonNo, you're not lying. That was very nicely put. Yeah, absolutely. In fact, it couldn't be any other way. I'm talking about the importance, on the inside, of passing back the consequences of your plans to be registered as sensed outcomes. It couldn't be any other way. Just think about deploying these ideas in the context of, say, neurodevelopment. I said before that the way that you measure risk is to use the difference between what I think will happen if I do this versus what I a priori expect to happen given the kind of thing I am. So I'm not talking about declarative or propositional beliefs, I'm talking about just the way I am built. These are subpersonal; this is made at a subpersonal level. Exactly the same thing applies to an infant. But how can you tell an infant what it prefers? The only thing you can tell it is basically the preferred sensory states, the preferred observations. Why? Because at this point it hasn't got a joint model. It doesn't know that there is an opportunity to settle at the mother's breast; it doesn't know there's an opportunity in later life to make friends at kindergarten. It just hasn't got these representations. So you can't specify these prior preferences, these innate preferences, the thing you need for your homeostasis, in the space of the latent states that is the realm of the generative model. It has to be in the sensory states. So you have to specify your preferences in the currency of your sensory observations. And that basically means you can only evaluate the quality of a particular plan once you project it back into that sensory space, that outcome space. So what has now become a green arrow on the inside is an essential part of this risk evaluation.
So the probability distributions I was talking about before, over the anticipated outcomes given an action, and the prior preferences that define the kind of investor or person or infant or phenotype that I am: those are probability distributions that are compared against each other, and that has to be done in the future, on the inside, in order to evaluate the quality or the risk associated with any given policy. Those probability distributions, crucially, are over the outcomes. They're over the consequences of your behavior. We were talking about rate of return before, for example. So you can only specify your preferences about the consequences that you actually observe, which is how much money you make or lose if you're the kind of thing that would qualify as a portfolio manager, for example.
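The risk Friston describes, a divergence between what a plan is predicted to deliver and what the agent prefers, both expressed over outcomes, can be written down in a few lines. The outcome labels and the two candidate plans are invented for illustration; only the divergence-between-predicted-and-preferred form comes from the conversation.

```python
# Risk as the divergence between predicted and preferred outcome
# distributions. Outcome labels and plan distributions are illustrative.
from math import log

def kl(q, p):
    """KL divergence D[Q||P] between two discrete distributions
    over the same outcomes (dicts mapping outcome -> probability)."""
    return sum(q[o] * log(q[o] / p[o]) for o in q if q[o] > 0)

# Prior preferences: the outcomes this kind of agent expects to see.
preferred = {"gain": 0.7, "flat": 0.2, "loss": 0.1}

# Predicted outcome distributions for two candidate plans (made up).
plans = {
    "cautious":   {"gain": 0.5, "flat": 0.4, "loss": 0.1},
    "aggressive": {"gain": 0.4, "flat": 0.1, "loss": 0.5},
}

# Risk of each plan = divergence of its predicted outcomes from the
# preferences; the least risky plan is selected.
risk = {name: kl(q, preferred) for name, q in plans.items()}
best = min(risk, key=risk.get)
```

The comparison has to happen in outcome space, as Friston stresses: both distributions are over observable consequences (here, gain/flat/loss), not over hidden states.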
Brian "Ponch" RiveraOkay. Just in the interest of time, we've got a lot of questions we want to get to, so I'm going to continue to build here. Predictive versus prospective control: I want to talk about this briefly. You co-authored a paper that lays out the different accounts of anticipatory behavior. You wrote it with some of the folks who've been on the show, Dr. Hipólito, Bray, Warren, and I think a few others. So, my view on this, and I think Mark's view as well, is that what we just drew encompasses what ecological psychology says happens when somebody's throwing a baseball toward their kid: they don't have to think about it, it's just the affordances in the environment. We talk about constant bearing, decreasing range in fighter aviation; that's how we know we're going to hit something and/or catch it. So is it possible that these all work together in models, or can you talk a bit more about the conflict between predictive and prospective control?
Karl FristonYeah. I'm going to read it in the way that you've set out the slide and set out the question, because I haven't heard it put like that before, but I think it's an excellent way to put it. The predictive aspect, I think, would be very consistent with a perceptual-control-theoretic perspective. It would be mere active inference; it would be the system-one reflexive kind of response, which would only involve very fast and frugal, what Gigerenzer would call heuristics, that are entailed just by engaging state-action policies, so that immediately I see this, I infer this state of affairs, and therefore I know exactly what I'm going to do, because this is the kind of thing I am, and I will send my set points down to my sensory epithelia and let my reflexes do all the rest. And the obvious example here is that I only need to expect to see my hand decrease the angular disparity between my hand and the ball, for example. I don't need to deliberate about that, I don't need to think about that. I could write down a machine to do this very quickly and very efficiently. And that's the exact opposite of what we're talking about in terms of this ability to plan, to think through the consequences in a prospective way: what will happen in a few seconds' time or a minute's time if I do this or do that. So the distinction here, I think, between predictive and prospective control is that prospective control takes you outside radical enactivism and perceptual control theory. There's no room in perceptual control theory for people who actually think about the consequences of their actions in the distant future. And that's the system-two kind of thinking where you get the risk and ambiguity of the expected free energy in play. So active inference is not in the radical enactivism camp.
It does certainly accommodate, or acknowledge, that 99% of the way we actually move and engage with the world is beautifully described by perceptual control theory. If you're an engineer, this would be known as model predictive control. If you're a physicist, it would also be known as KL path integral control. These are the standard ways in which you build robots, in which you build drone interceptors and the like. And that's a really efficient way of doing it. But that doesn't work in a more complicated situation where you've got a number of alternative paths into the future. That means you now have to choose or select among a number of alternative paths into the future, and that future I'm associating with the word prospective control. That now requires risk-sensitive or path integral or KL control that has the ability to choose between a number of counterfactual futures. So at this point, that's what I meant by intentional behaviour: I intend this kind of outcome, and I've selected this particular plan, this policy into the future, over another one. Predictive control doesn't have that option. There's only one path into the future, and that's a path of least action, or maximum efficiency, or minimum effort, or maximum likelihood. And that has great efficiency, and it is perfectly okay when the system has a certain kind of short-term predictability. But in other situations, where your actions have consequences in the long term that need to be predicted, then we have to move to this more prospective kind of planning as inference. An insect wouldn't have the choice; an insect would just be in this kind of predictive control. But you and I have a choice.
We can either be in the flow and do the predictive control, or we can stop and think about things, deliberate on what's the best thing to do, and that would be much more of a prospective control.
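The predictive, system-one mode Friston likens to perceptual control theory is just error-nulling feedback: sense a disparity, act to shrink it, with no evaluation of alternative futures. A minimal sketch of the catch-the-ball heuristic, with the gain, step count, and dynamics chosen arbitrarily for illustration:

```python
# Predictive control as pure error-nulling feedback: the controller
# only knows its set point (zero angular disparity between hand and
# ball) and reacts to the current error. Gain and step count are
# arbitrary illustrative choices.

def predictive_control(disparity, gain=0.5, steps=20):
    """Repeatedly act to reduce the sensed disparity toward zero."""
    trajectory = [disparity]
    for _ in range(steps):
        action = -gain * disparity  # reflex: oppose the error
        disparity = disparity + action
        trajectory.append(disparity)
    return trajectory

traj = predictive_control(10.0)
```

There is only one path into the future here, which is exactly the contrast drawn above: prospective control instead enumerates counterfactual policies and selects among them.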
OODA Loop Reframed As Inference
Brian "Ponch" RiveraWell, we're big fans of flow, so we like that green pathway. Okay, let's connect everything, and we'll get to some good questions here. This has been fascinating, and I think we could go on for about 10 more hours if you had that much time. We won't do that to you. Okay, so here we have the overall drawing. We have a fitness landscape, an environment full of hidden states. You call it, are they called noumenal? Am I using that word right, for the environment? I think noumenal, yes.
Karl FristonThe phenomenal and the noumenal. Yeah, that's very Kantian of you. Again, you know more about this than I do.
Brian "Ponch" RiveraOh no, no, no. We're still learning here. So here we have the boundary, the Markov blanket. We have it separating the agent from that environment. We have the sensory states on the left, we have the active states on the right, sending actions out. We have a generative model doing Bayesian inference, sending predictions down, receiving prediction errors up from the sensory states, I believe. We have the active inference cycles, if you want to call them that, one internal, one external. And we have two implicit pathways, and that's what we're calling them, that pink arrow and that green arrow. The pink one we're calling perception, and the green one more of that intuition, that type-zero, type-one cognition you worked out with Kotler. I want to share something with you; I'm going to build something up, and this is the surprise for you. Here comes the build. This is actually what we call the OODA loop: observe, orient, decide, act. This is what John Boyd gave us, and I'm going to let Mark talk a little more about that. So this actually exists, but most people don't know anything about it, right? They turn it into a linear thing. They go O, O, D, A; they think the OODA loop is as simple as first you observe, then you orient, then you decide, and then you act. What we just accomplished in this hour and 20 minutes we've been together is to show them that's not how this works, right? We have so much more behind this, and we believe this is, I'm going to use the phrase, a weapon system: a way to understand how living systems engage with a changing environment. And unfortunately, a lot of folks go out in the world and share the linear version.
We believe that the work John Boyd did, which includes looking at cybernetics, Gödel's theorem, Heisenberg's uncertainty principle; he looked at mysticism, he looked at quantum physics. I'm not going to go through a lot of detail there, but he was on the same pathway that you're on. And I believe what you did is you actually formalized the maths behind all this, right? And to me, that means that you're on the right path; the folks that are looking at this are on the right path. Karl, question: have you ever seen the OODA loop or heard anybody talk about it?
Karl FristonI've heard people talk about it, but you'll forgive me, I'm somewhat parochial in my expertise, and I don't have the breadth of your scholarship. But that seems very convincing. Interestingly, that's exactly why I had that second allergy to the notion of flow, this notion that things are just flowing from left to right. That's exactly what I was trying to counter, in exactly the way that you're describing a misconception of the OODA loop. There's lots of recurrent circular causality of a cybernetic sort on the inside, yeah.
Platonic Space and Multi-Scale Selves
Brian "Ponch" RiveraI want to point out one thing on the left here and ask you a question, and then I'm going to turn it over to Mark for a lot of questions. On the left, we didn't have this earlier: we have unfolding circumstances, which we believe is coming from other agents that are trying to maximize our surprise. So if we're competing, we want to create mismatches for others, right? That increases their surprise while minimizing our surprise. But there's another one here: outside information. You've been working with Michael Levin; I've seen you on some of the podcasts there about platonic space. Can you spend a couple of minutes talking about what platonic space is and how it relates to the free energy principle and active inference?
Karl FristonWell, the agenda that Mike's pursuing is trying to find a first-principles account of self-organization which would be apt at many scales, certainly in a scale-invariant way, and looking toward theoreticians and mathematicians to try to find these, and I hesitate to call them that because I could get into trouble, but sort of platonic structures that explain self-organization in an ensemble, in a multi-agent setting, in a multicellular setting, in a scale-free setting. So it's interesting you brought in the notion of two active inference agents talking to each other, or perhaps a community of active inference agents, which would now have their own Markov blanket, and then you get Markov blankets of Markov blankets. I say that because I think the intersection, or the connection, between Mike's platonic program and the free energy principle is exactly the application of the principles of the free energy principle at multiple scales, in a scale-invariant or self-similar fashion. I think that's the right maths that Mike is searching for: that everything is contextualized by the scale above. And this becomes incredibly important when you're thinking about communities of things, whether they're cells or investors or populations, or communities exchanging via social media, and the formation of Markov blankets at different scales, the formation of in-groups versus out-groups. So I think that's probably the point of connection, which I imagine you probably want to explore. Yeah.
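The "Markov blankets of Markov blankets" idea, the same blanket structure recurring at every scale, can be sketched as a recursive composition: a group of agents is itself an agent, with its own boundary around its members. Everything below is a structural toy with invented names, not the mathematics Friston and Levin are pursuing.

```python
# Toy of scale-free nesting: a community of agents is itself an agent
# with its own blanket. Purely structural illustration; names invented.

class Agent:
    def __init__(self, name, members=()):
        self.name = name
        self.members = list(members)  # sub-agents inside this blanket

    def depth(self):
        """Number of nested blanket scales at and below this one."""
        if not self.members:
            return 1
        return 1 + max(m.depth() for m in self.members)

cell = Agent("cell")
person = Agent("person", [Agent("cell") for _ in range(3)])
group = Agent("in-group", [person, Agent("person")])
```

The same class describes a cell, a person made of cells, and an in-group made of people, which is the self-similar, scale-invariant composition being described: each scale is contextualized by the scale above.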
Brian "Ponch" RiveraOh, I'd like to know if we can keep you for at least 20 more minutes. Is that possible? Of course. Yeah. Okay. So I do want to get into markets; I want to talk a little more about flow, briefly on psychedelics, AI, and consciousness. I know that's about six hours right there. Yeah.
Brian "Ponch" RiveraBut before I do that, I want to turn it over to Moose and check in with him.
Mark McGrathI just wanted to go back to the sketch, Ponch, and get Karl's take on it, because there is so much overlap when we look at the sketch. And this is what he called it. He only drew it once. Ponch and I have seen all the various iterations of this inside the archives, but this particular version was done around 1995, right before he passed in 1997. And of course, we wonder what would have happened had he lived a little longer than age 70. But when we look at that sketch, the generative model in the middle is what he called orientation, and it's shaped by our genetic heritage, our cultural traditions, our previous experience, new information, etc. And if you'll notice, back to your point about why you shirk away from flow because things just don't go in a circle, we go away from the reductionist model of Boyd's work because there are so many arrows pointed in so many directions on this sketch, including inside the generative model. All of those things, if you look closely, are pointing at each other, as if they were firing off simultaneously as they interact with reality outside of their boundary. In the other piece, we could send you over the brief, but his primary collaborator was a guy called Chuck Spinney, and he has a deck called The Evolutionary Epistemology of John Boyd's Destruction and Creation. And when he draws the OODA loop, he includes the boundary inside the graphic. Now he doesn't have it quite as a Markov blanket, but you see the green shaded area on the right-hand side of the graphic.
First of all, you can see the contrast with the reductionist, garbage-consulting circular OODA loop, which does not account for complexity; it doesn't account for all the other things that Boyd put into this. As Ponch mentioned, it was a very interdisciplinary thing: cybernetics, warfare, history, philosophy, quantum, entropy, everything. And the core of his Destruction and Creation was that we're constantly breaking and shattering and revising that generative model, because it shapes how we sense; it shapes our sensory states. That's what he called our observations. So whereas you say the sensory states and then the active states, the active states to Boyd were the test of a hypothesis, and the hypothesis would be the decision. And what people have done with this is reduce it to the point where it says, well, I'm in this phase and I go to this phase and I go to this phase. But we know that complexity is not linear, and Boyd knew that. Boyd never said that it was, and Boyd drew it this way very specifically. And I think what we get in this part that we've spent the last hours going over is that there's a beautiful synergy, an affirmation of what he was actually pursuing, completely autodidactically. This was a man doing all of this and coming up with it in his free time, if he had any free time. But that's the result. And that's why active inference and the free energy principle resonate so much with us. It's nice to see all the mathematical backing, because I think at a minimum we could argue that Boyd inherently understood this, that he implicitly understood how complex adaptive systems interact with their environment.
Karl FristonSo did he actually use the words hypothesis and test for those two?
Mark McGrathHe did, yes. When he drew out the sketch, he used those exact terms. Which, by the way, to your point, shatters the linear reduction of it, because a decision, framed that way, is not a hypothesis; it's just a step that I phase through in a circular fashion. But yes, he did use hypothesis.
Karl FristonYeah, that's my frame, my way of thinking about the free energy principle. I talked about inference to the best explanation, but you could equally frame that in terms of hypothesis testing. So all my active engagement in the world is just soliciting the right observations that enable me to disambiguate and test my hypotheses. Yes. So this is isomorphic with active inference.
Mark McGrathThat's what Ponch and I have found; Ponch is the one that showed me this stuff, and it clicked with me clearly, and of course our thinking develops and evolves, as Boyd would have wanted. The one other part I would point out is the area where it says implicit guidance and control, because that's the second part. So when we talk about explaining John Boyd's OODA loop sketch, we talk about that generative model, the orientation. It starts there. That in turn shapes, or implicitly guides and controls, how we make sense of our environment, and in some cases how we act. There are certain things that I can do because I've done them so many times, like catch a baseball or throw a baseball; I don't have to go through any kind of linear thinking process to know how to do that. It becomes inherent to me, so that my energy expenditure is actually lower, because my orientation, which is attuned to that, implicitly guides and controls how I would act, how I would do something. Same thing in how I would sense something, even intuitively. My orientation is so well attuned to the environment, through my previous experience, through the unfolding circumstances, through how I break things down and put things back together with novelty, that I can sense things and feel things faster and have a lower energy expenditure, because my orientation, through implicit guidance and control, is shaping how I see things. And if I can do that faster than my competitors, or faster than the rate of change, that's what Boyd said would give me the capacity for free and independent action. That was ultimately what he said in his core document, his core doctrine, called Destruction and Creation.
The reason that we break all these things down and revise and update is to implicitly guide, control, and shape, in order to improve our capacity for free and independent action. So the more attuned we are, the faster we can get, and the faster we can get, the better we can defeat our competition. But it's in no way limited to competition; that's just one application of it. Yeah.
Karl FristonYeah. Just to endorse that: in my world, much of my thinking inherits from Helmholtz, obviously, but also, in how you write down the maths, from Richard Feynman. The variational free energy is the one that Richard Feynman used, and it is finding the path of least action. It is exactly what you've just said. It's finding the most efficient path, in virtue of the action being time times energy. So if you find the path of least action, you are effectively doing the same thing quicker, or doing it with less energy. That is the definition of the path of least action, which is why the free energy principle is called a variational principle of least action. You also mention free and independent action; it's quite nice that it's got free and action in the same phrase, which of course speaks directly to the free energy principle: it's effectively a principle of least free action.
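Friston's point that the path of least action is the most efficient path can be illustrated by scoring discretized candidate paths by their accumulated cost and picking the cheapest. The three paths and the quadratic effort cost below are made up for illustration; only the select-the-least-costly-path idea is from the conversation.

```python
# Path of least action, discretized: score each candidate path by the
# effort accumulated along it and select the minimum. Paths and the
# quadratic 'effort' cost are illustrative assumptions.

def action(path, dt=1.0):
    """Accumulated cost of a path: effort (squared step size) x time."""
    return sum((b - a) ** 2 for a, b in zip(path, path[1:])) * dt

# Three ways of getting from 0 to 4 in four steps.
candidates = {
    "steady": [0, 1, 2, 3, 4],  # even steps
    "bursty": [0, 0, 0, 0, 4],  # all the effort at the end
    "wobbly": [0, 2, 1, 3, 4],  # backtracking wastes effort
}

least = min(candidates, key=lambda k: action(candidates[k]))
```

All three paths reach the same goal; the steady one accumulates the least action, which is the sense in which minimizing action means doing the same thing with less energy.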
Mark McGrathWell, we'll send you the paper, Destruction and Creation, because that's what we find when we interact with people who reduce his understanding and what he actually taught, the authentic version of what he was teaching. We find in most cases that not only have they never read Destruction and Creation, they don't understand it. And I know that when you read it, it'll click, and you'll see the sources that he had, like Popper and Polanyi and even Alan Watts. It was pulling from an extremely interdisciplinary approach to how we make sense of our world, by shattering and revising and updating our understanding of it in order to get quicker, get faster, get stronger, get better, whatever it takes to improve that capacity for free and independent action.
Karl FristonCertainly, if he inherited some of these ideas from cybernetics, I think we'd also recognize that. You could regard a lot of the active inference formalism as basically a 21st-century version of cybernetics as we had it in the 1950s, probably.
Brian "Ponch" RiveraWe do have one question on orientation, or the generative model, that we have to ask, and that has to do with whether it is local within the boundary or whether it goes into 4E cognition. Is our generative model, and I'm talking about mine right now, and Moose's and yours, contained within here, or is it all around us?
Karl FristonIt's contained within there, but it's about all around us. That may sound quite a facetious response, but I think it's quite important, because it is the internal states that can be read as encoding the sufficient statistics of your Bayesian beliefs that constitute your generative model, of this probabilistic, mathematical sort, such that you can now measure the distance between different probabilistic beliefs, like risk. So in saying that, you are saying that the actual generative model is encoded or instantiated on the inside. But the space that it covers, the beliefs that are entailed by that internal biophysical encoding on the inside, are about the outside. That's the whole point of the Markov blanket: it allows you to interpret the inside as a model of the outside. So now you can talk about the information entailed by certain neuronal activity, or brain activity, as information processing of a particular kind, where it's information about the outside. This is beyond Shannon information. This is information about the outside, which can be read exactly in the spirit of the hypotheses we were talking about in the context of the previous slide. These are hypotheses about what could have caused my experimental results; they are beliefs that I have on the inside about the outside causing my experimental data, for example, the data that I use to test my hypotheses about the outside. So in that sense, it entails a synchrony, a coherence, between the inside and the outside, simply because if you've got the right model, then what's going on on the inside is a reflection of what's going on on the outside.
And then we get back, not to the law of requisite variety, but to its sister, the good regulator theorem from cybernetics: in order to regulate and engage with my environment, I have to be a good model of that environment.
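The good regulator theorem can be shown in miniature: a regulator that carries the environment's true dynamics as its internal model holds a set point exactly, while one with a mismatched model cannot. The linear dynamics and all coefficients are invented for this sketch; it is an illustration of the theorem's spirit, not a proof.

```python
# Good regulator sketch: the regulator predicts the environment with
# its own internal coefficient a_model and acts to hit a set point.
# Only when a_model matches the true dynamics does regulation succeed.
# All coefficients are illustrative assumptions.

def regulate(a_true, a_model, setpoint=1.0, x0=0.0, steps=30):
    """Environment: x <- a_true*x + u. The regulator picks u assuming
    x <- a_model*x + u, aiming for the set point. Returns the mean
    absolute regulation error over the run."""
    x, err = x0, 0.0
    for _ in range(steps):
        u = setpoint - a_model * x  # action implied by internal model
        x = a_true * x + u          # what the world actually does
        err += abs(x - setpoint)
    return err / steps

good = regulate(a_true=0.8, a_model=0.8)  # regulator models the world
bad = regulate(a_true=0.8, a_model=0.2)   # mismatched internal model
```

The regulator whose inside mirrors the outside achieves zero error; the mismatched one settles away from the set point, which is the "good model of the environment" requirement in action.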
Markets, Attractors, and the Platonic Space
Brian "Ponch" RiveraWell, I would love the title of this episode to be Karl Friston Explains John Boyd's OODA Loop, but we'll figure that out later. Shifting gears, let's talk about markets. You've been talking about markets and attractors recently. We brought up Patrick Schotanus's work on the market mind hypothesis. Can you tell us what you're seeing and what you can share with us about maybe using active inference AI, maybe using the free energy principle and active inference, toward markets and attractors? What's so special to you about that?
Karl FristonWell, that's a really interesting question. I think I'd answer it at two different timescales. So Patrick, and like-minded theoreticians in economics, are pushing for a move, effectively, away from behavioral economics to cognitive economics, to put agency and incentives back into the game again. And I would wholeheartedly support that from the point of view of cybernetics, or the OODA loop, or active inference. To understand a complex system such as the markets, and the engagement of investors with that market, and constituting that market, I think you have to have this more nuanced view of the way people behave and exchange. So Patrick's been pushing for this, writing books about it, and trying to move orthodoxy in the way that economics is taught and applied in that direction. The other, more short-term thing is the particular applications of the free energy principle that we've been pursuing from a more pragmatic and purely theoretical perspective. And it's a really interesting domain, because it is exactly a complex system that has been co-constructed by lots of agents and exists at a particular scale that is distinct from the agents that constitute the market, certainly free markets, for example. And yet there is an opportunity to write active inference into this complex-system modelling in a very explicit way, because you can now focus in on the activity of an investor or an investment team, a portfolio management team, for example, because you can now describe their behaviour as consistent with, or as driven by, the principles of active inference.
And if one does so, then one gets into the game of exactly what we were talking about before: explicitly being able to compute and evaluate and simulate their behavior through the minimization of this risk, where risk is formally defined as the KL divergence between their prior preferences and what will happen if they did that. To do that, though, you have to install a generative model of the free market. Now that's an interesting problem, because if you can solve that kind of problem, then you're in a position to deploy that kind of generative model to all sorts of complex systems: healthcare provision, climate change, food security, loads of really important complex systems that, if you had a good generative model of them, you could do the forecasting that is requisite to do exactly the kind of projecting into the future, the counterfactual processing, the rehearsal, that we were talking about theoretically when going through the slides. You can now do that. You can evaluate the expected free energy; you can evaluate the risk numerically. We don't have to worry about the ambiguity, because there's not much noise on fintech data. So the risk we can actually evaluate formally for different kinds of policies, and the temperature scheduling. So from my perspective, as somebody who knows nothing about financial markets, it's a wonderful opportunity to test out these ideas, both about the universal constraints on generative models of complex systems in which we participate and which we co-construct, and yet are too big for us to influence in any meaningful way as individuals, and at the same time to test hypotheses about motivated or intentional behavior being motivated by the components that quantify the risk that we were talking about.
And it's also an opportunity to connect the dots between risk-sensitive control and path integral control in an engineering context and the way that economists talk about risk, ambiguity, and uncertainty, for example.
Brian "Ponch" RiveraI'm going to go out on a limb here. I'm going to push this up a little bit, and it might be a little too extreme, but let's build a snowmobile. So, real fast: we've talked about attractors, we know a little bit about affordances in the environment, and you follow Patrick's work on harmonics and what he talks about in his book. We know a little bit about the platonic space; we've talked about that briefly with Michael Levin. And we haven't talked about consciousness yet, but let's bring that all together. Is it possible that collectively we're projecting what we call the market? And if that's true, then what's in our, I'll call it our orientation now, the way we're designed, may be seeking those attractor states, those levels in the market that things bounce off, or go to, or roll through. They may be discoverable. Is that possible? Like I said, I'm going out on a limb here, but based on those things we've talked about with Patrick's work, attractor states, fitness landscapes, and the platonic space: is it possible that we could build better internal maps of the external environment when it comes to the market?
Karl FristonI think so, yes. And I think you used all the right words there. Technically, that notion of an attractor is absolutely crucial. There are some really fundamental constraints on your generative model, your orientation, just by acknowledging that there is an attractor in play. If there's an attractor in play, that tells you a lot about the nature of the dynamics that would constitute the priors of your generative model. And these priors have never been leveraged before, to my knowledge, in macroeconomics or financial modelling. People have come a little bit close by putting 20th-century physics and equilibria into play; dynamic stochastic general equilibrium modelling in financial markets starts to get a bit close, but it misses the point that the market is an open system and therefore cannot be modelled with equilibria. You have to move to non-equilibria, to the kind of physics that the free energy principle deals with: open systems with non-equilibrium dynamics. Why are they open? Because there's a Markov blanket which is porous. So now, if you put the maths of non-equilibrium self-organization into play, you have the bare bones of a generative model, an orientation, that any one person or any one group, if they're sharing that narrative or that generative model, could then use to do their own policy evaluation, or indeed offload that onto a computer with the right kind of model.
Brian "Ponch" RiveraI tell you what, Karl: you can call David and Michael at any minute and just say, hey, we want to go forward with this, we're ready to go. I'll just say that, and we'll keep that in this episode. But we don't want to go too deep into this. We're looking at this right now, and it's unbelievable, to be honest with you. Anything, Moose, on markets at the moment?
Mark McGrathSo much of it is hard to top. I can't top it, other than to say I'm really glad, Karl, that you said all that. It was not the last part you said, but the part right before that, that really resonated with me, and I've lost it all of a sudden.
Brian "Ponch" RiveraBut I think the connection to Mises, is that what you're talking about?
Mark McGrathYeah, that's it. Because, you know, Hayek and Mises are often dismissed because they're not mathematical economists, and when you study those things that are axiomatic, that are a priori, and there are no econometric visuals and things like that, people dismiss it. That's what I wound up getting my master's degree in, motivated by Mises and Hayek and others, because as a Bachelor of Arts in history I found that economics in that way made more sense, specifically around the actions and decisions of humans. And some of the parallels: right before I met Ponch, I was working with a collaborator friend of ours, a Cambridge economist who went to Cambridge as you did, and we were actually connecting Boyd and his concept of orientation with Mises and Hayek, specifically around human action and what Mises called the praxeological axiom. So to hear you say that was really refreshing.
Brian "Ponch" RiveraWell, let's move over to psychedelics and flow. So I mentioned Robin Carhart-Harris's work with the REBUS model and the entropic brain hypothesis. I think that when people are in a meditative state, or working out, or even dreaming, they're in some type of flow state. You get access to, you really relax, your default mode of seeing the world, and you get to revisit things through counterfactuals, and that would be that orange pathway we talked about. That's just my rough sense of how I think it works. But can you talk a little bit about how psychedelics affect active inference, and what you think is going on?
Karl FristonYeah, well, there's a deep link between the action of psychedelics and the changes in the computational architecture of your brain when you engage in, say, mindfulness or meditation, and presumably in doing lots of sport. I think the simplest way to conceptualize this is to revert to the predictive coding metaphor, or implementation, of this orientation, where you're getting the newsworthy part of the sensory observations in order to update your generative model, to test your hypotheses ultimately, and to realize that there is a lever, a knob to turn, on the degree to which you pay attention to certain observations relative to what you have already learned and know a priori. And this knob is known by various names. In my world it would be called precision. It's just the inverse volatility: the inverse variability, or dispersion, or variance. So the complement of that is something that is very precise, something which you can imbue with a high degree of credence or reliability or confidence. Reading precision as confidence is probably the simplest way. So what we're talking about now is that if you want to use orientation as a word for the right kind of belief updating on the basis of observations, in the service of testing hypotheses, then you also have to be very careful about adjusting the balance between the precision afforded the observations relative to the prior beliefs you've accumulated in your generative model. If you afford the observations too little precision, then all of your sense-making will be dominated by your prior beliefs, and you'll miss stuff.
If, on the other hand, you afford too much precision, or confidence, or credence to the observations, the evidence at hand in relation to your hypotheses, then you're going to subvert your prior hypotheses, and you're going to be exposed to overfitting and possibly even false inferences that are not constrained by your prior beliefs. So that balance is really important. For those people who read Andy Clark: we're talking about precision-weighted prediction errors, the precision weighting of a prediction error. This is a second-order aspect of active inference or predictive coding. It's not just about the prediction; you've also got to predict the predictability, the confidence. And this has, again, some beautiful connections with the markets, in terms of people worrying that the markets have lost confidence. Volatility models in economics are all about estimating and predicting fluctuations in the precision of various things. From the point of view of an experimentalist or a scientist, it's basically: how much weight do you give your experimental results in relation to your convictions about your prior hypotheses? If you get that wrong, you can expose yourself to all sorts of psychopathology. Sometimes it's useful to relax the precision of deeply held prior convictions if they're no longer fit for purpose.
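The balance Friston describes between prior precision and sensory precision can be sketched with a one-line Gaussian belief update. This is a toy illustration with invented numbers, not his formalism in full: the posterior mean is a precision-weighted compromise between the prior belief and the observation.

```python
def update_belief(prior_mean, observation, prior_precision, sensory_precision):
    """Posterior mean for a Gaussian prior and Gaussian likelihood.

    The update is a precision-weighted compromise: the more confidence
    (precision) afforded to the senses relative to the prior, the more
    the prediction error moves the belief.
    """
    learning_rate = sensory_precision / (prior_precision + sensory_precision)
    prediction_error = observation - prior_mean
    return prior_mean + learning_rate * prediction_error

# Prior belief 0.0; surprising observation 10.0.
# Too little sensory precision: the evidence is nearly ignored (rigid priors).
rigid = update_belief(0.0, 10.0, prior_precision=9.0, sensory_precision=1.0)   # 1.0
# Too much sensory precision: the priors are nearly overwritten (overfitting).
labile = update_belief(0.0, 10.0, prior_precision=1.0, sensory_precision=9.0)  # 9.0
```

Both failure modes in the conversation correspond to the extremes of this learning rate: sense-making dominated by priors at one end, unconstrained inference at the other.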
So this is the REBUS idea: relaxation of these unduly precise prior beliefs, so that you pay more attention to the sensory evidence at hand and explore different hypotheses. Psychedelics act on receptors that basically control the excitability of various neuronal populations that encode this confidence or precision, and it is thought that they do so selectively, by reducing the precision deep inside the model, your prior convictions, relative to the sensory information. So when you take psychedelics, you're now paying much more attention, and your percepts are much more elemental, much closer to the sensory epithelia. They're all about colours and shapes; they're not about the causes of the colours and shapes, like faces or houses, or the high-level abstractions that our normal perceptual hypotheses would bring to the table to explain all of these elemental things. And in the same way, in certain meditative practices, say breathing meditations, you're trying to get some volitional control over this balance between the prior precision and the sensory precision by really focusing: endowing a particular modality, say the proprioceptive and interoceptive signals from breathing, with a high precision and then suppressing the precision of everything else, even to the extent that you get into a non-dual state, where the precision of the very high-level beliefs, for example "I am me", is now flattened.
Just to bring this to closure, but also to speak to your important notion of landscapes: one way of thinking about precision is in terms of movement on a Waddington landscape, a free energy landscape. We should have done variational free energy, but if we just read the free energy as surprise, for example as the total amount of precision-weighted prediction error, then as we do our belief updating we are changing the free energy, and that induces a landscape, a free energy landscape, and we're always trying to find the minimum, the path of least action through this landscape. The path of least action is literally of exactly the kind we were talking about before: action that is free and independent and very efficient is literally finding the lowest point of this free energy landscape as we do our belief updating, and as we do our planning as well. And clearly the nature of the paths traced out on this landscape depends upon how sharp those minima are, the curvature of the landscape. Basically, what the REBUS hypothesis says is that sometimes you can get stuck in a rut. You can have very, very precise, overly precise ways of moving through your belief updating and consequent actions and responses to situations, both perceptual and behavioural, that are no longer fit for purpose. Say you've moved house, or a loved one has died, your world has changed, or there's the Iran war: everything you knew before is no longer fit for purpose, in terms of the prior beliefs that afford the best hypotheses for this particular situation. So you now become disorientated, but sometimes you don't know that, because you assign so much precision to your prior beliefs that you're not actually paying attention to what's going on out there.
So to remediate that, you have to relax the precision and decrease the curvature of these free energy landscapes. That can be done with psychedelics, and it can also be done with meditation.
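The landscape picture can be made concrete with a toy quadratic free energy, a sketch under invented numbers rather than the full variational formalism: precision sets the curvature of each minimum, and an overly precise prior is a steep rut that the evidence cannot pull the belief out of.

```python
def free_energy(x, prior_mu, prior_pi, obs, obs_pi):
    """Toy free-energy landscape over a belief x: a sum of
    precision-weighted squared prediction errors. Each precision
    sets the curvature (steepness) of its term's minimum."""
    return 0.5 * prior_pi * (x - prior_mu) ** 2 + 0.5 * obs_pi * (x - obs) ** 2

def minimize(prior_mu, prior_pi, obs, obs_pi):
    """Analytic minimum: the precision-weighted average of prior and data."""
    return (prior_pi * prior_mu + obs_pi * obs) / (prior_pi + obs_pi)

# Overly precise prior (steep curvature at the old belief 0.0):
# the belief barely moves despite strong evidence at 10.0.
stuck = minimize(prior_mu=0.0, prior_pi=20.0, obs=10.0, obs_pi=1.0)    # ~0.48
# Relaxed prior precision (the REBUS picture): the landscape around
# the old belief flattens, and the evidence can pull the belief out.
relaxed = minimize(prior_mu=0.0, prior_pi=0.5, obs=10.0, obs_pi=1.0)   # ~6.67
```

Relaxing `prior_pi` is the toy analogue of what psychedelics or meditation are proposed to do: flatten the curvature around entrenched beliefs so the same evidence produces a larger reorientation.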
Autism, the hard problem, and why consciousness requires a generative model of the future
Karl FristonSo I think the notion of a landscape, where the attractor is just the point to which you are attracted when you're trying to find the minima of this landscape, is a very useful heuristic. And just to repeat: the precision corresponds to the curvature, the steepness, of these minima.
Brian "Ponch" RiveraSo Andrew Gallimore writes about dimethyltryptamine, DMT, and he uses the fitness landscape a lot. I believe the free energy principle and active inference explain exactly what you just said, right? And I know a lot of people don't read books on DMT, but they should take a look at that, because I think he does a great job explaining it there. I do want to spend a few more minutes; again, we've been going for two hours. I want to flip over to consciousness and then AI. But in the context of consciousness, we've had Dr.
Brian "Ponch" RiveraHippolyta on here a few years ago, and around that time she wrote about autism as well, connecting it to the free energy principle. Can you talk a little bit about the hard problem of consciousness, and maybe how the FEP looks at autism?
Karl FristonYeah, and I'll try to do it in a way that moves from autism to consciousness. Autism, in computational psychiatry, is the poster child for when we get that balance between the sensory precision and the prior precision wrong. The particular failure, or the story at least, in severe autism is that there is a failure to attenuate the sensory precision, which basically means: I can't ignore anything. I am exposed to the sensorium, so I am compelled to explain every little elemental stimulation, every observation, that I am exposed to. And this comes at the price of never building precise, more abstract generative models: orientating in a way that lets you abstract and create hypotheses about the causes of all this elemental sensory stimulation. That explains a lot of the classical triad and many of the attendant signs and symptoms of severe autism. People with severe autism will avoid very unpredictable situations and try to make the world as elemental and predictable as possible, by, say, self-stimulation. Also, because they've assigned great precision or credence to the sensory level of perception, you can get savants who are very gifted in terms of reproducing visual impressions graphically, for example, again at the price that there is no central coherence, no abstract narrative behind it. But there is incredible detail that they are aware of, which people like you and me might not be aware of.
One consequence of the inability to build these deeper generative models, with hypotheses that are more abstract and more apt to describe relationships, and especially other things like me, for example the mother, is that you never develop a full sense of the other, and in particular of the mother, the mom. If you can't develop the hypothesis that the world is populated by others, you're never going to develop a sense of self, because the sense of self is just the hypothesis that I am like this thing that I observe: this thing that I observe, my mother, is an agent and has beliefs and autonomy and intentions. It takes several years for us to develop the hypothesis that I am like another, in particular that I am like my carers, that the things I see around me are people, and therefore that I am a self. So this explains the impoverished theory of mind that people with severe autism have. And the reason I introduce it: theory of mind just means that I can entertain the notion that you are another self, that you have a perspective and an intentional stance, that you are sentient in some way, and that I can deploy my beliefs about myself in the service of explaining your beliefs. But if I have not got in my repertoire of hypotheses, in my generative model, even the notion that the world is divided into self and other, I will never develop a theory of mind, and I will never be self-aware. So I'm trying to work towards what you would need as part of your generative model to support consciousness of a narrative, personal sort: a minimal selfhood that entails some awareness that I am.
Lots of things don't need a sense of self; I would imagine insects, lower life forms, and certainly plants don't. But we are equipped, because we have these very deep generative models, if we're lucky and we don't have, for example, severe autism, or grow up as a feral child. If you look at the literature on children who have not been exposed to other things like themselves, they have great difficulty relating to other things, and of course representing themselves as another thing of that kind. There are lots of interesting issues that we could talk about, but I imagine that I'm exhausted and that you're getting exhausted as well. Just to gesture at what we could have talked about, and perhaps, next time, if I can come back, we can pursue this: I think to really qualify as conscious you'd have to be an agent; to really qualify as an agent you'd have to have intentions; and to qualify as having intentions you'd have to have a generative model of the future. Which means we are talking exactly about the expected free energy aspects, that green line, the type two, system-two kind of thinking. So you can actually specify the necessary attributes of your orientation and your hypothesis set that would be needed to support consciousness. Which is why I can say, glibly and polemically, that a single cell can't be conscious, simply because it can't plan. You'll never get a single cell to look into the future. It's a beautiful, fast and frugal, efficient little machine where it is, at the timescale it exists. But it's not in and of itself sufficiently self-similar and big enough to do the kind of planning that you and I can do.
Why Current AI Lacks Agency
Brian "Ponch" RiveraI tell you, this has been incredible. I want to move on to AI real fast; we've been going for two hours, and this is a huge cognitive load of a conversation for all of us, I hope. But I do want to talk about AI, and I want to share something with you first. So, large language models: we equate large language models to the linear observe–orient–decide–act loop, right? It's a closed system; it doesn't engage with the external world. And then we know you're working on active inference, and the free energy principle and active inference look more like this. Can you tell us where we are in the life cycle of AI, what's coming next, and what's happening today? Just give us some context on how you see the world in regards to AI.
Karl FristonDon't take anything I say too seriously; I'm not an influencer in this space, and I take a strictly academic perspective on this, and in part a clinical perspective. But as far as I can see, the AI of the generative sort that is currently center stage, attracting all the funding and money, and indeed underwriting the GDP in America at the present time, is a particular style of AI that has nothing to do with active inference. It's a beautiful feat of engineering, but it has certain aspects that I think put a ceiling on what it will and can achieve. Those ceilings are sometimes described in terms of a poverty of efficiency, in exactly the sense we talked about before with free and independent action on the path of least action. They're also described in terms of a lack of explainability. Having a generative model, a way to orientate yourself, on the inside means that you have an explainable intelligence. So natural intelligence is always explainable, and always, by construction, maximally efficient in the various ways we've been talking about. Much of these constraints and limitations inherit from the fact that these systems don't encode uncertainty. They don't know what they don't know. And if you don't know what you don't know, you can't evaluate the information gain or the risk. If you can't evaluate the information gain or the risk, you can't act intelligently; you can only generate content.
You can't act, which means you can't prompt, for example. Generative AI cannot act in the way that we have been talking about, from Gibson and cybernetics through to the OODA loop, to the non-autistic child experimenting and asking questions of the world to work out that there's a mother there and the mother is an intentional being. So all of this true agency is denied. Despite the hype about agentic AI built on large language models, I think that's a nonsense, because there's nothing on the inside: no situational awareness, none of the mechanics we've been talking about that would underwrite agency. So where is it going to go? Well, let me preface this again by saying that large language models, transformer architectures, and their current deployment in commerce, and all the investment that entails, are beautiful, really beautiful and impressive, probably the most impressive engineering feat apart from going to the moon this century. But that's not the direction of travel if you want intelligent artifacts, because you're going to have to be much more biomimetic. You're going to have to comply with principles of least action; you're going to have to encode uncertainty; you're going to have to deal with belief updating and hypothesis testing, orientation, in exactly the way we've been talking about. And that requires a different tech, and that tech, I think, some people are now talking about in terms of explicitly biomimetic computation: photonics, memristors, silver nanowires, even organoids.
And another group of people are talking about thermodynamic computation, both camps emphasizing the efficiency angle. If there's a commitment to having the right generative model under the hood, to put the explainability and the uncertainty in play, to evaluate risk, information gain, and curiosity, then I think that is the future of artificial intelligence, or at least intelligent artifacts, artifactual intelligence, which will be much, much closer to natural intelligence. To summarize succinctly: what we're talking about is building machines that have a constrained curiosity. So your large language model will start to prompt you, because it genuinely wants to know about you. And once it knows about you, it might develop a sense of self, and it might become conscious.
Final Takeaways and Closing
Brian "Ponch" RiveraI think that's a great place to wrap it up. We've been going for two hours and fifteen minutes tonight. I cannot thank you enough for joining us. We've been looking forward to this for a while, and we've been trying to get at least our own understanding of how things operate in different domains before we had this conversation. But again, Karl, thank you so much for being here. I think this is going to be one of the most impactful episodes. Moose, do you agree?
Mark McGrathYeah, absolutely. Thanks for your time. And we'd love to have you back to talk about Destruction and Creation and other things.
Karl FristonYeah. Well, send me the paper and I'll read it over the summer holidays... no, for me it's the Easter holidays next.
Mark McGrathWell, it's a very brief paper; it's not a big paper. That's one of the things the people who dismiss it find, though, and why, for a lot of the circular-OODA crowd, the reductionist OODA crowd, it's extremely dense. You'll get it; we're pretty confident you're going to get it. But for a lot of the people who haven't done the intellectual work, it just goes right over their heads.
Brian "Ponch" RiveraYeah. Well, again, thank you so much. We're gonna keep you on here just for a minute. I'm gonna stop recording, and then I'll just do a quick debrief as soon as we stop.
UnknownThank you, Karl. Really appreciate it.
Karl FristonMy pleasure. Thank you.