
No Way Out
Welcome to the No Way Out podcast where we examine the variety of domains and disciplines behind John R. Boyd’s OODA sketch and why, today, more than ever, it is an imperative to understand Boyd’s axiomatic sketch of how organisms, individuals, teams, corporations, and governments comprehend, shape, and adapt in our VUCA world.
Active Inference and The Free Energy Principle: A Journey from Ant Behavior to Cognitive Security with Daniel Friedman, PhD | Ep 37
Ever wondered how your brain minimizes surprise or how organizations can harness the power of Active Inference? What does this have to do with the OODA Loop?
Join us for an in-depth exploration with none other than Daniel Friedman - co-founder of the Active Inference Institute. As you tune in, you'll find yourself unearthing the fascinating concepts of Active Inference and the Free Energy Principle (FEP), weaving the threads between cybernetics, the OODA Loop, and the human perception of reality.
A deep dive into the mind of Daniel Friedman takes us on a journey from his PhD studies on the distributed behavior of ants to co-founding the Active Inference Institute. You'll discover how we can apply Active Inference and FEP in a variety of settings, from optimizing surprise in information theory to using expected free energy for policy planning and navigating the explore-exploit trade-off. And it's not just about individual cognition; there's a whole new perspective for leaders on how these principles can scale up solid teamwork skills and help organizations understand their members' perception of reality.
Pull up a chair as we venture into the realm of cognitive security, drawing links to physical and digital security risks, and the timely need for cognitive security in an era of information weaponization. We take a compassionate look at how active inference can shed new light on PTSD and TBI in veterans, first responders, and trauma survivors, and how understanding neurodiversity through the lens of active inference and the free energy principle can help shape our world in value-guided ways. Stay tuned till the end as Daniel shares how you can contribute to the Active Inference Institute and its initiatives. It's an enlightening conversation you don't want to miss!
Daniel Friedman, PhD on LinkedIn
Active Inference Institute
Daniel Friedman, PhD Personal Website
CENTER OF GRAVITY FOR COGNITIVE SECURITY
NWO Intro with Boyd
March 25, 2025
Find us on X. @NoWayOutcast
Substack: The Whirl of ReOrientation
Want to develop your organization’s capacity for free and independent action (Organic Success)? Learn more and follow us at:
https://www.aglx.com/
https://www.youtube.com/@AGLXConsulting
https://www.linkedin.com/company/aglx-consulting-llc/
https://www.linkedin.com/in/briandrivera
https://www.linkedin.com/in/markjmcgrath1
https://www.linkedin.com/in/stevemccrone
Stay in the Loop. Don't have time to listen to the podcast? Want to make some snowmobiles? Subscribe to our weekly newsletter to receive deeper insights on current and past episodes.
Hey, welcome to No Way Out. I'm Brian "Ponch" Rivera. Today I want to talk about a topic with Daniel Friedman. I'll introduce him to you in a moment, but I also want to go back and explain why we're looking at what we're going to look at today, and that is a quick conversation we had with Mary Ellen Boyd, the daughter of John Boyd. She pointed out that the majority of people who follow John Boyd's work do not truly understand what her father was trying to do with it.
Brian "Ponch" Rivera:It is my strong belief that the topic of active inference and the free energy principle, and moreover the key feature, or a key feature, that is often excluded when we talk about John Boyd's OODA loop, is active inference. Now, this did not exist when John Boyd passed away in 1997. It came about in the early 2000s. We could talk more about that here in a second, but our guest today is Daniel Friedman. He is the co-founder and president of the Active Inference Institute. He is a postdoctoral researcher at the University of California, Davis, and he actually wrote a paper about conflict, warfare, and active inference which is very powerful to review. If you want to get that link, we'll put that out for you as listeners. But, Daniel Friedman, welcome to the show, and could you help us on a couple of fronts up front? That is, can you tell us a bit more about who you are, how you came into co-founding the Active Inference Institute, and then, why is the topic of active inference and the free energy principle so important to leaders in all types of organizations? So, Daniel, welcome.
Daniel Friedman, PhD:Thank you, great to be here, and I really like a lot of the previous conversations you had, so looking forward to where we go today. All right, I will look to those questions in a one, three, two order. First I'll say who I am, then the why of why active inference is so interesting and important to apply in an organizational setting, and then connect a little bit to how we founded the Active Inference Institute. So first, I am, as you mentioned, a postdoctoral researcher at the University of California, Davis. I went to Davis for my undergraduate and I studied genetics and biology. I then went to Stanford for my PhD, where I studied collective behavior in ants with Professor Deborah Gordon. So I was doing a lot of field work in Arizona, counting ant foraging activity, dissecting ant brains, thinking about molecular biology, thinking about distributed systems and complex adaptive behavior in collectives.
Daniel Friedman, PhD:And during that time in graduate school, I started getting very interested in formal and statistical models of perception and action, because sense making and decision making are kind of like the inbound and the outbound of how systems work. And so whether that's an individual, in terms of our own self-efficacy and self-development, or it's a team, in terms of how the team forages for information and integrates it and then takes decisions and makes productive outcomes relating to that, or it's an organization, this kind of cybernetic sense making on the inbound, decision making on the outbound structure is pervasive. It's non-contentious, and it's kind of like the esoteric-yet-in-plain-sight mystery that Boyd also spoke to in the OODA loop: what is it that happens between observation and action? That is the cognitive moment, that's the action-perception loop, and it's the mystery with a thousand names. So it's really exciting to see how it's being tackled now, in this current time, using active inference and some new techniques. But that's the who and how I got to this place, which was studying distributed biological systems, also curious about human systems and wanting to study more about the state of the art and the best in class in terms of thinking about cybernetic systems.
Daniel Friedman, PhD:Okay, now as to the Active Inference Institute. During 2020, along with some colleagues, we were looking at the rapidly developing literature on active inference and the free energy principle, and we were seeing applications ranging from clinical psychiatry to kind of Gaia-scale, nested Markov blankets all the way down. And we thought, wow, you know, there's actually one system that's every day coming into play, it's super important for all aspects of our world today, and yet it doesn't have a direct active inference bridge. And that was remote teams and organizations.
Daniel Friedman, PhD:And so in 2020, we wrote a paper about active inference in the context of remote teams, communication, and organization. And in September 2020, when we finished that paper, we thought, wow, you know, that's a great pointer and vision. However, the technical debt, the research debt, which is to say the delta between the promises and the potential of the framework (a generalized, unifying framework for sense making and action) and the capacities and actual applications that exist on the ground, we saw that differential as being large and growing. And so we wanted to take a direct education, research, and service oriented approach to tackle that differential between active inference in principle and active inference in practice. And so we founded the Active Inference Lab, which then kind of developed into the Active Inference Institute, and since the beginning of 2021, we've been on quite a journey in our role in the active inference ecosystem.
Brian "Ponch" Rivera:That's awesome, one of the connections you made there on the team side of the house. It's interesting to hear that that's how you kind of found this pathway. One of the things we're doing now is using active inference, the way we understand it, to help organizations understand how each team member perceives reality and then from that we can start to really invite good practices that scale solid teamwork skills. So things like how do we mitigate cognitive biases? How do we make things visible? How do we create situational awareness? So some of the things that are in your paper.
Brian "Ponch" Rivera:With the Inslee model, where does effects-based operations, which is based on value-focused thinking, fit in the context of the Cynefin framework? So it's amazing that what we started with years ago has now shifted from let's start with the big frameworks to starting with how humans perceive reality, and showing people, through like a Stroop exercise showing different colors and things like that, or inattentional blindness images, or images that help them see that they only see what they expect to see. And we are able to explain how that perception is created, or constructed, top down, inside out, and that is amazing. You have that connection with teams, and I thank you for that paper from 2020. It's really nice. So, Daniel, we've got to get down to the basics here, all right, and this is kind of a heavy topic.
Brian "Ponch" Rivera:I've been reading about it for the last three years now. I'm still pretty new at it. My connection to John Boyd's OODA loop begins with mismatches, which is, I believe, another way to say surprise. So can you help us understand the free energy principle and active inference in your own words and walk us through? What does free energy mean? What are we trying to do there?
Daniel Friedman, PhD:Great. Well, there are so many on-ramps and play toys in this space. There's no one pedagogical path, and that's kind of the beauty of wayfinding in an ecosystem. But I'll jump in exactly where you asked, which is: what is the relationship between mismatch or differential, surprise, and free energy? These terms, mismatch or differential, surprise, and free energy, are going to be distinct. However, they will be aligned in a way where, if you understand one, you're definitely pointing in the right direction on all of them. I'll also just note that the language that I'm speaking here is the active inference ontology, and we have this translated into a variety of programmatic and natural languages, because this entire journey is about the accessibility, rigor, and applicability of the framework. It isn't just implicit knowledge. We work every day to make these kinds of connections and questions, incredible questions like the ones you're bringing up, clear and available.
Daniel Friedman, PhD:So mismatch, or an error differential, is denominated in the units of what is being compared. For example, if you expect it to be 40 degrees and the temperature reading comes back as 45 degrees, the mismatch or the differential is plus five. So mismatch or differential is denominated in the units of whatever it is that you're measuring. Now, if you have a statistical distribution over what you expect, another way to think about that incoming thermometer reading is in terms of how surprising it is. And so 45 would be somewhat surprising, 40 would be the least surprising if we expected 40, and 100 would be even more surprising. And so, in that way, surprise is denominated in information-theoretic units like nats or bits. But it can also be thought of basically like a z-score, like how many standard deviations away are you, except that's not denominated in units of temperature, that's in units of information theory. So when you minimize the differential, you also minimize surprise. However, they're not denominated in the same units. But again, you can imagine: if things are coming back exactly as you expect, your surprise is minimized because the differential is minimized. Now, surprise minimization is a great heuristic for Bayesian statistics, the reason being that when surprise is minimized, model evidence is maximized.
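The thermometer example can be sketched numerically. A minimal Python sketch, assuming a Gaussian belief centered on 40 degrees with a standard deviation of 5 (the distribution and numbers are illustrative assumptions, not from the conversation):

```python
import math

def gaussian_surprisal(reading, mean, sd):
    """Surprisal -ln p(o), in nats, of a reading under a Gaussian belief N(mean, sd^2)."""
    density = math.exp(-((reading - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))
    return -math.log(density)

expected, sd = 40.0, 5.0  # belief: temperature ~ N(40, 5^2)
for reading in (40.0, 45.0, 100.0):
    mismatch = reading - expected                     # denominated in degrees
    z = mismatch / sd                                 # standard deviations, unitless
    nats = gaussian_surprisal(reading, expected, sd)  # denominated in nats
    print(f"reading {reading:5.1f}: mismatch {mismatch:+6.1f} deg, z {z:+5.1f}, surprisal {nats:7.2f} nats")
```

The mismatch column is in degrees while the surprisal column is in nats, yet both are minimized by the same reading, which is the alignment between the "cousin" quantities being described.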
Daniel Friedman, PhD:However, for real-time settings or with large data sets, calculating surprise directly is not always tractable. And so what free energy is, is a bound on surprise. It converges to surprise, it bounds surprise. However, free energy is incrementally optimizable, and so that allows us to transform a very challenging plug-and-chug, do-it-once-and-you're-done computation of surprise into a real-time, accelerated optimization problem on the variational free energy. And this is exactly what's used in variational Bayesian methods like variational autoencoders, variational Bayes, energy-based learning, or the evidence lower bound. And so what active inference is doing is using variational methods, which have been in use for decades and all over the world today, on the cybernetic synthesis of sense-making on the inbound and decision-making on the outbound. And so, to kind of close that: when the differential is zero, the surprise is the lowest and the free energy is minimized. A non-zero differential means surprise is not minimized, which means you're also going to have a higher free energy. And so what we do is we pursue, via optimization on free energy, bounds on minimizing surprise, and that supports the differential being reduced. So again, they're not the exact same thing; however, differential, surprise, and free energy are all kind of cousins.
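The bounding relationship can be demonstrated with a toy discrete model. This is a hedged sketch, not from the episode: one binary hidden state, one observation, and a variational free energy F = E_q[ln q(s) - ln p(o, s)] that upper-bounds the surprisal -ln p(o) for every belief q, touching it exactly at the Bayesian posterior:

```python
import math

prior = {"hot": 0.5, "cold": 0.5}        # p(s): belief before the reading arrives
likelihood = {"hot": 0.8, "cold": 0.1}   # p(o | s) for the particular observed reading

def free_energy(q_hot):
    """Variational free energy F = E_q[ln q(s) - ln p(o, s)] for a belief q over the hidden state."""
    q = {"hot": q_hot, "cold": 1.0 - q_hot}
    return sum(q[s] * (math.log(q[s]) - math.log(likelihood[s] * prior[s]))
               for s in q if q[s] > 0)

surprisal = -math.log(sum(likelihood[s] * prior[s] for s in prior))     # -ln p(o)
posterior_hot = likelihood["hot"] * prior["hot"] / math.exp(-surprisal)  # exact p(hot | o)

# F stays above the surprisal for every q; minimizing F recovers the posterior, where the bound is tight.
for q in (0.1, 0.5, posterior_hot):
    print(f"q(hot) = {q:.3f}  F = {free_energy(q):.4f}  surprisal = {surprisal:.4f}")
```

The "do it once" computation here is the sum over all hidden states inside `surprisal`; in large models that sum is intractable, which is why one instead descends on F incrementally.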
Brian "Ponch" Rivera:That's a great explanation and I appreciate that. I'm going to try to bring this to a knucklehead perspective from fighter aviation, all right, and I might use biomimicry or automimicry, so please correct me if I'm getting it wrong. So in nature, animals will paint themselves or have a different color on them to show that they're bigger or smaller. In fighter aviation, at one point many years ago, we used to paint a canopy on the underside of an aircraft, and that way, when you saw it in the air, you couldn't tell which direction it was heading, or whether you were looking at the top side or bottom side. That, to me, was a mismatch. We're creating a mismatch in the environment to slow down my decision-making loop, right? To kind of put some doubt in it, to put some novelty in it, to really go, do I really understand what I'm seeing here? That's one aspect of it. For most people who've seen the first Top Gun movie, and this sounds kind of funny, the radar could only pick up one aircraft.
Brian "Ponch" Rivera:The radar could only pick up one aircraft, if you remember this, so you'd go visually ID them. When you get to that merge and you see that there are two aircraft there, that's surprising. It's even more surprising when there's a third aircraft there, right? So, in warfare, in this case fighter aviation, we want the ability to make sense further out in the environment. Many years ago we wanted to minimize that surprise. We wanted to send somebody close to visually identify them, to understand what we were up against. And I think the same thing is true in organizations today: they have to have a closed loop with their external environment. They have to have some type of sense-making capability to understand what's in that feedback that's coming from the external environment. So hopefully I'm kind of making sense of what you just shared with us. How does that resonate with you, Daniel?
Daniel Friedman, PhD:Yeah, in terms of us wanting the best for ourselves and our organizations, we wanna bound or minimize surprise.
Daniel Friedman, PhD:Now that doesn't mean eradicate novelty or creativity or innovation. It means we don't want the kind of negative surprise that we would associate with basically a lack of situational awareness and an increase in risk and essentially threats to cognitive security. So that's the kind of surprise that you want to avert, and in pursuit of that, part of it might include techniques that intervene in the sense-making or decision-making of another cognitive agent in the ecosystem and so camouflage or different kinds of cues could be understood as de-risking or deceiving or confusing other cognitive agents. And so that speaks to using active inference, even qualitatively, as a way to think about the kinds of sense-making and decision-making engagements that do occur, whether we're talking about social setting or professional setting or any other kind of setting. If we can pull back and understand these entities as cybernetic systems, with that inbound perception and the outbound action, whether it's a computer or a person or a team, then this helps us compose those cognitive ecosystems in really legible and modular, interpretable ways.
Brian "Ponch" Rivera:Okay. So, if I understand the free energy principle correctly, we try to minimize surprise as agents, an agent being something that can plan, I believe that's a basic definition. Or, potentially, we try to maximize evidence, or information, for our own internal model. Is that what we're trying to do with free energy?
Daniel Friedman, PhD:Yes, I'll just add the half caveat that not all agents engage in planning. So a very simple active agent, like a metronome, might engage in action. However, it doesn't engage in explicit prospective planning as inference; it's not entertaining counterfactuals about what would happen if that happened. And so that kind of sophisticated inference is possible to describe within this broader umbrella of the free energy principle. But what's really cool about the free energy principle is it also takes us down to describing even inert or really simple active systems. So it doesn't always need to go into this very, very complicated space of different cognitive phenomena, but it can. It is also grounded in, like, a baseball doing a parabola.
Daniel Friedman, PhD:Well, that's the least surprising trajectory that the baseball can take, and that's called the path of least action. Path of least action doesn't mean path of least movement. It means, given the initial conditions of that baseball, that parabola is what physics predicts. The free energy principle is a principle of least action for cognitive systems. I know there's a lot to unpack there, but the same way that when you roll a ball down a hill, or when a baseball goes on a parabola, the path it takes, whether you like it or not, is a path of least action, the free energy principle is a principle of least action in information-geometric spaces for cognitive systems.
Brian "Ponch" Rivera:Awesome. There's an episode we did with John Flach where we talked about baseballs, and I think that's a solid connection between how we actually catch a ball as a human system, and I think it's the same thing with machines. Okay, I want to move a little bit towards active inference, and the way I want to frame this right now is within John Boyd's OODA loop. There are four pathways that are often excluded when people think about John Boyd's OODA loop. The first pathway is the implicit guidance and control pathway that moves from orient back to observe, and that, in my view and our view, is perception, that's our schema, our mental models of the external world, which are constructed, by the way. The second is the pathway that moves from decision, or decide, which is also known as hypothesis, and I think a hypothesis or a decision can also be known as a prediction. That pathway moves from decide back to observe, and it's a feedback loop, right, a feedback pathway, however you want to frame that. The third would be the pathway that moves from act back to observe. So that's the first being implicit guidance and control, the second decision back to observe, the third act back to observe. And then finally, we have what I believe is an external loop which moves from act back to observe through the environment, through our external world. That's our closed loop there.
Brian "Ponch" Rivera:So with that frame in mind, and one thing I want to share with you, I'm not sure if you're aware of this: when John Boyd was trying to figure out what to call his loop, he actually wanted to call it the SODA loop, for sense, orient, decide, and act, where orient is clearly the Schwerpunkt, the most important aspect that helps us make sense of the world and decide and act in it. You can call it an internal model of the external world, or what he called orientation. So that's the frame I think most people get wrong when they draw the OODA loop. They draw it as a linear approach, like plan-do-check-act, right, and that's not the way John Boyd drew it. So I want to know if you can help us, through the active inference lens, understand what is actually going on when humans perceive reality. I don't know if you can, or maybe there's another way to talk about that.
Daniel Friedman, PhD:Well, it's an awesome question and a very exciting avenue. In 2021, with my colleagues in cognitive security, Scott David and RJ Cordes, in our active inference and modeling conflict paper, we tried to look at the type of generative model that we use in active inference and then schematically show where these different phenomena of the OODA loop are kind of loaded onto the active inference generative model, which is a partially observable Markov decision process, or a POMDP. They both describe the same territory, or they're both like two different...
Brian "Ponch" Rivera:Before we dive into it, can you just go over two things? Markov, and I think you said POMP, is that correct?
Daniel Friedman, PhD:Yeah, POMDP. Okay. So there are two aspects to the POMDP: the PO and the MDP. The PO is partially observable. Partially observable means that our model is dealing with observables, like temperature readings, but not everything that we care about is an observable. For example, the true temperature in the room is not an observable, though the thermometer readings are. Or latent causes in the world might not be directly measured, but they're like hidden causes that give rise to sensory observations. So a partially observable model just means that certain parts are data-driven and can be connected to sensors, and other parts are so-called hidden states or latent factors, which are modeled but not associated with a sensor reading. That's partial observability.
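Partial observability can be sketched with a toy Bayesian update: the room's true condition is a hidden state that never appears in the data, and only noisy sensor readings are observable. The states, readings, and probabilities below are illustrative assumptions, not from the conversation:

```python
states = ("warm", "cool")
prior = {"warm": 0.5, "cool": 0.5}
# Sensor model p(reading | hidden state): a "high" reading is likelier when the room is warm
likelihood = {"high": {"warm": 0.7, "cool": 0.2},
              "low":  {"warm": 0.3, "cool": 0.8}}

def update(belief, reading):
    """One Bayesian belief update over the hidden state from one observable reading."""
    joint = {s: likelihood[reading][s] * belief[s] for s in states}
    evidence = sum(joint.values())  # p(reading): the model evidence for this observation
    return {s: joint[s] / evidence for s in states}

belief = dict(prior)
for reading in ("high", "high", "low"):
    belief = update(belief, reading)
    print(reading, {s: round(p, 3) for s, p in belief.items()})
```

The agent only ever holds a belief distribution over the hidden state; the sensor column is the "data-driven" part and the belief is the "latent factor" part of the PO in POMDP.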
Daniel Friedman, PhD:Now Markov decision process. My personal feeling is that last names do not need to be attached to different equations, because it gets really confusing. First off, there's multiple Markovs. Second off, they both worked on multiple things mathematically, and so I wish that we could just describe things, sometimes with their technical details, to be more rigorous and clear actually. So Markov gets thrown around and it does mean slightly different things in different settings.
Daniel Friedman, PhD:The Markov property, when talking about a time series, is the idea that the past can only influence the future through the present. So the Markovian property is basically that the present insulates the past from the future.
Daniel Friedman, PhD:So there's no skipping the present, forwards or backwards in time. But it turns out that that Markovian property on a time series is an example, more generally, of what is called a Markov blanket, and a Markov blanket is a node in a Bayesian graph that makes two other sets of nodes conditionally independent. And so in the time case, that is the same thing as saying each instant is like a blanket that insulates the past from the future, making them conditionally independent, so that only through the eye of the needle in the present does the past influence the future. But Markov blankets can also play roles in other systems. And so a Markov decision process is a statistical model that includes a perceptual component and, importantly, a decision-making component from control theory, and follows this Markovian property. So, partially observable Markov decision process: observables and non-observables in a sense-making and decision-making task. That's why this is kind of a standard form for thinking about cybernetic systems when we're doing statistical modeling.
Brian "Ponch" Rivera:Okay, I'm going to slow down again. Cybernetics. And I don't want to keep you from your rhythm here, but there are a lot of terms being thrown out, and I think this is kind of important, because a lot of people may not understand cybernetics. If I understand it correctly, let me just try this: we need an internal model of the external world. That's kind of the basics of cybernetics. Can you build on that, or amplify that, or dampen that?
Daniel Friedman, PhD:Cybernetics arose to explain a huge diversity of adaptive or goal-oriented systems. And so, whether it was thinking about the guidance systems of ballistics, or a bacterium looking for sugar in its niche, or a person looking for some kind of information, cybernetics is a really general framework to talk about adaptive agent behavior. And so by cybernetics I'm referring to the type of agent, the type of thing, again, whether it's software or physical, augmented, heterogeneous, all the adjectives you want, but we're highlighting that this is some kind of thing that has an inbound sense-making capacity, some internal processes that we call cognition, and some outbound action-selection capacity that we associate with decision making and causal output back into the niche.
Brian "Ponch" Rivera:Thank you for that. And this is foundational to John Boyd's work too. He studied this in the 50s and 60s, actually more in the 70s. Okay, back to it. I'm sorry to interrupt your flow. I just want to make sure we get those right, because our listeners may be scratching their heads: what does that mean? What does this mean? To me, a Markov blanket could be just a boundary, and we had Adrian Bejan on the show, Professor Bejan from Duke. He talked about boundaries in physics. Dr. Inês Hipólito talked about boundaries, the Markov blankets. Hopefully our listeners are somewhat familiar with that concept. And then something we propose is that when you put a blanket or a boundary around John Boyd's OODA loop, you end up with an action-perception loop and, going back to an earlier point, it's the stuff that happens inside. And that's what I believe we're going to talk about with active inference now, correct?
Daniel Friedman, PhD:Exactly, okay, yes, awesome. So Dr. Hipólito and colleagues are doing some fascinating work with dynamical Markov blankets and all these other features. So, to come back to the OODA loop. The OODA loop, active inference, and really any other cybernetic-type model of the action-perception loop, whether it's TOTE or any other kind of cyclic or spiral-like model that describes sense making and perception and action, they're describing the same territory. You could apply one of those models like a lens or a filter onto the pilot in the cockpit. You could also apply it to the person doing knowledge work. You could apply it to the ant foraging for a seed, in principle. So the territory is not the map, the map is not the territory. These are different maps that can be projected or made about a given territory or system of interest, something that Toby Smithe from the category theory side calls compositional cognitive cartography. And I love that because it brings together the cognitive system of interest, compositionality from category theory, and cartography or map making, and so it's like we're making maps of settings. Now, there are archival maps that try to capture all of the information about a given system, and there are itinerary maps that capture how to get from A to B in a really efficient way by abstracting or coarse graining across other pieces of information. And our digital systems today allow us to reimagine maps and blur the line between the archive and the itinerary, because the itinerary can be re-rendered on demand. So think about using a maps application: you can add a new stop, or you can say, search for this kind of thing in my area. Active inference gives us a really powerful statistical framework.
Daniel Friedman, PhD:To talk about the OODA loop and all of those neglected edges that connect back to observation. Those are all super important phenomena to have situational awareness about. However, just bringing awareness to the phenomena by itself doesn't give us the computational framework to actually make simulations, digital twins, unique explanations and predictions, and so on. And so I see Boyd's work as absolutely singular and so far-sighted, in some ways scoping out design patterns and principles for cognitive systems, as well as staging the synthesis of thermodynamics and information theory. But with these edges that you describe, the challenge kind of comes back to us after this ontology mapping has been done: how do we bring awareness to these loops, and how do we do that in a way that isn't just a hot take, but rather in a way that brings integrity between the natural language, the graphical visualization, and the executable cognitive simulation? That's what I call the triple play.
Brian "Ponch" Rivera:Okay, it's pretty powerful. So what's behind active inference is sound statistics and mathematics. It kind of unifies a lot of different thinking from biology, perhaps complex adaptive systems, cybernetics. Is that what I'm hearing?
Daniel Friedman, PhD:I think it does, and that is not exactly a double-edged sword, but because it's so grounded in measurements and the capacity to describe and simulate systems, it by itself is not a normative principle about what one should do. You could describe a failing business with active inference. You could describe a car driving off a cliff. You could describe a fighter pilot falling out of the sky. So the ability to describe is not enough. But the ability to describe brings us to the vista of decision making as individuals and as organizations.
Daniel Friedman, PhD:And so it can describe anything, anything with persistence, whether it's a rock, a metronome, a thermometer, a heating system, or a more sophisticated cognitive system. We can develop a map of that thing to our heart's content. But that doesn't mean that the map we make is going to be useful. It doesn't mean our little car is going to be able to drive on that dirt road just because the maps application told us that we could or should. And so this is really the work and the opportunity and the challenge: to go from a generalized theory of things, active inference and the free energy principle, and apply it with integrity in an adaptive way.
Brian "Ponch" Rivera:Okay, all right. So active inference, if I understand it correctly: we do not passively observe the external world, right? We actively engage with it. That's the active part of active inference, and that's what I believe John Boyd's OODA loop tells us, with those four pathways that I pointed out. So let me kind of walk you through a basic understanding. Our sensory organs pick up some type of signal from the external environment, not necessarily a signal, it's like a photon or a molecule or something external. Those sensory organs produce what we'll call observations. They in turn create a sensory signal that goes into some type of internal map of our external world. That internal map of the external world generates a prediction, a prediction about the causes, which I believe are hidden states, correct me if I'm wrong, that are behind the signals that it's receiving, right? So can you help correct me, or am I on the right path on that?
Daniel Friedman, PhD:Yeah, yeah, okay. So you mentioned kind of using these micro experiences to help people understand this active sensing component. So here are two fun little micro experiences. Put your finger on a surface, any surface, fabric or something on your desk: is the surface rough or smooth? It turns out you can't reduce your uncertainty about that without moving your finger, because roughness is not a property that's revealed by a stationary finger. So that's active sensing. Another example of active sensing that's very embodied is our visual system.
Daniel Friedman, PhD:So our visual system has a very high-resolution color image in the very center of the visual field. However, off center there is lower resolution and there's less color vision. Also, we have a fairly large blind spot literally on our retina, but that does not play into our visual experience. Additionally, saccades, which happen several times a second, are suppressed so that we see a stable visual field, even though a camera that you moved a bunch of times a second would look extremely chaotic.
Daniel Friedman, PhD:So all these phenomena, from the continuity of our attention while saccading, to the high-resolution color and the absence of a blind spot in what we see in our visual field: these are evidence that help us understand that our perception, our experience, our situational awareness is a generated model that is fine-tuned by incoming sensory data, which may be very partial or sparse. So it's not that we're getting this rich data and processing it, distilling it and then getting to the nugget of insight. Rather, the generative model of the brain or the team is the insight machine that's always in motion, being fine-tuned by sensory inputs. Now, what is that insight machine? What is that generative model? It includes predictions about current and future observations and hidden states.
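A toy version of that "always in motion, fine-tuned by sparse sensory input" idea: the model coasts on its own prediction and is only corrected by prediction error when an observation actually arrives. The signal value, noise level, arrival probability and learning rate are all made-up numbers for the sketch.

```python
import numpy as np

# Toy predictive loop: the model's estimate generates a prediction;
# sparse noisy sensory samples nudge it via prediction error.
rng = np.random.default_rng(0)
true_signal = 5.0
belief = 0.0          # the generative model's current estimate
learning_rate = 0.3

for step in range(200):
    # Sparse sensing: an observation arrives only some of the time;
    # otherwise perception coasts on the model's own prediction.
    if rng.random() < 0.3:
        observation = true_signal + rng.normal(0, 0.5)
        prediction_error = observation - belief
        belief += learning_rate * prediction_error

print(round(belief, 2))   # belief ends close to the true signal
```

The point of the sketch is that rich, continuous experience (the stable belief) is maintained from intermittent, noisy data, which is the claim being made about the visual system above.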
Brian "Ponch" Rivera:You hit on something here: predictions about current and future. So I think we touched on the current, the sensory signals that are coming in. We're making predictions about that, minimizing surprise. I think John Boyd put it as new information, or information. So we're trying to minimize that, and I think the idea behind this is that our brains are very expensive organs. They burn a lot of energy; two percent of our body weight burns 20 percent of our energy, something like that. Therefore, this is a low-energy approach to find efficiencies within our organs, within our brain, to perceive the external world.
Brian "Ponch" Rivera:I guess the other case would be if our sensory organs were picking up everything, we'd be overwhelmed and we'd probably die, right? I don't know if that's true or not. So back to the current, and then, sorry, back to the future. Inside of our mind or our brain, or in an agent that plans, there's a planning mechanism. I think it's called a policy, and I believe you pointed out that it's a covert plan, it's internal to that agent or system, and it's a counterfactual, a what-if, right? Am I getting that correctly?
Daniel Friedman, PhD:Yeah, awesome, okay. So first off, we're nowcasting. This is the temporal thickness of the moment. Visual input and sonic input are happening at slightly different, staggered times, milliseconds apart. But the temporal thickness and the temporal continuity of the now, or the primal impression, is this generative model that has those things seamlessly integrated, coarse-grained and so on. So it's not just that we're perceiving the now directly while the past and the future are hypotheticals; there's a continuity between the past, present and future, because it's all part of what the generative model is predicting. So whether it's recalling or accessing memory, whether it's nowcasting in the unfolding moment, or whether it's projecting or protending towards expected futures, these are all integrated activities of the generative model.
Daniel Friedman, PhD:Now, what is that cognitive capacity of planning, and how do we approach planning as inference? So let's think about a chess computer. It has a defined set of moves, it has the rules of the game built in, and then, at certain depths or time horizons, it can play out all possible futures. Now, even within the game of chess, the combinatorics get really out of control, and so you need all of these computer science techniques to prune the trees that you do end up exploring. This is just to give a kind of guide to the way that policy selection occurs in active inference. The all-by-all space of all possible actions over a long time horizon is very vast, and the challenge is, first off, just to sift through that space. But more to the point of cybernetic and adaptive systems, it is to take adaptive actions and to select adaptively within that space. And what happens in policy planning in active inference? Again, this is map, not territory. So this is not necessarily how a given system does policy planning. This is an analytic framework that we can apply as an ethologist, as a behavioral researcher, to describe any system. So we don't need to wait for systems that are, from their heart on out, applying active inference themselves. We can apply this to a tiger today or to some other information framework.
Daniel Friedman, PhD:Today, policy planning is modeled as basically the evaluation of counterfactual futures, and those counterfactual future paths are evaluated in terms of expected free energy. And the expected free energy has two components: epistemic value and pragmatic value. Epistemic value: how much will I learn by taking this path? Pragmatic value: how well will my preferences be satisfied by taking this path? And so when epistemic value is off the table, when there's nothing to learn, this model becomes essentially a utility-seeking equation. Conversely, when pragmatic value is off the table and we only have epistemic value, then we seek maximum information and novelty. But for situations with both epistemic and pragmatic value on the table, expected free energy gives a really finessed way to navigate the explore-exploit trade-off.
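A minimal numerical sketch of that decomposition, with a two-state, two-policy toy example. The likelihood matrix, the preferences and the candidate policies are all invented for illustration; real active inference implementations evaluate this over full policy trees rather than two hand-picked candidates.

```python
import numpy as np

def entropy(p):
    return -np.sum(p * np.log(p + 1e-16))

def expected_free_energy(qs_pi, A, log_C):
    """G(pi) for one policy: negative (epistemic + pragmatic) value.
    qs_pi: predicted hidden-state distribution under the policy
    A:     likelihood matrix, P(observation | state)
    log_C: log preferences over observations (pragmatic target)
    """
    qo_pi = A @ qs_pi                            # predicted observations
    pragmatic = qo_pi @ log_C                    # preference satisfaction
    H_per_state = -np.sum(A * np.log(A + 1e-16), axis=0)  # ambiguity per state
    epistemic = entropy(qo_pi) - qs_pi @ H_per_state      # expected info gain
    return -(epistemic + pragmatic)

A = np.eye(2)                    # fully informative observations
log_C = np.array([0.0, -4.0])    # strong preference for observation 0
explore = np.array([0.5, 0.5])   # policy that leaves states uncertain
exploit = np.array([0.9, 0.1])   # policy aimed at the preferred state

G = [expected_free_energy(q, A, log_C) for q in (explore, exploit)]
best = int(np.argmin(G))         # lower expected free energy wins
```

With the strong preference in log_C, the exploit policy wins. Set log_C to zeros (no preferences) and only epistemic value remains, so the uncertain explore policy wins instead, which is the explore-exploit trade-off the common currency is mediating.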
Brian "Ponch" Rivera:This is great. So the application of this goes from the individual, even from a neuron, but most of our clients are not neurons, they're humans. So from the human level, from the individual level, to a team level and an organizational level, the application of active inference is, I'm gonna say, universal. It's self-similar, right? I mean it scales, it's almost scale-free, right?
Daniel Friedman, PhD:Yeah, it's scale independent. The framework is not based around a certain spatial or temporal scale or system of interest, which is again why there's so much work in the application, because the framework itself doesn't respond to or require any specifics of any system of interest. That's why some people are frustrated with the generality of the principle and of the framework. But I just see that as the opportunity to actually make it go the last mile and apply it to systems of interest that we actually care about and work on every day.
Brian "Ponch" Rivera:Okay, so business leaders listen to this podcast and they are talking about VUCA, or uncertainty: volatility, uncertainty, complexity and ambiguity. We talked about mismatches a little bit earlier. Is it okay to say that at the organizational level, when we do some type of policy planning or counterfactual work, or we're doing some design thinking or complexity thinking, we're trying to figure out how we can minimize uncertainty? Can we say that?
Daniel Friedman, PhD:Yes, we could think about.
Daniel Friedman, PhD:Given the generative model of our organization, the imperative is to bound surprise.
Daniel Friedman, PhD:Again, that doesn't mean the eradication of novelty, because we could define the generative model to mean our bank account incrementally increases at this rate and our product usage increases at this rate, and if it's increasing in this way we'll be minimizing surprise, we'll be fulfilling our preferences.
Daniel Friedman, PhD:Those policies will have high pragmatic value. Those policies, or some other policies, may have high epistemic value, because they'll help us refine our world model so that we can make better pragmatic decisions as well. And so all those policies, the more research-oriented, epistemic, exploratory ones and the more locking-in, fine-tuning ones that make the observations aligned with our preferences today (pragmatic value), both of those can be spread out and evaluated with a common currency on the same table. And I'll point to Bijan Khezri, whose PhD dissertation was published as a textbook-like publication, and it is an incredible modern work on free energy governance. And it's just the beginning of what will be one of the areas of applying active inference in the active inference ecosystem: organizational design, and re-understanding previous management, professionalization and organizational operations within the context not of reward and reinforcement learning, but rather within the modern cognitive science and cognitive security approaches that we have today.
Brian "Ponch" Rivera:Yeah, so I'll tell you right now that one of the challenges we have is all these different models and frameworks that confuse folks, right. So anything that is a unifying model that can help people understand, not simplify the world but make sense of the environment, I think invites the right interventions for their current context. We can look at the Cynefin framework, we can look at John Boyd's OODA loop, we can look at Endsley's situation awareness and so forth. I wanna shift gears a little bit here and kind of touch on the paper you wrote about conflict, and there's some context to this as well. So fifth generation warfare is being talked about quite a bit these days: disinformation, misinformation, you can think of it as manipulation of the environment, propaganda, whatever it may be. Social media is accelerating this. Active inference and fifth generation warfare: is there a strong correlation or connection there?
Daniel Friedman, PhD:Yeah, this is a very good and a very deep question. For those who have different familiarities: the first, second and third generations of warfare. First off, as we described in this active inference and modeling conflict paper, these are not chronological, so it's not that these were different epochs of history and now we're in a different epoch of history. Rather, we can understand all of these generations as archetypes that are playing out in different situations, in different ways, in different contexts. And whereas the lower-numbered generations of warfare refer to the most boots-on-the-ground, the most physical and infrastructural conflict, the higher numbers, four, five, however one wants to think about this, increasingly engage with cognitive and informational conflicts and ultimately with informational communication overall. Because one of the complexities of fourth and fifth generation warfare is, for example, the gray zone and the ambiguity around what kinds of actions even are aligned. We all know of situations in our teams and organizations where all the good intentions in the world, all the right types of thought diversity, all the right types of thought resonances, don't translate to effective action. And one can imagine that adversaries are able to threaten the cognitive security, in terms of the perception and action selection, of individuals and groups. Those threats on cognitive security, threats to sensemaking and decision making, are, if we use the warfare framing, considered fourth and fifth generation warfare.
Daniel Friedman, PhD:And so, as people are saying: yes, there's the physical, there's physical security, you lock your door when you go outside. Yes, there's the digital, there's digital security, encryption and DDoS protection and all of these things that computer systems do. But what is the interface, and what is the leverage point that we have as biological creatures? What is a way that we can tackle these synthetic or emergent patterns of risk that deal with both physical and digital systems? For example, a person distracted by a cell phone while driving a car: is that a physical security threat? Is that a digital security threat? It's a cognitive security threat, because it's an emergent outcome of a cognitive system in their use of a cyber-physical niche. And so cognitive security, and the framing of conflict that we provided in that paper, helps us squarely target and center the cognitive phenomena as that which should be understood and secured, instead of relying on a physicalist description or a computational description of that system. Rather, we can bring the physical and the digital components into play by talking about the cognitive agent in the cyber-physical niche.
Brian "Ponch" Rivera:So cognitive security is a term that you brought up today. That is not the first time I heard it; actually, the first time I heard it was from you. We talk a lot about cognitive warfare, but I do like your description there of the iPhone, or the phone, going off in the car and getting us distracted, and how it connects to all this.
Daniel Friedman, PhD:Just one note on cognitive security. Of course we can keep talking about this, but I would point people to a 2017 testimony by Rand Waltzman, "The Weaponization of Information: The Need for Cognitive Security." As he framed it then, that is one of the documents that inspired RJ and I to work together in this area.
Brian "Ponch" Rivera:That's amazing. So for the application, we've got fifth generation warfare, we've got teams, we have organizations dealing with VUCA: volatility, uncertainty, complexity and ambiguity. There are other applications of active inference that we're seeing, and this is pretty near and dear to my heart, and that is the application to PTSD and TBI in veterans, not just veterans but first responders and anybody that faces some type of trauma. So we're starting to see a lot of use of the free energy principle and active inference there. I just saw a great paper by Inês Hipólito (I forgot who the co-author was) on autism, to understand that. So neurodiversity, am I saying that right? Neurodivergence, neurodiversity. So understanding basically how we make sense, decide and act in this world is at the core of active inference and understanding the free energy principle. So, Daniel, what else can we look at? What other domains are we seeing active inference and the FEP pop up in?
Daniel Friedman, PhD:Awesome. Yeah, I'll give one thought on this kind of human neuroscience and neurodiversity, and then we can explore maybe other areas. So one cool thing about active inference is that, given the generative model, we then unroll perception and decision making as Bayes-optimal. So that doesn't mean, again, that the startup is going to work, or that the person's behavior is socially accepted, or that everything goes according to someone else's dream for it. It means that the ball rolls downhill according to how the model is set up. And so this is a slightly different way to think about neurodiversity, because instead of drawing some DSM line and saying, well, here's the pathology and here's not the pathology, normal and sick, or here's an evaluative criterion that's going to classify people in some other way, we can actually respect and open the conversation to these different axes of neurodiversity, including how they intersect and how they relate to environmental factors, without pathologizing or saying that some type of activity is irrational. No, it's rational. It's the ball rolling downhill, given a different generative model.
Daniel Friedman, PhD:And so one of the most active areas of work is understanding various types of conditions and different kinds of cognitive phenomena. Like you've mentioned autism; other topics that have been well researched include depression and other cognitive patterns. We approach those not by asking what is going wrong here, but rather what is different here, and how does the optimal rolling-out of how this group is different give rise to the sensemaking and decision making patterns that we observe? And that may lead to some unconventional suggestions about interventions or ways to modify the niche. So I think that's really positive, because it helps us find continuity between accessibility and inclusion and taking a broad perspective, but also, in high-reliability systems, being able to really dial things in and make critical decisions. So I find that that synthesis, just qualitatively, is very pleasing and allows us to have a value-guided framework that also has a performant edge. Any thoughts on that?
Brian "Ponch" Rivera:Well, I think, as you're walking me through this, what came to mind is walking into an organization and helping them minimize free energy by showing them a little bit about active inference through an experiential learning activity, or activities where they can see how they interact with the world and how that applies throughout their home life, their work life and all that, so it scales, right. And then I think what ends up happening, or could happen, is people start to see the application to, maybe, artificial intelligence, maybe to big data or data, maybe to, like you brought up, startups. And it's two things. It's John Boyd's OODA loop: he gave us something that's very powerful that many people misunderstand, and active inference helps people understand it at another level, right. That's my view on this. So if you could show folks these things through experiences, kind of like you did with the little "put your finger on a table or on a surface": that's a great explanation, that's an experience, and I go, OK, that makes a lot of sense.
Brian "Ponch" Rivera:And if I could see that from an individual level, and I could see that as a team, maybe I need to leverage weak-signal detection within my organization, meaning that neurodiversity or cognitive diversity. I need to leverage that with the tools and techniques that allow us to do so, because that's an intervention, and that way we can make sense of the external world, which each of us has a different perspective on.
Brian "Ponch" Rivera:And I think that one of the concepts is that we see the world from our advantage point, not necessarily our vantage point but our advantage point, and I get that from Robert Grant. So there's a lot of implications here in understanding active inference. And I think what makes it challenging, and this is my perspective, is there's a lot of math in it, and when I look at it my eyes just start to roll back and I'm like, oh my gosh, right, why am I looking at 20 pages of math? By the way, John Boyd did a lot of math years ago and gave us energy-maneuverability theory, and we can explain that today pretty well too. So I don't know. There's so much more we can unpack here.
Daniel Friedman, PhD:Yeah, I would love to see a more rigorous John Boyd EM-active inference connection. I think that's hot and high on the bounty list. No kidding, do you think?
Brian "Ponch" Rivera:Yeah, it'd be insane. Part two. We'll find the right guys, yeah.
Daniel Friedman, PhD:The question about the level of technical detail in the math: there are so many layers to this question too. People are relatively comfortable looking at scatter plots and linear regressions. People are comfortable with t-tests and p-values, and those techniques are from the 1900s and they're still absolutely valid. But if one needs to kind of pull that thorn out and add in a new thorn with Bayesian statistics, suffice to say there's one million PhDs of technical detail that no human will dive into.
Daniel Friedman, PhD:However, just like we use numbers without necessarily getting caught up on number theory when we're at the store, we can do cognitive modeling at the store without being caught up in the actually infinite levels of technical detail. And then, lastly, just to the areas of application. We talked a lot today about organizational and personal, as well as psychiatric, active inference applications, but here are just a few other areas so that people might see the scope: agroecology and understanding biosystems modeling; robotics, where it's important to have edge computing and the Internet of Things in this adaptive, real-time, low-compute setting; and also theoretical biology, so just understanding processes of learning and behavior, ecological succession and evolutionary or intergenerational processes. So anywhere you see living organisms, there's something active that could be happening there, or at least you could be modeling it that way. But those are some exciting areas that people are looking into.
Brian "Ponch" Rivera:Hey Daniel, I want to wrap this up for our podcast, the audio portion anyway, and we'll invite our guests and listeners to stay on and move over to YouTube and our No Way Out extras. But before we conclude this portion of the podcast, the audio part anyway: how can our listeners get in touch with you, and what's on the horizon for you and the Active Inference Institute? Sure, thank you.
Daniel Friedman, PhD:I would point people to two places. First, if you would like to learn and apply active inference, please head over to the Active Inference Institute, activeinference.org. All backgrounds, all perspectives are welcome, so we'll very much look forward to having you participate in the ecosystem, because there are so many things happening, and surely, even if you listen to this tomorrow, not July 12, 2023, you'll find different information on that site and different things happening at the Institute. So just head over there whenever you hear this, and that's going to be the best way to get involved. And secondly, I would point people to cogsec.org, C-O-G-S-E-C, to learn more about our work in cognitive security, to catch up on our previous research initiatives and to contribute to our ongoing work.
Brian "Ponch" Rivera:Awesome, Daniel. I really appreciate your time today and, like I said, we'll continue the conversation here in a moment.