 
  No Way Out
Welcome to the No Way Out podcast, where we examine the variety of domains and disciplines behind John R. Boyd's OODA sketch and why, today more than ever, it is imperative to understand Boyd's axiomatic sketch of how organisms, individuals, teams, corporations, and governments comprehend, shape, and adapt in our VUCA world.
Agentic AI Thinks Like Boyd: The OODA Upgrade LLMs Can’t Touch
🎖️ **Royal Marine Commando Ben Ford** (software engineer, Haskell/category theory) joins *No Way Out* to expose the **OODA loop myth** most AI devs still believe.
  🔥 **Agentic AI runs John Boyd’s *real* OODA** — not the linear "observe-orient-decide-act" cartoon.  
  🔴 **LLMs are evolutionary dead-ends**: text-in, text-out, no destruction & creation, no real-time model updates.  
  🟢 **Active inference AI = fractal OODA**: destroys old models, creates new ones *every cycle* — at 1/1000th the energy.
  ---
  🧠 **KEY BREAKDOWNS**
  • **Orientation = the entire loop** (neuroscience + category theory proof)  
  • **Unfolding circumstances = affordances in 4D space-time** (Wardley Maps + spatial web)  
  • **Destruction & Creation = Active Inference** (FEP, Markov blankets, low-energy dominance)  
  • **Why Scrum fails, exploration wins** (knowledge *of* vs. knowledge *about*)  
  • **2026 AI forecast**: edge-device FEP agents, Verses AI, LLMs obsolete. 
💬 **QUOTE**
  > “LLMs are linear OODA. Active inference is *fractal* OODA. The difference? Real-time destruction and creation.” — **Ben Ford**
  ---
  🚀 **No Way Out Podcast** – Where Marines, AI devs, and systems thinkers decode dominance.  
  #OODA #AgenticAI #JohnBoyd #ActiveInference #FreeEnergyPrinciple
NWO Intro with Boyd
March 25, 2025
Find us on X: @NoWayOutcast
Substack: The Whirl of ReOrientation 
Want to develop your organization’s capacity for free and independent action (Organic Success)? Learn more and follow us at:
https://www.aglx.com/
https://www.youtube.com/@AGLXConsulting
https://www.linkedin.com/company/aglx-consulting-llc/
https://www.linkedin.com/in/briandrivera
https://www.linkedin.com/in/markjmcgrath1
https://www.linkedin.com/in/stevemccrone
Stay in the Loop. Don't have time to listen to the podcast? Want to make some snowmobiles? Subscribe to our weekly newsletter to receive deeper insights on current and past episodes.
One of my best friends I've never met, someone I've been collaborating with for, I don't know, five or six years. We talk and communicate, we're both brother Marines, and he's over there across the pond while we're over here. Who's that? Ben Ford, Royal Marine Commando.
Ben Ford: I've been waiting for this invite since, like, episode two. We kept sending you invites.
Mark McGrath: We wanted to get it perfected before we brought you on. We didn't want to go half-assed.
Ben Ford: How very un-OODA-able of you. Yeah.
Brian "Ponch" Rivera: I have an idea. Let's just come up with some ideas to talk about for the next three minutes. We'll throw ideas around; we don't have to anchor on anything yet. And then at the end of three minutes or so (I've got the time running now) we'll decide what we want to talk about. So I'll throw this out there: the machine is fractal. Heard that this morning. Fractal approaches to everything, sacred geometry. We've been talking about that a little bit on the podcast. Many people push back on it. I don't care. So I'm going to throw it over to you, Ben. What do you want to talk about? We'll keep going around in circles.
Ben Ford: I would love to look at some of the links between Boyd's work and some of the newer neuroscience and pure mathematics that you guys have been digging into with the folks from Verses and the free energy principle. I think there's a really interesting theme to pull on there that takes Boyd way beyond warfare and right into current times. Moose.
Mark McGrath: Fractal was one of the first conversations I remember having with Ben years ago. We were talking about how orientation was fractal. That would be cool to talk about. The neuroscience heavy of the duo is of course Ponch, so I always get my notebook out.
Brian "Ponch" Rivera: Maybe.
Mark McGrath: Go ahead, Michelle. I keep finding that what I think all three of us are dialed in on is that authentic Boyd still has a lot of runway. When you integrate authentic Boyd, one, you can find things that are allegedly new that Boyd already did a long time ago; we just called it something else. But two, and this is a lot of what I've been writing about, there are other snowmobiles to make with Boyd and other thinkers, like Marshall McLuhan, for example. When you have depth in one, learning the other actually deepens your grasp of the original. So the more I learn about McLuhan, the better I get at Boyd, and vice versa: the more I learn about Boyd, the better I get at McLuhan. I think there's a lot out there that people have left on the table.
Ben Ford: Yeah, I agree with that. That's very category-theoretical of you as well, mapping between different domains.
Brian "Ponch" Rivera: Yeah. I've got another one: outside information. We've got three sources of information on the outside, and you're probably looking at my screen going, what the hell did you come up with, Ponch? We'll talk about that some other time. But outside information, very few people talk about this. Connection to the Muse. We've had folks talk about the Telepathy Tapes on the show. Maybe the Akashic Records. It's just one of those things I think many folks ignore when they look at the OODA loop: what do those three things on the left side actually mean? That might be something we could explore as well. Ben, your turn.
Ben Ford: Yeah, so I want to dig into unfolding circumstances. There's a very particular meaning to folding and unfolding within functional programming and category theory. I've been kicking around some ideas, basically looking at the intersection between Boyd's work and that economics Nobel Prize.
Brian "Ponch" Rivera: Yeah. Destruction and creation.
Ben Ford: So I'd love to dig into what's folding and unfolding, and destruction and creation, and where we can take that. That would be cool.
Brian "Ponch" Rivera: All right. We've got another minute. Moose, over to you. You have the last one.
Mark McGrath: Well, I'm an economist, I have a master's degree in it, Ponch has a degree in economics, and John Boyd had a degree in economics. The creative destruction they talk about comes from Joseph Schumpeter, whose book Capitalism, Socialism, and Democracy was one of the books Boyd actually read. It's on my list for a return trip to the archives; that's definitely one of the books I want to find so I can see his margin notes. There was a LinkedIn post, which I think we all probably saw, where somebody was saying this was plagiarizing Boyd, but Boyd actually reversed Schumpeter. I think Boyd took it deeper. I think Boyd is actually at a higher level than these economists, quite honestly, because they're looking at a triangle and Boyd was looking at a pyramid. They're looking at one side of a pyramid, and there's a lot more.
Brian "Ponch" Rivera: Well, let's start on the outside there, where Ben brought us up to unfolding circumstances. I brought up outside information, but let's start there and bring it onto the inside of the loop, which I believe takes us into FEP and other things. Ben, when I talk about unfolding circumstances, and today the guys at Hedge I were talking about this, when you've got somebody on the roof, you know it's time to go ahead and take some type of action. We've had John Flack on here. We've had other folks come on the show to talk about affordances. Unfolding circumstances, to me, mean affordances created by other living systems. So I'll throw that in your lap: tell me where to go with that.
Ben Ford: Hey, my internet just had a little bit of a wobble, but it seems to be okay now.
Brian "Ponch" Rivera: No, I was just saying: unfolding circumstances, to me, that's an affordance created by another living system, an opportunity for action for us. So, going back to your point about it earlier, what does it mean to you?
Ben Ford: If you think about when we take action, we cast a bunch of ripples into the environment, right? And there are multiple other agencies and entities doing the same thing. So our unfolding circumstances are all of the multiple ways those things can interact and overlap. The way I see it, our orientation is basically figuring out, pattern matching, reverse engineering, what those overlapping threads mean, and ascribing some kind of pattern recognition and model of the world, that inside-out model of the world, to those patterns. So when I think about unfolding: when you unfold, from a category theory and functional programming point of view, you start with a seed, something, and then you generate a whole bunch of things from that something. It could be a geometric progression, or a bunch of different possibilities for a trading system, and all of those are unfolding possibilities, all of which are possible, but few of which are actually followed.
Brian "Ponch" Rivera: Let me ask you this: do you have to take an action to see an unfolding circumstance, or can you just sit back and observe it?
Ben Ford: I think you can, and this is maybe where, when I explain OODA, I like to try and start from the point of view of action. Because if you think about the active inference lens that you guys have been really successfully shining on this, you don't really have the ability to orient until you've put something into the world to try and react to, right? So yes, you can sit and observe, but in reality my visual field is being built by my eyes making their little movements. I'm not sitting here passively absorbing this conversation and the world. So yes, there is a model of Boyd where you're basically sat as some kind of passive observer, but I think in practice that's not really how it works, especially when you bring in FEP and all the other stuff you guys have been talking about.
Brian "Ponch" Rivera: So, hey, Ben, the third of the three sources of information that we have on the OODA loop would be unfolding interaction with the environment. To me, that would be that feedback loop, I guess you can call it a negative feedback loop, where once we take some type of action on the environment, which could be observing or moving our head, whatever it may be, that mixes with the features of the world, right? The features of the world, which include entropy, uncertainty, mutations, and all that. So walk me through it, because I'm a little stuck on your description at the moment. Unfolding would be our actions mixing with everybody else's, is that correct?
Ben Ford: Yeah. So, just to take a step back, the very weird little pathway I found myself going down as a software engineer, before or just around the time I came across Boyd, was functional programming, specifically a programming language called Haskell. Haskell is a quite academic language and it's very influenced by category theory. So there's this relationship, and I know you've had a few guests on who talk about category theory, where the mathematical structures within category theory feed into this programming language I was using for a long time. Unfolding there is essentially: you start from a seed and you generate a structure from that seed. A very simple example might be: start from zero, add one, keep adding one, and you get a list of natural numbers. But that's a simple linear unfolding. You can have fractal unfolding, where you unfold something that then unfolds something that unfolds something, and you end up with quite big structures being built. And then the opposite, or the dual, of an unfold is a fold. So all of these structures that you build up, you then collapse down. You create and then you destroy. There's something quite interesting in that you can look at create and destroy from either lens there: if you start with something, you can create something from that, or you can consider that you're exploding something to expand into new things. Did I drop out again for a sec there?
Brian "Ponch" Rivera: Yeah, you did for a moment, but it'll pick up. We'll be fine.
Ben Ford: Okay. Yeah. So, depending on how you look at creation and destruction, you can consider expanding from a single point into multiple things as creation, or you can consider exploding that single point into multiple things as destruction. I think there's something quite interesting there. I was playing around with some of this stuff with the help of Claude the other day. Internally, in our orientation, we need to destroy and recreate our internal models of the world. So what I'm thinking at the moment is that your unfolding interaction with the environment is destroyed, or compacted down, into your orientation, which is then exploded out again into your various different courses of action and the ones you choose as actions. And then the loop continues in that fractal kind of way.
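For readers who want to see the programming version of what Ben is describing, here is a minimal Haskell sketch of the unfold/fold duality (names and values are illustrative only, not code from the episode): `unfoldr` grows a list from a seed, a small recursive tree type shows the fractal case, and `foldr` collapses the structure back down, creation and destruction as two ends of the same loop.

```haskell
-- Minimal sketch of unfold (creation from a seed) and fold (collapse back down).
import Data.List (unfoldr)

-- Linear unfolding: start at a seed, keep adding one, stop at a bound.
naturalsUpTo :: Int -> [Int]
naturalsUpTo bound = unfoldr step 0
  where
    step n
      | n > bound = Nothing         -- stop generating
      | otherwise = Just (n, n + 1) -- emit n, next seed is n + 1

-- "Fractal" unfolding: each seed unfolds into sub-seeds, building a tree.
data Tree = Leaf Int | Node [Tree] deriving Show

unfoldTree :: Int -> Tree
unfoldTree 0 = Leaf 0
unfoldTree n = Node [unfoldTree (n - 1), unfoldTree (n - 1)]

-- Folding is the dual: the structure we created is compacted to one value.
total :: [Int] -> Int
total = foldr (+) 0

main :: IO ()
main = do
  print (naturalsUpTo 5)         -- [0,1,2,3,4,5]
  print (total (naturalsUpTo 5)) -- 15
  print (unfoldTree 2)           -- a small self-similar tree
```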
Brian "Ponch" Rivera: So you bit into category theory a little deeper than we have. The folks that came on, who we worked with at an oil and gas company, went to MIT for safety, looking at, I forget the name of the safety protocols, but they came out very familiar with category theory, and that's the path they're on now. Do you have any more background on category theory for knuckleheads like me? Because when they were on here, I was like, it kind of sounds like FEP, it sounds like a lot of active inference and predictive processing and understanding complex adaptive systems. Is there any more you could highlight on that?
Ben Ford: Yeah, so I'm some way from being a mathematician by any means. The way I think of category theory is that it's a common language that allows you to map multiple different disciplines to a single thing. Because it's so abstract and so general, you can make these kinds of statements: if you have a mathematical structure that forms a category, you can make certain inferences about it. And it turns out there are a lot of these. There's a lot in physics, there's a lot in economics. I follow a guy on LinkedIn who comes out with literally a new paper on category theory across various different things; I think the one he came out with the other day was portfolio construction in economics. Because it's so abstract, it's very applicable to multiple different things. I know the Verses team have a big team of category theorists who are basically building on this concept of lenses, which is a more advanced area of category theory I don't pretend to have the maths to understand, but they're using Bayesian lenses to model the free energy principle and then engineering smarter agents out of that. It's so abstract that it doesn't make any sense when you first look at it, but it underpins lots and lots of the different software libraries and things I've used, and there are some very useful properties that drop out of knowing that something is a category.
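As a rough illustration of Ben's point that one abstract structure gives you leverage across many domains, here is a small Haskell sketch using Functor, the usual gateway abstraction (the types and names are invented for this example, and it is nowhere near the Bayesian-lens machinery the Verses team works with): one generic function, written once against the abstraction, applies unchanged to three very different structures.

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Three very different "domains": an inventory, an uncertain reading, a plan.
newtype Inventory a = Inventory [a]          deriving (Show, Functor)
newtype Noisy a     = Noisy (Maybe a)        deriving (Show, Functor)
data    Plan a      = Step a (Plan a) | Done deriving (Show, Functor)

-- One generic operation, written once against the shared abstraction.
acrossDomains :: Functor f => (a -> b) -> f a -> f b
acrossDomains = fmap

main :: IO ()
main = do
  print (acrossDomains (* 2) (Inventory [1, 2, 3 :: Int]))
  print (acrossDomains (* 2) (Noisy (Just (10 :: Int))))
  print (acrossDomains show (Step (1 :: Int) (Step 2 Done)))
```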
Brian "Ponch" Rivera: I don't know if it made the cut in the episode with Dan Mapes, but in the part we recorded with him he actually brought up that the why is the FEP, the free energy principle; the how is active inference; and the what is the OODA loop. That part he brought up with me after we stopped recording. The idea there is understanding the principle, why this thing exists; the how is where we're actively engaging, the active inference part of active inference, running internal simulations and also taking action on the outside. Internal simulations, and I want to make this very clear because I know Moose has druthers with this as well, are about reducing future uncertainty and risk. So you do not need to create an "OODA-R" model, an OODA-risk model, because risk is inherently built into it, right? That's one of the things Moose and I talk about quite often. But anyway, the point is, having gained access to these folks, and I don't know if we'll ever get Karl Friston on the show, the things they're talking about are kind of inherent inside what John Boyd gave us through his sketch and all the work prior to it, right? So I want to hear your thoughts on that. Or Moose, do you have anything you want to share there?
Mark McGrath: Yeah, I need some learning-curve crunch on category theory, but prima facie it seems like a lot of the things he's talking about in Conceptual Spiral: building on things, showing relationships, and then creating something new, in this case with math, pardon the pun. I mean, it seems like there are some conceptual similarities.
Brian "Ponch" Rivera: If I remember correctly, it's about the context too, right? It's the context that matters. When we're talking to data scientists, it's the context around the data and the relationships around it that matter, right? Yeah. So is that true, Ben? Is that a good way to look at it?
Ben Ford: Yeah, so I think Dave Snowden, who was guest number one, right? I think Dave, and most complexity scientists, would say that it's the relationships between things that really convey most of the meaning, rather than the things themselves. So when you're talking about relationships, you end up talking about graphs. And when you're talking about traversing graphs and bringing things together, you're talking about collapsing. If you're doing data science, you're generally doing some sort of graph traversal. If you're doing a database query, you've got a bunch of different entities in different tables, those have relationships to other tables, and to do a query you're essentially doing a graph traversal across the whole thing and then generally aggregating it down. So you're traversing across a whole bunch of data and then you're crunching it, destroying it, if we're talking about destruction and creation, down to a single entity. And the dual way of looking at it, similar to the unfolding and folding thing: you could say, well, I'm just going to take all this data and crunch it down into something. So have I destroyed it, or have I created something new? I think where it gets really interesting with destruction and creation is that destruction is creation and creation is destruction, depending on which lens you choose to look at it through.
Mark McGrath: Depending on orientation.
Ben Ford: Yeah.
Unknown: Yeah.
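A small Haskell sketch of the query-as-traversal-then-collapse idea Ben describes above (the tables and figures are made up for illustration): follow the relationship between two relations, then fold the joined rows down to one aggregate per entity, many rows "destroyed" into a single summary.

```haskell
import qualified Data.Map.Strict as Map

-- Two "tables": customers and their orders, related by customer id.
customers :: Map.Map Int String
customers = Map.fromList [(1, "Ada"), (2, "Boyd")]

orders :: [(Int, Double)]   -- (customer id, order amount)
orders = [(1, 30.0), (2, 12.5), (1, 7.5)]

-- Traverse the relationship (a join), then fold the joined rows down to one
-- aggregate per customer.
totalPerCustomer :: Map.Map String Double
totalPerCustomer = foldr insertOrder Map.empty orders
  where
    insertOrder (cid, amount) acc =
      case Map.lookup cid customers of
        Nothing   -> acc                                -- dangling reference, skip
        Just name -> Map.insertWith (+) name amount acc -- aggregate per name

main :: IO ()
main = print totalPerCustomer   -- fromList [("Ada",37.5),("Boyd",12.5)]
```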
Mark McGrath: I mean, even unfolding circumstances have to pass through a perception that filters through orientation. Orientation shapes how you look at those unfolding circumstances too, right?
Ben Ford: Yep. And the selection and bias of implicit guidance and control from your orientation to your observation is also a form of destruction, because you're filtering out certain things that you're not paying attention to, or not aligned with, or not expecting.
Brian "Ponch" Rivera: So was that IG&C from orientation to observation, or orientation to action? Yeah. So it acts as a filtering mechanism. It's kind of like your reticular activating system: it prevents you from seeing things that are right in front of you, right? So the whole idea I'm learning from FEP, and PCT, which is perceptual control theory, and EP, which is ecological psychology, is that all these things, to me, are looking at the OODA loop from a different lens or different perspective. And going back to Dave Snowden's work, where he talks about inattentional blindness, I think it's not his own work, but he borrows it: we only see what we expect to see. So that IG&C pathway, which I call perception, that's your ego, your default-mode way of operating or seeing the world, and it's an essential component of the OODA loop that many people forget about. It's just not there for them. It's the interplay, the interactions between observation and orientation, as well as the active side of active inference in your predictions. So Bayesian brain, all of that comes into that perception. Is that how you look at it as well, Ben?
Ben Ford: Yeah, pretty much. I mean, the really interesting thing, and again, I don't really have the mathematics background to understand FEP in all its glory. One day I'll take six months off and learn enough to actually implement an algorithm for it, so I really do understand it. But at the moment I'm kind of vibe-understanding it, at best. The way I look at it is that boundary between... Ponch, I think you've seen my different diagram of OODA, where it's a circle, right? And you've got your observations coming in and your actions coming out the other side.
Brian "Ponch" Rivera: Yeah, yeah.
Ben Ford: So that boundary. And I believe, I think Chet said that towards the end of his life, Boyd was reconsidering his sketch and wanting to make orientation the whole thing in some way.
Mark McGrath: That's what we talk about every day. I just got off a call earlier where we were having that discussion with someone who's a previous guest and a collaborator. The epiphany that people who think they know Boyd have is that orientation is exactly what he's talking about. Orientation is literally how we do 100% of everything: how we sense, decide, react, how we learn, how we think, how we feel. It all flows from orientation.
Brian "Ponch" Rivera: So on that, Ben, you brought up some of the guests we've had on PCT. We're also working on the EP side, the Gibson, what do you call it, Gibson affordances, the Gibsonian view of everything. So, starting with the outside affordances looking in: what some of these theories say, and I might get this wrong, is that there is no prediction, there is no world model, there is no mental model, it's just us having access. And again, the way I look at it is they're looking at part of the OODA loop and saying it's the outside information, the unfolding circumstances, those affordances, those opportunities for action that are acted upon, that drive our perception, you know, perceptual control theory. It's looking at the action, the outside world, and perception, and saying those are the only things that matter. Then you get into the EP side of the world, which is like the constraints-led approach in sports, which is fantastic. They'll say the same thing: there is no prediction. A baseball batter doesn't predict where that ball is going to go, right? And I kind of get that; they don't sit there and calculate where it's going to go all the time. But I don't think that's entirely true, because the neuroimaging and all that says we do do some calculation, some prediction, in there. But it's that implicit guidance and control pathway from orientation to action, and I think that's the place that is a moneymaker for all of us. That's intuition, and in my view it's skills. What I'm learning from the EP side of the house, the ecological psychology, constraints-led approach, is that when you're training athletes in skill acquisition, what I like about what they say is that the context is always changing. Air pressure is changing, the temperature is changing, the ball pressure is changing, the joints are moving. There is no one set way to release a basketball on a shot, right? There are essential elements, those invariants that John Boyd talked about. So I believe, when you bring it all together, everybody is chasing Boyd. They're identifying what Boyd brought together. And, I hate to say break it down, but going back to Moose's point: you're looking at a triangle, you're looking at a pyramid, you're looking at the bottom of the pyramid, you're looking at a square. That's what's happening right now. Everybody's looking at the world from a different perspective, saying, that's what I see. And what we're saying is we believe the OODA loop, and I don't know how you feel about this, Ben, kind of encapsulates FEP, constructal law, maybe category theory, I might be off on that, ecological psychology, PCT, perceptual control theory, so many things. I believe if you were to sit down and do six months of math, you could put it all in there and go, this is how it actually works. Should I say the quote, Ponch, that I say all the time?
Mark McGrath: Yeah. I mean, because of AI, maybe now we can finally catch up to Boyd. Because, like you were saying, towards the end of his life, speaking with Chet, the other thing he was looking at was evolutionary biology and all these other things. It's so much more than the linear bullshit; I think the three of us know that. And when you show people this, you have a real opportunity to advise them to break their perception of what they think is real, and to guide them, or even empower them, to a position where they can achieve excellence and thrive versus merely survive on their own terms, and actually live destruction and creation.
Brian "Ponch" Rivera: I'd have to say that OODA, observe-orient-decide-act, is a perspective when you look at Boyd's OODA loop, right? Yep.
Mark McGrath: I mean, it is. I like to say that. But take out the top part, the labels, because that's where everybody gets stuck, and just leave the sketch. Then observation doesn't become a step or a phase. When you hear people say, "and the most important is the orientation phase," you're like, wow, I wish I was competing against you, because you don't know what the hell you're talking about.
Brian "Ponch" Rivera: That's why I draw it like this now, for that reason. Because now it looks like areas that matter rather than pathways.
Mark McGrath: Well, and the Markov blanket's important too, because I think people forget that this is all cognitive. It's all happening internally, and very little of it is happening externally. Anything that happens external to you is only happening because you perceive it so.
Ben Ford: So I might push back against that slightly. Bring it. When we're talking about skills, skill development: for the longest time, the biggest mistake I made when I was trying to learn this language, Haskell, was just reading about it, not programming in it. My ability to take my thoughts and put them into code depends on me moving my body and typing shit into a keyboard to actually write the code. If I'm not doing that, if I'm doing it all in my head, I don't have that embodied feedback loop to the same degree.
Mark McGrath: I'm not saying that doesn't exist. I'm just saying that your whole perception of it is internal. I want to build on that, because this is one of the things I learned early on from Ponch, and how we even got together: I had never seen the OODA loop drawn that way, and then I finally saw the spinny thing, with the boundary between what's internal and what's outside. But I always perceived it that way. I always thought all of this was happening inside of me. Anything outside that has meaning to me, anything that is unfolding, anything that is emergent, only emerges or becomes apparent to me because of my own orientation, not because of anything else. There's no objectivity.
Ben Ford: That's really interesting. So if we think back to Heisenberg, right, there's always another external perspective; we're back into fractals now. Take another example of skill. One of the things I did a few years ago was take a motorbike on track for the first time. And I read a book, like I do. I read a book about motorcycle mechanics, so I had an idea of what was happening on this thing.
Mark McGrath: You had a baseline.
Ben Ford: Yeah, I had a baseline understanding: bike geometry changes when you roll off the throttle, the wheelbase shortens, that tips you in more. And none of that fucking mattered at all. Because when you're on the track, that bike was an extension of my body. So although all the orientation was internal, me as a system operating extended to the tires of the bike touching the road. I'm convinced of this. When I was going around the corner with my knee on the deck, looking around the corner, throttle here, I was not thinking about where's that throttle; I was thinking about what this bike is doing underneath me right now. I think that's the same thing. That's Marshall McLuhan.
Mark McGrath: That system, that technology, that medium became an extension of you. It's just like what Ponch talks about a lot: what Boyd came to realize from the Navy Top Gun guys was that the plane is an extension of the pilot. That's the most important thing, not so much the engineering.
Brian "Ponch" Rivera: So I've been looking a lot into the constraints-led approach, the ecological approach to coaching, Ben. And what you just brought up, going back to your learning about the code, is knowledge about versus knowledge of, which is actually something we get from Gibson. Knowledge about a thing: think about Scrum. Scrum provides you knowledge about the way teams work. They plan, execute, assess, right? That's it. It's like taking kids and showing them how to kick a ball around a cone for soccer, or football where you are, soccer here in the U.S. Knowledge of how is what you just described. That's the exploration: your body, your proprioception, interoception, all that stuff, the connectedness to the external world through the act of doing, right? Acting. So again, I'm learning a lot from the ecological coaching side of the house, where they understand constraints and things like that. They get a few things wrong here and there, that's fine. But you nailed it: you have to actually go out and execute these things. You have to do them. And John Boyd, when you go back to his Aerial Attack Study, he had both knowledge about and knowledge of. He was able to break that down to provide that information to others, right? And that's what matters: knowledge of how to do something doesn't always bring about knowledge about, right? So I'm glad you brought that up, Ben.
Ben Ford: Sorry, an interruption. You've got a live child in there? I've got an entity in my environment that is searching for external energy and utilizing all means at her disposal to find it, the means being Dad, who's sat at a computer and needs to order something. Sorry, my internet was cutting out a little bit there.
Brian "Ponch" Rivera: No, no, it's fine. We were just talking about the knowledge about versus knowledge of point we were bringing back from your earlier comment.
Mark McGrath: What about understanding? Because knowledge is one thing, but understanding is a whole different thing. I think back to, I'm pretty sure it's MCDP 2, Intelligence, but also MCDP 5, Planning, which talk about the hierarchy of knowledge, or the hierarchy of information. There's unprocessed data; there's processed data, which becomes information, which becomes knowledge, and that's where most people stop. But beyond knowledge is understanding: I can see phenomena, these circumstances are unfolding, and I understand them because I'm oriented that way. And above that is wisdom. I think most people are stuck in knowledge traps, thinking they're set just because they know a lot of stuff. And how could you do destruction and creation if you couldn't understand? How could you do analysis and synthesis if you didn't understand?
Brian "Ponch" Rivera: Well, my point there, Moose, is that there's a difference between reading something and then doing something. There's a coach who knows to run certain drills around cones. The problem with that is perception is not coupled with action. As a kid going through the cones, I don't know why I'm doing it; I'm just doing it. So we're missing out on that perception, that information from the external environment. I'll argue that's what happened with Agile. Scrum gave us knowledge about the way teams work, right? So we're just going through this thing, but we're missing that information: how do you actually work together with somebody? You can think of it as relative effectiveness. If your objective is to suck less than everybody else, then Scrum is a good thing, right? I don't need to be better than you; I just need to suck less than you, which is kind of the same thing. But if you want to dominate, you need to find the most effective way to do things. And I think that's what we're talking about with skill acquisition and the use of the OODA loop: perception is top-down, inside out. We do have some predictions. We understand perception and action need to be coupled, the perception-action loop, all that important stuff. How you build those skills, the skills on IG&C, matters; how you do that, what the most effective way is, is what we're talking about here. And I think it's the things we coach through the Flow Learning Lab: doing simulations and exploration, letting people explore and discover for themselves how to work together, rather than handing them something and going, here's knowledge about something. That's where a lot of organizations go wrong. So thanks for letting me go through that; it was kind of stuck in my head. But that's what we're really talking about: skill acquisition to dominate in this world, right?
Mark McGrath: That's kind of the difference between how we advise and consult versus other places, though, right, Ponch? We're trying to lead people to understanding, not to knowledge.
Brian "Ponch" Rivera: Yeah, I agree. But they get that through experience. It has to be an experience. And that's what Ben brought up: I read about this stuff, but it doesn't do you any good until you actually go out there and explore it, right?
Ben Ford: Yeah, even something super academic like programming. In terms of a skill, that's the most unembodied skill you could possibly imagine, right? It's knowledge work. In theory, you should be able to sit and work all this shit out in your head and make it work. But in practice it's far, far more effective to be typing stuff into a keyboard and going through that failure loop of, I tried this, it didn't work. I start to build some intuition about what good code looks like. I start to build some intuition about what the compiler errors are telling me and what I need to do about them. None of that is anywhere near as effective if you're just trying to learn it from a book at first. Learning from a book helps a bit, because you get that warming up of knowledge, but there's no bridge between knowledge and skill without practice, without actually doing it, in any field.
Brian "Ponch" Rivera: Yeah, and that goes for teamwork. We talk about that quite a bit. Folks just go, we already know how to do this. Actually, you don't. You need a refresher course on it from time to time; you need to get re-blued, the way we talk about it from the weapons school. You need to learn how to do these things. I think we're all on the same page. So where do we differ? What are you seeing from time to time that we're putting out where you kind of disagree, Ben? Other than that he calls football something else, yeah. Gridiron.
Mark McGrath: He spells flavor with an O-U-R; we spell it with an O-R. Same with colour. Well, the first time we met, Ben heard me say "aluminium." Well, nobody speaks English like the English. We have destroyed that language in this country.
Brian "Ponch" Rivera: I speak American.
Mark McGrath: Yeah.
Ben Ford: I'm sorry. I hear all of your guests, and, I mean, this is the beauty of OODA. Mark, you said something earlier: now that we've got AI, are we going to catch up with Boyd? We won't ever catch up with Boyd, because there's always the learning curve, and there's always somebody building more curve. Because Boyd is so general and so ahead of his time, everything that people do kind of adds to Boyd. So you think you've caught up with Boyd, and then you come across Karl Friston, and it's like, oh shit, Karl Friston's talking about OODA as well. Now there's all this other abstract maths and other stuff I need to learn to understand Boyd. So we'll never catch up. That's the beauty of it; it's an open-world game.
Mark McGrath: That was his intent: to leave an open source, an open system.
Brian "Ponch" Rivera: Well, put yourself in the shoes of an academic who looks at us and goes, what the hell are you guys talking about? Because when they go look on the web, what do they find? Observe-orient-decide-act; it's a stupid loop; what are you talking about, right? So academics generally push back on anything we have to say. Because they didn't learn it at Harvard.
Mark McGrath: They didn't; it's credentialism. I mean, that's why they hated Boyd: he didn't have the quote-unquote credentials.
Ben Ford: Yeah, and it's hard, you know. I've done a few roles where I've been working with groups of PhDs, computer science PhDs who've been transplanted to work in industry. The way knowledge is transmitted in those environments is very, very different. You're not generally going to find somebody from an academic background who would take notice of somebody who only wrote one paper, in 1976. So there's that. And don't get me wrong, some papers from the 1970s are absolutely foundational. The category theory stuff we're talking about was 1950s, 1960s, and is still being built upon. So there's probably an academic corpus on Boyd that somebody could write where all this stuff is cross-referenced, but it's just a different learning culture from, I believe, practitioners. And there are people who would not engage with Boyd beyond a five-minute YouTube video because it doesn't have immediate relevance to their work. They're quite happy with the simple observe-orient: I see something, I get my gun out, I orient on it, I point my gun, I shoot it, I'm done; I observe him falling down. Okay, great, the military version of OODA is intact. The simple version is like a local minimum in a machine learning algorithm, sorry, local maximum: it's better than being at zero, and it's perfectly useful for some people.
Brian "Ponch" Rivera: You just have to suck less. That's it.
Ben Ford: Yeah.
Mark McGrath: Being at one is better than being at zero, but being at 100 is better than being at one. And that's what I think people leave on the table when they misunderstand Boyd, McLuhan, Buckminster Fuller, Hayek, all the greats on the list. And why? Usually it's because they're oriented by something or somebody else that says, this is bad, this guy is not one of me. I was talking earlier on a call about that Blind Strategist book. Ian touched on it when we had him on last time, and no one's eviscerated that book better than Ian has. But that guy is so angry about something that has nothing to do with the grand scope of relevance of everything Boyd's talking about. He's going to focus on one little bit and say he did not understand blitzkrieg. Cool. Does that mean all the academic theory and the universal phenomena he's explaining, you discount all of that?
Ben Ford: Yeah, I mean, he spent forty-odd years studying stuff that wasn't blitzkrieg.
Mark McGrath: Yeah, right. Not only that, we had a guy on to talk about how drugs had a role in that. There's so much there. Basically, by being an idiot, he ends up proving Boyd's point: everything is incomplete, and you can't know with any certainty. That author was trying to determine the nature or character of a system from within itself and within his own circle. And all the people who hate Boyd, you can tell by reading the insert of the book who they are, and it's the usual suspects. Of course they're going to say that. That's exactly what they would say, because they didn't come up with it.
Ben Ford: Yeah. Well, it's funny. My journey to Boyd, you wouldn't think it, didn't come through the military; I actually came to Boyd through Wardley Mapping. I'd never heard of him in the military. So I'm not looking at Boyd through a military lens at all; I'm looking at Boyd through much more of an agile software community and Wardley Mapping lens, and then I discovered him and started reading from that perspective. I think there are some really interesting islands of very focused academic study which are kind of like closed systems, like some aspects of computer science and category theory, and very many other areas of academia. They're writing papers about papers, digging deeper and deeper into more and more esoteric stuff, maybe getting further away from their environment, but they are discovering new structure. And then there's somebody who is interdisciplinary and not wedded to any of those things. I think it's pretty natural that people who do that don't get much traction in those areas, because they're not looking inward; they're not focusing their whole career on understanding one thing. And that might sound like I'm being negative on people who do that; not at all. People who are able to focus on one thing for their entire career make amazing breakthroughs, but the applicability of those things has to come from people who have feet in many camps and are synthesizing, destroying and creating.
Mark McGrath: I don't know how you can survive without being interdisciplinary. I don't understand it. And I think that's one of the great crimes of Western civilization over the last 50 years or so: completely downplaying the importance of a liberal education. And when I say liberal, I mean a well-rounded liberal arts education that includes letters, science, art, philosophy, that gives you a complete understanding of how to challenge assumptions, how to question things. We're so over-specialized. And then the problem is all this talk about being replaced by AI. That's exactly what Buckminster Fuller said: when the generalist turns into an extreme specialist, they're going to be replaced by automation at some point. I think we're seeing that in droves, with probably a lot more to come. One thing I think is a symptom of that, or a pattern or a weak signal or whatever, is that we have rejected classical education across the board. So people have no understanding of their environment, no understanding of feelings, no understanding of taste, no understanding of culture, no empathy, and they don't understand logic. If you've seen the articles I've written on The Whirl about the 5T protocol, one of the easiest tactics to identify when someone's assaulting you verbally, in an article or whatever, is almost always logical fallacies. Stacks of them, absolute stacks of them. You could run political speeches through it, you could run articles, you could run all kinds of things, and you see the stack of giant fallacies, and you realize nobody knows who Aristotle is anymore, let alone Aristotelian logic. Or Euclid; they don't understand Euclid. That whole quest for truth and beauty is completely out the door.
Ben Ford: Yep. And the even scarier thing is that it doesn't fucking matter, because the people who are doubling down hardest on that rejection of truth and beauty and logic and all of those things are the ones who are kind of winning in today's world, right? The modern system just seems to be set up so that people can weaponize that: being oriented from without, not from within.
Mark McGrath: So your orientation is being rewritten by somebody else. Again, on this call earlier, I had seen this video where a guy was arguing that the peak of everything in our contemporary times was actually '07, the introduction of the iPhone, and ever since then everything's dropped off. He was using movies and comedy as his backdrop. He was saying there were all these great movies, all this great cinema and cinematography, all this great comedy, just classic, and then all of a sudden it stopped. The knee-jerk explanation is, well, Barack Obama got elected, and he was real liberal, and everything went downhill, and that's why nothing's funny anymore and why movies are woke, blah, blah, blah. And this guy, who by the way is a conservative, was actually saying: no, I reject that. It's actually the iPhone, because when things became so packetized, when thinking became so packetized, your orientation started to meld together with everyone else's, and you lost interest really fast. It got easier and easier for you to be oriented from without, and now look where we're at. And again, it's McLuhan freaking 101. If you go back and reread the things where he quote-unquote predicts the internet, he's basically saying exactly what happened, literally to a T. But it goes back to what you're saying: it's a complete rejection of how we were classically trained. Where you live, I mean, Oxford and Cambridge were set up as spiritual, monastic centers where people were inquiring and trying to figure out what the hell is going on in this universe. And they did it through different angles: they had math and science and geometry and music and art and history. Now it's, you come to my history class, you'd better think exactly how I make you think. I'm going to influence you to think this way, whether it's right wing, left wing, or something else, but it's not dealing with reality. And what that ends up doing is orienting people in a way that is not harmonized with what's actually going on. Then they also don't have the ability to do destruction and creation, because they're incapable of rewriting and revising the models they've been inculcated with. Yeah. I mean, I had more intellectual freedom in the Marine Corps than I did in the civilian world.
Ben Ford: There was strict practice, though, right? What's that? If you don't practice something, you lose it, right? And if you don't value something, you lose it. If you're not practicing the ability to destroy and create, because you're just taking stuff in, and I'm as guilty of it as anyone else, I sit on YouTube and slurp up fucking slop just like everybody else, I think that becomes quite dangerous. It's the low-energy pathway, right? I'm not going to destroy or create my mental models because that's too much effort; what I'll do is sit here and be entertained.
Mark McGrath: But the trade-off is that I'm letting my orientation be completely written and authored by someone that's not me. And I'm still subject to the consequences of my own OODA loop sketch. I'm still subject to the consequences, but I'm not reorienting in a way that is unique or original to me. I'm having it shaped by others; it's being disrupted and designed by others, not by me. And if I try to, then what happens? Well, I'm a conspiracy theorist, or I'm a quack, or I'm not a team player, or whatever. We've heard all the tropes.
Ben Ford: Yeah. And the other thing, from an energy point of view, is that the further away you get from the centroid of everyone else's point of view, the more energy you have to expend to stay there. It becomes a harder and harder fight to keep fighting. If society is telling you to be a certain way and you decide not to be that way, and I don't know if "society" is even the term we can use anymore, it becomes harder and harder to maintain your own little island if the big island is drifting away from you in a direction you don't really want.
Brian "Ponch" Rivera: There may be an opportunity to say that higher energy states lead to more maneuverability, right? That's energy-maneuverability theory. And I'm talking free energy principle here too: if you are further away, then you have the ability to see things before others do. You can't exactly predict, but you can get ahead, anticipate what's coming next.
Ben Ford: Yeah. I think that's actually a really interesting aspect of FEP that resonates really strongly with Boyd: Boyd's idea of the capacity for free and independent action. That is your free energy in the environment. The interesting thing, and I can't remember if anyone, maybe Mal, talked about this, is that you have to balance extract versus explore, right? If you're just sitting in an extractive mode, I know this, I've got this down, I know this domain, I'm the expert, I'm just going to continue with my low-energy IG&C pathway because it works for me, it works for you until, bang, it doesn't work. So you have to have some of that energy spent, or wasted, on exploring. You have to have the ability to spend some of it on making mistakes, otherwise you end up with a big dislocation between what you think the world is like and what it's actually like. And I think that's a really interesting aspect, especially in today's world, with AI accelerating the change in everything. For me as a programmer, how I write code has changed immeasurably in six months.
Brian "Ponch" Rivera: Yeah.
Ben Ford: And some of that's good and some of that's bad. You have to maintain your ability to have, what's the word, a sense of what's good and bad. You can't just be driven by the tools, the whole vibe-coding thing. People are saying, oh yeah, I can build anything in a day, and sure, their AI tools will make it look like they've built something in a day, but what they've built is definitely not something that is robust or capable. At the same time, that ability to build anything in a day, if you direct it correctly, means my capacity as a programmer has increased, conservatively, probably 10x. Wow. And if I were still in Haskell land, which doesn't really embrace the AI side of things because it's not deterministic enough for us, we're all very snobby about determinism, then I would not be exploring that, and I would become less and less effective until the world had moved so far away from me that it's now very, very difficult to catch up. And I think that happens across all domains all the time.
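As a toy illustration of the extract-versus-explore balance Ben describes above, here is a minimal epsilon-greedy sketch in Haskell. This is an assumed stand-in for the idea, not the free energy principle itself, and every option name and number is invented: most cycles exploit the option the current model rates best, while a fixed fraction of cycles is "wasted" on exploration, which is what keeps the model from drifting away from the world.

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

type Option = (String, Double)   -- (name, payoff our current model expects)

options :: [Option]
options = [("known playbook", 1.0), ("new approach A", 0.4), ("new approach B", 0.2)]

-- Small deterministic stand-in for randomness so the sketch needs only base.
noise :: [Double]
noise = [ frac (fromIntegral n * 0.6180339887) | n <- [1 .. 10 :: Int] ]
  where frac x = x - fromIntegral (floor x :: Int)

-- Epsilon-greedy: usually exploit the option the model rates best, but spend
-- a fixed fraction of cycles picking something else to explore.
chooseAction :: Double -> Double -> Option
chooseAction epsilon roll
  | roll < epsilon = options !! floor (roll / epsilon * fromIntegral (length options))
  | otherwise      = maximumBy (comparing snd) options

main :: IO ()
main = mapM_ (putStrLn . fst . chooseAction 0.2) noise
```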
Brian "Ponch" Rivera: So I'm still stuck on this point you brought up earlier: you found the OODA loop through Wardley Mapping. How many years were you in the Marines before that?
Ben Ford: Four.
Brian "Ponch" Rivera: Okay.
Ben Ford: So they don't teach the OODA loop in the UK, not in the Royal Marines. Bearing in mind this was 2000 to 2004, so they may well teach it now. I know they teach it very fucking badly at staff college in the UK, so major to colonel, I think staff college is. But as an enlisted Marine, I didn't hear about it at all. The motto of the Royal Marines is: be the first to understand, the first to adapt and respond, and the first to overcome. So we definitely know and understand OODA on one level. But yeah, I never came across Boyd's work at all until, from the agile software side of things, I found Wardley and then, through him, found OODA.
Mark McGrath:Your brothers across the sea didn't get it authentically. I mean, we got Boyd, but not the full scope of Boyd's work. We got the Warfighting-friendly version, basically the basic decision cycle, but things like Conceptual Spiral or Strategic Game? I don't think I ever saw the OODA loop sketch until I was actually out of the Marines.
Brian "Ponch" Rivera:Yeah, so even in the book Team of Teams, they get the Oudaloop wrong, right? Yeah. And then Simon Worley does a nice job, but it's still linear observer-oriented side act, right? And it's again, it's it's it's not wrong. It's a perspective. This is how we see it, this is what it means. And what we're saying is there's more behind that. So that means to me that if you're to look at worldly maps now and what could you make stronger, what can you make better? And I think there's a lot of room for improvement. I think it's fantastic, it's all relative, right? Before worldly mapping, we didn't have much, right? I mean, when when when we started using worldly mapping, what, eight, nine years ago? I was it changed. It was a it was a talk about destruction and creation, right? That model of how to do strategy is is mind-blowing. And the reason it is because it forces you to look outside, right? Go outside your own Oodle loop and understand the environment. What are the opportunities for action that are out there? What do you need to do? And now we're starting to see people do energy time mapping. We're doing a little bit of that. There's some folks in safety that are bringing over their safety approaches into this mapping world, getting it ready for agentic AI, which we've seen, and it's just freaking mind-blowing, right? Because it's it is about the uh relationships between, I'll call it objects for now, the functions. The relationship between functions now is is gonna be driving things. But at the moment, most organizations don't know how to wardly map. They don't know how to leverage the wisdom of a crowd, how to do red teaming techniques. They don't understand basic OODA. They understand basic OODA, if you want to put it that way. They don't understand the real ODA loop. And what we're that's what we're trying to let people know is hey, we're gonna give you a low energy approach, which I believe teaching the Oda Loop is the lowest energy approach to understanding how things work, right? That's because it kind of connects everything, right? Uh I mean, if we want to talk about mental health and psychedelic assistant therapy, boom, we can talk about that. We go right into fifth generation warfare. We we could flip over and talk about the importance of strategy, which we just talked about, right? With the with the with the idea of worldly mapping, but improving on that. We could talk about you know the ecological approach to coaching and sports, how that works. We can do so many things. Well, we talked about mission command, we could talk about controllers outside and bottom up from the toilet production system. So many things we can bring to the fight with this package called the OODA loop, right? So no, uh again, I I I was just kind of shocked that you uh came across OODA in uh boardly mapping.
Ben Ford:Yeah, I mean it is bizarre, isn't it? I've talked to people multiple times who've described how things unfolded on ops and things like that. I never did anything super; I got out before Afghanistan was really kicking off. But that was pure OODA over there, right? You can drill all day long, you can teach the components of fighting all you want, but you actually have to learn fighting by getting in a fight. And getting in a fight, as I understand it, in Helmand and everywhere else, was pure OODA. Just pure adaptation all the time.
Mark McGrath:And I think it's interesting that you say that. In that part of the world, they certainly understand it.
Ben Ford:Yeah.
Mark McGrath:Despite being very Stone Age, they understand that, and one might naturally assume that kind of hierarchical, tribal culture would be very difficult for ideas to propagate through.
Ben Ford:But you know, what's that paper, One Tribe at a Time? That's pure OODA as well, right? And rejected by big military because, apparently, the forty years since Vietnam weren't long enough.
Mark McGrath:Well, the biggest name in big military, General Petraeus, told everybody to read that, and nobody did. He was on our show and we talked about it. It's still definitive. I used to use that on Wall Street. You could run sales teams with that paper.
Ben Ford:Yeah.
Mark McGrath:It's that good, you know? Yep. But yeah, it's like anything else. Maybe we're all meant to be hermits and live in a monastery and talk about this shit. I read an article, and I'd been saying this thing for years, about re-reading the Trojan Wars. If you remember, Cassandra worked in the temple, she was one of the girls in the temple or whatever. Cassandra's curse was that she knew the truth, she knew what was going to happen, and she told everybody everything that was going to happen, and the curse was that no one would believe her. And I think a lot of the time there's a Cassandra trap for what we talk about, because we can easily see the patterns clearly and we can show them to people, but they don't want it. They're just like, shut up, Cassandra, I don't want to hear that. That was the title of the article: Hey Cassandra, Shut Up. So anyway.
Brian "Ponch" Rivera:So Ben, what are you working what are you working on these days? What's uh what what kind of how's how's your business going?
Ben Ford:Yeah, so I've been doing quite a lot of agentic AI projects. I mean, there's a whole AI-OODA thing that we haven't even touched on, which I think is super interesting: to look at how LLMs have gone from basically nothing to everywhere in a few years, and to look at the emergent practice and learning of how these things are being used and the software libraries that are coming. That's very Wardley mapping-esque as well, right? You've got the thing coming in from the left of the Wardley map, moving across to being a commodity, and then all of the unfolding richness of stuff that happens on top of that. So I'm doing a lot of agentic software engineering projects at the moment, at the same time as recognizing that LLMs, I think, are probably largely an evolutionary dead end when you contrast them with, well, I kind of wish that whatever the guys at Verses are working on was a bit more in the open source world. I know they've got their protocol, which is open source, the FEP-inspired AI.
Brian "Ponch" Rivera:Well, let's talk about this. Um the the when I talk about large language models, I usually associate it with the linear oodaloop, right? And you see a lot of folks talking about linear oodaloop with large language models. You know, in the last couple of weeks, I don't have the articles up right now, but they're they're talking about a gentic AI and the OODA loop, and I'm like, well, that's that's you're talking about linear oodaloop, right? Which is, again, not completely wrong. It's a perspective of looking at this. What you're looking at and what we're looking at with Versus and um active inference AI, and then there's some other AI out there as well, reinforcement learning, which is more, it's less linear OODA um, but not as strong as like free energy and active inference AI. It is, it's growing. So it's growing within that image that we showed of the Oodaloop earlier. It's it's it's another perspective saying, hey, we're actually actively doing these things and we're lowering the energy requirement, just like we do with natural systems to engage with the external world. So again, what's happening is they're discovering the real Oodaloop. They're like, hey, this thing isn't working. We're like, yeah, we told you that. It's not going to work, right? So is that how you're kind of seeing it, or or any any differences?
Ben Ford:So I think within large language models, the interesting thing that's happened is that you've got OpenAI, which, depending on who you speak to, could very well be the Enron of the current times. They didn't come up with the transformer architecture, but they came up with ChatGPT 3.5. And it was this fucking magic technology: suddenly I can throw text in there, ask it in human language to translate that text into something else, and I get something back. It was actually absolutely incredible. And as I think I mentioned earlier, the programming subculture that I come from somewhat discounts AI because we prefer to work on these mathematical structures and build stuff that's more deterministic. So it was kind of an eye-opener for me. And then that very, very quickly was followed by lots of other companies who were also working on similar things, and lo and behold, once you have an LLM that's trained up well enough, actually building a ChatGPT-like experience on top of it is not very fucking difficult at all, which is why there are so many of them now, right? So you look at DeepSeek when it came out. Obviously there's a bit of spin here, and there's probably a bit of fifth-generation warfare from the Chinese government in how that was put out and how it became open source. I'm speculating here, but I'm assuming there's some pretty dialed-in doctrine and strategy going on when it comes to that. And suddenly DeepSeek came out, slightly different from ChatGPT but much better, 10% of the cost. Apparently it had been trained for next to nothing, and the methodology was made open source as well. So now there's this whole plethora of different AI models and different inference providers. I don't know if you guys have ever come across Cerebras or SambaNova, but you go on those and it's another ChatGPT-like experience; you go on SambaNova and it generates text like 10x quicker than ChatGPT. It's fucking amazing, right? So now there's this real forest of different LLMs, different providers, different inference providers, different infrastructures, and on top of that, there's a whole forest of open source agentic frameworks and things like this. So you can build some pretty amazing shit, even though at the same time you recognize that LLMs are probably, from that energetic and evolutionary point of view, a dead end. But someone said to me the other month, or I read it somewhere, that if all progress on LLMs were to stop tomorrow, you would still have 10 or 20 years of absolute mayhem in the economic world because of just how effective they can be. The thing I'm working on in one of my projects at the moment is a way of answering cold emails. Previously you would send out thousands of cold emails, someone would come back and be interested, and then you'd have to hire somebody to converse back and forth with them and book a call, because what you really want is that person on the call, right? And now you can get AI to do a pretty decent job of that. So the ability to send and handle cold emails is now hugely cheaper and faster. And that pattern will pertain across whole swathes of the economy.
And that's even knowing that LLMs are probably an evolutionary dead end compared to active inference-inspired AI.
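As a rough illustration of that cold-email workflow, here is a minimal sketch: classify the reply, then draft a short answer that steers toward booking a call. The `respond` helper standing in for the model call is hypothetical, not any particular provider's API, and the canned return value is only there so the sketch runs end to end.

```python
def respond(prompt: str) -> str:
    """Hypothetical stand-in for whatever hosted or local LLM you call."""
    return "interested"  # canned output so the sketch runs without a real model

def handle_cold_email_reply(email_body: str, booking_link: str) -> str:
    # Classify the reply into the only outcomes this sketch cares about.
    intent = respond(
        f"Classify this reply as interested, not_interested, or question:\n{email_body}"
    )
    if "not_interested" in intent:
        return "Thanks for letting us know. All the best."
    # Interested or asking a question: answer briefly and always offer the
    # thing you actually want, which is the person on a call.
    return respond(
        f"Write a two-sentence reply to:\n{email_body}\n"
        f"Answer any question briefly and offer this booking link: {booking_link}"
    )

print(handle_cold_email_reply("Sounds intriguing, tell me more.", "https://example.com/book"))
```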
Brian "Ponch" Rivera:Can you create an LLM for me that answers those cold emails? Sure. All right. So we'll have our bots talking to each other.
Ben Ford:Yeah. You mean you want one that tells people to fuck off when they send you a load of emails.
Brian "Ponch" Rivera:Well it doesn't matter what I tell them.
Ben Ford:I just want them to mess with them.
Mark McGrath:That's the thing, how to really maximize using agents. You hear about these people saying, yeah, I have a staff of twelve. Oh, really? How many people? And it's twelve agents, AI agents that keep their calendar and book meetings.
Ben Ford:Yeah. And you know, you can build a reasonable facsimile of an OODA loop using an LLM plus the ability to use tools and access the outside world, right? It might not be a particularly great idea to do that because of prompt injection and things like this, but you can build something with an agentic framework that looks a bit like an OODA loop. Good enough, right? Not better than good, just good enough. It's definitely not human-replaceable, but good enough that your humans can be augmented and their capacity expanded quite significantly. But it still isn't the proper thing that I assume will come out of the active inference world, which is, well, let's talk about that.
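A minimal sketch of what that facsimile might look like: a loop that observes, packs recent context into a prompt (the only "orientation" that ever changes), asks the model to decide, and acts through a tool. Everything here, the `call_llm` stub, the tool names, the "tool: argument" decision format, is an assumption for illustration, not any specific agentic framework.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a hosted or local model call."""
    return "search: latest observation"  # canned decision so the loop runs

TOOLS = {
    "search": lambda arg: f"results for {arg!r}",  # stand-in external tool
    "noop": lambda arg: "done",
}

def ooda_step(observation: str, memory: list) -> str:
    # Observe: take in new data from the outside world.
    memory.append(observation)
    # Orient: pack recent context into the prompt. The model's weights, its
    # real orientation, never change; only this window of context does.
    prompt = "Context:\n" + "\n".join(memory[-10:]) + "\nDecide: tool and argument?"
    # Decide: the LLM picks a tool, in an assumed 'tool: argument' format.
    decision = call_llm(prompt)
    tool, _, arg = decision.partition(":")
    # Act: dispatch to a tool and feed the result back as the next observation.
    return TOOLS.get(tool.strip(), TOOLS["noop"])(arg.strip())

memory: list = []
print(ooda_step("new email arrived", memory))
```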
Brian "Ponch" Rivera:So what is it so you're you're saying that these LLMs you can get OODA-like capabilities currently, but what's missing? Uh I want to go I go back to the conversation with Dan Mapes and what I learned about the spatial web. Dude, I didn't understand Web 2.0, Web 3.0, or Web 1.0. I'm like, what are you talking about? So what I learned is the three dimensions or four dimensions, right? So where have we seen three dimensions and four dimensions before? Well, we saw it from John Boy, right? His aerial attack study gave us uh a lot of stuff that he gets into e theory, right? Which gave us a structure to understand dog fighting in a three-dimensional, four-dimensional world of time, right? Time compression. So from that, it's all about relationships, it's about the geometry. And then you look at what Dan Mapes and Gabriel Renee came up with, and that's the same thing. They're looking at, hey, we need this new Web 3.0 or whatever you want to call it to have these agents interact on it to access this three-dimensional world. So is is that I'm sorry, is it kind of a leading question? Is that what's missing? Is the protocols for these agents, or is it is it much more?
Ben Ford:I think it's more than that. You can have the protocols. For example, a thing I played around with a few months ago is using just an off-the-shelf LLM to work with knowledge graphs using an ontology language, right? An ontology is a way of representing knowledge. That's pretty cool, and it works quite well. What that doesn't do is change the orientation of the model. When you have an LLM, what you get out of the box will never change, right? Its orientation will never change. You can augment it with what's called retrieval-augmented generation: you can take information from your business domain, from whatever you care about, package that up, and feed it into the prompt that you use to get your answer. But fundamentally, all you're ever doing, even if you dress it up with fancy agents and whatnot, is take some input data, could be text, could be an image, give it to the model, the model does its thing based on its inherent training, and it gives you back text. No matter what you feed that model, it will never change its inherent training. Whereas my understanding of active inference, or the AI built on top of active inference, is that it is changing its orientation as it goes. It's updating its model as it interacts with the world, which is far, far more.
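To make the contrast concrete, here are two toy loops: a retrieval-augmented call, where the retrieved context changes but the model's weights never do, and a crude belief update that rewrites the agent's own model after every observation. Both are sketches under assumed names and data shapes, not Verses' implementation or any real active inference library.

```python
def rag_answer(question: str, knowledge_base: dict, llm) -> str:
    # Retrieval-augmented generation: fetch relevant text and pack it into
    # the prompt. The model's weights, its orientation, stay frozen.
    context = knowledge_base.get(question, "")
    return llm(f"Context: {context}\nQuestion: {question}")

def active_step(belief: dict, observation: str) -> dict:
    # Toy destruction and creation: a Bayesian-style update that rewrites
    # the agent's own model of the world after every observation.
    likelihood = {h: (0.9 if h in observation else 0.1) for h in belief}
    unnormalized = {h: belief[h] * likelihood[h] for h in belief}
    total = sum(unnormalized.values()) or 1.0
    return {h: p / total for h, p in unnormalized.items()}  # new orientation

beliefs = {"market is growing": 0.5, "market is shrinking": 0.5}
beliefs = active_step(beliefs, "report says the market is growing fast")
print(beliefs)  # the 'growing' hypothesis now dominates
```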
Brian "Ponch" Rivera:It's going through something that we brought up earlier. It's going through cycles of destruction and creation, right?
Ben Ford:Correct. Yeah, correct. And it's doing it at a far lower energy cost as well, because it's not taking a rack of GPUs to do this.
Brian "Ponch" Rivera:Well, that brings us back to DNC. Is the purpose of DNC is a low energy approach to survive, but potentially. It's the Alpha and Omega, right?
Ben Ford:Yeah. So although there are huge strides being made in LLMs, you know, I've got low-power LLMs that I can run on my laptop. They're not particularly smart. You have to break things down and give them very small problems to solve. Just like a Marine.
Mark McGrath:No offense taken. That's why Boyd came to the Marine Corps. He knew it was a good place, lots of material to work with.
Brian "Ponch" Rivera:All right. Sorry about that. You just described a Marine as a giveaway.
Ben Ford:That's completely thrown me off my stride. Yeah, sorry about that. You can get LLMs that will run on a laptop, and I think eventually we'll have LLMs that run on a phone and more edge devices, but we won't ever have LLMs that are intelligent enough to act like what I've seen coming out of the active inference world at that low energy; those things run on a small CPU. So naturally LLMs will evolve to require less and less energy, and people will come up with clever tricks. But I don't believe LLMs will ever have that kind of continual destruction-and-creation OODA loop built into them. So at the moment, people like me have to construct an OODA loop around an essentially unchanging LLM.
Brian "Ponch" Rivera:So I have to ask you guys, do we come up with a simple way to explain what's the difference between an LLM and and an active inference agent or just an agent in general, and that's DNC? I mean, to me, that's the simplest way to what's the difference? Hey, they they don't destroy and create their own world model. Yeah, I think so. Problem solved.
Ben Ford:Well, get it out on the wire. Get Denise out there spreading the message.
Brian "Ponch" Rivera:Yeah, I have to tell you. So when we were out in California, uh a bunch of folks were talking about all the LLMs they were building, and they were all excited, and they heard us get up and speak, and they're like, the hell are you guys talking about? That was the first time they heard about this, right? This is downtown Hollywood. Big money, folks that work with uh Meta and folks that work with or had work with uh Steve Jobs, here they are, and they're just like, this is new to us. What are you guys talking about, right? And for us, it's like, hey, if you understand the ODA loop, you could see that this is coming. And I I told Dan this. I said, when when I found out about active inference from the psychedelic and sociotherapy world, I start wondering who's actually using this in anything, anything. And to me, it was two places. One, Hedgeye up in Connecticut. They're using the Oodleop, okay oodle loop, not the best Oodle loop, it's getting better. And then it was versus. And it wasn't until uh I think they made the announcement that Carl Friston became their chief scientific officer. I was like, holy shit, I know who that guy is, at least I know his work. They've got to be onto something that the rest of the folks don't understand, right? And then we've, you know, I've had some chats with uh Gabriel Renee about construct the law, a little bit about the OODA loop, uh, and of course the conversation with Dan Mapes and a lot of conversations with Denise. Um, and then I don't know if you I don't know if you're following. Is it Dennis O? Yeah. Yeah to um Gaussian. And I'm like, dude, I uh you you lost me at the Gaussian splatting.
Ben Ford:So even that is all destruction and creation at the end of the day. The thing that I'm playing with at the moment is from category theory: there's this idea of algebras and coalgebras, right? Algebras fold things down and coalgebras unfold things out, and all of this stuff, even LLM training, is essentially built on this whole idea of convolution, right? So you've got, call it a two-dimensional grid, for simplicity's sake. Have you seen Conway's Game of Life? It's a two-dimensional grid of squares, and each square can be either full or empty. And then there are rules: if there are too many neighbours around a square it dies, and if there are just enough neighbours it continues.
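For what the algebra/coalgebra distinction cashes out to in everyday code, here is a tiny sketch in plain Python: a fold (algebra-like) collapses a structure down to a value, and an unfold (coalgebra-like) expands a seed out into a structure. This is a loose illustration of the idea only, not the category theory itself.

```python
from functools import reduce

def fold_sum(xs):
    # Algebra-style: compress a whole list down to a single value.
    return reduce(lambda acc, x: acc + x, xs, 0)

def unfold_countdown(seed):
    # Coalgebra-style: expand a single seed out into a whole structure.
    out = []
    while seed > 0:
        out.append(seed)
        seed -= 1
    return out

assert unfold_countdown(4) == [4, 3, 2, 1]
assert fold_sum(unfold_countdown(4)) == 10  # expand, then collapse
```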
Brian "Ponch" Rivera:How big can these how big can they go? 16 by 16, 8x8, or what are we at?
Ben Ford:Yeah, it can be huge.
Brian "Ponch" Rivera:Okay, okay.
Ben Ford:And so even with a super, super simple environment and very few rules, the three rules, I think, you end up getting really complex emergent behavior and cycles of emergent behavior. You get these really rich environments, and people have built versions of Conway's game, multicoloured ones. There's a guy, he was on Jim Rutt's show, who built a model economy, I think it was called Sugarland or something like that. It was basically a version of Conway's game where there was energy in the environment, and these societies were generated from very, very simple rules. Think of ants and how simple the algorithms are that an individual ant runs on. But the reason I brought that up is because that is an example of convolution: you take an area in a space, you look at the area around it, and based on the properties of that surrounding area, you calculate a new value for the space. That is what reinforcement learning is; that's what a machine learning algorithm does. That's what the training essentially boils down to: look at the area around you in the graph, the neural network. So that is destruction and creation, right? Look around you, take that information, and create a new state from it. And all we do in programming is pretty much like that as well. We dress it up with lots of fancy bullshit, but essentially that's what we do. When we read a database, we do that. When we send text into an LLM and get text back, we're doing that internally within the LLM. So the thing that's come out of studying Boyd in the context of category theory, for me, is that I'm convinced, though I don't have the maths for it yet, that there is a category-theoretic model of Boyd's OODA loop, which then applies to FEP, to the destruction and creation, sorry, the creative destruction economics paper, and a whole bunch of other stuff. There's also semantic spacetime, so promise theory. All of these things are made of small, composable, mathematically derived rules that interact together to generate these really complex and rich systems.
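Here is one step of Conway's Game of Life, the neighbourhood-style update Ben uses as his example of convolution: every cell's next state is computed purely from the cells around it, so the whole grid is destroyed and recreated each tick. The sparse set-of-live-cells representation is just one convenient choice for a short sketch.

```python
from itertools import product

def life_step(live):
    """One tick of Conway's Game of Life on a sparse set of live (row, col) cells."""
    # Count live neighbours for every cell adjacent to at least one live cell.
    counts = {}
    for (r, c) in live:
        for dr, dc in product((-1, 0, 1), repeat=2):
            if (dr, dc) != (0, 0):
                counts[(r + dr, c + dc)] = counts.get((r + dr, c + dc), 0) + 1
    # Standard rules: a dead cell with exactly 3 neighbours is born, a live
    # cell with 2 or 3 survives, everything else dies. Local destruction and
    # creation, every cycle.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# A 'blinker' oscillates between a horizontal and a vertical bar.
blinker = {(1, 0), (1, 1), (1, 2)}
assert life_step(blinker) == {(0, 1), (1, 1), (2, 1)}
```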
Brian "Ponch" Rivera:Hey Ben, uh appreciate your time today, man. We're gonna wrap it up there. Where can our listeners find your work if you uh have a website or uh I know you have a LinkedIn profile? Anything else you want to share?
Ben Ford:Yeah, LinkedIn profile is probably the best. My website's hopelessly out of date. I've been too busy writing AI shit to do my website.
Mark McGrath:LinkedIn's hopelessly out of date. Talk about destruction and creation, man. We could go on and on.
Ben Ford:Yeah, I mean, it is getting all very samey, isn't it? Very AI-sloppy. Anyway, I'm at Commando Dev on LinkedIn. I am planning to get back into a bit more regular posting soon, but I've been a bit quiet.
Mark McGrath:Come over to Substack. The water's just fine, and it's a completely different experience.
Ben Ford:Yeah.
Mark McGrath:Yeah. Look at that.
Brian "Ponch" Rivera:Nice. Well, and glad we finally got this uh this conversation uh recorded. Uh I know that we've had other conversations in the past. I think we did the Oodaloopers, that was like six years ago. Do you remember that?
Ben Ford:Oh, it wasn't that long ago, was it?
Brian "Ponch" Rivera:Four years ago. Damn, man. We and it's so much, I guess we've learned so much since then, too, right? I mean, it's actually if you look at that, where we started with that, that was actually pretty interesting. And what we went to went through today, night and day, right?
unknown:Yeah.
Ben Ford:Big time. I mean, that was a really good format, I think: multidisciplinary, people with different backgrounds. I know Dave was your first guest.
unknown:Yeah.
Ben Ford:Who else did you have on that was in the OODA Loopers?
Brian "Ponch" Rivera:Uh we've had uh Nigel, we've had uh who else was in there?
Ben Ford:There's a few others. Lou was on the other day, right?
Brian "Ponch" Rivera:Yeah, Lou's in there.
Mark McGrath:Lou, Chet's been on a lot.
Brian "Ponch" Rivera:Was Charlie Pratzman in there on the Oodaloopas? Yeah, Charlie Charlie was on one of the episodes. Yeah, and then his whole background with uh the you know, what do you call it? CSS or CCS in in Japan, it's pretty amazing. Out of uh a lot of posts in the last week on LinkedIn about that, actually, which is kind of kind of cool. But yeah, uh who else was in that? Do you remember? Was Sonia?
Ben Ford:No, I don't think Sonya was there. Kim was, that's right. Oh my god, my memory's terrible. It's late here. But the really nice thing was that people were all able to bring their individual backgrounds and perspectives, and I think we were able to fairly successfully map that to OODA in a way everyone could understand. So OODA was almost the common thread that everyone could map to, which I think is a real testament to Boyd and his work.
Brian "Ponch" Rivera:Concur, man. All right, hey, we'll keep you on for a second, but uh appreciate your time. Uh it's been I know it's late for you. Looks like you're uh go to sleep. All right, man.
Mark McGrath:All right.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
Shawn Ryan Show - Shawn Ryan
Huberman Lab - Scicomm Media
Acta Non Verba - Marcus Aurelius Anderson
No Bell - Sam Alaimo and Rob Huberty | ZeroEyes
Danica Patrick Pretty Intense Podcast - Danica Patrick
The Art of Manliness - The Art of Manliness
MAX Afterburner - Matthew 'Whiz' Buckley