No Way Out

Command and Control Systems with S. Anders Christensson

April 23, 2024 Mark McGrath and Brian "Ponch" Rivera Season 2 Episode 6


This conversation explores the concepts and principles of command and control, building on the work of John Boyd. The guest, S. Anders Christensson, discusses the dynamic OODA loop and the design hierarchy in command and control systems. He emphasizes the human-centered nature of command and control, highlighting the importance of understanding the purpose, design criteria, and functions of the system. The conversation also explores the challenges of modeling and simulating complex systems, the role of artificial intelligence in command and control, and the need for agility and continuous learning in adapting to changing environments.

Overall, the conversation emphasizes the value of Boyd's work as a foundation for further development and improvement in command and control practices. It also explores the concept of the possibility space and the role of affordances in shaping human behavior. Christensson recommends reading Gibson to gain a deeper understanding of the psychology behind the possibility space. Ponch and Mark express enthusiasm for the concept of affordances and its application in mind, time, and space mapping. The conversation concludes with gratitude and an invitation for the guest to return in the future.


S Anders Christensson on LinkedIn 

AGLX Confidence in Complexity short commercial 

Stay in the Loop. Don't have time to listen to the podcast? Want to make some snowmobiles? Subscribe to our weekly newsletter to receive deeper insights on current and past episodes.

Want to develop your organization’s capacity for free and independent action (Organic Success)? Learn more and follow us at:
https://www.aglx.com/
https://www.youtube.com/@AGLXConsulting
https://www.linkedin.com/company/aglx-consulting-llc/
https://www.linkedin.com/in/briandrivera
https://www.linkedin.com/in/markjmcgrath1
https://www.linkedin.com/in/stevemccrone


Recent podcasts where you’ll also find Mark and Ponch:

Acta Non Verba – with Marcus Aurelius Anderson
Eddy Network Podcast Ep 56 – with Ed Brenegar
The School of War Ep 84 – with Aaron MacLean
Spatial Web AI Podcast – with Denise Holt
OODAcast Ep 113 – with Bob Gourley
No Fallen Heroes – with Whiz Buckley
Salience...


Ponch Rivera:

Hey.

Mark McGrath:

Mark.

Ponch Rivera:

Great to see you. And today we have Anders Christensson on with us, who's from the Swedish Defense Forces and has spent a lot of time working on NATO command and control. So why are we here today? What are we going to talk about? Well, a lot of times, folks want to go beyond Boyd. They want to take apart Boyd. We want to make sure that it's not a stagnant thing. Right, John Boyd didn't give us something that's just, hey, this is a dogma, you've got to follow it. So we want guests to come on and kind of challenge Boyd, and we have that today with Anders. So, Anders, welcome to the show. How are you doing today?

Anders Christensson:

Great, absolutely. Glad to be with you, from Stockholm, Sweden.

Ponch Rivera:

Yeah, and thank you. Thank you so much for being here. So right before we went live today, we were talking about the... did you call it the DOODA loop? Is that correct?

Anders Christensson:

Right, the dynamic OODA loop, which essentially builds in that you have to be aware that your decisions will alter the outer system and the inner system. And outer and inner system comes from the terminology of Herbert Simon, a Nobel Prize winner. Often we call the outer system the operational area, and the inner system is our command and control system.

Anders Christensson:

Now, our professor, Berndt Brehmer, now deceased, made a theory based on Boyd but expanded it upwards. By that I mean he uses Jens Rasmussen's design logic, or design hierarchies, because, as Herbert Simon says, this is design logic when you are going to design a command and control system. So in the design hierarchy, Jens Rasmussen asks: okay, what's the purpose of command and control, what are the design criteria, what are the functions, what are the processes, and what is the form that is designed to accomplish the processes, which are executed in the functions, which in turn have to meet the design criteria and the purpose? So that means five steps going downwards as well as upwards. And Boyd emphasizes the processes, the human processes, in what Chet and Chuck have described, but essentially it's a process.

Anders Christensson:

He's not dwelling so much on the abstract functions. What is orientation, what is decision, in mathematical terms? That is what is lacking in what Boyd has. You know, Mica Endsley, who is a psychologist, has also focused on form, layer five, and how that is accomplished in processes. But you have to move up to set the criteria for what the OODA loop has to accomplish, and there you have to have observability of the external system, the outer system, and you have to meet steerability with the inner system, and that harmonizes with Ashby and Weick and so on. So if you want to calculate with it, you have to go from layer one, purpose, through layer two, the design criteria, to the functions. Then you can observe the processes that are going on in the staff.
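The five-layer design hierarchy Anders walks through here (purpose, design criteria, functions, processes, form) can be sketched as a simple data structure. The class, field names and example values below are illustrative only, not taken from the Brehmer/Christensson paper:

```python
# A rough sketch of the Rasmussen/Brehmer design hierarchy as Anders describes it.
# All names and example values are my illustration, not from the paper.
from dataclasses import dataclass, field

@dataclass
class C2Design:
    purpose: str                               # layer 1: what the C2 system is for
    design_criteria: list[str]                 # layer 2: e.g. observability, steerability
    functions: list[str]                       # layer 3: abstract functions
    processes: list[str]                       # layer 4: the human work Boyd's OODA emphasizes
    form: dict = field(default_factory=dict)   # layer 5: people, organization, tools, methods

    def trace_down(self):
        """Walk the hierarchy top-down: each layer frames the one below it."""
        return [self.purpose, self.design_criteria, self.functions,
                self.processes, self.form]

example = C2Design(
    purpose="Produce direction and the means to reach it",
    design_criteria=["observability of the outer system", "steerability of the inner system"],
    functions=["data collection", "orientation", "planning"],
    processes=["staff OODA cycles"],
    form={"people": "staff", "tools": "modeling & simulation", "methods": "doctrine"},
)
print(example.trace_down())
```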

Ponch Rivera:

And when you mean staff, you're talking about the organization, correct?

Anders Christensson:

Yes, the organization, at the operational level, from the strategic to the operational level and further down.

Mark McGrath:

And as you say in the paper that you wrote with Dr. Brehmer, this is all human-centered, this is all centered on the human, and the human is the one that makes command and control function.

Anders Christensson:

Absolutely. Humans click in from processes and form, because in form, layer five, you choose people, you organize them in a certain way, you add technology, you add methods and so on, all those artifacts that you actually can use, and they are combined in the processes.

Mark McGrath:

Yeah, well, I love this line from your paper: it should not be designed in such a way that the people will have to help the command and control system solve its problems. As if a lot of the problems end up coming from these systems themselves, and that becomes the function of the system, to solve its own problems, and it's not centered on...

Anders Christensson:

You know, people, ideas, things, in that order, as Boyd was saying. You have to have help from layers one, two, three to support the processes, so humans know: okay, we're doing well now in our command and control, or we're doing lousy. You have to have the ability to see when we're not coping with the right stuff.

Ponch Rivera:

And I want to bring up something on command and control. You know, it's kind of a four-letter word out here in the US right now, that command and control is bad in organizations. Our understanding of command and control is that all living organisms, all living systems, need some type of command and control. When you look at complexity theory, you need some type of command and control. Now, John Boyd talked about leadership and appreciation. Can you walk us through what you mean by command and control and a command and control system, so you can help our listeners ease into that?

Anders Christensson:

Okay. A command and control system consists of people, how they are organized, what type of tools they have, what training they have and whether they are working together, and that you can observe. You can always observe processes. You can see that, okay, this group is actually producing sufficient information that has to go to the next level in the line of the organization. You can actually see that.

Anders Christensson:

Now, if you want to make some sort of test or experiment, I urge you to visit Bloom's Taxonomy. Bloom's Taxonomy states what stages you have to go through if you're learning. So there you have it: you have to know the terms and the relationships, and you have to have the ability to do analysis and synthesis and then evaluation. These are things that you can observe the learner doing. You can actually see, okay, he has met the criteria.

Anders Christensson:

Because if we couldn't do that, the Bologna Process would be obsolete, because 46 countries are using Bloom's Taxonomy for learning. That's the common denominator between all the universities and educational systems in those 46 countries, and in the US. So we can use that to actually see whether the staff is actually learning. That's the process. Now, when it comes to level one, you direct and come up with a plan that we can execute. Plan is feedforward, control is feedback. So people are essentially focusing on the processes and level five, but they are omitting levels one, two, three, which you have to have. It's the mathematical criteria that demand that the processes are done right.
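A toy sketch of Anders' point that "plan is feedforward, control is feedback": the plan lays out a trajectory up front, and feedback control corrects the observed gap as the outer system pushes back. The numbers and the trajectory logic are assumptions for illustration, not his model:

```python
# Feedforward plan + feedback control, as a minimal illustration (assumed parameters).
import random

def run(desired=10.0, steps=10, gain=0.5):
    plan_step = desired / steps                 # feedforward: the plan, decided up front
    state = 0.0
    for t in range(steps):
        planned_state = plan_step * (t + 1)     # where the plan says we should be
        disturbance = random.uniform(-0.5, 0.5) # the outer system pushes back
        state += plan_step + disturbance        # executing the plan in a noisy world
        state += gain * (planned_state - state) # feedback: control closes the observed gap
        print(f"t={t:2d}  planned={planned_state:5.2f}  actual={state:5.2f}")

run()
```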

Ponch Rivera:

Anders, there's so much we can pull out of this conversation. Number one: when John Boyd was looking at the Toyota Production System, he saw that control was outside and bottom-up. It came from the outside, from the customer. When we think about that, do you concur with that?

Anders Christensson:

That line of thought? Absolutely, absolutely.

Ponch Rivera:

Yeah. And then you brought up many names and I just want to highlight those here. I may miss a couple. You brought up Ashby, you brought up Weick, you brought up Mica Endsley. I'm going to just touch on these real fast. Requisite variety, we've talked about that on the show, how critical that is and how it connects to John Boyd's OODA loop. With Mica's work, and you may be talking about different work than what I'm familiar with, but that's the different levels of situational awareness, which is very important, and we can make that connection to Boyd's work. And then there's Karl Weick, who we've talked about quite a bit on the show, and that is in the world of the sense-making stuff.

Ponch Rivera:

Sense-making, yeah. So we have sense-making with Karl Weick and we have sense-making with Dave Snowden, who is going to pound me for forgetting his name there. But Karl Weick's work, what I want to highlight here, is that you can observe things, and that's what he did. He went out and observed how these organizations operate at the edge of the envelope and, instead of creating a case-based approach, he looked at these things and said, hey, this is how this works in these high-risk environments, such as the flight deck of an aircraft carrier, nuclear power plants, submarines, aviation, things like that. So where I'm going with this is that everything I'm hearing from you is authentic, right? That's why this is an important conversation. I haven't heard anything that is inauthentic or that says Boyd's work is invalid. What I'm hearing from you is that it's good, but we need to go further. Boyd gave us a gift, right? He gave us something: take this and run with it and make it better, right?

Mark McGrath:

Do you agree with that? Yes, okay.

Ponch Rivera:

And that's how we... I think we agree on that 100%. Yes. Mark, do you have anything to add on what we heard so far?

Mark McGrath:

Well, I mean, that's what our podcast is dedicated to, the development and advancement of Boyd's theories. If you've read Frans Osinga's book Science, Strategy and War, the very last line is that this is intentionally left open for exactly what you're talking about, for us to continue to build on it and use it, not as a religious dogma. Boyd himself would want us to do exactly what we're doing, to talk about it, and that was one of the things that jumped off the page when I was reading your work.

Mark McGrath:

You're talking about something that's absolutely human-centered, where the systems and the technology support decision-making, and the humans, a staff, a commander, those are human beings that have cognitive biases, they have learning, they have other things. Just one thing I would ask as a clarification point. When you say DOODA, like dynamic observe, orient, decide, act, am I hearing you correctly that our process of OODA, our orientation that cycles through OODA, has to be dynamic and not static?

Anders Christensson:

Absolutely, because both the outer system and our inner system change. I mean, the troops will be tired when they have done something. They have to reload, they have to rest, and so the state variables in the inner system and the outer system are altering because of our decisions, because of our actions. Decisions have to go back and be reinforced, and so on.

Mark McGrath:

And what you allude to in the abstract of your paper, A Command Post for Complex Operations, that you wrote with Professor Brehmer, when you say that line, it should not be designed in such a way that the people will have to help the command and control system solve its problems, the dynamic OODA, if you keep your orientation dynamic, you're avoiding that, you're trying to avoid that. So what you're describing in that sentence about the system solving its own problems, that's the static orientation, that's the static OODA loop.

Anders Christensson:

Yeah, the static. When it comes to decision-making and the science of decision processes that Brehmer came through, originally we see a decision as a single decision, and then it takes a long time before the next decision has to be made. But in military operations this is much faster. So that's why he says, okay, we have to have a dynamic loop. That places demands on the staff to have a better notion of what is going on, both in the operational area as well as in our own troops, our inner system.

Ponch Rivera:

Anders, what are the dangers of knowing too much? And let me give you some context. We learned about the 1,000-mile screwdriver from the idea that a general who can watch a Predator feed can direct things from his office in Ramstein, Germany, or here in the US. How much visibility is too much?

Anders Christensson:

Too much is when it blurs or confuses, because all that information is not leading to your end state. It's your end state that defines what the strategic objectives are and how we are going to meet those strategic objectives. All other information that is not supporting this is superfluous and confusing.

Mark McGrath:

That's the focus and direction, right, that's the Schwerpunkt, as Boyd would say, absolutely.

Anders Christensson:

Thank you. So you need to sort it out: okay, we have this situation today in our outer system, in our operational area. Is this what we want to have? Can we describe the future desired situation that we strive for? How would you describe that? And you describe it as if you were there.

Ponch Rivera:

Right. So I'm going to throw a couple of things at you on the OE, the operational environment, the external environment, some things we're learning from complexity theory and some things we learned from effects-based operations. I want to expand on this because this is critical to what we highlight on the show. From complexity theory, when we look at the operating environment, the external environment, we look at where we are. We want to understand that through situational awareness: where are we now? And we look at the evolutionary potential of the present to modify the future, because we can't define a future high-definition destination, a SMART goal or anything like that, in a complex environment, because it's complex. So we can run multiple probes, go through observe and orient, probe, get that feedback from the environment and move in a direction rather than go to a destination, and this is something that we've been talking a lot about on the show. And I want to bring up effects. You may be familiar with effects-based operations from many years ago.

Anders Christensson:

Absolutely.

Ponch Rivera:

That was something that we were trained on as fighter pilots and military members. They learn how to start with the end state in mind and work backwards. In a lot of situations, dropping a GBU-12 or a GBU-16 or a GBU-28 on a target, that's kind of a complicated thing. It's a known thing. I know where the target is, most likely. I can work backwards, I can find out the time I need to launch from the carrier. I can look at all the threats in the environment and adapt to them. That's effects-based operations: I can put an effect on something that I know. The problem with this type of thinking is when you bring it back and say, hey, this is how it works in the military. It has bounded applicability. That type of thinking works in a complicated environment, and I'm using complicated from Dave Snowden's Cynefin framework.

Ponch Rivera:

It doesn't necessarily work in a complex environment. So this is a shift in thinking for leaders in the military, and all leaders: hey, we can't take these one-size-fits-all approaches to command and control and apply them to the complex environment. So, if I'm hearing you correctly, the way we look at the external environment and the decisions we make, those decisions are going to change the external environment, or we have to update our internal environment, which is our orientation, to always adapt. So I think we have alignment on that. Is that what you're talking about?

Anders Christensson:

As well, yes. Cynefin, if you look at that model of simple, complicated, complex and chaotic systems, we could add the atomic quantum system in the middle, but since that is for atoms, relativity and Schrödinger's equations, we leave that out. Now, if you look at the structure of a system, it's stable over time when it comes to simple systems and complicated systems. I mean, a submarine is a complicated system, but it's stable.

Anders Christensson:

If you go to the other side, you have quickly, dynamically altering systems and structures. The structure is altering. If you look at a flock, or a swarm or whatever, they are altering their structures forever. And when it comes to chaotic systems, they will smoothly, without knowing it, run over the edge and take another state that is completely different, which is very much sketched by Loren Cobb's cusp catastrophe theory, the butterfly phenomenon. But all those systems have to meet Ashby's requisite variety at all levels, and Yaneer Bar-Yam in the US has stated that this has to be met at each decision level. Which means that here we can start in very abstract terms, but you have to meet requisite variety. You go down one decision level, and that also has to meet requisite variety.

Anders Christensson:

Now, it's very difficult to calculate complicated systems versus complex systems. But, I mean, here I have to tell an anecdote: if we were to rewind the last 15 minutes, we would never end up in the same process flow. So reality does not take the same path over and over and over again, that is, it's not on the map. So we have to have a notational system that sufficiently reflects your outer system, and when you come into modeling and simulation, you have agent-based simulation, you have fuzzy sets, you have all those beautiful mathematical tools with which you can actually mimic some sort of possibility tunnel over time. So if you're keeping yourself within that capability or effect tunnel, you can actually simulate your own possibilities as well as your adversary's. I mean, he can't walk to the moon from here, he has to have a rocket, he has to have all sorts of beautiful things to move up there. So you're bounded by rules, right, and those rules, yes.
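A minimal illustration of the "possibility tunnel" idea: Monte Carlo rollouts of a bounded random walk, where the per-step limits stand in for the rules that bound an actor, and the envelope of the rollouts traces the states reachable over time. The rules and parameters here are hypothetical, not drawn from MNE4 or any real simulation package:

```python
# Possibility tunnel via Monte Carlo rollouts (illustrative sketch, assumed parameters).
import random

def rollout(steps, max_move=1.0, bound=10.0):
    """One possible trajectory: each step is limited by the actor's rules."""
    x, path = 0.0, [0.0]
    for _ in range(steps):
        x = max(-bound, min(bound, x + random.uniform(-max_move, max_move)))
        path.append(x)
    return path

def possibility_tunnel(n_runs=1000, steps=20):
    """Envelope (min, max) of reachable states at each time step."""
    runs = [rollout(steps) for _ in range(n_runs)]
    return [(min(r[t] for r in runs), max(r[t] for r in runs))
            for t in range(steps + 1)]

if __name__ == "__main__":
    for t, (lo, hi) in enumerate(possibility_tunnel()):
        print(f"t={t:2d}  reachable roughly in [{lo:+.2f}, {hi:+.2f}]")
```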

Ponch Rivera:

So these are things that are like counterfactuals. There are things that we can do and things we cannot do, and the enemy, our opponent, or however you want to look at it, it's the same thing. And that understanding, if I hear you correctly, that simulation, that modeling, if you can model that, which is not a landscape like we think about when we drive our cars, but a mind-time-space landscape, if you can model that, and I think this is really important because we've been doing a lot of work on this lately, this is where leaders need to spend their time, in understanding it.

Anders Christensson:

Okay, absolutely. I mean, the Joint Chiefs of Staff said some months ago that we are in deep trouble: if we're going to defend Taiwan, all the simulation packages are showing that we're losing, so how are we going to meet this? So essentially, what you're saying is we have to spend much more time on the possibility space of what we can do. Yeah.

Ponch Rivera:

So do you need a simulation or can this be done manually with just like a tabletop exercise?

Anders Christensson:

No, because of the complexity it's difficult for a human brain to calculate all the processes that individuals are carrying out and producing in their interactions. So you have to have it in some sort of simulation, computer-based simulation, and that was actually done during MNE4 in Virginia. It came from us, so that was where we tested it, and that was 2006.

Ponch Rivera:

So is there a danger with this, though? That's what I'm concerned about. What I understand about complex systems is we really can't model them, and we get into systems-of-systems thinking, and I know NPS, the Naval Postgraduate School, is looking at these, trying to take $180 million and simulate what's going on around us. If my understanding is correct, and I may be wrong on this, we can't really. The map is not the terrain. We cannot map an external system. Any effort to do that, and you go back to John Boyd, is going to change the system. What I'm hearing from you is there are people that believe we can map an external or outer system. Is that correct?

Anders Christensson:

I believe so. I believe so strongly, because you just need to mimic reality sufficiently, not capture everything. And of course you have to be critical. You have to live with your adversary all the time when you're producing a model, and then counter what they are motivated to do. So there you have the struggle, which has to be very vivid.

Ponch Rivera:

Right. So we talk about mapping, I hesitate to say using a map. You can call it a mental model, a map, you can call it a digital twin. This is important to understand the external environment. So we have to take the different perspectives of our human system to understand what's going on outside. The danger I see with that, and I'm going to use an analogy here: when you look at your phone to see what the weather is outside and it says it's sunny, then you walk outside and it's raining, which one's right? The reality, to me, is on the outside. The reality is always right.

Ponch Rivera:

Right. And that's a danger with this: we don't want people to get fixated on the simulation, we want them looking outside.

Anders Christensson:

No, no, no, it's a tool. Modeling, simulation, gaming and evolving them are tools.

Mark McGrath:

And you did use the word sufficient. It suffices, it doesn't have to be precise. As Boyd would draw out the OODA loop, he would say this is an illustrative abstraction, it's not a precise measure. So let me ask you this, because I see this when people explain and describe the OODA loop, or even with command and control systems: people are so hell-bent on matching exact precision to that reality, and they would go back to... you can't.

Mark McGrath:

Yeah, you can't. So they would go to Ponch's example and say, well, the weather app said this, but it's doing that outside, now I'm completely confused and I'm derailed. What you're saying, if I hear you correctly, and correct me if I'm wrong, is that models and maps and systems have to be sufficient enough to empower human cognition, and we learn as we go, we continue to adapt and develop. Is that correct?

Anders Christensson:

Absolutely, absolutely. I mean, if you're looking at the phone and it says it's sunny outside, and when you come out it's raining, one thing might be that the temperature is about right. So there you have one system state that you can rely on. That's just to allude to your example, don't hold me to it.

Ponch Rivera:

So to say. So, Anders, you've been working with a lot of NATO folks over the last many years, and I want to ask: what's preventing them from understanding this? And I'm not saying we understand this completely, we're all students of this, but talking about the discussion we've had up to this point, what's preventing leaders from understanding this type of thinking, this way of looking at the external environment?

Anders Christensson:

I would say it comes down to the words command and control. Command is stating your will, I want to see this done, and control runs from the bottom up, and they have to meet. NATO folks are either talking about command or they are talking about control, not about how the two have to meet each other. So the induction side, the intel guys, has to get better at describing things in notations so that you can actually simulate your operational environment, so the commanders can ask: what if I have this target or this goal, what happens then? And between these two, when they meet, they sort out, or they calculate, or they visualize the possibility space. So again, you have to be critical. There are many dimensions to the command side, but there are also many dimensions to the information, the induction process that the intel side has to do. And it's lucky that we can now see that artificial intelligence is actually extracting text into causal loop diagrams, where you can see reinforcing or balancing loops within the environment.
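A minimal sketch of the loop diagram idea mentioned here: signed links between variables, with each feedback loop classified as reinforcing or balancing by the product of its link signs. The variables and links are hypothetical examples, not output from any AI extraction tool:

```python
# Causal loop diagram as signed edges; loop polarity = product of link signs (illustrative).
links = {
    ("incidents", "alert level"): +1,
    ("alert level", "patrols"): +1,
    ("patrols", "incidents"): -1,   # more patrols suppress incidents
    ("success", "morale"): +1,
    ("morale", "success"): +1,
}

def classify_loop(loop):
    """loop: ordered list of variables forming a cycle, e.g. [A, B, C]."""
    sign = 1
    for a, b in zip(loop, loop[1:] + loop[:1]):
        sign *= links[(a, b)]
    return "reinforcing" if sign > 0 else "balancing"

print(classify_loop(["incidents", "alert level", "patrols"]))  # balancing
print(classify_loop(["success", "morale"]))                    # reinforcing
```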

Ponch Rivera:

And this goes back to an earlier point, that our command and control system is observable. We can see it, okay. So we talk about this a lot. We say that teamwork is observable, therefore I can measure it, but I've got to know what teamwork actually looks like. And the same may be true here: what are you looking for with artificial intelligence within a command and control system? What can be fleshed out of that system and be usable by AI? And I want to go back to the Cynefin framework too, which is, hey, that complex world where we can only understand things in retrospect may not be a good place, and I may be wrong on this, to put artificial intelligence. That may be a space where we want humans to operate, because humans, going back to flocking and things like that, the dynamic systems, humans are pretty good at detecting weak signals.

Anders Christensson:

Absolutely.

Ponch Rivera:

Okay, so is this the same approach that you're trying to take with this command and control?

Anders Christensson:

Again, modeling and simulation, AI and all those beautiful tools, they are artifacts, and you can always reason about or contradict what you are stating, and you have to have a vivid discussion in the staff so they can actually pin down and highlight those weak signals. Often the devil in the details is the weak signals, I would say.

Mark McGrath:

And at no point in your proposition do you ever violate that order that Boyd gives: people, ideas, things. It always goes in that order. Ideas and things serve people.

Anders Christensson:

Well, when it comes to the behavioral sciences, I'm not fighting those guys, because they are thinking right, because they have studied people and they have seen that good things come out of it. But you have to frame those guys with the purpose and the design criteria, which are observability and steerability or, as Celiax says, reachability. You have to have the ability to reach your state, otherwise you won't be able to do anything. You have to sufficiently reach your state.

Mark McGrath:

And you can't do that by looking inward alone. Boyd admonished in Organic Design for Command and Control that any command and control system that forces its adherents to look inward is going to disintegrate, it's going to become unglued. What you're suggesting is you have to look out both ways.

Anders Christensson:

Both ways, yeah. It's the Janus head, Janus looks in and out. If you take the classical OODA loop, observe, orient, decide and act, and make a control circuit of it, you can actually build a transfer function from your steering to the response. And if you look at that equation and try it, what happens if the denominator is zero? Then you can explain all sorts of things that happen in the outer world and in your system, because of that zero.
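For reference, the standard closed-loop transfer function from textbook control theory, which is presumably the kind of equation Anders has in mind here (this is the generic form, not necessarily his exact formulation):

```latex
% Standard negative-feedback closed-loop transfer function (textbook control
% theory, offered only as a reference point for the "denominator is zero" remark).
% G(s): forward path from steering to response; H(s): feedback/observation path.
\[
  T(s) \;=\; \frac{Y(s)}{R(s)} \;=\; \frac{G(s)}{1 + G(s)\,H(s)}
\]
% The closed-loop poles are the roots of 1 + G(s)H(s) = 0: where the denominator
% goes to zero, the response blows up and the loop sits at the edge of instability.
```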

Ponch Rivera:

Yeah, Anders, I'm hearing this from you. My expectation for this conversation was to look at Boyd and kind of rip it apart. That hasn't happened. What I'm hearing from you is the opposite, and that is that there is value in what John Boyd provided to us. Plenty of value.

Anders Christensson:

But on the process and form, levels four and five, and they are framed by levels one, two and three.

Ponch Rivera:

Okay, and I'll have to go through that model again. Level one, just remind the audience.

Anders Christensson:

Level one is purpose. You produce a direction and how to reach that direction; that's the purpose of command and control. Level two is the design criteria: you have to have observability of the external system, the outer system, sufficient state vectors, tensors or whatever variables that actually mimic what's going on out there. And, as Ashby says, the steering, the inner system, has to meet those criteria, those variables, so you can actually manipulate towards an end state. And level three is the functions, the abstract functions of data collection, presented for orientation, where you actually pick out which are the essential states for our mission.
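As background for "observability" and "steerability/reachability" as design criteria, the textbook Kalman rank conditions for a linear state-space system are given below. This is standard control theory added for context, not taken from the guest's paper:

```latex
% Kalman rank conditions for a linear system x' = Ax + Bu, y = Cx
% (standard control theory, included as background; requires amsmath).
\[
  \operatorname{rank}
  \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{\,n-1} \end{bmatrix} = n
  \;\;\text{(observable)},
  \qquad
  \operatorname{rank}
  \begin{bmatrix} B & AB & \cdots & A^{\,n-1}B \end{bmatrix} = n
  \;\;\text{(reachable / controllable)}.
\]
```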

Ponch Rivera:

Right.

Anders Christensson:

And then you go further and say okay, how should I do this? Yes, that's the planning.

Ponch Rivera:

Yep, that's a counterfactual or plan. Yes, Right.

Ponch Rivera:

Yeah, so what I'm hearing from you is that we're learning a lot of the same things from the neuroscience community. The free energy principle, we've talked about that on the show, and everything you walked through is the same thing we're hearing from them. In my opinion, and I may be off on this, the observation is that sensing capability: it's sending the new information, the information that is surprising, because our brains want to minimize surprise. We don't want all the information coming into us; we want the difference between what we expect and what's actually coming in. The counterfactuals, or that planning or that prediction, that's internal to the internal system, which is making a prediction: let's do this and run it through our system to see what happens before we act on it. So that's a simulation, right? And this might be related to Gary Klein's work too, with RPD. Absolutely.
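A toy sketch of the prediction-error idea Ponch describes here: a belief is updated only by the surprising difference between expectation and observation. This is a simplified illustration of predictive coding with assumed numbers, not a formal free-energy-principle model:

```python
# Belief update driven by prediction error ("surprise") -- illustrative only.
def update_belief(belief, observation, learning_rate=0.2):
    prediction_error = observation - belief   # the surprising difference
    return belief + learning_rate * prediction_error, prediction_error

belief = 0.0
for obs in [0.1, 0.0, 0.2, 3.0, 2.8]:         # the sudden jump is the weak signal worth attention
    belief, err = update_belief(belief, obs)
    print(f"obs={obs:+.1f}  error={err:+.2f}  new belief={belief:+.2f}")
```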

Mark McGrath:

Okay.

Ponch Rivera:

So, and I love this conversation, there are so many things we're pulling from, and we have to apologize to our listeners because there are so many references in this, and this is kind of like a synthesis of what we've been talking about. You hit on so many things today, from quantum physics to biology and anthropology.

Anders Christensson:

Yeah, this has been awesome. That's why the science of command and control is cybernetics. That's the host discipline, I would say: cybernetics.

Ponch Rivera:

And as we start to, and this is important for us, we're generalists, right, we're not specialists by any means, but we're bringing all this together on this podcast to say: hey, look, when you're talking about artificial intelligence, go look at how different modalities of prayer or meditation, or even psychedelic-assisted therapies, help you mitigate your past experiences, right, that trauma, that PTSD and all that. There's a connection there. Or take a look at the connection between those things and agility in an organization and resilience in an organization.

Anders Christensson:

And that's why this is so... Agility, for me, is that you actually have a quick turnaround in your dynamic decisions. You see weak signals in the outer system and you sort out and try: what are we going to meet this with? That's an agile system for me.

Ponch Rivera:

Yeah, so you have tempo and time there, or speed, right? How are we observing the external environment, and how fast can we respond to something we see in it? Is that kind of aligned with what you said about agility?

Ponch Rivera:

Right, right. I like that line of thinking. When we're probing in a complex environment, we're running multiple safe-to-fail tests, or probes if you will, to find out where we get repeatability so we can amplify it or dampen it, right? So we want to move faster, and that's what we're trying to do with improving an OODA loop in a complex domain.

Anders Christensson:

Absolutely.

Ponch Rivera:

Okay, yeah, this is great. I love this. And I'm sorry, go ahead.

Anders Christensson:

You have to be aware that you can outmaneuver, that you have a higher pace around all this than the adversary. And now we're getting this tool, AI, which is shrinking the time of modeling tremendously, so you can actually pin down faster what the sufficient reachability is and how you are going to meet this.

Ponch Rivera:

Here's another question: what happens to organizations and leaders that don't embrace this type of thinking? What's going to happen?

Anders Christensson:

I think they will be outmaneuvered. They are going to ask what's happening; they are not taking action. The guys that are actually in for this fast pace, they are dominating the environment. But the other guys that are not with it, they are going to constantly ask: what's going on, what's going on? Fill me in, fill me in.

Mark McGrath:

So when you say fast pace, it's not merely fast on its own, it's at a faster pace, deliberate action with continuous learning and adaptation. Absolutely, Absolutely, absolutely. Yeah, that's the perfect word for it.

Anders Christensson:

I mean, if you're faster than the adversary, you win.

Ponch Rivera:

With higher quality.

Mark McGrath:

Yes, Of course Of course.

Anders Christensson:

All right.

Ponch Rivera:

Anders, this has been an absolutely fascinating conversation. I just want to thank you for your time today. We look forward to having you back on the show. Again, I want to set the stage for everybody: I thought we were going to attack John Boyd here, but I think we all agree there's value in his work and we can continue it by taking a look around at what's happening with the advances in technology, lessons from biology and complexity theory, and making that OODA loop better.

Anders Christensson:

Absolutely.

Ponch Rivera:

Mark, do you have any last-minute thoughts for Anders today?

Mark McGrath:

No, I think that, as Anders has said multiple times, absolutely. I mean, this is absolutely why we have our business and this is absolutely why we have this podcast: building and developing and advancing these ideas, because it was left open intentionally, with a tremendous foundation, to go in different directions. You know, Ponch, you used the term generalist. This pulls from every angle, and I think the imperative thing for those that are listening is that the systems that you have to command and control your sports team, your company, whatever it is, if they're only looking inward, your competitors are out there, they're going to identify that quickly and they're going to be able to strike and outmaneuver you, as you say.

Ponch Rivera:

Well, we want to thank Anders for being on our show today. Thanks for taking time in your late afternoon to be with us. Again, we want to have you back on the show, maybe in the middle of next year, to look back and see where we are with AI, because I think that's going to be a bigger thing. But I do want to turn it over to you, Anders. Is there anything you want our listeners to know about your work, what you're doing and how they can contact you?

Anders Christensson:

Well, you have my email, and we're working on two things. The first one is the plan, which is the command stuff, and I am working to use AI to actually reach sufficient reachability so that these two can work together and define some sort of possibility space that you can maneuver in. If you want to read about the possibility space, I urge you to read Gibson. He is a fine guy who explains the psychology behind this.

Ponch Rivera:

And that's affordances, correct?

Anders Christensson:

Yes.

Ponch Rivera:

Right, yeah, and we're doing a lot of mind-time-space mapping with affordances, so I love it. This is great. All right, Anders, thank you so much for being on the show, for being on No Way Out. Again, we'll invite you back in the middle of next year, and I just want to thank you again.

Anders Christensson:

Absolutely, it has been a pleasure. Thank you.

Expanding and Challenging John Boyd's Theory
Boyd's Emphasis on Human Processes
The Human-Centered Nature of Command and Control
Command and Control System Design
Command and Control in Organizations
Bloom's Taxonomy and Learning in Command and Control
Command and Control in Complex Systems
Related Concepts: Requisite Variety, Sensemaking, and Effects-Based Operations
Boyd's Theories as a Foundation for Further Development
The Dangers of Over-Information and the Importance of Observability
Effects-Based Operations and Complexity Theory
Differentiating Simple, Complicated, and Complex Systems
The Limitations of Modeling Complex Systems
The Importance of Looking Outward in Command and Control
The Challenges of Understanding Command and Control
The Value of John Boyd's Work
The Importance of Agility and Continuous Learning in Command and Control
The Role of Cybernetics in Command and Control
Exploring the Possibility Space
Understanding Affordances