No Way Out

From Big Bang to Brain: How Entropy Shapes Safety and Systems with David Slater, PhD

Mark McGrath and Brian "Ponch" Rivera Season 3 Episode 120

Step into the fascinating world where neuroscience meets safety in this mind-expanding conversation with Professor David Slater. Beyond conventional safety thinking, we explore how our brains actually construct reality and what this means for creating truly resilient organizations.

The discussion begins with an unexpected parallel between Formula 1 pit crews and workplace safety. Professor Slater reveals how these highly choreographed teams regularly "cut corners" to achieve sub-two-second pit stops—highlighting the universal truth that humans adapt systems to meet demands, regardless of formal procedures. This adaptation, far from being problematic, forms the core of what makes systems work in reality versus theory.

What makes this episode particularly valuable is Slater's masterful connection between thermodynamics, entropy, and organizational safety. He guides us through a compelling framework where safety isn't simply the absence of accidents but the maintenance of quasi-equilibrium states in complex systems. The human brain serves as the ultimate control system in this equation, constantly working to predict and respond to environmental changes.

Perhaps most provocatively, Slater challenges the very notion of "human error," calling it "too facile" and "a get-out-of-jail card" organizations use to avoid addressing systemic issues. Instead, he offers a more nuanced understanding of how perception shapes decision-making, explaining why two people can experience the same situation completely differently. This insight alone transforms how we might approach incident investigations and safety culture development.

The conversation extends into practical territory, examining how organizations can foster the conditions for adaptation, psychological safety, and high performance. Rather than relying on checklists alone, Slater advocates for systems thinking that accommodates human variability while ensuring everyone understands how their role contributes to the larger whole.

Ready to challenge your assumptions about safety, perception, and human performance? This episode will leave you with practical insights and a deeper appreciation for how neuroscience informs safety.

NWO Intro with Boyd

March 25, 2025

Flow Learning Lab

Find us on X: @NoWayOutcast
Substack: The Whirl of ReOrientation

Want to develop your organization’s capacity for free and independent action (Organic Success)? Learn more and follow us at:
https://www.aglx.com/
https://www.youtube.com/@AGLXConsulting
https://www.linkedin.com/company/aglx-consulting-llc/
https://www.linkedin.com/in/briandrivera
https://www.linkedin.com/in/markjmcgrath1
https://www.linkedin.com/in/stevemccrone

Stay in the Loop. Don't have time to listen to the podcast? Want to make some snowmobiles? Subscribe to our weekly newsletter to receive deeper insights on current and past episodes.
Recent podcasts where you’ll also find Mark and Ponch:

The No Bell Podcast Episode 24
...

Brian "Ponch" Rivera:

All right, welcome to No Way Out. Professor David Slater, you and I have been in communication over the last three or four years, and recently I came across an article and kind of latched on to something like the free energy principle. The reason for that is I've read, you know, Dekker's work. I've read Hollnagel's work. We follow Conklin. In fact, the flow system that we created several years ago is built and, I guess, founded on safety culture, safety thinking, safety through complex adaptive systems. But what I saw in a lot of the writing about, in this case, John Boyd's Observe-Orient-Decide-Act (OODA) loop was a linear representation of Boyd's work, and what we see is things like your work on perception, how we understand the world and how that applies to safety, aligns a lot better to the free energy principle and the way we explain John Boyd's OODA loop. So anyway, welcome to the show. You're calling in from the UK. Is that correct?

David Slater:

That's correct, Hereford, yep.

Brian "Ponch" Rivera:

Okay, and I just glanced at a couple of your articles in the past, and one thing that struck me was I just saw the movie F1. I don't know if you saw that or heard about it. So you did a little bit of work on pit crews in the past. Is that correct?

David Slater:

Yeah, that's correct.

Brian "Ponch" Rivera:

Yeah, and I think a lot of folks are familiar with the safety videos and teamwork videos where they show the pit crews in the 1950s and how slow they were, like 30 seconds, 45 seconds, a minute long. And now we're getting to what, is it below three seconds, is that right?

David Slater:

Two is the minimum. If you go below two, it's illegal.

Brian "Ponch" Rivera:

How can you? Yeah, so you're into complex adaptive systems, and I think you follow things like the Cynefin framework. When we look at a pit crew and we see them work at that speed, at that rate, to me that's a complicated-domain thing. It's highly repeatable. I mean it's highly interdependent, don't get me wrong, but that's not necessarily what teamwork looks like. When we talk about teamwork and safety, well, maybe it is. Maybe you could talk about that. What does teamwork look like in a safety environment or an operational environment?

David Slater:

I think, to start with, safety is a word that's misused. You can be "safe", quote unquote, and in certain circumstances you fail safe, so it kind of gets complicated. So you just stand back and be objective and say, I'm looking at a system, I want to know how that behaves in all situations. Okay, so a pit crew is a system, and if you do it by the book, what they're supposed to do, you can build this model and you can see exactly how they choreograph it, basically, and it's just amazing. Everybody's got his choreographed thing that he does at a specific time, in a specific sequence, and it's got to work exactly like that to be right. And if you're doing everything like that, it's about 2.8 seconds. But these guys get below that, and the way they get below that is they cut corners.

David Slater:

Okay, so they're supposed to wait until the car stops before they put the jack in and before they put the guns on the wheels, and they don't. They basically follow the car in and bang. And they're supposed to have everything finished before they let the jack go and get away, and they don't. Basically the jack comes down almost before the guys have got their things off the wheel nuts. So that's how they do it: they cut corners, they adapt, et cetera, and that's how the real world works. People get used to what they're supposed to do, what they can get away with, what they need to do. And maybe, you know, if the wheel nut sticks, the choreography goes out the window and you've got to make it up as you go along. And I think that's when it gets interesting, because that's safety, basically. You have to have a safe release. What does that mean? And what it means is you've got to do everything, and you haven't got to kill anybody or run anybody over, and that's it. So it is very interesting.

Brian "Ponch" Rivera:

So, again, with the movie just out and released over the weekend, and having seen it, let's get down to some of your recent writings, your thinking on looking at neuroscience to inform safety and human factors. I want to get some of your thoughts on how you started making these connections and what it means for safety, and not only safety but for performance in an organization.

David Slater:

Right, where do we start? It's a big topic. Yes, can I start at the beginning? Absolutely. Okay, in the beginning there's a big bang. Right, that's a little too far back, I think. Well, no, just bear with me. Okay, so what happened? You had a big release of energy, okay, and over time this release of energy is decaying, and it's going to decay and decay and decay, and the thing that's driving it is entropy. So basically, what you start off with is 100% energy, and what you end up with is 100% entropy, and anything in between is what we call life and everything to do with it. So how do we get from bang to silence, if you like?

David Slater:

And you realize what happens is you go through a series of what they call quasi-equilibrium states. So it doesn't just come down, what's the word, it doesn't decrease linearly or nicely smoothly. It goes through a series of bumps, what they call quasi-equilibrium states. Okay, so it hangs, and then it goes and it goes and it goes, and these are natural. I mean, you go to atoms, you go to ions, you go to molecules, you go to compounds, and then eventually one of these compounds finally works out a way, or evolves a way, to stay in a quasi-equilibrium until it dies. So what you've got is a whole basis where you're going from energy into something which is a state, and that state is only stable, safe, as long as you basically keep the quasi-equilibrium right. And that's what it's all about.
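Slater's picture of decay through "bumps" rather than a smooth slide can be sketched as a toy simulation. This is purely illustrative, not physics: the plateau levels, dwell time and relaxation rate below are invented numbers. A quantity relaxes quickly toward each metastable level, lingers there, then drops to the next.

```python
# Toy sketch of "quasi-equilibrium states": a quantity decays toward a
# sequence of metastable plateaus instead of sliding down smoothly.
# All numbers here are illustrative assumptions, not measured physics.

def relax_through_plateaus(levels, dwell_steps, rate=0.5):
    """Relax from levels[0] toward each successive level, pausing at each."""
    x = levels[0]
    trajectory = [x]
    for target in levels[1:]:
        for _ in range(dwell_steps):
            x += rate * (target - x)   # exponential approach to the plateau
            trajectory.append(x)
    return trajectory

# "Hangs, then it goes and it goes": 100% energy stepping down in bumps.
traj = relax_through_plateaus(levels=[100.0, 60.0, 30.0, 10.0], dwell_steps=20)
```

Plotting `traj` would show the staircase shape he describes: each flat section is a quasi-equilibrium that holds until the system moves to the next one.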

David Slater:

These quasi-equilibrium states. Now, what happened was, in an organism, they have to develop a system to do this control, to keep them safe. And the original one was about two or three neurons, which basically could just do very basic things: respond to light, respond to touch, whatever. Anyway, that's built up over the years until you get the human brain, which is amazing, right, absolutely fantastic, and that does a whole lot. So there's a whole lot of these control loops, to quote John Boyd, he's absolutely right, a whole lot of control loops which are nested together, all working. And what you have to do is, now you're not controlling two neurons, you're controlling a whole bunch of neurons, right, and you've also got a library of experience with which you can learn how to control these things. So the brain puts this all together and gives you John Boyd's picture. So the brain orientates the organism as to what's going on, okay, and then it looks in its library to see if it's got a pattern that it can apply. And most animals have got this pattern experience, and they can basically survive.

David Slater:

But again, the human brain, and this again is John Boyd's classic breakthrough, right. You can't exist solely on known patterns or experienced patterns. You have to get into some situations where what's gone before isn't right, and if you try and extrapolate what the system is doing, it's a complex system, so behavior emerges. So you have to have this ability to change your mindset, to change your mental model. And this is John Boyd again: you've got to destruct before you construct. You've got to basically have the humility to say, I don't know everything, et cetera. So you're going from entropy to the control of the system, and the brain is the paramount controller. So all the neuroscience then informs how we're doing this control, what's affecting this control and how we do the models.

David Slater:

There's one last thing that makes a link, which is something called chaos theory. Okay, chaos theory says that there are a number of states, I think you call them attractors, which are stable states which can interact. Okay, so what you need to do, and what the organism does, is switch from one stable state to another. Right, and if it knows what that other one is, that's fine. But the human has the ability actually to adapt or invent his way into another attractor. So the whole question, then, is how you do this transition from chaos into stability, and that's the key question: how do you stay safe? How do you recognize? And the thing is, you have to adapt, you have to take the system as it is. You have to realize that reality is not something you read in a textbook. It's something you experience and something you've got to put together. So that's where I'm coming from.
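The attractor-switching idea can be illustrated with the textbook double-well system: two stable states separated by a barrier. This is a generic toy model, not something from Slater's own papers. Gradient descent settles into whichever basin the state starts in, and only a large enough perturbation, the "adapting into another attractor" move, shifts it to the other stable state.

```python
# Toy double-well sketch: two attractors (stable states) at x = -1 and x = +1.
# Following the downhill gradient of V(x) = (x^2 - 1)^2 settles the system
# into whichever basin it is in; a big enough kick moves it to the other one.

def settle(x, steps=200, lr=0.05):
    """Gradient descent on the double-well potential V(x) = (x^2 - 1)^2."""
    for _ in range(steps):
        x -= lr * 4 * x * (x * x - 1)   # dV/dx for V = (x^2 - 1)^2
    return x

x = settle(0.5)             # starts in the right basin, ends near +1
x_kicked = settle(x - 2.0)  # a large perturbation lands it in the left basin, near -1
```

Small perturbations die away (the state rolls back to the same attractor); the transition only happens when the kick clears the barrier, which is the distinction the conversation is drawing between routine correction and genuine adaptation.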

Brian "Ponch" Rivera:

Well, there's so much to unpack there. I'm going to start off with homeostasis and allostasis. I think one of the comments you wrote up there is that safety is about homeostasis. Could it also be about, I think it's allostasis, which is stability through change, that adaptiveness? Any thoughts on making the connection there between safety and homeostasis?

David Slater:

Well, homeostasis was something which was frowned upon, or maybe still is frowned upon in certain circumstances. They don't believe it, but it must exist. I mean, this is the way equilibrium works. I mean you've got a well, a potential well, you know. Basically you perturb it and see what happens, and basically you can perturb it too much. So you've got this thermostat which has got a control system. So it's all about what your control system is set at, and a very, very rudimentary view of safety is getting an organism, a human, to accept this control. And basically you learn that through history, through family, through society. You've got this expectation of what this control loop looks like, or what the parameters are. So that's where homeostasis comes in. But it's much more complicated than just homeostasis.
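The thermostat picture of homeostasis can be sketched as a minimal negative-feedback loop. The gains and limits here are invented for illustration: the loop absorbs a small sustained perturbation and settles near its setpoint, but once the disturbance exceeds the controller's authority, "perturb it too much" in Slater's phrase, the quasi-equilibrium is lost.

```python
# Minimal homeostasis sketch: a thermostat-style negative-feedback loop.
# Gain and correction limit are illustrative assumptions, not a real design.

def run_thermostat(setpoint, start, disturbance, gain=0.3, max_correction=2.0, steps=50):
    """Drive a state toward setpoint; the correction per step is capped,
    so a large enough sustained disturbance overwhelms the loop."""
    state = start
    history = [state]
    for _ in range(steps):
        error = setpoint - state
        correction = max(-max_correction, min(max_correction, gain * error))
        state += correction + disturbance   # feedback plus outside perturbation
        history.append(state)
    return history

ok = run_thermostat(setpoint=20.0, start=25.0, disturbance=-0.5)   # settles near 20
lost = run_thermostat(setpoint=20.0, start=25.0, disturbance=-5.0) # runs away: too big to correct
```

The first run holds a stable state close to the setpoint; the second never recovers, which is the "perturb it too much" failure mode: the control system's parameters, not the perturbation alone, decide whether the equilibrium survives.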

Brian "Ponch" Rivera:

Yeah, yeah, and that's what we learned, or are learning, from the neuroscience of the brain, or the biology of the brain, on how we're always adapting to new environments.

Brian "Ponch" Rivera:

I do want to talk about some other aspects of your work in the past, and I'm going to make a comment here, because I want to make sure people understand my perspective on safety and business management. I always look to the safety community to understand how to build high-performing teams. I look at the safety community to understand complexity more than anyone else. Unfortunately, I don't think people share my view on that, because they'd rather go look at the pseudoscience that's out there on how to create change in organizations. I'll argue with folks that it's very important to understand what's going on in the world of HOP (Human and Organizational Performance), Resilience Engineering, FRAM, which we may talk about here in a second, and some of the latest and greatest, like HFACS that we had from naval aviation, the Human Factors Analysis and Classification System, understanding those human factors. So I wonder how you could tie in the importance of things like how humans perceive reality to human factors, and if you've done any work in that space.

David Slater:

That's a very, very interesting angle, I mean.

David Slater:

I do agree. Can I put another angle to you? I did something on stories, okay.

David Slater:

So I learned this from a conference we were at, and there were scientists there and there were social scientists. We were talking about Safety-II, okay, and Safety-II is what you get at the end when you push them to really define exactly what you mean, exactly how you're going to achieve this, what you do in an organization to achieve Safety-II, et cetera. And you realize that it's not a thing, it's not a mechanism, it's an idea, it's a story, okay. And there's all kinds of stories. They've got a lot of truth to them, a lot of things that resonate with people. But at the end of the day, when you push these guys to what do they do, they say, well, you've got a theory, we want to put it in an organization, we want to see if it works, and if it works, we don't need to know why. We've just got this theory, et cetera. So you've got all these theories, these brands. You know, there's Safety-II, Safety Differently, a whole bunch of things. It's a way of encapsulating ideas. They don't have to be right, they just have to motivate an organization to do something better, or to give them a reason to change what they're doing. So it's lots of psychology, if you like, rather than actual science. So I try and say, well, safety is a story, right, and I think you've got to have the story, but you've also got to have the science to back it up.

Brian "Ponch" Rivera:

I love that connection.

Brian "Ponch" Rivera:

I've never heard of it as being a story, but I do agree that foundationally we have to, I hate to say trust the science, but we have to start with the science, the first principles, to be able to hang the methods or approaches on that.

Brian "Ponch" Rivera:

And I think that's where a lot of, I'll call it the agile community, I think that's where they went wrong: they didn't have underlying principles or the science as foundational to what they did. They had the pseudoscience. They went down another path, and I know this because I think you're familiar with the work of Karl Weick, hopefully I'm saying his last name correctly: High Reliability Theory, High Reliability Organizations. We know from some of the signatories of the Agile Manifesto that they wanted to go down that path of Karl Weick, that sense-making path, but they went down a pseudoscience path. And I presented at agile conferences where we talked about safety, High Reliability Theory, the connection, the importance of human factors, team science and things like that, and I was accused of creating pseudoscience by going there, because that's not what they were used to. So I agree, I appreciate that narrative approach to how do you get an oil and gas company to create safety, right, and I think that's there.

David Slater:

You capture the minds. It's motivation. I mean, I guess politics is the same in some areas, and maybe even religion. It's a way of getting a common theme which people can believe in and actually apply, and it doesn't matter what the details are, it's what the effect is that matters. Social behavior, whatever it is. You can generalize this, but it's a human thing. I think people like to belong, like to believe, like to be motivated, and I think if you've got the science and you can coat the science with a sugar story, then that's the best way to do it.

Brian "Ponch" Rivera:

So there's a lot of lessons from aviation that feed into what we call team science, even distributed leadership and complex adaptive systems. Often we use the analogy that a 747 is a complicated machine. Once you put people in it, it becomes a complex adaptive system, because we don't know what you're going to get once the human is in the system. I do want to go back to something about safety and complex adaptive systems. We often say that safety is an emergent property of a CAS, of a complex adaptive system. It's the interactions, not just between the humans, but the humans in the system, so the socio-technical system. Can you elaborate on that a little bit more?

David Slater:

What do you mean by safety in that situation? Safety is a state, if you like, and again it's an equilibrium, right? Okay? So it emerges, and all those different things have got to go the right way to achieve it: all the systems in the 747 are going to work, the controls are going to work, everything's going to interact.

David Slater:

And the thing about that is it's so complex that you're going to have situations which you haven't thought about. The situations are going to, well, as they say in FRAM, they're going to resonate. You're going to have something that happens at maintenance, they put the wrong screws in the cockpit window and you get a crash, or they forget to put a screw in the door at Boeing's subcontractor and it blows out. So you've got so many things going on. So how do you control that? And that's the problem with safety: how do you establish sufficient control?

David Slater:

It's not just being aware of what can fail. I think failure awareness is crucial, but you've also got to think wider than that. You've got to think: this is a system. How can that system, even if everything was working as it should have done, because it's so complicated, actually give you a situation which you hadn't thought about? So again, it's Boyd's OODA loop. You know, you've got to be aware of everything this system can do. And it is amazing to me that aviation is so safe. It is amazing. The record is fantastic.

Brian "Ponch" Rivera:

Well, a lot of folks attribute that safety to the black box thinking we got in the 70s, from learning from mishaps, from accidents in the 60s, 70s and 80s, right. And we discovered it wasn't necessarily mechanical, it was humans in the cockpit: there was a hierarchy in the cockpit, communication problems, planning issues, situational awareness. So, you know, we use a model in naval aviation called DAMCLAS: decision-making, assertiveness, mission analysis, communication, leadership, adaptability and situational awareness. That is human factors, crew resource management. And we've seen that type of thinking, and it's not perfect, don't get me wrong. But you run into cultural issues where you have natural hierarchies within the culture. You have psychological safety issues with, you know, a junior pilot coming into a crew where a senior pilot has 5,000 hours, 6,000 hours. But I want to talk about that aspect.

David Slater:

Can I just come back on that? I mean, I think that's very interesting. It's like in the movie: if you think, you're dead. So what happens is you build up a whole lot of reflexes, the ability to adapt on the fly, the ability to do the OODA thing, which is to adapt, and that's really what keeps aviation safe. And I think if we could have more of that in the workforce, the ability to, because a lot of these guys work around problems. A lot of the accidents are because people didn't do exactly what they were supposed to do. They actually got the job done, but they botched something in there, whatever it is. But you've got to have this awareness that people are people. They will do it, they'll do their best. If you don't give them ways to do it better, then you're asking for it.

Brian "Ponch" Rivera:

Let's unpack this some more. This is fascinating. So in fighter aviation and naval aviation we have checklists, right? Checklists are great for the clear domain, using Dave Snowden's work on the Cynefin framework.

David Slater:

But take Sully Sullenberger. I mean, how far down the checklist did he get?

Brian "Ponch" Rivera:

They went down the checklist because they were trying to figure out what was wrong and then they had to just abandon it and go off the systems they knew and what they knew.

David Slater:

Exactly.

Brian "Ponch" Rivera:

Yeah, but that's a great example of that: the Miracle on the Hudson. When they go through it, they're actually going through the whole dynamics of the Cynefin framework. And it was his prior experience with gliders, I think it is, is that right? And understanding the situation that, hey, there's no other choice but to put it down in the Hudson. But he also cites that it wasn't him, it was the whole crew, right, and the customers, the people in the back, that made it happen.

David Slater:

It's a great example, great example.

Brian "Ponch" Rivera:

But this goes back to your point. So in organizations they do not have the ability to simulate. Correct me where I'm wrong on this, because I'm going to be wrong on this. In aviation we run through simulations all the time. They would fail things. We would actually be our own worst enemy when they failed things on us, because we would take actions that would create a bigger problem even though we didn't have a major problem. Organizations don't have time to go through simulations like that. Maybe they do, you know, maybe there are some places that need to do that. So one of the sayings we have is, you know, in naval aviation we spent 99 percent of our time practicing for that one percent opportunity, that one thing that may happen in our lifetime, whereas in business you get 1% of your time to train and 99% of your time to actually do. So what are your thoughts on ensuring that organizations are applying the right approaches in that 1% of time that they get?

David Slater:

Where do we start? Okay, I think that's a very interesting point actually, because I think in the really good organizations it's the leadership. It's not what percentage of your time you spend, it's basically what you do in that 1%. And you know, if it's just management by walking around, well, okay. But I mean you've got to have buy-in. You've got to have, in fact, the crews on the ground doing their huddles, doing their comparisons, doing their checks. And if they've got enough time, they will do the stretching, you know: what if we do this, what if we do that?

David Slater:

And the classic example is logging, New Zealand loggers. They go out and they cut down trees and it's quite dangerous, so safety is relative, and they get a very high percentage of accidents. So the management basically gave them all the training, sent them all on courses, and nothing happened. The accident rates stayed the same. So they brought in this young girl from New Zealand.

David Slater:

She went in and, you know, they showed her: this is the team that's got the most accidents, that's where you want to start. Okay, no, no, she said, I want to see the team that has the least accidents. And she went and talked to them. What do you guys do? You know, what's the difference, et cetera. Then she started sharing that experience, you know, and once they started sharing, the groups went, okay, well, we do this, we do that, et cetera, and what about this? So if you can get that kind of adaptive, responsive responsibility down to the floor, rather than just following checklists, they have the ability to say, well, it's going to go wrong and we've got a plan B somewhere. I think that's the way you spend your 1%: you engender that kind of adaptability in your workforce.

Brian "Ponch" Rivera:

That's kind of like a leader creating the conditions for that to emerge right.

David Slater:

Exactly.

Brian "Ponch" Rivera:

It's a whole, like the leader-as-gardener concept that people often talk about. But let's go back to the systems here. You're trying to create a system, again, right, that enables the emergence of, I hate to call it safety again, but let's go back to what you called it earlier, stability. Okay, that's it, almost that homeostasis there, for that organization to become their full self at work, right. So they need that skin in the game. And I think, I mean, they're the aircraft carriers, they're the workers, if you like, who've got this kind of built-in.

David Slater:

What's the word? You've got to delegate responsibility down the chain. You've got to make sure it's at the sharp end, where it belongs.

Brian "Ponch" Rivera:

So we want people out there that are thinking, not just doing, right? But also give them the tools to think.

David Slater:

I mean, a lot of the accidents I've seen, and I've seen quite a lot, have been the result of the actual workers trying to do the right thing, but with not enough knowledge about what they're doing, and they take an action whose consequences they didn't realize. I mean Flixborough, Bhopal, all these things. They do things, and the shortcuts actually come home to roost. So you've got to give them that kind of awareness of what the whole system does and how it works.

Brian "Ponch" Rivera:

Clear, okay. So I want to repeat that. I'll use some of the words, or language, we use. People have to understand what the whole system does. What are they trying to do? And there's a famous story that, hey, a janitor at NASA, when asked what he does, said, I'm trying to put a man on the moon, right? This is many years ago. He understood his role in this bigger picture, right? He wasn't there to just do that thing. And I've seen this in oil and gas companies, where you ask people what they do, why are you here, and they cannot explain what their system does, who their primary customer is, right? So I think what you're saying is you want to elevate everybody's understanding of what they do in that system.

Brian "Ponch" Rivera:

The importance, yeah. That autonomy, that mastery, yeah. So you have to set that and then give them the right tools. That kind of leads me back to human factors, maybe when we go back to the Macondo well, and what we learned from IOGP 501, I think it was, or 502, where they identified, after those massive accidents 15 years ago now, that they should adopt lessons from aviation to build higher-performing teams. Any thoughts on where that's taken safety or performance of organizations since adopting those?

David Slater:

Yeah, again, I'm a bit of a cynic on these sorts of ideas. I mean, just give me a few more clues as to what you're looking for here.

Brian "Ponch" Rivera:

Oh no, I'm not looking for anything, I'm just trying to get back into it. Yep, like the fixation on, let me put it this way: when we work with organizations, and this is a common thread, a lot of folks dismiss the basic things that you and I talked about earlier, like aviation crew resource management, learning how to communicate, learning how to do a handoff, learning how to do effective planning, how to do an effective brief, how to do a really effective debrief or after action review. Many people dismiss that as child's play, that that's something we all know how to do. Well, if that's true, then how come you're not doing it?

David Slater:

Right? Yeah, well, but it's difficult for them to admit that they can't do it. So you've got to basically allow them to. You've got to allow humility, okay. I mean, I've got a strapline on LinkedIn which says we're all learning. I truly believe that we are all learning and we never stop learning. If we stop learning, then, you know, we're toast. And again, just coming back to John Boyd, you have to have the mental agility to realize that actually you need to have the options, you need to have the thinking, you need to think out of the box. Yeah, and so I go on.

Brian "Ponch" Rivera:

You mentioned you were a cynic about something. Is it about?

David Slater:

Yeah, culture, for example. What's a culture? It sounds great and everybody buys into it. It's a story. Maybe there are ways to implement a culture. I think it's a bit like safety. You can tell when culture is wrong, you can tell when you're not safe. But how do you know when you're safe, and how do you know how much culture is good enough? So it's an idea, but how do you actually measure it? How do you actually program it? How do you put it into the system? I mean, you can tell the leader he's got to do this, do that, you know, send him on a course. Okay, we've got great culture now. We've got good SMS, safety management systems. We've got good documentation. Does that mean we've got a good culture? Well, maybe not. It means we can comply, yeah.

Brian "Ponch" Rivera:

After the mishaps at sea in 2017, I worked on some teams in the US Navy that looked at culture, and one of the things we identified on the surveys was that those organizations that, I hate to say, failed, that had the mishaps, they had decent cultures based on their surveys. But you dig a little bit deeper and you'll find out that that's not necessarily true. And this goes back to the narrative pieces. What are the stories people are saying to their family members when they go home? What are they saying to their friends? What do they say around water coolers? And I've seen this inside aviation, where we have, I think it's HFACS, the human factors analysis and classification system, where you have these surveys that are, you know, on a scale of one to ten, how do you feel about, you know, blah, blah, blah, right? So we lose context in these things. I want to get your thoughts on measuring culture through surveys, how you feel about them or don't feel about them.

David Slater:

Well, I think it's great. I mean, it's all good information, okay. But the question is what you do about it. You know, it's okay, I've got the data. So what kind of model is that data fitted into? What kind of organization do you design? What kind of instructions go out to the top, middle, bottom management? How do we get the culture in? I mean, we can put slogans in, you can put goals and all kinds of things, but how do we know it works? And if it's purely surveys, I guess surveys are as good as anything, but I'm sorry, I just need a bit more. I need a bit more in terms of system. So, actually, what are we controlling here? How do we control it? So we need to be much more system-specific, you know, not generalistic. Okay, yeah.

Brian "Ponch" Rivera:

Yeah, so let's talk about that. We've had some pretty horrific aviation accidents in this past year: the 787, and we had the CRJ up in Washington, DC, earlier this year, I think it was. People like to pinpoint the failure on human error, and then they go, oh, it's human error. You know, it's just human error. Give me your thoughts on that.

David Slater:

You know, Hollnagel, I think, says basically there's no such thing as human error. Okay, there are things that a human does which are not normal, not, what's the word I want, not approved of. Well, I guess there are two kinds of human error, if you can class it as human error. One is deliberate and one is non-deliberate. So one is conscious and the other is unconscious. And from the way in which the brain works, you can see how these unconscious errors work.

David Slater:

Misapprehension, not trained enough; unconscious incompetence is so widespread. People go into a job and they're expected to turn up and do the job, but they haven't got that kind of experience, that kind of ability to adapt to a situation, because they haven't seen it. And you find that more and more companies now retire guys early, and they lose the corporate experience, they lose this kind of competence which is crucial. Sorry, but human error, I think, is too facile. This is, if you like, the kind of get-out-of-jail card, in fact the get-into-jail card, that insurance companies and organizations use.

David Slater:

Find somebody to blame. We've got somebody to blame; it's your insurance. When you get into inquiries, once you get into a legal situation...

David Slater:

You know, the science goes out the window. You've got basically people just trying to say it wasn't my guy, you know, it was him. So blame is something which is used as a kind of weapon, whereas science should be something that's used as a searchlight, a spotlight. You're trying to find out what it is. I don't care what happened, whether it was his company or his company, I just want to know what it is that went wrong. And you usually find that it's not one thing, it's a combination of things, and you usually find that the ways in which people understand things are perhaps different. So there are all kinds of factors involved. So you can't say this guy was to blame. Why was he to blame? Why did he do it? What situation put him in that spot, and what was he expected to do, and what else could he have done, and why didn't he do it? I mean, there are all kinds of things. Error is just too simple.

Brian "Ponch" Rivera:

Yeah, and I think this is a great segue into constructed reality and perception, right? So that individual's orientation: they may not have had the experience of somebody else who's had 20 years on the job. The system may have failed them in training them on a particular thing. They could have been distracted because of the meal they ate or the lack of sleep they got the night before. It could be attributed to working overtime. It could be attributed to so many things. But I want to zero in on this.

David Slater:

Yeah, let me just underline that. I mean, I think you're exactly right. The thing which I think is a better word than human error, okay: perception is reality to people. Think of your brain as a Schrodinger's cat in a box. Your brain doesn't know anything; it just gets sensory perceptions. It's your ears, your eyes, you know, and it's your brain that's processing all these signals into a picture, into a perception. And it's your perception. My color red is not exactly your color red, and that's just a trivial example.

David Slater:

So you have to look at everybody and say: you're an individual; what you're seeing is not necessarily what I think you're seeing, or what I understand is not necessarily what you understand. And once again, it's humility. Once again, you accept that, okay, I don't know everything, but nobody knows everything. We're all in it together and we're all different. And I think once you've got that kind of humility, you understand perception really is something we need to address. Do you really know what's going on? And you can check the perceptions. You need to check the actions, you need to check the competence, in ways which are non-threatening.

Brian "Ponch" Rivera:

So I want to build on this more. I recently did a small introductory workshop where I started with perception as a controlled hallucination. It's constructed top-down, inside-out, how we perceive reality. Going back to, we all see different color reds, if you will. That's from, I think, Anil Seth, and I can't remember the other folks that are studying this, even Karl Friston.

Brian "Ponch" Rivera:

But the idea here is, in fighter aviation, the moment you start to go through flight training, they take you through physiological training, right? They spin you around in a chair and they show you how your vestibular system lies to you, and all these other things. They show you the dangers of looking at stars when you're flying at night and things like that. They're trying to give you an experience. The same thing is true now: if you go back to perception, you know, excuse me, reality as constructed top-down, inside-out, it's different for everybody.

Brian "Ponch" Rivera:

The moment you show people that, and I've done this, they get scared. And that was my first reaction: like, oh, I scared everybody, because now that's all they're worried about, am I actually here? And that's not the point. The point is we all see something in a different light: an accident, whatever it may be, a meeting, a planning session, a debrief. That's important, right? We need the diversity of all of us to construct what happened so we can anticipate what's going to happen next. Thoughts?

David Slater:

Yeah, there's one more thing too, which is perception. It's what Slovic called the affect heuristic. Okay, so it's not just a brain in a box; it's got spectacles. So you've got rose-tinted spectacles and you've got sunglasses. So actually, your cultural filter, your upbringing, your perception, your parents, your social background, all these are filters to a perception of reality. So, again, it shouldn't scare you; it should actually make you more knowledgeable, I think, about your actual environment, if you realize that there are these things that might be getting in the way and how to get around them. And this is core to the orientation inside John Boyd's OODA loop; it's right there.

Brian "Ponch" Rivera:

It says exactly what you said: genetics, culture, and experience, right? Exactly. Plus the new information that's being brought in from the outside environment. Which, I think, you wrote about in some of your papers: the reticular activating system, that filtering system as well. Can you talk about that a little bit, and how we filter that reality?

David Slater:

That's really important. When you talk to neurodivergent people, they tell you, I mean, it is quite apparent, that their thresholds for different sensitivities are different to your sensitivities. And the emotional signals: we read a lot. I mean, I'm looking at you now on the screen, and I'm reading from your face what you're thinking about what we're talking about, et cetera. So all that's coming through to me, and I can basically respond to you in a way that picks those up. We're not saying anything, but there's a lot of information traveling.

David Slater:

Well, everybody's different. That's the big medical conundrum: they treat everybody as an average, and we're actually not, and everybody's mental processes are different. Their thresholds set this filter, if you like, for the unconscious coming through to the conscious, because most of the time you work on autopilot, and you only get a problem when your autopilot tells you something which your conscious says isn't true. So you've got a mismatch. And if you don't pick up these mismatches, then you don't pick up the signals, and I think that's what you've got going on. And once you realize, again, people are different, they've got different habits, et cetera, but they're also built, they evolve, differently. And again, you know, this is the way the human race is going to exist or survive if global warming goes on: the guys that can tolerate the heat are going to survive, et cetera. So, anyway, it's an understanding of perception which is really important, and not just perception of signals, but how your brain processes them.
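The autopilot-versus-conscious mismatch described here can be sketched as a toy threshold filter. This is purely illustrative, not from the episode: the function and numbers are invented. The "autopilot" predicts, the senses report, and only prediction errors above a person-specific threshold break through to conscious attention.

```python
def salient_mismatches(predicted, observed, threshold):
    """Return the indices where the sensed value differs from the
    autopilot's prediction by more than this person's threshold --
    the mismatches that break through to conscious attention."""
    return [i for i, (p, o) in enumerate(zip(predicted, observed))
            if abs(o - p) > threshold]

predicted = [10.0, 10.0, 10.0, 10.0]   # what the autopilot expects
observed  = [10.1,  9.9, 13.0, 10.2]   # what the senses actually report

# Different people, different thresholds: the same signals surface
# different events to consciousness.
print(salient_mismatches(predicted, observed, threshold=1.0))   # [2]
print(salient_mismatches(predicted, observed, threshold=0.05))  # [0, 1, 2, 3]
```

With a high threshold only the gross anomaly surfaces; with a low one, everything does, which is one crude way to picture why neurodivergent sensitivities yield different conscious experiences of the same signals.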

Brian "Ponch" Rivera:

Right. I've got a couple of questions about that. We've seen in some organizations that consultants come in and put people in a box and say, hey, you're this color and you're this thing, or you're purple over pink, and you're chocolate over vanilla, and you're, you know, a Dreamsicle, and I'm like, okay, I don't care. But what matters to me is if we all understand we see the world differently. And if we have, I hate to call them methods or tools, but, I guess you can call them practices extracted from other domains, such as effective planning and effective debriefing, then we could leverage that diversity right there, right? Rather than sit around and talk about, you know, I'm a Reese's Peanut Butter Cup and you're an Almond Joy. Okay, that's great.

David Slater:

Yeah, but go further. I mean, I don't like the boxes either, because somebody can be in this box for this bit and that box for another, et cetera. Never mind boxes: what matters is the combination, that actual thing which is a person, an individual, and the only way you're going to find out what he's all about is to talk to him, right? And you make your own mind up, and between the two of you, you actually establish some kind of rapport.

David Slater:

So the organization is going to work when your platoon gets together or when your bomber crew actually bonds. Okay, that's what matters, Never mind the boxes. The consultants are there because they've got a little checklist which tells them what they're going to do to this company and how they're going to explain it and how they're going to earn their fees.

Brian "Ponch" Rivera:

The thing is, does it work? It's the interactions, it's the quality of the interactions. And if we can help organizations have higher-quality interactions, that's what you ought to target. How do you do that? It's through the system.

David Slater:

And it starts at the top. Okay, I used to be a regulator so I could tell you how good that company was by five minutes with a CEO. You can really understand what's going on in that organization by how he behaves, and that behavior filters down to the whole organization. I guess if it's something as big as the military, then maybe not, but in most organizations the CEO is the guy that actually sets the tone.

Brian "Ponch" Rivera:

Great. I want to kind of wrap up with a few questions about artificial intelligence and the free energy principle, and why you're bringing that into safety. So I guess we start with the free energy principle. What have you found? What are you bringing from that to safety? Why do safety experts need to understand it?

David Slater:

Yeah, I think Friston's breakthrough was expressing entropy in terms of this variability. I'm not sure it's totally right, but it makes sense to me as a concept: the more variability you've got, the bigger the entropy, and therefore, if you're trying to reduce the variability through a predictor-corrector mechanism, the system is going to become more stable. You get into the trough, which is where you're trying to get to. So I think that free energy is a very good kind of lever to remind you that it is all about entropy; it's all about managing the situation. That's where it comes from: thermodynamics, you know, as in Gibbs free energy. I'm not sure it totally translates, but it's a nice thought, it's a nice story.
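The variability-reduction idea can be sketched numerically. This is an invented toy, not Friston's formulation: a state disturbed by random noise wanders ever further if left alone, while the same state under a simple predict-and-correct loop stays in a narrow "trough", so its variability (and, loosely, its entropy) is kept down. The dynamics, gain, and step count below are all arbitrary choices for illustration.

```python
import random

def simulate(steps=2000, gain=0.0, setpoint=0.0, seed=1):
    """A state disturbed by random noise each step; a corrector pulls it
    back toward the setpoint with the given gain (gain=0: free drift)."""
    rng = random.Random(seed)
    state, trace = 0.0, []
    for _ in range(steps):
        state += rng.gauss(0, 1)              # environmental disturbance
        state -= gain * (state - setpoint)    # predict the error, correct it
        trace.append(state)
    return trace

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

drift = simulate(gain=0.0)   # uncorrected: variability keeps growing
held  = simulate(gain=0.3)   # corrected: state stays in a narrow trough

print(variance(held) < variance(drift))  # the corrected system is far tighter
```

Even a modest gain turns an unbounded random walk into a stationary process hovering near the set point, which is the "quasi-equilibrium" picture of a controlled system.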

Brian "Ponch" Rivera:

Yeah, it's a good story, right. It may be wrong, just like some parts of cybernetics; we've kind of abandoned those, right? But you're building on the second law of thermodynamics, cybernetics, biology, anthropology, neuroscience. So we'll never have a perfect framework or method or theory, but it seems to be a good path or direction of travel: okay, there's something over here we need to look at. So, the connection to safety?

David Slater:

Go ahead. The thing is, the brain is such a fantastic, I mean, it's amazing. We haven't a clue how it really works, but I think anything that can give us a nudge or a foot up into it is great.

Brian "Ponch" Rivera:

So wisdom, I think, is to be applauded. Yeah, and I saw a few folks pick up on it at, I think it was, a FRAM conference last year, where they talked about the free energy principle and human factors and things like that. So it's starting to emerge. Do you expect to see more connections from the safety community to the neuroscience?

David Slater:

The safety community? I think, again, they like stories, and it's a nice story. I don't like stories; I like science, I like real facts, I like real things which happen. So I'm not that much in favor of that. And FRAM, actually, you know, people took it as a story. FRAM should be a system model. It shouldn't be about stories or free energy; it's basically: how does this system work, what are the functions of this system, how do they interact? And we can now basically calculate errors and mismatches of functions, so we can predict how systems are going to behave. So I think it is a complex modeling system. It's nothing to do with safety, nothing to do with human factors. It's actually a way of modeling a system. It's a meta-model; in other words, you go up a level of abstraction. You don't talk about the pump, you talk about: what does it do? It pumps water or it pumps fuel. The interactions, yeah. It'd be the interactions, right? Yeah, absolutely. Okay.
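That level of abstraction can be illustrated with a toy sketch. This is not FRAM itself, and the functions below are invented: each function is described only by what it needs and what it provides (not "the pump", but "it pumps fuel"), and a mismatch is any needed input that no upstream function supplies.

```python
# Functions at the meta-model level of abstraction: couplings, not components.
functions = {
    "supply_fuel": {"needs": set(),             "provides": {"fuel"}},
    "pump_fuel":   {"needs": {"fuel", "power"}, "provides": {"fuel_flow"}},
    "burn_fuel":   {"needs": {"fuel_flow"},     "provides": {"thrust"}},
}

def find_mismatches(funcs):
    """Report every needed input that no function in the model provides --
    a broken coupling between functions."""
    provided = set().union(*(f["provides"] for f in funcs.values()))
    return {name: f["needs"] - provided
            for name, f in funcs.items() if f["needs"] - provided}

# Nothing in this model provides "power", so pump_fuel cannot function:
print(find_mismatches(functions))  # {'pump_fuel': {'power'}}
```

The point of the abstraction is that the check works the same whether "pump_fuel" is a machine, a person, or a procedure; only the couplings matter.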

David Slater:

So what was the last thing? AI?

Brian "Ponch" Rivera:

Yeah. Okay, as we move forward: we're currently in these stochastic parrots, which are known as large language models; they're kind of closed systems. We're moving into what they're calling agentic AI, or even sentient systems. I want to hear your thoughts on some of the threats and opportunities facing organizations as they start to bring AI into their systems.

David Slater:

Yeah, yeah. I mean, these are still machines, right? They're non-intelligent. Intelligence is something which has got a way of processing information and generating insights which are, I can't think of the right word. But I use AI a lot. I use it in the same way you'd use a spanner. I can turn a nut by hand, but I can get a hell of a lot more leverage with a spanner. So it's basically putting a spanner to your thinking. You need to generate the questions, the prompts, et cetera. It's got the vocabulary, it's got the library, it's got the relationships, if you like, and you just basically use it as a tool. And as long as you use it as a tool, it's fine.

David Slater:

If you use it as a substitute for a function, it's not going to work, unless that function is very mechanical. But I think that's where we are. If you ask me what's going to happen in ten years, I have no idea, and we may well be in a situation where it's doing the thinking for us. But there you go. At the moment it's great, and I think it's a help, particularly in safety. You can do a lot of initial checking, particularly with failures: checking out systems, checking out mismatches, checking out things. But you can't do a safety audit; well, you can do an audit, but you can't do a safety study. You can't do the kind of imaginative what-if-this-happens, what-if-that-happens.

David Slater:

So I use it to generate a system model, to find out the basic errors that could possibly happen. And then you take it away and you analyze it and say, well, how does it really work? What happens if this guy hits a point? What happens if you haven't got enough of this? What happens if, whatever? So it's a start, it's a great start, but don't rely on it.

Brian "Ponch" Rivera:

I'm curious: what are you working on now? I know you just pushed out that FEP safety paper in the last month or so. What's the latest and greatest? What are you working on these days?

David Slater:

Well, I do quite a lot of work in medical healthcare, because they have a lot of problems there, because they've got teams which have got to work together. Again, this is where error comes in. I think the big problem with the health services at the moment, well, in the UK, for example, is that you can't make an error. If you make an error, you're going to get sued, et cetera. But this is a system where you have to take a decision under stress and you've got no time; no action is not a solution, and you're going to make mistakes, which are normal. So you have to have no-fault insurance.

David Slater:

Once you go to no-fault insurance, you can start to take a lot of the bureaucracy, the self-defense, the defensive mechanisms, out of the health service, and you can make it much more, if you like, logical, much more controllable. And again, we're using modeling systems to find out exactly how a whole trauma team works: what are the interactions, what's going to happen when, what can happen if this isn't right, et cetera. It's using a systems approach to try and understand exactly what's going on in teams. We're doing this again in aviation. In aviation you get a lot of data from, say, flight landings, runway incursions: how do they handle that? How does the pilot handle that? What are the critical factors? And again, if you've got a system model, you can start to do that. So that's where I am.

Brian "Ponch" Rivera:

But within this modeling, you're not necessarily looking for near misses; you're looking at the things that go right too, correct?

David Slater:

Anything. I mean, you want to know how that system behaves. What are the critical factors? What are the things that have to work? What can happen if they don't work? What can happen if you can make them work better?


David Slater:

So it's basically, you know, I'm not looking at safety, failure, whatever. I'm looking at performance, efficiency, thoroughness, reliability. It's all the same thing, okay?

Brian "Ponch" Rivera:

I'm wondering can you use these models to help improve the quality of learning in a debrief or an after action review? Is that something that's possible?

David Slater:

You could. The nearest we've got is legal work, but I mean, I think that's interesting.

Brian "Ponch" Rivera:

And I think, and this goes back to perception, when we look back at what happened, even if it's a ten-minute simulation that we run for a group of five or ten folks, they're going to have a different take on what just happened. And if you go back two weeks, a month, they're going to have a completely different view of what happened, and of course we misremember the past. So, in fighter aviation we have these things called TACTS ranges that can help us reconstruct the past. It may not be perfect, but it's pretty good. I'm just wondering if we can use these models to help the system, not just humans, the system, have a better understanding of what happens.

David Slater:

They can find the, you know, why things are happening. Absolutely. That's good. We can talk about that later.

Brian "Ponch" Rivera:

Oh, there's so much. Yeah, you know, I'd take another hour with you. It's amazing how, again, my view of how things are evolving is: as soon as we have a common thread that connects, like, the agile community, resilience, innovation, strategy, safety, we're going to reduce the energy that leaders and companies need to have these emergent properties work for them in a positive way. Right now, I think we have too many silos out there that are talking about the same things but missing the opportunity.

David Slater:

What did I say? In quality circles: steal shamelessly.

Brian "Ponch" Rivera:

Yeah.

David Slater:

I mean, there are a hell of a lot of good ideas out there. We should basically pool ideas and look at the pattern, look at the bigger picture. It's like FRAM: do the meta-modeling. What's the system? What's the system doing? I don't care about the program, or I don't care about which particular guy is doing this. How does the system react to this?

Brian "Ponch" Rivera:

Now, hey, I want to thank you for joining us on No Way Out. You know, we talked about John Boyd's OODA loop; you brought it up quite a bit. We talked about the free energy principle and safety. There is a huge connection between all these things, and I think many folks in our network will have these conversations. You know, I do a lot of work with folks in Europe and the UK on simulations; we have folks in New Zealand that are doing the same thing. But before we go, I want you to share with our listeners, if you don't mind, how they can find you, if there's a website where they can find your work, anything like that.

David Slater:

Okay, I guess the easiest way is I'm on LinkedIn and on ResearchGate, and my email address is dslater at cambrancisorg, so happy to contact or support wherever.

Brian "Ponch" Rivera:

All right, appreciate your time, Professor Slater. We'll keep you on here for a second and then we'll sign off in one moment. Thanks again.

David Slater:

Okay, you're welcome.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.


Shawn Ryan Show

Shawn Ryan

Huberman Lab

Scicomm Media

Acta Non Verba

Marcus Aurelius Anderson

No Bell

Sam Alaimo and Rob Huberty | ZeroEyes

The Art of Manliness

The Art of Manliness

MAX Afterburner

Matthew 'Whiz' Buckley