No Way Out

Reorienting Safety: Human and Organizational Performance (HOP) with Todd Conklin

Mark McGrath and Brian "Ponch" Rivera Episode 121


The gap between how work is imagined and how work actually happens sits at the heart of our most persistent safety challenges. In this illuminating conversation with Professor Todd Conklin, we explore how Human and Organizational Performance (HOP) has evolved from its origins in high-consequence industries to become a powerful framework for understanding and improving safety across sectors.

Conklin traces HOP's development as a response to the limitations of behavioral-based safety approaches, explaining why scared people don't take scary jobs and how high-risk environments require systems thinking rather than worker-focused interventions. The discussion reveals a fundamental shift: redefining safety not as the absence of harm but as a capacity organizations actively build.

Perhaps most striking is the transformation in how we view workers' roles. "The worker is not the problem," Conklin emphasizes. "The worker is the problem solver." This perspective upends traditional safety management by recognizing that expertise exists at every level of an organization, and that workers constantly adapt to hold together imperfect systems.


Pre-Accident Investigation Podcast

Todd Conklin on LinkedIn 

NWO Intro with Boyd

March 25, 2025

Flow Learning Lab

Find us on X. @NoWayOutcast
Substack: The Whirl of ReOrientation

Want to develop your organization’s capacity for free and independent action (Organic Success)? Learn more and follow us at:
https://www.aglx.com/
https://www.youtube.com/@AGLXConsulting
https://www.linkedin.com/company/aglx-consulting-llc/
https://www.linkedin.com/in/briandrivera
https://www.linkedin.com/in/markjmcgrath1
https://www.linkedin.com/in/stevemccrone

Stay in the Loop. Don't have time to listen to the podcast? Want to make some snowmobiles? Subscribe to our weekly newsletter to receive deeper insights on current and past episodes.
Recent podcasts where you’ll also find Mark and Ponch:

The No Bell Podcast Episode 24
...

Brian "Ponch" Rivera:

We love conversation. So, Professor Todd Conklin, Dr. Conklin, is with us today. I want to give you a quick sketch of how we kind of know each other. This is probably the first time we've met online, and Steve McCrone is our co-host today.

Todd Conklin:

Do I owe anyone money?

Brian "Ponch" Rivera:

Let's get that out early.

Todd Conklin:

That depends. No, the check's in the mail. I'm sure it's in the mail. It's on its way.

Brian "Ponch" Rivera:

It's on its way.

Brian "Ponch" Rivera:

It's on its way. I appreciate that. So 2017, I want to go back to then. There's a lot going on. Back then we had, I believe, 17 sailors lost their lives in two mishaps 2017.

Brian "Ponch" Rivera:

I came across you in 2018, when I came back into the active duty Navy. Our job was to understand safety culture, how to build better teams, how leaders get leading indicators, and your name came up, right. So this thing called HOP, Human and Organizational Performance, Safety-II, Safety Differently, resilience engineering, high reliability theory, all these things kept coming up. And the Naval Safety Command, then the Naval Safety Center, brought you in to talk to several military leaders and kind of paved the way for what the future of safety looks like. I come from a big safety background: fighter aviation, crew resource management, high consequence. You know, we've learned a lot over the years from failures, if you will. And that brought me to the world of complex adaptive systems, and that's how I met our co-host today, Steve McCrone. We met at a Cynefin retreat, up in Whistler actually, and I was trying to figure it out.

Todd Conklin:

Yeah, you guys go to good meetings.

Todd Conklin:

Yeah, kind of fun.

Steve McCrone:

Nice. So, Todd, my background is in ammunition storage, logistics and destruction, so again, another high-consequence environment. My first introduction to safety was basically: how do you cover your ass when idiots make mistakes? From there, we were fairly dissatisfied with that view and went on to do a lot of work, believe it or not, in helping organisations really understand safety as an emergent property of a complex system. This is when me and Brian got together with another couple of people around the world, Marion Kiely, Michael Cheveldave, Gary Wong, those guys, and started really looking at that in detail. And, frankly, we drew quite a lot on the early work that you did. Pre-accident investigation forms a big part of the way that we approached ammunition safety, for instance.

Todd Conklin:

Wow.

Steve McCrone:

What's the worst that can happen? That was our catchphrase.

Todd Conklin:

Yeah, that's true, and the answers were always really bad. Oh yeah.

Steve McCrone:

Yeah, not always, right? And when it wasn't, you could say, okay, let's lighten the load a little bit. But oftentimes it was, and that's when we really needed to pay attention.

Brian "Ponch" Rivera:

Yeah, so hey, welcome to the no Way Out, Professor Conklin, great to see you, just call me, todd, are you sure that will drive me crazy?

Todd Conklin:

All right, yeah, the only person I may call me Dr Conklin is my dentist.

Brian "Ponch" Rivera:

Why is that?

Todd Conklin:

I think it might be more about me than my dentist. I think that's a different podcast, but we can talk about it all you want to. I need the therapy.

Brian "Ponch" Rivera:

So, HOP. I came across HOP again inside oil and gas. A lot of folks are following this philosophy, right? It is a philosophy. We've recently had Dr. David Slater on the show. I think you might be familiar with his work. Fantastic conversation, looking at safety through another lens as well. But I kind of want to get your background. My first question is: who coined HOP? I have a couple of ideas of who it might be, but where did it come from?

Todd Conklin:

So I think that's a really interesting question. I worked for a really long time at Los Alamos National Laboratory, which was operated by the Department of Energy, and I worked a lot with high-risk, high-consequence stuff. I mean, we all kind of come from the same place, and maybe even the same stuff in many cases, right? And we started looking, gosh, it had to have been at least 25 years ago, so 27, 28 years ago, kind of a rough estimate.

Todd Conklin:

We started looking at it: there must be a better way to manage really high-risk safety, because the industrial safety rules and the paradigm were what we had. I don't know if we thought it was bad at that time; it's what we had, and behavioral-based safety was starting to get really sexy. The problem is, with high risk, high consequence, you learn really quickly that it's almost never a behavioral problem, because scared people don't take scary jobs. You know what I mean. So you don't really have a problem with people not understanding the risk; they're in the job because, in fact, if they didn't like risk, they wouldn't do what you guys did for a living. It's just not attractive to them, right? And so we started looking around, and we found, through an organization called INPO. Are you guys familiar with them at all?

Steve McCrone:

It stands for the Institute of Nuclear Power Operations.

Todd Conklin:

OK, and it was really the North American version of WANO. It was a consortium of all the nuclear power operators, and they had gone through, really, the tutelage of James Reason into this discussion around how humans interface with systems. And it wasn't really human factors, not in the traditional European definition or the aviation definition, which are somewhat similar. It was really more an understanding of how systems interface with work, something we now call the socio-technical interface: how systems and people come together to function and create success in a highly adaptive, highly dynamic environment with high consequence and really zero acceptability of failure. So this human performance stuff became a really interesting part of our discussion, and several of us within the Department of Energy community kind of jumped on it, and we soon developed some capacity to actually go to all of our facilities, all of which are super high risk. But again, I'm sort of preaching to the choir with you guys. And so we started having this discussion, but we couldn't call it HP, because HP was already used up by the health physics people, and you learn really quickly in the radiation community that the health physics people are a really good friend to have. You want them to like you, because if something happens, they're the ones that get you naked and scrub you with a big wire brush. So the more relationship you can build early, the higher the payoff is if you have to get crapped up, right?

Todd Conklin:

So we started calling it HPI, human performance improvement, and I don't want to speak for everybody, but I don't think any of us really gave a crap what we called it.

Todd Conklin:

But HPI was for sure a little wonky and a little weird, and as it started to diffuse, especially in manufacturing and especially in the energy sector, it became a really interesting discussion. Europe had been using a form of HOP, and then it was really picked up, probably first by General Electric in their appliance manufacturing facility, and they kind of ran with human and organizational performance. It's the craziest thing, you guys, because whatever it's called, and Ponch, you just kind of did it earlier when you ran through the litany of names, Safety-II and Safety Differently, I don't really care what it's called. I mean, it's not that important. But this one stuck, because I think it's kind of clever, and a lot of people really liked the fact that it includes the organization in the discussion, so it does take some of the focus off the worker. So that was the longest answer ever, but it's kind of a weird question, because I kind of think it's here to stay, for a while at least.

Steve McCrone:

I agree. I think a lot of people have been trying to carve out a space with the nomenclature that they use, and, frankly, they're all branches of the same tree, so you can see the same root in all of those approaches, regardless of what you call it. And, interestingly enough, people who do this work at the pointy end of the stick don't really care. I haven't found anybody especially invested in one name over another.

Brian "Ponch" Rivera:

So is there a connection back to Karl Weick's work in high reliability theory? Oh, yeah, yeah. Okay.

Todd Conklin:

So Karl is amazing, and Karl and Kathleen Sutcliffe were a team doing some interesting work. Karlene Roberts and Todd La Porte at Berkeley were doing interesting work, and it was all coming along at a simultaneous rate. Karlene was really looking at, well, for sure, the Navy, and she was doing carrier ride-alongs, and her kind of compelling question was: how do they get 18-year-old kids to care enough? How are they doing it? And man, her observations were amazing. My employer has always been the University of California, so I had kind of a nice little connection in with Karlene and Todd, and so I was able to talk to them. And then Karl and Kathleen were just doing amazing work, and really Karl sort of wrote the book. I mean, not sort of, he did, he did write the book, and he's the one that encapsulated everything. And it was very interesting, because high reliability depends kind of on where you're from, because that title's been in flux too. You know, it used to be called a highly reliable organization.

Brian "Ponch" Rivera:

I can't even say it, I can't even get it right. High reliability organizing, right? Is that it? Well, that's what they call it now.

Todd Conklin:

Okay, but they used to just use the word organization, and they changed it from a noun to a verb, which is, you know, kind of very Erik Hollnagel, to show it as an active part of it. And that's been a really interesting part as well, and it played in really deeply early on in the stuff we were doing at the Department of Energy, and it was great. The problem, if I can say that without being judgmental, is that they sort of came up with the characteristics of an HRO and then they kind of told companies, go become this, and what was missing was the pathway by which to become this. So they started with kind of the end, which is probably good, but they didn't really help people along the way. And one of the interesting things is that now the HRO discussion is coming back into pretty popular circulation, and I think the timing is much better now, because now people have an understanding.

Steve McCrone:

I will just tell you, and I'm pretty biased, but HRO has a difficult time working if the organization believes every event is preventable. So if they're really honed around this idea of zero events, then becoming an HRO is virtually impossible, right? But that was never part of the early discussion, I think. With the likes of HRO, what we see is a lot of these methods start out as a pretty authentic response to the complexity of the safety situation, and then people try and scale them by replicating them, by saying: if you do this, you get that, because we did this and we got that. And a lot of the work that we do at the front end is really founded on establishing context. What works in our context may not work in your context. So "context is king" becomes the phrase we use around safety.

Todd Conklin:

And you saw examples of exactly that. Like, remember when they started looking at health care really early on? One of the things I said is health care is going to adopt HRO, which is fine, that's a good target. And then you saw journal articles and pretty elaborate documentaries on, you know, why hospitals don't fly. Well, one reason is they're not planes. I mean, I'm no genius, but it's not a plane. And so it was exactly that: this works in one context, therefore it should work in another context. And context drives everything. You're exactly right. Everything lives in that context.

Brian "Ponch" Rivera:

So I'm not sure if you're familiar with the agile community or not, the whole agile movement in software development. Have you been tracking any of that over the last 20 years?

Todd Conklin:

A little bit. I find it super, super interesting, yeah, but not as much as I probably should be.

Brian "Ponch" Rivera:

I mean, this is interesting, I know the cats, I know the players.

Brian "Ponch" Rivera:

Yeah, the players. So we've had some of the players on here. We've worked with several of them in the past, and there's a pathway that they wanted to go down about 25 years ago, and that pathway is the one that you're on now. They wanted to go down the Karl Weick pathway, but they ended up going down a pseudoscience pathway, and this is coming from some of the signatories, people who signed the Agile Manifesto. They identified that the pseudoscience thing was just easier to do. It's just: let's just follow this.

Brian "Ponch" Rivera:

But I think what we're seeing today is a course correction back towards trusting the science, if you will: towards high reliability theory, towards resilience engineering, towards HOP. And when you're in an organization like a large oil and gas company, you see the HSE safety community doing HOP, over here you have them doing design thinking, over here agile, over there lean, and then you've got Six Sigma over there. At the end of the day, that's just way too much energy for an organization to spend. And I think what we're seeing, and this is my opinion and I want to get your view on it, is kind of a convergence on, I'll call it HOP, human and organizational performance, or flow thinking, or whatever you want to call it, for maximizing performance in an organization.

Todd Conklin:

OPEX, operational excellence. Yeah, operational excellence, same thing, right?

Brian "Ponch" Rivera:

So OPEX has a big connection to fighter aviation, a plan-execute-assess or plan-brief-execute-debrief model, which is fantastic. All teams should do that. How they do that matters, and that's something that we're learning from the team science world: the world of TeamSTEPPS from healthcare, the lessons we get from Gary Klein and others, Eduardo Salas, Scott Tannenbaum and, of course, Amy Edmondson. Looking at all this, I want to get your thoughts: are you seeing a convergence now, or what's happening in your space?

Todd Conklin:

So yeah, and I think it makes sense, right, that it would faction out and people would sort of own their own individual ideas, and they probably needed the time to do that, to develop their ideas and understand them deeply. And then people start seeing connections or, as the kids would say, collaborations, collabs, right? They can collab across ideas, and the community becomes stronger and better because of the depth that each of these individual ideas brings into the discussion, and we start to really synthesize a lot of these ideas across the board. The other thing that I think was really interesting is that it became really clear that, again at kind of the sharp end, there is not a big difference between lean and Six Sigma and black belt and safety and environment. It's all procedural use and adherence; it's all the same thing in the mind of the worker, and they're just trying to get work done in a rapidly changing, highly volatile, adaptive, demanding environment. And so the discussion is becoming much more meaningfully synthesized as people on the practical side of the house have to take this information and use it in a way that makes a real difference to the organization. That's probably normal. I mean, if you look at the study of how ideas diffuse, like Ev Rogers, the diffusion of innovations guy, right, he would tell you that you're going to have these really early adopters who are very sure that what they're saying is the truth and the only way, and they're going to fight really hard for it because they have to, because they're up against relatively significant forces that don't want to change. And as the early adopters make a space, then those first users and second users and the rest of the bell curve, if I may draw a bell curve, and every conversation has to have a bell curve in it at some point, become part of that discussion. That's actually, I think, one of the more exciting parts of what's happening.

Todd Conklin:

But I would caution us that I think what's missing is really the foundation, and it is so seductive to go to a place where the world's better, where we'll have fewer events, things will be brighter and smell better and there are free cookies. But we can't get there until we establish some really foundational items, and one of the things that I think we're seeing some shift on is really the definition of safety. The biggest challenge, and personally I would tell you there are two that I would talk about, but the one I've had to carry throughout my career, is the belief most people hold that safety is the absence of harm, or the absence of risk; you can sort of put any word you want in there, right. When in fact what we know is that really robust systems don't ever remove risk. They add control, right.

Todd Conklin:

And so when we started really shifting the conversation from safety as a deficit to safety as an additive, or, as Erik would say, from safety to safely, right, you go from noun to adverb, that actually helped organizations, because they needed to see safety really as a capacity, the same way you look at, like, fuel or food or, I don't know, paperclips, right? And I think that's one advantage that you guys in the Navy, especially Navy aviators, had pretty early: you saw safety not as an end point but as a capacity in doing operations. I mean, this is how we do operations. That shift has been a really important shift, and I'm probably naive, but I think that's kind of happening. I mean, I don't think it's happened, but I think it's happening.

Todd Conklin:

The other shift oh, go ahead.

Steve McCrone:

I was going to say it is happening. But in the real world there's a real tension between the capacity to stay safe, the capacity to adapt to change, which is a different type of capacity, and the need, particularly from a governance and management perspective, to prove that we're safe: the need to be compliant, the need to show. The ass covering, you might call it, that happens at the top of the organization can oftentimes dampen the ability of the operational side of the organization to remain adaptive and maintain that capacity to stay safe.

Todd Conklin:

And predictably so. I think you're again 100% right on that. I think that is really a data point telling us we haven't done a good job with the top side of the organization, and we do an especially bad job with the regulator in helping them redefine what it is we're trying to accomplish. The other big shift that I would toss in here, just because I think it needs to be tossed in pretty early in the conversation, is the changing role that workers or operators have. In traditional safety, the safety we all grew up with, the idea was that the operator was really the problem.

Todd Conklin:

So if you want to do high explosive ordnance safety better, you ask high explosive workers to be more safe, and all safety programs are directed at them. You tell them what to do or, oftentimes in our world, we tell people what not to do: for God's sake, never do this, right? And that has probably reached the end of its value. That's when we started understanding that the worker is not the problem. The worker is really the problem solver. Welcome the adaptive community, you guys, right? The agile community. The worker has to be agile enough to hold a relatively wonky, pretty poorly designed system together in order to perform some mission. And so, instead of seeing the worker as the problem to be fixed, when you start seeing workers as a solution, as experts, then instead of telling them what to do, you ask them what we need. And what's interesting is, recognizing that expertise lives at every level of the organization really changed the way we did problem identification and problem framing, which obviously is going to change how we do problem solving.

Steve McCrone:

So is this where we start to overlap with the idea of psychological safety and wellbeing? I mean, workers today have an increasing cognitive workload which can, if not managed, diminish their ability to adapt and solve problems as they emerge, and that has an impact on people's ability to stay safe, both from a mental and a physical perspective. So I see a lot of health and safety people that now have wellbeing tacked onto their job role, and it doesn't seem to me like many of them really have the capability in that space. Are you seeing the same thing?

Todd Conklin:

Yeah, well, and it makes sense, right? Nobody said that was going to be part of the game. So they're adapting to that, and the wellness part of it, I think, is becoming really important, and we're understanding more. But it's early days. I would be really careful in our discussion, and I'm wide open to having this discussion because I'm curious what you think, but I would really separate things out.

Todd Conklin:

So, unfortunately, I think psychological safety is kind of unfortunately named. Amy Edmondson's idea of psychological safety is not open doors, warm hugs, a healthy work environment. Her definition of psychological safety is how easy it is for you to disagree with power. How easy is it for you to give truth to power? Which does absolutely impact mental health. I mean, there's a lot of crossover here.

Todd Conklin:

But I think one of the things we're learning is that workers have knowledge that planners and leaders don't have, because they do the work and they understand the agile nature, the adaptive nature of the work. And as we start to understand that, as technology increases, the pathways to failure are increasing, and it's creating an environment that's much, much more complex, under Perrow's many-parts, tightly-coupled definition, and that it is having a dramatic impact on mental health, and we're seeing it in some industries more than in others. I can't have this conversation with you after the Air India crash of the 787 and not think about the overlap between complexity, operational pressure, professional discipline, expertise and mental health. Even though I know very little about the event, it's very difficult not to think this could be a player in many catastrophic failures, and we certainly know of some plane crashes, at least, that have happened because of mental health conditions.

Steve McCrone:

Yeah, there's a lot to go into there. I know we talk about managing in complexity, or adaptive safety. We talk about distributed decision-making and distributed cognition. One of them is about having the people close to the work make the decisions that affect the work because, as you say, they see the emergence, they have a contextual understanding that cannot be gained from a more linear, categorical approach to describing the work, and they also have a diverse range of opinion, a diverse range of thought, that can be harnessed in order to see areas where they might have weak signals of risk or weak signals of opportunity to improve. In a standard bureaucratic organisation, that's very difficult to achieve, and oftentimes you have this conflict between centralised control and distributed decision-making, which does have an effect on both mental wellbeing and physical safety. Absolutely. One of the challenges, I think, is to try and convince leaders that they're better off letting go of some of that control.

Todd Conklin:

That's tough, right? Yeah, because then you could have people running with scissors and cats would be dating dogs and the world goes crazy.

Steve McCrone:

Yeah. We have a phrase, and I heard it the other day and I'll repeat it, and hopefully the person who said it is listening to this: it doesn't matter what we seem to do, people still do stupid stuff.

Todd Conklin:

Yeah, so how do you handle that? I'm always interested in that.

Steve McCrone:

Yeah, and we think about it in terms of: what is the system, or the journey that person has taken through the system, or what is their understanding of the system, that provides the circumstances where it's okay, or it's necessary, to do that? And I come from New Zealand. If you want to hurt yourself at work, go be a farmer. Farmers will take partially mitigated or unmitigated risks all the time. We did a big SenseMaker project with them, gathering narrative around that, and really what we try and do is ask: what is it about the system that creates the affordance or the opportunity for that person to act in that way? And then how do we change the system so that that affordance or that opportunity changes? What we don't do is try and blame and change the person.

Todd Conklin:

Yeah, because it won't really happen. It's funny, because with that idea you'd run out of farmers pretty quick. Yeah, yeah.

Todd Conklin:

And then you run out of cheese and dairy goods, which is what you guys in New Zealand do, baby. So one of the interesting things is that idea: you can't fix stupid. That's a phrase we hear a lot, certainly in North America, and I hear it probably mostly in the energy sector. People say that all the time. So you think, okay, first of all, if you're hiring stupid, you suck at doing employment and someone else should make your decisions. But generally, when they say you can't fix stupid, what they're really saying is that it's much easier to blame the operator than the system, because it's much easier to create some kind of corrective action to fix the operator: terminate, three days without pay. I mean, we've got lots of elaborate ways to do that. Much easier than to go back and say, wow, the way we do logistics or the way we bring material to the site is fundamentally complex and needlessly stupid, and we're trying to control items that we don't need to control, and we're trying to control them in the wrong place. I see it all the time. And it really goes to a deeper question, which I'd be curious about for you guys, because of the world you live in: do workers have agency? And let me give you the background.

Todd Conklin:

Why I'm asking is that I spend an awful lot of my time with companies who start the discussion by saying: this worker made a bad choice.

Todd Conklin:

Right, these workers made a series of bad choices. I hear a lot about bad choices, and what I find when I look into it is that it's not so much that the worker made a bad choice; it's that the worker had a bad choice. They're choosing from a palette of relatively crappy choices. They're choosing the least crappy choice among crappy choices, the one that makes sense to them at the time to move forward, often based on past experiences, and we could go into the context of this forever. That idea, I think, is really pretty devastating, and that's part of the fundamental discussion we have to have, because we have to tell them workplaces are not perfect. They're really quite poorly held together, and systems, processes, practices, procedures are not where safety lives. Safety lives in the adaptive nature of doing the work, and so we actually count on these "stupid" people to create success in real time, in all kinds of conditions, and they mostly do it.

Brian "Ponch" Rivera:

So I come from the world of ass clowns and stupid people in fighter aviation, right? The way we were taught and the way we ran through simulations is that NATOPS, the guidance that we have, the checklists that we have, they're very, very important, but not as important as the gray matter between our ears. Right, we have to make solid decisions. You'll find yourself in a situation that nobody's ever been in before. You don't have that experience, but you just know, from the stories people tell, the lessons you learned, and you have to make a decision that may go against what the overall objective is and go:

Brian "Ponch" Rivera:

I have to do this because of X or Y. And I think in naval aviation, and I'm not going to compare, but I spent some time with the Air Force.

Brian "Ponch" Rivera:

Naval aviation gives you the opportunity to find novelty and be successful, because of the way they trained us, the way that we've been trained. I don't know if that's true today, but in the '90s, 2000s and 2010s, that's how things were.

Brian "Ponch" Rivera:

So, having that background in fighter aviation and coming into an organization: the reason I'm here is because I come from the world of maximum performance, right, flying air shows, that type of thing. The reason I'm here is because I want to help others get back to that level. I want you to enjoy that type of lifestyle, if you will, what it feels like to be max performing and high performing. And that's hard to do in organizations, right? It's very difficult, because I don't think people have the agency that they need to make those decisions. And I want to go back to this point you just made: they're afforded bad choices. So I'm going to make a choice between one bad and another bad, one evil and another evil, and that's what I'm given, and I have to make a decision.

Todd Conklin:

And it happens all the time. Workers are constantly thinking: I can follow the rules or I can do the work, but I can't do both, because in this case the rules won't allow me to do this work, but I have to do this work. And we measure the work. We don't really measure rule following. Rule following only becomes weaponized when they audit me and find I didn't follow the rule, and that's almost always based upon a negative outcome. If nothing fails and I didn't follow the rules, then let's call it creativity, or agility, the adaptive nature of work.

Steve McCrone:

Looking at choices: oftentimes, particularly when you're doing an investigation, you look at a choice independently of the system. So the incident becomes an isolated frame by which you start to opine on what caused it, what could have happened, what are the alternatives, who followed the rules, who didn't. And that's where critical path analysis, Five Whys, et cetera, kind of lead us into a very narrow view. Once that becomes the dominant narrative, then people are looking at those choices independent of the system that afforded them.

Steve McCrone:

And I think organisations, particularly when they see safety as an independent element, often get locked into a very categorical and narrow view. We think about safety in terms of an emergent property of a very complex system. You have to understand the system and its inherent complexity, and then you can start to understand how and why people make the choices they do. That takes a lot of time and effort. It's much easier to send someone in to do a Five Whys investigation on a very narrow, very specific set of circumstances and then jump to a massive conclusion, not helped, frankly, by behavioral psychology that makes big assumptions about people's mental state based on their observed behavior, when most of the time people are just trying to get through the day.

Todd Conklin:

Yeah.

Steve McCrone:

Yeah, it's true, not acting out some deep-rooted theory.

Todd Conklin:

I'd add, as a subtext to what you're saying, that they mostly are looking at counterfactuals, so those choices become more a discussion of what they didn't do than what they did. So then we say: had they followed the procedure here, this accident wouldn't have happened, okay. And then, because we so oversimplify events, it really looks like the problem is that the worker didn't follow the procedure, when in reality, when they do this work, there are always giant gaps where they didn't follow the procedure, because the gray matter between their ears tells them that this is a better, faster, more efficient, safer, more reliable way to do the work, and that's based upon their experience. All of these things are fundamentally true. But I'm open to discussion on this, because I'm not a smart person at all, and I would actually suggest to you that I think those are symptoms.

Todd Conklin:

I think the bigger problem goes back to the paradigm the organization has for what safety is. And if they see safety, as you were saying earlier quite eloquently and beautifully, as something they will attain, not as an emergent property of actually doing work, then of course they're going to try to define things and take it to where they think the biggest change is going to happen. And don't underestimate how emotionally satisfying it is to blame the bad guy. It just feels good to say, you know, this guy's an idiot.

Todd Conklin:

Yeah, I believe the phrase you used was ass clown. Ass clown.

Brian "Ponch" Rivera:

Yeah, radio ass clowns. So, in naval aviation we had the Human Factors Analysis and Classification System, HFACS.

Todd Conklin:

I know it well. I know it well, yes.

Brian "Ponch" Rivera:

So I want to get your thoughts on that. We were looking at it about four years ago to see if there's a way to evolve it away from, you know, blaming loss of situational awareness. You know: hey, that person lost SA. I just want to get your thoughts on where HFACS sits in the HOP world right now.

Todd Conklin:

It's probably too linear.

Brian "Ponch" Rivera:

Okay, that's what I'm thinking.

Todd Conklin:

So I mean, it just is. HFACS was created to really do one thing, and that was to unify the accounting process across major enterprise efforts like naval aviation. I think HFACS was a really important developmental step, because it really helped us create a vocabulary. It helped us understand; we're better because HFACS existed. The problem is that it's not the panacea, it's not the answer, and it's awfully linear. It goes right back to what we were saying earlier: we look at choices in isolation, and then we label them with pull-down menus, and it helps us trend things across the enterprise, even though, I would suggest, I don't really see a lot of value in that.

Brian "Ponch" Rivera:

But yeah, so let me ask you this: is saying someone lost situational awareness the same as saying that's human error?

Todd Conklin:

No. So, and this is going to make me not very popular: near as I can tell, the only three people that can even use the term situational awareness are you guys, fighter pilots, you know, high-performance pilots, scuba divers, and astronauts. I'll go with that, because you really can get to a point where you don't know where up is or down is. I mean, right?

Todd Conklin:

I'm not certain I've ever worked a case, or done an investigation, or done an event learning, where the worker lost situational awareness; they just didn't have their awareness on the situation they should have had it on. I just did a case where a guy walked through a hatch cover on a platform and fell to his death. And so I had to go to the seniors, like the big bosses, right, the senior team, because they were about to meet with the board of directors, and I said: you know, the only way you can walk through an open hatch to your death is to believe the hatch is closed. Right, I promise you. I mean, if he'd had any inkling at all that the hatch was open, he would not have stepped into the open hole to his death. I can pretty much guarantee it. That's not going to happen.

Todd Conklin:

Well, the big boss said: well, that's not true. He was not paying attention. That was sort of senior management talk for loss of situational awareness, right? Well, I promise you, he was aware of something, and he was so aware of what he was aware of that he didn't think to look. And lo and behold, when we went into the actual contextual learning, when we started talking about the context: he's carrying this other hatch cover.

Todd Conklin:

So the one thing he couldn't do was look down at his feet, right? I mean, the only way you can fall through that hatch is to believe there's not a hole, because if there's a hole, you're going to say: hey, I'll use this other hatch cover to cover this hole until I get across. That idea, I think, has become a really elegant, and I think people think it sounds pretty impressive, way to further blame workers. I'm not sure it helps us from an improvement standpoint. I mean, and you'd be better placed to answer that than I am, I don't know what to say, other than I don't know how to fix that.

Brian "Ponch" Rivera:

It's just an easy way to say that. To me, it's the easy way to blame it on a person.

Todd Conklin:

Oh it is, and it sounds science-y. I mean, it sounds like a famous person typed it.

Brian "Ponch" Rivera:

Yeah, and you don't have to look at the system. It gives you a reason not to look at the system and go: yeah, they did that. Their goggles were dirty, or whatever. Dust coming up, and all that. So the next question for you is surveys. We have this problem in the Navy. We have these Likert-scale surveys and all that. We look back at culture, and we look back at what happened, what's going on in your organization. I just want to get your thoughts on where surveys fit in the HOP world at the moment.

Todd Conklin:

So let me just say, we probably share this, so we're going to probably make a little magic moment right now. From my standpoint, in my institution, we're so survey-fatigued that they're absolutely horrific. They're a horrible intervention that leads to almost nothing, and we've trained people beautifully that it doesn't really matter. Even if they say you have to fill it out: I'll fill it out, but nothing's going to happen.

Todd Conklin:

So, surveys. One of my favorite scholars is Edgar Schein, who recently passed away, kind of the father of organizational culture. A super interesting man, and he got more interesting as he went into his 90s. I love a scholar in their 90s, because a lot of the barriers go down and they just tell you what they think. And he would tell you straight to your face, using lots of colorful language, that a survey is just a gigantic waste of time, and that if you want to know how people are thinking, go out and talk to the people. Have a conversation, put together focus groups, do event learning, do learning teams, ask the people. But understand that when you ask them what's wrong, the commitment you've made is that you have to do something with that information, and that's really difficult.

Todd Conklin:

So surveys are almost sterile. There's a level of abstraction between the person doing the survey and the people taking the survey. That level of abstraction goes away when you ask them, because now it's just you and them. So surveys become impersonal, and I'm not sure the data, I mean, you guys jump in, but I don't think it has any value at all.

Todd Conklin:

What you see, especially with the Likert scale, is a lot of 1-5 splits. So you get your data on your leadership, and you've got half of them saying you suck and half of them saying you're great. Tell me what you're going to do with that.
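To make that concrete, here is a minimal sketch in Python, with entirely made-up numbers, of why a mean score hides exactly the 1-5 split Conklin describes: a polarized team and a lukewarm team can produce the same average.

```python
# Hypothetical Likert data: a polarized team vs. a lukewarm team.
from statistics import mean
from collections import Counter

split = [1] * 50 + [5] * 50   # half say "you suck", half say "you're great"
consensus = [3] * 100         # everyone is genuinely lukewarm

for name, scores in [("split", split), ("consensus", consensus)]:
    # Both groups report a mean of 3; only the distribution shows the split.
    print(name, "mean:", mean(scores), "distribution:", dict(Counter(scores)))
```

The mean is identical in both cases; only the distribution, or better, an actual conversation, tells you which organization you have.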

Steve McCrone:

Yeah. So we want to see methods that hold those things in tension. You want to see where those natural tensions occur in the organization, and surveys are a really good way to create a categorical view that makes people feel certain, feel in control, feel like they have knowledge that they didn't have before. That emotional response, in my opinion, that feeling of certainty, that feeling of new knowledge, is really what keeps surveys alive. And yet everybody knows that it's a waste of time. Everybody knows that the result is skewed, because people are either gaming or gifting the result, depending on their view, and everybody knows nothing's going to get done. And yet the human desire for certainty kind of drives the continuation of that behaviour.

Steve McCrone:

There are better ways to do it, in terms of getting a really granular understanding of people. We call it a dispositional understanding of people, rather than a categorical understanding. We want to know how they feel in relation to the work, and how that changes over time or due to changing circumstances. It's fluid. Surveys give you a snapshot. It really is a waste of time.

Todd Conklin:

You kind of know that and they cost a lot of money.

Steve McCrone:

Oh yeah, it costs a lot of money. And if you're in a big government agency...

Todd Conklin:

They just survey the crap out of you. I mean, it's part of the fallacy, right?

Steve McCrone:

You value what you pay for. You don't pay for what you value. Yeah, exactly.

Brian "Ponch" Rivera:

I want to shift over to risk again. Oh, risk. So we had risk matrices back in the day. People love managing risk, right, and identifying all the potential things that could go wrong and coming up with a contingency plan against those things, if possible. I'd like to get your thoughts on that in the HOP world.

Todd Conklin:

Let me ask you a question.

Brian "Ponch" Rivera:

Sure.

Todd Conklin:

Why do you think? Why do you think we did risk matrices? Why do we have risk matrices? I can never say it. Risk matrices: there are a lot of consonants.

Brian "Ponch" Rivera:

Yeah, there are a lot of consonants.

Todd Conklin:

in those two words.

Brian "Ponch" Rivera:

Yes, there are. Why do I think?

Todd Conklin:

Don't overthink it, okay.

Brian "Ponch" Rivera:

You go ahead, then I want to.

Todd Conklin:

You want me to tell you the answer? Yeah, tell me the answer. Risk matrices were developed entirely to do one thing, and that's to manage resources. They have nothing at all to do with risk. What they have to do with is: where can I get money to manage a problem?

Todd Conklin:

Because if you're going to ask me to assess my own risk, you know, depending on how I feel and where we are, depending on past practice, depending on current stability, I can make it high or low. I can just tell you, I've done a lot of investigations, and I've done a lot of fatal investigations, a lot of them, and I will tell you that a lot of people die falling down stairs, right? Well, that's not going to be on the risk matrix, because nobody's going to give you resources for that. So we're going to make signs that say "use the handrail," but that's actually pretty high risk, right? So I've always been a big believer that that's kind of the wrong direction, and in fact I would suggest, and I'm super interested in what you guys think, I actually think risk is not the problem.

Todd Conklin:

Risk is dynamic. It's difficult to assess over time. I can take a snapshot, but that snapshot will be inaccurate the next second. And I really believe risk is super normal and somewhat attractive to people, right? So risk is risk. I actually think the risk part is a waste of time. Instead of doing a risk matrix, what I would do is go out and ask where our control is low. So, instead of focusing on the risk, focus on the safeguards, or the barriers, or whatever word people use, depending on their industry.
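As a rough illustration of that shift, here is a hypothetical sketch, not anything Conklin prescribes, contrasting a risk-matrix score built from two subjective guesses with a simple listing of where controls are missing. The tasks and safeguard names are invented for the example.

```python
# Hypothetical tasks: safeguards actually present vs. safeguards wanted.
tasks = [
    ("open hatch work", {"barricade"}, {"barricade", "temporary cover", "spotter"}),
    ("stair descent", set(), {"handrail", "adequate lighting"}),
]

def matrix_score(likelihood: int, consequence: int) -> int:
    # Risk-matrix style: multiply two subjective 1-5 guesses into one number.
    return likelihood * consequence

# Control-gap style: for each task, show concretely which safeguards are absent.
for task, present, wanted in tasks:
    missing = sorted(wanted - present)
    print(f"{task}: missing controls -> {missing}")
```

The matrix score is only as good as the guesses fed into it; the control-gap view points directly at something you can go fix.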

Steve McCrone:

Sorry, I'll just say: I'm going to go and teach a class on adaptive strategy, so I'll be gone when I'm finished talking.

Todd Conklin:

We're going to talk about you, man. We're going to talk about you a bunch. You know that guy? Such an idiot.

Steve McCrone:

If only he knew. So here's the thing on risk. I think I agree with everything you've said. Certainly, when I see a risk matrix, what I see is a categorical statement that is essentially laundering your assumptions into fact. And then people treat that matrix as if it's real. What we would typically do is go back and say: how much do you really know about the likelihood or the potential consequence of this? And, to your exact point, say: forget about trying to guess those things. Let's go back and ask, when that occurs, how do we stop it killing people? I'm using your words here, Todd. Always ask what controls we can put in place so that we can either mitigate or accept that event, or that risk, or that potential outcome.

Todd Conklin:

And the automotive industry taught us, I couldn't agree more, the automotive industry taught us that your car is designed around the notion that every time you drive, there's a 100% chance you're going to crash. You probably won't, thank God, but 100%. And what they're looking at is what controls need to be in place to make that crash as elegant or, as David Woods would say, gracefully extensible, which is a great set of words. Thank you, my friend. It was a pleasure meeting you.

Steve McCrone:

Signing out. Cheers, Todd. Carry on, Brian. Thanks, mate.

Brian "Ponch" Rivera:

No, I appreciate that, Steve. So I want to continue with capacity, going back to risk. One of the things about planning, or a plan, is that a lot of people look at a plan as something they need to follow. We look at planning as something that is continuous, and we try to help them understand the components of a good planning approach, or mental model, or process, that allows them to scan the environment, to make sense of the environment as they go through the day. So I just want to get your view: am I using capacity the correct way there, when we're trying to build that capacity for that team or that organization to make sense of the external environment so they can act?

Todd Conklin:

Yeah, I think so. Okay. I mean, unless you can think of a better word than capacity, and I'd love to have it. But capacity: we drifted to the word capacity because control is probably not right, and yet it is right. Barrier is not right, but it is right. You know, all the things we do, we're creating capacity, we're creating margin. Margin, that's less elegant, but it's kind of the same idea.

Brian "Ponch" Rivera:

Is margin the same as optionality?

Todd Conklin:

Yeah, optionality. You know what they call it in medicine? Rescuability. No, I didn't know that. Right, which is sort of a quasi-generic term to encompass all that stuff, because it's all kind of important and it's really context dependent. So it depends on what you do. The interesting thing about that capacity part is that planning is a huge part of that capacity. But planning in and of itself, as a discrete activity, is not terribly valuable. In fact, it's probably not helpful at all. But as you guys realized and really helped develop, you know, remarkable tools like the OODA loop and the things you guys work with, planning as a living practice is really a great thing. And maybe we should not call it planning, because planning's kind of already got ownership properties.

Todd Conklin:

What we're constantly doing is really detecting and correcting. Detecting and adjusting, maybe, is an even better phrase. So you're understanding the risk in real time, and it's constantly changing, and you're adjusting your performance to match the risk you've just identified, and then you do it again. That's a real-time process people do. And again, the example I think about all the time is: how do you know you're safe enough while you're driving? Well, the quick answer is it's really easy to tell if you were safe enough: if you get there and you didn't wreck, you were safe enough. Or if you wrecked, you weren't safe enough. But while you're driving, you don't really know. But that's okay, because driving is not one activity. Driving is thousands of micro tasks, right? And it must be what it's like to fly a plane. I mean, I don't fly a plane, but it seems like it must be: thousands of micro tasks, where you're constantly assessing conditions and adjusting controls.
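A toy loop makes the detect-and-adjust idea concrete. This is a minimal sketch; the signal, the thresholds, and the step sizes are all hypothetical stand-ins for whatever margin applies in a real context.

```python
# Toy "detect and adjust" loop: sense a condition, compare it to a margin,
# adjust performance, and repeat, thousands of times per trip.
import random

def sense_gap() -> float:
    # Stand-in for any real-time signal, e.g. metres to the car ahead.
    return random.uniform(5.0, 60.0)

speed = 100.0                           # km/h
for _ in range(1000):                   # "thousands of micro tasks"
    gap = sense_gap()                   # detect: assess current conditions
    if gap < 20.0:
        speed = max(speed - 5.0, 0.0)   # adjust: slow down, restore margin
    elif gap > 40.0:
        speed = min(speed + 5.0, 100.0) # margin recovered: resume pace
```

The point is the shape, a continuous loop rather than a one-time plan, not the particular numbers.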

Brian "Ponch" Rivera:

Not everybody does that, though, and they still arrive at their location safely. A lot of times you see people doing stupid things, like texting and driving. There's something you could identify and say: hey, that's probably not something you ought to be doing. But the outcome was you arrived at a place; the decisions you made to get there may not have been very good, right? And so then you ask: are you good, or are you lucky?

Todd Conklin:

And that's a really good question to ask, because either way you answer it, it's pretty valuable. If you're good, then you reinforce what's good, and if you're lucky, then you've identified a risk that you didn't have a control for.

Brian "Ponch" Rivera:

So separating decisions from outcomes is important. And I want to go to learning teams and learning here. Going back to fighter aviation: plan-brief-execute-debrief for the team life cycle, or plan-execute-assess. Assessment is critical, right? Learning is the most important aspect of a team life cycle. So, when it comes to facilitating learning teams and how they do their work, is there anything you can share with us on how you do it? Or maybe we can have a conversation on that.

Todd Conklin:

So my first advice is: don't overcomplicate them and don't overthink them. Okay, the team's already learning in real time. What you want to do is swarm together a group of people, and you want to ask them to do two things: identify the problem, gap, suggest some solutions. Now, the reason I said "gap" between "identify the problem" and "suggest a solution" is because you want to make the problem identification phase very discrete from the solution generation phase, because the enemy of the question is always the answer. Interesting. All right, let me say that again. Yeah, that's great.

Todd Conklin:

It's kind of like: what is the sound of one hand clapping, tell me, Grasshopper? So the answer always stops the question, right? Because once you have the answer, you don't need the question. You stop asking the question. So you tell them: this first meeting, first half, whatever, no solutions. None. Don't even bring them up. We're not even going to write them down; we don't even care. Tell us what the problem is, and dig into the problem as deeply as you can. And then, once you've identified that problem gap, take that data and come back with a solution.

Brian "Ponch" Rivera:

I love where this is going. So we do a lot of work in effective debriefing, and you've got to pull things apart from multiple perspectives. Too often people come up with a solution to something rather than identifying what happened. We can run a team simulation that lasts 10 minutes and just ask them what happened in the last 10 minutes, and most people can't tell you. You have to reconstruct it, and you have to have data to help build that up. And going back to situational awareness, that's actually one way to build SA: understand what happened so we can understand what's happening now. So what we found in the world of debriefing is that in the agile community, they do retrospectives, debriefing, but they focus on future things. What are we going to do better in the future? Well, how do you know you need to do that better in the future?

Todd Conklin:

You have to say that again. They call it retrospective debriefing, and they focus on the future?

Brian "Ponch" Rivera:

No, I'm sorry, they call them retrospectives, but a lot of the processes they use are just forward-looking. Like: just tell me what you're going to do better.

Todd Conklin:

Then why the hell do they call it a retrospective?

Brian "Ponch" Rivera:

It's in the name, isn't it? Kind of funny. Yeah, well, it's a little counterintuitive there. Right.

Brian "Ponch" Rivera:

So the hard part, and I think this connects back to psych safety and allowing information to flow, and I want to get your thoughts on this: having a leader on a team that may be running a learning team or an effective debrief provides an opportunity for that leader to show fallibility, to stand up and say, I did something wrong, right? Plus, it affords an opportunity to ask powerful questions. All these things connect back to psych safety. Are you creating the right conditions for your people to speak up?

Todd Conklin:

I think a lot of it's more than speaking up. It's to disagree with you. Yeah, yeah, yeah. To tell you...

Brian "Ponch" Rivera:

You're wrong, yeah. So to have that tension, that healthy tension, to have that cognitive diversity that everybody wants, you need to have that touch point, if you will, and we find that debriefing is a great place to do that. And that's what we learned in fighter aviation. If you go back to Amy Edmondson's work and where she learned about psych safety, from healthcare organizations and surgical teams, they were being trained by aviators, right? So there's a nice connection here. It's pretty cool how all this kind of connects to HOP.

Steve McCrone:

I like it. I think it's fun.

Brian "Ponch" Rivera:

Yeah, but I'd say the majority of folks we work with don't understand that these things exist: team science, effective debriefing approaches, effective planning approaches. How do you actually leverage the diversity of a group through things like a premortem or a red-teaming technique, that type of thing? So these things are out there. But getting back to learning teams and building psych safety: what is it in your workshops, or what do you recommend people do, to help learn from the past and, at the same time, improve future performance?

Todd Conklin:

Well, so whether the leader is in the room or not is pretty context-dependent, and if it creates a chilling effect so people won't talk, then you don't want the leader in the room. That's pretty obvious. The ability to model that, I think, is pretty powerful, but that's probably not the best place. The learning team exists to really perform one function, and that's to understand the problem. So when you have an especially curious problem, like: why aren't they doing this? It's our most important safety rule, and no one's following it. How come? Why aren't they getting the ladder right?

Todd Conklin:

Then you really swarm together a group of people that do that work. You put them in a room and you say: tell us what we need to learn, because we don't know. Clearly we're answering the wrong question. And they're going to tell you, well, you know, we probably need some engineering people in here, we need some maintenance people in here. That's fine. Just bring them in and allow it to develop. You could have a facilitator, maybe, to take notes, but just allow the conversation to happen. And what's interesting is that the conversation between the group is probably as interesting as anything that's going to happen. In a very adaptive way, this emergent understanding of the problem will be co-created by this team of people. That's what a learning team does. The reason it's advantageous is not because it has strong cultural influences, although it does, or that it allows leadership the opportunity to respond humbly and be a part of the problem identification. It does that as well. What it really does is, in a very fast and accurate way, get you a lot of information that you probably didn't even know you needed. So I've done a million learning teams. A million.

Todd Conklin:

We started them years ago in Los Alamos, and they're not necessarily a novel idea. I mean, they've existed a long time. We just started doing them for safety curiosities, for reliability and resilience curiosities around high-risk work. And what was amazing is these teams would get together and have this deep discussion about what the problem was, and almost without fail, the problem they brought out of the room was completely different from what we imagined it would be. Yes. I mean, without fail. It was amazing, and it was always a much better, much more mature, much more enlightened and expert-based understanding of what the problem was. The cool thing about...

Todd Conklin:

That is, once you identify the problem, the corrective action writes itself. It's super easy to write the corrective action, and it's super easy to prioritize the corrective action. And what it does is save you a ton of time. A ton of time. It also, I think, creates an opportunity for workers to tell the truth, and they can really speak truth to power because they've got the protection of the other team members. I don't think it fixes the cultural problems of an organization, but that's not its intent. Its intent is to really help understand why aren't people using the right ladder, or why are they not stopping at this stop sign, or why are they not doing lock and tag on this job? You know, it's a pretty dangerous job and nobody's doing lock and tag.

Brian "Ponch" Rivera:

But you want all teams to be learning teams. All right, I think all teams are learning teams.

Todd Conklin:

Teams exist to perform a function, right? And if they're not learning, they're not a team. Yeah, yeah, they're not a team, and they're not going to behave like a team.

Todd Conklin:

Yeah, I mean, if they're not learning, they're not a team. And so the difference is: think of a learning team like a tool, like a data-collection tool. There you go. And then what I think they're really good at is helping you prioritize. So if you use a learning team and they come up with three solutions, the next question I would ask that learning team is: tell me which one we should do first, which one second, which one third. Yeah, and let them own that prioritization, which is really, really powerful. And, speaking secretly, no one listening: as a leader, it's a million times easier. A million times easier, and way faster.
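One hedged way to picture that team-owned prioritization is a simple Borda-style count over each member's ordered ranking, as in the sketch below. The member names, the candidate solutions, and the scoring rule are illustrative assumptions, not a method Conklin specifies.

```python
from collections import Counter

def team_priority(rankings: dict[str, list[str]]) -> list[str]:
    """Turn each member's ordered ranking into one team-owned priority list."""
    scores: Counter[str] = Counter()
    for member, ordered in rankings.items():
        for position, solution in enumerate(ordered):
            scores[solution] += len(ordered) - position  # first place scores highest
    return [solution for solution, _ in scores.most_common()]

rankings = {
    "maintenance": ["fix ladder storage", "buy taller ladders", "rewrite the rule"],
    "engineering": ["buy taller ladders", "fix ladder storage", "rewrite the rule"],
    "operations":  ["fix ladder storage", "rewrite the rule", "buy taller ladders"],
}
print(team_priority(rankings))  # the team's prioritization, not the leader's
```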

Brian "Ponch" Rivera:

Switching over to VUCA: you use VUCA quite a bit, or you have. I do, I'm a big fan of VUCA.

Todd Conklin:

But you know, I worked a lot with both Team Green and Team Blue. A lot.

Brian "Ponch" Rivera:

So, okay, we get volatility, uncertainty, complexity and ambiguity. Some people actually say there's a TUNA framework, there's a TACO framework, and you know, it doesn't matter. You're just describing the external environment, am I correct? Right, yeah?

Todd Conklin:

What I was grasping for, and the reason I pulled VUCA out of my brain, I mean my memory, because I didn't invent it by any stretch of the imagination: I remember working with these, you know, highly effective teams that were just really good. They had very little leadership, and they would talk a lot about the VUCA framework, because uncertainty went so high during the pandemic that we really needed an opportunity to talk about it. VUCA does a lot of things really well, but one thing it does is say you have to increase the diversity of your data set, not decrease the diversity. So I was working a lot with boards of directors and senior leadership teams for these really big companies, and we were doing all these Zoom meetings, because I started teaching this class called Bouncing Forward, because, I don't know, I was bored, and it really caught on. Tons of people, I mean big companies, would call and say, can you do this for our senior leadership team? And it was a really great opportunity, using VUCA, to tell them that.

Todd Conklin:

I know it feels like, as things become more and more uncertain, you want to circle the wagons, shut the doors and talk to your trusted advisors. That's exactly the wrong thing to do. You couldn't make a worse decision. You want to open the doors, open the access, and you want to talk to levels of the organization that you normally would never talk to, because the more diverse the opinion is, the better you are, as a decision maker, at understanding the options you have before you on how the organization is going to move forward in an uncertain world. That's what I wanted VUCA to do, and it really allows that conversation.

Brian "Ponch" Rivera:

Well, I think we use it the same way. We use it to explain the external environment. And the name of this podcast is No Way Out, right?

Brian "Ponch" Rivera:

I know, I saw that. It's kind of cool. Let me give you some background on this. The background is from John Boyd in his Conceptual Spiral brief, which was originally titled No Way Out. In the brief he has a section where he identifies features of the world. There are several of them: quantum uncertainty, numerical imprecision, ambiguity, and so forth, like 10 or 12 of them. Basically VUCA. And he says there's no way out of these things, you know, unless we go through a world of reorientation. And that's his OODA loop. And in order to do that, we need multiple perspectives, a diverse perspective, and it's everything you just said. So that's where we got the name of the podcast.

Brian "Ponch" Rivera:

It was from that brief, about understanding that external environment. It's a lot of fun unpacking that: you know, things like affordances, and you get into the adjacent possible, the acceleration of technology. And I'll shift this over to artificial intelligence right now. All these things are happening at the moment, and I want to get your thoughts on the pros and cons, if you will, of artificial intelligence: is it improving human and organizational performance, or making it more challenging?

Todd Conklin:

It's really interesting you'd ask that question. For my podcast, I just interviewed this person I found who told me that he was not very good at this and he had no expertise, but in fact he's probably one of the leading strategic thinkers in AI. And it's really hard to talk to AI people who don't want to sell you something. At least at our level, there are a lot of people who say they have expertise in AI and, if you buy this product, you're going to be great. This guy had nothing to sell. I mean, he was such a professor, I'm kind of surprised he was dressed. It was crazy, right? Yeah, but he was amazing. And it's the podcast, you do these a lot, so it was so much better than I thought it was going to be. And I didn't think it was going to be bad, but it's really good.

Todd Conklin:

And what he helped me work through, because we just had this conversation yesterday, is a couple of things. One is, what scares me about AI is that it's slowly, I'll take that back, it looks like it's kind of rapidly, drifting towards an elaborate surveillance system. We're going to use this to monitor worker behavior. We're going to use this to monitor pilot behavior. If you don't think that's going to come out of that Air India crash, I'll buy you a hamburger. I mean, they're going to think of elaborate ways. One of my favorite Navy stories, if we ever have time, was when that submarine hit that undersea mountain, and their solution was to buy the best high-definition cameras they could buy and put them on the navigator's desk in every submarine. And I told the guy, wow, that is amazing, because the next time you hit an undersea mountain with a fast-attack submarine, you're going to have the highest-quality video of the guy hitting it.

Todd Conklin:

I mean, if it becomes a surveillance tool, it's going to suck. We don't need new ways to surveil workers. We just don't, and I'm not sure that has any value at all. What this guy taught me yesterday was that, in fact, as AI increases, and it's gobbling up pretty much all the data in the world, it has pretty much consumed the entire web, the necessity for humans to be a part of that process is going to become more important, not less important.

Todd Conklin:

It's going to replace jobs, no question. Some jobs are going to go away. But if you provide that adaptive capacity in a complex environment that creates success, like a nurse or a pilot, it probably will never replace that fully, because that sort of creative value is something that, at this point, AI can't provide yet. When it starts to write its own questions, then we have to revisit AI. I left the podcast mostly encouraged. Now, I don't know if other people are going to see it this way, but it was very interesting to me to realize that the human part of human and organizational performance is still really what holds the system together, right? Good workers working in often bad systems create successful outcomes.

Brian "Ponch" Rivera:

Yeah, that's just how it works. So here's something for you: the loss of capability, due to automation bias, due to things like Waze or Google Maps. Or even just the ability, you know, you and I were able to retain several phone numbers many, many years ago. Yeah, I've got none now; I barely know mine, right? So these capabilities may be lost. Right, right, they probably are lost.

Todd Conklin:

True. How's your buggy whip doing?

Brian "Ponch" Rivera:

My what?

Todd Conklin:

Your buggy whip. You don't even remember what that is? You don't have a buggy whip? No, because there was a time that technology was really important, when we rode in buggies.

Brian "Ponch" Rivera:

But we rode in buggies. Yeah, it's not important now.

Todd Conklin:

Yeah, and I think, 100 years from now, that's what they're going to say about.

Brian "Ponch" Rivera:

Memorizing a phone number.

Todd Conklin:

Yeah.

Brian "Ponch" Rivera:

Or following them up.

Todd Conklin:

Why would you? And it hurts my heart because we're old, right? I mean, it drives me bananas that a friend's kid came to my house because his folks live pretty far away and he had to pay taxes for the first time.

Todd Conklin:

He freaked out. He's talking about adulting. It was kind of cute, because I was like, dude, this is just the beginning, and I don't want you to freak out, but he had to pay almost $300. So I mean, I was like, right. He did not know, from the jump, how to write a check. Oh yeah, no concept at all. He didn't know how to address an envelope. He's a super smart kid. They just don't do that, right?

Todd Conklin:

I think the question I would ask in response to your question is: is our bias getting in the way of this discussion? Maybe. Because we're going to lose expertise, there's no question. Or maybe lose is the wrong word. Expertise is going to move, and in some places, I would suggest, phone-number memorization is probably better suited to machine reliability. Now, if you drop your phone in the toilet, you're screwed, and that's a problem. Yeah, but less so now that everything's kind of on the cloud. I mean, it's very interesting to see where progress takes us, but it's also kind of scary, because it means we're going to have to let go of some paradigms that we thought were pretty important and kind of normal.

Brian "Ponch" Rivera:

But this pace of change, can we handle it as humans? Do we have a choice? I don't think so.

Todd Conklin:

How do you feel about driverless cars? Would you get into a Waymo?

Brian "Ponch" Rivera:

Five years ago, no way. Today probably yeah. But this brings up another interesting thing. So when a Tesla or a Waymo hits somebody, you've got to blame somebody, right?

Todd Conklin:

Yeah, yeah. How does that work? Who do you blame? Well, so I helped with an event involving a driverless car, and it was very interesting, because in this case two cars were coming down the road, going the same direction, on a four-lane urban road, so two lanes one way, two lanes the other way. The outside car was the driverless vehicle. The inside car, near the curb, was being driven by a person. The inside car hits a pedestrian, so the person driving the car hits a pedestrian, tosses her up, and she falls from the top down in front of the driverless vehicle. Okay, so it's a pretty icky wreck. And the vehicle didn't respond, because on their failure modes and effects analysis no one thought that a pedestrian would come from above, right? Because that seems kind of weird. But you said this earlier: things that never happen happen all the time. I mean, that's the secret to what you found so interesting in the Navy, that things that never happen happen all the time. These novel ideas.

Todd Conklin:

That car failed, and it was a pretty big event. Determining where blame lay was really, really difficult, and my guess, I didn't really bother myself with that because I don't know anything, but my guess is it went to whoever had the biggest pockets. Okay, right? I'm pretty sure. Like, I wouldn't sue the person that hit the pedestrian; I'd sue the company that had the giant pockets. There you go, right. Yeah, the challenge is that the learning out of that was really rich, and the technology is so fresh and new that they were very willing to learn from it and very unwilling to blame it, right? And part of it is, it was such a weird accident, because she came from above, that it was hard to pin on somebody.

Brian "Ponch" Rivera:

Yeah, I mean, I guess, other than the driver that hit her, you know. The same things happen in commercial airlines with the removal, you know, maybe having one person in the cockpit now. Again, going back four or five years, everybody in my community around here would have said no freaking way are we going to take humans out of the cockpit; at least have two in there. Now people are leaning towards maybe there'll be one, or maybe one or two operating multiple aircraft with AI. So these things are all possible. This goes back to your point: it's definitely possible, for sure.

Todd Conklin:

Yeah, my guess is this would be a whole other podcast. So aviation got so safe, and a naturally occurring phenomenon of being really stable is that you become complacent. Complacency is always a function of a stable system. Complacency is not a failure of the person; it's a failure of us. If the system is stable enough, you can become complacent. And every improvement initiative we put into safety, which creates higher resilience and reliability, will be eaten up by production. So when we make it safer to fly a plane, then they're going to fly more planes, and so any capacity we create will always be taken away by greed or production or whatever you want to call it. The idea that high-risk operations can function with one pilot is fine. The sociology around it is probably a bigger sale. I think it's going to be hard to sell that, and part of it is going to be right back to the wellness and mental health stuff.

Brian "Ponch" Rivera:

Yeah. Hey, Todd, I appreciate your time today. Thanks, brother, this has been fun. So if there are any questions you have about what we do here on the podcast, anything about the OODA loop, it's your time to shoot away and ask. No, I'm good, I'm totally good, man. You like the OODA loop?

Todd Conklin:

I'm just super familiar with it from work. Yeah, I mean, we've talked about it a lot.

Brian "Ponch" Rivera:

So we've made a lot of connections here to things like neuroscience, free energy principle, of course, physics, and I think you found something fascinating in physics over the last few months.

Todd Conklin:

Yeah, complexity, yeah. Have you looked into that complexity physics stuff?

Brian "Ponch" Rivera:

Well, we look at complex adaptive systems all the time.

Todd Conklin:

Look it up. It's called complexity physics.

Todd Conklin:

It's worth looking at, yeah. Anybody to look at specifically? Anybody? Oh, I'm trying to think. I got it from Carissa Sanbamatsu, but I don't think she's doing it herself; I think she just turned me on to it. I mean, once you look at it, you'll be like, wow, the physics community must be listening to your podcast. Oh, everything's converging right now. It's amazing. Yeah, because complexity physics talks about the conditions in a system, and how, if you can understand the conditions, you can predict the outcomes better in a complex process.

Todd Conklin:

It's kind of cool.

Brian "Ponch" Rivera:

What do you have going on in the near term? Anything fun, exciting?

Todd Conklin:

Just the same cool stuff. Yeah, I'm trying to think if I have anything. No. Any events that you want to share with our listeners, or anything like that? Oh, we're having a conference in Santa Fe on September 9th, 10th and 11th. Nice. Yeah, the crowd of characters is great. There are seven or eight people doing it. It's just kind of a deep dive into learning, operational learning, and a pretty big dive into accountability. Yeah, because that's always a big issue.

Brian "Ponch" Rivera:

Here's a question for you: what does accountability mean to you?

Todd Conklin:

So think of accountability as an act of clarity, okay? And when you frame it that way, accountability makes a ton of sense. So accountability, like we used to call it in the laboratory: R squared, A squared. Did you have that?

Todd Conklin:

Maybe. Responsibility, accountability, authority? Yeah, and anytime you were on a big project, you'd write on the whiteboard: R squared, A squared. And yeah, well, that's a clarification exercise, because accountability is the discussion you have before something bad happens, not a conversation you have after something bad happens. And so when you think of accountability as providing clarity, then accountability makes a ton of sense. The problem is that, generally speaking, and this is kind of a big paintbrush, but generally speaking, we've sort of conflated accountability and discipline, and they're not the same thing. Discipline is a response. Discipline is relatively important. I mean, I wouldn't take discipline away, but discipline is not accountability. Accountability is really an act of clarity: R squared, A squared.

Brian "Ponch" Rivera:

Let me try this on you.

Todd Conklin:

I've heard this definition, and that is the ability to recount what happened, you know, looking back. Well, I mean, if you want to do, like, an exegetic definition, that's the definition of the Latin root, the notion of account. A bank account is a recollection of what's in your account, or a record of what's in your account, and I guess it does provide clarity. But I think that's an incredibly desperate attempt to define accountability. Okay, that's somebody who's getting banged up pretty hard by leaders saying, I've got to hold people accountable, and you can't even tell me what accountability means. Well, it's an account of what happened. Well, no, that's actually not accountability.

Todd Conklin:

Accountability happens before. Think of every good project you've been on: you knew who was responsible for what. Oh, yeah. And you counted on them doing it. Yeah. And because it's a good team, they did it. Yeah. Roles, responsibility, accountability, authority: those have to be clarified before you can move much further, especially in high-risk work.
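A minimal sketch of that clarification exercise, assuming a simple task board checked before work begins; the field names and example tasks are invented for illustration, not Conklin's R squared, A squared worksheet.

```python
from dataclasses import dataclass

@dataclass
class TaskClarity:
    task: str
    responsible: str | None = None  # who does the work
    accountable: str | None = None  # who is counted on for the outcome
    authority: str | None = None    # who can say yes or no

def unclarified(tasks: list[TaskClarity]) -> list[str]:
    """Accountability as an act of clarity: surface gaps BEFORE work starts."""
    gaps = []
    for t in tasks:
        for name in ("responsible", "accountable", "authority"):
            if getattr(t, name) is None:
                gaps.append(f"{t.task}: no {name} assigned")
    return gaps

board = [
    TaskClarity("lockout/tagout", responsible="crew lead", accountable="supervisor"),
    TaskClarity("ladder inspection", responsible="maintenance",
                accountable="maintenance", authority="safety officer"),
]
print(unclarified(board))  # ['lockout/tagout: no authority assigned']
```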

Brian "Ponch" Rivera:

So I think that's a great place to wrap it up. I appreciate your time, and thanks for being here with us on No Way Out. Again, Human and Organizational Performance: Todd Conklin. Appreciate your time today.

Todd Conklin:

Thanks, man, it was fun.

Brian "Ponch" Rivera:

It was a lot of fun. Appreciate it.

Todd Conklin:

See you later, alligator. Cheers.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

Shawn Ryan Show (Shawn Ryan)
Huberman Lab (Scicomm Media)
Acta Non Verba (Marcus Aurelius Anderson)
No Bell (Sam Alaimo and Rob Huberty | ZeroEyes)
The Art of Manliness (The Art of Manliness)
MAX Afterburner (Matthew "Whiz" Buckley)