
No Way Out
Welcome to the No Way Out podcast where we examine the variety of domains and disciplines behind John R. Boyd's OODA sketch and why, today, more than ever, it is imperative to understand Boyd's axiomatic sketch of how organisms, individuals, teams, corporations, and governments comprehend, shape, and adapt in our VUCA world.
No Way Out
AI’s D&C Cycles Accelerate: Harmonizing Agents and The Big 'O’rientation with Mahault Albarracin, PhD
What if we've been thinking about AI all wrong? What if endless scaling isn't the answer, but instead we need systems that understand context, embody knowledge, and grasp causal relationships like living organisms do?
In this mind-expanding conversation with Mahault Albarracin, PhD, VERSES AI Director of Research Strategy – recorded on the 49th anniversary of John Boyd's seminal paper "Destruction and Creation" – we journey through the fascinating landscape where neuroscience meets artificial intelligence. Mahault shares how her background in social sciences led her to active inference, a framework that models intelligence after natural cognitive systems rather than linear engineering approaches, echoing Boyd's emphasis on breaking down outdated mental models to create adaptive new ones.
The parallels between Karl Friston's active inference and Boyd's OODA loop emerge vividly, as both frameworks highlight prediction, orientation in complex environments, and harmonizing changing tactical actions with evolving strategic intentions. We explore why current AI systems struggle with tasks humans find intuitive – they lack embodiment within spatial-temporal reality and fail to grasp how context shifts meaning, much like the limitations Boyd critiqued in rigid, backwards-planning strategies.
Perhaps most provocatively, we challenge the dominant AI doom narratives, tracing them to biases rooted in defense funding, colonial hierarchies, and adversarial worldviews. Could our fears of malevolent artificial intelligence simply reflect our own projections? What if, instead of building systems expecting friction, we created AI capable of empathy, resonance, and connection? As Mahault suggests, "The condition for AI alignment is to give it the ability to love us, to have empathy, to see us as kin rather than just objectives."
The conversation ranges from the technical details of the spatial web (creating interoperable standards for meaningful, privacy-respecting data connectivity) to philosophical questions about consciousness, harmony in multi-agent systems, and
Find us on X. @NoWayOutcast
Substack: The Whirl of ReOrientation
Want to develop your organization’s capacity for free and independent action (Organic Success)? Learn more and follow us at:
https://www.aglx.com/
https://www.youtube.com/@AGLXConsulting
https://www.linkedin.com/company/aglx-consulting-llc/
https://www.linkedin.com/in/briandrivera
https://www.linkedin.com/in/markjmcgrath1
https://www.linkedin.com/in/stevemccrone
Stay in the Loop. Don't have time to listen to the podcast? Want to make some snowmobiles? Subscribe to our weekly newsletter to receive deeper insights on current and past episodes.
Brian "Ponch" Rivera:Hey Moose, today is September 3rd, 2025. You and I had a conversation earlier today; we'll talk about that here in a second. I started today rereading this book here called The Spatial Web by Gabriel René and Dan Mapes. Fantastic book, and one of the things that popped up in there was information warfare, and I started scratching my head, like, I didn't read that the first time I read through it. But that's one of the things I looked at today. You and I had a conversation about the Hill, so, non-speaking autistics.
Brian "Ponch" Rivera:We have some connections you personally, me in the periphery amazing connection there about outside information, collecting intel from there, and I came across a couple of things that I want to share on the screen. One is from Robin Carhart-Harris. What he wrote today is he wrote really that's not what I want to share. Anyway, I'll just read it out to you. He wrote that in the future, even more than now, what we call spiritual and mystical, will one still be called spiritual or mystical, and two also be called physics. All right, that was pretty amazing there. And then I came across something in the last 24 hours and that is hackers getting into Jaguar Land Rover production. That is of very much interest to a lot of our folks that are listening to our podcasts and what we do on cybersecurity. And of course there's a New York Times article today that just came out in the last four or five the publishing of Destruction and Creation. September 3rd 1970 is the publish date, so fitting.
Mark McGrath:Wow. Only you would know that.
Brian "Ponch" Rivera:Moose, it's also my grandma's birthday; it would be her 113th birthday. So there you go. So, without further ado, I want to introduce our guest today. Her name is Mahault Albarracin. She is up in Quebec, is that correct, Mahault?
Mark McGrath:That's right.
Brian "Ponch" Rivera:And I have a question for you. We built up some context. Can you help us unpack that a little bit by giving us your background on how you became interested in artificial intelligence?
Mahault Albarracin, PhD:Sure. Coming from the social sciences, there's a bit of a subtle thing here. We're always told that what we do doesn't matter as much; unless there's math, it's not going to work, it's not useful. So that's the underlying motivation. But perhaps more objectively, I have worked with a lot of paradigms in social sciences.
Mahault Albarracin, PhD:I studied sociology, I studied anthropology, biology, I studied a lot of fields in my specific curriculum, and I found that there were recurring things, things that spoke to a form of mechanism, but the mechanism was never really outlined properly. It was always quite narrative, always descriptive. So post hoc, you look at something, you analyze it and you say this might be the way that it happened. But you can't make any predictions off of that. And so it left me a little bereft. And so I started looking into the ways that we could bring the sciences to this notion, because as soon as you come up with a mechanism, what you're coming up with is a sort of static way to categorize things and therefore exclude certain perspectives, etc. And so, on purpose, social sciences try to do away with this exclusionary framework, and there's a reason for that.
Mahault Albarracin, PhD:But I started reading into ecological science, Varela, I started reading about predictive processing, Andy Clark, and I found that their approach spoke to how we connect to the environment, how cognition connects to culture and manifests in material reality.
Mahault Albarracin, PhD:Through reading things like Karen Barad and neo-materialism, you find that there is a way to account for diversity, account for the different ways that reality can diffract, whilst also being predictive and precise in your language, and maybe connect across paradigms as well. And so then I started reading into active inference and I met Pierre Poirier, who became one of my PhD advisors, and he said you really should talk to Maxwell Ramstead and Karl Friston, because you seem to be touching on something that they're also trying to develop. And so that's how I got more and more involved in programming and machine learning more generally, and active inference more broadly, and through the world of simulations we were able to do just that: to do predictions in the realm of cultural social sciences and account for the possibility of that diversity, and therefore not just put people into a clear box, but more like work with semi-prototypical and edge ideas through distributions.
Brian "Ponch" Rivera:Well, that sounds like a familiar story, moose, from John Boyd, looking at different sciences and bringing together disparate ideas. Mao, there's something I want to ask you Today in the New York Times article or guest essay, I believe, there's something mentioned in there that you're familiar with, with your company, and I understand you're the director of research. Is that correct?
Mahault Albarracin, PhD:I'm the director of research strategy, so I work alongside the other director of research.
Brian "Ponch" Rivera:Okay, so in that article and I'm assuming you read it have you read that article today?
Mahault Albarracin, PhD:I've read it.
Brian "Ponch" Rivera:Yeah, can you talk a little bit about that?
Mahault Albarracin, PhD:Absolutely. I mean, I know Gary Marcus. I agree with his vision. Ultimately, I think he's putting the finger on the correct problem, which is that scaling forever is not the right approach.
Brian "Ponch" Rivera:Let's talk about that. I heard this analogy years ago. I've read Jeffrey West's book Scale. You know that's pretty interesting on how things scale in nature. For Moose and I, we believe you cannot scale a linear process like a linear OODA loop. I think Elon Musk mentioned at some point that in order to go to space, you can't scale combustible engines. You need to do something different. You have to think a little bit outside the box. So can you talk about what is wrong or not necessarily wrong? What is the disposition of today's AI landscape? What do they have right, what do they have wrong, and what does the future look like with regards to scale?
Mahault Albarracin, PhD:So what they had right was that a lot of what we care about has to do with attention. You don't want to constantly have your model take only one perspective. You want your perspectives to be dependent on a given context, and therefore meaning to shift according to that context. Not everything you say is meaningful in the same way, given, say, the hierarchy of topics that you're really embedded in at a given moment. What they are getting wrong is two things. One, they don't have the right notion of embedding. Embedding needs to be about an embodiment within a moment and a spatial, temporal reality. You need to know, or think you know, something about the thing you're talking about, have a true belief, or at least some degree of a truth statement about a belief, and then you need to figure out why that statement is true relative to the rest of the world. What caused that statement? What does it relate to? So this gives you a structure, and that's not present currently, or at least not explicitly present, in the transformer architectures. Another thing is that, and that's something that Karl says all the time, a correct model is probably a very simple model. If you have to make it bigger and bigger and bigger and bigger, just to account for potentially a bit of contextual reality, you're probably accounting for too much. You're making too many connections. You're just trying to cram as much as you can within one given model, such that you think you're accounting for everything. You're not understanding anything. You're just making sure that you have some data points that you can pull from at any given moment, and I think that's fundamentally wrong, and that's what Gary Marcus was also pointing towards.
Mahault Albarracin, PhD:We have the ability to understand reality by having some core, say, structural priors about reality. Our brains are made this way, most animals are made this way. Just our very embodiments speak to that structural constraint that allows us to perceive things in a way that is useful to us. You don't need to perceive every pixel all the time at once. You really only need to perceive the things that are relevant to your survival, or at least to the scale of reality at which you're going to need to be cognizant, such that you can take actions that are useful for your life. Anything beyond that, you're probably taking in too much information, and we can speak to different kinds of pathologies that are like that, right? People who take in too much information all the time, right? They're hyper-aware, or aware of the wrong elements, because they didn't know what to pick from. Does that answer your question?
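What follows is a minimal, purely illustrative sketch of the point above, in Python: an agent with structural priors does not weigh every input equally, it updates its beliefs in proportion to the precision it assigns each channel. The channels, numbers and the kitchen-robot framing are invented for illustration; this is not code from VERSES or any transformer system.

```python
# Illustrative only: precision-weighted belief updating.
# An agent with structural priors does not process every input equally;
# it updates beliefs in proportion to how much each channel matters (precision).

def update_beliefs(beliefs, observations, precisions, learning_rate=0.1):
    """One step: belief += learning_rate * precision * (observation - belief)."""
    updated = {}
    for channel, belief in beliefs.items():
        error = observations[channel] - belief       # prediction error
        weight = precisions.get(channel, 0.0)        # relevance of this channel
        updated[channel] = belief + learning_rate * weight * error
    return updated

# A kitchen robot cares a lot about oven temperature, barely about wall colour.
beliefs      = {"oven_temp_c": 180.0, "wall_hue": 0.30}
observations = {"oven_temp_c": 210.0, "wall_hue": 0.90}
precisions   = {"oven_temp_c": 1.0,   "wall_hue": 0.01}   # structural prior: what is relevant

print(update_beliefs(beliefs, observations, precisions))
# The oven belief moves a lot; the wall-colour belief barely moves at all.
```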
Brian "Ponch" Rivera:Now I'm smiling. I want to check in with Moose here in a second. The conversation we had about non-speaking autistic children lines up very well with what you described, you know, on the show we've talked about. When you read an article, you don't read every single letter and every word.
Mark McGrath:You kind of predict it, you kind of alluded to that. But, Moose, I want to hear from you, man, on OODA, or let me throw the thoughts to you. Well, I mean, I think the time is now to publish that article finally and, uh, decimate that. But what I hear Mahault describing is the OODA loop sketch. I mean, I feel like John Boyd, basically, would align with everything that she just said. Um, and as I'm sitting there listening, I mean, I can just see it, I can see the sketch laid out and everything. So, no, I mean, do you think, Ponch? I don't know. No, it all resonated with me. I thought she was visually painting. I thought, Mahault, you were visually painting the OODA loop sketch.
Brian "Ponch" Rivera:I imagine she's thinking about the free energy principle and how that sketch, the basic sketches out there, like the perception action loop and the way you see it in a lot of books and articles right now.
Brian "Ponch" Rivera:The way we look at it is we just extend it through the Boyd's OODA loop where we show prediction, we show planning, we show the perception, we show what Stephen Kotler calls pathfinding, which is really intuition, habits of mind, that autonomic response and or finger spritzing of fuel, that intuitive feel. Moose and I have been kicking around the idea that one of the most under reported, underutilized part of the OODA loop is outside information, and that was that connection back to non-speaking autistic children that are connected to the hill or whatever they may be connected to. But my guess is, when folks have some type of experience and flow, they create something. They create something like the spatial web. Maybe Carl Friston sees something with his work in fMRI and then creates something new based off of being present. So, mao, can you build on that some more and then help us unpack what artificial intelligence may look like here in the new feud?
Mahault Albarracin, PhD:I mean, I think the word is in the grapevine, right? Like, I think everybody's pointing towards agents, and what they're really pointing towards when they use the word agent is trust. I know it sounds silly, but an agent is something that has an ability to act on the world in a predictable way and, ideally, in an accurate way, and by that I mean you can predict something will go wrong all the time, and that's not useful. Right now, I can predict that if I ask ChatGPT to calculate something for me, it won't be able to do it. The other day I was trying to use one of the services from a payment system that was supposed to allow me to quickly create SQL queries, and I swear to God, I used it for two hours. Within two hours, their own in-house service that was supposed to be fine-tuned could not give me the right results, and I didn't have that much data. I had 87 data points. They should have been able to give me something, right? But that's the thing. Because they're not really anchored in any sort of stateful belief space, they don't have a sense of what causes what. They don't know. So we're going towards real agents that are either embodied within edge devices or robots, or simply that have an embodiment in the sense of a Markov blanket, which means they have a degree of understanding of a domain and the relationships of that domain to the rest of the domains, for example, and these agents will have an ability to navigate causal graphs. I think this is where everyone is going. Bengio is going in that direction, LeCun is going in that direction, Karl Friston is going in that direction.
Mahault Albarracin, PhD:This also allows you to potentially make predictions about how safe a given agent is going to be, or a given clustering of agents, i.e., different kinds of agents that collaborate together. Because, yes, you can make a prediction about, say, one single policy space, but what happens when you compose them? What are the emergent effects of multiple agents that you couldn't necessarily predict by only looking at them individually? So these are the kinds of things that we're going to have to research, I think, in our future. But once we have that, you have the ability to make everything smart. You have the ability to have real digital twins that can actually think through their connections. You have the ability to make large-scale predictions to understand an ecosystem and its sustainability. So once we have all that, we also will have the ability to really leverage the data that we produce. Right now, most of our data is junk. Not that it's necessarily good, although it isn't, but even if it was, we can't do anything with it because it's decontextualized. It doesn't speak to anything.
Mark McGrath:Ponch, the other thing that just jumped in my head: backwards planning sure looks silly, doesn't it?
Brian "Ponch" Rivera:Oh yeah, backwards planning is something that some military members live by in all domains. So in complex environments they try to start with the future in mind and work backwards, and the problem with that is, in complex systems you can't find a relationship between cause and effect until retrospect or till after something happens. Now I want to ask a question about agent, agency and interactions, if you're ready for this. So we understand that in complex adaptive systems, it's the quality of interactions more so than it is the quality of agents. And, with that being said, when we look at humans and, potentially, machines and robots working together, is a human considered an agent and does a robot have agency.
Brian "Ponch" Rivera:Okay yeah.
Mahault Albarracin, PhD:So there's an interesting point actually here, which is, ultimately, you never have access to the internal states of any system. You can have access to, maybe, a representation of it, a way that you think it might be working. You might have a very high confidence over what you think is going on, but the truth is you don't. So if you think of an agent that way, as something that has a Markov blanket, internal states and external states, then everything that has the ability to take actions is an agent.
Mahault Albarracin, PhD:Humans will be agents under that, robots, AIs, animals, and the beauty of that is that it forces us to consider how robots and AIs and other kinds of systems might have to do theory of mind over us and over each other. Because if you don't have access to the true inner workings of anything, the best you can do is sort of map over them and hope for the best that that map gives you accurate predictions, and inevitably there will be a time where it won't, because there's, you know, volatility, variability, because there might be a bug, there might be an issue that you couldn't plan for. There's always going to be something that will slightly shift away from what you expected, and so you need to be able to consider that whatever model you have of something will need to be updated at some point, and your predictions, your theory of mind predictions, eventually will have to be refined. So every human will be an agent in that scenario.
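A toy rendering of the Markov-blanket picture described here, with invented names: the other agent's internal states stay hidden, the observer only sees the actions crossing the blanket and keeps a belief (a crude theory of mind) that it revises whenever its prediction misses.

```python
# Illustrative Markov-blanket partition with a crude "theory of mind" update.
# The other agent's internal states are hidden; an observer only sees the
# actions crossing the blanket and keeps a revisable belief about the rest.

from dataclasses import dataclass

@dataclass
class OtherAgent:
    internal: dict                                   # hidden from every observer
    def act(self, sensed: str) -> str:
        # Behaviour depends on an internal state the observer cannot read.
        return "brake" if self.internal["cautious"] else "accelerate"

@dataclass
class ObserverModel:
    p_cautious: float = 0.5                          # a belief, never ground truth
    def predict(self) -> str:
        return "brake" if self.p_cautious > 0.5 else "accelerate"
    def update(self, observed_action: str) -> None:
        # Move the belief toward whatever explains the observed action.
        target = 1.0 if observed_action == "brake" else 0.0
        self.p_cautious += 0.3 * (target - self.p_cautious)

other = OtherAgent(internal={"cautious": True})
me = ObserverModel()
for step in range(3):
    action = other.act(sensed="obstacle ahead")      # only the action is visible
    print(step, "predicted:", me.predict(), "observed:", action)
    me.update(action)                                # refine the theory of mind
```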
Brian "Ponch" Rivera:And we are agents. Now, One thing that I was told years ago as well, and we still believe this to be true, is that we shouldn't apply engineering approaches to human systems. What I understand you're doing is you're applying natural intelligence or how living systems actually, or the way we understand them to function applying that to AI. Is that correct? Is that what you're trying to do?
Mahault Albarracin, PhD:So Karl's active inference and free energy principle is based on first principles. And so if you think of first principles, going from math to physics, to biology, ultimately, if you want to base your artificial intelligence on the only real example that we understand of intelligence, and even then that's not very well defined, right, like, there's still a lot of arguments and controversy around the term itself, nobody's really got the definition, then you have to consider the ecological definition of intelligence under the human format, and then you can sort of say, well, you know what, it's a continuum. Animals have it, some plants have it. You can even argue that some systems have it that aren't necessarily considered by most people to be intelligent systems. Mike Levin has some really interesting ideas about that. Generally he thinks intelligence is in the relationships between things, and I tend to agree with him, and that means that you can extend consciousness and intelligence to things that don't necessarily register right now as intelligent or conscious.
Brian "Ponch" Rivera:Can we, can we? I want to pause on that, moose. How's that connect to McLuhan?
Mark McGrath:Well, I mean, all technology is an extension of some human faculty, right? So, like, if we're looking at AI correctly, it should be an extension of our cognition, I would think, right? I mean, we've talked before about AI being an extension or a way to augment my orientation, but it still doesn't negate my own human orientation. But the medium becomes the message. The electronic media is overworking our central nervous system, but in this case it seems like it does become an extension of us, right?
Mark McGrath:That's kind of like, if you saw those notes that I put on Substack, where people, you know, they're constantly shitting on AI for whatever reason. It's cheating, it's plagiarism, it's super Google, whatever it is. But when you go back and you read things like Buckminster Fuller's Education Automation and Isaac Asimov talking about it, it's actually supposed to expand our creativity. If done right and done correctly, we should be expanding our creativity, which ties back to Boyd, which would be that it's augmenting our orientation, which ties back to McLuhan, which means that our sense-making capability is being expanded by this technology.
Brian "Ponch" Rivera:Here's a thought. If that's true, large language models would make us dumber if we followed them, because we're following an engineering approach, a circular OODA loop, if you will and I don't necessarily want to say dumber, but it won't expand our global consciousness but something that follows an ecological approach, a natural intelligence approach, could help us become more human, and I think I brought that up with Dr Bray.
Mark McGrath:I think that's what they were talking about, like what Fuller said. Yeah, and I think that Boyd, I think, was the prototype or the proto of this. I think that's what he was getting at with the learning capability when he mapped out the OODA loop sketch: that these technologies, as they come and become more and more pervasive, that, if done right, that's what it's doing. In other words, it shouldn't be replacing humans. It probably replaces, you know, median jobs or, you know, meaningless things, but, like, it should be augmenting our creativity, it should be augmenting our engagement, it should be augmenting our understanding of complexity.
Brian "Ponch" Rivera:Yeah, so we should become better humans as a result. Mal.
Mahault Albarracin, PhD:Or even something larger. So the way I like to think of it is, imagine we are cells. The cells in your body do not compute all the information of all your body. They really compute some local area, and they understand what they have to do relative to your local area, which is why, if you place a stem cell in any given place, it'll start becoming anything. And in fact, there are some studies that show that you can even sort of revert a cell and then make it become something new in a different spot. So that's super interesting, right?
Mahault Albarracin, PhD:So think of us that way. You don't want to lose your own independence and your autonomy, or at least the degree of autonomy that you want, but there's only so much autonomy that you want. You don't want to be telling everyone all the time what to do, right? So we as a species, anyway, are bounded by a degree of computation that we can apply to the world.
Mahault Albarracin, PhD:So think about, like, okay, assuming I'm this now, I don't want to compute everything, but I do want to be connected and I do want to gather some insights from perhaps some higher level system, like I do want to understand how to better navigate the world, the planet, other people that I don't have a direct connection with, but I also want to maintain my attention to things that are super relevant to me.
Mahault Albarracin, PhD:So then, maybe AI allows us to develop this sort of super consciousness layer that right now is a bit, I want to say, reactive, right? Because there isn't, I don't think, any sort of layer here that takes action, right? Culture is reflective, but not quite proactive. It's more like it jostles us back and forth, like we are in a sea that directly feeds from us, right? Like we are the ones that ultimately are the cells, and our survival is represented by this overarching system. So I think it will allow us to not just become better humans, but perhaps even a new kind of system wherein we coexist better and become better coordinated towards some greater, larger, more interesting goal that on our own we couldn't really fathom.
Mark McGrath:Yeah, you're getting to it. That sounds like, what you just said, that sounds like Teilhard de Chardin. That's the noosphere, right? Is that what you're saying? Yeah, which heavily influenced Boyd, and there's a lot of connectivity with McLuhan as well.
Brian "Ponch" Rivera:And there's also the adjacent possible, the idea of emergence, complex adaptive system. The whole is greater than the sum of the parts. All these things are connected and what I've seen over the last few years is the neuroscientific community is driving more towards understanding complex adaptive systems. We've seen that with Stephen Kotler's work with Carl Friston, we started with complex adaptive systems and now we're moving more towards understanding complex adaptive systems. We've seen that with Stephen Kotler's work with Carl Friston, we started with complex adaptive systems and now we're moving more towards neuroscience to understand that from a Boydian perspective.
Brian "Ponch" Rivera:At the end of the day it comes back down to the quality of interactions and this actually kind of leads us to another topic, which is teaming or teamwork, human-agent teamwork we talked about that before. But agent-agent teaming, or human-human teaming or just teaming, means that the whole is greater than the sum of the parts. It's the quality of the interactions, it's the protocols that we put in place that allow us to become greater, again, greater than the sum of the parts. So the protocol, the way I understand it, for active inference agents is this web 3.0 or spatial web. Can you talk a little bit about that, mal, and how that may bring us closer to this global consciousness?
Mahault Albarracin, PhD:Well, it's not to say that's where it's going.
Brian "Ponch" Rivera:Okay.
Mahault Albarracin, PhD:I think that's what's possible, and I think that if it were to make us something greater, that could be it, right? I'm not saying that that's the objective; that's my interest, but I wouldn't put words in anyone's mouth. But in any case, it will allow us to have a more interesting kind of connectivity. Right now, we are hyperconnected in a way that we don't control, and that is not what we want. We connect to collect different kinds of data, but none of them are connected. None of them give you holistic insights about you, and if you wanted to connect them, it'd be so hard. One, because there's no standard. The data wouldn't even be overlapping; it wouldn't be collected the same way, because there are no causal relationships between the different data that you collect.
Mahault Albarracin, PhD:So, some of your apps know that you sleep a certain way and that probably has an effect on, say, your heart rate or whatever, but they don't know that that's probably related to the way you're eating and the amount of stress you've lived through during your day, or the kinds of people you are with. There's a whole host of things that, even if you had the data all in one place, do not carry the kind of model that allows you to make that data into something useful. So the standards are really just a way to, one, make sure that we connect the data in a way that's useful, so probably interoperable, and therefore using common schemas that you can register: I use this schema, it connects to this, a little bit like Wikipedia, right? The second thing is, now that you've connected all your data, do you want everybody to see it? I personally don't.
Mahault Albarracin, PhD:I don't want you to know how I sleep. So I want to make sure that I keep ownership over my data, that I can make it work for me, and that not everybody that wants access to it just accesses it, and even if they do access it, that their access is conditional upon certain kinds of things. Like, I don't necessarily want them to access my data forever and ever and then sell it to whoever else, right? So these are the kinds of things that we're putting in place, and then obviously there are implications to making such a system allow for cognitive action by an agent. So I want that data to be encoded in such a way that context gives a different meaning, and I want different kinds of layers to speak to different things. So I can't just encode time of day; I want all of my data to really relate to how the world is connected. Like, spatiotemporality is just one element, but there are many other layers that we could overlay over that and give it also those spatiotemporal flavors and relationships. So that's what the standards are about.
Mahault Albarracin, PhD:Ultimately, one of the questions that's being discussed right now, which I think is a fascinating question, is what is an agent under that standard, and what does it mean for multiple agents to coexist and create meaning? Because if you're going to speak to a semantic layer, for example, meaning can't just be hard-coded or pushed across the system, right? It has to be composed from the different layers that an agent will sort of be directed on. So, yeah, that's what it'll be, and I'm kind of excited to see where it goes. We do have some research projects right now that are intended to show some proofs of concept of it, and then, slowly, we'll answer some questions that arise, like, okay, well, we hadn't thought of that, now let's solve that problem and see how we can make the system work at a more global scale.
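A hypothetical sketch, in Python, of the two ideas in this answer: a shared, registered schema so data can be interpreted by others, and access that is conditional on purpose and expiry. The field names are invented for illustration and are not the actual spatial web standard.

```python
# Hypothetical illustration of "shared schema + conditional, expiring access".
# Not the real spatial web specification; every field name here is invented.

from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class AccessGrant:
    grantee: str
    purpose: str              # access is conditional on a stated purpose...
    expires: datetime         # ...and on time; nothing is granted forever

@dataclass
class DataRecord:
    schema_id: str            # registered, shared schema others can interpret
    owner: str
    context: dict             # what this measurement relates to (place, night, ...)
    payload: dict
    grants: list = field(default_factory=list)

    def readable_by(self, who: str, purpose: str, now: datetime) -> bool:
        return any(g.grantee == who and g.purpose == purpose and now < g.expires
                   for g in self.grants)

record = DataRecord(
    schema_id="sleep/v1",
    owner="mahault",
    context={"location": "home", "night_of": "2025-09-02"},
    payload={"hours": 6.5, "interruptions": 2},
    grants=[AccessGrant("my_health_app", "trend_analysis",
                        datetime(2025, 9, 3) + timedelta(days=30))],
)
print(record.readable_by("my_health_app", "trend_analysis", datetime(2025, 9, 10)))  # True
print(record.readable_by("ad_broker", "resale", datetime(2025, 9, 10)))              # False
```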
Brian "Ponch" Rivera:All right. So when we you know, Moose and I grew up with the web, if you will, we were when we were younger. We were riding bikes and playing around outside and next thing, you know, the internet popped up when we were in our twenties or late teens. So that's web 1.0. Is that correct? And that's more like a static model where the like you kind of described, where they don't really interact with each other, websites, things, even our apps, like you pointed out, don't necessarily interact. They don't have a schema or protocol, and I think John Boyd actually used the word schema before somewhere in his briefs to talk about those interactions between systems. So that's important as well. So this new web allows us to navigate that space and time with agents. Is that correct? Is that what you're kind of talking about?
Mahault Albarracin, PhD:Yeah, I mean, ultimately it's not just that it allows you to navigate, it's also that it will allow the agents that you want to do things for you to be able to navigate proper relationships. The most obvious example is a smart home. There are relationships between the different appliances and ambient controls that you might have in your house, but right now none of them are connected. There's no model that exists that accounts for the interconnected nature of these elements. Your agent navigating such a graph will have the ability to, one, possibly do some structure learning over it.
Mahault Albarracin, PhD:Like, well, what are the relationships? Does it matter what temperature it is for when you want to cook? I'd argue it does. Because, for example, in summer, when I want to cook a cake, if I cook a cake at a given time, it'll get really, really hot. So maybe you want to account for that a little bit to make me more comfortable. Does my presence in the house factor into these things? Do other people, will my guests factor into it? There are so many tiny little questions that you don't want to hard code. You want your agent to be able to understand the causal relationships between things and then derive a model of that ecosystem.
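A toy version of the smart-home example, with a hand-coded causal structure standing in for one the agent would actually learn: the choice of when to run the oven conditions on its causal parents, outdoor temperature and occupancy. Numbers and names are illustrative only.

```python
# Toy causal model for the smart-home example. In practice the agent would
# learn this structure; here it is hand-coded purely to show a decision
# that conditions on causal parents rather than on hard-coded rules.

CAUSAL_PARENTS = {
    "indoor_temp": ["outdoor_temp", "oven_on"],
    "comfort":     ["indoor_temp", "occupant_home"],
}

def predicted_indoor_temp(outdoor_temp: float, oven_on: bool) -> float:
    return outdoor_temp + (4.0 if oven_on else 0.0)   # crude physics prior

def comfort_penalty(indoor_temp: float, occupant_home: bool) -> float:
    if not occupant_home:
        return 0.0                                    # nobody there to be uncomfortable
    return max(0.0, indoor_temp - 26.0)               # degrees above a comfort threshold

def choose_baking_hour(forecast: dict, occupancy: dict) -> int:
    # Pick the hour whose downstream consequences are least uncomfortable.
    def cost(hour: int) -> float:
        temp = predicted_indoor_temp(forecast[hour], oven_on=True)
        return comfort_penalty(temp, occupancy[hour])
    return min(forecast, key=cost)

forecast  = {14: 31.0, 18: 27.0, 21: 23.0}            # outdoor temperature by hour
occupancy = {14: True, 18: True, 21: True}
print(choose_baking_hour(forecast, occupancy))        # -> 21, the coolest slot
```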
Brian "Ponch" Rivera:But that's what we do as humans is, in order to do something well, we have to understand the context. So, like when my daughter tells me a story, sometimes I don't know what she's talking about because she doesn't provide any context, but she gets mad at me because she was like dad, I meant this and that I'm like well, you didn't bring that up, I didn't understand the context.
Mahault Albarracin, PhD:So that's what you're talking about: context matters, right? Context and relationships and semantics, like the meaning of semantics. I have a friend who has a sister. She's very young; she's more than a decade younger than her. And she posts stories on Instagram, and sometimes we look through her stories and we laugh because we have no idea what she's saying. The words don't mean anything to us. That's so sus, yeah.
Mahault Albarracin, PhD:I can get behind some of these words, but, like, it's incredible. Like, I have no idea what she's saying, and I'm like, wow, I didn't feel that I was that much older than her, and now I'm like, we're not even speaking the same language. So there is a mapping between what she's saying and what I could understand. So understanding how to go from one mapping to another is super important. Because even if we share the same context, the different domains in which we've existed and the spatiotemporality of those domains, like, her life hasn't been that different from mine, she's gone through the same things. She's mostly been in school.
Mahault Albarracin, PhD:I've been in school, like, it's the same context, but the socio-historical context of those domains has shifted a little bit, and so it's important to account for the drifts in distribution and be able to sort of do this remapping from one spatiotemporal context to the other, to understand that we probably have some semantics in common, but the references that she uses are different. Maybe the valence also is a bit different, right? So I think that'll be a beautiful outcome of such an architecture and, ultimately, of what active inference makes possible too.
Brian "Ponch" Rivera:But active inference is dependent on the protocol or the schema to be available. Is that correct?
Mahault Albarracin, PhD:You want some kind of standardization; otherwise you run into forever problems of this ecosystem speaking to that ecosystem with a different language, and people have to do a lot of other work. And the standard also has to do with what I told you about, like privacy and trust. There's more to it than just acting.
Brian "Ponch" Rivera:Let me throw an analogy at you and see if this resonates, and I just want to check in on this. So in the 70s we started putting black boxes in aircraft, in commercial aircraft, and we started learning that the reason aircraft were having accidents was not because of mechanical issues but because of the interactions within the cockpit. People were distracted. They didn't have a common language. I mean like a common communication brevity. You know a handoff of an aircraft, it's your aircraft and no response and nobody's flying the airplane.
Brian "Ponch" Rivera:There are stories out there about aircraft like Eastern Airlines Flight 401, an L-1011 flying down to Miami. There's a light in the cockpit that is burned out and all the crew gets fixated on that. Nobody's flying the aircraft. They believe it's on autopilot. So the way we corrected that is, we ended up working on protocols that this is how you interact in a cockpit and it's actually like team science. Now Can that be brought over to this world of I'll just call it active inference, AI and the spatial web? Is that kind of what's happening? There is trying to identify the common protocol, a common language, to make a universal, so these agents can improve how they work together.
Mahault Albarracin, PhD:Yeah, absolutely. Basically, the point is just to try to make sure that we can do something smarter. So, basically, if you think of humanity, why are we able to coordinate? We don't just look at other people and be like, they're probably going to do this. There are signifiers everywhere. We can only coordinate with people with whom we share signifiers. Otherwise, if for you red light means go and for me it means stop, we're going to crash into each other. So we need to have some kind of shared language, and at the human scale we've created all these different layers, like schemas, values, social scripts, you know, references in the way that we dress, in the way that we talk, literal language. We have all these ways that we share, and when we don't share them, we don't know how to talk. I can't talk to someone who speaks Hindi right now; I just don't know it. Unless they speak English, I can't coordinate.
Brian "Ponch" Rivera:It's the same notion considerations for AI working across cultures. I'm sure you're looking into that. How about ethics? Moose and I talk about corruption and evil sometimes on the podcast. How do we make sure AI isn't turned corrupt or evil against us? What are you doing about that?
Mahault Albarracin, PhD:So I'm going to take a stand a little bit. I'm going to stand on my podium for a bit, so it's going to take a minute.
Brian "Ponch" Rivera:Yeah, yeah, please do.
Mahault Albarracin, PhD:To explain what we're doing, I have to start from the start. You have to consider that, in general, cognition isn't neutral. That's a false idea that was developed in the past 70, let's say 100, years, where we wanted to make science as objective as possible and we considered, or at least through rationalism we considered, that subjectivity was a problem, and this is going to become relevant in a minute. But that also entailed that a lot of what we understood of ethics had to do with naturalization, what comes from the natural world and what can be formalized in that way. But unfortunately, as you know, if you understand Bayesian brain theory, cognition is not neutral; no cognition is neutral. And so we can use that to have AI represent the world. So, if you take, for example, the new model that was created by Chris Buckley and his team and Tim Verbelen and his team, you use object-centric schemas, for example, so their model, called Axiom, will see objects, relations, dynamics, not just the pixels, and so it makes it easier to, say, encode certain constraints, right? So if you think about it on the ethical element, you can think of ethics as constraints at the level that a human thinks in. So, don't do this, respect boundaries, instead of, say, abstract numerical signals.
Mahault Albarracin, PhD:You can also do inductive inference. So earlier you were talking about backwards planning, I think that's what you said, right? Backwards planning, that's inductive inference. You can prune actions that are inconsistent with goals, and so, for example, in Axiom you can integrate, let's imagine, at the level of a cultural prior, ethical priors that are directly related to goals or constraints, so, like, for example, don't choose harmful trajectories, harmful being defined as XYZ within that domain. You can also do compression and abstraction, so Axiom can generalize and consolidate not just, like, physics and gameplay, but patterns of norms. You can embed normativity, ethical behavior, as a part of its learned world model, and so what you want to have is a model that is auditable, explainable. You want the model to be trustworthy, because you want to be able to track objects and causes. You can explain decisions in human terms: I avoided this path because I risked collision with a person, for example. You want to be able to audit it, so you want to inspect the generative model and see whether an ethical prior was triggered at a given point for a given decision. And so one of the projects that we have right now is in the ecosystems group.
Mahault Albarracin, PhD:It's called Law as Code. We've already published one paper in that; we've applied it to, say, the sustainability realm, because we want to show that, one, active inference is more sustainable. It's more data frugal. Again, Axiom can do what other models can do using far fewer iterations and much less data, so just like that, it's already more sustainable. But also it can do sustainable management. It can understand how to minimize energy expenditure, how to minimize cost. And so in that system we used Law as Code by basically encoding it in the reasoning of an agent. So we didn't just give it rules, we made it understand a rule such that it knew when to use it and when not to.
Mahault Albarracin, PhD:So it has the ability to align with human functioning, because our functioning is never black and white. Our laws are not black and white; they're up for interpretation. That's why we have judges, that's why we have juries, because it isn't as simple as do this, do that. Sometimes the context shifts what you should do. Killing isn't black and white. Sometimes you're allowed to, under self-defense, for example, or under international law, under certain rules of engagement, where that rule becomes a little more fuzzy and context-driven. So you want your AI agents to be able to reason with those rules.
Mahault Albarracin, PhD:So you start with the laws. That's the easiest. There's a degree of constraint, and that constraint is a shift in the preferences that you have, and that allows you to derive the correct actions to take. But now, if you make that even more fuzzy, think of norms. Norms are not laws. It's not as definitive as a law. You're allowed to sit on the floor in a restaurant; people will look at you funny, but you're allowed to, you're not going to go to jail. So the consequences, the reinforcement for a certain parameter under norms, are a little bit less direct, and you also have a distribution over them, right?
Mahault Albarracin, PhD:In laws, in general, there's one way, not two. You're supposed to do one thing, even if it can be up for interpretation. Under norms, there are many different kinds of restaurants, many different kinds of norms. In some restaurants you're supposed to be quiet, in others you're supposed to speak out loud, and that's the point.
Mahault Albarracin, PhD:So you want your agent to have a distribution and understand how the distribution shifts given a certain context. And then you go even one layer more abstract, and therefore more blurry, and you talk about values. There isn't one way to understand kindness. If I ask you what kindness is, you probably will talk about it differently than me. We think we're talking about the same thing. So there is a very broad distribution over the things that fall under that term and the kinds of policies that fall under that term. But that is highly contextual. And so if you understand the same principle of, say, this Axiom architecture and this inductive inference, you have the ability to encode the possibility for alignment, if your agent can embed itself within your environment, learn the dynamics of your system and then apply its own reasoning to the dynamics of your system.
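A minimal sketch of "constraints at the level a human thinks in", not the Axiom architecture or the Law as Code project themselves: hard ethical constraints prune candidate policies outright (the inductive-inference move described above), softer norms re-score the survivors, and the norm weights shift with context. All names and numbers are invented.

```python
# Illustrative only: hard ethical constraints prune policies, softer norms
# re-score the survivors, and the norm weights depend on context.
# This is a toy, not the Axiom model or the Law as Code project.

POLICIES = [
    {"name": "cut_through_crowd", "harm_risk": 0.9, "noise": 0.2, "delay": 0.0},
    {"name": "wait_for_gap",      "harm_risk": 0.0, "noise": 0.1, "delay": 0.5},
    {"name": "announce_and_pass", "harm_risk": 0.0, "noise": 0.8, "delay": 0.2},
]

HARD_CONSTRAINTS = [lambda p: p["harm_risk"] < 0.1]    # "don't choose harmful trajectories"

NORM_WEIGHTS = {                                       # norms shift with the setting
    "library":    {"noise": 2.0, "delay": 0.5},
    "sports_bar": {"noise": 0.1, "delay": 1.0},
}

def admissible(policies, constraints):
    # Inductive-inference style pruning: drop anything inconsistent with a constraint.
    return [p for p in policies if all(check(p) for check in constraints)]

def choose(policies, context):
    weights = NORM_WEIGHTS[context]
    def score(p):                                      # lower is better
        return sum(weights[k] * p[k] for k in weights)
    return min(admissible(policies, HARD_CONSTRAINTS), key=score)["name"]

print(choose(POLICIES, "library"))      # wait_for_gap: noise is heavily penalized
print(choose(POLICIES, "sports_bar"))   # announce_and_pass: delay matters more than noise
```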
Brian "Ponch" Rivera:It has to be an open system. And then my next question is why can't large language models do this? Now I think I may have answered that.
Mahault Albarracin, PhD:So, yeah, one thing is the continual learning. Large language models have a hard time with that. But there's another problem: large language models are monolithic. So while there might be, in the distribution that it's learned, the ability to sort of shift context a little bit, ultimately that's kind of limited. You have one model that has roughly one perspective over things, and that's not really how ethics work. Can you really say that you share the same ethics as someone, as your neighbor? And that's not that far from you, right? It's your neighbor. Now think of someone that lives 6,000 miles away from you. Think of someone that lives 3,000 years away from you. Do you share the same ethics as them? Can you really apply that? So what if I use a large language model to understand a work of literature that was written, say, 500, 600 years ago? How's that going to work? It's not. It's not going to work. So you need this ability to re-pattern yourself, keep a perspective, and the very notion of perspective is sort of antithetical to a large language model, even if it has a context window.
Mark McGrath:We were in the Boyd archives at Marine Corps University almost three years ago, Ponch, and one of the things that we found and took a picture of was a scrap of paper. And Ponch, I was trying to share it on the screen, and maybe you can with better luck than I had, but basically it defined, it was a scrap of paper, like, on a little, I don't know, I think on the other side it was actually a shopping list, and it said "dynamic agent" and the definition of dynamic agent. It says: the medium that harmonizes changing tactical actions with changing strategic intentions and focuses them to realize the strategic aim.
Mahault Albarracin, PhD:I don't know if that means... I love that, but I can't read it all.
Brian "Ponch" Rivera:Yeah, yeah no no.
Mark McGrath:There you go. All right, so here, I'll get it again. Whoa, whoa, whoa.
Brian "Ponch" Rivera:Wait, wait. The reason you can't read it is because you didn't spend time learning how to read Boyd's handwriting.
Mark McGrath:That's right, moose and I did oh yeah, the two agents here went in there and was one on a paper towel too that we found and then two people that were acolytes of John Boyce, his closest collaborators, Chuck Spinney and Chet Richards, helped us translate a lot of these, In fact, I just sent this one for verification, but dynamic agent, the medium that harmonizes changing tactical actions with changing strategic intentions and focuses them to realize the strategic aim.
Mark McGrath:And then Schwerpunkt, that was a German word that he used which, you know, poorly translated into English, meant focus and direction. What are we trying to manifest, what are we trying to achieve? And it defeats, it defeats immediately, Ponch, what somebody we know was talking about, backwards planning, like, almost instantaneously, because it doesn't allow for any of that.
Brian "Ponch" Rivera:So harmonize comes up quite a bit. Harmony comes up in Boyd's work all the time and I think, Mal, you often write about harmony. Is that correct Harmony? Can you explain why you use?
Mahault Albarracin, PhD:harmony or harmonize when you write and in your work talking about brainwaves. But he wrote a work on synchrony and orgasms and I think he's correct. I think-.
Brian "Ponch" Rivera:Wait a minute. Wait a minute, okay, I just want to make sure I heard that correctly.
Mahault Albarracin, PhD:No, we're getting there, you'll see. I mean, he's brilliant, it's just it's a bit of a taboo topic so we tend to shy away from it. But ultimately he has a good point, which is there are different degrees to which and different modalities by which we harmonize. And I've done a little bit of thinking on this and unfortunately I don't have time to work on it. But my next core topic, I think, will be love. I'm aiming there, it's coming.
Mahault Albarracin, PhD:But the idea is that there are different modalities by which we get to resonance. And if you follow some of Karl Friston's work, he's done a really cool paper, a duet for two, I think, and it's on birds that basically listen to each other and eventually find a shared pattern. And Conor Heins also works on this a lot. He has a paper, I don't remember the title, but basically he discusses how different agents share representations of the world and how that representation can be propagated across the system. And so, to get back to the orgasms, Adam Safron showed that, potentially, orgasms are just the way two agents sort of come together, such that their resonance feeds and amplifies, ultimately to a climax that feels very good because you're so aligned, right?
Mahault Albarracin, PhD:And so I've written a paper with Toby St Clere Smithe, who is a category theorist. It's called Shared Protentions under active inference, and basically what we're showing is that there has to be a point somewhere in a hierarchical structure of commonality where we connect somewhere, otherwise we can't communicate, we can't coordinate. But it doesn't have to be at the immediate scale, right? Like, that's why, when you talk about a company, for example, me and the HR person are not doing the same thing, we do not share the same knowledge, we don't even share the same goal immediately, but there is a scale at which our goals are common.
Mahault Albarracin, PhD:And so if I understand her model, if I understand kind of where she's going, or what she's trying to do and how it relates to what I'm doing, I can feed her the correct information such that we together can find a way for our system, the system we're commonly building or maintaining, to find its non-equilibrium steady states, to find the way that it can stay alive and not dissolve. And so that's, to me, that's harmony. It can expand across millennia. It can expand across minutes in the context of orgasms. It can expand across decades in the context of a company. It can expand across an hour in the context of a conversation. But we have to find a way to reach a common goal such that my predictions at multiple scales can come to pass. I can't control the whole world, but together we can control the common ecosystem we've created.
Mark McGrath:That's Einheit, and that's, yeah, that's John. So John Boyd had an acronym. Well, he used four German words, E-F-A-S: Einheit, Fingerspitzengefühl, Auftrag and Schwerpunkt. And Einheit was the development of mutual trust. Fingerspitzengefühl is like the fingertip feeling, the intuition that we gain together as our organism becomes one. Auftrag was a contract, basically, that I didn't have to be told anything other than the final intended state, like what we were trying to achieve or accomplish as a team, or what we were focused and directed to. And that was the Schwerpunkt, that piece that Ponch had up about the dynamic agent. That was saying that with Einheit, with Fingerspitzengefühl, with that bond, that mutual trust, we're able to harmonize changing actions inside of changing intentions to reach our final aim that we're focused and directed towards.
Mark McGrath:So, as you say, things like love and other things, like a company or whatever, that does make sense, because at some point our orientations have to intersect so that we, as you say, coordinate. You know, the Austrian school economists that we use a lot in our work, like Ludwig von Mises and Friedrich von Hayek, they talked about how one of the major problems was coordination, but basically what they were talking about is that there are mismatches between what you want and what I want, and once we understand what those mismatches are, then we can, you know, exchange together and then have a mutual benefit of exchange. So it's funny how all this stuff starts to tie into, like, a unified theory of everything, but it is interesting how all these disciplines are essentially saying the same thing. Yeah, I think we have a title for this episode. It's going to be Orgasms, OODA Loops and AI.
Brian "Ponch" Rivera:How's that? Oh, we'll get a lot of clicks.
Mahault Albarracin, PhD:Yeah, we'll get a lot of clicks. Alignment right there. Like, the thing to me, the condition for AI alignment, is to give it the ability to love us, to have empathy, to see us as kin rather than just objectives.
Mark McGrath:Yeah, that's.
Mahault Albarracin, PhD:Hold on a second.
Brian "Ponch" Rivera:In naval aviation. It's okay to love things, but you can't love things, all right. Does that make sense? No, it's okay. Do you want to? It's okay to love your airplane, but don't make love to your airplane. That's a sin. Yeah, there we go.
Mark McGrath:Yeah, so that's what I'm trying to point out with the convergence that Teilhard was talking about. By the way, now that we have someone that has a better command of the French language than I do, is that the proper way to pronounce Teilhard, like T-E-I-L-H-A-R-D? Teilhard, Teilhard, Teilhard, Teilhard.
Brian "Ponch" Rivera:Well, I might just say.
Mark McGrath:Teilhard, so the Americans understand it. But what he was saying is that, essentially, there's going to be this ultimate convergence that he called the Omega Point, and that was going to be a unified consciousness that was driven by love and complexity across an evolving humanity.
Mahault Albarracin, PhD:I mean, again, I think that's also where Adam Safron is trying to go and where I think a lot of people are aiming, and I think if we go in that direction for AI alignment, it is much more productive than going the doom route. Like, the assumption that things want to kill you, I think, is showing a prior. It's showing your bias, and I think we need to explore and examine that bias and think more along the lines of, like, if something understands you, if it has the ability for empathy, if you can create ecosystems that co-create, coexist, that find common goals, why wouldn't we want to collaborate? Why wouldn't we want to push for something? There has to be something where we have something in common. If it's a cognitive system, there is a layer that we have in common, and so we can find paths to exist on a manifold where we're not in constant friction, where we can find states of flow that constitute this harmony, and I think that's a very promising direction that more and more people are thinking about.
Mark McGrath:I think that's what Boyd saw, honestly, because when you look at a lot of the things that he studied beyond the engineering and the science and the war, it was a lot of Eastern thinking, a lot of Eastern philosophy, a lot of that type of thing. The letter that he wrote to his wife that's in the biography was: what I think that I see and what I think that I'm pursuing should change everything for humanity for the better. And, yeah, it seems like a lot of these things that he was able to see back then, and some of these others were able to see back then, before all this stuff technologically existed, I think is manifesting.
Brian "Ponch" Rivera:I don't think there's anything new under the sun right now. Right, everything that Carl Friston is coming up with has been thought of before. I think he's formalizing it, and I have a question for Mal about our bias towards the threat of destruction from AI. What is that from? I mean, do you have an idea where that comes from?
Mahault Albarracin, PhD:I do have some thoughts. They're a bit inflammatory.
Brian "Ponch" Rivera:Go for it, let's go, let's go, let's go yeah go for it, please, you're not going to like? No, we want to hear it. I mean, I had another conversation with a former aviator today and he said hey, you know, a couple of months ago I would have told you AI is going to kill us, and now I feel like we might have a golden opportunity here. So what causes that bias, or what's it?
Mahault Albarracin, PhD:The people developing these theories have mostly been funded by defense, for real. At the most core level, there's that. So obviously, defense doesn't tend to think in terms of love. In most theories surrounding war, peace is not the goal; peace is just a moment in between shifts of power. Another hot take is that a lot of the people who have been developing this and getting most of the funding are driven by the desire to acquire capital, and that's usually anchored in colonialism and hierarchies of power. You can name all the axes that you want. You can say patriarchy if you're a feminist. You can say class warfare if you're a Marxist. It doesn't matter how you look at it. Ultimately, the people driving this are the people at the top, and they have a vested interest in keeping certain things as they are, and I wouldn't say that our current world is the most ideal world.
Mark McGrath:Guardians of Decay.
Mahault Albarracin, PhD:Yep, right. So I think there have been thousands of years of alternative ways to think. There have been women thinkers who have been squashed down throughout philosophy, erased by Christian dogma literally burning their writings. You can think of all kinds of libraries that have been burnt through colonial impetus throughout history. There's so much thought that we have lost. And there's also a Western imperialism where we don't consider Eastern thinking to have any sort of value, because it's woo-woo, because it's spirituality, but the truth is, most of our Western thinkers...
Mark McGrath:We don't disagree with anything that you're saying. No, no. We might label it differently, but conceptually I think what you're saying is totally fair and makes a lot of sense, and it could be proven by just about anybody with half a brain looking around. Earlier, with some of the comments, I was going to remark about nuclear energy. Nuclear energy could essentially be a cheap, clean source. In France, I think most of the power grid runs on nuclear energy, and everybody's afraid of it because, well, some people like to use nuclear energy to do other things. And it's the same with any technology.
Mark McGrath:A baseball bat's great for a baseball game, but it could have other uses based on the intent of the agent that's got it. If I'm breaking up a picnic of people I don't like, that's probably different from what the baseball bat was intended to be used for in a game. These things get misused, certainly AI. You know, when the airplane was created by the Wright brothers, the government had no interest in it until it actually worked, and then the Army came in and took it over. They took over the airplane.
Mahault Albarracin, PhD:I think there's another factor as well, which, to their credit, the AI doomers do have a point on: even if the odds are low, the risk is so great that are you really willing to discount it? Now, I'm not a big proponent of AGI as it's understood right now. I think large-scale intelligence will be distributed; it'll be local and then globalized through hierarchical modeling of our common ideas, et cetera, so it won't be just one monolithic super being. I don't believe in that. But let's imagine that's where we're going. That would be a very powerful entity, and because it is very powerful, should it decide to kill us all, it might have the ability.
Mahault Albarracin, PhD:So I understand why: because they project that onto it and because the risk is very great, even if the probability is low. If there's a 0.1% chance that you might die doing something, you'll think about it, right? You'll take a minute. I think they have an impetus to think about it in those terms, but because they project so much onto it, they overestimate the probability of that happening. I think if you create a system to resemble what it is you want to put out into the world, you're more likely to get that system to do that. And so if we start thinking in adversarial terms, in colonial terms, yeah, obviously we're going to reproduce that, for sure. So let's bring more people to the table who have already thought past this. Let's have conversations with Native researchers who have had to deal with that and have had to change the paradigm and think in different terms, or people who have literally come up with different kinds of care ontologies. Maybe then we'll create a system that actually has those as its basis.
Mark McGrath:More perspectives, yeah. I was just going to say, to close that out, that's the problem with capital-D diversity and little-d diversity. Little-d diversity is what you're talking about: we gather perspectives from as many people as possible, because otherwise a lot of this stuff gets lost. Capital-D diversity is we all look different, but we all think the same.
Brian "Ponch" Rivera:Yeah, now I'm curious and you could you don't have to answer this if you don't, if you don't need to, or don't, or you can't. But when we start talking about active inference, a few years ago inside the DoD, very few people had any understanding of it. So, looking at the doomsday scenario, if I put on my tactical, operational and strategic mind from a military background, you can weaponize us and in fact I think we tried to do it many years ago with network-centric warfare. The technology just wasn't there and we talked to Dr Bray about this briefly, but this is a real possibility that in the wrong hands, this type of AI could be weaponized. Are you guys talking about this? Are you talking to anybody if you can share that with us about this type of threat?
Mahault Albarracin, PhD:I mean, everything can be weaponized. Yeah, as you said, a bat can be weaponized, a fork can be weaponized. So I think at that point, what we need to create is the tool set that gives people the way to enact what it is they want to enact. So if what you want to do is build an AI that serves the better interests of your people, we will give you the tools to do so. We already have AI that serves weapons systems, so it's not like this AI will be any worse than those.
Mahault Albarracin, PhD:In fact, it might be better, because active inference agents have two fundamental draws: an epistemic draw and a pragmatic draw. The pragmatic draw is simple: it wants to fulfill its preferences. That seems pretty simple, right? But the epistemic draw leads it to try to learn more, to understand more, to gain a better model of the world. You don't gain a better model if the world is over. If you predict that you stop everything or that you cut off an entire part of the world, you've effectively removed the possibility of learning anything there, and so your landscape just dipped quite a bit. So it's possible that those kinds of agents might not be as inclined to do as much harm as, say, just a reward-function maximizer.
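[Editor's note: To make the "two draws" concrete, here is a minimal illustrative sketch, not VERSES code; the policy names, matrices, and numbers are invented for the example. It uses the standard discrete-state form of expected free energy, where a policy is scored by a pragmatic term (expected log-preference over outcomes) plus an epistemic term (expected information gain about hidden states). A policy that blinds the agent to part of the world contributes zero information gain, so it only wins if its pragmatic payoff compensates for that loss.]

```python
import numpy as np

def expected_free_energy(q_states, A, log_C):
    """Expected free energy of one policy (lower is better).

    q_states : predicted distribution over hidden states under the policy
    A        : likelihood matrix, A[o, s] = P(observation o | state s)
    log_C    : log-preferences over observations (higher = more preferred)
    """
    q_obs = A @ q_states                      # predicted outcome distribution
    pragmatic = q_obs @ log_C                 # expected log-preference (pragmatic draw)
    # Epistemic draw: expected information gain about hidden states,
    # i.e. mutual information between states and observations under the prediction.
    H_obs = -np.sum(q_obs * np.log(q_obs + 1e-16))
    H_obs_given_states = -q_states @ np.sum(A * np.log(A + 1e-16), axis=0)
    epistemic = H_obs - H_obs_given_states
    return -pragmatic - epistemic

# Hypothetical comparison: an "engage and observe" policy keeps an informative
# mapping from states to observations; a "shut the world off" policy collapses
# it, so the epistemic term drops to zero and the policy scores worse.
log_C = np.log(np.array([0.7, 0.3]))          # mild preference for outcome 0
q_states = np.array([0.5, 0.5])               # uncertain about the hidden state

A_informative = np.array([[0.9, 0.1],
                          [0.1, 0.9]])        # observations resolve the state
A_blinded = np.array([[0.5, 0.5],
                      [0.5, 0.5]])            # observations carry no information

print(expected_free_energy(q_states, A_informative, log_C))  # lower G, preferred
print(expected_free_energy(q_states, A_blinded, log_C))      # higher G
```

[Both policies predict the same outcome distribution here, so their pragmatic terms are identical; the informative policy is preferred purely because of the epistemic term, which is the point being made above. A pure reward maximizer, by contrast, would be indifferent between the two.]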
Brian "Ponch" Rivera:I think that's a great place to wrap up, because, at the end of the day, the OODA loop is about a flow system that allows you to persist through time, and if you don't exist, you can't persist.
Mark McGrath:Totally fitting conversation for the 49th anniversary of Destruction and Creation. One hundred percent. All right.
Brian "Ponch" Rivera:Well, hey, I want to just turn this over to you, mal, and share with our listeners where they can find you and what you're doing and what's next for you in the next few months, next few years, whatever you have for our listeners.
Mahault Albarracin, PhD:Yeah, absolutely. You can find me on LinkedIn, Mahault Albarracin. You can find me very rarely on X sometimes. I am organizing, with David Benremo in Montreal, the International Workshop on Active Inference, October 15th to 17th. There are still some tickets available if people want to come and talk to us and learn about the most cutting-edge research in active inference and the different kinds of applications that you can do. I'm also currently working on a variety of papers that will come out very soon, including papers on these distributed systems. We've started with Econet, we're going to work on CityLearn, which is a benchmark for sustainability, and look forward also to seeing the outcomes of our robotics lab with the Habitat benchmark.
Brian "Ponch" Rivera:Right, Thank you so much for being here today. I really appreciate your time. We're going to keep you on here for a moment and Moose last words for the day.
Mark McGrath:No. Again, happy birthday, Destruction and Creation. I think that we honored that paper very well with our choice of guest and everything that she shared with us. Thanks for being here, Mal.
Mahault Albarracin, PhD:Thank you.