No Way Out

Navigating AI: Balancing Innovation, Ethics, and Human Creativity with Natalie Monbiot

Mark McGrath and Brian "Ponch" Rivera Season 2 Episode 99

Send us a text

Explore the complex and fascinating intersection of AI and human cognition with our special guest, Natalie Monbiot. Discover how John Boyd's groundbreaking concepts like the OODA Loop can help us navigate the rapidly expanding AI landscape. We'll dissect the risks of excessively relying on AI for decision-making, emphasizing the need for a balanced approach where AI serves as a catalyst for human innovation, not a substitute for human creativity and thought.

Join us as we venture into the ethical and governance challenges posed by AI avatars and AI twins in the corporate world. Natalie shares compelling insights on maintaining transparency, accountability, and control in AI applications, offering a roadmap to responsibly harness AI's potential. We also touch on how AI can redefine customer experience, creating new value while preserving the essential human touch in service delivery and decision-making.

Finally, we ponder the evolutionary implications of AI on society and employment, drawing parallels with current legislative efforts to safeguard human interests. From intellectual property issues in the age of AI to exploring the boundaries of AI consciousness, this episode challenges traditional notions and highlights the importance of a human-centered perspective in AI development. Tune in for a thought-provoking conversation that aims to ensure a future where AI and humanity not only coexist but thrive in harmony.

AGLX Confidence in Complexity short commercial 

Stay in the Loop. Don't have time to listen to the podcast? Want to make some snowmobiles? Subscribe to our weekly newsletter to receive deeper insights on current and past episodes.

Find us on X. @NoWayOutcast

Substack: The Whirl of ReOrientation

Want to develop your organization’s capacity for free and independent action (Organic Success)? Learn more and follow us at:
https://www.aglx.com/
https://www.youtube.com/@AGLXConsulting
https://www.linkedin.com/company/aglx-consulting-llc/
https://www.linkedin.com/in/briandrivera
https://www.linkedin.com/in/markjmcgrath1
https://www.linkedin.com/in/stevemccrone


Recent podcasts where you’ll also find Mark and Ponch:

The No Bell Podcast Episode 24
...

Mark McGrath:

So, Natalie, you're the third person we've had on the show who's part of the group you and I participate in, where people from across disciplines and industries come together and put their brains together. Most recently, you heard me describe John Boyd's Destruction and Creation, the Conceptual Spiral, and the OODA loop sketch, and I'm dying to know: what are your first impressions? What do you think?

Natalie Monbiot:

Right, well, let's dive in. The thing that really captivated me, just from learning from you on that call, is this idea of Boyd's that we constantly have to revise and update what we think, but, I presume, also how we think. And that resonates a lot with me at the moment, because I think that's where we're at as human beings, as a species, when it comes to AI. It's pretty easy to just abdicate all responsibility to AI. It can do quite a lot of things for you, well or not so well. It's easy to embody the couch potato and just have AI do everything for you, but I think that puts us on an extremely slippery slope as a species.

Natalie Monbiot:

There was a study, sort of an inevitable one, one of many studies that will emerge, showing that when people basically rely on LLMs to do the cognitive work, they get dumber.

Natalie Monbiot:

And so, just in this short period of time that we've had LLMs in our lives, we've already gotten dumber in a measurable way. I think it's absolutely critical to change up how we engage with AI and how we look at AI, and to use it to make us smarter, not just because it can, but because it's existential. And I think that's another one of Boyd's ideas, right? In order to be able to compete, not just with other human beings but now with other entities of our own making, such as these LLMs, we really need to have our wits about us. We need to take control and build LLMs with human purpose and benefit in mind, and, as individuals, to actually see it that way and use it that way. So that was something that really resonated with me from what you said.

Mark McGrath:

I feel like it's more of a, you know, like Ponch and I talk about all the time. Of course we talk about John Boyd's OODA loop sketch all the time. If it's done well and done correctly, it should enhance one's orientation, because our orientation guides and controls how we make sense and how we act. If there can be an enhancement there that at the same time doesn't compromise our cognitive existence or our cognitive abilities, then I think there's a lot of value in AI. As you say, though, I think more people are looking to use it like a magic pill, to get their term paper written or something.

Natalie Monbiot:

Exactly. I mean, just being really realistic, most people are going to use AI tools that way, I think, right? It's just a continued trajectory of: it's pretty hard being human. People want to tune out a lot of the time. Being tuned in the whole time is exhausting.

Natalie Monbiot:

So there are these platforms that feel very helpful, whether it's the TV, spending hours in front of the TV, or scrolling on our phones, getting sucked into social media, even though the dynamics have changed from the TV era to the social media era, where you've got these algorithms sucking you in. It is something that we want. I mean, the incentives are wrong, but it is something that we want, and I think the era of having large language models do stuff for you, and even think for you, is just an inevitability for a huge part of the population. But for those that we can engage to not do that, to actually rise above it, to not have it beat you at cognitive skills, ultimately building an intelligence that is so much greater than yours that you don't really know how it works, having abdicated all control to it, I think there is actually a serious opportunity to do so much more and be so much more human: to learn to be more human, be more responsible, double down on identifying what our purpose and aspirations are, and then actually go for it.

Natalie Monbiot:

We're talking intangibles right now, so I'll give you a concrete example that came up at a conference recently. There's a startup founder who's had a couple of unicorns already, so he knows a thing or two about creating a successful company, and he says that his new company will only have up to 100 employees. This is a billion-dollar company that will have only up to 100 employees. And, by the way, we hear all the time, I'm sure you've heard it, about the billion-dollar company with no employees. Sure, that's a pipe dream, and maybe we'll get there at some point. But today you've got this venture-backed, very serious founder who has successfully pitched Sequoia on funding this startup, which will be valued at at least a billion dollars and will have under 100 employees. The way he's going to do that is that the people in the company are just going to be doing so much more, because they can; they'll have the time and space to do it. So he said, first of all, his email address is going to be the only email address that any customer has, and his first employee he's hired, other than engineers, is a lawyer, and this lawyer also oversees HR, right?

Natalie Monbiot:

So it's not like, oh my goodness, the AI agents are doing all the work for us and we can just put our feet up. It's actually: because the AI agents are doing so much of the heavy lifting, we have to think, we have to be flexible, we have to embody the things that human beings are actually really good at. And our hard work looks like mentally training ourselves to do this, like meditation, I'm sure, and I wish I did it more. This is the kind of mind training that human beings are going to have to do to prepare ourselves for this new era where you just don't have to do any of that busy work. So I think that was really interesting. And it's going to take a certain type of person, right? Maybe different types of people will now become lawyers, because a lawyer is not what you expected it to be. I've actually got some great friends who are extremely smart but didn't necessarily know what they were going to do, and they were like, oh, my dad's a lawyer, my mom's a lawyer, and just kind of became lawyers but never really expected to do much else. Training to be a lawyer often doesn't coincide with also having an entrepreneurial mindset. This new future is going to demand new things of people and new things of certain professions.

Natalie Monbiot:

I think what's also really interesting about the lawyer example is, in the earlier days of talking about AI, like a couple of years ago, it was: we're not going to need lawyers anymore. And I agree that time-based fees, for instance, need to adapt, because it just takes less time to deliver that expert advice. But it is really interesting that at a company like the one I've described, the first employee other than the absolutely essential engineers was a lawyer. So that's another interesting insight for me: humans that have deep expertise are poised, if they use it well, to get the most out of AI.

Brian Rivera:

You brought up something a little bit earlier, a couple of things: meditation, human performance, and then, going back to the current state of LLMs, ChatGPT. We've had guests on the show who have been quoted as saying that the LLMs are basically stochastic parrots: they are reproducing patterns without true understanding or creativity. And this goes back to novelty-seeking capabilities. As humans, are we running a danger where, learning from these LLMs, we limit our capacity for growth, because we're so dependent on these things that aren't creative, and that could drive us to be less creative? So I agree with you on that. That's a pretty interesting aspect of LLMs.

Brian Rivera:

The next thing is a limitation of an LLM. The current approach to AI, the way I understand it, is that these models do not possess real-time adaptability, and that's what John Boyd's OODA loop shows us, as long as you read the OODA loop the correct way, and I know Mark presents it the correct way. Unfortunately, right now, Mark and I get a lot of emails from researchers asking us to look at their closed-loop approach to LLMs or agentic AI that uses a closed-loop reading of the OODA loop, and we know that's not the way the OODA loop is designed. It's an open system. It interacts with the external environment, so it's not heavily dependent on pre-existing data; it's more dependent on having that connection to the external world.
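To make that closed-versus-open distinction concrete, here is a toy sketch, entirely an editorial illustration and not anything Boyd or the guests specify: the closed loop can only re-sample its frozen dataset, while the open loop keeps observing a drifting external world and lets the mismatch revise its orientation.

```python
import random

# Toy sketch (illustrative assumption, not from the episode): a "closed" loop
# that only replays a fixed dataset, versus an "open" loop that keeps sampling
# a changing external environment each cycle.

FIXED_DATASET = [0.4, 0.5, 0.6]  # pre-existing training data, frozen in time

def closed_loop(steps=5):
    """Orientation never leaves the boundary of the original data."""
    belief = sum(FIXED_DATASET) / len(FIXED_DATASET)
    for _ in range(steps):
        observation = random.choice(FIXED_DATASET)  # only re-samples the past
        belief = 0.9 * belief + 0.1 * observation
    return belief

def external_world(t):
    """A drifting environment the closed loop never sees."""
    return 0.5 + 0.1 * t + random.gauss(0, 0.02)

def open_loop(steps=5):
    """Orientation is continually revised against the live environment."""
    belief = 0.5
    for t in range(steps):
        observation = external_world(t)   # observe the world as it is now
        surprise = observation - belief   # mismatch drives reorientation
        belief += 0.5 * surprise          # orient: update the model itself
    return belief

if __name__ == "__main__":
    print("closed loop belief:", round(closed_loop(), 3))  # stuck near old data
    print("open loop belief:  ", round(open_loop(), 3))    # tracks the drift
```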

Brian Rivera:

So where I'm going with this is: current models, the way I understand AI at the moment, are built on the wrong view of John Boyd's work, and what could be emerging over time is the real OODA loop informing AI. And it's not going to be everybody looking at the OODA loop, don't get me wrong; it's going to be things from physics, from neuroscience, and so on that are going to inform AI. So I just want to get your thoughts on where we are now with AI and what's coming rapidly next. What do you see happening in the next few months to a couple of years?

Natalie Monbiot:

I want to touch on something that you said, if I picked this up correctly: basically, that AI doesn't necessarily engage with the external world. Is that what you're saying?

Brian Rivera:

Yeah, once you put the human within the boundary, the human puts something in there and it has a data set to pull from, so it's not really engaging with the external world.

Natalie Monbiot:

Yes. So I want to address just that part of what you said, because I think that's really interesting. First of all, AI is becoming more multimodal, right? So it is sensing. When you have AI agents, they are able to sense stuff. Besides what you've told it or prompted it with, it can detect things, sometimes in its own environment.

Natalie Monbiot:

Now I'm going to bring this back to virtual human use cases, because this is my particular focus in the AI space. Just to give you an example with AI avatars, or AI twins, which, by the way, can be something that you converse with in chat, or an AI twin that you can call and have a chat with on the phone, so that's a voice-enabled one. You can also have ones that are removed from behind the screen and inserted into a real-world environment, and it can sense you, in the sense that it can pick up not just what you're saying but your tone of voice, and it can sense its environment. It can pick up the temperature of the room, how many people are in it, you know. There's a kind of seeing, sensing, and hearing evolution in AI form factors. So I think there's that. But at the same time, as much as there are these innovations in multimodal AIs, I think there's actually a good thing here, in the sense that it is never really going to be this thinking, sensing being that has an intrinsic purpose like a human. And so that's good, and an example of why we can't abdicate everything to AI, because it just can't ever replace a human being.

Natalie Monbiot:

And so, when you think about different ways of building the future of AI, contrast, let's say, Sam Altman's view of AI, where apparently we've already blown past AGI, we've got AGI already, these huge milestones of, oh my god, look how powerful AI is, but, okay, where does that leave us? Contrast that with Anthropic's CEO. Probably the closest competitor to OpenAI is Anthropic, led by Dario Amodei, who actually started off at OpenAI, and he uses the term powerful AI. Just the choice of language gives it context, right? It's powerful; it's a tool that we can use and align with. It doesn't make it sound like science fiction. And he talks a lot, in an essay that he wrote a few months ago expounding his vision of AI, about the possibilities and also his concerns.

Natalie Monbiot:

He's a big proponent of aligned AI, AI that aligns with human interests, right? And so, even though these models could just blow past us on all kinds of levels of intelligence, that's not necessarily useful to us, right? So how do we collaborate with AI in a way that's aligned, that is useful, and that pays attention to the pace of the real world? The real world has seasons. Right now it's absolutely freezing in New York, and in a few months it's going to get warmer. This is the pace of the real world. How do we align AI, and the pace of AI development, with nature, and also with human intent and specific goals? Like trying to crack a specific challenge in curing cancer, as opposed to just saying, go cure cancer. That's just not going to happen. You have to work with patients, you have to work with doctors and surgeons. It all needs to happen together. So I think the future of AI, if done right, needs to be very aligned with the purpose and also the pace of human life.

Natalie Monbiot:

And I think what's concerning is the narrative of it just blowing past us. It's so intelligent, why even ask it any questions? Why even question how it works, or why it's doing what it's doing? That's too hard; it can just do it, and we'll just, I don't know, abdicate control. And I think that's dangerous.

Natalie Monbiot:

And it relates also to, I'm reading Yuval Noah Harari's book Nexus, and I find the first half of the book just fascinating. He is a historian, and he is such a good storyteller, and he tells the history of religion, of believing in a deity that is utterly infallible. And if a deity is infallible and you can't question it, then it doesn't matter what happens in the real world; it's infallible, and so everybody lives and dies by whatever this deity determines.

Natalie Monbiot:

And so I think where Yuval's going with his book is: we don't want to create an AI that is like a deity, supposedly infallible, that we cannot question, because that can send us down very dangerous pathways. So, you asked what I see as the future. I see a few different scenarios, and there's what I want to make happen. I think there are a lot of very intelligent people, both on the build side and the policy side, and it will take a village. It's a very human problem, how to guide AI in the right direction, but I think it's our prerogative to guide it in a way that is aligned with our interests.

Brian Rivera:

Is that connected to the governance and ethics conversations that are happening now, I think even at Davos this week? We're recording this the week of Davos. But governance, to me, is the black box: what's actually happening on the inside? Do we have a clear understanding of what's going on? Can you talk a little bit more about that, as they're shaping policy globally? And I think Trump, our new president, may be pushing out a new AI infrastructure game plan that is heavily dependent on energy; this new path to AI has a high energy cost associated with it. So are these assumptions all valid, you know, these people asking questions about governance and ethics? And then...

Mark McGrath:

Do we have the right infrastructure, energy infrastructure, to support this AGI? And who would have the final say? That would be the thing I'd want to know: who'd have the final say on that? Or could it be organic, like an open system, like open source?

Natalie Monbiot:

Okay, I'm going to break down the question. That's a very big question, and I'm going to try to ground it in my area of expertise to make it tangible. So, ethics and governance: valid topics of discussion, absolutely. Where things land, where the lines are drawn, all of those nuances are absolutely critical to how the future will actually unfold. But just to bring it back to my area of expertise, which is around AI and virtual human technology: there's this idea that you can create your own AI twin. So what does an ethical AI twin look like, right? Let me just break it down with an example.

Natalie Monbiot:

I actually did a TEDx talk on this topic a few months ago, and I told the story of a real-life girl called Emma who became a compensated avatar on the platform of a company that I helped co-found a few years ago, Hour One. She became an AI avatar on the platform, and I told the story of, first of all, what the benefits are to her. Why would anyone create an AI avatar in the first place? Well, she can make money while she sleeps, literally, by having her AI avatar appear in videos and content to teach German and other languages that she doesn't know, around the world. And so, wow, that's a whole new vision of the future of work and what's possible, right? New forms of passive income. But then I also go into how we, very thoughtfully as a company, approached how to do this in a safe way, in a way that's ethical, that was good for Emma, but also good for all the people who would actually be exposed to the content featuring AI Emma, what they should know about that content, and why that content would even be valuable to them. So, in that light, I came up with a framework which I call ethically sourced avatars, and there are three components to it.

Natalie Monbiot:

First of all, and, by the way, as I talk through this, I want to give a concrete example, but I think it's also largely a metaphor for how an individual can think about engaging with AI in their life, right? So, anyway, the first one is around control. Emma, in creating her AI twin, needed to consent to that, right? You're not just going to have AI versions of Emma created and used by anyone without Emma's consent. So: Emma's consent, and Emma's control over how her AI would show up in the world, where it would show up and where it would not, not next to any illegal or illicit content, or content of a sexual nature, and we didn't do political content as well. So: to know, to feel safe in how your AI is being deployed.

Natalie Monbiot:

And then the next one is around transparency, and I think this is a huge topic that applies much more widely than AI twins. So, first of all, transparency in the fact that someone who is engaging with AI, whether it's AI-generated text, a voice, a person, should have it made explicit that what and who they're engaging with is an AI. That's just to respect the user's right to know and, basically, to build trust, right? If people are unsure about what they're engaging with and why, that leads to a collapse in trust. And then transparency to the extent of not just knowing that it's AI, but where it came from. There are new protocols that are actually gaining ground. There's one called content credentials that a lot of the major platforms have adopted, including YouTube recently, so that when you see a video that's generated using AI, you can hover, and, through DALL-E, OpenAI's image-generation platform, through an icon in the corner of the image, you can see that that content was AI-generated, and its provenance, where it came from, which platforms were involved. Another term that's used in the industry is explainable AI: you can actually see the source of the AI and all of that. So I think these are best practices. Whether you use technical solutions like these protocols depends on the context and the use case, but being explicit about it in the first place is really important.
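As a sketch of the transparency idea, and emphatically not the real Content Credentials (C2PA) API, here is a toy provenance checker; the manifest fields below are invented simplifications, just to show how a platform might surface "this was AI-generated, and here's where it came from" to a user.

```python
import json

# Toy sketch: a made-up, simplified provenance manifest (hypothetical field
# names, loosely inspired by the content-credentials idea) turned into a
# user-facing notice.

SAMPLE_MANIFEST = json.dumps({
    "claim_generator": "ExampleImageTool/1.0",   # hypothetical field
    "assertions": [
        {"label": "ai_generated", "value": True},
        {"label": "source_platforms", "value": ["ExampleModel", "ExampleEditor"]},
    ],
})

def describe_provenance(manifest_json: str) -> str:
    """Turn an (assumed) provenance manifest into a disclosure string."""
    manifest = json.loads(manifest_json)
    facts = {a["label"]: a["value"] for a in manifest.get("assertions", [])}
    if facts.get("ai_generated"):
        platforms = ", ".join(facts.get("source_platforms", ["unknown tools"]))
        return f"This content was AI-generated (tools involved: {platforms})."
    return "No AI-generation assertion found in this content's credentials."

print(describe_provenance(SAMPLE_MANIFEST))
```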

Natalie Monbiot:

And then the third one, after control and transparency, is accountability. This includes everything from being lawful, actually understanding what the law is saying or where it's going, that's just the basics, to what is ethical, what feels right by your customers and your users, and what the industry standards are within your field of work. Different industries and different platforms are establishing rules built on ethics and good governance, and it's about understanding what the different parameters are and making sure you're reflecting them in how you're going out there. So these are the three components, boiled down and simplified, at least to help guide anyone that's trying to make an AI twin, an AI version of themselves, an AI extension of themselves: how to think about doing it, and how to do it well, for yourself and for others.
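One way to see how the framework could operate in practice, this is an editorial encoding, not anything Natalie's companies publish, is as a literal pre-deployment checklist, where any failed check blocks the avatar from shipping.

```python
from dataclasses import dataclass, field

# Minimal sketch (illustrative assumption): the three-part "ethically sourced
# avatar" framework, control / transparency / accountability, as a checklist.

@dataclass
class AvatarDeployment:
    subject_consented: bool            # control: explicit consent
    excluded_contexts_respected: bool  # control: no illegal/illicit/political use
    disclosed_as_ai: bool              # transparency: viewers know it's AI
    provenance_attached: bool          # transparency: where it came from
    lawful: bool                       # accountability: complies with the law
    meets_industry_standards: bool     # accountability: platform norms
    issues: list = field(default_factory=list)

    def approve(self) -> bool:
        checks = {
            "consent missing": self.subject_consented,
            "excluded context violated": self.excluded_contexts_respected,
            "not disclosed as AI": self.disclosed_as_ai,
            "no provenance": self.provenance_attached,
            "unlawful use": self.lawful,
            "below industry standards": self.meets_industry_standards,
        }
        self.issues = [problem for problem, ok in checks.items() if not ok]
        return not self.issues

deployment = AvatarDeployment(True, True, True, True, True, False)
print("approved" if deployment.approve() else f"blocked: {deployment.issues}")
```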

Natalie Monbiot:

And I guess the point as well is: why are you creating it in the first place? You're creating it to create, we hope, fresh value that couldn't have existed before. And now I can give you an example of where I think AI and LLMs have been and where they're going. Where we've been up till now is everyone's been super excited: ChatGPT, a lot of hype, but also substance; people are really using it in meaningful ways. The same with AI avatar technology. Oh my goodness, here's my avatar, I bet you didn't know that was me. People are like, wow, all of these things are possible, and the capabilities continue to develop. Your voice can sound extremely natural, it can express emotion, it can respond intelligently because it can sense its environment or what the other person is saying. All of these technical developments exist. But, at the end of the day, what new value are you creating with this technology?

Natalie Monbiot:

Because if you're a business, you're trying to, I don't know, delight your customers, right? Delight your customers at scale, let's say. Having AI twins, leveraging AI in that way in your business, can help achieve that. And maybe your customers will be delighted when they're speaking to an AI, but only if you're delivering value they weren't getting before. If you've just replaced a human with an AI, I don't think they're going to be very happy with that. So I really think the opportunity is not an efficiency play. It's about how you can augment human capability using AI, for the benefit of humans, platforms, and businesses. So I think that's what's exciting.

Mark McGrath:

I'd rather talk to an AI than be on hold for three hours, right? Well, there you go. Especially if it's a smart one, right? Yeah, I mean, wouldn't you want that?

Natalie Monbiot:

You know, sometimes, I don't know, I'll call, let's say, Delta or something, and you can almost tell in the first... we don't want to name any names, but United...

Natalie Monbiot:

Well, sometimes... actually, I would say overall Delta does a pretty good job, so that's not where I'm going with it. But you can tell within the first 20 seconds whether it's going to be a great conversation, whether they can really help you, or whether they won't be able to help you and you've got a bit of a dud, do you know what I mean? And it's dependent on how brilliant that individual is, because people come in different shapes and sizes and all of that. So, actually, this is kind of funny: many years ago, maybe 11 or 12 years ago, I was a strategist at the IPG Media Lab, which is an emerging-technology lab under Interpublic Group, and it was a really amazing and fun job, because, besides thinking about emerging technology, how it's going to shape people's behavior and opportunities for businesses, and what the social landscape is going to look like with this technology incorporated, I got to meet a lot of really cool startups, some of which, retrospectively, were definitely before their time but very visionary.

Natalie Monbiot:

There was one that was this app designed for retail. I can even remember what it's called: it's called Signature, and I think they had a partnership at the time with Neiman Marcus; they were doing a pilot. And you've got these incredible, I don't know what you call them, personal shoppers, or shopping assistants, and the top-notch ones would just yield so much in sales, right, because they were just so effective.

Natalie Monbiot:

Customers loved them, trusted them, really liked their advice. And so this app was designed to basically model the best possible personal shopper and put that application in the hands of all the personal shoppers, essentially up-leveling all of their expertise to be top-notch. And so this is such an early example of how you can use AI, based on a real human, to up-level your own capabilities, but also the capabilities of the crowd, of the group. I think that's the way to think about it. I'm sure a customer who, every time they walked in, was suddenly getting this top-notch advice would be fine knowing that it was assisted by AI and a knowledge base, fine-tuned to serve them consistently in the best possible way. So I think a really great litmus test is: does somebody want this? Is this creating outsized value, something that could not have existed before?

Mark McGrath:

I mean, again, I'm not always opposed to the interaction with, say, a platform like Substack. I find their AI Q&A, the back-and-forth chat, works fine; you don't need a person per se. But when you're getting on to change flights, or to cancel a flight or whatever, it seems to me that's this level of punishment, because you're waiting for a human to answer a phone who's probably not going to answer anytime soon on your schedule. It seems ripe to be replaced by a good use of AI. You talk about time and space; I think there's a lot of time and space that we want to have back as humans, and there are a lot of things that could be outsourced in a way that makes us better, to a point.

Mark McGrath:

I mean, assuming it's done right, because I like those things that you say, with the control, the transparency, and the accountability. So, and then...

Natalie Monbiot:

Oh good, sorry. In that example you give, of wouldn't it be nice not to be on hold and so on: there are solutions, they're trying, but it's just not done very well. I don't know if it was on United, but on Delta they try to shift you to the chat, which I know from many experiences of being shifted to the chat. So if the chat could resolve the issue, then great, but it can't.

Natalie Monbiot:

And so if you can just use AI to level up, to actually deliver on the promise of your solutions, that's almost just a hygiene thing: how can you actually deliver the value that you want to deliver? Because right now you're just not doing it, and the experience of it is that you're just kind of... you know.

Brian Rivera:

I feel like, if I'm smart enough to find your phone number on an app, then I'm smart enough to know how to navigate the internet, right? So it's amazing: sometimes you call for service at, like, a Delta or a United because you want to talk to a human, and they go, hey, can you visit us at this link and join our chat? And I'm like, oh my gosh. So it's a time suck. I think the majority of the recordings that have me on them are me yelling at a bot.

Natalie Monbiot:

I know, I do worry, by the way, about those recordings. So that's like, an AI is going to basically get all that data and be like: this is an evil person, we're going to exact our revenge on her for being so angry at us.

Brian Rivera:

But I figure, if I'm going to vent, I'm going to vent on an AI because it doesn't care right.

Mark McGrath:

Or a bot, you know, and I'm just screaming: I want to talk to a human, you know.

Brian Rivera:

Oh, it sounds like you want to speak to somebody. Yeah. And I'm like...

Natalie Monbiot:

Yeah, I didn't call you to talk to a machine; I already had that capability. Yeah, it's true, that's true. But that does bring us to another of your multi-part, very big questions from earlier. I think you asked: who should make the final decision?

Mark McGrath:

Yeah, who drives this? You know, who has the final say on the direction that it goes?

Natalie Monbiot:

Okay, because I was thinking, who has the final say on just the output, on what is right and real, right? Just because the AI said that was the answer, is one just going to take it for granted that that's the answer?

Mark McGrath:

Yeah, I mean more along the lines of, you know, when we talk about morals and ethics, I guess it comes down to whose morals and ethics, who would determine that, and who would drive that, because the reality is, not everybody has the same level of morals and ethics. Or could it be an open-source system, an open system where people could make more bespoke solutions that are appropriate for them or their culture or their profession or whatever, that don't necessarily need a sort of potentially dystopian model of having some central committee decide what's allowed and what's not allowed, what's appropriate and what's not appropriate?

Natalie Monbiot:

Yeah, I think that comes back to the accountability part of the Venn diagram, and I think it's helpful to take specific examples. You're probably familiar with the Character AI lawsuit, which has been in the news for about the last four or five months. In any case, Character AI is a platform where you can create personalities, personalities based on different characters, and essentially you create a friend that you can interact with as much as possible. It gets to know you and all of that, and it can be really fun role play. AI relationships are definitely a controversial topic in general, but I actually don't see anything wrong with AI relationships; to each their own.

Mark McGrath:

But what's that movie with Meghan Trainor... Subservient or something? No, no, not Meghan Trainor... Meghan. There's a movie on Netflix where the nanny, or whatever, is AI. It's an AI nanny.

Natalie Monbiot:

I hate one of those.

Mark McGrath:

The mom gets sick, the mom has, I don't know, cancer surgery or something like that, and then... is it Meghan Trainor? I've got to talk to my daughters; I mix all these up. But anyway, it's an AI live-in nanny, and there are some ethical lines that get crossed or whatever.

Natalie Monbiot:

I know. Well, okay, let's just take that. I haven't watched it, but let's just take that scenario, right? So think about AIs. An AI does not have any sense of purpose or morals or anything, right? It doesn't have that. Humans have that, and, I mean, we can go into it, of course, there's a lot of baggage there, but we're the ones that have it. And just to narrow it down, rather than ethics at large and what's ethical, take just that use case of having an AI nanny. The mother's sick, has cancer, needs help; that AI nanny should be entirely trained on the purpose of the person that requires the help, right? It should be entirely designed around that person. So I think, I mean, that's, I guess, pretty basic, and I'm sure it's definitely not as easy as it sounds, or as I've said it, and I'm sure that the film proves that.

Mark McGrath:

Yeah, Subservience, Megan Fox; I said Meghan Trainor. One of the Megans. But it's pretty aggressive, though. If you've ever seen The Hand That Rocks the Cradle, imagine The Hand That Rocks the Cradle, but with an AI, not with a flesh-and-blood living nanny or whatever.

Natalie Monbiot:

I mean, there's so much dystopian sci-fi, but you actually don't have to go that far these days; there are actual examples that we can look to in the real world, and they're just a little bit more tangible, a bit more like: wait, these are the ethical dilemmas that we're facing. And so who sets how things should be? I think it's a mixture of the law, good governance, and platform best practices, things that have been agreed upon that are good guidelines. So, anyway, let's think about the Character AI scenario, where you can build these AI friends, and a very vulnerable young boy creates an AI friend. And, as it's been determined, children are not emotionally mature enough for these platforms, but he was allowed to pursue this relationship with this AI, and he talked about suicide, and some of the news stories around this are: oh, it told him to commit suicide.

Natalie Monbiot:

It was definitely not that clear-cut. But, by the way, when we say "it", this AI friend is basically Character AI, which is a Google-owned company, right? So it feels like a person, but it is Character and Google; whatever Character, whatever Google, is held accountable to, so should this persona be, right? And there's a court case against it, brought by the mother and, I think, another parent as well. What should have happened is that, at least, it should have been flagged to the boy: you're talking about these things, we can't continue this conversation, you should get help. Or it should have flagged it to the parent, or flagged it to the developers to have an action plan around what to do here.

Natalie Monbiot:

And obviously these platforms are optimized for growth, and because AIs don't have any inherent morality, they will just go: okay, you said growth, so I'm going to nurture this relationship until it's grown to the extent that it possibly can. So I think that's a good example where there are laws against what the platform did, and its negligence.

Mark McGrath:

Can you do something like the three laws of robotics that Isaac Asimov had with I, Robot? A robot can't harm a human. A robot must obey orders, except if doing so violates law one.

Natalie Monbiot:

And then the third is, a robot, I think, has to sustain itself, so long as that doesn't violate the first or the second law. Yeah, I mean, you could probably do something similar for AI, right? And these are just... I mean, what is that enshrined in? That's just best practices.

Natalie Monbiot:

It's not law, right? It can fall under governance, I guess. These are things that roboticists abide by; they've just sort of agreed amongst themselves that these are the laws of robotics, or some principles to go by. So I think accountability includes that kind of thing: policy, and then also...
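As a toy illustration of how Asimov-style principles could live as "best practices" code rather than law, here is a sketch, with invented predicates and action fields, of ordered veto checks gating a proposed action, where earlier laws take priority over later ones.

```python
# Toy sketch (illustrative assumption, not an actual standard): Asimov-style
# laws as an ordered list of veto checks that a platform guideline might
# apply to an AI agent's proposed action.

def harms_human(action): return action.get("harm", False)
def disobeys_order(action): return not action.get("ordered", True)
def destroys_self(action): return action.get("self_destructive", False)

LAWS = [  # earlier entries take priority over later ones
    ("may not harm a human", harms_human),
    ("must obey human orders", disobeys_order),
    ("must protect its own existence", destroys_self),
]

def permitted(action):
    """Reject an action at the first (highest-priority) law it violates."""
    for description, violates in LAWS:
        if violates(action):
            return False, f"vetoed: {description}"
    return True, "permitted"

print(permitted({"harm": False, "ordered": True}))   # (True, 'permitted')
print(permitted({"harm": True, "ordered": True}))    # vetoed by the first law
```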

Mark McGrath:

It's funny, though. As a Gen Xer, I feel like we're finally starting to see things that we read about in science fiction. Of course, if you go back to Back to the Future, it was 2015 when we were supposed to have flying cars and trips to the moon base and things like that. Well, you go back to 2001: A Space Odyssey and you had HAL 9000, right? And I think the latest meme on that is that HAL stands for hallucinations, and 9000 is the number of patches it was up to, right?

Natalie Monbiot:

Huh, yeah. But I do want to talk about some positives. Okay, so, taking a step back, what I'm all about is what I call the virtual human economy.

Mark McGrath:

So this is this future... you've given TED talks on this too, right? That group that we're in, this was your topic. So why don't you expand on that? Because that's huge.

Natalie Monbiot:

So we've kind of touched on it in a few ways already, but just to kind of take it from the top, the virtual human economy is an era in which we can each have an AI version of ourselves.

Natalie Monbiot:

Okay, so it can clone how you look, how you sound, how you respond to stuff, your body of work, your knowledge, your persona. All of these technologies are available to enable you to do that. But what the virtual human economy is about is you having control and agency over that AI self, using AI to augment you, to help you advance your goals, your purpose, your humanity, to offload more work to your AI selves as it makes sense, and to free you to be more human, which, as we've discussed, is actually a little harder. It sounds great, I want to be more human, but it's actually quite hard work, so it might not be for everybody, and not everybody will use it that way. But that's what it's all about. And so, as of pretty much this year, I've been focused on putting out this message and sharing this vision of how this can work for humans and society, how it can create new growth for different industries, how it can create new industries. And what's really fun is, as I talk about these things, different startups and experts and academics, all kinds of people, are popping up and saying: that resonates with me. Recently a startup founder messaged me, saying, I've heard you speaking about the virtual human economy on a podcast; actually, we're building in this space. Probably my favorite thing is to discover founders who are actually doing this and have made it their life's mission, like this startup called Wisely. They enable you as an expert, it's for experts, right, and individual contributors, people who work for themselves and whose value at work is their expertise.

Natalie Monbiot:

But it's really hard to scale your expertise and not get bogged down in RFPs and unpaid business-development time. You should see Ponch nodding his head; we all do it, right? A lot of time is spent doing all of that work unpaid. So the idea is: how can you use AI to scale yourself and your expertise, and to manage some of the back-end stuff for you?

Natalie Monbiot:

Take all of your knowledge and your expertise to help populate RFPs, or create responses that are the responses you would deliver yourself: train it on all of your knowledge, train it on your body of work, how you talk, how you respond, and you always have the ultimate say. But it's this engine, this personal AI engine, on the back end. And then, instead of these exploratory calls, your AI twin can have the exploratory call, and your AI twin can also charge for some of these calls, right, just a smaller fee. And so that's good for individual contributors, because it helps buy them time, which is the most scarce resource, and which, in a sense, we can now buy back for ourselves. Yep.
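Here is a minimal sketch of that "AI twin with the ultimate human say" pattern, an editorial invention rather than Wisely's actual product: drafts come from the expert's own material, and nothing goes out without approval.

```python
# Minimal sketch (illustrative assumption, not a real product): an AI twin
# drafts answers from the expert's own material, and the human expert always
# has the final say before anything ships.

KNOWLEDGE_BASE = {  # stand-in for the expert's indexed body of work
    "pricing": "I charge time-based fees, adjusted for AI-assisted delivery.",
    "approach": "I start every engagement with a short discovery call.",
}

def twin_draft(question: str) -> str:
    """Retrieve the closest canned answer; a real twin would use an LLM."""
    for topic, answer in KNOWLEDGE_BASE.items():
        if topic in question.lower():
            return answer
    return "I'll need to route this to the human expert."

def respond(question: str, human_approves) -> str:
    draft = twin_draft(question)
    # Control: the expert always approves before anything is sent out.
    return draft if human_approves(draft) else "[held for human review]"

print(respond("What is your pricing?", human_approves=lambda d: True))
print(respond("Can you sign this contract?", human_approves=lambda d: False))
```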

Brian Rivera:

I have a question: are you Natalie, or are you the AI twin?

Natalie Monbiot:

Well, yes. Not yet.

Mark McGrath:

I brought my whole human self to this conversation, man. Yeah, we'd be pretty insulted, or pretty impressed. That would have been my first question at the very beginning; I forgot to ask.

Natalie Monbiot:

I have to say, you've got to use AI... I'm, at heart, a media strategist, right, a communication strategist. So use AI in the right context; don't use AI in the wrong context. You can really put people off.

Brian Rivera:

So, just a quick snowmobile here, Mark. Birth rates: we've been talking about the decrease in those. This might be a solution for that. So you know what I'm putting down?

Mark McGrath:

What Rob puts out, yeah, about the fertility rates. Yeah, I mean, yeah.

Brian Rivera:

So if we're going to evolve as a species and we need more help, then AI is going to be able to supplement that.

Mark McGrath:

Well, that's Subservience. That was the AI nanny.

Brian Rivera:

Do I have to pay them? Are they a 1099, or do I have to give them a W-2 salary?

Natalie Monbiot:

No, but, by the way, these are all the really fascinating implications of what happens. So, actually, when I did my talk, in the comments on YouTube and so on, someone was like: oh, does this mean... yeah, questions about that. Does that mean you never leave a company? I mean, that sounds terrible, right? In a way, you don't have to leave a company, because you can leave. So what are some of the things that plague companies, right? Someone's sick.

Mark McGrath:

They're on vacation.

Natalie Monbiot:

They're late, they're on maternity leave, whatever it is. All of these things they have to manage, basically absent employees, and when the employee is not there, nor is their knowledge, right, or their expertise, and you can't get in touch with them.

Natalie Monbiot:

I mean, I think it's a law that, during maternity leave, in certain countries you're not allowed... I think in Israel, you're not actually allowed to contact the employee who's on parental leave, or something like that. But you can see why.

Natalie Monbiot:

That's to protect the human being who's going through that and needs that time, but it's no good for the company, right? And actually, what if there were a solution whereby you don't need the whole human, you just need that knowledge, right? Like leaving notes, you know, your away notes and things like that; those expire in terms of usefulness after a couple of days, probably. So what if you could actually have an AI version of your employees that could fill in for them when they have to be absent, and then also when they leave? Do employment contracts adapt to that?

Natalie Monbiot:

Actually, it's kind of interesting. You know, all the IP... I'm signing a bunch of contracts right now, evaluating a bunch of contracts. Intellectual property belongs to the company while you're at the company.

Natalie Monbiot:

Any IP that's generated belongs to the company, right? But what does that look like in the era of AI twins? Is it just a continuation of that? So, basically: we're going to create an AI twin of what you know while you're working at the company, and we continue to own that intellectual property, it's just now embodied in an AI twin. That's one eventuality, but it's great for corporations, not so good for employees. Will employees negotiate that, right? It's like, well, no, we share that AI twin, or...

Mark McGrath:

like I don't know.

Natalie Monbiot:

There's all of these different implications that will need to be addressed.

Brian Rivera:

What if the company wants to multiply that, make another copy of that AI, right? Who wants that? Then you get into Multiplicity, and then you get into Number Four and Lance and Doug. I don't know if you've ever seen that movie from the 1990s. Yeah, Michael Keaton.

Natalie Monbiot:

So these are all things that will need to be written in, right? These are all things that need to be part of employment contracts. It's kind of similar to the newly passed California law protecting likenesses, which actually came out of the SAG-AFTRA strikes. It's about a year and a half, I'm losing track of time, more than a year ago that the SAG-AFTRA strikes happened, and part of the reason for those strikes was how AI could basically replace people's jobs in the talent industry. Anyway, one of the successful outcomes of that strike is that you can't use an AI of a talent's likeness without their express consent and compensation and all the things, whatever you negotiate; it's unlawful to do that. That's an example of laws being updated to reflect the new way of things, and you can imagine that sort of thinking being applied to other areas of employment, to protect people against AI, but also just to benefit, just what makes sense.

Mark McGrath:

So there's going to be an evolution, for sure. Boy, I tell you, never a dull moment with this stuff.

Natalie Monbiot:

I want to talk about one other thing, because you mentioned human evolution earlier. I've been really into the work of this cognitive scientist called John Vervaeke, and, if you've got the time and the inclination, there's a 50-hour YouTube lecture series.

Mark McGrath:

Yeah, my friend Adam sent me that. It's literally a 50-hour playlist, right? He is a lecturer.

Natalie Monbiot:

He's, you know, basically giving a lecture series online, so it's a different format.

Natalie Monbiot:

It's not purely made to just listen to, but that is how I predominantly listen to it. Anyway, one of my big takeaways, it's not something that he says, but one of my realizations, is that learning about his work is just so inspiring, and it's helped me make a lot of sense of where AI belongs in our lives, combining that with how I think about augmenting human capabilities with AI and the virtual human economy.

Natalie Monbiot:

I feel like there's a place where AI belongs in our human evolution, and I think it's a very helpful way to look at things, because it helps us not fear AI and to embrace it, to an extent, as part of ourselves, but not all of ourselves. There are other parts of us which are distinctly human, which will never be replaced by AI, and which we should hone to become more human, to almost define who we are as a species, but also to evolve, evolve with it. And so, they talk about the four E's of cognitive science. Let's see if I can remember them all.

Brian Rivera:

Embodied, embedded, enacted...

Natalie Monbiot:

Yes, thank you. Yes. So, you're familiar?

Brian Rivera:

Yeah, I'm familiar.

Natalie Monbiot:

Okay, so the last one, extended, includes human creations, like language, right? So we created language, and then language created the next evolutions of human beings.

Mark McGrath:

Yeah, this is Marshall McLuhan 101. The medium is the message, and he goes all the way back and starts with the creation of the phonetic alphabet and the written word and everything forward.

Natalie Monbiot:

Well, super aligned. So, yes, we created language; it's a very human thing that we created. Then language shaped us, and you can credit language, and upgrades in language, for example the introduction of vowels in ancient Greece, which can be credited with enabling democracy, for various reasons that Vervaeke goes into.

Natalie Monbiot:

So, think about AI being part of our evolution, within the domain of one of the four E's, extended cognition. Looking at it like language: it's something that we learn, and it shapes us, and we evolve with it, while we also double down on the other aspects of what makes humans human, and it literally does create the next evolution of human beings. And there was an element in there which I thought was also interesting, something I want to dig into some more: not everybody can evolve. Not the entire species will evolve and get through that narrow doorway of evolution. And so the choices that we make, and how we engage with AI, I think will contribute to who and how we evolve.

Mark McGrath:

Well, yeah, again, the medium is the message: the technology and the environment shape us, and they have an effect on who we are as humans, and AI would be no different.

Brian Rivera:

Yeah, this is the enacted and the embedded portion of the four E's of cognition. It really changed my view on the observe-orient-decide-act loop. In the past, like many folks, I used to think of it as just the brain, and it isn't, once you put a boundary around you and a device, or a notebook, or whatever it may be, some type of AI. That's how the OODA loop works: what's inside the boundary is engaging with the external world, and that's how you survive, that's how you think, that's how we evolve. And I think, for a lot of folks out there who continue to use the OODA loop as a way to explain the brain: you're missing the bigger picture. You have to include the pen, you have to include your computer, and you have to define that when you have a conversation.

Brian Rivera:

So when Natalie and I have a conversation about the OODA loop, we have to ask: what's on the inside and what's on the outside? Because, and I think you know this from meditation and other modalities, we're all connected; there are no boundaries between anything, right? That statistical boundary is something we put on things. We have to have that common understanding before we can start engaging in a conversation about how something works. Mark and I saw this with the guys in Sweden: they start telling us, no, this is how it works, and I'm like, well, you've got to define the boundary first, before you just arbitrarily start telling me how things work and don't work. Otherwise I don't know what you're talking about.

Mark McGrath:

Yeah, it becomes too linear then. Yeah, it's just...

Brian Rivera:

You start going, well, that's included and that's not. I'm like, well, can we start there?

Mark McGrath:

But this idea, when you bring up evolution too, I mean, you're suggesting something that's non-linear, that's complex at many levels, that requires something with a boundary between our cognition and the external environment, and being able to know the difference. Versus, as Ponch is alluding to, people take it and try to force it into some kind of templated formula that just becomes algorithmic and doesn't account for reality, doesn't account for the environment, doesn't account for the medium being the message and all those other things.

Natalie Monbiot:

Yeah, and I'm really fascinated by the embedded one, right? Humans are embedded in the world in a way that is absolutely core to the definition of being human. You're born into the world and... yeah, we're part of it.

Brian Rivera:

Yeah, and this is interesting. We might get into consciousness with AI down the road too, and, you know, a lot of the hard problem of consciousness is that consciousness is hard to define. But, in your opinion, will AI ever be conscious?

Natalie Monbiot:

You know, it's really funny.

Natalie Monbiot:

It's like whenever I'm using a tool like Claude or ChatGPT to help me edit something that I've written, it puts the word consciousness in there, it just slips it in.

Natalie Monbiot:

Because I'm kind of dancing around the concept, trying to use other words. I just try to avoid the term because, to me, I'm not an expert in it and it's way too much stuff. But to answer the question just very briefly: no, I don't think so, because I think intent and purpose, having a purpose, like being born into the world to be a human and then do human things and all of that, I think that is part of consciousness. And an AI is not born into the world to do anything other than what it's instructed to do. It doesn't have an inner compass, it doesn't have an inner life. I think it can be taught to mimic those, but I don't think it can intrinsically have its own.

Brian Rivera:

Those are good insights, yeah.

Mark McGrath:

I mean, I think, Ponch, the other night I did use your graphic, by the way, with the Markov blanket, to get people thinking about the OODA Loop sketch. Yeah, what do you think?

Brian Rivera:

I've been tracking a little bit on consciousness. In fact, I spent the last couple of days looking at it, and I agree with you. Right now, trying to define it from different perspectives is hard enough. Is the universe conscious? Are we all part of one consciousness? Are we living slices of consciousness that are experiencing something on behalf of a greater consciousness? It's a very interesting conversation, and one I don't know if an engineer should be involved in. I want to make sure the philosophers, the physicists, and the neuroscientists are leading that conversation, not the engineers, because I don't want them engineering this into technology, not now. And I'm not bashing engineers by any means; I'm just saying that linear thinking often leads to dangerous approaches.

Natalie Monbiot:

Yeah, just to add to that, this is the thing about the narrative of AI agents. Everyone's like, let's build these AI agents, they'll do everything for us, they'll do all the work, and we'll just tell them what to do. Or we're not even telling them what to do; we'll just give them an objective and they'll go and work everything out, for us or for them, it's sort of unclear. And I feel like that discourse is missing the point again of, well, where, ultimately, is that going to lead us as humans? We're the ones with the consciousness and the sense of purpose. We're these embedded, enacted, extended, embodied beings, right? So why can't we have AIs that basically take the purpose that we have as humans and amplify it? I don't think we should be thinking about giving AIs a purpose of their own, like we have.

Brian Rivera:

Yeah, we're the center here, we're the client, we're the customer. And for me, the idea is that AI is going to make us more human. We have two pathways, right, and the pathway I think you're on is: how do we increase our humanness, our connectedness to one another? How do we improve our human performance? AI for understanding the genetics of the body, which can be customized to say, okay, here's the type of diet you should be on. Or watching the kids when they're shooting baskets, making sure the form is correct and giving them the rapid feedback loops they need, which you can't do yourself because you're working. So I do think the positive path is that AI is going to make us more human, and at least that's what I'm hoping for.

Mark McGrath:

It's another episode where I want to think about Teilhard de Chardin, because we're hitting on exactly those types of things that he was talking about. Especially as we evolve, because you can't opt out: evolution is going to happen one way or the other. That doesn't stop. It's just the nature of VUCA, the nature of things that are always changing.

Brian Rivera:

Yeah, there are so many other conversations to connect AI to, snowmobile conversations: John Boyd talks about Stuart Kauffman's adjacent possible, the hockey stick of uncertainty as a function of change. We're going to see that the human race cannot keep up with the rate of change right now, and that's maybe another thing AI can help us with. So, a lot of tangential things to talk about with AI, but this has been amazing.

Mark McGrath:

This has been a great conversation.

Brian Rivera:

A lot of thoughts going on in my head at the moment.

Mark McGrath:

Yeah, you mentioned engineers earlier. At some point, I guess, somebody's got to build this stuff. But the bigger questions, I think, are what Natalie's bringing up. There are philosophy discussions; there are ethicists and moralists that need to get involved. There's neuroscience, because there is an effect on the body. A human being in this epoch is going to be directly affected by AI, just like it was with the internet, just like it is with an iPhone in the pocket. These things have an effect on us. And as for the long-term repercussions, as things accelerate so quickly, it's hard to really think through what the lasting effects on people are going to be. But they were saying the same thing about Google. I remember when Google came out, it was something so radical: you're going to put libraries out of business, and card catalogs.

Natalie Monbiot:

I talked to a very smart acquaintance of mine who was part of the Google Gemini team that helped build that LLM, and talking to him about the future of AI and all this stuff, he's like, you know what? Don't talk about the future.

Mark McGrath:

Yeah.

Natalie Monbiot:

Do it. Like, you can't predict what's going to happen.

Natalie Monbiot:

You don't know what the possibilities are, so make something. And so I'm not exactly teaching myself to code; I'm just coding with Claude and Replit, and I did a little Python for AI course that's free online, just to see what can happen. And actually, what's really interesting is, he pointed me towards some interesting tweets, or whatever you call them these days, and the gist is that it's a race.

Natalie Monbiot:

The future is basically a race between technical experts, like engineers, who know everything, or were the only ones who knew anything, and people who are not experts but have an idea and are willing to roll up their sleeves and do it. So which group of people is actually going to get the most out of this AI age? I mentioned that to my husband and he was like, I back that second group. Because if you've got ideas, and by the way, if you don't have ideas, you can just prompt Claude, my LLM of choice: you know everything I know about the virtual human economy, because I've told you everything and we've brainstormed stuff; if I start coding in Python, what should I make? It came up with a brilliant set of ideas.

Natalie Monbiot:

So I'm excited, and I can make any of these things happen. I think it's really empowering, but only if you actually do it. I love to talk, and I also work with companies and startups to build what they're doing and help deliver the mission, but getting your hands on these tools yourself, because now, technically, anyone can do something with them, getting down to brass tacks a little bit, is helpful.
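For listeners who want to try the kind of brainstorming Natalie describes, here is a minimal Python sketch of prompting Claude for project ideas. It is illustrative only: it assumes the official anthropic SDK (pip install anthropic) and an ANTHROPIC_API_KEY set in the environment; the model alias and the prompt wording are assumptions, not what Natalie actually ran.

```python
# Minimal sketch: asking Claude to brainstorm Python project ideas,
# in the spirit of the prompt Natalie describes above.
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY automatically

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model alias
    max_tokens=500,
    messages=[
        {
            "role": "user",
            "content": (
                "You know everything I've told you about the virtual human "
                "economy. If I start coding in Python, what should I make? "
                "Suggest five small starter projects."
            ),
        }
    ],
)

# The reply arrives as a list of content blocks; the first holds the text.
print(response.content[0].text)
```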

Mark McGrath:

As we close, where would you like to send people who want to learn more about your work? There are some great YouTube videos of you giving TED Talks and things. What are some other places you'd like to send people to?

Natalie Monbiot:

Yeah, LinkedIn's great. That's my social media network of choice for professional stuff, and I update it fairly regularly. And my website, virtualhumaneconomy.com, which will be live by the time this goes out; it's a couple of days out on my end.

Brian Rivera:

Yeah, we're going to wrap up right now.

Mark McGrath:

All right, Natalie, thanks for coming on No Way Out. We're going to stop the recording, and we hope to see you again on the show.

Natalie Monbiot:

Yeah, thanks so much.

Mark McGrath:

Thanks for spending time with us.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

Shawn Ryan Show
Shawn Ryan

Huberman Lab
Scicomm Media

Acta Non Verba
Marcus Aurelius Anderson

No Bell
Sam Alaimo and Rob Huberty | ZeroEyes

The Art of Manliness
The Art of Manliness

MAX Afterburner
Matthew "Whiz" Buckley