Strategy Meets Reality Podcast
Traditional strategy is broken.
The world is complex, unpredictable, and constantly shifting—yet most strategy still relies on outdated assumptions of control, certainty, and linear plans.
Strategy Meets Reality is a podcast for leaders who know that theory alone doesn’t cut it.
Hosted by Mike Jones, organisational psychologist and systems thinker, this show features honest, unfiltered conversations with leaders, strategists, and practitioners who’ve had to live with the consequences of strategy.
We go beyond frameworks to explore what it really takes to make strategy work in the real world—where trade-offs are messy, power dynamics matter, and complexity won’t go away.
No jargon. No fluff. Just real insight into how strategy and execution actually happen.
🎧 New episodes every Tuesday. Subscribe and rethink your strategy.
From AI Hype To Competitive Advantage And Real Change | Dr Mark Bloomfield
AI is moving fast, but the real risk is moving fast in the wrong direction. We sit down with Dr Mark Bloomfield, founder of Turbulence and a fellow at Cambridge Judge Business School, to get past the hype and talk about what AI transformation looks like when strategy meets reality. If you have ever heard “we need an AI strategy” and felt the room skip the hard questions, you will recognise the boardroom tension we unpack: change management, competitive advantage, and the uncomfortable truth that with AI, there is no neat finish line.
We challenge the efficiency-first story that dominates so many generative AI rollouts. Yes, AI can cut cycle times, but we argue the bigger prize is capacity: headspace for better judgment, clearer choices, and the courage to reimagine work. Mark explains why AI is best treated as a capability, not a magic USB-C plug-in, and we explore practical uses like strategy simulation, horizon scanning, synthetic personas, and using voice agents to interrogate ideas rather than blindly accepting “synthesis”.
We also get honest about the darker edges: outsourcing judgment, metacognitive laziness, AI obesity, hidden operational costs, token economics, and the way incentives can trigger fear or even sabotage. From governance and accountability to how humans and agentic AI might coordinate work, we keep coming back to one theme: intentional use, with humans staying responsible for meaning, context, and decisions.
If you want a pragmatic, human-centred take on generative AI, organisational change, and strategic planning, hit play. Subscribe, share this with a colleague who is drowning in AI noise, and leave us a review with your answer: where will you draw the line on what you will not outsource?
Find Mark's work here: https://www.linkedin.com/in/drmarkbloomfield/
Enjoying the show?
Subscribe and leave a review on your favourite platform — it helps more people find the podcast.
🔗 Full episodes, show notes, and resources: https://www.lbiconsulting.com/strategymeetsreality-podcast
📺 Watch on YouTube → https://www.youtube.com/@StrategyMeetsReality
Connect with host Mike Jones → https://www.linkedin.com/in/mike-h-jones/
Strategy, Reality, And Blind Spots
Mike Jones: Most people do think of strategy that way.
Mark Bloomfield: Developing a new strategy.
Mike Jones: Strategic blind spots. When strategy meets reality and innovation in the strategy world.
Mark Bloomfield: Drive their strategic goals.
Mark’s Unusual Route Into AI
Mike Jones: And welcome back to the Strategy Meets Reality podcast. Today I am delighted to finally, and I'll say finally because I've been trying to drag him onto this podcast for a very long time, and Mark is probably the main reason I set up the podcast in the first place, invite Dr. Mark Bloomfield to the show. Welcome, Mark. It's been a pleasure to finally get you on.
Mark Bloomfield: Thanks, Mike. Really looking forward to this. It's going to be fun.
Mike Jones: Yeah, good. Just for our listeners, because I've had the joy of knowing you for many years, can you give a bit of background about yourself and some context on what you've been up to lately?
Mark Bloomfield: Yeah, sure. So I'll give a brief introduction. My career is slightly odd in the sense that I learned to fly when I was 16 and wanted to become a pilot; you can probably see that from the aeroplane paraphernalia behind me. I found out I was colourblind, so I couldn't do that. I did a PhD with Airbus in aerospace engineering, designing wings, and then I had a commercial career: three years in travel, ten years in media, really leveraging AI, analytics, and data through that period. I've now worked in the AI sphere for nearly 20 years; I know I don't look old enough, Mike, to have done it that long. I had a very, very fortunate corporate career, but in 2022 I left it to start my own venture called Turbulence, which is an AI transformation firm. I'm also extremely privileged to be a fellow at Cambridge Judge Business School, where I teach on AI innovation and transformation.
Mike Jones: Nice. Such a crazy story. I love the way that nothing's ever linear, never just "oh, I started into this." And what's interesting is that for naive people like me, when it comes to AI, you say you've been working with it for 20 years, and I'm like, God, I was sure it had only been around for a year or two.
Mark Bloomfield: It's interesting. If I'm right, in 2006 I kind of fell into it. Obviously, then it was machine learning, and since then we've seen advanced analytics and data science; the field itself has gone through slight rebrands and subsections. And obviously there are purists who'll say, well, only this is AI. But the field has been evolving for many years. I think I've been very fortunate in that I've been very much on the commercial side, so I've seen how it can be used, how it can create value and make money. I almost describe my path as trophies and scars. That might be the name of my book at some point: trophies and scars.
The New Boardroom Questions On AI
Mike Jones: Yeah, I think it'd be good. I'll definitely read it. So, with all that experience in the AI world, what's the challenge we're now seeing organizations face with AI?
Why “AI Strategy” Often Fails
Mark Bloomfield: Yeah, great first question. If I rewind 18 months, two years ago, a lot of the conversations I was having with boards, and I'm very lucky to advise boards, were actually about the technology. Fast forward two years, and I think two key questions are emerging. One is on the change management side that AI requires. And I have to share an almost inconvenient truth: there's no point B when it comes to AI change, because the capability is maturing all the time. Traditional change programmes, as you know well, Mike, often assume fixed stability, and that's not the case. We're not going from A to B. There is no B. So what does that require of us? The other question is around where AI adds, or can add, actual competitive advantage. Organizations are seeing that. But equally, as more and more of us use AI, does AI become the commodity? And therefore, what's the point of differentiation? Where is the differentiation? Is it in the human? Is it in our own data? So a lot of the conversations are circling around change management, competitive advantage, and almost how we differentiate ourselves.
Mike Jones: Yeah, I think that's really crucial, definitely when you get into the strategy world, because I often have to pause a lot of senior leaders or boards when they say, we need an AI strategy. As you know, I'm like, well, there's no such thing. But what for? Why? What is it about AI? What is it that you need? Where do you need it? How is it going to support you? How is it going to enable you to maintain viability? And that's often where the conversation grinds to a halt.
Mark Bloomfield: There are many versions of these memes, but you'll see the funny one where someone says to a group of CEOs: What do you want? AI. When do you want it? Now. Why do you want it? We don't know. And to some extent, I think that's why some people look at this and go, is it just the next fad? Most leaders are looking for growth, and are they looking at AI as the next vehicle for growth? But we're not talking about the same AI as in 2005 or 2010 or 2015. I see AI as far more than the technology; I see it as a capability. And for it to be useful, your strategy has to recognize the need for it. If not, then why? Equally, many business problems don't require AI to be solved; they can use automation, rules-based logic, and so on. As you know, there's an overhead that comes with deploying AI, or any novel practice, prematurely. But given that organizations are looking at scale and at competitive advantage, I think what AI uniquely gives now is permission to reimagine, though it still takes individual bravery to reimagine. But having a dedicated AI strategy that doesn't link coherently with what you're trying to achieve, with your intentions as a firm, I think is dangerous. And I've said on another podcast before, and I'll say it to you as a friend: if I had $10 for every CEO who said to me we want to be AI first or AI native, I'd be somewhere sunny right now. I have to say, sometimes I think people might be being AI naive more than AI native. And despite the extraordinary technology and capability we have, we need to have stronger, bolder, braver conversations about where we want to use this capability intentionally.
The Efficiency Trap And Vendor Narratives
Mike Jones: Yes. And I think that's what a lot are missing when you think about strategy. Strategy is saying, this is where we're going to find viability and advantage in the external environment, but that strategy places a demand on the organization. For you to do this, it demands that your organization either change in some way or, where you've already got existing capabilities, redirect them. And I think, and challenge me, please, on this, that's where AI comes in: we don't quite have that capability, and to enable us to do this we really need to focus that capability in these areas; if we can, it will give us the advantage we need, after thinking about whether it gives you agility somewhere, strength somewhere, whether it cuts cycle times or the pace of change. How is it helping, and where? I think that's crucial.
Mark Bloomfield: Yeah, I agree. I think the vendor sales message hasn't helped over the last 18 months, because, without sounding too trite, vendors are often selling the efficiency narrative with AI: this will make you more efficient. And we've seen companies that have fired people and then had to rehire them. The problem with the efficiency narrative, almost as a story, is that it forces you down one course of action: you start looking at every single thing through the lens of how do I make this more efficient? Yet we forget to step back and say, well, with AI, do we actually need to do this any more? How do we reimagine the process? How can AI help us coordinate work in ways we've never done before? The efficiency narrative also creates another narrative: if we're more efficient, we need fewer people, because we can automate tasks. And then we see the almost "job-mageddon" reaction that everyone's out of a job. That then feeds into the mental models of a CEO in terms of how they think about structuring the business. But we have to step back and ask, where does this narrative come from? It comes from someone trying to sell something. Because it's much harder to go into an organization and ask, how does this organization create value? It's easier to say, well, we can create value uniformly by being more efficient and reducing costs.
Mike Jones: Yeah.
Mark Bloomfield: But as you know, when you work with a business, organizations are far more nuanced than just making them more efficient.
Mike Jones: Yeah. I wrote about this before, about the regime of anticipation, where you've got this idea that this is the future, so you have to go there. And as you said, we don't sit back and actually question it. We don't question whether that future is where we want to go or need to go. And to your point, that's really crucial: stepping back and saying, actually, do we need this?
Mark Bloomfield: Yeah, you said something interesting there about the future. I know you and I are both passionate about this, but organizations have futures rather than a future. And it's interesting to ask: by going down this path, with the second and third order consequences, where does this take us? I had a call with a client almost 18 months ago now, who will obviously remain confidential, and this chief exec said to me, I honestly thought once we rolled out Copilot, we would all be transformed. I love the honesty and humility in that, but it does point to something around technological solutionism. AI is not just something you plug in; it's not like a USB-C cable where suddenly the lights switch on and everything's rosy. We have to sit back and critically ask, how is work going to get done? How are we going to create value in different ways? And to my best knowledge, Mike, no one's worked out how to get a group of humans to work together optimally, let alone AI plus humans. So I think we're about to enter a new phase of, I don't want to say organizational theory, but challenges and questions that force us to question assumptions about strategy and design that have existed for decades. I think we'll have to reinvent part of the theory to enable the practice.
Mike Jones: Yeah, I agree. And I do think we live in this utopia of complete efficiency. There's always that false god of efficiency: if we just do this one more thing, we're going to be this really efficient machine that cuts costs. I think that's something we need to give up, especially when we're talking about things like AI, because it's so novel. I know it's been around for a long time, but it is novel because what it can do is constantly developing. You need that element of inefficiency in there to be able to experiment and see what it can actually do, and how we can use it.
Mark Bloomfield: I agree. If I start by asking you the question, does technology make you more efficient as a person? Exactly. It's a hard question, whether you're an organization of one or an organization of 20,000. Now, don't get me wrong, technology can offer huge efficiencies and the ability to reimagine work, and I think it has done. But when I think about how technology works in my own life, I run my own business, and I'm always asking myself a different question: how do I give myself capacity? And of course you can say, well, isn't that the same thing as efficiency? I don't think it is. On the one hand, I have AI agents that I use as researchers to help me stay ahead of the latest research, case studies, and emerging challenges. But the way I engage with that is not simply delegating work to it and therefore being more efficient. I talk to this research through voice agents, with my own curiosity and challenge, to help me shape my thinking and think differently. That's why I think efficiency is generally the wrong question, because this has given me both capacity and capability, which in turn shapes better conversations with people and clients.
Mike Jones: Yes. I often think this idea of efficiency is a bit of a misnomer, because what are you really trying to create in a business? It is that capability and that capacity. As you know, I have the pleasure of working with boards and senior leaders, and a lot of them do not have the capacity to do some of the thinking, because they're getting drawn down into all this minutiae. And it's exactly the same in strategy conversations. People are rushing to get AI to create them a strategy. They're not doing it the way you're doing it, asking, actually, how can we have AI support us in this thinking? There are multiple ways you can do it. Instead there's a lazy approach people are taking: how can I just offload this to AI, because it's going to solve my problem?
AI For Simulation And De-Risking Ideas
Mark Bloomfield: Oh, exactly. If we take that one step further, and again, don't get me wrong, efficiency can add value, and that's definitely part of the spectrum of benefits AI can afford us. But when I think about generative AI specifically, I often say one of the biggest value unlocks it gives us is the ability to simulate. I know strategy simulation is something you and I are both fans of.
Mike Jones: Yeah.
Mark Bloomfield: But this ability to create synthetic personas, to test and de-risk our ideas, is probably one of its biggest benefits. When we think about strategy, there are obviously different definitions: is it a portfolio of intentions? Is it your deliberate resource allocation? Far be it from me to add another definition to the table. But whatever we're trying to do, can we use AI as a capability to test, simulate, and rehearse, to challenge and improve our thinking? In the same way, for innovation, the goal is to de-risk an idea as quickly and effectively as possible. I look at AI as an enabling capability to help us do this. So rather than having a dedicated AI strategy, which could be more about allocation of resources and projects, can AI become an enabling capability for the firm, helping it achieve what it wants to win at, or its intentions, or indeed its strategy?
Mike Jones: Oh yeah. I've been playing around with this and used it with clients recently, where we've had AI in the modelling process for strategy. We're modelling the external environment, and we've set it up so that the AI picks up the actors we've defined, the real actors, so we can start interrogating it: well, what strategy is this organization playing now? It will go and search, but it has to provide the evidence to say it's doing this because of that, and we've still got a human in the loop. We're using it that way: okay, if we were to do this, what do you think the likely response would be? It's never going to be perfect, but what it has done is start to challenge my clients' assumptions about what they believe to be true. It doesn't need to be perfect; it just needs enough to make them more curious and go, well, maybe we need to find out a bit more about that.
Mark Bloomfield: Yeah, I agree. As you know, I'm a fan of the OODA loop, as you are. When I think about this, even when doing horizon scanning, or building horizon-scanning AI tools for clients, you can see how you can have different agents designed to focus on societal trends or geopolitical trends, almost as separate devolved processes. But we also know how interrelated these trends and signals are. You still want a judgment piece from the human at some point. When it comes to orient, you can get AI's sense of orientation in terms of what it means for us as a firm or for me individually, but I still think this needs to be a human conversation. So I'm always keen, as you alluded to quite eloquently, on almost bringing AI to the table in those kinds of conversations.
Mike Jones: Yeah, it's crucial, and there needs to be a point to it as well. I'd love to know your thoughts on this, because I've seen people try to use AI, and AI can be great at throwing loads of stuff at you, because it can go off and find loads of information. But we're still essentially human; there's only so much we can absorb and make sense of in that orientation. A vast amount of data is not in itself going to be useful. So how we get it to attenuate to the human, so that it can be supportive in that way, is really crucial.
Voice Agents And Better Sensemaking
Mark Bloomfield: It's an interesting point. If we step back ever so slightly, I always get quite curious when I hear people say, I love AI because it helps me synthesize, and I see value in that. But synthesis contains in itself a set of assumptions. If a generative AI, whether it's Perplexity, Claude, or ChatGPT, summarizes a 10-page document into three bullet points, is that success? Actually, it might have removed the point that creates a spark in your own thinking. So one of the design challenges is how we use it in ways that make us more human, not less. The default mode is to go, right, I look impressive because I've given you 20 pages of information. To cut through that, almost 95% of my work with generative AI is in voice, because when I'm having those conversations it feels more human, but also I don't have the patience to sit there and listen to someone read me a 26-page document. I can bring the document to life through my questions and challenge, and at the end I get a more meaningful synthesis. So whether it's human in the loop or human on the loop, from the agentic point of view, I think we have to bring more of ourselves into that process to help make sense of things, particularly if we're going to use this for strategy conversations.
Mike Jones: Yeah. And what sort of tool would you recommend for that voice conversation?
Mark Bloomfield: Yeah, I'll tell you. Listeners, first of all, I'm not on commission, just to make sure there's no bias here. I use ElevenLabs, and I use it for many, many things, including modelling virtual board conversations, because I can have different voices. But you can also use voice mode on ChatGPT, the Claude app, or Perplexity. I always lovingly joke, Mike, that the first time you talk to an AI agent, paradoxically, you're the one who sounds robotic. So I've developed quite a comfort in talking to these AI agents. The first one I built when I left my corporate role was a non-executive director, there just to disagree with me and challenge me. For me, I'm always looking to this to help challenge my thinking, but I want to own my thinking. The other part of this is not just using AI prematurely, but the outsourcing of our judgment to these tools. Imagine making strategic decisions and going, okay, this is too complex, ChatGPT, what would you do? That scares me to death. I don't mind it being a data point in the decision, but this whole idea of where it takes us as humans if we continually outsource our judgment worries me. I think the technical phrase is metacognitive laziness. Another phrase I've heard is AI obesity. We don't want to get fat on this technology. Like any capability, it is a muscle, but that muscle requires intentional deployment rather than universal deployment everywhere.
The Risk Of Outsourcing Judgement
Mike Jones: Yes, and I love the term AI obesity. I've already got sweets, so I don't need another thing to make me fat. You're right, because strategy development is one thing: you can tell AI to develop this strategy, and I'll challenge a lot around the orientation, actually, what it's based on, and whether it's just the orthodoxy wrapped up as something else. But the crucial point is that the reason we do strategy development in a certain way is the cognitive, affective, and behavioural, the human loop, that's involved in it, and you can get AI to support that. But when strategy hits reality, it will diverge; it will go off and do wonderful things, because we can't always predict how it's going to land, and it will drift from what we expected. That's why the human in the loop is so important, because otherwise you don't have the cognitive framework or scaffolding to understand where those divergences will happen and what you can do to bring things back. I think that's where the laziness will get people in trouble.
Mark Bloomfield: No, I echo that completely. I also describe AI, to some extent, as a bit of a contact sport. Even deploying an AI model or an AI agent in practice, which is almost a parallel to your strategy story: we can design AI agents on our computers, and they look good, they test well, they feel good. Then we deploy them and they do something rogue. To give you an interesting story, and I think this is now in the press from somebody else too: someone I work with had a whole year's budget for their AI tokens, i.e., how much AI spend they have when it's computing and running different tasks. But they'd been deploying agents prematurely to take over the work, and they burned through their entire AI budget in two months. Because what happens in the wild, to your point, is quite different. Now, that's not a strategy conversation, but the difference between theory and practice is, in theory, nothing, after all. So we only know once we do it. That was a shock, but equally, it's not a surprise, in the sense that, coming back to this technological solutionism and efficiency, we've gone from having a single large language model, a frontier model like ChatGPT, that we use, to deploying it to take action. Agentic AI is AI that acts, that can do things. And that is going to have a high cost; equally, when it fails and retries, there are all these hidden costs. We think AI is a silver bullet, and it has impact, it has benefit, but I think we have to see it within a far wider ecosystem of how it operates to truly understand the benefit.
Mike Jones: And that's something quite interesting, because I'm thinking about the future challenge. We're quite used to these subscription models: I get a ChatGPT subscription and I can just go away and type or talk to it as much as I want. And I wonder what would happen if those subscription models were to change and it became purely token-based, where you are accountable for every token. Just for people who don't know, because I only noticed recently: every time you interact with it, that's tokens, and those tokens cost money. Most subscribers don't see it because it's all wrapped up in the fee. What's something like ChatGPT now for a normal person? I probably pay about $20 a month. And you'd think people probably use far more than that $20 worth. So I wonder if it will ever go purely token-based, and how that would really challenge the idea of efficiency, capability, and the use of AI in organizations.
Mark Bloomfield: Yeah. I think the economics of AI has to change. There are some large AI vendors who, without naming them, are struggling, with a cost base much higher than their revenue base, which Business 101 says might be a challenge at some point in the future. But the economics has to change. I'm not convinced, by the way, that the future is going to be token-based pricing, but that's a whole other podcast. I do think that, once again, if we bring this back to an organizational or strategic perspective, if I think of AI as this capability and this resource that can do work and coordinate work, we have to have a much deeper cost mindset: oversight costs, governance costs, operational costs, rather than just the upfront subscription. Because it can feel like an all-you-can-eat buffet, but at some point, someone at the buffet will say, sir, I think you may have had enough for today.
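The back-of-the-envelope token economics discussed above can be sketched in a few lines. This is a minimal illustration, not vendor pricing: the $20 subscription figure comes from the conversation, while the per-million-token rate and daily usage volumes are purely assumed for the example.

```python
# Illustrative comparison of a flat subscription vs pure per-token billing.
# The per-million-token price and usage volumes below are assumptions,
# not actual vendor rates.

def monthly_token_cost(tokens_per_day: int, price_per_million: float, days: int = 30) -> float:
    """Cost of a month of usage if every token were billed directly."""
    return tokens_per_day * days * price_per_million / 1_000_000

subscription = 20.00  # flat monthly fee, as mentioned in the episode

# Hypothetical heavy user: 500k tokens/day at an assumed $10 per million tokens
heavy_use = monthly_token_cost(500_000, 10.00)   # -> 150.0

# Hypothetical light user: 20k tokens/day at the same assumed rate
light_use = monthly_token_cost(20_000, 10.00)    # -> 6.0

print(f"heavy user: ${heavy_use:.2f}/month vs ${subscription:.2f} subscription")
print(f"light user: ${light_use:.2f}/month vs ${subscription:.2f} subscription")
```

Under these assumed numbers the heavy user would cost the vendor several times the subscription price, which is the gap between cost base and revenue base that the conversation points at.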
Mike Jones: Yeah. And I heard, I'm sure it was at Meta, and I suppose this is again the unintended consequences of human interaction, but I'm sure it was at Meta that they had a leaderboard for who could expend the most tokens. Someone was on there for a ridiculous number of tokens, and it became a competition: they were just wasting tokens to see who could use the most.
Hidden Costs, Tokens, And Perverse Incentives
Mark Bloomfield: And worse, they were being bonused and incentivized to do so. To give you another perspective on this, not from a tokens angle but from a skills perspective: there's an organization I've been invited to work with, a relatively large firm, where people across the organization are being incentivized to document what they do, not as a job description but as skills, basically processes that can be documented in a file. Now, in an AI agent world these are known as agent skills: repeatable processes that an agent can carry out on your behalf. Just think about that from a change perspective. I might have a group of people who are slightly fearful and anxious about the future because of the narrative that exists around AI, and yet people are being bonused and asked to document their skills. You can see how this is going to create a bit of an allergic response inside the organization, because you're basically writing yourself down. I'm going to write a post shortly saying, you know, you are more than just a skills file in terms of what we do at work. But even when we're trying to execute our strategy, even simple things like incentives are going to create, let's call them, odd responses.
Mike Jones: Yes.
Mike Jones: But I think this brings us on to a larger issue I've seen. I've been looking back at when strategy lost its way, going through the whole history of organizations, and you see a really clear path from the industrial age to here, where we separated the thinking and the doing. We created that dichotomy where people are just there to execute processes. And that's where I think the dominant worldview sits: we think we can just replace a person, but we're not really asking which tasks don't take much cognitive capacity, the mundane tasks, and how we can best handle those. Because I want that human to do what AI may not really be able to do: truly understand the contextual things. I'm thinking about how to free up capacity so I can actually unlock the capability of that human, rather than have them sit there wasting a vast amount of time on mundane processes that just take too long.
Mark Bloomfield: Exactly, and this is the capacity-building point. Without getting too abstract, a job is clearly more than just tasks; there are cognitive, judgment-based, and relational aspects to it. So what we want to be doing is creating capacity so that humans can do the tasks that require more judgment. Now, some people might say that in time AI will get more advanced and we won't have to do that either. Let's see. Of course, today's AI is the worst we're ever going to have again, but let's not fall into the drumbeat of doom there. It can and should be a capacity-creating exercise. To give an example, one of the firms I work with has an initiative where, alongside their short-term activities and projects, people spend one day a week trying to reimagine their work. It's almost like Google's 20% time: one day a week to reimagine the work such that it creates one day of capacity. Now, the job is not to go to a four-day week; the job is to ask what we can do with the capacity we've given ourselves. So this initial efficiency becomes a lead-in to creating capacity for the deeper, more cognitive work. I think that's a pretty good model to follow.
Mike Jones: It is, because one of the key tensions of a viable organization is status quo versus change, and the predominant pull is usually towards the status quo. A lot of the time people simply can't absorb what we call the variety. There's too much going on, they're trying to absorb it in the day-to-day, and they don't have the capacity to think about reimagining: what changes are coming, how can we implement them, how can we adapt? So freeing up capacity to handle that tension can only serve to maintain the viability of the organization.
Mark Bloomfield: Exactly. Even if you're a firm of one, like me or yourself, we have to look at ourselves as viable systems, viable organizations, and ask what maintains our viability. And I think it's quite interesting to look at AI from that perspective. Are there things we do that we might not want AI to do? For instance, I don't like writing LinkedIn posts with AI, because I think it challenges authenticity, so there I'm being very selective. But for repeatable processes I don't enjoy, like VAT returns, as long as there's oversight and governance, that can be AI all day long. Ultimately, we can start by looking at where there's friction and asking how we might solve it, being open to the idea of AI helping. Because what we don't want is the AI hammer looking for a nail, trying to apply AI to everything.
Mike Jones: Yeah. A perfect example with this show: AI is a great help, because instead of me having to go in and meticulously cut out every little gap and pause, AI does that for me. I just need to watch it back and go, yeah, that looks good, happy with that.
Mark Bloomfield: So perhaps if I'd had an AI avatar version of myself earlier, you would have got me on the show earlier as well. Although it might only have been a good 80% of me.
Mike Jones: But there's so much going on with AI. And my point is that AI, as we've alluded to, is not a strategy in itself; it should be understood by how it can support one. And I like this idea of embracing intentional inefficiency so that you can actually explore what AI can do for your organization. At the very edges of the organization you have to have that intentional inefficiency. Otherwise, my fear is that up above they're going, we need AI, we need AI, do AI, but the people who would actually benefit from using AI never see that benefit, because they're too busy just trying to deliver what they need to deliver today. They have no spare capacity to adopt or experiment, precisely because you're trying to be efficient. And I think experimentation is going to be really crucial for organizations moving forward.
Mark Bloomfield: I agree. And building on that point, when there's this cascading directive, and of course it depends on the culture of the firm, you already see stories of AI sabotage, where firms say AI is now mandatory, this is what you've got to do. It's not just the internal environment, where people are trying to manage the variety or don't have the capacity. They're also hearing in the press that the robots are out to get us, that jobs are going. So on the change piece, the autoimmune response is no. I've seen evidence of this in certain organizations: Slack channels where people are deliberately working against the AI narrative. And it's not that the change hasn't been done correctly. Coming almost full circle in this conversation, it's that AI isn't recognized as a capability. We have to help people make sense of what it means in the context of the firm and its direction and intentions.
Mike Jones: Yes. And that's no different to what we've always suggested when it comes to change: how do we enable people to interpret and understand what this means for them, and make space, especially, to hear the dissenting voices, the concerns, the worries? Because if you just force it, like we do with most changes, saying you will do this, this is the new thing coming in, without providing the training and support to adopt it, you are inevitably going to get resistance, and that's not going to be helpful.
Futures, Identity, And Intentional Pauses
Mark Bloomfield: No. A lot of the conversations I have with board members now are about helping them make sense, because we're going through quite a profound transition in humanity with this technology and the implications it could have on society, both positive and negative. I sometimes mention this in courses at Cambridge. I often describe where we are as a bit like Sainsbury's, which for your listeners is a grocery store in the UK whose food line is called Taste the Difference. We haven't got to the point where it's taste the positive difference or taste the negative difference; all we can do right now is taste the difference. So we're trying to help people make sense of this shift. And I do worry that we will need more human skills, particularly around empathy, to help people navigate the transition we're going through. We're surrounded by noise about what the future could be, while trying to internalize what it could mean for each of us as individuals. And I think part of the reaction around the future of work and jobs is that people's identities are being questioned. When I left my large corporate role, it took me over a year to stop saying, hello, I'm Mark from... Work is wrapped up in who we are. So there's a hidden part of this debate: we're challenging the identity of ourselves, and maybe the identity of the firm.
Mike Jones: Yes. And that comes back to what I was talking about earlier, the regime of anticipation. We anticipate these futures, and because we're being told that's what the future is, we're not really challenging it. Are we prematurely closing down and accepting a future we haven't really challenged ourselves on, or are we opening up futures, as we should, maintaining freedom of action within them, and really asking ourselves: if we go this way, would we like what we become?
Mark Bloomfield: Yeah, exactly. Do we like the future version of ourselves? I think that's a great question. There's a famous quote in computer science that premature optimization is the root of all evil. That's lent from a computer science point of view, so I'm going to be blasphemous and apply it to a different context, because I think equally the premature application of AI could be the root of all evil. It might shape us into a future version of ourselves that we don't want to be. Do we create an AI-obese version of ourselves? Are we a firm or a team or a unit that's lost judgment as our external situation and context changes? And then I think it takes a much braver position to ask: are we using AI in ways that make us more human, not less?
Mike Jones: Yeah. I think there are a lot of opportunities there, but we need to be really curious. And I say curious knowing it's an almost trite way of putting it, but we do need to be curious and challenge where this is taking us. What does this mean for us? Is this actually where we should be going? How could we be utilizing it? There are a lot of questions there, but I do worry that people will just be swept away, unchallenged, into a world I don't think they really understood.
Mark Bloomfield: Yeah. Jerry Seinfeld, the self-proclaimed AI expert, has a quote: as humans, we're smart enough to invent AI, dumb enough to need it, and still so stupid we're not sure whether we did the right thing. Philosophically, I think that balances where we are, because we can see the extraordinary developments in mathematics and computer science that got us to this place. When he says we're dumb enough to need it, I interpret that slightly differently: if all we're doing with the generative AI tools we have is summarizing emails and white papers, I'm not sure that's the best use of the capacity and capability we now have. As humans, we have to be bold and almost reinvent and reimagine what's possible. But his last point, I think, is the most provoking: we're not sure whether we did the right thing. We don't know how this story ends, for jobs, the economy, individuals, cognition. So I think the insurance policy is the pause, a continuous slowdown. Not stopping AI altogether, but being very clear and meaningful about what you're doing, understanding where it's taking you, the second- and third-order effects, and whether you're deliberately drifting and emerging into something that is representative of what you want to achieve.
What Leaders Really Ask At Cambridge
Mike Jones: Yeah, which brings us back to that meme you hear: we just want AI, and we want it now. Actually, a pause, I think, is really useful. Now, you've been teaching at Cambridge for a while on AI and innovation, so you see multiple leaders from various backgrounds and organizations. What are the common concerns, challenges, even the optimistic views, among those leaders?
Mark Bloomfield: Yeah, like anything in life, it's a spectrum. Some of the leaders I work with there see this as a massive opportunity to completely reinvent the firm: done correctly, the opportunities are unparalleled. Very few come to it with the lens of efficiency. Very few.
Mike Jones: That's good.
Mark Bloomfield: Very few are saying, right, we can do this now, therefore we can get rid of 10,000 people. Maybe I just don't have those conversations, but I don't see it. I think the common denominator is the speed at which it's moving, and therefore, how do we make sense of this? Whether that creates fear, and an almost impulsive reaction to AI, where I get people to start the conversation is: first of all, AI is a capability, as we discussed; but where could it be a source of advantage for you? You see it in lots of strategy papers now. What's your strategy for next year? AI. Obviously that triggers me slightly, and I start asking questions. Because at some point, if most organizations are using the same frontier models, arguably a commodity, how do you differentiate yourself? Obviously with your own data, but also with your people. So the leaders coming to Cambridge are, to some extent, asking how their humans can be amplified into a point of differentiation, supported by an embedded AI capability. And I think that is the right frame. The joy of Cambridge is also that we see organizations at different points on the maturity curve. Some are curious, just getting started with tooling. Some are confident, applying it to small projects to build a flywheel of activity. And some are starting to build multi-agent systems across the organization. So there's a spectrum of capability, a spectrum of application across industries, and a spectrum of countries where this is being deployed. But the big thing emerging across all of it is governance.
Mike Jones: Yes.
Mark Bloomfield: And actually making sure we understand the risks, the costs, the challenges. Equally, you see some regulated industries and companies who just go, nope. It's almost the six-year-old's syndrome with AI: I don't see it, so it doesn't see me. But actually, we want to create governance as an enabler, not just for AI but for innovation, experimentation, growth, challenge, all these things. So to some extent, what people come to Cambridge with, and what they leave with, is that AI is shining a light on gaps and challenges that have always existed; because of the speed the technology is moving at, they're trying to fix and address these whilst making sense of what's coming next.
Mike Jones: Yeah. And that raises my broader challenge with governance. AI has come on really quickly, and we always say things are moving a lot faster generally. So I really want to challenge the governance world: is the current worldview we use for governance fit for what we're moving into?
Mark Bloomfield: It depends on the type of AI work you're doing. If I'm deploying a machine learning model that forecasts demand, it depends what you do with it. If it goes into a live system and controls who gets access to certain sources of inventory, then you have to understand the risk assessment that goes into that. But now AI can execute and do things, and that requires a completely different set of governance requirements than we've ever had before: around transparency, explainability, accountability. One of the interesting signals I watch is insurance startups starting to appear to underwrite AI agent decisions. Now, goodness knows how we price that, but it changes the ecosystem, because it gives organizations the confidence to use AI to execute, knowing there's a boundary in the system. Clearly that's not a panacea, and things will come out of this that aren't quite right, but the ecosystem is moving so fast that it does require rethinking. And when I say governance, I don't mean regulation; they're arguably under the same umbrella, but they're different terms and different questions. We do have to rethink governance, yes.
Mike Jones: Yeah, and I was thinking more about organizational governance than regulation, though from what I've seen of the ecosystem, regulation is struggling to keep up with the amount of change happening. That's going to be quite interesting going forward. But on organizational governance: how do you make sure, especially when you get to agentic AI that makes decisions and acts, that we're able to enable that? I'm laughing because I'm thinking, are we going to start inviting all the agents to meetings? It might be easier to get them to turn up than the humans. And most governance now is so poor. It's why we prevent our own people from making decisions and acting, because of the governance craziness we've got. So how are we going to manage that with agents?
Humans And Agents Coordinating Work
Mark Bloomfield: And obviously the current version of generative AI specifically is very probabilistic. It's not perfect, it hallucinates, things go wrong, there are biases, and so on. But so do humans; humans are probabilistic too. So your governance point is absolutely right in terms of what we stop people doing now versus AI. What fascinates me more is how humans and agents are going to work together. I'm not yet convinced we have the organizational theory, or potentially the operating models, that tell us how this is going to work. It's going to be hard. It's almost akin to working with your team, your offshore team, your partners, and coordinating the work. One of my favourite authors is Sangeet Paul Choudary, and he writes about this in Reshuffle: the real benefit could be in the coordination of the work.
Mike Jones: Yes. I've just picked up his book. I haven't read it yet, but I'm interested in the coordination angle. I can imagine it's really important, because when you start thinking about how agents and humans work better together, the cynic in me looks at what happens now in organizations and imagines "working better together" sessions with your AI agents, throwing a tennis ball at the screen and doing trust falls with your agent. Rather than that, I keep saying to people: if you really want these things to function, you need to look at the key things, coordination, the value exchange, and the handoff. What are you handing over? Get those things in place, and then you can start to work harmoniously together, hopefully.
Mark Bloomfield: Yeah, and it's bounded by an awareness and literacy about what these things can actually do. I had a conversation recently with someone saying agents don't work, you know, Alexa's useless. Well, in my own home Alexa is a glorified kitchen timer. Alexa is not an AI agent in that sense; it's a voice assistant with a very limited scope of what it can do. So your points are spot on, but as a society, as organizations, we still have to invest in this literacy so people can understand what they're presented with. Otherwise we make sense of things by putting them in buckets: agentic AI, oh, that's Siri, that's Alexa. And I worry that could happen at greater scale than we think.
Mike Jones: Yeah. And there's so much there. Me personally, I've downloaded Claude's desktop app and I'm playing around with it. There's loads of literature out there to read, but I'm very low on conscientiousness, so I won't sit and read it all; I prefer to get hands-on, see for myself, and make mistakes. There is a lot to learn, and a lot for leaders to learn, to your point, about what these things can do. But more importantly, not just going by what they can do because someone said so, but thinking about our own context: what can this do for us? I think that's really important.
Mark Bloomfield: Correct. And there are other shifts underway in AI. Is the future the off-the-shelf frontier models, or open-source models that we fine-tune and post-train for our own context? So we think about our own data, our own tacit knowledge, as proprietary information: how do we want to use it? The common word we keep coming back to in this podcast is intentional. But to be intentional requires us to pause, and it requires a sense of coherence and understanding among those involved in making the decision. For firms on this journey in particular, that's important. Otherwise you get caught in the tsunami of "Claude released this last night, this new piece of fun", and everyone goes, rah. A week or two ago, when Claude Design came out, suddenly it was the end of all design firms. I saw the term AI fatigue last week, and if I log into LinkedIn, it's caught between this hype, hope, and hysteria about what AI can do. I wrote recently that there's a real danger of what I'd call mirage thinking starting to appear.
Mike Jones: Yeah, and I think it's really going to bring the perceptual complexity of humans to the forefront. Before we even start these conversations, what I always do with executive teams is try to surface: what is your perspective on AI? Because we bring so much baggage into this. Are we actually clear on people's different perspectives and what they mean before we even start the conversation?
Mark Bloomfield: Yeah, I agree. And I think also: who are we learning from? There's a danger with AI that we become quite insular, and as humans we're limited only by our own imagination. When I look at the speed at which AI is moving, and again it's a flippant comment, but it's true that today's AI is the worst we're ever going to have again, I ask: who are we learning from? None of us have all the answers; I don't think anyone can purely stay on top of AI. I always say be slightly skeptical of people who call themselves AI experts, notably if they've rebranded themselves on LinkedIn in the last two months. But what does AI mean for us, now and in the future, and who are we learning from? I'm not trying to move into the world of collective intelligence here, but it's about who we trust to give a perspective on the challenges we uniquely have as a firm, and once again, where we build AI as a unique capability that can add sources of value.
Closing Takeaways And Share Request
Mike Jones: Yeah. There are plenty of snake-oil people out there, and to your point about the LinkedIn rebranding, you see them all the time: the whole history rewritten around the latest perceived fad. So I think you're right, we need to make sure we're getting the right people in to help us, not just forcing something on us. But it's been great to have you on. Before we go, is there anything from this episode that you'd like the listeners to take away?
Mark Bloomfield: Yes. For me, with AI there's no point B when it comes to the change. If we start from that place, then what we're trying to create with people, all the time, is this anticipation, almost a handling of the uncertainty of what's coming next. But I do believe AI is going to be a profound human transformation and transition, giving us all permission to reimagine what we do. Not from some utopia of universal basic income or only working two days a week, but a license to reimagine what we want to do, and the future we wish to have.
Mike Jones: Perfect. And that's why I wanted you on here: you are one of the most pragmatic people when it comes to the AI conversation. For the neurotics out there looking at AI and thinking the whole thing is burning down around them, I suggest connecting with Mark and reading his work, because you'll see there is hope, it's not all doom. That's the balance you bring to the conversation, and I really appreciate it.
Mark Bloomfield: Thank you, and a huge thank you once again for the invitation. Sorry it's taken us so long to get here, but I've been delighted to have this conversation in detail.
Mike Jones: And if you were as delighted as I was to finally get Mark here, and enjoyed the conversation, please share it with your network and see what value they take from it. There's a lot of noise out there about AI, so please share this and connect with Mark; I'll put his details in the show notes. Once again, thank you so much for listening, and thank you, Mark, for joining us in this conversation. Thoroughly enjoyed it, and hope to see you again.
Mark Bloomfield: Yeah, thanks, Mike. Bye.