Strategy Meets Reality Podcast

Futures, AI, and the Illusion of Certainty: Rethinking Strategy with Matt Mullan

Mike Jones Season 1 Episode 9

What if AI isn’t the answer—but a way to ask better questions?

In this episode of Strategy Meets Reality, Mike Jones is joined by Matt Mullan—strategist, technologist, and co-creator of the H-Scan11 newsletter—to explore how leaders can engage with the future in a more intelligent, imaginative, and grounded way.

This isn’t another hype piece about AI. It’s a challenge to the illusion of certainty that dominates leadership today. From human-machine teaming to building AI literacy, Matt lays out a compelling case for why the future isn’t something to predict—it’s something we actively shape.

Drawing on his work at the intersection of AI and strategic foresight, Matt shares why decision-making is stuck in the past, how to spot the weak signals of disruption, and why emotional engagement—not data alone—is critical for leadership.

🔍 In this episode:

  • Futures thinking vs. strategic foresight—what’s the difference?
  • Why AI is a force multiplier, not a crystal ball
  • Building futures literacy across teams—not just execs
  • The danger of chasing perceived certainty
  • How constraints fuel innovation
  • Why strategy needs imagination—not just information

🎧 Keywords: AI, foresight, futures thinking, imagination, uncertainty, leadership, decision-making, scenario planning, strategic foresight, organisational capacity

📬 Explore Matt and David Sloly's thinking in H-Scan11: https://hscan11.substack.com/

Send Mike a Message

👂 Enjoying the show?
Subscribe and leave a review on your favourite platform — it helps more people find the podcast.

🔗 Full episodes, show notes, and resources: https://www.lbiconsulting.com/strategymeetsreality-podcast

📺 Watch on YouTube → https://www.youtube.com/@StrategyMeetsReality
🎧 Listen on Spotify, Apple Podcasts, and Buzzsprout

💬 Connect with host Mike Jones → https://www.linkedin.com/in/mike-h-jones/

Matt (00:00)
one of the valuable things about stepping into the unknown and exploring and embracing the uncertainty is that it can change your perception of knowledge. Change your perception of what you know and what you don't know.

Leaders aren't immune to fear. They aren't immune to insecurity around their jobs. They aren't immune to the wider implications of not having a job or a business failing.

what AI is really good at is looking for weak signals, for example, those tiny ambiguous signs of change, those pockets of the future that are here now, as William Gibson put it, okay, they exist now, but could be the clues to future disruption.

Mike Jones (00:33)
Yeah.

Welcome back to the Strategy Meets Reality podcast. I'm your host, Mike Jones, and today I'm joined by Matt Mullan, who crosses, or merges, the line between futures and AI. It's great to have you on the show today, Matt.

Matt (01:22)
Hi Mike, great to be here, thanks for having me.

Mike Jones (01:24)
Yeah, pleasure. Just for the listeners, give us a bit of context about yourself and what you've been up to lately.

Matt (01:29)
This is like one of those questions you get from your family: what do you do for a living? Really, really hard to answer. A bit of a generalist would be my honest answer. I've got quite a potted history: 10 years in the army, studied physics at university, 13 years at Thales mixed between engineering and design leadership. And for the last two and a half, maybe nearly three years, I've been at BJSS and their creative innovation front end, Spark, really trying to grow, bottom-up,

a capability around futures and foresight. That's been quite a journey. It's really a mix between developing your own knowledge, your futures literacy, but also deploying a set of entrepreneurial mindsets and skills to do that bottom-up.

Mike Jones (02:08)
Hmm. I think futures are such a crucial part. Definitely when we're talking about strategy meeting reality and how important the futures, the external environment, are. So given your recent explorations in AI and futures, how can leaders leverage AI to support futures thinking?

Matt (02:28)
Wow, that's a big question. I think to do it justice, we've really got to step back. And there are two aspects to that: why are futures and foresight, two different things, important to leaders, but also why is AI important? And how do you cut through the noise and the hype and develop some real literacy around what AI is? So I think that's the first step. If we take the first part of the question,

Mike Jones (02:40)
Hmm.

Matt (02:54)
the difference between futures thinking and foresight: futures thinking really reflects the way we think about the future cognitively, the way we imagine the possibilities, how we can use creative techniques to bring them to life, the cultural background, maybe some of the ontological and epistemological assumptions that underpin it; there's a strong philosophical element. And strategic foresight really talks about the application of that kind of thinking to strategy, to decision making, to

Mike Jones (03:05)
Mm.

Matt (03:21)
practical utility in organizations. That's my interpretation.

Mike Jones (03:24)
That's a good interpretation.

Matt (03:26)
So when you look at the world of futures and futures thinking, it's broad. There are many, many schools of thought, many applications. Where I'm focused is very much on how we understand the futures-thinking part, but also how we get practical business utility from it, in order to help businesses survive and thrive.

So there's that aspect: how do you take businesses on a journey? Business models seek data-driven approaches, and data only speaks to the past. So how do you win the argument, against these traditional, MBA-driven, logical, rational minds, that imagination and thinking about the future has strategic utility? Then you go on that journey, and the way to do it is through engaging leadership and helping them to discover the value, rather than trying to evangelise in a particular direction.

Mike Jones (04:01)
Yeah.

Matt (04:13)
And at some point, this AI boom blows up and we get to the question you're asking. How does AI cross into that mix? So then you can flip the coin and you can really look at the other side. What is AI? It's the first question. We all use that term these days, but AI has got a long history. You do a bit of reading, you do a bit of looking back. There's been many AI winters, there's been many attempts to create artificial general intelligence, for example.

Mike Jones (04:17)
Yes.

Matt (04:35)
There are many modes of AI: machine learning, machine vision, neural networks; we could go on about the techniques. But I think what we mean by AI now is really clouded by what we're seeing in social media and in the press, and that's generative AI. That's the bit of AI that's been democratised and put in the hands of people.

Mike Jones (04:49)
Okay.

It's good to distinguish between the different ones, because I'm not very technical myself. You know, to me, AI is AI. It's good to actually step back and think there are different elements to it, and to be really clear about which element we're talking about and how that element can help us.

Matt (05:12)
Yeah, absolutely. And I think, you know, we could probably spend the whole podcast unpacking what that is. And I'm no expert, no technical expert. I've probably got a long history with AI without knowing it. Back in 2010, we were doing trials with biometrics: how do you use commercial technology and biometric algorithms to detect faces in certain environments the UK was interested in at the time? It's mainstream now. It's commoditized.

Mike Jones (05:28)
Mm.

Yeah, yeah.

Matt (05:39)
But that relied upon machine learning, that relied upon a lot of the foundations of AI that were not generative AI that we're seeing in the current hype. So to come back to your original question around leaders.

Mike Jones (05:48)
Yeah.

Matt (05:51)
how can futures and AI together help leaders? I think that also speaks to developing AI literacy. A real understanding of how AI works, different types of AI, how it reasons, and ultimately how it also creates risks if not used in the right manner or understood in the right manner. And I always say that the biggest risk of...

Mike Jones (06:08)
Mm-hmm.

Matt (06:11)
AI is not that it gets things wrong, it's that we believe it's right.

Mike Jones (06:15)
Yes, that's really crucial, because the element I want to explore is that in futures and strategy, I see leaders looking to AI to give them the answer. Whereas, if you go back to your original point about futures, it's about creativity, it's about trying to get people to use their imagination. There's a real risk that people just look for the easy answer, whatever AI has given them, and then assume that it's correct.

Matt (06:41)
Absolutely, absolutely. My first operating principle when using AI is it's wrong. It's lying to me. I do not trust its output. That doesn't mean its output does not have utility. And that's because there is an interplay between my cognitive process and the way that AI works. So this interplay of human intelligence and artificial intelligence. And it's not just the manner in which I prompt generative AI to give me a...

generated response but it's the manner in which that response prompts my own intelligence. And that should also include a critical view and a critical appreciation of the way it's prompting my own intelligence.

And if you can maintain that awareness, then it has great utility. Because in the same way that, many years ago, hitting a whiteboard with people would unlock a lot of stuff out of their heads: just the simple act of using the artefact, using pens and discussion, would unpack a lot of stuff that maybe you can't unpack from your head in isolation. AI has the potential to do the same. Only if understood.

Mike Jones (07:37)
Yes.

And I think it's a good way to look at it. We don't want to get to the point where we're trying to replace that creativity, what I call the aha moment, where you're hit with something and you go, oh, right, yeah, I didn't think about that, and you're understanding people's different perspectives. Then you can utilize AI not to give you the answer, but to help as part of that discussion, to prompt something that challenges even your own assumptions. I think that'd be quite useful.

Matt (08:14)
Definitely, I've definitely experienced that in some of the futures work we've done with clients. And it also shifts us more towards that position of human machine teaming.

Mike Jones (08:23)
Mmm. I like that.

Matt (08:24)
Let's look at it now as a teaming of intelligences that both have their weaknesses and their strengths. And you mentioned different people, different perspectives; that's another important factor to bring into the mix. And again, it plays into the way that we approach clients and the underlying principles of futures and foresight.

Mike Jones (08:41)
Yeah, and I really like that term, teaming. Actually, what's the term you used? Is it generative teaming?

Matt (08:49)
Human-machine teaming.

Mike Jones (08:52)
Human-machine teaming. I really like that, seeing it as part of a team. And I've sort of used it recently with clients. I didn't think of it as that at the time, but what we did was feed the AI all this information about an organisation we were looking at our relationship with. It wasn't going to be perfect, but it was enough to give the AI a persona of that other organisation. And it was interesting to see: if we'd done this, what would the likely response be? So I suppose that's in a similar vein.

Matt (09:24)
Yeah, absolutely. And then the AI literacy part comes into it again, because then understanding the data and the data corpus that a particular AI model is trained upon helps you to become aware of the biases that are inherent in that data and how that might affect the responses to your prompts that AI might give.

But also understanding that it's not a sentient intelligence; it doesn't know what it's giving you. It's a probabilistic model. A very sophisticated one, that ultimately is looking at the relationships between words, or pixels in the image sense, doing some very, very clever calculations, and layers of calculations, to work out what the most probable output is that would come next. At the most simple level: "the quick brown fox", that's a well-known phrase, you'll know it from the Signals. You could put in "the quick brown" and, if it's trained on that data, it will look at it and say, well, "fox" is the most likely next word.

Mike Jones (10:09)
Yeah, Yeah, yeah, yeah, yeah.

Matt (10:18)
Fox. And there are lots of parameters that can be tuned to give a range of outcomes that look more creative; or, what we call the temperature, you can tune the temperature to give you the most likely outcome, which is less creative. I'm hesitant to use the words "more accurate", because there's still inherent uncertainty in something that's probabilistic.

So understanding that, you can see that the way you prompt and ask questions of AI is really a way, through the filter of an algorithm, of querying the body of knowledge that it's trained upon, looking for a response from that knowledge. And that could be no different to having a conversation with an expert or a non-expert; you get different responses to your questions. So if you use it with those kinds of caveats and that understanding in mind, then you're better equipped. A: never trust its output, and certainly don't write an article and just publish it without knowing what you're talking about. But it can be really useful at accelerating your own thinking process, at structuring your own thoughts, at accessing big bodies of data and spotting patterns in those data that human minds are just not good at.
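
As a rough, invented illustration of the next-token and temperature idea Matt describes, the sketch below builds a toy distribution over a four-word vocabulary (the words and scores are made up, not taken from any real model) and shows how a low temperature makes "fox" near-certain while a high temperature spreads probability across the alternatives:

```python
import numpy as np

# Toy next-token scores (logits) for the prompt "the quick brown ...".
# The vocabulary and numbers are invented purely for illustration.
vocab = ["fox", "dog", "bear", "cat"]
logits = np.array([4.0, 1.5, 0.5, 0.2])

def next_token_distribution(logits, temperature=1.0):
    """Turn raw scores into probabilities: a low temperature sharpens the
    distribution (more predictable), a high temperature flattens it (more varied)."""
    scaled = logits / temperature
    scaled -= scaled.max()          # subtract the max for numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

for t in (0.2, 1.0, 2.0):
    probs = next_token_distribution(logits, temperature=t)
    ranked = sorted(zip(vocab, probs), key=lambda pair: -pair[1])
    print(f"temperature={t}: " + ", ".join(f"{word}={p:.2f}" for word, p in ranked))
```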

Mike Jones (11:07)
Yeah.

Yeah, it's quite interesting really, because when you're talking about that, I'm thinking, well, it's just like having a team around me, because we all have perspectives. Some of those are right, a lot are wrong; we all hold beliefs that may be true to us but not true in reality. And actually, how you can utilize AI is similar to how we'd say: utilize the team around you, ask them questions, get them involved in the conversation. You're not looking for the perfect answer. You're looking for insights, for things to challenge your own assumptions. You're trying to widen your vision, to see multiple perspectives, maybe learn something you've not thought of or seen before.

Matt (12:13)
So we're really talking about the dialectic there, and the process of, you know, when you talk to people and make meaning, discover new perspectives, maybe it shifts your own perspective on the topic. That process, that

Mike Jones (12:19)
Yeah.

Matt (12:24)
relationship between people, is the same with AI, with some limitations, because human communication is far more sophisticated. How much of it is non-verbal? How much of your intelligence is embodied in your body, in your gut (that's where you get the gut feel from)? How much of your interpretation is based in smell?

Mike Jones (12:28)
Yes.

Mm.

Matt (12:42)
These are questions that of course we can't ask ones and noughts in AI at this stage. So it gives us a view of that difference between computer-based intelligence and biological intelligence.

Mike Jones (12:47)
Yeah.

So how does it play in, then? Because you said earlier about data, and you're totally right about that very MBA-driven, logical thinking where it's all about data and hockey-stick charts. A lot of AI is formed by past data. So, thinking about that, how is it then going to help you look at and create new worlds that don't exist yet?

Matt (13:17)
So, I mean, that's an application of the underlying data. What AI's really good at is finding relationships and patterns in unstructured data,

Mike Jones (13:27)
Mm-hmm, okay.

Matt (13:28)
which you couldn't possibly hope to do as fast, even with a team of researchers.

So when it comes to creating futures, as you've talked about, or more specifically, thinking about possibilities of the future, and representing them as scenarios, then what AI is really good at is looking for weak signals, for example, those tiny ambiguous signs of change, those pockets of the future that are here now, as William Gibson put it, okay, they exist now, but could be the clues to future disruption.

Mike Jones (13:55)
Yeah.

Matt (14:01)
or future new possibilities.

Mike Jones (14:01)
Yeah, okay.

Matt (14:03)
And when we're doing our futures work, looking for those weak signals is a crucial part of futures intelligence. It's also a crucial part of understanding the uncertainties that exist in them.

Mike Jones (14:08)
Yep.

Matt (14:12)
And being able to scale that and still use human judgement to check it, to validate it, is very, very useful. And what we found in our work is it's helped us accelerate. And then that comes onto uncertainties as well. So often when we're dealing with clients, with businesses, we are interested in the uncertainties that they're conscious of, that their business faces. As opposed to risk.

which are things that can be quantified with a probability and an assessment of impact and managed, whereas an uncertainty cannot necessarily be. And we're interested in that because that gives us some focus, some objective to look after in terms of their business model, in terms of their existing value chains or their place in the value chain.

Mike Jones (14:53)
Yeah.

Matt (14:54)
and what the different possibilities in the future that it affords might mean.

Mike Jones (15:00)
Mmm.

Matt (15:01)
So AI really helps us to just act as a force multiplier.
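
As one hypothetical illustration of that force-multiplier point, the sketch below ranks scanning snippets by how unlike the rest of a small corpus they are, a crude stand-in for machine-assisted weak-signal spotting. The snippets, and the choice of TF-IDF with cosine similarity, are assumptions made for the example rather than a description of Matt's tooling, and the ranking would still need the human judgement he describes:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

# Invented scanning snippets: most are business-as-usual, one or two are odd.
snippets = [
    "Quarterly results in line with analyst expectations",
    "Supplier announces routine price increase of two percent",
    "Start-up demonstrates room-temperature battery recycling at pilot scale",
    "Competitor opens another regional distribution centre",
    "Regulator hints at licensing requirements for synthetic media",
]

vectors = TfidfVectorizer().fit_transform(snippets)
similarity = cosine_similarity(vectors)
np.fill_diagonal(similarity, 0.0)   # ignore each snippet's similarity to itself

# A snippet whose best match is still weak sits far from the bulk of the corpus,
# making it a candidate weak signal for a human analyst to judge.
novelty = 1.0 - similarity.max(axis=1)
for score, text in sorted(zip(novelty, snippets), reverse=True):
    print(f"{score:.2f}  {text}")
```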

Mike Jones (15:05)
Yes. I think that's a really useful reminder that futures work is all about looking out, but it needs to be brought back in. There's no point looking out and leaving it out there as this thing; you need to bring it back in and go, well, okay, if those are the various futures or scenarios we're looking at, how would my organization fare under those different scenarios? With the value I currently offer, the structure, all those things, my strategy. And that's where it can be useful for leaders.

Matt (15:37)
Absolutely. I always say the future is always about now. Or the futures, more specifically; it's plural. It's also about the process in which you engage leadership minds and intelligence in a different way, and quite often allow them the space to engage their imaginations and to think strategically.

Mike Jones (15:40)
Mm.

Mm.

Matt (15:54)
Exploring the futures using scenarios or other methods is just a way of enabling them to engage in a world where they're struggling to deliver yesterday, where they are caught up in the pressures of reporting the numbers.

Mike Jones (16:02)
Yes.

Matt (16:05)
It often comes as a big relief when they get that chance to do that.

Mike Jones (16:09)
Yeah, I always find that. It sounds so simple when you think in that sense, having that space, that capacity to be able to have these conversations, use these tools, think strategically and to think about futures. But again, do they have that time? Because I'd love to see how much time most leaders spend on meetings about the past.

and a lot of their reporting, the scorecards they're using and all that stuff, how much they actually have control over those things and how much of those things are just reporting what happened last month and how much of their time they're actually spending looking at anything that is outside in the future.

Matt (16:51)
Well, that's an important point, because one thing that futures and foresight does is force an outside-in view on an organisation. In my experience I've observed many leaders and leadership teams chasing certainty, or at least the perception of certainty, and peddling the perception of certainty. And the reasons for that are quite complex. We mentioned some of them. It's just ingrained in the way we're taught about how the world works, how business works,

Mike Jones (16:57)
Mm.

Yeah, yeah.

Yeah.

Matt (17:15)
being reinforced by the fact that businesses have been successful doing that during stable periods. Which is really a manifestation of, well, that's how we've always done it. But then there's a very human element to that. Leaders aren't immune to fear. They aren't immune to insecurity around their jobs. They aren't immune to the wider implications of not having a job or a business failing.

Mike Jones (17:20)
Mmm, yeah.

Matt (17:36)
As you get more senior, leaders aren't immune to the perceptions of shareholders. And so the messaging that goes out to the market, the creation of a perception of certainty, is absolutely crucial because it could have a price impact, and so on and so forth. So it's easy to criticize, but put yourself in their shoes and you can start to understand.

Mike Jones (17:48)
Yes.

Yes, yeah. It's the structures that we create that drive the behaviors. It's unfortunate, and you see it all the time. It's easy as an outsider to look in and go, well, you should do that. It's another thing to step into the system, where you see that everything is constructed to drive those behaviors, and it's hard to untangle them to try and get that capacity. And all we're looking at is how you create capacity at the right levels to improve leaders' decision-making.

Matt (18:31)
Absolutely. Capacity and awareness. So I think, and this comes back to really, you know, what I view as a foundational bit of knowledge around futures.

Mike Jones (18:33)
Mm.

Matt (18:40)
Inayatullah's triangle, the Futures Triangle, which really talks about the weight of the past. We're all focused on the future, okay, the pull of the future and those possibilities, when we're talking about futures and foresight, but a really essential part is the weight of the past.

Mike Jones (18:45)
Mm.

Matt (18:53)
And we've touched upon some of that weight of the past in our conversation: the way things have always been done; the success of the past, which creates the perception that if we do that again, it will repeat; data-driven decision making, which is analysis of data from the past and extrapolation of it into the future. If leadership teams can become aware of that as a

Mike Jones (19:04)
Yeah, yeah, yeah.

Matt (19:17)
historical factor that drives their decision making, they're empowered to challenge it. Now, it's not black and white; there might be instances where it's absolutely valid to continue doing that, but that comes down to awareness of the context in which you're making decisions.

Mike Jones (19:24)
Yes.

Yeah, I think path dependency plays into a lot of that as well, where decisions that we've made in the past can constrain our future, in the sense that these decisions we've put in place now constrain what's possible later. It doesn't mean they're not movable, but they're something to be aware of, definitely when we're looking at options for strategy. That stuff in the past can come back to bite us, but it's not all negative either, because some of it can be opportunities as well.

Matt (20:09)
Absolutely, absolutely. So context is really, really important here. And understanding that a lot of that has come from the industrial age, from the success of the industrial age, where the application of linear science, Newtonian logic, seeking efficiency, seeking optimization, putting hierarchies in the workforce, specializations, was hugely successful for a long time. It

Mike Jones (20:20)
Mm-mm.

Matt (20:32)
led to huge economic booms, made its way into our education system, made it into the way we think about our MBAs and historically teach our MBAs, forced us to focus on case studies, which are historical representations of what worked.

Mike Jones (20:41)
Yeah.

Yeah, yeah.

Matt (20:47)
So that weight of the past absolutely has to be recognised. Then, coming back to the weak signals, I view those not as the pull of the future but as the push of the present. They're already here. They are affordances of new possibilities.

Mike Jones (20:58)
Yes.

Hmm, yeah. And that's a really good way to look at it, because often people think the future is just the future and we have to react to it, whereas if you use futures, there are definitely ways to shape the future through your decision-making. Look at certain technologies now, the electric car and all that: that was shaped and created by decision-making, by opportunities and possibilities that people saw, and they created it.

Matt (21:34)
Oh, absolutely, absolutely. And I think what that talks to is the fact that we do have agency, in terms of understanding that the future is created by us, based upon the decisions that we make. Now, those decisions are obviously impacted by our mental frames, the way we view the world; that's the weight of the past we talked about. But they should also be informed by imagining what the possibilities are

Mike Jones (21:44)
Mm.

Matt (21:58)
as a consequence of those little affordances that are playing out now, or maybe some of the bigger certainties that exist, like the global megatrends. So climate change. There's a certainty around that in the future.

Mike Jones (22:07)
Yeah.

Matt (22:09)
You know, aging is a big megatrend,

population shifts. So there are certainties that you can be more certain of over time.

Mike Jones (22:12)
Yeah.

Mm.

Matt (22:18)
But how they play out can still lead to uncertainty.

Mike Jones (22:19)
Yes.

Matt (22:20)
So being able to grapple with that provides agency. And that agency is really important because that informs different decisions that can create different futures.

Mike Jones (22:29)
And this is where the imagination, discussion and creativity of teaming come in, because you're trying to grapple with these weak signals, these megatrends, and what you're looking at is what would happen if these things came together, what the scenarios would be. And it's scenarios where I always find a common trap with futures: either people think of just one future, or it's overly dystopian, all doom and gloom, not thinking that, yes, there are constraints we're going to face, but there are also opportunities. And the more we look ahead and think about futures, the more we increase our options.

Matt (23:14)
You know, I've got a few things to say on that. I think, one, it really comes back to what I said before about the field of futures being very broad. Okay? Because if I'm an activist interested in creating positive change in relation to climate change, for example, then using futures as a tool in my activism, to try and shift behaviours now, might lead me to go to the dystopian scenario.

Mike Jones (23:21)
Hmm.

Yeah, yeah, yeah, we'll see.

Matt (23:39)
Okay, and so when you look at it like that, you can see we're using interpretations of future possibility scenarios as a tool for thinking. They're not predictions. We cannot predict them. The future has not happened. There is no data from the future.

Mike Jones (23:46)
Mm.

Matt (23:51)
Put that in the business context, we're doing exactly the same. We're using them as a tool to aid thinking. Thinking informs decisions.

Mike Jones (23:58)
Yes, yeah. And that's bringing it back to now: the future is about now, about our decision-making in the present rather than our decision-making in the future.

Matt (24:09)
It brings a new perspective on what could possibly happen, given what we know now,

Mike Jones (24:15)
Mmm.

Matt (24:15)
given what we know about what we can control and what we can't control. Therefore, what are our options for acting differently? Or maybe we follow the same path, we're doing everything right, but with more confidence in it.

Mike Jones (24:23)
Yeah.

Matt (24:28)
But there's a catch: it has to come from within leadership teams. And whether dystopian, protopian, whatever futures you present, they have to be logically consistent. They have to be believable.

Mike Jones (24:39)
Yeah.

Matt (24:40)
Again, AI has a role to play in stress-testing those scenarios. AI has a huge role in how we bring those scenarios to life. So say we use design fiction and we're going to immerse a leadership team in a fictional view of the future: you can greatly accelerate the time it takes to create those artifacts with the use of AI.

Mike Jones (24:48)
Mm.

Yeah, yeah. Because I've done scenario work before where we've come up with various scenarios and then taken a fictional view, using the Oxford approach to scenario planning, Rafael Ramirez. We used design fiction, and it does take a lot of time and thought to create those, where actually AI could help with that and exponentially decrease the burden. Yeah, that's quite a useful thing.

Matt (25:30)
Absolutely,

you can take the output of the Oxford approach to scenario planning, you can encode that in AI, and then you can start to build a custom AI that says, right, use this approach. You can start to train it, and you can start to really accelerate its outputs. I still wouldn't trust its outputs.
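
As a sketch of what "encoding the approach" could look like in practice, the snippet below wraps a scenario-planning framework into a reusable system prompt and leaves the actual model call as a placeholder. The framework summary, function names and focal question are illustrative assumptions; they are not the Oxford material itself, and no particular AI vendor or API is implied:

```python
# A sketch of encoding a scenario approach as a reusable instruction set.
# Everything here is an assumption made for illustration.

SCENARIO_SYSTEM_PROMPT = """\
You are a strategic-foresight assistant. Work only within this framework:
1. Restate the focal question and time horizon the user gives you.
2. List the critical uncertainties supplied; do not invent certainties.
3. Combine them into contrasting, internally consistent scenario logics.
4. For each scenario, note early indicators a team could monitor.
5. Flag every assumption you make so a human can challenge it.
"""

def build_messages(focal_question: str, uncertainties: list[str]) -> list[dict]:
    user_brief = (
        f"Focal question: {focal_question}\n"
        "Critical uncertainties:\n"
        + "\n".join(f"- {u}" for u in uncertainties)
    )
    return [
        {"role": "system", "content": SCENARIO_SYSTEM_PROMPT},
        {"role": "user", "content": user_brief},
    ]

def call_model(messages: list[dict]) -> str:
    # Placeholder: wire this to whichever chat-completion client you actually use.
    # Outputs would still need human review, as Matt stresses.
    raise NotImplementedError("Connect your chosen model here.")

if __name__ == "__main__":
    msgs = build_messages(
        "How might our sector's value chain look in 2035?",
        ["Pace of AI regulation", "Energy price volatility", "Customer trust in automation"],
    )
    print(msgs[0]["content"])
```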

Mike Jones (25:36)
Mmm.

Yeah, yeah. I think, again, it's like a starting block, isn't it? If you've got writer's block, it can produce something, and then you can titivate it, align it, or fill in the gaps that aren't quite right. But it should help.

Matt (26:02)
Yeah, so the force-multiplier principle again.

And there are more sophisticated scenario techniques. With leadership teams that are new to it, we tend to fall back to the classic two by twos because it's engaging, they can get their heads around it, and we can get them emotionally invested in the work. And that's a really, really important point. However, if you go to techniques such as morphological analysis and you're looking at many, many uncertainties and the combinations of uncertainties and how they might play out into plausible possibilities, then that becomes quite heavy work.

Mike Jones (26:14)
Mmm.

Matt (26:32)
That scales quite rapidly. It's something that either specialist tools are needed for or, more and more, AI can be used to address with the right skills: good enough to have the effect of enhancing decision-making with leadership teams.
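
To make that scaling point concrete, here is a minimal sketch of the combinatorics behind morphological analysis: each added uncertainty multiplies the number of candidate futures, and cross-consistency checks prune the grid down to plausible combinations. The uncertainty dimensions and the single consistency rule are invented for illustration, not drawn from any client work:

```python
from itertools import product

# Invented uncertainties, each with a small set of possible outcomes.
uncertainties = {
    "AI regulation":  ["light touch", "strict"],
    "energy prices":  ["stable", "volatile"],
    "talent supply":  ["abundant", "scarce"],
    "customer trust": ["high", "low"],
}

def is_consistent(combo: dict) -> bool:
    # Example cross-consistency rule: treat light-touch regulation alongside
    # low customer trust as implausible, purely to show how pruning works.
    return not (combo["AI regulation"] == "light touch" and combo["customer trust"] == "low")

names = list(uncertainties)
combos = [dict(zip(names, values)) for values in product(*uncertainties.values())]
plausible = [c for c in combos if is_consistent(c)]

# 4 uncertainties x 2 outcomes each = 16 raw combinations; adding a fifth
# two-outcome uncertainty would double that again.
print(f"{len(combos)} raw combinations, {len(plausible)} kept after the consistency check")
for combo in plausible[:3]:
    print(combo)
```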

Mike Jones (26:43)
Yeah.

Yeah. And you've brought up a couple of good points there. The first one is my fear of how people use AI, which is that it removes the discussion and the emotional part, because we know it's the same with strategy: strategy is emotional, and you need that to drive decision-making and drive movement. Whereas I fear that if you just use AI to give you the answers, you'll miss that, and there's no real buy-in to do anything different.

Matt (27:17)
Yeah, it's true. And I've had that first-hand experience; I've worked with leadership teams, because typically that's where we land this. And it should be more than leadership teams; it's getting all voices from the organisation in there, because diversity of perspective is critical. However, it's one thing to say: we'll do the work, we'll provide you the magic answer in a consulting report, here's your, you know, £100,000 PowerPoint, and the consultant

Mike Jones (27:30)
Yeah, yeah.

Yeah.

Matt (27:42)
goes away. It's another

thing to put a lot of effort into crafting an experience and creating the space for those leadership teams to engage with that futures work.

to immerse themselves in those scenarios and to have an emotional resonance.

Mike Jones (27:58)
Yeah.

Matt (27:58)
I've seen that play out, and those are the moments that lead to insight. And I always say we are not there to give them the answers; we're there to facilitate them coming to their own answers. And when you see that moment of insight, you see the conversion. So I've had big corporates who were skeptical but open-minded; they go on the journey, and by the time we finish the work, I'm looking at

Mike Jones (28:02)
Yes. Yeah.

Mmm.

Matt (28:20)
leadership teams that have said this has been transformational. Not that it's given us the right answers, but it's given us a whole set of tools and different ways of thinking and being that we can take into our strategic plan.

Then, to support that, how do you continue to facilitate? How do you spread that futures literacy?

Mike Jones (28:35)
Yeah, and that's what it should be about. You know, I think people look too easily for the answer. I know when I do strategy work with clients, I've got clients that are fully open; they get great insights and they converge on producing really good outputs. We get some that just want the answer, and you're like, I'm not here to give you the answer. I could give you the answer, but like you said, we're not going to get the insights. You're not going to get the emotional connection. You're not going to get that conversion to do something. You're just going to be reliant on me. And I know plenty of people have a consultancy model they like, where they go off into a dark room and produce something, and it's a lovely, beautiful PowerPoint deck. Beautiful. But again, there's no investment behind it and it's not believable. So it just gets put in a drawer, forgotten about, and it becomes a tick-box exercise, which you don't want to see.

Matt (29:26)
Absolutely.

I have an analogy I use quite a lot, and it's like being a business therapist.

Mike Jones (29:34)
Yeah, yeah

Matt (29:35)
Well, we're there providing therapy to a business, using futures techniques, for them to come to their own insights, their own learning about themselves, and therefore the opportunity to grow.

Mike Jones (29:45)
Yeah, I like that. That's good. So what tips would you give to leaders to start looking at using AI? And I say this because it's easy enough now for people to go and get a ChatGPT login, make some cool photos and get it to help with emails; but looking specifically at futures, what tips would you give people to start this journey, leaders and leadership teams, and how can they leverage AI correctly?

Matt (30:16)
So I draw upon my own journey, my own experiences, and it comes from constraint. In the early days of growing this capability, and recognising the importance of being able to do the futures intelligence and the horizon scanning well, what does it take to do that? We had the opportunity to engage with some, you know, pretty impressive tools and platforms out there that come with a cost. Failing to secure that investment created the need to explore how it could be done under constraint. And we do have tools like ChatGPT, Perplexity, Claude and NotebookLM available, and so that was the start of my journey where AI collided with futures and foresight. So the first thing for leaders to do is to make the tools available. Provide the space for your workforce, your employees, to explore.

Mike Jones (30:44)
Yeah, yeah.

Matt (31:03)
And that might start at a basic level. How can I get good at prompting? How can I get good at iteratively engaging with AI through the use of those tools? Which is where I started, and how do we start to land the benefit and experiment with real client work? But then it goes further, because then you ask the question, how does this actually work? So how do you help your employees develop that AI literacy? Because having that is the key to using it well in futures work.

Mike Jones (31:26)
Hmm.

Matt (31:27)
And so, for me personally, I've gone as far as learning how to install local language models on a local machine and understanding how they work. And even trying to replicate the functionality of some of these intelligence platforms by creating agents, using vibe coding, using AI to help me code. I know it's not production quality, but it can prove a concept. And experimenting with narrative structures for the output and testing those by publishing them, some of which you've seen on the H-Scan11 Substack.
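
For readers curious what running "a local language model on a local machine" can look like, here is a minimal sketch that queries a locally hosted model over HTTP. It assumes an Ollama-style server on its default port with a model already pulled; the endpoint, model name and prompt are illustrative assumptions rather than a record of Matt's setup, and the same caveat applies: treat the output as a prompt for your own thinking, not an answer:

```python
import requests

def ask_local_model(prompt: str, model: str = "llama3", host: str = "http://localhost:11434") -> str:
    # Assumes an Ollama-style local server; adjust the endpoint to whatever
    # local runtime you actually use.
    response = requests.post(
        f"{host}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json().get("response", "")

if __name__ == "__main__":
    print(ask_local_model(
        "List three weak signals that could disrupt regional logistics by 2030, "
        "and state how confident you are in each."
    ))
```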

Mike Jones (31:45)
Yeah, yeah.

Yes, I will link your Substack to this as well.

Matt (31:58)
And so it's a case of just putting yourself out there, being willing to take criticism, but learning. And I'm fairly confident now that myself and other team members are developing a deep enough knowledge of this to really get utility. And what also jumps out from that is that there is a cost benefit. Because if we're building that capability, as leaders, in our teams, just through the provision of tools,

Mike Jones (32:11)
Mm.

Matt (32:21)
like ChatGPT, then we are removing our reliance upon big, expensive platforms.

Mike Jones (32:25)
Yeah, yeah, yeah. Like that.

Matt (32:26)
And even if we can only replicate 60% of their functionality at a fraction of the cost, it's really another way of saying that AI is democratizing, or commoditizing, those kinds of capabilities.

Mike Jones (32:35)
Mmm.

Yeah, I like that a lot, because it's very easy, especially for bigger organizations with a bit of spare cash, to buy these big platforms. But then again, are you actually building that capability? If that platform disappears, have you lost all that capability overnight? And secondly, for smaller organizations, or organizations that don't have that money available, how can they build that without having to go to these big platforms? I think it can help both, and I like that a lot. You're seeing it more and more in defence, not British defence though, but in organisations: this rapid testing, call it skunkworks or whatever you want, where it's not about looking for the perfect thing straight away. It's trying new things, adapting. And I think that's really useful, because

At the end, you're going to get something that's aligned to what you need rather than what a platform has decided you need.

Matt (33:31)
Yeah, absolutely. And the British comment really resonates. We've seen some really good stuff come out of the US, for example, in terms of new approaches to procurement aligned with that, and how they start to rapidly develop new capabilities. And I was involved in some defence work recently here, futures work, and seeing the very structured, traditional way it was approached was just an eye-opener for me. So there's huge potential there,

Mike Jones (33:35)
Hahaha. ⁓

Yeah.

Matt (33:56)
but we've got to overcome the weight of the past and the attitudes and perceptions. So yeah, and it also talks to innovation. And we saw that with the recent, you know, fuss around DeepSeek.

Mike Jones (33:59)
Yeah.

Yes, yep.

Matt (34:09)
At a geopolitical level, you know, China being denied access to certain GPUs and certain capabilities that are behind the whole frontier-model hype from the ChatGPTs of the world, the OpenAI view of the world. And they've essentially said, well, you give us that constraint, we'll innovate around it, and we'll give you something that's just as good. And that's a wake-up call. I think the same principle can apply in all organisations.

Mike Jones (34:28)
Yes.

And I think, yeah, because if we think about it, creativity is the ideas, and innovation is the ability to implement those ideas into something. We need that and we encourage it, but sometimes we've just got to get over this obsession that it has to be perfect: that it needs to be a perfect plan that we've thought through, where we fully understand what the end state is going to be, before it can start. Whereas we just need to have the courage to try new things, to iterate. You know, some things aren't going to work, but at least you can scrap them quickly and move on, at low cost.

Matt (35:11)
Definitely.

Definitely. And as I'm listening to you, I'm just thinking: that's just another manifestation of our desire for certainty, yeah? And experimenting, learning, stepping into the unknown is really a way of embracing and engaging with uncertainty. And there's a final point on that. We all use the word uncertainty a lot, but it's a generic term. There are many forms of uncertainty.

Mike Jones (35:25)
Mm.

Matt (35:34)
And on my other substack you'll see a post on, you know, I've got a problem with uncertainty for that very reason. But one of the valuable things about stepping into the unknown and exploring and embracing the uncertainty is that it can change your perception of knowledge. Change your perception of what you know and what you don't know.

And, to quote Donald Rumsfeld, it translates unknown unknowns into known unknowns.

Matt (36:00)
So that's where I talk about this whole epistemological uncertainty. Knowing what you don't know is very valuable.

Mike Jones (36:05)
Yes. Yeah.

Matt (36:06)
If you don't know what you don't know, you just don't know. And that's some of the deeper aspects of this kind of approach and this kind of journey, because then you can start to gain agency: if there is a gap in our knowledge and we know about it, what are we going to do about it?

Mike Jones (36:14)
Mmm.

Yeah, yeah. That's crucial, because there are always going to be, we call it the darkness principle, things that we just don't know, due to the complexity of it all. But like you said, if we know what we don't know, then that gives us agency to go and find out, or to test: have a hypothesis, act, do something to at least try to learn something new, and hopefully start to close that gap, or know a little bit more of something that we didn't know.

Matt (36:45)
Absolutely. And so that's your uncertainty around knowledge. But then there's normative uncertainty. So that's the uncertainty created through differences in beliefs or differences in values. And so it talks to the need to have diverse participation in these processes. Because what you value, what you believe is different to what I value, what I believe, and therefore it impacts how I interpret a certain situation. So knowing that...

Mike Jones (36:59)
Yeah, yeah.

Mm.

Matt (37:12)
gives you agency; it gives you the ability to say we've got to bring different voices, different eyes into the process, or we've got to force ourselves to look at it from different viewpoints.

Mike Jones (37:22)
Yeah, Clausewitz talked about this years ago. One of the hardest things about leading is the fact that we are individual agents in a collective system. We all perceive things differently, we all have different values, we perceive information differently. That is a challenge, but it's also an opportunity, because, like you said, when you get people in together, seeing from different perspectives, different angles, that darkness principle becomes a little less dark, because we're opening ourselves up as widely as possible to what they can see. So when you look at it, yes, it is a challenge, but it is also an opportunity in leadership.

Matt (38:01)
Absolutely. And I'd say it's a challenge from the point of view that you can keep on appreciating different viewpoints forever, seeking perfection, or you can get to the point where you think: I've got to act, to know, to test.

Mike Jones (38:07)
Mm.

Yes. Yeah.

It's consensus madness, yeah. That is a challenge that, I've no doubt, exists with AI too: there's that thing about paralysis, because we don't know. We've got the knowledge uncertainty, where we're constantly trying to find out, plus the normative uncertainty, so you've got different perspectives. I can often see where leaders would get into a bit of paralysis, because, again, from the past, they're so looking for that predictability, that certainty, that they could just go on AI all the time, constantly prompting and asking for the information, until eventually, you know, AI hallucinates a bit and gives them the answer, and they go, right, that's it, let's go. I can see that could be a potential risk with it.

Matt (39:02)
Yep, and hence the focus on literacy. There's so much more to it than we've discussed today, but that gets the key principles across.

Mike Jones (39:09)
Yeah, yeah.

Yeah, I think it's been very useful. It's definitely opened my eyes to the fact that there are different elements of AI. And I like your idea about how leaders can start getting into it. I think it's just trying. It's just trying. It's simple, but I think they've got to realise you've got to create that space and capacity for people to try, and to see how they can use it effectively for general stuff in business, but also in looking at futures. And I like being that skeptic, whether you used the term or I just perceived it that way: being skeptical of what the outputs are and challenging them, so you're not just taking them at face value, you're checking them. I think that's really useful.

I've really enjoyed the conversation. And again, I think AI is one of those things that is so vast we could go into multiple elements of it. But from what we've chatted about on this podcast, what would you like to leave leaders with to think about or consider?

Matt (40:12)
Uncertainty. Okay, I think just focusing on AI is a bit of a red herring because it's driving so much uncertainty at the moment. So I would encourage leadership teams to stop thinking so much about risk and things that can be managed and controlled. What does uncertainty mean in their operating environment? What does uncertainty mean in terms of the different interpretations and types of uncertainty?

Mike Jones (40:19)
Mm.

Mm.

Matt (40:36)
How does that relate to the pressure to create that perception of certainty? How does that manifest in the workforce at a human level, in the workforce employee experience, for example? Ultimately, with that appreciation, how does that impact and affect the leadership approach?

Mike Jones (40:52)
Yeah, I think that's really good. And again, it's that point around perceived certainty, the pressure to provide that perceived certainty. And I know that leaders, like we spoke about earlier, are going to have that pressure on them, because they've got to appease the board, they've got to do this. But it's definitely worth thinking, as a leader, about how your behaviour is driving some of that perceived certainty. So what are you asking for? What are your expectations of your people? Is it about experimentation, about giving the freedom of action to adapt? Or are you driving for certainty, driving for perfect answers, driving for information to try and create a perception of certainty about what's happening in the environment? I think that's something I definitely need to think about.

Matt (41:43)
It also relates to a lot of the conversations around mission command. Putting the boundaries in place, that might be the limit of certainty: this is as much as I know, this is my intent. Now, how do I help you process the uncertainty of how to execute, how to form a plan, within that boundary?

Mike Jones (41:47)
Mmm.

Yes.

Yes.

Yeah. There's a thing we've talked about, the knowledge gap, the gap between what I'd like to know and what I actually know, and we limit communication to the intent. So I'm not going to make things up, I'm not going to assume certainty; I'm just going to say, this is what we know, and this is what I need, what I want to achieve. How are we going to get there? You know, we'll find that out as we go along.

I think that's where people think it's chaos and that it creates more uncertainty, but it's not, because you're holding your intent firmly but your plans loosely. Yeah. I think that helps navigate that uncertainty and makes sure we stay on that tightrope between chaos and stability.

Matt (42:47)
Absolutely. As I say, no plan survives first contact with the enemy. That was an inspiration for your podcast name, wasn't it?

Mike Jones (42:48)
Yeah, it was, yeah, that's what it was named after. A bit of von Moltke. It can't be a podcast without me talking about von Moltke. No, thank you very much for coming on. It's been a great conversation, and hopefully it starts to help people unlock, one, what futures and foresight are, like you eloquently said,

and also how you can use AI to help you, and how you can get started on that. If you've enjoyed the podcast today like I have, maybe other people will enjoy it too, so feel free to share, like, and pass the podcast on to people you think it may be relevant for. So thank you once again, Matt, for joining me. It's been an absolute pleasure.

Matt (43:29)
It's been a pleasure on my end as well. I've enjoyed it, Mike.

Mike Jones (43:31)
Cool. And I'll see you all next time. Take care. Bye.