Professor of Management Practice
With each discrete step in information technology capabilities, we ask more interesting and complex questions. Not that long ago, ‘Is the internet making us stupid?’ was a great conversation starter. This was followed by the question of whether Wikipedia is making us lazy. Both questions pale into insignificance beside the recent debate about AI’s ability to be truly creative, and the ‘big one’: can AI strategise?
Numerous technologies, such as big data analytics, have played essential roles as tools within the strategic process. They enable us to gather valuable data, simulate various options, and evaluate the financial and risk implications of our decisions.
Compared to preceding technologies, AI promises a new set of abilities for strategic management. AI not only identifies emerging trends and potential market disruptions but also empowers companies to define and adjust their strategies in response to these insights.
In the realm of short-term tactical decisions, such as optimising supply chains and refining pricing strategies, AI truly shines. With vast amounts of data at its disposal, it delivers measurable outcomes that can drive immediate improvements. During a crisis, AI can rapidly analyse a situation, recommend actions and predict the potential consequences of different approaches.
As we recognise AI's increasing significance in the strategic process, new questions arise. Will it remain a powerful tool, or does it represent a fundamental shift in how we approach strategy formulation? Or could it even autonomously develop a novel strategy?
Creativity has been hailed as the defining trait of human intelligence for centuries. It’s the spark that led to the Renaissance, Da Vinci's creations, Yeats' poetry, Rembrandt's paintings, and the algorithms behind modern AI itself.
We view creativity as a unique expression of human thought. But humans do not create out of nowhere. Our greatest artists, scientists, and innovators build on the works of others, remixing, transforming, and repurposing ideas. Shakespeare borrowed heavily from history and folklore. Picasso famously said, "Good artists copy, great artists steal." Even scientific breakthroughs emerge from established knowledge, recombined in new ways.
GenAI would not be the powerful force it is today were it not for thousands of scientists standing on the shoulders of their predecessors and improving upon their work. GenAI absorbs vast amounts of human-created data, philosophy, art, literature, and code, and generates something new. It writes symphonies in the style of Beethoven, paints like Van Gogh, and proposes novel molecular compounds for medicine.
If humans are, in essence, pattern-recognisers remixing existing knowledge, is GenAI not following the same fundamental process? If creativity can be reduced to patterns and training on data, is there any fundamental reason why AI could never possess it? Or is creativity simply the ability to combine past ideas in new ways?
Defining strategy requires intuition, experience, and the ability to foresee opportunities and threats. But what if the best strategic options are not the ones humans can think of but the ones they cannot?
In 2016, AlphaGo shocked the world by defeating Lee Sedol, one of the greatest Go players in history. But the real surprise was not that AI won; it was how it won. AlphaGo made moves no human would have considered, breaking centuries of strategic wisdom in the game. At first, these moves seemed illogical. Then, they proved brilliant. This was not just a machine playing faster; it was a machine thinking differently.
Generative AI can process vast amounts of market data, competitive intelligence, and economic trends, and then generate strategic options that defy conventional thinking. It can propose business models that challenge industry norms, uncover untapped markets, and optimise supply chains in ways even seasoned executives might overlook.
A GenAI algorithm analysing patterns could suggest an unexpected expansion strategy. When it comes to creativity and generating options, AI provides leaders with a broader set of choices, including ones beyond human intuition. Perhaps the key challenge is not whether AI can generate strategic options but whether humans are open-minded enough to recognise its genius when it does.
The most important aspect of competitive advantage, so well argued by Rumelt in Good Strategy Bad Strategy, is knowing what not to do. Once we have the different strategic options, the ability to simulate and test the implications is critical. A key challenge is knowing which options are most likely to succeed.
Traditionally, businesses rely on past experiences, expert opinions, and market research to predict outcomes. GenAI excels at simulating potential futures, pressure-testing strategies and revealing the best path. AI can model different strategic scenarios with a detail, precision and speed that no human team can match. By analysing massive datasets (economic trends, competitor moves, consumer behaviour, geopolitical risks), AI can simulate how each option might play out under different conditions.
For example, a global retailer considering expansion into three new markets can use AI to simulate sales performance, supply chain risks, and customer adoption rates for each region. Instead of relying solely on expert judgment, executives get data-backed probabilities for success, helping them make more confident decisions.
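To make this concrete, here is a minimal sketch in Python of the kind of scenario simulation described above: a Monte Carlo model of first-year sales for three candidate markets. Every figure (the market names, demand levels, volatilities, and disruption risks) is an invented assumption for illustration, not real market data.

```python
import random

# Hypothetical expansion candidates: expected annual sales ($M), sales
# volatility, probability of a supply chain disruption, and the revenue
# hit if one occurs. All figures are illustrative assumptions.
markets = {
    "Market A": {"mean": 120, "vol": 0.25, "p_shock": 0.10, "shock_hit": 0.30},
    "Market B": {"mean": 90,  "vol": 0.15, "p_shock": 0.05, "shock_hit": 0.20},
    "Market C": {"mean": 150, "vol": 0.40, "p_shock": 0.20, "shock_hit": 0.50},
}

def simulate_year(m):
    """One simulated year of sales for a market, in $M."""
    sales = random.gauss(m["mean"], m["mean"] * m["vol"])
    if random.random() < m["p_shock"]:   # supply chain disruption
        sales *= 1 - m["shock_hit"]
    return max(sales, 0.0)

N = 10_000
for name, m in markets.items():
    runs = sorted(simulate_year(m) for _ in range(N))
    mean = sum(runs) / N
    downside = runs[int(N * 0.05)]       # 5th-percentile outcome
    print(f"{name}: mean ${mean:.0f}M, 5% worst case ${downside:.0f}M")
```

Even a toy model like this surfaces the trade-off executives need to see: the option with the highest expected sales can also carry the worst downside, which is exactly what the stress-testing discussed next is designed to expose.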
Just as financial institutions stress-test portfolios against economic downturns, AI can stress-test business strategies, revealing hidden risks and unintended consequences. A tangible advantage is that, when prompted correctly, GenAI does not only test what humans think might work. It challenges assumptions, uncovers blind spots, and helps leaders make decisions with the highest chance of success in an unpredictable world.
Humans are taught that there’s no such thing as a stupid question. Ask away, and the answer will be valuable. This is meant to encourage curiosity and make people feel comfortable exploring ideas. Well, in the age of AI, it’s not only incorrect; it’s dangerous.
Many arguments about bias in AI correctly identify the challenges presented by the training data. When a GenAI large language model is trained on the global dataset of human knowledge, with all our biases and centuries of discriminatory practices, it will repeat them, just like a human exposed to the same data while growing up. Yet these same champions of ethical AI fail to recognise that the quality of the answer is also highly dependent on the question asked. A vague, poorly framed, or misleading prompt produces irrelevant, shallow, or nonsensical responses.
GenAI, like any system, processes what it’s given. If the input lacks clarity, depth, or precision, the output will, too. Imagine a CEO asking, “Should we expand?” rather than, “Based on current market trends for xxx industry, how would expanding into yyy impact our supply chain and profitability over the next five years?” One question is stupid; the other is strategic.
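As a sketch of what ‘learning how to ask’ can look like in practice, the snippet below contrasts the two questions and shows one way to refuse a prompt that lacks the framing a useful answer depends on. The helper function and its required fields are illustrative assumptions, not an established standard; the xxx and yyy placeholders are kept from the example above.

```python
# A vague question versus a framed one (placeholders kept from the text).
vague = "Should we expand?"

framed = (
    "Based on current market trends for the xxx industry, how would "
    "expanding into yyy impact our supply chain and profitability "
    "over the next five years?"
)

def frame_strategic_prompt(question, industry, horizon, focus_areas):
    """Refuse to build a prompt until it carries the context a useful
    answer depends on: industry, time horizon, and areas of impact."""
    if not (industry and horizon and focus_areas):
        raise ValueError("A strategic question needs industry, horizon "
                         "and focus areas before it is worth asking.")
    return (
        f"Industry context: {industry}. Time horizon: {horizon}. "
        f"{question} Assess the impact on: {', '.join(focus_areas)}."
    )

print(frame_strategic_prompt(
    "How would expanding into yyy affect us?",
    industry="xxx",
    horizon="five years",
    focus_areas=["supply chain", "profitability"],
))
```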
If we continue to believe that all questions are valuable, we risk drowning in useless AI-generated noise. In this new era, learning how not to ask stupid questions is imperative.
A perspective often lacking in the current discourse is whether a single human can strategise at all. Virtually every limitation we can lob at AI could equally apply to an individual human. It’s accepted that no human locks themselves in a room and emerges later with a strategy.
Formulating a strategy is an iterative process undertaken by a group of humans, deeply dependent on the depth of knowledge in the room and on the facilitator’s ability to extract and channel that knowledge into a coherent strategy. If no individual human can define ‘a strategy’ by following a linear process, why would we think a single GenAI system, or even a single prompt, could do so?
Many early adopters of AI have accepted the principle of human-in-the-loop (HITL), where human judgment is integrated into an AI system’s decision-making process. Instead of AI operating entirely independently, a human oversees, intervenes, or refines its outputs to ensure accuracy, ethics, or strategic alignment. However, HITL may be misplaced when considering GenAI and strategy. Given the strategic process’s iterative and collaborative nature, the correct question may be whether AI is in the room, not whether a human is in the loop.
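To pin the HITL pattern down, here is a minimal sketch, assuming a hypothetical recommendation object and a confidence threshold chosen purely for illustration: an AI-generated recommendation only becomes a decision once a human reviewer approves it, or its confidence clears an agreed bar.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    summary: str
    confidence: float            # model's self-reported confidence, 0..1

def human_review(rec: Recommendation) -> bool:
    """The human-in-the-loop gate: a person approves or rejects."""
    answer = input(f"Approve '{rec.summary}' "
                   f"(confidence {rec.confidence:.0%})? [y/n] ")
    return answer.strip().lower() == "y"

def decide(rec: Recommendation, auto_threshold: float = 0.95) -> str:
    # High-confidence outputs pass automatically; everything else is
    # routed to a human reviewer before it becomes a decision.
    if rec.confidence >= auto_threshold or human_review(rec):
        return "accepted"
    return "rejected"

print(decide(Recommendation("Exit the retail segment", confidence=0.62)))
```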
As AI advances, the human role shifts from manual oversight to strategic decision-making. The challenge is balancing efficiency with responsibility. We need to decide when humans must be in the loop, when AI can be trusted to act alone, and, importantly, whether AI is in the room and valued as a co-author of the strategy we are developing.
It’s often argued that GenAI lacks human intuition and the ability to understand complex social dynamics, which are crucial in strategic decision-making. Other arguments point to AI’s reliance on historical data, which limits its ability to devise truly innovative strategies. AI is powerful, but it’s not infallible. It can generate biased results, misinterpret context, or fail in unpredictable scenarios. Sounds very human, does it not?
However, GenAI has the potential to revolutionise strategic planning. Current capabilities may not yet replace human intuition, creativity, and ethical judgment under all conditions, but the gap is closing fast. The future of strategy will likely involve a synergistic relationship between human strategists and AI, where each complements the other's strengths.
But there’s an alternative future. As AI continues to evolve, we might see autonomous strategic AI systems that can devise and implement strategies with minimal human intervention. We are far from this future, and even then, these systems will look and act very differently from today’s tools. Still, it’s not unfathomable that groups of AI systems, appropriately coordinated by an AI facilitator, could follow an iterative and collaborative process to define an excellent strategy.
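Purely as a speculative sketch of that arrangement, the snippet below shows a facilitator agent coordinating several role-playing agents through rounds of proposal and critique. The roles, the loop structure, and the stubbed ask() call are all invented for illustration; no existing system is implied.

```python
# Speculative sketch: an AI facilitator coordinating specialist agents
# through iterative rounds of proposal and critique.
ROLES = ["market analyst", "CFO", "operations lead", "devil's advocate"]

def ask(role: str, prompt: str) -> str:
    # Stand-in for a call to an LLM with a role-specific system prompt;
    # stubbed here so the loop runs end to end.
    return f"[{role}] response to: {prompt[:60]}"

def facilitate(question: str, rounds: int = 3) -> str:
    """Iterative, collaborative strategy definition led by a facilitator."""
    draft = ask("facilitator", f"Draft an initial strategy for: {question}")
    for _ in range(rounds):
        critiques = [ask(role, f"Critique this draft: {draft}") for role in ROLES]
        draft = ask("facilitator",
                    f"Revise the draft to address these critiques: {critiques}")
    return draft

print(facilitate("Should we expand into a new market?"))
```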
Until this alternative future emerges, be sure to ask if AI is in the room.