Lion Strategy Take
Most AI strategy is expensive superstition
The process by which most organisations decide what to do with AI is built on cognitive shortcuts dressed up as analysis.
The pattern
A competitor announces an AI initiative and suddenly your timeline accelerates. A vendor demo resets everyone's expectations. The success stories are vivid; the failures are invisible. Budget has already been committed, so the real question becomes how to justify it, not whether to continue.
None of this is strategy. It's pattern-matching — the brain doing what it evolved to do, which is reach fast conclusions from incomplete information.
Why this matters
The same cognitive mechanisms that explain how humans perceive, remember and decide are now the foundation of artificial intelligence. Which means the biases that shape how you think are the same biases that shape how AI fails.
A competitor deploys AI (social proof). You're shown an impressive demo (anchoring). You read the success stories (survivorship bias). You feel you understand the technology (the Dunning-Kruger effect). You've already committed budget (the sunk cost fallacy). The risks you worry about are the dramatic ones, not the real ones (the availability heuristic).
Layer these together and you get something that looks like a strategy but is actually a collection of cognitive biases wearing a business case.
The Lion Strategy view
This isn't a criticism of the people making these decisions. These biases are universal. They're features of human cognition, not bugs.
But you can't make good decisions about artificial intelligence until you understand natural intelligence — including your own.
Understanding the cognitive architecture that connects how leaders think to how their AI will perform is the single most undervalued capability in technology strategy today.
This is the first in a series exploring the intersection of cognitive science and AI decision-making. More at The Mind Behind the Machine.