A well-constructed AI roadmap has a specific aesthetic. Confident quarter labels. Milestone names that are one part technical and two parts aspiration — “Foundation Layer,” “Intelligence Platform,” “Scale and Optimise.” A Gantt chart that implies the future is a scheduling problem rather than an unknowable fog. Colour-coded phases that communicate momentum without specifying what, exactly, is moving.
I have reviewed many of these roadmaps. I have sat in the kickoff meetings where they are presented to nodding stakeholders. And I have, on a depressingly regular basis, been brought in six months later when the phases have slipped, the milestone names have been quietly updated, and the colour-coded Gantt chart has been replaced by a document titled “Strategic Realignment — Working Draft.”
The roadmap wasn’t wrong because the engineers were bad or the timeline was unrealistic or the technology wasn’t ready. The roadmap was wrong because it was built on top of a question nobody had answered: does the problem this solves actually exist, and does anyone want it solved this way?
The Slide That’s Never in the Deck
Every AI roadmap deck I’ve ever reviewed has slides for: the market opportunity, the competitive landscape, the technical architecture, the go-to-market strategy, the team, and the timeline. There is one slide that is, in my experience, never in the deck.
The validated problem slide.
Not the “problem we’ve identified” slide — every deck has one of those. The problem is always real, always significant, always described in terms that make it feel inevitable that a solution would be warmly received. I’m talking about the slide that says: here is the problem, here are the three customers we spoke to who confirmed it, here is what they said when we showed them the proposed solution, and here is evidence they would pay or change behaviour to access it.
This slide is absent not because teams don’t believe in validation. They do, in principle. It’s absent because validation takes time, produces inconvenient findings, and delays the part where you get to build something. Building something is more enjoyable than learning that you might be building the wrong thing.
The most expensive slide in the deck is the one that never gets made.
The Vocabulary of Confident Uncertainty
AI roadmaps have developed a specific vocabulary for describing plans that haven’t been fully thought through. It is a beautiful vocabulary. It sounds like strategy but consists mostly of aesthetic choices.
“AI-powered.” This appears in roadmaps as a feature description, as if “powered by AI” is an end-state rather than a mechanism. AI-powered toward what? For whom? With what success metric? The phrase answers none of these questions and asks you not to notice.
“Intelligent automation.” Similarly load-bearing in its vagueness. What is being automated? Which decisions? What does the human do that the automation doesn’t? Is “intelligent” doing any work here or is it the same automation with a better adjective?
“Foundation layer.” Phase one is always the foundation layer. The foundation layer involves infrastructure, data pipelines, model integrations, and platform work that produces nothing visible for several months. This is sometimes genuinely necessary. It is also sometimes a mechanism for spending the first two quarters on technical work while deferring the uncomfortable question of what that foundation will support.
“Scale.” Scale when? To what? At what cost, in what timeframe, measured how? Scale is the Q3 milestone in every roadmap I have seen. It is also the milestone most frequently renamed when Q3 arrives and the product is still in beta with seventeen internal users and a Slack channel called #ai-platform-feedback-internal.
The confidence of an AI roadmap is inversely proportional to the number of actual customer conversations that preceded it. A roadmap built on three customer interviews is tentative, specific, and usually correct about what matters. A roadmap built on market research and enthusiasm is beautiful, comprehensive, and optimised for impressing people who are not going to use the product.
What a Real AI Roadmap Looks Like
It is shorter. Uncomfortably shorter. The first phase is not “Foundation Layer” — it is “Validate that this specific problem exists for these specific users in this specific context.” It has a duration of two to four weeks, not two quarters. The output is a decision: build, pivot, or stop. Not a codebase.
The second phase is a narrow solution to the validated problem, with a specific success metric defined before building begins. Not “users will love it.” A number. A behaviour change. Something that will appear in a dashboard and either be there or not be there six weeks from now.
The third phase is iteration on evidence from real usage, not a predetermined scale plan.
This roadmap does not look impressive in a deck. It does not have a satisfying arc of quarterly milestones. It does not inspire confidence in stakeholders who measure progress by the ambition of the language. It does, however, have a significantly higher probability of producing something that functions in contact with reality.
The Part That’s Actually Fixable
The vibes-with-a-Gantt-chart roadmap is not the result of incompetence. It is the result of incentives. Roadmaps are often made to secure approval, investment, or internal buy-in. These audiences reward ambition. They respond to boldness. They find “we will spend four weeks talking to users before writing a line of code” considerably less inspiring than “Phase One: Intelligent Foundation Layer — Q1.”
The fix is not to make better decks. It is to have the conversation about what success looks like before the deck is built — and to define success in terms that survive contact with the actual product, in the hands of actual users, generating actual measurable outcomes.
If your AI roadmap can’t answer “how will we know in six months if this worked” without reference to the slides, it is a vibes document. A very well-formatted one. With excellent milestone names.
Start with the question. Build the roadmap around the answer. The Gantt chart comes last, if it comes at all.