The CFO asks: "What exactly is the ROI of your innovation department?"

Most innovation leaders face scrutiny: prove results or risk elimination. The dilemma isn't just answering the question—it's that committing to numbers on paper creates accountability. When you inflate projections to satisfy stakeholders and reality diverges, budgets get cut. Yet retreating to short-term thinking leads to lower growth.

The solution is building defensible business cases for innovation projects using three components: cause-and-effect models that map how activities create value, range-based estimates instead of precise guesses, and technical skills to simulate outcomes.

What used to require six-week courses and days wrestling with Monte Carlo simulations can now be done in minutes with AI.

Tristan Kromer, innovation advisor and lean startup expert, demonstrates how AI excels at what humans fail at: estimation. Leaders can now run instant sensitivity analysis to identify their riskiest assumptions mathematically, rather than guessing what to test.

The question transforms from "What is the ROI?" to "What is the probability of success, and is it a bet we're willing to take?"

The Problem With Innovation Metrics

Why finance demands answers innovators can't easily give

The core tension in corporate innovation today is the clash between the uncertainty of the work and the certainty demanded by the organization. Innovation inherently involves unknowns, yet finance requires concrete projections and ROI calculations.

Some innovation leaders have the privilege of operating without strict ROI targets, creating culture programs and hackathons that don't face quarterly scrutiny. Most don't have that luxury. They face the question: What is the ROI of your innovation department? What is the return on this specific project?

"You need to be able to answer this question, and you need to speak in the language of finance," Tristan notes.

Otherwise, "finance is going to come and put a spreadsheet in front of you and say, put numbers in the spreadsheet and prove to me that you're doing a good job."

This request triggers a specific kind of dread: a little fear and anxiety at looking at the spreadsheet and the numbers. The fear isn't just about math; it's about accountability in an unpredictable environment. Innovators worry that "the moment I put it on paper, I'm done."

If you create a "hockey stick" graph to appease stakeholders, you are now on the hook for those numbers. When reality inevitably diverges from that optimistic projection, budgets get cut, and departments get shut down.

However, retreating to short-term thinking is dangerous. "If we only focus on the short term, it does eventually lead to lower growth." To survive, innovators must learn to build business cases that bridge this gap.

What You Actually Need for a Business Case

Three components that satisfy finance without setting you up for failure

To construct a defensible business case—one that satisfies finance without setting you up for failure—you need three distinct elements: a cause-and-effect model, range-based estimates, and the skills to weave those things together.

Ground Truth Through Cause-and-Effect Models

First, you must establish ground truth. You need to articulate how your activities lead to value. This doesn't have to be complex code; it can be visual. "Designers are very, very good at this. There are simple tools like journey maps, which allow you to translate behaviors into tangible outcomes."

Whether it is a user journey map for Instagram or a simple funnel for a food truck, you are mapping logic: step one leads to step two, which converts to step three. Even concepts like culture change or sustainability can be modeled this way.
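As a sketch, the funnel logic described above can be expressed in a few lines of Python. The stage names, conversion rates, and ticket size below are hypothetical, not numbers from the session:

```python
# A minimal cause-and-effect model for a food-truck funnel (hypothetical numbers).
# Each stage converts some fraction of the previous stage into the next.

def funnel_value(passersby, stop_rate, buy_rate, avg_ticket):
    """Map daily foot traffic to daily revenue, step by step."""
    stopped = passersby * stop_rate   # step one leads to step two
    buyers = stopped * buy_rate       # step two converts to step three
    revenue = buyers * avg_ticket     # step three becomes value
    return revenue

print(funnel_value(1000, 0.10, 0.50, 12.0))  # 1000 -> 100 -> 50 -> $600/day
```

The point is not the arithmetic; it is that every assumption in the chain is now explicit and testable.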

Range-Based Estimates Over Point Precision

Once you have the model, you need numbers to populate it. This is where traditional business cases fail. They rely on precise point estimates—e.g., "We will make exactly $10 million."

This insight cuts to the heart of the problem: most business cases are extremely precise but inaccurate. A less precise estimate that is actually accurate would be far more valuable.
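In practice, a range-based estimate can be as simple as a (pessimistic, most likely, optimistic) triple that you sample from, instead of committing to one number. The ranges below are illustrative:

```python
import random

# Represent each estimate as a (low, most_likely, high) range, not a point.
# These numbers are illustrative, not from the session.
daily_customers = (20, 50, 120)     # pessimistic / most likely / optimistic
avg_ticket      = (8.0, 12.0, 18.0)

def draw(est):
    """Sample one plausible value from a range-based estimate."""
    low, mode, high = est
    return random.triangular(low, high, mode)  # note arg order: low, high, mode

sample_revenue = draw(daily_customers) * draw(avg_ticket)
print(round(sample_revenue, 2))  # a different plausible outcome each run
```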

Technical Skills to Connect Model and Data

Finally, you need the technical ability to combine your model and your estimates into a simulation that reflects reality. Historically, this meant mastering Excel or Monte Carlo simulations.

Why We Suck at Estimating

The overconfidence problem that destroys business cases

The second component—estimates—is where humans consistently fail.

"Everybody thinks they're better than average drivers, and that's clearly untrue."

When asked to estimate something they don't know—like the height of the Burj Khalifa—people typically guess a single number. More critically, they fail to acknowledge how wrong they might be: people tend to be massively overconfident in their estimates.

This overconfidence seeps into business cases. We underestimate risks and overestimate returns. And because our inputs are flawed, our outputs are useless. "Garbage in, garbage out. If you create a massive Monte Carlo simulation and you put terrible estimates in, you're still going to get a terrible estimate out."

Thinking in Probabilities and Uncertainty

Move from sniper precision to meteorologist ranges

To fix our broken estimation process, we need to stop acting like snipers hitting a bullseye and start acting like meteorologists.

"The weather people don't tell us it's going to rain at 5:22pm. They give us probabilistic estimates." They tell us there is a 70% chance of rain, or that the temperature will be between 60 and 65 degrees.

This approach creates a "cone of uncertainty." As you look further into the future—like the path of a hurricane—the range of possible outcomes widens. "That's a wonderful way of expressing and thinking about innovation projects — a cone of uncertainty."

Instead of promising a specific ROI, you present a range. You might say a project could lose $20 million or make $60 million, with the most likely outcome somewhere in the middle.

"You are giving them your level of uncertainty, the level of risk in the project... and that is valuable information for disaster planning, but also opportunity planning."

Moving forward, to generate these ranges mathematically, we use Monte Carlo simulations—thousands of scenarios based on range-based estimates rather than single numbers.
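A minimal Monte Carlo sketch of that idea, assuming a food-truck funnel with made-up ranges, looks like this:

```python
import random

# Monte Carlo sketch: run the funnel thousands of times with range-based
# inputs and read the spread of outcomes. All ranges are hypothetical.
def simulate_once():
    passersby = random.triangular(500, 2000, 1000)   # low, high, mode
    stop_rate = random.triangular(0.05, 0.20, 0.10)
    buy_rate  = random.triangular(0.30, 0.70, 0.50)
    ticket    = random.triangular(8.0, 18.0, 12.0)
    return passersby * stop_rate * buy_rate * ticket

outcomes = sorted(simulate_once() for _ in range(10_000))
p10, p50, p90 = (outcomes[int(len(outcomes) * q)] for q in (0.10, 0.50, 0.90))
print(f"daily revenue: p10={p10:.0f}  median={p50:.0f}  p90={p90:.0f}")
```

The p10–p90 spread is the cone of uncertainty in numeric form: the same model, presented as a range rather than a promise.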

What AI Is Good At

Use AI where humans fall short: estimation

This is where the recent explosion in AI capabilities becomes transformative. It turns out that AI is surprisingly good at the very thing humans are bad at: estimation.

Recent testing of AI models (specifically GPT-4 and newer reasoning models) on their ability to estimate known quantities from the future (relative to their training data), such as 2024 Olympic records or corporate sales figures, reveals something remarkable. It turns out you can prompt this thing to be a much better estimator than humans.

The AI performs well when asked to provide 90% confidence intervals—ranges wide enough to include the correct answer 90% of the time. While humans tend to make their ranges too narrow (overconfidence), AI models like GPT-4.1 are hitting near-perfect calibration.

"It is essentially doing wisdom of the crowds at scale." By accessing a vast dataset, the AI simulates the average guess of a massive population, which often yields a highly accurate result.

However, you cannot just ask the AI a simple question and expect magic. "You have to prompt it well. You have to use ranges and confidence intervals. Don't just ask it point estimates."
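That prompting pattern can be captured in a short template. The exact wording below is an illustration of the "ask for ranges and confidence intervals" advice, not a prompt from the session:

```python
# A hedged sketch of the prompting pattern: ask the model for a 90%
# confidence interval, never a single number. Wording is illustrative.
def interval_prompt(quantity: str) -> str:
    return (
        f"Estimate the following quantity: {quantity}.\n"
        "Do not give a single number. Give a 90% confidence interval: a low\n"
        "and a high bound wide enough that you expect the true value to fall\n"
        "inside the range 9 times out of 10. Reply as: low=<value>, high=<value>."
    )

print(interval_prompt("annual revenue of a typical US food truck, in USD"))
```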

Beyond generating the initial numbers, AI can update them dynamically: you can ask it to revise estimates as real data comes in. AI can help you make the estimates and calculate the probabilities of certain outcomes, which puts you, the human, in a strong position to make good strategic decisions.

What used to require days of wrestling with spreadsheets can now be done in a few hours.

In a recent demonstration, modeling a food truck business illustrated this shift. "Typically, this often takes a human being an hour to come up with a journey map... now we can do it in a few seconds." The AI can instantly generate the visual cause-and-effect model (the customer journey) and then populate it with estimates. By feeding the model into an AI-enabled tool, you can run the Monte Carlo simulation immediately and define goals to calculate probability of achievement.

Why This Is a Game-Changer

Instant scenario modeling and sensitivity analysis

The speed and ease of this process unlock a new level of strategic capability for innovation leaders, who can now model any number of complicated scenarios.

You can instantly see the probability of success for different business models. You can compare a "sure bet" (like a food truck with a high probability of making a small profit) against a "moonshot" (like a tech platform with a low probability of a massive return).
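That comparison can be simulated directly. The sketch below assumes made-up payoff distributions for both bets, just to show the mechanics:

```python
import random

random.seed(7)  # reproducible sketch

# Compare a "sure bet" against a "moonshot" on the same axis: probability
# of ending up profitable. All numbers are illustrative, not from the session.
def food_truck():
    # Modest profit, rarely a loss.
    return random.triangular(-10_000, 60_000, 30_000)  # low, high, mode

def moonshot():
    # ~10% chance of a massive return, otherwise a large loss.
    return 60_000_000 if random.random() < 0.10 else -20_000_000

def p_profit(model, n=10_000):
    return sum(model() > 0 for _ in range(n)) / n

print(f"food truck P(profit) = {p_profit(food_truck):.0%}")  # high probability, small win
print(f"moonshot   P(profit) = {p_profit(moonshot):.0%}")    # low probability, huge win
```

Neither answer is "the ROI"; each is a probability of success attached to a payoff, which is exactly what a bet-taking decision needs.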

Estimates that used to take hours or days can now be generated in as little as five minutes—a significant boost to decision-making speed.

Furthermore, you can run a sensitivity analysis instantly. This tells you which variable in your model has the biggest impact on the outcome.

Instead of guessing what to test, or shoving a business model canvas in front of a growth board and saying, "make a decision based on sticky notes," you can mathematically identify the riskiest assumption.
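A crude one-variable-at-a-time sensitivity analysis can be sketched like this, using hypothetical ranges for a food-truck funnel; the bigger the swing, the riskier the assumption:

```python
# Rough sensitivity analysis: swing each input across its range while holding
# the others at their most-likely values, and see which one moves revenue
# the most. All ranges are hypothetical.
RANGES = {                       # (low, most_likely, high)
    "passersby": (500, 1000, 2000),
    "stop_rate": (0.05, 0.10, 0.20),
    "buy_rate":  (0.30, 0.50, 0.70),
    "ticket":    (8.0, 12.0, 18.0),
}

def revenue(v):
    return v["passersby"] * v["stop_rate"] * v["buy_rate"] * v["ticket"]

base = {k: mode for k, (low, mode, high) in RANGES.items()}
for name, (low, mode, high) in RANGES.items():
    swing = revenue({**base, name: high}) - revenue({**base, name: low})
    print(f"{name:>10}: swing = {swing:8.0f}")  # biggest swing = test this first
```

The variable with the largest swing is the assumption your experiments should attack first.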
