Innovation teams that cannot measure their impact do not stay funded for long.
Yet most are either drowning in KPIs or flying blind with none at all. Both states lead to the same place: budgets cut, credibility lost, and no way to prove the work is worth continuing. The data exists in ERP systems, financial controllers' spreadsheets, and pipeline tools, but without the right framework to make sense of it, it stays invisible.
Dan Toma, CEO and Partner at OUTCOME, proposes a minimum viable system of 12 indicators that cuts through both problems, answering the questions boards actually ask and giving CFOs something real to look at.

To watch the full session recording, upgrade to Premium
The Minimum Viable System Architecture
The minimum viable system divides into distinct layers that correspond to different levels of organizational decision-making. These are "three plus one layers: strategic, funnel, tactical, and culture."
The Strategic Layer addresses the needs of the board. The overarching question at this level is existential: "Is our innovation system working? Does it make sense for us to continue investing in innovation, or should we do something else with the money?"
One layer down is the Funnel Layer, which aggregates data from the teams below and feeds insights up to the strategic level. It is primarily concerned with the health of the pipeline and asks, "How is our portfolio doing? How healthy is our portfolio?"
The Tactical Layer focuses on the specific needs of teams. It answers questions about performance comparison and progress, such as, "How is this team performing versus the other?"
Finally, the Culture Layer sits across all three rather than sitting above or below them. It asks whether the organization has the conditions needed for innovation to actually happen — the psychological safety, the governance, and the appetite to stop ideas that are not working.
Connectivity between all four is non-negotiable. Each layer needs to feed the one above it — from what teams are learning, to the health of the portfolio, to the board's confidence in the innovation investment. A metric that exists only at the top, with no roots below, is just a number disconnected from reality.
The Strategic Layer: Answering the Board
At the top of the hierarchy, five specific KPIs help innovation leaders communicate value to the C-suite and finance departments.
Average Funnel Conversion Rate
This metric tracks how the funnel converts ideas from inception to businesses in the market. It tells you if you are launching ideas that customers actually want. This metric is particularly important for goal setting with the CFO. If a CFO allocates a $10 million budget, knowing your conversion rate allows you to manage expectations. Dan explains, "If you have the average funnel conversion rate, you can predict how many of those ideas will make it to market."
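To make that prediction concrete, here is a minimal Python sketch of the arithmetic; the budget, cost per idea, and conversion rate are hypothetical figures for illustration, not numbers from the session.

```python
# Back-of-the-envelope sketch: expected launches from a given budget.
# All figures below are hypothetical, purely for illustration.

budget = 10_000_000            # CFO allocation in dollars
avg_cost_per_idea = 500_000    # average spend per idea entering the funnel
funnel_conversion_rate = 0.10  # 10% of ideas reach the market

ideas_funded = budget // avg_cost_per_idea
expected_launches = ideas_funded * funnel_conversion_rate

print(f"{ideas_funded} ideas funded, ~{expected_launches:.0f} expected to reach the market")
# -> 20 ideas funded, ~2 expected to reach the market
```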
Average Time to Sustain
The term "sustain" is preferred over the traditional "time to market." Dan argues, "I don't like to use the word market, because you can be in the market with a minimum viable product." Sustain refers to the level of maturity where an idea moves from the Explore portfolio to the Exploit portfolio. This metric tells leadership "how long before you're going to see some ROI from your investments."
Investment Distribution
This indicator reveals how the budget is allocated across different portfolio slices, such as core, adjacent, and transformational. It acts as an early warning system for disruption. Dan cautions, "If 99% of your investments go to core, that's not really a good idea, because disruption actually happens outside core." If a company is in a position to be disrupted, this metric will be "bleeping in front of you saying, 'Hey, watch out.'"
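A minimal sketch of how that early-warning check might look in practice; the portfolio figures and the 90% threshold are illustrative assumptions, not values Dan prescribes.

```python
# Hypothetical split of innovation investment across portfolio slices.
investments = {"core": 9_900_000, "adjacent": 80_000, "transformational": 20_000}

total = sum(investments.values())
distribution = {slice_: amount / total for slice_, amount in investments.items()}

for slice_, share in distribution.items():
    print(f"{slice_}: {share:.0%}")

# Threshold chosen for illustration only, not a prescribed cutoff.
if distribution["core"] > 0.90:
    print("Warning: nearly all investment sits in core; disruption happens outside core.")
```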
Aggregated Estimated Impact
This is the aggregate valuation of the pipeline. While it is an estimate, it serves a critical political function. It gives "your CFO something to look at—something that connects your activity with what they care about most." It justifies the continued existence of the innovation department by showing the potential future value of the current portfolio.
New Product Vitality Index (NPVI)
The NPVI measures how much current revenue comes from products that did not exist in the portfolio three to five years ago. It is a metric used by major corporations like Cisco, DuPont, and 3M. Dan notes, "NPVI is one of the best KPIs for executives." It facilitates a specific conversation with finance: "How much are you expecting us to return to the company in three to five years? That's the conversation NPVI enables."
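The ratio itself is straightforward. Here is a small sketch with hypothetical revenue figures; only the three-to-five-year window comes from the session.

```python
# Hypothetical revenue figures; the 3-5 year window is the one Dan cites.
total_revenue = 2_000_000_000
revenue_from_recent_products = 300_000_000  # products launched within the last 3-5 years

npvi = revenue_from_recent_products / total_revenue
print(f"NPVI: {npvi:.0%}")  # -> 15% of today's revenue comes from recent products
```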
The Funnel Layer: Portfolio Health and Governance
Moving down to the funnel level, metrics diagnose the health of the innovation process itself.
Number of Ideas in Each Stage
Simply counting ideas in each stage of development provides a clear indication of how far the organization is from launching new businesses. It informs resource allocation, telling leadership if they need a new call for ideas or if the funnel is suffering from a lack of early-stage concepts.
Number of Ideas Stopped in Each Stage
This metric acts as a check on governance and culture. It asks, "What's the survivability rate of each stage of development?" Dan points out that "if you're stopping ideas very late in the funnel, it might mean your governance is wrong or your culture doesn't allow early stopping." It also helps identify if the organization is suffering from sunk cost bias.
Average Time Spent in Each Stage
This highlights which stages of the product lifecycle are the most difficult to pass. It serves as a benchmark for teams. "If you see a team learning a lot and presenting those learnings back... they're going to beat this average time spent to clear the stage." Conversely, if a team is dragging their feet, it allows leadership to intervene.
Average Cost of Failure
Most boards ask the wrong question about innovation performance. They want to know the conversion rate, or how many of the ideas that entered the funnel actually made it through. The problem is that a high conversion rate can just mean the organization is not stopping bad ideas early enough. A better question is what each failed idea actually cost.
The cost of failure determines how many bets an organization can place in a year. With a $10 million budget and a cost of failure sitting at $5 million per initiative, the math is brutal: two ideas a year, maximum. A startup with the same budget but a cost of failure ten times lower is running ten experiments to every one. Over time, that gap is impossible to close.
The number itself is not complicated to calculate. Look back three to five years, identify which innovation projects were stopped, find what was spent on them, and divide. The hard part is navigating the organization to get there — finding the right financial controllers, pulling the right project codes out of the ERP system, and separating innovation spend from everything else.
Dan did exactly this at a large telco where the cost of failure was sitting at $1.2 million per initiative. By changing the governance and the way the team operated, they brought it down to between $600,000 and $800,000. The budget did not change, but the number of ideas they could afford to explore did.
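The arithmetic behind both numbers, the average cost of failure and the number of bets a budget can cover, fits in a few lines. The sketch below uses made-up project figures, not the telco's actual data.

```python
# Minimal sketch of the calculation described above, with hypothetical figures.
# Spend per stopped initiative, pulled from ERP project codes over the last 3-5 years.
stopped_project_spend = [1_400_000, 900_000, 1_100_000, 1_600_000]

avg_cost_of_failure = sum(stopped_project_spend) / len(stopped_project_spend)

# How many bets the same annual budget buys at that cost of failure.
annual_budget = 10_000_000
affordable_bets = annual_budget // avg_cost_of_failure

print(f"Average cost of failure: ${avg_cost_of_failure:,.0f}")
print(f"Bets the budget can cover per year: {affordable_bets:.0f}")
```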
The Tactical Layer: Measuring Team Performance
At the team level, measuring "learning velocity" directly invites gaming the system. Instead, metrics assess confidence and potential:
Holistic Confidence
Holistic confidence is the inverse of risk. The higher the confidence score, the lower the perceived risk of continuing to invest in an idea. The lower the score, the more uncertainty remains.
What makes this metric useful is who assigns it. It is not self-reported by the team — it is evaluated by the decision maker based on how much the team's validation work has actually shifted their thinking. A team that is learning, testing assumptions, and presenting those findings back will move the needle. A team that is not will see their confidence score stall.
This also makes holistic confidence harder to game than metrics like learning velocity, where teams can start labeling everything as a learning just to hit a number. Here, the question is simpler and more honest: has this team's work made you more confident in the idea? If yes, by how much?
Estimated Impact (Risk-Adjusted)
Teams should provide estimates of revenue or cost savings. When leaders multiply this estimate by the holistic confidence score, they get a "risk-adjusted estimation." This helps track if estimations change as teams learn more, providing a grounded view of the potential value.
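A minimal sketch of the risk adjustment, with hypothetical team estimates and confidence scores assigned by the decision maker.

```python
# Hypothetical team estimates paired with the decision maker's confidence score (0-1).
teams = [
    {"name": "Team A", "estimated_impact": 5_000_000, "holistic_confidence": 0.6},
    {"name": "Team B", "estimated_impact": 20_000_000, "holistic_confidence": 0.1},
]

for team in teams:
    risk_adjusted = team["estimated_impact"] * team["holistic_confidence"]
    print(f"{team['name']}: risk-adjusted estimate ${risk_adjusted:,.0f}")
# A large headline number with low confidence can rank below a modest, well-validated one.
```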
The Culture Layer: Assessing the Environment
In corporate environments, raising funds for an idea is hard. Stopping one is equally hard, and that second problem is the one that quietly destroys innovation budgets.
Average time to kill measures how long the organization takes to stop an idea, and a high number points to one thing above all: fear. People are afraid to raise their hand and admit when something isn't working.
That fear has a direct financial consequence. The longer it takes to kill an idea, the more gets spent on it, and the higher the average cost of failure climbs. The two metrics move in the same direction, which means fixing one requires fixing the other. An organization that takes months or years to stop a failing idea is wasting money and reducing the number of new ones it can afford to try.
Implement, But Don't Start Fresh
Implementing innovation accounting should be treated as a "change management initiative." Wiping the slate clean is a mistake. "Start from the KPIs you already track—don't say, 'Scrap that, we're starting fresh.' People will hate your guts for doing that."
Stakeholders can be categorized into supporters, detractors, and neutrals. "Don't waste energy trying to convince detractors. Let them be convinced by the success of what you're building." Focus on the supporters and the neutrals instead.
The volume of metrics matters.
"Having too many KPIs or no KPIs is the same thing—you'll drown either way."
The goal is to keep the ones connected to the value creation system and cut everything else.
The ultimate purpose of these metrics is not just reporting, but triggering smarter conversations. Metrics like the Average Cost of Failure and the New Product Vitality Index are powerful because they force a confrontation with reality. They ask: Are we learning efficiently? Are we actually regenerating our business?
"Humans are wired to game KPIs—like students studying only the part of the chapter that's on the test."
Therefore, the selection of these 12 metrics is intentional. They are difficult to game without actually doing the work of innovation: validating, learning, and stopping failing ideas early.
For leaders preparing to implement this, the rule is clear: do not try to build the perfect system overnight. Start today with the minimum viable set and connect the tactical work of teams to the strategic needs of the CFO.

