Innovation budgets get cut because innovation teams can't prove their worth.

Ideas generated, pilots shipped, engagement rates. None of it answers the one question that actually matters. The problem is not that innovation is not creating value. The problem is that we lack a systematic way to measure that value before it is fully realized.

In a recent Community session, Simon Hill introduced a fix: Expected Value, a single, currency-denominated metric that finally makes innovation legible to finance. Find out how this works in practice below.

Hans Balmaekers
Founder, the Compass and Chief @ Innov8rs

PS: Would love to hear back from you. How is the Compass useful in your day to day? Anything in particular you want us to improve or do more/less of?

Checking the pulse… share your answer to this quick poll.

Expected Value: The Innovation Metric That Finally Makes Sense to Finance

"What is this actually worth?"

It's the five-word question every CFO eventually asks in the innovation budget meeting, and the one nobody on the team has a clean answer to.

Simon Hill, founder of Wazoku and author of Expected Value, has spent more than a decade watching this scene repeat: "We see big, thick lines being drawn through our innovation budgets, because we've struggled to show the value we've been creating," he says.

One popular sport solved its version of this problem a decade ago: football. For most of the game's history, arguments started and ended with the scoreline. A team could outplay an opponent for ninety minutes, lose 1-0 to a deflection, and the coach was gone by Monday. The result was the only number that counted, and it often had little to do with the performance underneath it.

The fix was a metric called xG, or expected goals. xG looks at every shot a team takes and asks how often a chance like this ends up in the net. Added up across a match, it gave football a leading indicator that held up when the scoreline didn't, and it now sits on screen during every broadcast.

Innovation boardrooms did not have a model like this. What Simon eventually built, after years of research, is the innovation equivalent of xG, called xV, or Expected Value. xV is a single number, denominated in currency, that a CFO will accept, and that innovation teams can use to decide which projects to back, which to kill, and which to wait on.

The framework, a formula with four variables and one ratio, sets out to finally establish one shared language between innovation and finance. It also sets the foundation for what comes after: tracking whether the value forecasted by xV actually materializes in the business, what Simon calls Realized Value, or rV.

This piece walks through the whole framework in detail, so that next time your innovation team encounters the value question in a meeting, the answer is already written on the page.

When the Metrics Don't Match the Question

In his book, "Expected Value: The System to Measure, Prove and Scale Value," Simon tells the xV story through a single character who represents the many innovation managers he has worked with over the past decade.

She is the Head of Innovation at a mid-sized company. On paper, she has everything an innovation leader could want: a board-level sponsor, a team, a budget that has survived three years of scrutiny. Her team has had a good year by the usual measures: two hackathons, hundreds of ideas, four pilots shipped, engagement up 40%.

Then the CFO asks: "That's great. But what's the value of all this?" She has no answer.

The pilots have produced outputs she cannot describe and outcomes she cannot quantify. One has been quietly shelved. Another sits in the murky zone between "still learning" and "probably dead." Her scorecard measures what innovation teams have always measured — idea volume, engagement, pilot count, buzz — and none of it answers the question on the table.

Within eighteen months of that review, the team's budget is cut in half. Two of the four pilots have been killed without any structured learning captured. One of her best people has left for a competitor. The Head of Innovation herself is reviewing whether the role is still worth staying in.

She goes looking for the answer she should have had in that meeting. Most of the frameworks she finds measure what Simon calls velocity: time-to-market, throughput, idea counts, participation rates, engagement statistics, occasional success stories. Simon's argument is that none of these measures the thing innovation teams are supposed to produce in the first place, which, in his definition, is new value created.

Motion is not value and activity is not impact, he continues. What innovation should really be measured on is the currency it produces.

That was the system the innovation leader was looking for, but it did not yet exist in innovation. It already existed, as it turned out, in sport.

From Sport to Strategy

The breakthrough came from outside innovation entirely. In the book, the innovation leader's brother is a data scientist who spent his career in sports analytics, working with xG every day.

Innovation, he points out, has exactly the same problem football did before xG. The "scoreline", whether a pilot got shipped or a platform launched on time, is a lagging indicator, and often a misleading one. What innovation leaders need is a leading indicator of value, calculated from the evidence available now, that updates as new evidence comes in. They call this xV, or Expected Value.

The formula they end up with does for innovation what xG did for football: it turns a fuzzy, debatable, politically contested conversation into a number that can be tracked, compared, and defended.

The xV Formula

The complete xV formula looks like this:

xV = Confidence × Predicted Value × Time Sensitivity × Strategic Fit

Visual 1. The xV System at a glance: Formula, efficiency metric, and benchmarks in one view.

Four variables, multiplied together, produce a single monetary figure. The first three answer the question: is this a good idea? The fourth answers: is this a good idea for us?

Before going component by component, two properties of the formula are worth sitting with.

  • xV is dynamic, not static. The number produced on Monday in a portfolio review is not the number produced six weeks later, after a customer validation sprint, a competitor launch, or a regulatory shift. xV recalculates on a cadence, whether weekly, biweekly, or monthly, depending on the speed of the environment. The same discipline a good sales team applies to pipeline forecasting is the discipline xV asks innovation teams to apply to their portfolio. Simon draws a parallel: no good sales leader runs a once-a-year forecast. Innovation shouldn't either.

  • xV is used by everyone in the chain. This is the piece most frameworks get wrong: they are built for the innovation team alone, not for the people who fund, sponsor, and deliver the work. In xV, each role has a job in the system. The project champion scores the variables and documents the evidence. The sponsor, usually a business-unit leader or executive, signs off on the scores with their name attached. The innovation leader uses xV across the portfolio to decide allocation. Finance uses xV and the cost per xV point ratio (we'll come to this) to approve or deny the capital. The number is the common language between them.

With these properties defined, it’s time to see how the four variables unfold within the formula.

Confidence, the evidence factor

Most innovation teams lose the conversation with finance on this variable alone, because they've been trained to conflate two different things.

Felt confidence is the conviction of the project champion, the person pitching the idea. It's built on months of thinking, customer conversations, competitive scanning, and instinct. It's what makes a founder pitch on stage with absolute certainty the idea will change the world. Simon, who has been that founder many times, knows it well.

Evidenced confidence is what's left when the enthusiasm is stripped away and the question becomes: what has actually been validated, with data, that a disinterested third party would accept as proof?

For most early-stage ideas, honestly assessed, the answer is: very little.

The Confidence variable in xV is scored on a scale from 0 to 1.0, based entirely on evidenced confidence, across six dimensions. A project gets a full Confidence score only when each dimension has real, documented evidence attached:

  1. Technical feasibility: the capacity to build this reliably and at scale.

  2. User desirability: evidence real users want it, use it, and will pay for it.

  3. Market viability: a sustainable commercial model with unit economics that work.

  4. Operational delivery: the ability to ship inside existing operations.

  5. Implementation readiness: the team, infrastructure, and processes needed to deploy.

  6. Regulatory compliance: a path through the legal and regulatory landscape.

Each dimension is scored independently against documented evidence, including prototypes tested, willingness-to-pay signals from real customers, pilot data, capability assessments, and regulatory reviews.
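To make the scoring concrete, here is a minimal sketch in Python. The book presents the matrix as a scoring discipline rather than software, so the simple-average aggregation and the specific dimension scores below are illustrative assumptions; they happen to reproduce the 0.3 result in the healthcare case study later in this section.

```python
# A minimal sketch of the six-dimension Confidence score.
# Assumption: the six dimension scores are aggregated by simple average;
# the book treats aggregation as a scoring discipline, not a fixed formula.

CONFIDENCE_DIMENSIONS = [
    "technical_feasibility",
    "user_desirability",
    "market_viability",
    "operational_delivery",
    "implementation_readiness",
    "regulatory_compliance",
]

def confidence_score(scores: dict[str, float]) -> float:
    """Aggregate six evidence-based dimension scores, each in 0.0-1.0."""
    missing = [d for d in CONFIDENCE_DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"No evidence recorded for: {missing}")
    total = sum(scores[d] for d in CONFIDENCE_DIMENSIONS)
    return round(total / len(CONFIDENCE_DIMENSIONS), 2)

# Illustrative scores anticipating the healthcare case study below; the
# last three dimensions are picked from the "between 0.2 and 0.4" range.
ai_diagnostic_assistant = {
    "technical_feasibility": 0.6,   # prototype works in controlled conditions
    "user_desirability": 0.2,       # enthusiasm, but no willingness-to-pay evidence
    "market_viability": 0.2,        # no pricing model tested
    "operational_delivery": 0.3,
    "implementation_readiness": 0.3,
    "regulatory_compliance": 0.2,
}

print(confidence_score(ai_diagnostic_assistant))  # 0.3
```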

Confidence Scoring Table

From "assumed belief" at 0.0 to "proven and predictable" at 1.0.

The matrix is meant to be applied at two points in a project's life. First, at the idea-to-pilot gate, when the team is deciding whether to move from concept into a funded pilot. The matrix tells the team honestly what confidence level the evidence has earned, and what evidence needs to be gathered to move up the scale. Second, at every portfolio review from that point on, as the recurring check on whether Confidence is rising, flat, or falling.

A project whose Confidence stays at 0.3 for two quarters is telling the team something. Usually, that nobody has done the hard work to generate the next round of evidence.

Confidence Quotient | Description
0.0 – 0.1 | Assumed belief. The idea is based on intuition or early hypothesis with no supporting evidence. Assumptions are untested; potential value is speculative. Insight comes mainly from internal experience or anecdote rather than validation.
0.1 – 0.2 | Early hypothesis formation. The problem and solution have been articulated, and limited qualitative evidence (e.g., a few interviews or secondary data) supports the need. However, no experiments or prototypes have been conducted. Uncertainty remains very high.
0.3 – 0.4 | Early validation with partial supporting evidence. Core assumptions have been tested through limited prototypes, pilots, or user trials, showing that the concept works in principle. Some uncertainty has been reduced, and early stakeholders or users express interest or alignment. Confidence remains moderate, as scalability is still unproven.
0.5 – 0.6 | Structured validation. Multiple tests or pilots have produced consistent results. Technical feasibility and user desirability are evidenced; early indicators of commercial or operational viability are emerging. Key risks are being managed through iterative learning. Confidence is building but not yet proven at scale.
0.7 – 0.8 | Demonstrated performance. Evidence from real-world pilots or controlled rollouts shows clear value creation, with measurable results tied to performance or adoption. Replicability is improving and most major risks have been reduced. The innovation is ready for broader scaling or investment.
0.9 – 1.0 | Proven and predictable. The innovation has demonstrated consistent results at scale, with robust data validating its value and sustainability. Confidence is high, uncertainty is minimal, and the focus shifts from validation to optimisation and exploitation.

Case study: the healthcare AI diagnostic assistant

A large healthcare company applied the framework to a live project. The team was six months into an AI-powered diagnostic assistant, with significant internal excitement, senior-level backing, and roughly $400,000 already committed. Simon walked them through their plans and asked them to run the Confidence matrix honestly, dimension by dimension.

The results were stark.

Technical feasibility scored 0.6, because the prototype worked in controlled conditions. User desirability scored 0.2, because clinician enthusiasm from a few conversations was not the same as evidenced willingness to pay from the hospitals that would actually buy the product. Market viability scored 0.2, because no pricing model had been tested. Operational delivery, implementation readiness, and regulatory compliance each scored somewhere between 0.2 and 0.4.

Aggregated, the Confidence score came in at 0.3.

That single number did more to refocus the project than the previous twelve months of steering-committee reviews. It pointed the team at the one thing that needed to happen next: stop investing in technical development, and go generate willingness-to-pay evidence from the hospitals, or stop investing entirely. Six weeks later, signal from two hospital pilots had lifted Confidence to 0.5, and the project had earned the right to continue.

The lesson for innovation leaders sits inside this story: low Confidence is not a reason to stop. It's a reason to know where to focus. xV turns the usual anxiety about an early-stage idea into a specific, fundable learning activity.

Predicted Value, the potential upside

Confidence tells the team how much of the idea has been validated. Predicted Value tells them what the idea is worth, if that validation holds up.

One without the other is useless.

Confidence of 0.9 on something worth $100,000 is a rounding error. Confidence of 0.3 on something worth $50 million is a genuine opportunity, if the innovation leader knows how to invest behind building the evidence.

Predicted Value is the monetary upside if the idea succeeds, typically measured over a three-year horizon. Simon takes the three-year frame from the vitality index, a measure popularized by 3M and now used widely across R&D-heavy companies, which tracks how much of a company's current revenue comes from products launched in the last three years. Three years is long enough for a new product to compound, but short enough to still be recognizable as new.

And this is the variable where teams most often lose discipline; they confuse the size of the opportunity with the size of their claim on it.

Here's how it usually plays out. The Total Addressable Market deck says the opportunity is $50 billion. The project champion, the person whose job depends on getting the pilot funded, sincerely believes the company can capture $500 million of it. The skeptic at the back of the room, usually someone from finance, mumbles that $5 million is more realistic.

A familiar pattern illustrates exactly this dynamic going wrong: Project Phoenix, a celebrated AI prototype one organization loved, consumed $800,000 of investment without delivering a single cent of evidenced value. Meanwhile, an internal customer service tool that nobody had branded as "breakthrough" was quietly generating $260,000 in annual productivity savings. The buzz was on the wrong project.

The xV discipline cuts through the noise. The right number at the early stage is almost always what the team can see right now, not what they can imagine.

Simon's test for Predicted Value has three questions the team needs to answer, with evidence:

  • What specific mechanism generates the value? Is it a cost line eliminated? A revenue stream captured? A risk retired? A premium extracted? One mechanism, concretely named.

  • How big is it, sized from evidence? Pilot data. Signed pipeline. Comparable deals. Not a TAM calculation multiplied by "10% market share."

  • Over what timeframe does it materialize? Calibrated over three years from launch, not from today.

When the team can answer all three, the calculation is straightforward. Sum the cost savings, the new revenue, and any quantified risk or strategic value across the three-year horizon. That total, in currency, is the Predicted Value that goes into the xV formula.

When the team cannot answer all three, Simon presents a fallback: a t-shirt framework with pre-defined dollar bands for the organization. A typical set: Small under $1M, Medium $1M to $10M, Large above $10M. Each band is tied to a profile. Small affects a single department, Medium affects multiple departments or a significant customer segment, and Large is transformative at the organizational level.

The t-shirt approach is deliberately imprecise since precision at the early stage is, most of the time, a false signal.
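Here is a minimal sketch of both paths, assuming the "typical set" of bands quoted above; the representative planning figure attached to each band is an illustrative assumption, not part of the framework.

```python
# A sketch of the two Predicted Value paths: the evidence-based sum over a
# three-year horizon, and the t-shirt fallback when evidence is thin.

def predicted_value(cost_savings_3yr: float,
                    new_revenue_3yr: float,
                    quantified_risk_or_strategic_3yr: float = 0.0) -> float:
    """Sum the evidenced value mechanisms across three years from launch."""
    return cost_savings_3yr + new_revenue_3yr + quantified_risk_or_strategic_3yr

# T-shirt bands from the "typical set" above. The representative planning
# figure attached to each band is an illustrative assumption.
TSHIRT_BANDS = {
    "S": 500_000,      # under $1M: affects a single department
    "M": 5_000_000,    # $1M-$10M: multiple departments or a significant segment
    "L": 20_000_000,   # above $10M: transformative at the organizational level
}

print(predicted_value(cost_savings_3yr=1_200_000, new_revenue_3yr=800_000))  # 2000000.0
print(TSHIRT_BANDS["M"])  # fallback when the three questions can't yet be answered
```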

Who does what in this process matters as much as the method. There is a clean division of responsibility between roles here that's worth making explicit:

  • The project champion estimates Predicted Value using the framework.

  • The sponsor, the business-unit leader, signs off on it.

  • The innovation leader uses it across the portfolio.

  • Finance accepts it, or challenges it with evidence-based questions.

Ambition belongs to the team that has to build the thing. Predicted Value belongs to the system that has to decide whether to fund them.

Time Sensitivity, the urgency factor

Predicted Value tells the team what the idea is worth. Time Sensitivity tells them whether the moment matters.

This is the variable many innovation teams get wrong. They carry a default bias toward acceleration: every project has an "urgent" stamp on it somewhere, every sponsor wants their thing moving faster. xV flips the question: why are we not going slower?

Time Sensitivity Table

Time Sensitivity is a multiplier that adjusts the base xV up or down based on the timing pressure on the idea. The scale runs from 0.7 to 1.5, and every band has a specific meaning.

Score | Meaning | Typical window
0.7-0.8 | Strategic delay, better off waiting | 12+ months
0.9 | Slight delay, implementation in the medium term | 6-12 months
1.0 | Standard timing, normal development cycle | 3-6 months
1.1-1.2 | Moderate urgency, early signs of competitive movement | Prioritize within a quarter
1.3-1.4 | Significant urgency, clear market window or regulatory deadline | 3-6 month window
1.5 | Critical urgency, now or never | 1-3 month window

Inside an innovation team, the mechanic runs like this:

  • The project champion proposes a Time Sensitivity score, grounded in specific market or regulatory evidence, and documents why.

  • The sponsor signs off with their name attached.

  • The innovation leader uses the scores across the portfolio to sequence work and allocate scarce fast resources: engineering capacity, budget headroom, go-to-market support.

Note that before any idea earns a Time Sensitivity above 1.0, two tests apply:

  • The regulatory or market-window clock. Is there a specific date by which action is required, or a window after which the opportunity closes? If yes, the team needs to quantify what each month of delay costs.

  • The first-mover test. Is there evidence, not assertion, that being first into this market is materially more valuable than arriving once the category has opened up? In most categories, it is not. The companies that arrive second or third often win by learning from the first mover's mistakes.

If neither test is met, the score stands at 1.0. If both are met, the score is probably somewhere above 1.2. And if the team cannot quantify the cost of going slower, the default stance is that slower is fine.
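As a sketch, that gating logic might look like this in code. The 0.7-1.5 bounds and the both-tests rule come from the text; how a single met test is treated (capped at 1.2 here) is an assumption for illustration.

```python
# A sketch of the Time Sensitivity gate. The 0.7-1.5 bounds and the
# "both tests met" rule come from the text; capping a single met test
# at 1.2 is an assumption for illustration.

def time_sensitivity(proposed_score: float,
                     dated_window_with_quantified_delay_cost: bool,
                     first_mover_evidence: bool) -> float:
    """Return the Time Sensitivity score the evidence has actually earned."""
    if proposed_score <= 1.0:
        return max(0.7, proposed_score)            # going slower needs no justification
    if dated_window_with_quantified_delay_cost and first_mover_evidence:
        return min(1.5, max(1.2, proposed_score))  # both tests met: probably above 1.2
    if dated_window_with_quantified_delay_cost or first_mover_evidence:
        return min(1.2, proposed_score)            # one test met: moderate urgency at most
    return 1.0                                     # neither met: the default stands

print(time_sensitivity(1.4, True, False))   # 1.2: urgency claim trimmed back
print(time_sensitivity(1.4, False, False))  # 1.0: no evidence, default timing
```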

There is a deeper logic to this caution. Every innovation follows an S-curve: emergence, acceleration, maturity, decline. Urgency behaves differently at each stage. An emerging project almost never needs speed, because the market and the capability are still being built. An accelerating one almost always does. A mature one is often better served by deliberately slowing investment as the curve flattens.

Simon names the trap at both ends of the scale the "Icarus paradox": fly too high, out of over-ambition, and the wings melt. Fly too low and you get what he calls "ambition deficit disorder," where the project never catches the curve at all.

Good innovation leaders navigate both.

Strategic Fit, the "right to win" question

The first three variables answer: is this a good idea? Strategic Fit answers a completely separate question: is this a good idea for us?

Simon has seen more than one genuinely good project killed: a real market, a validated customer need, a workable business model. Despite the positive signs, these were not good ideas for the companies that were building them.

Without capability, without a place in the strategy, without a right to win, a good idea becomes a sunk cost. Strategic Fit is the gate that "stops intruders" and catches projects like these.

It's the most composite of the four variables, because it packs four distinct checks into one multiplier. Each check is scored from 0.1 (weak) to 0.3 (strong), and the four scores are summed. The total Strategic Fit multiplier runs from 0.4 (weak fit) to 1.2 (exceptional fit).

The Strategic Fit four-check:

  • High-Value Problem — Does this address a significant, validated pain point?

  • Company Advantage — Do we possess fundamental capabilities that give us an edge?

  • Market Attractiveness — Is the opportunity space viable and valuable?

  • Trend Alignment — Does this fit where the world is heading?

This is how to score each dimension accurately:

  • Dimension 1, High-Value Problem:

    • 0.1: Problem is barely recognized or has minimal impact

    • 0.2: Problem is recognized and has moderate impact

    • 0.3: Problem is critical, with measurable major consequences and broad impact

  • Dimension 2, Company Advantage:

    • 0.1: Significant disadvantage or no clear advantage compared to alternatives

    • 0.2: Parity with alternatives or modest advantage

    • 0.3: Significant, sustainable advantage that creates distinctive value

  • Dimension 3, Market Attractiveness:

    • 0.1: Challenging market with unattractive conditions or significant barriers

    • 0.2: Moderate opportunity with mixed market conditions

    • 0.3: Exceptionally attractive market with favorable economics and profit potential

  • Dimension 4, Trend Alignment:

    • 0.1: Misaligned with or countering significant trends

    • 0.2: Neutral or mixed alignment with trends

    • 0.3: Perfectly positioned for multiple converging trends

Once the four dimensions are scored and summed, the total translates into a clear decision pattern:

Above 1.1 signals pursue aggressively. The idea fits, the organization can win, and capital should flow. These are the projects that deserve executive sponsorship and meaningful resource commitment.

Between 0.8 and 1.1, with one weak dimension, signals “adapt”. The idea is broadly right but missing something specific. Partner, acquire, or spin up the missing capability before committing full resources. The "adapt" move is often the difference between a good idea that ships and a good idea that gets stuck.

Below 0.8 on a high-xV idea signals “explore cautiously or pass”. The idea may be genuinely good, but not for this organization. The honest answer may be to license it, spin it out to a better-fitted partner, or decline it altogether. This is the gate that catches the "good idea, wrong company" pattern Simon sees most often.

This score isn't just a number for the whiteboard: it produces a clear next move.
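In code, the four-check might look like this; the scoring bands and decision thresholds are from the text, and the example scores echo the Fujifilm case study that follows.

```python
# A sketch of the Strategic Fit four-check: each dimension scored 0.1
# (weak) to 0.3 (strong), summed into a 0.4-1.2 multiplier, then mapped
# to the decision pattern above.

CHECKS = ("high_value_problem", "company_advantage",
          "market_attractiveness", "trend_alignment")

def strategic_fit(scores: dict[str, float]) -> tuple[float, str]:
    for check in CHECKS:
        if not 0.1 <= scores[check] <= 0.3:
            raise ValueError(f"{check} must be scored between 0.1 and 0.3")
    total = round(sum(scores[check] for check in CHECKS), 2)
    if total > 1.1:
        return total, "pursue aggressively"
    if total >= 0.8:
        return total, "adapt: partner, acquire, or build the missing capability"
    return total, "explore cautiously or pass"

# The Fujifilm move described in the case study that follows: four strong checks.
print(strategic_fit({
    "high_value_problem": 0.3,     # collapse of film made diversification urgent
    "company_advantage": 0.3,      # collagen/nanoparticle chemistry transferred directly
    "market_attractiveness": 0.3,  # large, accessible, growing market
    "trend_alignment": 0.3,        # anti-aging demographics accelerating
}))  # (1.2, 'pursue aggressively')
```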

Case study: Fujifilm's move from film to cosmetics

In the early 2000s, digital photography was climbing exponentially, and Fujifilm's core business, photographic film, was heading into terminal decline. Kodak, their great rival, was already on the path to the bankruptcy it would file in 2012. Fujifilm's leadership had to decide whether to defend a dying business or leap to an adjacent one. A conventional strategic-fit check on "move into cosmetics" would have raised the obvious objection: we don't sell cosmetics, we sell film.

Run Strategic Fit on that decision and it scores almost perfectly. The collapse of film made diversification existentially urgent. Film emulsions rely on collagen and nanoparticle chemistry, the same science that underpins modern skincare, so the company's technical advantage transferred directly. Global cosmetics was a large, accessible, growing market. And the anti-aging category was accelerating on demographic tailwinds. Four out of four, with or without hindsight.

Fujifilm launched its Astalift skincare brand in 2007. By the mid-2010s, the division that grew out of the move was a meaningful share of corporate profit.

Fujifilm survived the collapse of film. Kodak did not.

That is what Strategic Fit is built to surface: moves that look absurd from the outside and obvious once the right questions are asked.

Putting the four together

With all four variables scored, calculating xV requires multiplying the scores:

xV = Confidence × Predicted Value × Time Sensitivity × Strategic Fit

A project with Confidence of 0.5, Predicted Value of $3M, Time Sensitivity of 0.7, and Strategic Fit of 0.9 has an xV of $945,000 today.

A month later, three enterprise customers signal willingness to pay. Confidence moves from 0.5 to 0.65. xV updates to $1.23M. The number moved because the evidence moved.

That trajectory, not the static snapshot, is where the real insight lives.
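The whole calculation fits in a few lines. This sketch reruns the worked example above, before and after the new evidence arrives.

```python
# The full calculation, run on the worked example above: the same project
# before and after the new willingness-to-pay evidence arrives.

def expected_value(confidence: float, predicted_value: float,
                   time_sensitivity: float, strategic_fit: float) -> float:
    """xV = Confidence x Predicted Value x Time Sensitivity x Strategic Fit."""
    return confidence * predicted_value * time_sensitivity * strategic_fit

before = expected_value(0.5, 3_000_000, 0.7, 0.9)
print(f"${before:,.0f}")  # $945,000

# A month later: three enterprise customers signal willingness to pay.
# Confidence moves from 0.5 to 0.65; nothing else changed.
after = expected_value(0.65, 3_000_000, 0.7, 0.9)
print(f"${after:,.0f}")   # $1,228,500, roughly $1.23M
```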

But even a moving xV is not enough on its own, and there is a reason why. xV tells the team what a project is expected to be worth. It does not tell the team what the project cost to reach that point. Two projects can carry an identical xV of $500,000 and still be fundamentally unequal. One might have spent $50,000 to get there. The other, $500,000. The first is a good investment. The second has spent almost as much as it is expected to produce.

A CFO comparing those two projects does not see equivalent bets. They see one worth backing and one close to breaking even, and the xV number alone does not show the difference.

That gap is what the next lens closes.

Cost Per xV point: Making Efficiency Visible

Back in the book, the innovation leader and her data-scientist brother have the four-variable formula working by midnight. Over the next hour, as they start testing it on real projects from her portfolio, she notices a problem, the same one every innovation leader has to wrestle with eventually: the number says nothing about what each project cost to get where it is.

What they need is a metric that would make a CFO lean forward in their chair. Not just a measure of what the idea is worth, but a measure of how efficiently the innovation team is converting invested capital into expected value. At roughly 3am they write it on the whiteboard:

Cost per xV point = Total Investment ÷ xV

Or inverted: xV Efficiency = xV ÷ Total Investment.

This is the ratio that closes the gap between innovation and finance, because it maps directly onto the logic every CFO already uses: how much capital am I putting in, and how much expected value am I getting out? It gives finance a unit they can benchmark, and it gives the innovation leader a way to defend the portfolio that doesn't depend on activity, velocity, or storytelling.

Innovation teams are not meant to be the cheapest value creators in the organization. They are meant to be the most efficient, the highest ratio of expected value produced per dollar invested. That is the thing a CFO will fund indefinitely, once they believe the number.
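In code, the ratio and its inverse are one line each. The sketch below applies them to the two $500,000-xV projects from the comparison above.

```python
# The efficiency ratio and its inverse, applied to the two $500,000-xV
# projects from the comparison above: identical forecasts, different bets.

def cost_per_xv_point(total_investment: float, xv: float) -> float:
    return total_investment / xv

def xv_efficiency(xv: float, total_investment: float) -> float:
    return xv / total_investment

print(cost_per_xv_point(50_000, 500_000))   # 0.10: ten cents invested per $1 of xV
print(cost_per_xv_point(500_000, 500_000))  # 1.00: nearly break-even before shipping
```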

To see what this looks like in practice, here is the portfolio the innovation leader in the book ran through the system. Four real projects, pulled from a live portfolio, scored in the same week.

Project Alpha, AI customer service assistant. Internal R&D build. Big headline opportunity, but early-stage and expensive.

Variable | Score
Confidence | 0.3 (concept phase, assumptions untested)
Predicted Value | $2,000,000 (cost savings + revenue, 3-year)
Time Sensitivity | 1.0 (no urgent pressure)
Strategic Fit | 0.8
xV | $480,000
Cost to date | $450,000
Cost per xV point | $0.94

Project Beta, internal process automation. Aligned with an imminent system migration. Already has a pilot.

Variable | Score
Confidence | 0.8 (pilot with strong results)
Predicted Value | $500,000 (efficiency gains, 3-year)
Time Sensitivity | 1.3 (aligned with migration deadline)
Strategic Fit | 1.1
xV | $572,000
Cost to date | $75,000
Cost per xV point | $0.13

Project Gamma, blockchain verification. External partnership. Speculative, weak strategic alignment.

Variable | Score
Confidence | 0.2
Predicted Value | $4,000,000
Time Sensitivity | 1.0
Strategic Fit | 0.6
xV | $480,000
Cost to date | $250,000
Cost per xV point | $0.52

Project Delta, customer-facing mobile app enhancement. Delivered through an open innovation challenge rather than internal R&D.

Variable | Score
Confidence | 0.5 (validated with users, technical approach being finalized)
Predicted Value | $3,000,000 (new revenue, 3-year)
Time Sensitivity | 0.7 (adoption 18-24 months out)
Strategic Fit | 0.9
xV | $945,000
Cost to date | $35,000
Cost per xV point | $0.04

Delta costs less than one-tenth of Alpha, has nearly double the xV, and is being built through an open innovation model rather than internal R&D.

That is not a coincidence. It is the broader point about how innovation efficiency compounds.

Open innovation challenges, when run well, access external talent for a fraction of the internal cost and generate higher-quality solutions by widening the pool of people working the problem. Delta is an extreme case, but the pattern is consistent across Simon's dataset: open-innovation approaches routinely produce cost per xV points between $0.50 and $5, while internal R&D builds sit closer to $10 to $50 per xV point. A ten-to-twenty-fold efficiency gap.

Then comes the allocation question.

Imagine a $500,000 quarterly budget. The traditional approach, back the biggest headline xV, puts the whole $500,000 into Project Alpha. Total xV generated: $480,000. The portfolio has spent $500,000 to create $480,000 of expected value. It's losing money before a single pilot ships.

The efficiency-based allocation, by contrast, spreads the same $500,000 across the portfolio:

Allocation | Project | xV unlocked
$35,000 | Delta | $945,000
$75,000 | Beta | $572,000
$250,000 | Gamma | $480,000
$140,000 | Held back, or partial on Alpha | (not counted)
Total | | $1,997,000

Note here: the efficiency-based allocation produces more than four times the expected value of the traditional approach from the same budget.
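A minimal sketch of that allocation logic: rank the portfolio by cost per xV point and fund the cheapest expected value first. The greedy pass below reproduces the book's numbers; treating each project's cost to date as its funding requirement is a simplifying assumption of the illustration.

```python
# A sketch of efficiency-based allocation: rank by cost per xV point and
# fund the cheapest expected value first. Treating each project's cost to
# date as its funding requirement is a simplification for illustration.

BUDGET = 500_000
portfolio = [
    # (project, xV, cost to date)
    ("Alpha", 480_000, 450_000),
    ("Beta",  572_000,  75_000),
    ("Gamma", 480_000, 250_000),
    ("Delta", 945_000,  35_000),
]

remaining, total_xv = BUDGET, 0
for name, xv, cost in sorted(portfolio, key=lambda p: p[2] / p[1]):
    if cost <= remaining:
        remaining -= cost
        total_xv += xv
        print(f"Fund {name}: ${cost:,} unlocks ${xv:,} of xV")

print(f"Held back, or partial on Alpha: ${remaining:,}")  # $140,000
print(f"Total xV unlocked: ${total_xv:,}")                # $1,997,000
```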

This is the analysis that reframes the innovation conversation inside a board meeting. Not "look how much we did." But "here is the expected value per dollar we spent, benchmarked against open-innovation efficiency, and here is what we're doing next quarter based on the numbers."

Cost per xV point decides what gets funded. It does not decide whether the forecasted value actually arrives: that’s what rV does.

From xV to rV: Closing the Loop

xV is a forecast. It tells the team what an innovation is expected to be worth. But predictions and outcomes are two different things, and an innovation function that consistently produces strong xV numbers without ever showing up in realized business outcomes ends up with the same credibility problem it started with.

Simon is direct about this: "We've gotten much better at predicting what innovations could be worth. Now we need to get equally good at ensuring they actually deliver that value."

This is where Realized Value, or rV, comes in.

rV is the measurable contribution an innovation makes to business outcomes after implementation. It is scored against predetermined metrics, across strategic, financial, operational, and risk dimensions, within defined timeframes.

Value Typology

Most organizations measure only the financial dimension. Simon's Value Typology expands that view into six categories, each with its own metrics and timeline:

Visual 2. The Value Typology, six categories with six timelines: what rV looks like across financial, strategic, operational, risk, people, and sustainability dimensions.

In a traditional review, a project delivering new compliance capability, a reduction in regulatory risk, or a better customer-facing talent pool looks like "no ROI." But the Typology shows exactly where that value lives: Risk Value, People Value, Operational Value.

The data is derived from systems most organizations already operate:

  • Financial Value flows through standard financial reporting.

  • Operational Value through operational KPIs.

  • Risk Value through compliance and audit functions.

  • People Value through HR systems.

  • Sustainability Value through ESG reporting.

The Value Realization System

The Typology tells the team what to measure. It does not, on its own, tell them how. That is the job of the Value Realization System, a six-step process that turns the Typology from a categorization into an operational discipline:

Visual 3. The Value Realization System, from xV to rV in six steps: the deliberate process that bridges forecasted value and realized value.

  • Step 1: Value Specification happens before the project ships. The team writes a contract with the business naming the specific value types the innovation will deliver, the metrics for each, the measurement points, the timeframes, and the named owners. Simon gives the example of a procurement automation tool: "The team specified three value types: financial value of $500,000 in cost savings over 18 months, operational value through a 40% reduction in processing time, and risk value via a 30% decrease in compliance exceptions. Each has defined metrics, measurement points, and clear ownership."

  • Steps 2 and 3: Implementation Pathways and Adoption Acceleration protect the value in transit. Implementation Pathways acknowledge that the journey from prototype to production is where value most often leaks away, and so the team explicitly maps handoff points, resource commitments, stakeholder engagements, and mitigation strategies for known barriers. Adoption Acceleration addresses the next failure mode: innovations that ship cleanly but go unused. Onboarding, change management, incentive alignment, and early feedback mechanisms exist because a perfect solution that no one uses creates zero value.

  • Step 4: Value Capture happens after the project ships. Baselines are taken before implementation. Tracking systems pull the relevant data during and after deployment. Regular measurement checkpoints align with the projected value timelines. And critically, attribution methodologies isolate the innovation's contribution from other factors moving in the same direction. These mechanisms create accountability for measuring the actual value delivered.

  • Steps 5 and 6: Learning Integration and Value Amplification close the loop. Learning Integration compares projected xV against realized rV, isolates the gaps, and feeds them back into the model. Value Amplification scales successful innovations into adjacent contexts, multiplying realized value beyond a single deployment.

The innovation team's job is not to build new measurement infrastructure. It is to wire the project into the systems that already exist, on the terms Value Specification has already named.
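As an illustration, a Step 1 specification could be captured in a structure as simple as this. The field names are assumptions; the value types, metrics, and targets come from Simon's procurement example, while the 12-month timeframes on the second and third rows and the owner placeholders are illustrative.

```python
# An illustrative Step 1 record for the procurement automation example.
# Field names are assumptions; the 12-month timeframes on the second and
# third rows and the owner placeholders are illustrative, not from the book.

from dataclasses import dataclass

@dataclass
class ValueSpec:
    value_type: str   # financial, operational, risk, strategic, people, sustainability
    metric: str
    target: str
    timeframe: str
    owner: str        # the named person accountable for measurement

procurement_tool = [
    ValueSpec("financial",   "cost savings",          "$500,000", "18 months", "<finance owner>"),
    ValueSpec("operational", "processing time",       "-40%",     "12 months", "<ops owner>"),
    ValueSpec("risk",        "compliance exceptions", "-30%",     "12 months", "<compliance owner>"),
]

for spec in procurement_tool:
    print(spec)
```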

This whole feedback loop is the real insight. Paired with rV, xV is no longer just a metric. It becomes a forecast discipline, constantly calibrated by rV. When the two align closely, the team's scoring is validated. When the gap is large, the team investigates why.

This is what finally makes innovation investment look like every other investment the business makes.

The Language of Value

The CFO asking "what is this actually worth?" was never the problem. The problem was that nobody on the innovation side of the table had a number to answer with.

That is the shift xV makes.

What xV offers is not a better pitch, or a louder defence of the budget. It is a shared number, denominated in the currency the rest of the business has been using for a hundred years.

Once that number is on the table, the question in the room changes with it. And paired with rV, xV gets tested against reality. xV forecasts the value; rV proves it landed. That is the pairing every other function in the business already runs on, and the one innovation has been missing.

This is why the goal should not always be a bigger budget. Instead, it should be sharper decisions inside the budget already approved. And that starts on Monday: with five projects, four variables, and one ratio.

Six months after that first cycle, the same innovation leader walks into a CFO review with the answer to "what is this actually worth?" already on the page.

How to Run Your First xV Cycle in Five Days

Frameworks die in the gap between reading and doing. The four variables make sense on the page, the ratio is clean, the logic holds. But when implementation time arrives, nothing changes: the portfolio review runs the way it always has.

xV avoids that trap by being small enough to start now.

The minimum viable version fits inside a single working week, and it produces enough signal to change the next portfolio review.

The cadence that follows, running the same cycle every six weeks, is what builds the trajectory finance actually responds to.

Here's how your innovation team can implement it.

Monday: Select five live projects

Pick a varied set, not a showcase. The temptation is to start with the easy ones, the projects where the answer is already obvious. Don't. The framework only earns credibility if it holds up against the messy middle of the portfolio.

A good five-project mix:

  • One the organization loves

  • One that has gone quiet

  • One that was recently funded

  • One that has been running for a long time without a clear outcome

  • One that the business actively argues about

The goal is to stress-test xV against the full range of what sits in the portfolio, not to confirm what the team already believes.

Tuesday and Wednesday: Score each project, variable by variable

Score honestly, in this order:

  • Confidence, using the six-dimension matrix, based on evidence rather than enthusiasm

  • Predicted Value, using the three-question test, falling back to t-shirt sizes where the evidence is thin

  • Time Sensitivity, starting from why are we not going slower? Only move above 1.0 where the regulatory clock or first-mover test is genuinely met

  • Strategic Fit, using the four-check scorecard, scoring each dimension 0.1 to 0.3 and summing

Every score carries a named sponsor. No anonymous numbers, no committee scoring. The point is to put accountability behind each variable from day one.

Thursday: Calculate xV and cost per xV point, then rank

Run the formula for each project, then divide investment-to-date by xV to get cost per xV point. Rank the portfolio by efficiency, not headline xV.

That ranking is the one a CFO would produce if they had the team's data. It's almost certainly different from the one the team would produce on instinct, and that gap is the most useful output of the whole week.

Friday: Pick one weak variable on the top three projects, and design the next sprint around lifting it

Don't try to lift everything. Pick the single weakest variable on each of the top three projects, and build the next two-to-six-week sprint around it:

  • Weak Confidence means an evidence-gathering sprint, whether a willingness-to-pay test, a technical validation, or a paid pilot

  • Weak Strategic Fit means either re-alignment with the current strategy or a cleanly-scoped spin-out

  • Weak Predicted Value means going back to the three-question test with real customer or market data, not internal assumptions

One variable, one sprint, one named owner.

Run this cycle every six weeks and document the results. A flat confidence score across two cycles says something. A falling trajectory flags a project for review before it becomes a zombie. A rising trajectory shows which projects deserve more capital next quarter.
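A sketch of that trajectory check, with an illustrative tolerance for "flat":

```python
# A sketch of the six-week trajectory check. The 0.05 "flat" tolerance
# is an illustrative threshold, not part of the framework.

def trajectory_flag(confidence_by_cycle: list[float]) -> str:
    if len(confidence_by_cycle) < 2:
        return "not enough cycles yet"
    delta = confidence_by_cycle[-1] - confidence_by_cycle[-2]
    if abs(delta) < 0.05:
        return "flat: nobody is generating the next round of evidence"
    if delta < 0:
        return "falling: review before it becomes a zombie"
    return "rising: candidate for more capital next quarter"

print(trajectory_flag([0.3, 0.3]))    # flat
print(trajectory_flag([0.5, 0.65]))   # rising
```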

By the third cycle, six months in, an innovation leader running xV walks into a portfolio review with something few others in their company will have: a confidence-weighted, time-adjusted, strategically-filtered, efficiency-ranked portfolio, with a sponsor's name on every score and an evidence trail behind every number.

That’s it for today.

If you made it to this point, I'm sure you've already opened Excel and started drafting ways to work with xV. Hoping this edition will prove useful in changing your conversation and relationship with finance. Next time, we'll explore why breakthrough innovation requires a different, "atomic" structure focused on learning over established corporate efficiency.

Hans Balmaekers
Founder, the Compass and Chief @ Innov8rs

PS- best to move this newsletter to your primary inbox, or ‘whitelist’ our domain, to ensure we don’t end up in your promo or spam dungeons…

PPS- feel free to forward this newsletter to all the innovation leaders in your company and network. Sharing is… indeed!
