Traditional innovation processes waste months and hundreds of thousands of dollars on discovery and ideation, only for management to reject the resulting concepts.

Daniel Martin Callizo, Managing Partner at NOVA, argues that AI isn't just an efficiency tool but also a substitute for traditional discovery and ideation. The methodology compresses three to six months into weeks by eliminating the "middle stop" of opportunity scoring, using strategic constraints plus AI generation to move straight to testable concepts.

Combine the right data, the right methodology, and human judgment to decide which ideas make sense.


The End of Ideas as a Competitive Advantage

Why volume trumps originality in the AI era

If ideas were ever a competitive advantage in innovation, the arrival of AI has ended that.

In the traditional model, generating a high volume of quality ideas was difficult. It required time, money, and specific talent. Now, ideas are commodities. This abundance creates a significant change. If ideas are abundant, the value shifts downstream. The competitive advantage is no longer who has the best idea on paper, but who can move fastest to validate it.

The real difference is how you go about testing, validating, and experimenting.

The Problem with the "Middle Stop"

Eliminate theoretical-political debates and move straight to testable concepts

Moving past the commoditization of ideas, Daniel critiques the traditional Double Diamond framework, specifically the intermediate stage often called "opportunity scoring" or "opportunity sizing." This is the phase where teams try to prioritize high-level territories (like "healthy aging" or "sustainable travel") before having concrete concepts.

"I spent many hours in workshops with people just theorizing about, discussing whether this opportunity is good enough or big enough, when it was too high level to prioritize."

This "middle stop" creates a bottleneck of theoretical debate. You cannot accurately size a broad trend; you can only size a specific solution to a specific problem.

Furthermore, the efficacy of the human-led brainstorming sessions that usually follow is questionable. Often, the process succumbs to office politics rather than market merit. "In the end, it is more about what the highest paid person in the room wants to hear, and usually that is the idea that gets selected."

Daniel proposes a radical simplification: "Why do we need the middle stop of the opportunity scoring part? Why don't we just go straight to the ideas, which is ultimately the end goal of this ideation process?"

The AI-Enabled Shortcut: From Months to Days

Compress months into weeks with the right formula

In place of the traditional grind, you can compress a process that typically takes three to six months into a matter of weeks, or even days. The new formula is simple: Right Data + Right Methodology + Human Judgment.

"The only thing you need is access to the right information and data sources, the right methodology and tools, and of course, judgment of people that can detect whether ideas make sense."

The 4-Week Process

To translate this formula into action, here is a condensed timeline for the AI-driven approach:

Weeks 1-2: Strategic Alignment and Data Gathering

Before generating ideas, you must define the constraints. This is not about market research in the traditional sense, but about internal strategy. "When it comes to selecting the right ideas to move into testing, the primary criteria is internal strategy fit, because you know nothing at this stage about how this will be desirable for the market."

An inside-out analysis is essential where stakeholders define what is "in" and what is "out" of the company's innovation thesis. Simultaneously, you map out all the data sources—internal reports, competitor analysis, patent files—that will feed the AI.
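As a concrete illustration, the inside-out constraints and data-source map could be captured in a simple structure like the one below. This is a hypothetical sketch: every field name, theme, and asset is invented for the example, not taken from the session.

```python
# Hypothetical innovation thesis: what is "in" and "out" of scope,
# the assets to build on, and the data sources that will feed the AI.
innovation_thesis = {
    "in_scope": ["healthy aging", "preventive care"],
    "out_of_scope": ["regulated medical devices"],
    "strategic_assets": ["retail footprint", "loyalty-program data"],
    "data_sources": ["internal reports", "competitor analysis", "patent filings"],
}

def fits_thesis(idea_tags):
    """Return True if an idea touches an in-scope theme and no out-of-scope one."""
    tags = set(idea_tags)
    in_hit = tags & set(innovation_thesis["in_scope"])
    out_hit = tags & set(innovation_thesis["out_of_scope"])
    return bool(in_hit) and not out_hit
```

A filter like `fits_thesis` is what lets strategic fit act as the first gate, before any desirability testing happens.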

Week 3: Iterative AI Ideation

This is where the heavy lifting happens. You feed your strategy and data into AI tools to generate volume. This is an iterative loop. You generate, review, refine, and generate again.
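The generate-review-refine loop can be sketched in a few lines. The generator below is a deterministic stand-in for an LLM call (a real pipeline would call a model API), and the review filter is a placeholder for human judgment; all names and themes are illustrative assumptions.

```python
import random

def generate_ideas(prompt, n, seed):
    """Stand-in for an LLM call; a real pipeline would query a model here."""
    rng = random.Random(seed)
    themes = ["healthy aging", "sustainable travel", "home fitness"]
    return [f"{rng.choice(themes)} concept #{i}" for i in range(n)]

def ideation_loop(prompt, rounds=3, batch=20):
    """Generate, review, refine, and generate again across several rounds."""
    kept = []
    for r in range(rounds):
        ideas = generate_ideas(prompt, batch, seed=r)
        # Review step: here a keyword filter stands in for human reviewers.
        kept.extend(i for i in ideas if "healthy aging" in i)
        # Refine step: fold feedback back into the next round's prompt.
        prompt += "\nAvoid themes already rejected in earlier rounds."
    return kept
```

The point of the loop structure is that each round's output changes the next round's input, which is what distinguishes iterative ideation from a single bulk generation.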

Week 4: Review and Selection

The process concludes with a human review workshop. The purpose of these meetings is not to generate ideas, but to align on the outputs and the pros and cons of each.

Strategic Alignment: The Antidote to "Vanilla Ideas"

Inject company-specific constraints to escape generic outputs

A common criticism of AI ideation is that it produces generic results. This concern is valid, but solvable.

"The big risk is you end up with vanilla ideas for everybody, unless you use company strategy, assets, and competitive advantage to guide the process."

If you simply prompt an LLM to give ideas for the automotive industry, you will get the same results as your competitors. The "creativity" of AI is dependent on the constraints and assets you feed it.

Creativity with AI can be much bigger if you use the right prompting. But you must inject your specific context. Many ideas don't hit the market because they are not aligned with the company's strategy, even if they pass all the desirability and feasibility checks.

By filtering AI generation through the lens of your company's unique assets and strategic goals upfront, you ensure the output is not just novel, but actionable for your organization.
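The difference between a generic prompt and a constraint-laden one can be made concrete. The sketch below is hypothetical (the `strategy` fields are invented), but it shows the mechanical point: the same base request either stays "vanilla" or becomes company-specific depending on what context is injected.

```python
def build_prompt(industry, strategy=None):
    """Assemble an ideation prompt; injecting strategy makes it company-specific."""
    base = f"Generate 20 new product ideas for the {industry} industry."
    if not strategy:
        return base  # generic prompt -> the same "vanilla" ideas as competitors
    return (
        base
        + f"\nOnly propose ideas that leverage these assets: {', '.join(strategy['assets'])}."
        + f"\nStay within these strategic themes: {', '.join(strategy['themes'])}."
        + f"\nExclude anything related to: {', '.join(strategy['exclusions'])}."
    )
```

Two competitors prompting with only the `industry` argument would get interchangeable results; the asset and exclusion lines are what make the output defensible.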

Rethinking the Role of Humans and Design Thinking

From generators to judges: where human expertise still matters

Human intuition and frameworks like Design Thinking aren't obsolete, but their place in the sequence changes. The tension isn't about abandoning problem-first thinking—it's about avoiding analysis paralysis. Don't create solutions for non-existent problems, but don't get stuck analyzing problems without solutions either.

The new approach: generate ideas based on the right data, then check if they connect to real problems. This mirrors design sprints: build something and see if it fits the problem, rather than theorizing endlessly before building.

The human role shifts from generator to judge. "This ideation process is a combination of AI with access to data and human judgment to decide which ideas just sound right," Daniel notes. You need expertise throughout the process to judge AI outputs, not generate ideas from scratch.

Strategic filtering comes first, not last. Strategic internal fit and success criteria need to come at the very beginning of the process. Use company strategy, assets, and competitive advantage to guide AI generation, otherwise you get vanilla ideas that any competitor could generate.

Barriers to Adoption: The "Black Box" Problem

Despite demonstrable advantages, resistance remains. An experiment conducted by Harvard Business Review and BCG consultants tested this directly: teams working on the same project with the same process were split into two groups—one using AI, one using traditional research and ideation tools. External industry experts assessed the quality of ideas generated by both groups. The AI-assisted teams produced ideas rated 40% higher in quality.

Yet skepticism persists. "Some people still don't believe AI-generated ideas make sense. They prefer focus groups and traditional processes."

This resistance often comes from Customer Insights teams who feel bypassed. "Customer insights teams often feel uncomfortable with what, for them, looks like a black box suddenly giving them ideas."

Looking Ahead: From Ideas to Testing

The framework demands a fundamental restructuring of innovation budgets and timelines. Compressing the Double Diamond’s discovery and definition phases from months to days means leaders must reallocate resources downstream, from ideation theater to rigorous market validation.

Effectively this means that innovation leaders should shift budgets to testing, redefine expertise around data curation and AI judgment (not ideation and brainstorming), and trust the machine with verification.

AI lets us skip theoretical opportunity sizing for tangible concepts ready for real-world proof.
