Definition
Outcome-Driven Innovation (ODI) is a strategy and innovation framework developed by Tony Ulwick at Strategyn. It provides a structured method for discovering what customers really need by focusing on the outcomes they want to achieve rather than the solutions they request.
The core insight behind ODI is that customers don't buy products. They hire them to get a job done, and they measure success by how well specific outcomes are achieved. By mapping these outcomes and measuring satisfaction levels, product teams can identify precisely where current solutions fall short.
ODI builds directly on Jobs-to-Be-Done theory but adds a quantitative layer. Where JTBD provides the conceptual framing, ODI supplies the research methodology, scoring system, and prioritization logic needed to turn job insights into product decisions. The Strategy Guide covers how frameworks like ODI fit into broader product strategy.
Why It Matters for Product Managers
Most product teams struggle with prioritization because they lack a reliable signal for what customers actually need. Feature requests are biased toward power users. Sales feedback skews toward prospects, not retained customers. Gut instinct varies by who you ask. ODI replaces these noisy inputs with a structured, quantitative approach.
The opportunity score formula gives PMs a defensible way to rank investments. When stakeholders debate whether to build Feature A or Feature B, ODI data shows which one addresses more underserved outcomes for a larger segment. This shifts roadmap conversations from opinion to evidence.
ODI also reduces the risk of building the wrong thing. By validating that an outcome is both important and underserved before committing development resources, teams avoid the expensive mistake of shipping features that customers don't value. The RICE framework can complement ODI by adding effort and reach estimates to outcome-based prioritization.
How ODI Works in Practice
The ODI process follows a defined sequence:
- Define the job. Identify the core functional job the customer is trying to accomplish. Express it as a verb plus object plus context.
- Map the job steps. Break the job into its sequential stages, from planning through execution to monitoring outcomes.
- Capture desired outcomes. For each job step, document the specific metrics customers use to judge success. Each outcome follows the format: minimize/maximize + variable + context.
- Survey for importance and satisfaction. Ask a sample of customers large enough for statistically reliable results (typically 100 or more) to rate each outcome on importance and current satisfaction, both on a 1-10 scale.
- Calculate opportunity scores. Apply the formula: Opportunity = Importance + max(Importance - Satisfaction, 0). Scores above 10 indicate underserved outcomes. Scores below 6 indicate overserved outcomes.
- Segment and prioritize. Group customers by their outcome priorities. Identify segments with clusters of underserved outcomes. These segments represent your best product-market fit opportunities.
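The scoring step above reduces to a few lines of code. Here is a minimal sketch of the formula and threshold labels exactly as stated (not an official Strategyn implementation):

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    """Opportunity = Importance + max(Importance - Satisfaction, 0).

    The max(..., 0) term keeps overserved outcomes (satisfaction above
    importance) from subtracting below the importance rating itself.
    """
    return round(importance + max(importance - satisfaction, 0.0), 1)

def classify(score: float) -> str:
    """Label a score using the thresholds described above."""
    if score > 10:
        return "underserved"
    if score < 6:
        return "overserved"
    return "appropriately served"

# A highly important, poorly satisfied outcome scores well above 10.
score = opportunity_score(9.2, 4.1)
print(score, classify(score))  # 14.3 underserved
```

Note the asymmetry: an outcome where satisfaction exceeds importance still scores at its importance level, so "overserved" shows up as a low score rather than a negative one.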
Implementation Checklist
- Identify the core job your product helps customers accomplish
- Recruit 12-30 customers for qualitative outcome discovery interviews
- Build a complete job map with 50-150 desired outcomes
- Design and deploy a quantitative survey to 100+ customers
- Calculate opportunity scores and rank outcomes
- Identify underserved segments using cluster analysis
- Align roadmap priorities with the highest-scoring opportunities
- Re-survey annually to track how satisfaction levels change
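The survey and segmentation steps in the checklist can be prototyped before committing to a full cluster analysis. The sketch below uses a simpler proxy: group respondents by the outcome on which they personally score the largest opportunity. Outcome names and ratings are hypothetical; a real ODI study would cluster across all outcome ratings rather than just the top one.

```python
from collections import defaultdict

def opportunity(imp: float, sat: float) -> float:
    return imp + max(imp - sat, 0.0)

# Hypothetical per-respondent ratings: {outcome: (importance, satisfaction)}
respondents = [
    {"spot delays": (9, 4), "track dependencies": (6, 6), "report status": (7, 7)},
    {"spot delays": (9, 3), "track dependencies": (5, 6), "report status": (8, 7)},
    {"spot delays": (6, 6), "track dependencies": (9, 5), "report status": (7, 6)},
]

# Assign each respondent to the outcome with their highest personal opportunity.
segments = defaultdict(list)
for ratings in respondents:
    top = max(ratings, key=lambda o: opportunity(*ratings[o]))
    segments[top].append(ratings)

for outcome, members in segments.items():
    print(f"{outcome}: {len(members)} respondent(s)")
```

Even this crude grouping surfaces the key insight from the segmentation step: different respondents can have opposite opportunity profiles, and averaging across them would hide both.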
Common Mistakes
- Writing outcomes as solutions. Outcomes describe what customers want to achieve, not how they want to achieve it. "Minimize the time it takes to identify which tasks are behind schedule" is an outcome. "Add a Gantt chart" is a solution. Keep solutions out of the outcome statements.
- Skipping the quantitative step. Qualitative interviews reveal outcomes, but only quantitative surveys reveal which outcomes are underserved at scale. Teams that skip the survey step end up prioritizing based on the most vocal interviewees rather than the largest opportunities.
- Treating all customers as one segment. Different customer segments have different outcome priorities. A startup PM and an enterprise PM hiring the same product may have opposite satisfaction profiles. Segment analysis reveals these differences and enables targeted product strategies.
Measuring Success
Assess your ODI practice with these indicators:
- Outcome coverage: Percentage of job steps with documented desired outcomes
- Survey response rate: Higher response rates produce more reliable opportunity scores
- Prediction accuracy: Do features targeting high-opportunity outcomes drive more adoption than features targeting low-opportunity ones?
- Roadmap alignment: Percentage of roadmap items linked to quantified underserved outcomes
- Win rate: Improvement in competitive win rates for segments where you targeted underserved outcomes
ODI vs. Other Prioritization Approaches
| Approach | Input | Strength | Weakness |
|---|---|---|---|
| ODI | Customer outcome surveys | Quantitative, segment-aware, high confidence | Requires 100+ survey responses, slower setup |
| RICE | Team estimates of reach, impact, confidence, effort | Fast, works with limited data | Subjective scores, no customer validation |
| Jobs-to-Be-Done (qualitative) | Customer interviews | Deep insight into motivation | Hard to prioritize across many jobs |
| Feature voting | Customer requests | Easy to collect | Biased toward power users, misses non-users |
ODI works best when you have a defined market and enough customers to survey. For pre-product-market-fit teams, qualitative JTBD interviews or the RICE calculator may be faster starting points. Once you have 100+ customers, ODI adds the quantitative rigor those methods lack.
Worked Example: ODI for a Project Management Tool
Here is a simplified example showing how ODI works in practice for a PM tool.
Job: Plan and track a software project from kickoff to delivery.
Three sample outcomes from the "Monitor Progress" job step:
- Minimize the time it takes to identify which tasks are behind schedule
- Minimize the likelihood of missing a dependency between tasks
- Minimize the effort required to communicate status to stakeholders
Survey results (1-10 scale):
| Outcome | Importance | Satisfaction | Opportunity Score |
|---|---|---|---|
| Identify behind-schedule tasks | 9.2 | 4.1 | 14.3 |
| Catch task dependencies | 8.5 | 6.8 | 10.2 |
| Communicate status to stakeholders | 7.8 | 7.2 | 8.4 |
Outcome #1 scores 14.3 (well above the 10-point threshold), signaling a major unmet need. This tells you that building better progress visibility will resonate more than improving dependency tracking or status reporting. You could validate this further by checking whether the opportunity concentrates in a specific segment (e.g., teams above 20 people).
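The survey results table can be reproduced directly from the formula. A quick sketch that scores and ranks the three sample outcomes:

```python
def opportunity(importance: float, satisfaction: float) -> float:
    """Opportunity = Importance + max(Importance - Satisfaction, 0)."""
    return round(importance + max(importance - satisfaction, 0.0), 1)

# (outcome, importance, satisfaction) from the survey results above
outcomes = [
    ("Identify behind-schedule tasks", 9.2, 4.1),
    ("Catch task dependencies", 8.5, 6.8),
    ("Communicate status to stakeholders", 7.8, 7.2),
]

# Score each outcome and sort from largest to smallest opportunity.
ranked = sorted(
    ((name, opportunity(imp, sat)) for name, imp, sat in outcomes),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked:
    flag = "underserved" if score > 10 else ""
    print(f"{score:5.1f}  {name}  {flag}")
```

Running this reproduces the ranking in the table: 14.3, 10.2, and 8.4, with the first two clearing the 10-point underserved threshold.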
When NOT to Use ODI
ODI is powerful but not universal. Skip it when:
- You have fewer than 50 customers. The survey needs a sample large enough to produce statistically reliable scores. With a small base, qualitative customer development interviews are more practical.
- The job is not well-defined. If your product serves multiple distinct jobs, run ODI separately for each. Mixing jobs in one survey produces muddled scores.
- You are exploring a brand-new market. ODI assumes the job already exists. If you are creating a new category where the job itself is novel, ethnographic research is better than outcome surveys.
- Speed matters more than precision. A full ODI cycle (interviews + survey + analysis) takes 6-10 weeks. If you need to ship in 2 weeks, use ICE scoring or rapid prototype testing instead.
Related Concepts
ODI is grounded in Jobs-to-Be-Done theory and operationalizes it with a quantitative scoring system. The resulting opportunity scores help teams evaluate product-market fit by revealing how well their solution addresses a segment's most important outcomes. Customer development interviews can generate the initial outcome hypotheses, while value proposition design translates high-opportunity outcomes into compelling positioning. For faster prioritization alternatives, try the RICE calculator or explore the prioritization frameworks comparison.