
The Complete Guide to Product Discovery: Methods, Frameworks, and Habits

A thorough guide to product discovery covering continuous discovery, opportunity solution trees, assumption mapping, research methods, and building a discovery culture.

By Tim Adair • Published 2025-05-05 • Updated 2026-02-12

Quick Answer (TL;DR)

Product discovery is the practice of reducing the risk that you build the wrong thing. It combines user research, rapid experimentation, and structured decision-making to answer four questions before committing engineering resources: Is there a real user need? Will users choose our solution? Can we build it? Does it support the business? Teams that practice continuous discovery ship features that move metrics. Teams that skip it ship features that get ignored.

Summary: Discovery is how you decide what to build. Done well, it turns product development from a gamble into an informed bet.

Key Steps:

  1. Define a clear desired outcome before exploring solutions
  2. Map opportunities through user research, then generate multiple solutions per opportunity
  3. Test your riskiest assumptions with the smallest possible experiment

Time Required: Ongoing (1-3 hours per week for continuous discovery habits)

Best For: Product managers, designers, and engineers who want to build things that matter


Table of Contents

  1. What Product Discovery Is
  2. The Four Risks of Product Development
  3. Continuous Discovery: The Weekly Practice
  4. Opportunity Solution Trees
  5. Assumption Mapping
  6. Research Methods for Discovery
  7. Minimum Viable Tests
  8. Discovery Across the Product Lifecycle
  9. Discovery Anti-Patterns
  10. Building a Discovery Culture
  11. The Discovery Toolkit
  12. Key Takeaways

What Product Discovery Is

Product discovery is the process of deciding what to build, and why, before you commit engineering time. It sits upstream of delivery (the process of building things well) and downstream of strategy (the process of choosing where to compete).

The core premise is simple: building software is expensive, and most ideas fail. CB Insights analyzed more than 100 startup post-mortems and found that 42% failed because they built something nobody needed. That is a discovery problem, not an engineering problem.

Discovery reduces the cost of being wrong by testing ideas cheaply before building them fully.

Discovery vs. Delivery

| | Discovery | Delivery |
|---|---|---|
| Question | Should we build this? | How do we build this well? |
| Output | Validated opportunities and solutions | Shipped, working software |
| Speed | Days to weeks | Weeks to months |
| Cost of failure | Low (a failed prototype costs hours) | High (a failed feature costs sprints) |
| Who leads | Product trio (PM, Design, Eng) | Engineering team |

The best teams do not separate discovery and delivery into sequential phases. They run them in parallel. While engineers deliver Sprint N, the product trio discovers what should go into Sprint N+2 and N+3.

What Discovery Is Not

  • Not just user research. Research is one input to discovery, but discovery also includes experimentation, assumption testing, and solution evaluation.
  • Not a phase before development. It is a continuous practice that runs alongside development.
  • Not asking users what they want. Users are experts on their problems, not on your solutions. Discovery uncovers problems; the product team designs solutions.
  • Not validation. Validation implies you already have a solution and are looking for confirmation. Discovery implies genuine curiosity about whether your assumptions are correct.

The Four Risks of Product Development

Marty Cagan identifies four risks that every product initiative faces. Discovery is the process of addressing all four before committing to full development.

1. Value Risk: Will Users Choose This?

Does the solution address a real need that users care about enough to change their behavior? Many features ship and get ignored because they solve a problem that is real but not important enough to drive adoption.

How to test: User interviews to validate the problem exists, fake door tests to measure interest, prototype testing to assess willingness to adopt.

2. Usability Risk: Can Users Figure It Out?

Even if the solution addresses a real need, will users be able to find it, understand it, and use it successfully? Usability risk is the gap between what the product does and what users perceive it does.

How to test: Usability testing with prototypes, wizard-of-oz tests, hallway testing with colleagues. See IdeaPlan's guide on user research methods for detailed techniques.

3. Feasibility Risk: Can We Build It?

Can the engineering team build this solution within acceptable time, cost, and technical constraints? Feasibility risk includes performance requirements, third-party dependencies, data availability, and architectural complexity.

How to test: Engineering spikes, proof-of-concept prototypes, architecture reviews with the tech lead. This is why engineers should be part of discovery. They spot feasibility risks that PMs and designers miss.

4. Business Viability Risk: Does It Work for the Business?

Even if users want it, can use it, and engineers can build it, does it support the business? Viability risk includes questions about pricing, legal compliance, brand alignment, sales channel fit, and strategic direction.

How to test: Financial modeling, legal review, stakeholder alignment meetings, pricing experiments.

The Discovery Matrix

Every idea you are evaluating should be assessed across all four risks:

| Risk | Question | Test | Confidence (1-5) |
|---|---|---|---|
| Value | Do users need this? | 6 user interviews | ? |
| Usability | Can they use it? | Prototype test with 5 users | ? |
| Feasibility | Can we build it? | Engineering spike (2 days) | ? |
| Viability | Does it work for the business? | Financial model + legal review | ? |

Your goal is to get all four risks to confidence level 4+ before committing a full sprint to building the solution.
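The "all four risks at confidence 4+" gate can be sketched as a simple check. This is an illustrative sketch, not part of any real tool; the risk names and threshold mirror the matrix above.

```python
# Minimal sketch of the discovery matrix: track a 1-5 confidence score per
# risk and gate the build decision on all four clearing the threshold.

RISKS = ("value", "usability", "feasibility", "viability")

def ready_to_build(confidence: dict[str, int], threshold: int = 4) -> bool:
    """True only when every risk meets the confidence threshold."""
    return all(confidence.get(risk, 1) >= threshold for risk in RISKS)

idea = {"value": 4, "usability": 5, "feasibility": 4, "viability": 3}
print(ready_to_build(idea))  # viability is still at 3, so not ready yet
```

The point of encoding the gate is that a missing score defaults to low confidence, so an unexamined risk blocks the build rather than slipping through.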


Continuous Discovery: The Weekly Practice

Continuous discovery, as described by Teresa Torres, is the practice of conducting small, regular discovery activities every week. Not in occasional big-bang research projects. The goal is to make discovery a habit rather than an event.

The Continuous Discovery Cadence

| Activity | Frequency | Duration | Who |
|---|---|---|---|
| User interview | Weekly | 30 min | Product trio (at least PM + 1) |
| Opportunity assessment | Weekly | 30 min | Product trio |
| Assumption identification | Per feature/initiative | 15 min | Product trio |
| Assumption test | Weekly or biweekly | Varies (hours to days) | PM or designer |
| Synthesis and OST update | Weekly | 30 min | Product trio |

The Weekly Interview Habit

The single most impactful discovery habit is talking to one user per week. Not a survey. Not analytics. An actual conversation with a real person who uses your product (or should be using it).

Why weekly: A single interview has limited value. But 50 interviews over a year creates a deep, evolving understanding of your users that no amount of data analysis can match. The compound effect is enormous.

How to make it sustainable:

  1. Automate recruiting: Set up an in-app prompt that asks users if they would be willing to talk. Aim for a standing pool of 10-15 willing participants.
  2. Keep it short: 30 minutes is enough. 20 minutes of conversation plus 10 minutes for the team to debrief.
  3. Rotate who leads: The PM does not have to conduct every interview. Designers, engineers, and even stakeholders should take turns.
  4. Record and share: With the participant's consent, record the session. Share key clips with the team.

What to ask:

The best discovery interviews focus on past behavior, not opinions about the future.

| Good Questions | Bad Questions |
|---|---|
| "Tell me about the last time you [did the thing]." | "Would you use a feature that does X?" |
| "Walk me through your process for [task]." | "What features do you wish we had?" |
| "What was the hardest part of that?" | "How much would you pay for X?" |
| "What did you try before using our product?" | "Do you think this is a good idea?" |

For a detailed guide to interview techniques, see User Research Methods.

For more on building sustainable discovery habits, see Continuous Discovery Habits.


Opportunity Solution Trees

The opportunity solution tree (OST) is the most useful visual framework for structuring discovery work. Developed by Teresa Torres, it maps the relationships between outcomes, opportunities, solutions, and experiments.

The Structure

         ┌─────────────────────┐
         │   DESIRED OUTCOME   │
         │  (Increase 30-day   │
         │   retention by 15%) │
         └────────┬────────────┘
                  │
    ┌─────────────┼─────────────┐
    │             │             │
┌───▼───┐   ┌────▼────┐   ┌────▼────┐
│ Opp 1 │   │  Opp 2  │   │  Opp 3  │
│ Users │   │ Users   │   │ Users   │
│ don't │   │ forget  │   │ can't   │
│ find  │   │ to come │   │ share   │
│ value │   │ back    │   │ with    │
│ fast  │   │         │   │ team    │
└──┬────┘   └────┬────┘   └────┬────┘
   │             │             │
┌──▼────┐   ┌────▼────┐   ┌────▼────┐
│Sol A  │   │ Sol C   │   │ Sol E   │
│Sol B  │   │ Sol D   │   │ Sol F   │
└──┬────┘   └────┬────┘   └────┬────┘
   │             │             │
┌──▼────┐   ┌────▼────┐   ┌────▼────┐
│Exp 1  │   │ Exp 3   │   │ Exp 5   │
│Exp 2  │   │ Exp 4   │   │ Exp 6   │
└───────┘   └─────────┘   └─────────┘

Level by Level

Outcome: The measurable business or customer result you are trying to achieve. This comes from your product strategy and is typically set for a quarter. Example: "Increase 30-day retention from 40% to 55%."

Opportunities: Unmet needs, pain points, or desires that, if addressed, would drive the outcome. Opportunities come from user research. They are things you discover, not things you invent. Example: "Users who do not find value in the first session rarely come back."

Solutions: Specific product ideas that could address an opportunity. For each opportunity, generate at least three solutions before committing to one. Example: "Guided onboarding wizard," "personalized starter templates," "first-session milestone with celebration." For e-commerce and content products, you can also use SEO data to surface product ideas by identifying high-demand, low-competition opportunities.

Experiments: Small tests to validate whether a solution will actually work before building it fully. Example: "Show 100 new users a clickable prototype of the onboarding wizard and measure completion rate."

How to Build an OST

  1. Start with your outcome. This should come from your product strategy or OKRs. If you do not have a clear outcome, stop here and define one.
  2. Populate opportunities from research. Review interview notes, support tickets, and analytics. Look for patterns in user behavior that connect to your outcome. Each pattern becomes an opportunity branch.
  3. Generate multiple solutions per opportunity. This is where most teams fail. They identify an opportunity and immediately jump to their first solution idea. Force yourself to generate at least three alternatives.
  4. Prioritize solutions by testability. Which solutions can you test most cheaply and quickly? Start there.
  5. Design experiments for your top solution. What is the riskiest assumption? Test that first.
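The four-level structure from the steps above can be represented as plain data, which makes it easy to spot branches that violate the "at least three solutions per opportunity" rule. This is an illustrative sketch; all class names and example strings are assumptions, not from any real tool.

```python
# Sketch of an opportunity solution tree as plain dataclasses.
from dataclasses import dataclass, field

@dataclass
class Solution:
    idea: str
    experiments: list[str] = field(default_factory=list)

@dataclass
class Opportunity:
    need: str  # discovered in research, not invented
    solutions: list[Solution] = field(default_factory=list)

@dataclass
class OST:
    outcome: str  # comes from strategy/OKRs
    opportunities: list[Opportunity] = field(default_factory=list)

    def underexplored(self) -> list[str]:
        # Flag opportunities with fewer than three candidate solutions (step 3).
        return [o.need for o in self.opportunities if len(o.solutions) < 3]

tree = OST(
    outcome="Increase 30-day retention from 40% to 55%",
    opportunities=[Opportunity(
        need="Users don't find value in the first session",
        solutions=[Solution("Guided onboarding wizard",
                            ["Prototype test with 100 new users"])],
    )],
)
print(tree.underexplored())  # one solution so far, so this branch is flagged
```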

Common OST Mistakes

  • Putting solutions in the opportunity layer. "Users need a better onboarding wizard" is a solution disguised as an opportunity. The real opportunity is "Users struggle to find value in their first session."
  • Having only one solution per opportunity. If you only have one solution, you have not explored enough. More options lead to better outcomes.
  • Skipping the experiment layer. Going straight from solution to building skips the entire point of the framework.
  • Treating the tree as static. The OST should be updated weekly as you learn from experiments and interviews. New opportunities emerge. Solutions get invalidated. The tree evolves.

Assumption Mapping

Every product idea rests on a stack of assumptions. Discovery is the process of making those assumptions explicit and testing the riskiest ones before committing resources.

The Assumption Types

| Type | Question | Example |
|---|---|---|
| Desirability | Do users want this? | "Users will switch from spreadsheets to our tool for roadmap planning." |
| Usability | Can users figure this out? | "Users will understand how to drag items between roadmap columns." |
| Feasibility | Can we build this? | "We can integrate with Jira's API within 2 sprints." |
| Viability | Will the business benefit? | "Adding this feature will reduce churn by 5%." |
| Ethical | Should we build this? | "Sending daily notification emails won't annoy users." |

The Assumption Mapping Exercise

  1. List all assumptions. Take your current top initiative and list every assumption it rests on. Aim for 10-20 assumptions. Include obvious ones.
  2. Plot on a 2x2. Map each assumption on two axes: Certainty (how confident are you?) and Importance (if this assumption is wrong, does the initiative fail?).
              HIGH IMPORTANCE
                    │
    ┌───────────────┼───────────────┐
    │               │               │
    │   TEST THESE  │  WATCH THESE  │
    │   IMMEDIATELY │               │
    │               │  (Important   │
    │  (Important   │   and you're  │
    │   and you're  │   fairly      │
    │   not sure)   │   confident)  │
    │               │               │
LOW ├───────────────┼───────────────┤ HIGH
CERTAINTY           │               CERTAINTY
    │               │               │
    │   REVISIT     │  SAFE TO      │
    │   LATER       │  IGNORE       │
    │               │               │
    │  (Not that    │  (Not that    │
    │   important   │   important   │
    │   anyway)     │   and you're  │
    │               │   confident)  │
    │               │               │
    └───────────────┼───────────────┘
                    │
              LOW IMPORTANCE
  3. Test the top-left quadrant first. These are your "leap of faith" assumptions: important and uncertain. Design experiments specifically for these.

Example: Assumption Map for a Notification Feature

| Assumption | Importance | Certainty | Action |
|---|---|---|---|
| Users want to be notified about updates | High | Low | Test with fake door |
| Email is the preferred channel | Medium | Low | Survey current users |
| Users will configure notification preferences | High | Medium | Prototype test |
| We can deliver real-time notifications at scale | High | High | Engineering confirmed |
| Notifications will increase daily active usage | High | Low | A/B test with cohort |

The first and last assumptions are in the "test immediately" quadrant. Design experiments for those before writing a single line of production code.
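The 2x2 above is effectively a sort order: highest importance, lowest certainty first. A minimal sketch of that prioritization, using the notification example (the 1-5 scores are illustrative):

```python
# Sketch: map (importance, certainty) scores to the 2x2 quadrants and
# order assumptions so "test immediately" items surface first.
assumptions = [
    ("Users want to be notified about updates",         5, 1),
    ("Email is the preferred channel",                  3, 2),
    ("Users will configure notification preferences",   5, 3),
    ("We can deliver real-time notifications at scale", 5, 5),
    ("Notifications will increase daily active usage",  5, 1),
]

def quadrant(importance: int, certainty: int) -> str:
    if importance >= 4 and certainty <= 2:
        return "test immediately"
    if importance >= 4:
        return "watch"
    return "safe to ignore" if certainty >= 3 else "revisit later"

# Highest importance first; within that, lowest certainty first.
for name, imp, cert in sorted(assumptions, key=lambda a: (-a[1], a[2])):
    print(f"{quadrant(imp, cert):>16}  {name}")
```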


Research Methods for Discovery

Discovery uses research to populate the opportunity layer of your OST and to validate assumptions. Different research methods answer different types of questions.

Generative Research: Finding Opportunities

Generative research helps you discover problems and opportunities you did not know existed. Use it early in discovery when you are exploring the problem space.

User interviews: One-on-one conversations about users' experiences, needs, and workflows. The foundation of most discovery work. See IdeaPlan's detailed user research methods guide.

Contextual inquiry: Observing users in their actual work environment. More time-intensive than interviews but reveals workflows, workarounds, and pain points that users cannot articulate. See contextual inquiry.

Diary studies: Users log their experiences over 1-4 weeks. Captures longitudinal patterns that single-session methods miss. Useful for understanding habits, routines, and how adoption evolves over time.

Customer development: Structured conversations designed to test business model assumptions. Originating from Steve Blank's methodology, customer development interviews focus on willingness to pay and purchase behavior rather than product usability.

Evaluative Research: Testing Solutions

Evaluative research tests whether a specific solution works. Use it after you have generated solutions and want to validate them before building.

Usability testing: Watch users attempt tasks with your prototype or product. Five participants uncover 85% of usability issues (Nielsen, 2000). See usability testing.
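The "five users" figure comes from Nielsen's model: if each participant uncovers roughly 31% of usability problems (L = 0.31), then n participants find about 1 - (1 - L)^n of them. A quick sketch of that curve:

```python
# Nielsen's usability yield model: diminishing returns per added participant.
def problems_found(n: int, l: float = 0.31) -> float:
    return 1 - (1 - l) ** n

for n in (1, 3, 5, 10):
    print(f"{n:>2} users: {problems_found(n):.0%}")
```

The curve flattens quickly, which is why running several small tests with five users each tends to beat one large test.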

Concept testing: Show users a description, mockup, or prototype of a solution and gauge their reaction. Measures desirability before investing in usability.

A/B testing: Compare two versions of a feature with real users to measure which performs better. Requires enough traffic for statistical significance.
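"Enough traffic for statistical significance" can be checked with a standard two-proportion z-test. A stdlib-only sketch with illustrative numbers (not a substitute for a proper experimentation platform):

```python
# Two-sided two-proportion z-test: is the difference between two
# conversion rates distinguishable from noise at this sample size?
import math

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF (via the error function)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 12% vs 14% conversion with 1,000 users per arm
p = two_proportion_p_value(120, 1000, 140, 1000)
print(f"p = {p:.3f}")  # well above 0.05: not significant at this traffic
```

The same 2-point lift becomes clearly significant at roughly ten times the traffic, which is the practical meaning of "requires enough traffic."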

Surveys: Quantify how widespread a problem is or measure satisfaction with a solution. Best used after qualitative research has identified the right questions to ask.

Choosing the Right Method

| What You Need to Learn | Best Method | Sample Size |
|---|---|---|
| What problems exist | User interviews | 5-8 per segment |
| How common a problem is | Survey | 100+ |
| Whether users can use your solution | Usability testing | 5 |
| How users organize information | Card sorting | 15-20 |
| What users do over time | Diary study | 10-15 |
| Which solution performs better | A/B testing | Depends on effect size |
| What actually happens in context | Contextual inquiry | 4-6 |

Minimum Viable Tests

A minimum viable test (MVT) is the smallest, cheapest experiment that can tell you whether your riskiest assumption is true or false. The goal is to learn before you build.

The Test Spectrum

Tests range from low-fidelity (cheap, fast, less accurate) to high-fidelity (expensive, slow, more accurate):

Low fidelity ◄──────────────────────────────► High fidelity

Smoke test → Landing page → Wizard of Oz →
  Concierge → Prototype → Beta → Full build

Test Types

Smoke test / Fake door test: Add a button or menu item for a feature that does not exist. Measure how many users click it. If 15% of users try to use a feature that does not exist, that is a strong demand signal.
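Fake-door results are often based on small samples, so it helps to look at a conservative bound on the click rate rather than the raw percentage. A sketch using the Wilson score lower bound (the numbers are illustrative):

```python
# Wilson 95% lower bound on a click rate, so a handful of lucky clicks
# from a small sample doesn't oversell demand.
import math

def wilson_lower_bound(clicks: int, views: int, z: float = 1.96) -> float:
    if views == 0:
        return 0.0
    p = clicks / views
    denom = 1 + z * z / views
    centre = p + z * z / (2 * views)
    margin = z * math.sqrt(p * (1 - p) / views + z * z / (4 * views ** 2))
    return (centre - margin) / denom

# 60 clicks from 400 views: raw rate 15%, conservative lower bound ~12%
print(f"{wilson_lower_bound(60, 400):.1%}")
```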

Landing page test: Create a landing page describing the feature and measure sign-ups or interest. Works well for testing willingness to adopt before investing in building.

Wizard of Oz test: The user thinks they are interacting with a working product, but a human is manually performing the work behind the scenes. Useful for testing AI features, recommendation engines, and complex workflows without building the technology.

Concierge test: Similar to Wizard of Oz, but the user knows a human is helping. You manually deliver the service to a small group of users to test whether the value proposition works before automating.

Prototype test: Build a clickable prototype (Figma, InVision) and run usability tests with 5 users. Measures both desirability and usability without writing production code.

Beta test: Ship a minimal version of the feature to a small cohort. Measure adoption, engagement, and satisfaction before rolling out broadly. See beta testing.

Choosing the Right Test

| Assumption to Test | Best Test | Time | Cost |
|---|---|---|---|
| "Users want this feature" | Fake door test | 1-2 days | Near zero |
| "Users will pay for this" | Landing page with pricing | 1 week | $100-500 (ads) |
| "The AI model works well enough" | Wizard of Oz | 1-2 weeks | Human labor |
| "Users can complete the workflow" | Prototype usability test | 3-5 days | Prototype time |
| "This drives the target metric" | Beta with cohort | 2-4 weeks | Engineering time |

Writing Good Experiment Briefs

Every experiment should have a brief written before it starts:

EXPERIMENT BRIEF
═══════════════════════════════════════
Assumption: [The specific assumption being tested]

Hypothesis: We believe that [change] will result
in [outcome] for [users]. We will know this is
true when [measurable criteria].

Method: [How you will test it]

Success criteria: [Specific threshold]
  - Example: "At least 10% of users who see
    the fake door click it"

Sample size: [How many users/participants]

Duration: [How long the test will run]

Decision: If success → [next step]
          If failure → [next step]

The "Decision" section is critical. Before running the experiment, decide what you will do with each possible result. This prevents post-hoc rationalization ("well, 8% is close to 10%, so let's build it anyway").


Discovery Across the Product Lifecycle

Discovery looks different depending on where your product is in its lifecycle.

Pre-Product-Market Fit

At this stage, your entire job is discovery. You are trying to find a problem worth solving, a solution users will adopt, and a business model that works.

Focus: Broad customer development interviews. Talk to 50+ people. Test wildly different solutions. Expect most ideas to fail. Use the product-market fit lens to evaluate progress.

Cadence: Daily discovery activities. Multiple interviews per week. Rapid prototyping and testing cycles measured in days, not weeks.

Common mistake: Building too much before validating demand. Use MVTs aggressively. If users will not use a fake door, they will not use the real feature either.

Growth Stage

You have product-market fit. Now you need to deepen value for existing users and expand to new segments.

Focus: Targeted discovery within specific opportunity areas. Use analytics to identify where users drop off, where activation stalls, and which segments have the lowest retention. Run discovery specifically on those areas.

Cadence: Weekly interviews. Monthly experiment cycles. Quarterly opportunity reassessment.

Common mistake: Shifting entirely to delivery mode because "we know what users want." You do not. Markets evolve. Competitors move. User needs shift. Keep the discovery muscle active.

Maturity / Scale Stage

The product is established. Growth comes from optimization, expansion, and platform plays.

Focus: Discovery for new product lines, new segments, and major platform bets. Also, discovery for optimization of existing flows (onboarding, conversion, retention).

Cadence: Weekly interviews (maintain the habit). Experiment velocity slows for big bets but accelerates for optimization.

Common mistake: Innovation theater. Running "discovery sprints" that produce impressive presentations but never lead to actual product changes. Discovery must lead to decisions.


Discovery Anti-Patterns

Anti-Pattern 1: Discovery Theater

What it looks like: The team runs interviews, builds OSTs, maps assumptions. And then builds whatever the stakeholder originally wanted. Discovery artifacts exist but do not influence decisions.

Why it happens: The team adopted discovery practices under pressure but leadership has not bought in. Decisions are still made top-down.

Fix: Before starting discovery, agree with leadership on the decision that discovery will inform. "We will use the results of this discovery to decide whether to build Feature X, pivot to a different approach, or kill the initiative entirely." If leadership will not agree to this, you are doing discovery theater.

Anti-Pattern 2: Analysis Paralysis

What it looks like: The team is perpetually "doing discovery" and never building anything. Every assumption spawns three more experiments. The OST grows endlessly but never produces a decision.

Why it happens: Fear of making the wrong bet. Or, more commonly, the team has not defined clear success criteria for their experiments.

Fix: Time-box discovery. "We will spend two weeks testing this opportunity. At the end, we will decide: build, pivot, or kill." Set decision criteria before you start. Use the experiment brief template above.

Anti-Pattern 3: Solution-First Discovery

What it looks like: Someone (often a stakeholder or founder) has a specific solution in mind. The team runs "discovery" to validate it, cherry-picking supportive evidence and dismissing contradictory signals.

Why it happens: Confirmation bias. When you already believe in a solution, you unconsciously filter evidence to support it.

Fix: Frame discovery around opportunities, not solutions. Instead of "validate whether users want Feature X," ask "what are the biggest barriers to activation for new users?" The research will tell you whether Feature X is the right solution. Or whether a different approach would work better.

Anti-Pattern 4: Skipping Feasibility

What it looks like: PM and designer run a thorough discovery process, validate user desirability, and hand a fully-formed spec to engineering. Engineering discovers it will take 3x longer than expected due to technical constraints nobody explored.

Why it happens: Engineers are not part of the product trio during discovery.

Fix: Include an engineer in every discovery conversation. Not every interview, but definitely every assumption mapping session, every experiment design, and every solution evaluation. Getting their perspective on feasibility constraints early saves weeks of rework later.

Anti-Pattern 5: Never Killing Ideas

What it looks like: Ideas that fail experiments get "pivoted" endlessly rather than killed. The team cannot bring themselves to abandon something they have invested time in (sunk cost fallacy).

Fix: Celebrate kills. When an experiment shows that an idea will not work, that is a success. You just saved weeks of engineering time on something that would not have moved the needle. Make a habit of sharing "things we learned not to build" alongside "things we shipped."


Building a Discovery Culture

Discovery is not a process you install. It is a culture you build. Here is how to start.

Step 1: Start Small

Do not try to implement continuous discovery across the entire organization at once. Start with one product trio. Run one interview per week for four weeks. Map findings on an OST. Run one experiment. Show results to leadership. Let the results speak.

Step 2: Make Discovery Visible

Share discovery work broadly. Post interview highlights in Slack. Present experiment results at sprint reviews. Make the OST visible on a wall or wiki page. When leadership sees that discovery produces useful insights, they will support more of it.

Step 3: Connect Discovery to Outcomes

The strongest argument for discovery is a causal chain: "We discovered through interviews that users struggle with X. We tested three solutions and found that solution B performed best. We built solution B and it improved [metric] by [amount]." Tell this story as often as you can.

Step 4: Get Engineering Involved

Discovery is not just a PM and designer activity. Engineers bring a unique perspective. They know what is feasible, they spot technical risks early, and they build better solutions when they understand the problem firsthand.

Invite engineers to user interviews. Share interview clips in engineering channels. Include a tech lead in assumption mapping sessions. The goal is for the entire product trio to have shared context about user problems.

Step 5: Build the Infrastructure

Sustainable discovery requires some infrastructure:

  • A recruiting pipeline: An in-app mechanism to recruit interview participants on an ongoing basis
  • A research repository: A searchable archive of past interview notes, experiment results, and OSTs
  • A regular cadence: Dedicated time on the calendar for interviews and synthesis
  • Leadership air cover: An executive sponsor who understands that discovery takes time and that not every experiment will validate the original hypothesis

The Discovery Toolkit

Frameworks for Discovery

  • Continuous Discovery Habits by Teresa Torres. The definitive guide to making discovery a weekly practice
  • Inspired by Marty Cagan. How the best product teams discover and deliver products users love
  • The Mom Test by Rob Fitzpatrick. How to talk to customers without getting lied to
  • Sprint by Jake Knapp. How to solve big problems and test new ideas in five days

Key Takeaways

  • Product discovery is the process of deciding what to build before committing engineering resources. It reduces the risk that you build the wrong thing.
  • Every product idea faces four risks: value (do users want it?), usability (can they use it?), feasibility (can we build it?), and viability (does it work for the business?). Discovery addresses all four.
  • Continuous discovery means making small discovery activities a weekly habit. Not running occasional big-bang research projects. One interview per week is more valuable than ten interviews per quarter.
  • The opportunity solution tree is the most effective framework for structuring discovery. It connects outcomes to opportunities to solutions to experiments, making your reasoning visible and testable.
  • Always generate multiple solutions per opportunity. The first idea is rarely the best idea.
  • Test your riskiest assumptions with the smallest possible experiment. Fake door tests, prototypes, and wizard-of-oz tests can validate ideas in days, not months.
  • Discovery that does not lead to decisions is theater. Time-box your discovery work and commit to acting on results.

Next Steps:

  1. Schedule your first weekly user interview for next week
  2. Identify your team's current desired outcome and build a starting OST
  3. Map assumptions for your top initiative and identify the riskiest one to test


About This Guide

Last Updated: February 12, 2026

Reading Time: 30 minutes

Expertise Level: Intermediate to Advanced

Citation: Adair, Tim. "The Complete Guide to Product Discovery: Methods, Frameworks, and Habits." IdeaPlan, 2026. https://ideaplan.io/guides/the-complete-guide-to-product-discovery

Tim Adair

Strategic executive leader and author of all content on IdeaPlan. Background in product management, organizational development, and AI product strategy.

Frequently Asked Questions

What is the difference between product discovery and product delivery?
Discovery is the process of deciding what to build. Delivery is the process of building it. Discovery answers 'should we build this?' through research, experimentation, and validation. Delivery answers 'how do we build this well?' through engineering, design, and quality assurance. The best teams run discovery and delivery in parallel. The product trio discovers future work while engineers deliver current work.
How much time should a product team spend on discovery?
A healthy ratio is roughly 30-40% of a product trio's time on discovery and 60-70% on delivery. In practice, this means at least one user interview per week, one assumption test per sprint, and one opportunity assessment per quarter. Teams that spend less than 20% on discovery tend to build features that do not move metrics.
What is an opportunity solution tree?
An opportunity solution tree (OST) is a visual framework developed by [Teresa Torres](https://www.producttalk.org/) that maps the connections between a desired outcome, the opportunities that could drive that outcome, the solutions that could address each opportunity, and the experiments that validate each solution. It helps product teams make explicit the reasoning behind their choices and evaluate multiple paths before committing.