Discovery · Advanced · 16 min read

Opportunity Solution Tree: Teresa Torres' Framework for Continuous Discovery

Build an Opportunity Solution Tree step by step with real examples, interview techniques, and experiment design for continuous product discovery.

Best for: Product trios (PM, designer, engineer) who want a structured approach to continuous discovery and evidence-based decision-making
By Tim Adair · Published 2026-02-08

Quick Answer (TL;DR)

The Opportunity Solution Tree (OST) is a visual framework created by Teresa Torres that maps the path from a desired outcome to opportunities (customer needs, pain points, and desires) to solutions (features and ideas) to experiments (tests to validate assumptions). It structures continuous discovery so product teams can make better decisions, avoid building the wrong things, and maintain a clear connection between what they're building and why. The OST is the centerpiece of Torres' Continuous Discovery Habits methodology.


What Is an Opportunity Solution Tree?

An Opportunity Solution Tree is a hierarchical visual map that connects your team's desired business outcome to the specific experiments you're running today. It provides a clear, traceable line from strategy to execution.

The tree has four levels:

  • Outcome (the root) -- The measurable business or product outcome you're trying to achieve
  • Opportunities (branches) -- Customer needs, pain points, and desires discovered through research
  • Solutions (smaller branches) -- Ideas for addressing each opportunity
  • Experiments (leaves) -- Tests designed to validate whether each solution will work

    The OST was developed by Teresa Torres, a product discovery coach and author of Continuous Discovery Habits. It addresses one of the most persistent problems in product management: the gap between understanding what customers need and deciding what to build. Too often, product teams jump from "customers told us they want X" directly to "let's build X" without exploring the problem space, generating multiple solutions, or testing assumptions.

    Why Trees, Not Lists?

    Traditional backlogs are flat lists of features. They obscure the reasoning behind each item and make it impossible to see alternative solutions. The tree structure solves this by:

  • Making the reasoning visible: you can trace any experiment back through its solution, opportunity, and outcome
  • Showing alternatives: each opportunity has multiple possible solutions, reminding the team they have choices
  • Preventing pet features: if a solution doesn't connect to an opportunity that connects to an outcome, it doesn't belong on the tree
  • Enabling pivoting: if an experiment invalidates a solution, you can move to another solution for the same opportunity without starting over
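    The traceability the tree provides can be sketched as a small data structure. Here is a minimal sketch in Python; the node labels and the `Node` class are hypothetical illustrations, not something from Torres' book:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node in an Opportunity Solution Tree."""
    label: str
    kind: str  # "outcome" | "opportunity" | "solution" | "experiment"
    parent: Node | None = None
    children: list[Node] = field(default_factory=list)

    def add(self, label: str, kind: str) -> Node:
        """Attach a child node and return it."""
        child = Node(label, kind, parent=self)
        self.children.append(child)
        return child

    def trace(self) -> list[str]:
        """Walk from this node back up to the root outcome."""
        path, node = [], self
        while node is not None:
            path.append(f"{node.kind}: {node.label}")
            node = node.parent
        return list(reversed(path))

# Build one branch of a hypothetical tree, then trace a leaf back to the root.
outcome = Node("Increase trial-to-paid conversion from 8% to 12%", "outcome")
opportunity = outcome.add("Users don't know which features to try first", "opportunity")
solution = opportunity.add("Interactive onboarding checklist", "solution")
experiment = solution.add("Prototype test with 8 trial users", "experiment")

print(experiment.trace())  # outcome -> opportunity -> solution -> experiment
```

    Because every experiment carries a parent chain, "why are we running this?" is always answerable by walking up the tree.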

    The Four Levels in Detail

    Level 1: Outcome

    The outcome is the single, measurable metric your team is trying to move. It's set by leadership (or negotiated between the team and leadership) and defines the team's mission for a given period.

    What makes a good outcome:

  • It's measurable: "Increase trial-to-paid conversion rate from 8% to 12%"
  • It's within the team's influence: The team can directly impact it through product changes
  • It's time-bound: Tied to a quarter or other planning period
  • It's a lagging indicator of customer value: When customers get more value, the metric improves

    Examples of good outcomes:

    Outcome | Why It Works
    Increase 30-day retention from 65% to 75% | Measurable, directly tied to product value, within team's control
    Reduce time-to-first-value from 5 days to 2 days | Specific, measurable, directly impacts activation
    Increase weekly active usage from 3 to 5 sessions | Behavioral metric, tied to engagement and habit formation
    Reduce support tickets related to billing by 40% | Measurable, specific problem area, clear success criteria

    Examples of poor outcomes:

  • "Improve the user experience" -- Not measurable
  • "Increase revenue" -- Too broad; not within a single team's control
  • "Launch feature X" -- This is an output, not an outcome
  • "Make users happier" -- Not specific or measurable

    Level 2: Opportunities

    Opportunities are customer needs, pain points, and desires that, if addressed, would move the outcome. They come from research -- primarily customer interviews, but also support tickets, analytics, surveys, and observation.

    Opportunities are NOT solutions. This distinction is critical:

    Solution (wrong) | Opportunity (right)
    "Build a dashboard" | "Users can't quickly see whether their project is on track"
    "Add Slack integration" | "Users miss important updates because they don't check our app frequently"
    "Create onboarding wizard" | "New users don't understand what to do first after signing up"
    "Add search filters" | "Users waste time scrolling through irrelevant results"

    How to discover opportunities:

    The primary method is weekly customer interviews. Torres recommends that the product trio (PM, designer, engineer) interviews at least one customer per week, every week, as a sustainable habit. These aren't big, formal research projects -- they're lightweight, 30-minute conversations that continuously feed the opportunity space.

    Interview structure for opportunity discovery:

  • Start with a story prompt (5 minutes): "Tell me about the last time you tried to [activity related to your outcome]."
  • Dig into the story (15 minutes): Follow the narrative. "What happened next? What were you thinking at that point? What did you try?"
  • Explore pain points and needs (10 minutes): "What was the hardest part? What would have made it easier? What did you wish you had?"

    Organizing opportunities on the tree:

    Opportunities should be organized hierarchically. Broad opportunity areas break down into more specific sub-opportunities:

    Outcome: Increase trial-to-paid conversion from 8% to 12%
    ├── Users don't understand the value proposition during trial
    │   ├── Users don't know which features to try first
    │   ├── Users don't see how the product fits their workflow
    │   └── Users feel overwhelmed by too many options
    ├── Users hit technical barriers during setup
    │   ├── Data import process is confusing
    │   ├── Integration setup requires developer help
    │   └── Users don't have the right data to get started
    └── Users don't experience enough value before the trial ends
        ├── Trial period is too short for complex use cases
        ├── Users don't complete enough actions to see results
        └── Users compare to free alternatives and don't see the premium value

    Level 3: Solutions

    Solutions are specific ideas for addressing each opportunity. Each opportunity should have multiple solutions -- this is where the tree structure shines. By generating 3-5 solutions per opportunity, you avoid fixating on the first idea and increase your chances of finding the best approach.

    Generating solutions:

    For each opportunity, the product trio brainstorms multiple possible solutions:

    Opportunity | Solution A | Solution B | Solution C
    Users don't know which features to try first | Interactive onboarding checklist | Personalized "getting started" path based on role | "Quick wins" tutorial showing 3 high-value actions
    Data import process is confusing | Wizard-style guided import | CSV template with pre-populated sample data | One-click import from common tools (Trello, Asana)
    Users don't experience enough value before trial ends | Extend trial to 30 days | Pre-populate account with sample data to show value | Trigger a "value milestone" email when users complete key actions

    Key principles:

  • Generate before evaluating. List 3-5 solutions before discussing which is best.
  • Consider small and large. Not every solution needs to be a major feature. Sometimes the best solution is a tooltip, an email, or a copy change.
  • Include non-product solutions. Sales playbooks, documentation, customer success interventions, and marketing content are all valid solutions.

    Level 4: Experiments

    Experiments are tests designed to validate or invalidate the assumptions underlying each solution. This is where the OST prevents teams from building full features only to discover they don't work.

    Assumption mapping:

    Before designing experiments, identify the assumptions embedded in each solution:

    For the solution "Interactive onboarding checklist":

  • Desirability assumption: Users will engage with a checklist (they won't ignore it)
  • Usability assumption: Users can understand and complete each checklist step
  • Feasibility assumption: We can build the checklist within our technical constraints
  • Viability assumption: The checklist will improve trial-to-paid conversion enough to justify the investment

    Types of experiments:

    Experiment Type | What It Tests | Time to Run | Example
    Customer interview | Desirability, understanding | 1-2 days | Show a concept mockup and gauge reaction
    Prototype test | Usability, desirability | 3-5 days | Build a clickable prototype and observe users
    Fake door test | Desirability at scale | 1-2 weeks | Add a button for the feature, measure clicks, show "coming soon"
    Concierge test | Value proposition | 1-2 weeks | Manually deliver the solution to 5-10 users
    A/B test | Impact on outcome | 2-4 weeks | Build a minimal version and measure conversion impact
    Wizard of Oz | Full experience feasibility | 1-3 weeks | Simulate the feature with human effort behind the scenes

    Experiment design template:

    For each experiment, document:

  • Assumption being tested: What specific belief are we validating?
  • Method: How will we test it?
  • Success criteria: What result would confirm the assumption? What would disprove it?
  • Sample size: How many users/data points do we need?
  • Timeline: How long will the experiment run?
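    These fields map naturally onto a structured record. A minimal sketch in Python -- the field names and example values are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class ExperimentPlan:
    """One experiment on the tree, documented before it runs."""
    assumption: str        # the specific belief being validated
    method: str            # how we will test it
    success_criteria: str  # result that would confirm or disprove the assumption
    sample_size: int       # users or data points needed
    timeline_days: int     # how long the experiment will run

plan = ExperimentPlan(
    assumption="Trial users will engage with an onboarding checklist",
    method="Clickable prototype test with think-aloud sessions",
    success_criteria="At least 6 of 8 users complete the checklist unprompted",
    sample_size=8,
    timeline_days=5,
)
```

    Writing the success criteria down before the experiment runs is what keeps the result honest: the bar is set in advance, not fitted to the data afterward.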

    Building an OST: Step-by-Step

    Step 1: Set Your Outcome (1 day)

    Work with leadership to define a clear, measurable outcome for your team. If leadership assigns an output ("build feature X"), negotiate it into an outcome ("increase the metric that feature X is supposed to improve").

    Step 2: Seed the Opportunity Space (2-4 weeks)

    Conduct 6-10 customer interviews focused on stories related to your outcome. Synthesize findings into an initial set of opportunities. Organize them hierarchically on the tree.

    Step 3: Prioritize Opportunities (1 day)

    You can't pursue every opportunity simultaneously. Evaluate based on:

  • Size: How many customers experience this pain point?
  • Market factors: Does addressing this create competitive advantage?
  • Company factors: Does this align with company strategy?
  • Customer factors: How severe is the pain?

    Select 1-3 opportunities to focus on for the next few weeks.
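    One lightweight way to compare opportunities across these four factors is a simple weighted score. The weights and the 1-5 scale below are illustrative assumptions, not part of Torres' method, which favors structured comparison over formal scoring:

```python
# Score each opportunity 1-5 on the four factors; the weights are illustrative.
WEIGHTS = {"size": 0.3, "market": 0.2, "company": 0.2, "customer": 0.3}

def opportunity_score(scores: dict[str, int]) -> float:
    """Weighted sum of the four prioritization factors."""
    return sum(WEIGHTS[factor] * scores[factor] for factor in WEIGHTS)

# Hypothetical scores for two opportunities from the example tree above.
opportunities = {
    "Data import process is confusing": {"size": 4, "market": 2, "company": 3, "customer": 5},
    "Trial period is too short for complex use cases": {"size": 2, "market": 3, "company": 2, "customer": 3},
}

ranked = sorted(opportunities, key=lambda o: opportunity_score(opportunities[o]), reverse=True)
```

    The point of the exercise is less the number itself than forcing the trio to discuss each factor explicitly before committing.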

    Step 4: Generate Solutions (Half day)

    For each prioritized opportunity, brainstorm 3-5 solutions with your product trio. Don't evaluate yet -- just generate.

    Step 5: Map Assumptions (Half day)

    For each solution, identify the key assumptions that must be true for it to work. Categorize them as desirability, usability, feasibility, or viability.

    Step 6: Design and Run Experiments (Ongoing)

    Start with the riskiest assumptions first. Design lightweight experiments. Run them. Learn. Update the tree based on results.

    Step 7: Iterate Continuously

    The OST is a living document. Every week:

  • Conduct at least one customer interview to discover new opportunities
  • Review experiment results and update the tree
  • Prune solutions that have been invalidated
  • Add new solutions based on what you've learned

    Real-World OST Example: Spotify's Podcast Discovery

    Here's how Spotify might have structured an OST for improving podcast discovery:

    Outcome: Increase podcast listening hours per user from 2 to 4 hours/week
    
    ├── Users don't know what podcasts to listen to
    │   ├── Solution: Personalized podcast recommendations based on music taste
    │   │   ├── Experiment: Interview 10 users about discovery behavior
    │   │   └── Experiment: A/B test recommendation algorithm on 5% of users
    │   ├── Solution: "Daily Podcast Mix" similar to Daily Mix for music
    │   │   └── Experiment: Fake door test on home screen
    │   └── Solution: Social sharing -- see what friends are listening to
    │       └── Experiment: Survey 200 users on interest in social podcast features
    
    ├── Users start podcasts but don't finish
    │   ├── Solution: Playback speed controls (1.5x, 2x)
    │   │   └── Experiment: Usage data analysis of current speed control adoption
    │   ├── Solution: Episode summaries with chapter markers
    │   │   └── Experiment: Prototype test with 8 users
    │   └── Solution: "Continue listening" prominent placement
    │       └── Experiment: A/B test placement on home screen
    
    └── Users forget to come back for new episodes
        ├── Solution: Smart notifications when new episodes drop
        │   └── Experiment: Opt-in notification test with 1,000 users
        ├── Solution: Auto-download new episodes from followed podcasts
        │   └── Experiment: Concierge test -- manually send "new episode" reminders
        └── Solution: Weekly podcast digest email
            └── Experiment: Email campaign to 5,000 users, measure re-engagement

    Common Mistakes and Pitfalls

    1. Starting with Solutions Instead of Opportunities

    The most pervasive mistake. Teams put "Build feature X" on the tree without first understanding what customer need it addresses. If you can't articulate the opportunity, the solution doesn't belong on the tree yet.

    2. Only One Solution Per Opportunity

    If you have exactly one solution for each opportunity, you're not exploring the solution space. This leads to anchoring on the first idea and missing better alternatives. Force yourself to generate at least three solutions before committing.

    3. Confusing Outputs with Outcomes

    "Launch the new onboarding flow" is an output. "Increase activation rate from 30% to 45%" is an outcome. If your tree starts with an output, the entire structure is compromised because you've already decided on the solution before exploring alternatives.

    4. Building the Tree Once and Never Updating It

    The OST is a living artifact. It should change weekly as you learn from interviews and experiments. A stale tree is just a pretty diagram. Build the habit of updating the tree every week during your trio sync.

    5. Skipping Experiments

    Some teams use the OST to organize their thinking but then skip the experiment layer and jump straight to building. This defeats the purpose. The experiment layer is where you de-risk solutions before committing engineering resources.

    6. Making the Tree Too Big

    A tree with 50 opportunities, 200 solutions, and 400 experiments is unusable. Focus on the 3-5 most important opportunities and the 2-3 most promising solutions for each. Archive the rest. You can always bring them back.

    7. Working in Isolation

    The OST is designed for the product trio (PM + designer + engineer) to build and maintain together. A tree built by the PM alone misses technical feasibility insights from engineering and usability insights from design.

    The Continuous Discovery Habits Behind OST

    Teresa Torres' framework isn't just about the tree -- it's about the habits that keep the tree alive and useful:

    Habit 1: Weekly Customer Interviews

    Interview at least one customer per week. Not a big research project -- a lightweight, 30-minute conversation. Use an automated recruiting system so interviews happen consistently without manual scheduling overhead each time.

    Habit 2: Interview Snapshots

    After each interview, create a one-page summary capturing:

  • Key quotes (2-3 direct quotes)
  • Opportunities identified (pain points, needs, desires)
  • Surprises (anything unexpected)
  • Quick sketch of the customer's workflow or mental model

    Habit 3: Opportunity Mapping

    Weekly, review your interview snapshots and update the opportunity space on your tree. Are new opportunities emerging? Are existing opportunities being validated by multiple interviews?

    Habit 4: Assumption Testing

    Continuously identify and test the riskiest assumptions in your tree. Run small experiments weekly -- not monthly or quarterly. This keeps the learning cycle fast and prevents big bets on unvalidated assumptions.

    Habit 5: Trio Decision-Making

    The product trio (PM, designer, engineer) makes discovery decisions together. Not the PM deciding and informing the others. Shared understanding leads to better solutions and stronger buy-in.

    OST vs. Other Discovery Frameworks

    Factor | OST | Design Thinking | JTBD | Lean Startup | Story Mapping
    Focus | Connecting outcomes to experiments | Creative problem-solving | Customer motivations | Business validation | User journey and delivery planning
    Cadence | Continuous (weekly) | Project-based | Research-phase | Build-measure-learn cycles | Planning-phase
    Key artifact | Visual tree | Prototypes | Job maps | MVPs | Story map
    Team involved | Product trio | Cross-functional | Research team | Founding/product team | Cross-functional
    Best for | Ongoing discovery alongside delivery | Tackling ambiguous problems | Understanding "why" behind behavior | Validating business models | Planning releases
    Unique strength | Maintains connection between strategy and execution | Deep user empathy | Reveals hidden motivations | Speed to market learning | Visualization of user experience

    OST + JTBD

    JTBD interviews are an excellent way to seed the opportunity space in your OST. JTBD reveals the jobs customers are trying to do and the underserved outcomes within those jobs. These become opportunities on your tree.

    OST + RICE

    Once you've identified opportunities and solutions on your tree, use RICE scoring to prioritize which solutions to pursue first. The OST ensures you're scoring the right things; RICE ensures you're sequencing them effectively.
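    RICE itself is simple arithmetic: (Reach x Impact x Confidence) / Effort. A minimal sketch in Python, with invented numbers for two of the checklist-opportunity solutions:

```python
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE score = (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort

# Illustrative inputs: reach in users per quarter, impact on Intercom's
# 0.25-3 scale, confidence as a fraction (0-1), effort in person-months.
checklist = rice(reach=2000, impact=1.0, confidence=0.8, effort=2)       # 800.0
import_wizard = rice(reach=2000, impact=2.0, confidence=0.5, effort=6)   # ~333.3
```

    Here the checklist wins not because its impact is higher but because it is cheaper and better validated, which is exactly the trade-off RICE is designed to surface.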

    Best Practices for OST Implementation

    Start Small

    Don't try to build a complete tree in one session. Start with your outcome and 3-5 opportunities from recent customer conversations. Add to it weekly as you learn more.

    Make It Visible

    Put your OST on a wall, in Miro, in FigJam, or in a dedicated tool. Make it the first thing the team sees during planning sessions. Visibility keeps the tree alive and relevant.

    Use the Tree in Stakeholder Conversations

    When a stakeholder asks "Why aren't we building feature X?" walk them through the tree. Show how the outcome connects to opportunities, opportunities to solutions, and how you're testing solutions through experiments. This builds trust in your decision-making process.

    Review the Tree Weekly

    During your weekly product trio sync:

  • Review experiment results from the past week
  • Update the tree based on learnings (prune invalidated solutions, add new ones)
  • Review new interview insights and update opportunities
  • Plan next week's experiments

    Track Your Discovery Velocity

    Measure how many interviews you're conducting per week, how many experiments you're running, and how many assumptions you're validating. These leading indicators tell you whether your discovery process is healthy.
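    A minimal sketch of such tracking in Python, using a hypothetical activity log -- the activity kinds and dates are invented for illustration:

```python
from collections import Counter
from datetime import date

# Hypothetical discovery log: (activity kind, date it happened)
activity_log = [
    ("interview", date(2026, 2, 2)),
    ("interview", date(2026, 2, 9)),
    ("experiment", date(2026, 2, 9)),
    ("assumption_validated", date(2026, 2, 10)),
]

def weekly_velocity(log):
    """Count discovery activities per (ISO week, kind)."""
    return Counter((d.isocalendar()[1], kind) for kind, d in log)

velocity = weekly_velocity(activity_log)
# A healthy cadence might be at least one interview and one experiment per week;
# weeks with zero entries are the early-warning signal.
```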

    Connect Discovery to Delivery

    The OST bridges discovery and delivery. When an experiment validates a solution, it moves into your delivery backlog with full context: the outcome it serves, the opportunity it addresses, the evidence supporting it, and the assumptions that have been validated. Engineers who understand the "why" build better solutions.

    Getting Started with Opportunity Solution Trees

  • Form your product trio -- PM, designer, and one engineer committed to discovery
  • Set a clear outcome tied to a metric your team can influence
  • Conduct 4-6 customer interviews to seed your opportunity space
  • Build your initial tree with outcome, 3-5 opportunities, and 2-3 solutions per opportunity
  • Identify the riskiest assumption in your top-priority solution
  • Design and run an experiment to test that assumption this week
  • Review and update the tree weekly based on what you learn
  • Establish a weekly interview cadence to continuously feed the tree with fresh insights

    The Opportunity Solution Tree transforms product discovery from an occasional, unstructured activity into a continuous, disciplined practice. It ensures that everything your team builds is connected to a real customer need and a measurable outcome. That traceability -- from strategy through discovery through delivery -- is what separates teams that ship features from teams that ship outcomes.
