What This Template Is For
Every product decision rests on a stack of assumptions. Some are safe bets. Others are complete guesses dressed up as strategy. The dangerous ones sit in the middle: plausible enough that nobody questions them, untested enough that they could sink the entire initiative if they turn out to be wrong.
This template gives you a structured format for surfacing, categorizing, and testing the assumptions behind a product decision. It walks through four stages: assumption extraction (pulling implicit beliefs out of your PRD, roadmap, or pitch deck), risk scoring (which assumptions would cause the most damage if wrong), experiment design (the cheapest way to test each high-risk assumption), and evidence logging (what you learned and what it means for the product).
The approach draws from the Lean Startup methodology and pairs well with the broader Product Discovery Handbook, which covers how assumption testing fits into a full discovery cycle. If your tests call for customer interviews, use a customer interview template alongside this one.
When to Use This Template
- Before kicking off a new product initiative. Extract the assumptions behind your roadmap item and test the riskiest ones before writing a single line of code.
- When a stakeholder makes a confident claim without evidence. "Our users definitely want real-time collaboration" is an assumption, not a fact. This template helps you test it.
- During quarterly planning. Score assumptions behind each proposed initiative to compare risk levels and decide where to invest in validation first.
- When pivoting or changing direction. A pivot introduces a new set of assumptions. Map them before committing the team.
- After a failed launch. Post-mortems often reveal untested assumptions. Use this template to build the habit of testing them in advance.
How to Use This Template
- Start with the decision. Write down the product decision or initiative you are evaluating. Be specific: "Launch a freemium tier" is better than "Growth strategy."
- Extract assumptions. Read through the PRD, roadmap item, or pitch deck and list every belief that must be true for the initiative to succeed. Aim for 10-20 assumptions.
- Score each assumption. Rate each on two dimensions: how critical it is to success (impact) and how little evidence you have for it (uncertainty). High-impact, high-uncertainty assumptions are your priority.
- Design experiments. For your top 3-5 assumptions, design the cheapest, fastest test that would produce a clear pass/fail signal.
- Run and log results. Execute the experiments, record the evidence, and update your plan based on what you learn.
The Template
Part 1: Initiative Context
| Field | Details |
|---|---|
| Initiative | [Name of the product initiative or feature] |
| Owner | [PM or team lead] |
| Target Launch | [Planned launch date or quarter] |
| Decision at Stake | [What decision are you trying to validate?] |
| Success Metric | [How will you measure if the initiative worked?] |
Part 2: Assumption Extraction
List every assumption that must be true for this initiative to succeed. Pull them from your PRD, pitch deck, business case, or team discussions.
Categories to scan:
- User assumptions. Who is the target user? Do they have the problem you think they have? Do they care enough to switch from their current solution?
- Problem assumptions. Is the problem frequent enough, painful enough, and urgent enough to justify a solution?
- Solution assumptions. Will your proposed solution actually solve the problem? Can users figure out how to use it?
- Business assumptions. Will users pay for this? Can you acquire them at a viable cost? Is the market large enough?
- Technical assumptions. Can you build this within the timeline and budget? Are there dependencies or integration risks?
| # | Assumption | Category | Source |
|---|---|---|---|
| 1 | [e.g., "SMB marketing teams struggle with campaign coordination"] | User | [PRD section 2] |
| 2 | [e.g., "Users will pay $50/mo for this capability"] | Business | [Pricing model doc] |
| 3 | [e.g., "We can integrate with Salesforce in 4 weeks"] | Technical | [Engineering estimate] |
| 4 | |||
| 5 | |||
| 6 | |||
| 7 | |||
| 8 | |||
| 9 | |||
| 10 |
Part 3: Assumption Prioritization Matrix
Score each assumption on two axes (1-5 scale):
- Impact: If this assumption is wrong, how badly does it hurt the initiative? (5 = initiative fails entirely, 1 = minor inconvenience)
- Uncertainty: How little evidence do we have? (5 = pure guess, 1 = well-established fact with data)
| # | Assumption (short) | Impact (1-5) | Uncertainty (1-5) | Risk Score (I x U) | Priority |
|---|---|---|---|---|---|
| 1 | |||||
| 2 | |||||
| 3 | |||||
| 4 | |||||
| 5 |
Priority guide:
- Risk Score 16-25: Test immediately. Do not proceed without evidence.
- Risk Score 9-15: Test before committing significant resources.
- Risk Score 1-8: Monitor but acceptable to proceed.
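If you track assumptions in a spreadsheet export or a script, the scoring and bucketing above take only a few lines. A minimal sketch in Python (the sample assumptions here are hypothetical, for illustration only):

```python
# Score assumptions and bucket them per the priority guide above.
# Sample data is hypothetical, for illustration only.
assumptions = [
    {"name": "Users will pay $50/mo", "impact": 5, "uncertainty": 4},
    {"name": "Salesforce integration in 4 weeks", "impact": 3, "uncertainty": 2},
    {"name": "Organic search drives discovery", "impact": 4, "uncertainty": 3},
]

def priority(risk_score: int) -> str:
    """Map a risk score (impact x uncertainty, 1-25) to a priority tier."""
    if risk_score >= 16:
        return "Test immediately"
    if risk_score >= 9:
        return "Test before committing resources"
    return "Monitor"

for a in assumptions:
    a["risk"] = a["impact"] * a["uncertainty"]
    a["priority"] = priority(a["risk"])

# Print highest-risk assumptions first
for a in sorted(assumptions, key=lambda a: a["risk"], reverse=True):
    print(f'{a["risk"]:>2}  {a["priority"]:<34} {a["name"]}')
```

Sorting by risk score descending gives you the testing order for Part 4: everything in the "Test immediately" tier gets an experiment before the initiative proceeds.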
Part 4: Experiment Design
For each high-priority assumption, design the cheapest test that produces a clear signal.
Assumption #___: [Statement]
| Field | Details |
|---|---|
| Hypothesis | We believe that [assumption]. We will know this is true if [measurable outcome]. |
| Test Method | [Interview / Survey / Prototype test / Landing page / Data analysis / Concierge / Smoke test] |
| Sample Size | [How many users or data points needed for a credible signal?] |
| Timeline | [How long will the test take?] |
| Cost | [Time, money, or resources required] |
| Pass Criteria | [Specific threshold, e.g., "4 of 5 interviewees describe this pain point unprompted"] |
| Fail Criteria | [What result would disprove the assumption?] |
| Owner | [Who runs this test?] |
(Copy this block for each assumption you are testing.)
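If you prefer to keep experiment records in code rather than copied tables, the block above maps naturally onto a small record type. A sketch, assuming hypothetical field names that mirror the Part 4 table:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    """One experiment record, mirroring the Part 4 table fields."""
    assumption: str
    method: str
    sample_size: int
    pass_threshold: int   # minimum positive signals needed to validate
    fail_threshold: int   # at or below this, the assumption is invalidated

    def verdict(self, positives: int) -> str:
        """Apply the pass/fail criteria; anything in between is inconclusive."""
        if positives >= self.pass_threshold:
            return "Validated"
        if positives <= self.fail_threshold:
            return "Invalidated"
        return "Needs more data"

# Hypothetical example: 4 of 5 interviewees must describe the pain point
interviews = Experiment(
    assumption="SMB teams struggle with campaign coordination",
    method="Interview",
    sample_size=5,
    pass_threshold=4,
    fail_threshold=1,
)
print(interviews.verdict(4))  # prints "Validated"
```

Note the deliberate gap between the pass and fail thresholds: results that land between them return "Needs more data", matching the inconclusive verdict in the Part 5 evidence log.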
Part 5: Evidence Log
Record what you learned from each test.
| # | Assumption | Test Run | Result | Evidence Summary | Verdict | Action |
|---|---|---|---|---|---|---|
| 1 | [Assumption statement] | [Date] | Pass / Fail / Inconclusive | [Key findings in 1-2 sentences] | Validated / Invalidated / Needs more data | [Next step] |
| 2 | | | | | | |
| 3 | | | | | | |
Part 6: Decision Update
After testing, update your plan.
| Field | Details |
|---|---|
| Original Decision | [What you planned to do] |
| Validated Assumptions | [List assumptions confirmed by evidence] |
| Invalidated Assumptions | [List assumptions disproved by evidence] |
| Revised Decision | [What you will do now, given the evidence] |
| Remaining Risks | [Assumptions still untested or inconclusive] |
| Next Actions | [Specific next steps with owners and dates] |
Filled Example: Freemium Tier Launch
Context. A B2B project management SaaS (Series A, 800 paying customers) is considering launching a free tier to drive top-of-funnel growth. The PM wants to test the riskiest assumptions before committing engineering time.
Initiative Context (Example)
| Field | Details |
|---|---|
| Initiative | Launch a freemium tier with limited features |
| Owner | Sarah K. (PM, Growth) |
| Target Launch | Q3 2026 |
| Decision at Stake | Should we invest 2 engineering sprints building a free tier, or continue with the 14-day trial model? |
| Success Metric | 500 free signups in first 30 days, 8% conversion to paid within 60 days |
Top Assumptions Extracted (Example, Abbreviated)
| # | Assumption | Impact | Uncertainty | Risk Score |
|---|---|---|---|---|
| 1 | Free users will convert to paid at 8%+ within 60 days | 5 | 4 | 20 |
| 2 | Freemium will not cannibalize existing trial conversions | 4 | 4 | 16 |
| 3 | Support cost per free user will be < $2/month | 3 | 5 | 15 |
| 4 | Target users will find the product through organic search | 4 | 3 | 12 |
Experiment for Assumption #1 (Example)
| Field | Details |
|---|---|
| Hypothesis | We believe free users will convert at 8%+. We will know this is true if a simulated free experience (extended trial with limited features) converts at 8%+ within 60 days. |
| Test Method | Extend trial to 90 days for 200 new signups with feature limits matching the planned free tier. Track conversion behavior. |
| Sample Size | 200 trial users |
| Timeline | 90 days (30-day setup and signup accrual + 60-day conversion window) |
| Cost | 1 week engineering for feature gating, no marketing spend |
| Pass Criteria | 16+ of 200 users (8%) convert to paid within 60 days of signup |
| Fail Criteria | Fewer than 10 of 200 (5%) convert. Below 5% means freemium economics do not work. |
| Owner | Sarah K. (PM) + Dev (engineering toggle) |
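Before running a test like this, it is worth checking that the sample size can actually separate the pass and fail thresholds. A sketch of that sanity check using an exact binomial tail probability (the statistical framing is an addition here, not part of the original experiment design; the numbers come from the example above):

```python
import math

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

n = 200       # trial users in the experiment
pass_k = 16   # pass criterion: 16+ conversions (8%)

# If the true conversion rate were only 5% (the fail threshold),
# how often would the test still "pass" by chance?
false_pass = binom_tail(n, pass_k, 0.05)
print(f"Chance of a false pass at a true 5% rate: {false_pass:.1%}")

# If the true rate really is 8%, how often does the test pass?
true_pass = binom_tail(n, pass_k, 0.08)
print(f"Chance of passing at a true 8% rate: {true_pass:.1%}")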
Key Takeaways
- The most dangerous assumptions are the ones nobody thinks to question. Scan your PRD for words like "obviously," "clearly," and "of course." Those sentences usually contain untested beliefs.
- Use the Impact x Uncertainty matrix to focus your testing energy. Not every assumption needs a formal experiment. High-impact, high-uncertainty ones do.
- Design the cheapest test possible. A five-person customer interview round costs less than a week of engineering. A landing page smoke test costs less than a prototype. Start cheap.
- "Inconclusive" is a valid result. It means your test was not sharp enough, not that the assumption is safe. Redesign the test with clearer pass/fail criteria.
- Share your evidence log with stakeholders. It transforms "I think we should pivot" from an opinion into a data-driven recommendation. Use the RICE framework to reprioritize your backlog based on what you learned.
About This Template
Created by: Tim Adair
Last Updated: 3/5/2026
Version: 1.0.0
License: Free for personal and commercial use
