
Assumption Testing Template

Free assumption testing template for product teams. Map, prioritize, and test your riskiest product assumptions before committing development resources.

By Tim Adair • Last updated 2026-03-05

What This Template Is For

Every product decision rests on a stack of assumptions. Some are safe bets. Others are complete guesses dressed up as strategy. The dangerous ones sit in the middle: plausible enough that nobody questions them, untested enough that they could sink the entire initiative if they turn out to be wrong.

This template gives you a structured format for surfacing, categorizing, and testing the assumptions behind a product decision. It walks through four stages: assumption extraction (pulling implicit beliefs out of your PRD, roadmap, or pitch deck), risk scoring (which assumptions would cause the most damage if wrong), experiment design (the cheapest way to test each high-risk assumption), and evidence logging (what you learned and what it means for the product).

The approach draws from the Lean Startup methodology and pairs well with the broader Product Discovery Handbook, which covers how assumption testing fits into a full discovery cycle. If you need to run customer interviews as part of your tests, pair a customer interview template with this one.

When to Use This Template

  • Before kicking off a new product initiative. Extract the assumptions behind your roadmap item and test the riskiest ones before writing a single line of code.
  • When a stakeholder makes a confident claim without evidence. "Our users definitely want real-time collaboration" is an assumption, not a fact. This template helps you test it.
  • During quarterly planning. Score assumptions behind each proposed initiative to compare risk levels and decide where to invest in validation first.
  • When pivoting or changing direction. A pivot introduces a new set of assumptions. Map them before committing the team.
  • After a failed launch. Post-mortems often reveal untested assumptions. Use this template to build the habit of testing them in advance.

How to Use This Template

  1. Start with the decision. Write down the product decision or initiative you are evaluating. Be specific: "Launch a freemium tier" is better than "Growth strategy."
  2. Extract assumptions. Read through the PRD, roadmap item, or pitch deck and list every belief that must be true for the initiative to succeed. Aim for 10-20 assumptions.
  3. Score each assumption. Rate each on two dimensions: how critical it is to success (impact) and how little evidence you have for it (uncertainty). High-impact, high-uncertainty assumptions are your priority.
  4. Design experiments. For your top 3-5 assumptions, design the cheapest, fastest test that would produce a clear pass/fail signal.
  5. Run and log results. Execute the experiments, record the evidence, and update your plan based on what you learn.

The Template

Part 1: Initiative Context

| Field | Details |
| --- | --- |
| Initiative | [Name of the product initiative or feature] |
| Owner | [PM or team lead] |
| Target Launch | [Planned launch date or quarter] |
| Decision at Stake | [What decision are you trying to validate?] |
| Success Metric | [How will you measure if the initiative worked?] |

Part 2: Assumption Extraction

List every assumption that must be true for this initiative to succeed. Pull them from your PRD, pitch deck, business case, or team discussions.

Categories to scan:

  • User assumptions. Who is the target user? Do they have the problem you think they have? Do they care enough to switch from their current solution?
  • Problem assumptions. Is the problem frequent enough, painful enough, and urgent enough to justify a solution?
  • Solution assumptions. Will your proposed solution actually solve the problem? Can users figure out how to use it?
  • Business assumptions. Will users pay for this? Can you acquire them at a viable cost? Is the market large enough?
  • Technical assumptions. Can you build this within the timeline and budget? Are there dependencies or integration risks?
| # | Assumption | Category | Source |
| --- | --- | --- | --- |
| 1 | [e.g., "SMB marketing teams struggle with campaign coordination"] | User | [PRD section 2] |
| 2 | [e.g., "Users will pay $50/mo for this capability"] | Business | [Pricing model doc] |
| 3 | [e.g., "We can integrate with Salesforce in 4 weeks"] | Technical | [Engineering estimate] |
| 4 | | | |
| 5 | | | |
| 6 | | | |
| 7 | | | |
| 8 | | | |
| 9 | | | |
| 10 | | | |

Part 3: Assumption Prioritization Matrix

Score each assumption on two axes (1-5 scale):

  • Impact: If this assumption is wrong, how badly does it hurt the initiative? (5 = initiative fails entirely, 1 = minor inconvenience)
  • Uncertainty: How little evidence do we have? (5 = pure guess, 1 = well-established fact with data)
| # | Assumption (short) | Impact (1-5) | Uncertainty (1-5) | Risk Score (I x U) | Priority |
| --- | --- | --- | --- | --- | --- |
| 1 | | | | | |
| 2 | | | | | |
| 3 | | | | | |
| 4 | | | | | |
| 5 | | | | | |

Priority guide:

  • Risk Score 16-25: Test immediately. Do not proceed without evidence.
  • Risk Score 9-15: Test before committing significant resources.
  • Risk Score 1-8: Monitor but acceptable to proceed.
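The scoring and bucketing above can be sketched in a few lines of Python. This is a hypothetical illustration, not part of the template: the assumption names and scores below are invented, and the tier labels simply mirror the priority guide.

```python
def priority(risk_score: int) -> str:
    """Map a risk score (impact * uncertainty, each 1-5) to a priority tier."""
    if risk_score >= 16:
        return "Test immediately"
    if risk_score >= 9:
        return "Test before committing resources"
    return "Monitor"

# Illustrative assumptions as (statement, impact, uncertainty) tuples.
assumptions = [
    ("Users will pay $50/mo", 5, 4),
    ("Salesforce integration in 4 weeks", 3, 3),
    ("Organic search drives signups", 2, 2),
]

# Sort by risk score, highest first, so the riskiest assumptions surface on top.
scored = sorted(
    ((name, impact * uncertainty, priority(impact * uncertainty))
     for name, impact, uncertainty in assumptions),
    key=lambda row: row[1],
    reverse=True,
)

for name, score, tier in scored:
    print(f"{score:>2}  {tier:<35} {name}")
```

Running this prints the $50/mo pricing assumption first (risk score 20, "Test immediately"), which is the point of the matrix: the ordering, not the absolute numbers, tells you where to spend validation effort.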

Part 4: Experiment Design

For each high-priority assumption, design the cheapest test that produces a clear signal.

Assumption #___: [Statement]

| Field | Details |
| --- | --- |
| Hypothesis | We believe that [assumption]. We will know this is true if [measurable outcome]. |
| Test Method | [Interview / Survey / Prototype test / Landing page / Data analysis / Concierge / Smoke test] |
| Sample Size | [How many users or data points needed for a credible signal?] |
| Timeline | [How long will the test take?] |
| Cost | [Time, money, or resources required] |
| Pass Criteria | [Specific threshold, e.g., "4 of 5 interviewees describe this pain point unprompted"] |
| Fail Criteria | [What result would disprove the assumption?] |
| Owner | [Who runs this test?] |

(Copy this block for each assumption you are testing.)


Part 5: Evidence Log

Record what you learned from each test.

| # | Assumption | Test Run | Result | Evidence Summary | Verdict | Action |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | | [Date] | Pass / Fail / Inconclusive | [Key findings in 1-2 sentences] | Validated / Invalidated / Needs more data | [Next step] |
| 2 | | | | | | |
| 3 | | | | | | |

Part 6: Decision Update

After testing, update your plan.

| Field | Details |
| --- | --- |
| Original Decision | [What you planned to do] |
| Validated Assumptions | [List assumptions confirmed by evidence] |
| Invalidated Assumptions | [List assumptions disproved by evidence] |
| Revised Decision | [What you will do now, given the evidence] |
| Remaining Risks | [Assumptions still untested or inconclusive] |
| Next Actions | [Specific next steps with owners and dates] |

Filled Example: Freemium Tier Launch

Context. A B2B project management SaaS (Series A, 800 paying customers) is considering launching a free tier to drive top-of-funnel growth. The PM wants to test the riskiest assumptions before committing engineering time.

Initiative Context (Example)

| Field | Details |
| --- | --- |
| Initiative | Launch a freemium tier with limited features |
| Owner | Sarah K. (PM, Growth) |
| Target Launch | Q3 2026 |
| Decision at Stake | Should we invest 2 engineering sprints building a free tier, or continue with the 14-day trial model? |
| Success Metric | 500 free signups in first 30 days, 8% conversion to paid within 60 days |

Top Assumptions Extracted (Example, Abbreviated)

| # | Assumption | Impact | Uncertainty | Risk Score |
| --- | --- | --- | --- | --- |
| 1 | Free users will convert to paid at 8%+ within 60 days | 5 | 4 | 20 |
| 2 | Freemium will not cannibalize existing trial conversions | 4 | 4 | 16 |
| 3 | Support cost per free user will be < $2/month | 3 | 5 | 15 |
| 4 | Target users will find the product through organic search | 4 | 3 | 12 |

Experiment for Assumption #1 (Example)

| Field | Details |
| --- | --- |
| Hypothesis | We believe free users will convert at 8%+. We will know this is true if a simulated free experience (extended trial with limited features) converts at 8%+ within 60 days. |
| Test Method | Extend trial to 90 days for 200 new signups with feature limits matching the planned free tier. Track conversion behavior. |
| Sample Size | 200 trial users |
| Timeline | 90 days (60-day conversion window + 30-day setup) |
| Cost | 1 week engineering for feature gating, no marketing spend |
| Pass Criteria | 16+ of 200 users (8%) convert to paid within 60 days of signup |
| Fail Criteria | Fewer than 10 of 200 (5%) convert. Below 5% means freemium economics do not work. |
| Owner | Sarah K. (PM) + Dev (engineering toggle) |
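The pass/fail logic in this experiment can be made mechanical, which keeps the verdict honest when the results come in. The sketch below is a hypothetical illustration using the example's thresholds (8% pass, 5% fail, and everything in between inconclusive); the function name and rates are assumptions, not part of the template.

```python
def verdict(conversions: int, sample_size: int,
            pass_rate: float = 0.08, fail_rate: float = 0.05) -> str:
    """Classify a conversion test against pre-committed pass/fail thresholds.

    Results between fail_rate and pass_rate are deliberately labeled
    Inconclusive rather than rounded up to a Pass.
    """
    rate = conversions / sample_size
    if rate >= pass_rate:
        return "Pass"
    if rate < fail_rate:
        return "Fail"
    return "Inconclusive"

# The example's thresholds: 16+ of 200 passes, fewer than 10 of 200 fails.
print(verdict(16, 200))  # Pass
print(verdict(9, 200))   # Fail
print(verdict(12, 200))  # Inconclusive
```

Committing to these thresholds before the test runs is the whole value: a 6% result cannot be retroactively declared "close enough" because the middle band was defined as inconclusive up front.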

Key Takeaways

  • The most dangerous assumptions are the ones nobody thinks to question. Scan your PRD for words like "obviously," "clearly," and "of course." Those sentences usually contain untested beliefs.
  • Use the Impact x Uncertainty matrix to focus your testing energy. Not every assumption needs a formal experiment. High-impact, high-uncertainty ones do.
  • Design the cheapest test possible. A five-person customer interview round costs less than a week of engineering. A landing page smoke test costs less than a prototype. Start cheap.
  • "Inconclusive" is a valid result. It means your test was not sharp enough, not that the assumption is safe. Redesign the test with clearer pass/fail criteria.
  • Share your evidence log with stakeholders. It transforms "I think we should pivot" from an opinion into a data-driven recommendation. Use the RICE framework to reprioritize your backlog based on what you learned.

About This Template

Created by: Tim Adair

Last Updated: 2026-03-05

Version: 1.0.0

License: Free for personal and commercial use

Frequently Asked Questions

How many assumptions should I test before moving forward?
Focus on your top 3-5 highest-risk assumptions (Risk Score 16+). You do not need to test every assumption. Low-impact or well-evidenced assumptions can proceed without formal testing. The goal is to derisk the initiative enough that the remaining unknowns are manageable, not to eliminate all uncertainty.
What is the difference between assumption testing and A/B testing?
Assumption testing happens before you build. It validates whether the premise behind a feature is sound. A/B testing happens after you build. It optimizes the implementation of a feature that already exists. Assumption testing asks "Should we build this?" while A/B testing asks "Which version performs better?" Both are valuable, but assumption testing prevents wasted effort on features that should never have been built.
How do I test business model assumptions without building the product?
Use smoke tests and concierge experiments. A smoke test puts up a landing page or pricing page for a product that does not exist yet and measures intent signals (email signups, "Buy Now" clicks). A concierge experiment delivers the value manually to a small group of users to see if they find it useful enough to pay for. Both generate evidence about willingness to pay without writing production code.
What if my test invalidates a key assumption but stakeholders still want to proceed?
Present the evidence clearly and recommend an alternative path. "Our test showed 2% conversion versus our 8% target. Here are three options: redesign the free tier to be more compelling, invest in paid acquisition instead of freemium, or proceed with adjusted financial projections showing break-even at month 18 instead of month 6." Give them options, not ultimatums. If they choose to proceed despite the evidence, document it.
How does assumption testing fit into the broader product [discovery](/glossary/discovery-product-discovery) process?
Assumption testing typically happens after initial research (interviews, data analysis) and before solution design. You gather evidence about the problem space through interviews, form assumptions about the solution, test the riskiest ones, and then proceed to prototyping and usability testing with the validated assumptions as your foundation. The [Product Discovery Handbook](/discovery-guide) covers this full cycle.
