Template • Free • ⏱️ 20 minutes prep

Feature Validation Template

Free feature validation template for product teams. Test feature ideas with users before committing development resources using structured validation experiments.

By Tim Adair • Last updated 2026-03-05

What This Template Is For

Building the wrong feature is the most expensive mistake a product team can make. Not because of the engineering cost alone, but because of the opportunity cost: every sprint spent on a feature nobody uses is a sprint not spent on one that would have moved a metric.

This template provides a structured process for validating feature ideas before they enter development. It covers four stages: defining the feature hypothesis (what you believe and why), selecting the right validation method (from lightweight to high-fidelity), designing the experiment (what to measure and what constitutes a pass), and documenting the outcome (what you learned and what to do next).

The methods here range from a 30-minute fake-door test to a 2-week prototype study. Choose based on the risk level of the feature: high-stakes features (large engineering investment, irreversible changes, bet-the-company decisions) deserve rigorous validation. Low-stakes features (small UI tweaks, easily reversible changes) can use lighter methods.

This template pairs with the assumption testing template for broader initiative-level validation, and with the Product Discovery Handbook for the full discovery methodology. If the feature involves a new workflow, consider running a usability test on the prototype.

When to Use This Template

  • Before adding a feature to the sprint backlog. If the feature will take more than one sprint to build, validate it first.
  • When the team disagrees about a feature's value. Replace opinion-based debates with evidence. "Let's test it" is a better response than "I think users want this."
  • After receiving a customer request. One customer's request is an anecdote. Five customers describing the same problem is a pattern. This template helps you distinguish between the two.
  • During roadmap planning. Use validation results to stack-rank competing feature ideas by evidence strength, not by stakeholder volume.
  • When pivoting a feature's design. If user feedback on v1 of a feature was poor, validate the revised approach before rebuilding.

How to Use This Template

  1. Write the feature hypothesis. State what you believe, who it is for, and what outcome you expect.
  2. Choose a validation method. Match the method to the feature's risk level and the time you have.
  3. Design the experiment. Define pass/fail criteria, sample size, and timeline.
  4. Run the experiment and log results. Capture data, quotes, and observations.
  5. Decide: build, iterate, or kill. Use the evidence to make a clear recommendation.

The Template

Part 1: Feature Definition

| Field | Details |
| --- | --- |
| Feature Name | [Short, descriptive name] |
| One-Line Description | [What does this feature do?] |
| Target User | [Which user segment or persona?] |
| Problem It Solves | [What pain point or unmet need does this address?] |
| Evidence for the Problem | [Customer interviews, support tickets, analytics data, competitor analysis] |
| Estimated Build Effort | [T-shirt size: S/M/L/XL or sprint count] |
| Reversibility | [Easy to revert / Hard to revert / Irreversible] |
| Risk Level | [Low / Medium / High based on effort + reversibility] |
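
The risk rubric in the last row can be made mechanical so every proposal is scored the same way. A minimal sketch in Python (the scores and cutoffs are illustrative assumptions, not part of the template):

```python
# Illustrative rubric: combine build effort and reversibility into a risk level.
# Scores and cutoffs are assumptions for illustration; tune them to your team.

EFFORT_SCORE = {"S": 1, "M": 2, "L": 3, "XL": 4}
REVERSIBILITY_SCORE = {"easy to revert": 1, "hard to revert": 2, "irreversible": 3}

def risk_level(effort: str, reversibility: str) -> str:
    """Map T-shirt-size effort plus reversibility to Low / Medium / High."""
    score = EFFORT_SCORE[effort] + REVERSIBILITY_SCORE[reversibility]
    if score <= 3:
        return "Low"
    if score <= 5:
        return "Medium"
    return "High"

print(risk_level("L", "hard to revert"))  # -> "Medium" (score 5)
```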

Part 2: Feature Hypothesis

Write a testable hypothesis using this format:

> We believe that [feature description]
> for [target users]
> will result in [expected outcome or behavior change]
> because [reasoning based on evidence].
>
> We will know this is true when [measurable signal].

Example:

We believe that adding a bulk-action toolbar to the project list for power users will reduce the time to update 10+ project statuses by 70% because our analytics show that users with 15+ projects spend 4 minutes clicking through each project individually. We will know this is true when 6 of 8 test participants complete the bulk-update task in under 90 seconds.

Part 3: Validation Method Selection

Choose the method that matches your risk level and available time.

| Method | Best For | Time Required | Evidence Quality | Risk Level |
| --- | --- | --- | --- | --- |
| Fake Door Test | Testing demand for a feature before building it | 1-2 days setup, 1-2 weeks data collection | Moderate | Low-Medium |
| Painted Door (UI Mockup) | Testing whether users understand and want a capability | 2-3 days | Moderate | Low-Medium |
| Wizard of Oz | Testing the value of a feature by delivering it manually | 1-2 weeks | High | Medium-High |
| Prototype Usability Test | Testing whether users can use the feature successfully | 3-5 days | High | Medium-High |
| Concierge MVP | Testing end-to-end value by performing the service manually for real users | 2-4 weeks | Very High | High |
| Beta/Feature Flag | Testing with real users on real data | 2-4 weeks | Very High | High |

Selected Method: [Your choice]

Rationale: [Why this method fits the risk level and timeline]
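
To keep the table handy during planning, the same information can be encoded and filtered by the two inputs that drive the choice: risk level and available time. A minimal sketch (the day counts are rough conversions of the ranges above and are assumptions for illustration):

```python
# Illustrative encoding of the Part 3 table. Day counts approximate the
# table's ranges; treat them as assumptions, not fixed rules.

METHODS = [
    # (name, risk levels it suits, max days needed)
    ("Fake Door Test",           ("Low", "Medium"),  16),
    ("Painted Door (UI Mockup)", ("Low", "Medium"),   3),
    ("Wizard of Oz",             ("Medium", "High"), 14),
    ("Prototype Usability Test", ("Medium", "High"),  5),
    ("Concierge MVP",            ("High",),          28),
    ("Beta/Feature Flag",        ("High",),          28),
]

def candidates(risk: str, days_available: int) -> list[str]:
    """Methods that suit the risk level and fit the timeline."""
    return [name for name, suits, max_days in METHODS
            if risk in suits and max_days <= days_available]

print(candidates("Medium", 7))
# -> ['Painted Door (UI Mockup)', 'Prototype Usability Test']
```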


Part 4: Experiment Design

| Field | Details |
| --- | --- |
| Method | [From Part 3] |
| Participants | [Who and how many? Minimum 5 for qualitative, 100+ for quantitative.] |
| Recruitment | [How will you find participants? Existing users, panel, customer list?] |
| Stimulus | [What will participants see or interact with? Mockup, prototype, live feature?] |
| Task(s) | [What will participants try to do?] |
| Metrics | [What will you measure? Task completion rate, time-on-task, click-through rate, NPS?] |
| Pass Criteria | [Specific threshold, e.g., "70% task completion rate" or "200+ clicks on fake door in 1 week"] |
| Fail Criteria | [What result would kill the feature?] |
| Timeline | [Start date, end date, decision date] |
| Owner | [Who runs this experiment?] |
| Cost | [Participant incentives, tool costs, time investment] |

Part 5: Experiment Execution Log

Quantitative Data (if applicable)

| Metric | Target | Actual | Pass/Fail |
| --- | --- | --- | --- |
| [e.g., Click-through rate on fake door] | [5%+] | [%] | |
| [e.g., Task completion rate] | [70%+] | [%] | |
| [e.g., Time on task] | [< 90 sec] | [sec] | |
| [e.g., Error rate] | [< 20%] | [%] | |
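
One lightweight way to honor the "set criteria first" rule is to record each target with its comparison direction and compute pass/fail mechanically. A minimal sketch (metric names and numbers are hypothetical placeholders):

```python
# Illustrative check of actuals against pre-registered targets.
# All values are hypothetical; fill in your own from Part 4.

results = [
    # (metric, target, actual, higher_is_better)
    ("Click-through rate",   0.05, 0.062, True),
    ("Task completion rate", 0.70, 0.68,  True),
    ("Time on task (sec)",   90,   72,    False),
]

for metric, target, actual, higher_is_better in results:
    passed = actual >= target if higher_is_better else actual <= target
    print(f"{metric}: target {target}, actual {actual} -> {'Pass' if passed else 'Fail'}")
```

Note that the hypothetical 68% completion rate fails the 70% threshold; letting the comparison run mechanically is what prevents moving the goalposts after the fact.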

Qualitative Data (if applicable)

| Participant | Task Success | Key Quote | Key Observation |
| --- | --- | --- | --- |
| P1 | Yes / No / Partial | | |
| P2 | | | |
| P3 | | | |
| P4 | | | |
| P5 | | | |

Patterns Observed

| Pattern | # Participants | Significance |
| --- | --- | --- |
| [e.g., "Users looked for bulk select in the header, not the sidebar"] | /5 | [High / Medium / Low] |

Part 6: Verdict and Recommendation

| Field | Details |
| --- | --- |
| Overall Result | Pass / Fail / Inconclusive |
| Evidence Summary | [2-3 sentences summarizing what you learned] |
| Recommendation | Build as designed / Build with modifications / Do not build / Test further |
| Modifications (if applicable) | [What changes are needed based on what you learned?] |
| Remaining Risks | [What uncertainties remain even after this test?] |
| Next Step | [Specific action with owner and date] |

Filled Example: Bulk-Action Toolbar Validation

Context. A project management SaaS has heard from several customers that managing large numbers of projects is tedious. The PM proposes adding a bulk-action toolbar. Before building it (estimated 2 sprints), they validate with a prototype usability test.

Feature Hypothesis (Example)

We believe that adding a bulk-action toolbar to the project list for power users (15+ active projects) will reduce the time to update multiple project statuses by 70% because our analytics show these users spend 4+ minutes clicking through projects individually. We will know this is true when 6 of 8 test participants complete a 10-project bulk-update task in under 90 seconds.

Experiment Design (Example)

| Field | Details |
| --- | --- |
| Method | Prototype usability test (Figma interactive prototype) |
| Participants | 8 existing users with 15+ active projects |
| Task | "You need to mark these 10 projects as 'On Hold' and reassign them to your team lead. Show me how you would do that." |
| Metrics | Task completion rate (target: 75%+), time on task (target: < 90 sec), error rate (target: < 25%) |
| Pass Criteria | 6 of 8 participants complete the task in < 90 seconds |
| Fail Criteria | Fewer than 4 of 8 complete the task, or average time exceeds 3 minutes |

Results (Example)

| Metric | Target | Actual | Pass/Fail |
| --- | --- | --- | --- |
| Task completion rate | 75%+ | 87.5% (7/8) | Pass |
| Average time on task | < 90 sec | 72 seconds | Pass |
| Error rate | < 25% | 25% (2/8 initially selected wrong action) | Borderline |

Key finding: 7 of 8 participants completed the task successfully. Two participants initially looked for the bulk-select checkbox in the row hover state (like Gmail) rather than the header row. After finding the header checkbox, both completed the task quickly. The one failure was a participant who did not notice the toolbar appeared after selecting items.

Verdict (Example)

| Field | Details |
| --- | --- |
| Overall Result | Pass with modifications |
| Recommendation | Build with two design changes: (1) add row-level checkboxes on hover for discoverability; (2) add a subtle animation when the toolbar first appears to draw attention. |
| Remaining Risks | Untested on mobile. The prototype only covered desktop. Add mobile validation before launch if mobile usage is >10% for power users. |

Key Takeaways

  • Write the hypothesis before choosing the method. The hypothesis determines what you need to measure, which determines which method produces the right evidence. Working backwards from a method you like leads to weak experiments.
  • Set pass/fail criteria before running the test. Deciding what "success" looks like after seeing the data introduces confirmation bias. If your threshold was 70% and you got 68%, that is a fail. Do not move the goalposts.
  • Match the method to the risk. A one-sprint feature can be validated with a 2-day painted door test. A 6-sprint feature with irreversible architecture changes deserves a 2-week prototype study. Use the RICE framework to quantify the stakes.
  • Five participants is enough for qualitative usability validation. Nielsen Norman Group research consistently shows that 5 users surface ~85% of usability problems (see the quick calculation after this list). For quantitative tests (conversion rates, click-through rates), you need 100+ data points for statistical significance.
  • "Inconclusive" means your experiment was not sharp enough. It does not mean the feature is safe to build. Redesign the test with clearer tasks, better pass/fail criteria, or a different method.
  • Document every validation, including the ones that kill features. A validated "do not build" decision saves the team from revisiting the same bad idea six months later. Store findings in your research repository.
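
The ~85% figure behind the five-participant rule comes from Nielsen's problem-discovery model, P(found) = 1 - (1 - L)^n, where L ≈ 0.31 is the average proportion of usability problems a single user surfaces. A quick calculation of the curve:

```python
# Nielsen's problem-discovery model: P(found) = 1 - (1 - L)^n.
# L = 0.31 is the average per-user discovery rate reported by
# Nielsen Norman Group; your product's L may differ.

L = 0.31

for n in (1, 3, 5, 8, 15):
    found = 1 - (1 - L) ** n
    print(f"{n:>2} users -> ~{found:.0%} of problems surfaced")
# 5 users -> ~84%, the basis of the ~85% figure cited above
```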

About This Template

Created by: Tim Adair

Last Updated: 2026-03-05

Version: 1.0.0

License: Free for personal and commercial use

Frequently Asked Questions

What is a fake door test and when should I use it?
A fake door test places a button, link, or menu item for a feature that does not exist yet. When users click it, they see a message like "This feature is coming soon. Sign up to be notified." You measure click-through rate to gauge demand. Use it when you want to test whether users want a capability before investing in design or engineering. It works best for features that are easy to describe with a label (e.g., "Export to PDF," "AI Assistant"). It does not test usability or value, only initial interest.
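
Mechanically, the pass check is simple once the threshold is fixed up front. A minimal sketch (the counts and the 5% threshold are hypothetical):

```python
# Illustrative fake-door readout. Numbers are hypothetical; set the
# threshold in Part 4 before the test goes live.

impressions = 4_200  # users who saw the fake-door button
clicks = 260         # users who clicked it

ctr = clicks / impressions
threshold = 0.05     # pre-registered pass criterion

print(f"CTR = {ctr:.1%} -> {'Pass' if ctr >= threshold else 'Fail'}")
# CTR = 6.2% -> Pass
```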
How do I validate a feature when I cannot build a prototype?
Use a Wizard of Oz approach. Present the user with a realistic interface (even a static mockup), and deliver the result manually behind the scenes. A classic example: a PM testing an "AI-powered report generator" shows users a button, manually creates the report based on their data, and delivers it as if the AI generated it. The user experiences the value. You learn whether the output is useful before building the automation. It is time-intensive per participant but produces high-quality evidence about feature value.
What if stakeholders want to skip validation and "just build it"?
Reframe validation as risk management, not as a delay. "This feature will take 3 sprints to build. A 3-day validation experiment either confirms we are building the right thing (and we proceed with confidence) or saves us 3 sprints of wasted effort. The expected value of testing is positive either way." If they still insist, document the skip and set a post-launch measurement plan so you can evaluate the feature's actual impact. Sometimes the political cost of insisting on validation exceeds the benefit. Pick your battles.
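
The expected-value argument can be made concrete with three numbers. A rough sketch (all values are hypothetical assumptions):

```python
# Illustrative expected-value framing for the "just build it" conversation.
# Every number here is a hypothetical assumption; plug in your own.

build_cost = 3.0   # sprints to build the feature
test_cost = 0.3    # a 3-day test, in sprints (assuming 10 working days per sprint)
p_wrong = 0.4      # your prior that the feature misses the mark

expected_savings = p_wrong * build_cost - test_cost
break_even = test_cost / build_cost

print(f"Expected sprints saved by testing: {expected_savings:.1f}")
print(f"Testing pays off whenever P(wrong) > {break_even:.0%}")
# Expected sprints saved by testing: 0.9
# Testing pays off whenever P(wrong) > 10%
```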
How many features should I validate before each sprint?
Not every feature needs formal validation. Apply the risk filter from Part 1. Features that are small (< 1 sprint), easily reversible, and backed by strong existing evidence (analytics, multiple customer interviews) can skip formal validation. Features that are large (2+ sprints), hard to reverse, or based on assumptions should be validated. A practical rule: validate the top 1-2 features in your backlog per quarter, not every line item.
How does feature validation relate to the broader [discovery](/glossary/discovery-product-discovery) process?
Feature validation sits between problem discovery and solution delivery. First, you identify and understand the problem through research ([customer interviews](/templates/customer-interview-template), data analysis, ethnographic observation). Then you generate feature ideas. Feature validation tests whether your proposed solution actually addresses the problem effectively before you commit engineering resources. The [Product Discovery Handbook](/discovery-guide) covers this full sequence.
