Template · Free · ⏱️ 2-3 hours (setup); 1-2 hours per post-launch analysis
Impact Analysis Template for Product Analytics
A feature impact analysis template for product teams. Covers metric baseline capture, pre/post measurement, statistical significance testing, segment breakdowns, and guardrail metrics.
Updated 2026-03-05
Impact Analysis
| # | Dimension | Option A | Option B | Option C | Weight | Notes |
|---|---|---|---|---|---|---|
| 1 | | | | | | |
| 2 | | | | | | |
| 3 | | | | | | |
| 4 | | | | | | |
| 5 | | | | | | |
Get this template
Choose your preferred format. Google Sheets and Notion are free, no account needed.
Frequently Asked Questions
When should I use A/B testing vs. pre/post analysis?
Use A/B testing when you can randomly assign users to treatment and control groups simultaneously. This is the gold standard because it controls for external factors (seasonality, marketing campaigns) that affect both groups equally. Use pre/post analysis when A/B testing is not possible: when the change affects all users at once (infrastructure changes, pricing changes), when the user base is too small to split, or when the feature interacts with network effects (collaboration features). Pre/post analysis is weaker because any change during the measurement period (not just your feature) could explain the results.
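To see what the A/B comparison reduces to in practice, here is a minimal Python sketch using statsmodels; the conversion counts are made-up placeholders, not figures from this template.

```python
# Minimal sketch: two-proportion z-test comparing conversion between a
# treatment and a control group. All counts below are illustrative.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 372]    # converted users: [treatment, control]
exposed = [5100, 5080]      # users exposed:   [treatment, control]

z_stat, p_value = proportions_ztest(count=conversions, nobs=exposed)
lift = conversions[0] / exposed[0] - conversions[1] / exposed[1]
print(f"absolute lift: {lift:.2%}, p-value: {p_value:.4f}")

# A pre/post analysis could reuse the same test on before/after windows,
# but without randomization the result is weaker evidence, as noted above.
```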
How long should I run an impact analysis?
Run it long enough to reach your minimum detectable effect (MDE) sample size AND to capture at least one full usage cycle. For weekly-use products, run at least 2 full weeks. For monthly-use products, run at least 6 weeks. For activation metrics, you need enough new users to reach sample size. Use an A/B test sample size calculator (available in most experimentation platforms) to determine the exact duration based on your baseline metric, expected effect size, and traffic volume.
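As a rough stand-in for such a calculator, here is a Python sketch using statsmodels; the baseline rate, relative MDE, and traffic figures are placeholder assumptions you would replace with your own numbers.

```python
# Minimal sketch of an A/B sample size estimate: users per variant needed to
# detect a 10% relative lift on a 12% baseline at 80% power, alpha = 0.05.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline = 0.12                      # current conversion rate (placeholder)
target = baseline * 1.10             # baseline plus the relative MDE

effect = proportion_effectsize(target, baseline)
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)

daily_traffic_per_variant = 900      # eligible users per day (placeholder)
days = n_per_variant / daily_traffic_per_variant
print(f"~{n_per_variant:,.0f} users per variant, ~{days:.0f} days at current traffic")
```

Round the resulting duration up to whole usage cycles (at least 2 full weeks for weekly-use products) rather than stopping the moment the sample size is reached.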
What if the result is not statistically significant?
A non-significant result does not mean "no effect." It means "we cannot distinguish the effect from noise with the data we have." Three options: (1) Run longer to accumulate more data. (2) Accept the result as "no detectable effect at this sample size" and make a judgment call. (3) Look at segment breakdowns. The aggregate effect may be non-significant, but a specific segment (e.g., free plan users) may show a significant effect that is diluted by other segments. The [correlation analysis template](/templates/correlation-analysis-template) can help identify which segments to examine.
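One way to run that segment breakdown, sketched in Python with statsmodels and a small synthetic dataset; in practice you would load your own per-user results with segment, variant, and conversion columns.

```python
# Minimal sketch of a segment breakdown: re-run the significance test within
# each segment. The DataFrame below is synthetic placeholder data.
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

df = pd.DataFrame({
    "segment": ["free"] * 400 + ["paid"] * 400,
    "variant": (["treatment"] * 200 + ["control"] * 200) * 2,
    "converted": [1] * 60 + [0] * 140 + [1] * 40 + [0] * 160    # free plan
                 + [1] * 55 + [0] * 145 + [1] * 50 + [0] * 150, # paid plan
})

for segment, grp in df.groupby("segment"):
    counts = grp.groupby("variant")["converted"].agg(["sum", "count"])
    _, p = proportions_ztest(
        count=counts.loc[["treatment", "control"], "sum"],
        nobs=counts.loc[["treatment", "control"], "count"],
    )
    print(f"{segment}: p = {p:.3f}")

# Caveat: the more segments you test, the more likely one looks significant
# by chance, so treat segment-level wins as hypotheses to confirm.
```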
Should I measure guardrail metrics for every feature launch?
Yes. Guardrail metrics protect against unintended negative consequences. At minimum, track page load time (performance), error rate (stability), and support ticket volume (usability). For revenue-impacting features, add conversion rate and revenue per user. Guardrail metrics do not need to improve. They need to not get worse. If a guardrail metric degrades significantly, investigate before expanding the rollout.
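For illustration, a minimal guardrail check might look like the Python sketch below; the metric names, baseline values, and 5% tolerance are assumptions to adapt, not values from the template.

```python
# Minimal sketch of a guardrail check: flag guardrail metrics that degraded
# beyond an agreed tolerance versus their pre-launch baseline. For these
# metrics, higher is worse; all numbers are illustrative placeholders.
baseline = {"p75_load_time_ms": 820, "error_rate": 0.004, "tickets_per_1k_users": 3.1}
current = {"p75_load_time_ms": 905, "error_rate": 0.004, "tickets_per_1k_users": 3.0}
tolerance = 0.05   # allow up to 5% relative degradation before investigating

for metric, base in baseline.items():
    change = (current[metric] - base) / base
    status = "INVESTIGATE" if change > tolerance else "ok"
    print(f"{metric}: {change:+.1%} vs baseline -> {status}")
```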
How do I handle features that take months to show impact?
Some features (better search, improved permissions, API enhancements) take months to show up in top-line metrics. For slow-burn features, use leading indicators in the short term (adoption rate, task completion time, error reduction) and track the lagging metric (retention, NPS, expansion revenue) over a longer horizon. Document the expected timeline in your pre-launch prediction and set calendar reminders for the follow-up measurement.
Explore More Templates
Browse our full library of PM templates, or generate a custom version with AI.