What This Template Is For
A well-designed survey generates insights you can act on. A poorly designed survey generates data that looks useful but misleads. The difference is almost entirely in the design: question wording, response options, survey length, sampling method, and analysis plan.
Most product surveys fail in predictable ways. They ask leading questions ("How much do you love our new feature?"), include too many questions (response quality drops after question 8-10), sample only active users (survivorship bias), or collect data with no plan for how to analyze it. This template helps you avoid these traps by forcing you to define the research question, plan the analysis, and structure the survey before writing a single question.
This template is part of a broader research toolkit. For ongoing satisfaction measurement, see the NPS program template. The Product Discovery Handbook covers when to use surveys versus interviews versus behavioral data. The glossary entry on surveys explains different survey methodologies. For calculating NPS from survey responses, the NPS Calculator provides instant scoring.
How to Use This Template
- Start with the research question. What decision will this survey inform? If you cannot name a specific decision, you are not ready to survey.
- Define the target population and sampling method. Who should receive this survey, and how will you reach them?
- Write questions in the order specified: screening questions first, core questions in the middle, demographic questions last.
- For each question, define the analysis plan before fielding the survey. If you do not know how you will analyze a question's responses, remove it.
- Pilot the survey with 5-10 people from the target population. Watch for confusion, fatigue, and unexpected interpretations.
- Field the survey for a defined period (typically 1-2 weeks) and monitor the response rate daily.
- Analyze results using the pre-defined analysis plan. Report findings with confidence intervals, not just point estimates.
The Template
Survey Overview
| Field | Details |
|---|---|
| Survey Name | [e.g., "Post-Onboarding Experience Survey"] |
| Research Question | [The specific question this survey answers, e.g., "Which onboarding steps cause the most confusion?"] |
| Decision It Informs | [e.g., "Prioritize onboarding redesign backlog for Q2"] |
| Owner | [PM or researcher name] |
| Target Population | [e.g., "Users who completed signup in the past 30 days"] |
| Sample Size Target | [N responses needed for your target confidence level and margin of error] |
| Fielding Period | [Start date - End date] |
| Survey Tool | [Typeform / Google Forms / SurveyMonkey / In-app] |
Sampling Strategy
| Field | Details |
|---|---|
| Population size | [Total users matching criteria] |
| Sampling method | [Random / Stratified / Census / Convenience] |
| Sample frame | [How you will identify and reach the sample, e.g., "Email all users who signed up Feb 1-28"] |
| Strata (if stratified) | [e.g., "50% free users, 50% paid users to ensure representation"] |
| Expected response rate | [Typically 10-30% for email surveys, 5-15% for in-app] |
| Minimum sample size | [For 95% confidence, ±5% margin: typically 385 for large populations] |
| Incentive | [None / Gift card raffle / Account credit / Feature preview] |
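The "Minimum sample size" figure above comes from the standard formula for estimating a proportion. A minimal sketch in plain Python, assuming the 95% confidence / ±5% margin values from the table; the ~2,200-user population matches the filled example later in this template, and the 15% response rate is an illustrative assumption from the expected-response-rate row:

```python
import math

def min_sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Sample size needed to estimate a proportion.

    z=1.96 corresponds to 95% confidence; p=0.5 is the most
    conservative assumption (it maximizes the variance p * (1 - p)).
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size (~385)
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

needed = min_sample_size(2200)            # eligible users, as in the example
invites = math.ceil(needed / 0.15)        # assumed 15% email response rate
print(needed, invites)
```

For very large populations the correction barely matters and the result approaches the 385 in the table; for a population of ~2,200 it drops the requirement to roughly 330 responses, which at a 15% response rate means emailing nearly the whole population.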
Survey Structure
Rules for question order:
- Screening questions (1-2 questions): Confirm the respondent matches the target population
- Warm-up questions (1-2 questions): Easy, non-sensitive questions to build engagement
- Core questions (4-8 questions): The substantive questions that answer the research question
- Open-ended questions (1-2 questions): Qualitative depth on key topics
- Demographic questions (1-3 questions): Used for segmentation analysis only
Total question count target: 8-12 questions. Target completion time: 3-5 minutes.
Question Design
| # | Question Text | Type | Options | Required | Analysis Plan |
|---|---|---|---|---|---|
| 1 | [Screening: e.g., "When did you sign up for [Product]?"] | [Single choice] | [Date ranges] | [Yes] | [Filter: exclude respondents outside target period] |
| 2 | [Warm-up: e.g., "How often do you use [Product]?"] | [Single choice] | [Daily / Weekly / Monthly / Rarely] | [Yes] | [Segment: compare responses by usage frequency] |
| 3 | [Core: e.g., "Rate your experience with each onboarding step"] | [Matrix: Likert 5-point] | [Very Difficult to Very Easy] | [Yes] | [Mean score per step; rank steps by difficulty] |
| 4 | [Core: e.g., "Which step was most confusing?"] | [Single choice + Other] | [List of steps] | [Yes] | [Frequency count; cross-tab with Q3] |
| 5 | [Core: e.g., "How confident do you feel using [Product] after onboarding?"] | [Likert 5-point] | [Not at all to Extremely] | [Yes] | [Distribution analysis; compare by user segment] |
| 6 | [Open-ended: e.g., "What would have made your onboarding experience better?"] | [Free text] | [N/A] | [No] | [Thematic coding: group responses into 5-8 themes] |
| 7 | [Demographic: e.g., "What is your role?"] | [Single choice + Other] | [PM, Designer, Engineer, Exec, Other] | [No] | [Segment analysis: do role groups differ on Q3-Q5?] |
Bias Checklist
Before fielding, check for these common survey biases:
- ☐ Leading questions. Does any question suggest a "correct" answer? Replace "How much did our new onboarding help you?" with "How would you describe your onboarding experience?"
- ☐ Double-barreled questions. Does any question ask two things at once? "How easy and enjoyable was onboarding?" should be split into two questions.
- ☐ Order bias. Could the order of response options influence choices? Randomize option order for non-sequential choices.
- ☐ Acquiescence bias. Are there too many agree/disagree questions in a row? Mix in negatively worded items or use specific behavioral questions instead.
- ☐ Survivorship bias. Are you only surveying active users? If you want to understand churn, you need to reach churned users too.
- ☐ Social desirability bias. Could respondents feel pressure to answer in a socially acceptable way? Use anonymous surveys for sensitive topics.
- ☐ Recall bias. Are you asking about experiences that happened too long ago? Survey within 7 days of the experience for accurate recall.
Analysis Plan
Define how you will analyze each question before fielding. This prevents post-hoc data mining.
| Question | Analysis Method | Comparison Groups | Success Metric |
|---|---|---|---|
| Q3 (Likert matrix) | Mean score per step | Free vs. Paid users | Identify steps with mean < 3.0 |
| Q4 (Single choice) | Frequency distribution | By role | Top 2 most-selected steps |
| Q5 (Likert) | Distribution + mean | By usage frequency | Mean > 3.5 for "confident" |
| Q6 (Open-ended) | Thematic coding (2 coders) | N/A | Top 5 themes by frequency |
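The per-question methods in the table above are simple enough to run in a few lines of plain Python. As a sketch with hypothetical response data, this computes the Q3 mean score per onboarding step and flags any step below the 3.0 threshold from the success-metric column:

```python
# Hypothetical Q3 responses: one dict per respondent,
# Likert scores 1 (Very Difficult) .. 5 (Very Easy).
responses = [
    {"Create account": 5, "Set up profile": 4, "First project": 3, "Invite teammates": 2},
    {"Create account": 4, "Set up profile": 4, "First project": 2, "Invite teammates": 1},
    {"Create account": 5, "Set up profile": 3, "First project": 4, "Invite teammates": 3},
]

steps = responses[0].keys()
means = {step: sum(r[step] for r in responses) / len(responses) for step in steps}

# Rank steps from hardest to easiest and flag those under the 3.0 threshold.
for step, mean in sorted(means.items(), key=lambda kv: kv[1]):
    flag = "  <- below 3.0, prioritize" if mean < 3.0 else ""
    print(f"{step}: {mean:.2f}{flag}")
```

The same structure extends to the segment comparisons: filter `responses` by a Q2 or Q7 answer before computing the means, then compare the two dictionaries.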
Reporting Template
| Section | Content |
|---|---|
| Executive summary | 3-5 bullet points: key findings and recommended actions |
| Methodology | Sample size, response rate, fielding dates, confidence interval |
| Key findings | Charts and tables for each core question with narrative |
| Segment differences | Notable differences by role, plan type, or usage frequency |
| Verbatim highlights | 5-10 representative open-ended quotes |
| Recommendations | 2-4 specific actions with expected impact |
| Limitations | Response bias, sample size issues, or generalizability caveats |
Filled Example: Post-Onboarding Satisfaction Survey
Survey Overview
| Field | Details |
|---|---|
| Survey Name | TaskFlow Post-Onboarding Experience Survey |
| Research Question | Which onboarding steps cause the most confusion for new users? |
| Decision It Informs | Prioritize onboarding redesign backlog for Q2 2026 |
| Owner | Maria Chen, Senior PM |
| Target Population | Users who completed signup between Feb 1-28, 2026 |
| Sample Size Target | 200 responses (from ~2,200 eligible users) |
| Fielding Period | March 3-14, 2026 |
| Survey Tool | Typeform (email distribution) |
Questions
Q1 (Screening). "When did you sign up for TaskFlow?"
- Before February 2026 [screen out]
- February 2026
- I don't remember [screen out]
Q2 (Warm-up). "How often do you currently use TaskFlow?"
- Daily
- A few times a week
- Weekly
- Less than weekly
- I stopped using it
Q3 (Core - Matrix). "How would you rate your experience with each onboarding step?"
Scale: Very Difficult (1) - Difficult (2) - Neutral (3) - Easy (4) - Very Easy (5)
- Creating your account
- Setting up your profile
- Creating your first project
- Inviting teammates
Q4 (Core). "Which onboarding step was most confusing?"
- Creating your account
- Setting up your profile
- Creating your first project
- Inviting teammates
- None were confusing
- Other (please specify)
Q5 (Core). "After completing onboarding, how confident did you feel using TaskFlow on your own?"
Scale: Not at all confident (1) - Slightly (2) - Moderately (3) - Very (4) - Extremely (5)
Q6 (Core). "Did you skip any onboarding steps?"
- Yes (which ones? [checkboxes])
- No
Q7 (Open-ended). "What one thing would have made your onboarding experience better?"
Q8 (Demographic). "What best describes your role?"
- Product Manager
- Designer
- Engineer
- Executive / Founder
- Operations
- Other
Results Summary (after fielding)
- Response rate: 11.2% (247 responses from 2,204 emails sent)
- Most confusing step: "Inviting teammates" (selected by 41% of respondents)
- Average confidence after onboarding: 3.2 / 5.0 (moderately confident)
- Skip rate: 38% of respondents skipped at least one step. "Invite teammates" was skipped by 29%.
- Top open-ended theme: "I wanted to explore the product first before being forced to invite people" (67 mentions)
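Per the reporting guidance earlier in this template, a headline number like the 41% "Inviting teammates" figure should be reported with a confidence interval, not as a bare point estimate. A sketch using the normal approximation and the 247-response count from this example:

```python
import math

def proportion_ci(p, n, z=1.96):
    """95% confidence interval for a proportion (normal approximation)."""
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

low, high = proportion_ci(0.41, 247)
print(f"41% of 247 respondents -> 95% CI: {low:.1%} to {high:.1%}")
# -> 95% CI: 34.9% to 47.1%
```

So the finding would be reported as "41% (95% CI: 35%-47%) of respondents found inviting teammates the most confusing step."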
Key Takeaways
- Define the research question and decision before writing any survey questions
- Keep surveys to 8-12 questions and 3-5 minutes completion time
- Write the analysis plan before fielding. If you do not know how to analyze a question, remove it
- Check for common biases: leading questions, survivorship bias, double-barreled questions
- Report findings with confidence intervals and segment breakdowns, not just overall averages
About This Template
Created by: Tim Adair
Last Updated: March 4, 2026
Version: 1.0.0
License: Free for personal and commercial use
