Template • FREE • ⏱️ 1-2 hours (design); 1-2 weeks (fielding); 2-4 hours (analysis)

Survey Design Template

A customer survey design template for product managers. Covers question types, survey structure, sampling strategy, bias avoidance, and analysis plan with a filled example for a post-onboarding satisfaction survey.

By Tim Adair • Last updated 2026-03-04

What This Template Is For

A well-designed survey generates insights you can act on. A poorly designed survey generates data that looks useful but misleads. The difference is almost entirely in the design: question wording, response options, survey length, sampling method, and analysis plan.

Most product surveys fail in predictable ways. They ask leading questions ("How much do you love our new feature?"), include too many questions (response quality drops after question 8-10), sample only active users (survivorship bias), or collect data with no plan for how to analyze it. This template helps you avoid these traps by forcing you to define the research question, plan the analysis, and structure the survey before writing a single question.

This template is part of a broader research toolkit. For ongoing satisfaction measurement, see the NPS program template. The Product Discovery Handbook covers when to use surveys versus interviews versus behavioral data. The glossary entry on surveys explains different survey methodologies. For calculating NPS from survey responses, the NPS Calculator provides instant scoring.


How to Use This Template

  1. Start with the research question. What decision will this survey inform? If you cannot name a specific decision, you are not ready to survey.
  2. Define the target population and sampling method. Who should receive this survey, and how will you reach them?
  3. Write questions in the order specified: screening questions first, core questions in the middle, demographic questions last.
  4. For each question, define the analysis plan before fielding the survey. If you do not know how you will analyze a question's responses, remove it.
  5. Pilot the survey with 5-10 people from the target population. Watch for confusion, fatigue, and unexpected interpretations.
  6. Field the survey for a defined period (typically 1-2 weeks) and monitor the response rate daily.
  7. Analyze results using the pre-defined analysis plan. Report findings with confidence intervals, not just point estimates.

The Template

Survey Overview

| Field | Details |
| --- | --- |
| Survey Name | [e.g., "Post-Onboarding Experience Survey"] |
| Research Question | [The specific question this survey answers, e.g., "Which onboarding steps cause the most confusion?"] |
| Decision It Informs | [e.g., "Prioritize onboarding redesign backlog for Q2"] |
| Owner | [PM or researcher name] |
| Target Population | [e.g., "Users who completed signup in the past 30 days"] |
| Sample Size Target | [N responses needed for statistical significance] |
| Fielding Period | [Start date - End date] |
| Survey Tool | [Typeform / Google Forms / SurveyMonkey / In-app] |

Sampling Strategy

| Field | Details |
| --- | --- |
| Population size | [Total users matching criteria] |
| Sampling method | [Random / Stratified / Census / Convenience] |
| Sample frame | [How you will identify and reach the sample, e.g., "Email all users who signed up Feb 1-28"] |
| Strata (if stratified) | [e.g., "50% free users, 50% paid users to ensure representation"] |
| Expected response rate | [Typically 10-30% for email surveys, 5-15% for in-app] |
| Minimum sample size | [For 95% confidence, +/- 5% margin: typically 385 for large populations (see the sketch below)] |
| Incentive | [None / Gift card raffle / Account credit / Feature preview] |
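The 385 figure in the minimum-sample-size row comes from the standard formula for estimating a proportion, n = z^2 * p(1 - p) / e^2, with p = 0.5 as the worst case. A minimal Python sketch, including the finite population correction that matters for smaller user bases:

```python
import math

def sample_size(population: int, margin: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Minimum responses for +/- `margin` at the confidence level implied by `z`."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population estimate (~385)
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # finite population correction

print(sample_size(1_000_000))  # 385, the "large population" figure above
print(sample_size(2_200))      # ~328 for the filled example's eligible users
```

With the filled example's ~2,200 eligible users, +/- 5% needs roughly 330 responses; its 200-response target corresponds to a margin closer to +/- 6.6%.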

Survey Structure

Rules for question order:

  1. Screening questions (1-2 questions): Confirm the respondent matches the target population
  2. Warm-up questions (1-2 questions): Easy, non-sensitive questions to build engagement
  3. Core questions (4-8 questions): The substantive questions that answer the research question
  4. Open-ended questions (1-2 questions): Qualitative depth on key topics
  5. Demographic questions (1-3 questions): Used for segmentation analysis only

Total question count target: 8-12 questions. Median completion time: 3-5 minutes.


Question Design

| # | Question Text | Type | Options | Required | Analysis Plan |
| --- | --- | --- | --- | --- | --- |
| 1 | [Screening: e.g., "When did you sign up for [Product]?"] | [Single choice] | [Date ranges] | [Yes] | [Filter: exclude respondents outside target period] |
| 2 | [Warm-up: e.g., "How often do you use [Product]?"] | [Single choice] | [Daily / Weekly / Monthly / Rarely] | [Yes] | [Segment: compare responses by usage frequency] |
| 3 | [Core: e.g., "Rate your experience with each onboarding step"] | [Matrix: Likert 5-point] | [Very Difficult to Very Easy] | [Yes] | [Mean score per step; rank steps by difficulty] |
| 4 | [Core: e.g., "Which step was most confusing?"] | [Single choice + Other] | [List of steps] | [Yes] | [Frequency count; cross-tab with Q3] |
| 5 | [Core: e.g., "How confident do you feel using [Product] after onboarding?"] | [Likert 5-point] | [Not at all to Extremely] | [Yes] | [Distribution analysis; compare by user segment] |
| 6 | [Open-ended: e.g., "What would have made your onboarding experience better?"] | [Free text] | [N/A] | [No] | [Thematic coding: group responses into 5-8 themes] |
| 7 | [Demographic: e.g., "What is your role?"] | [Single choice + Other] | [PM, Designer, Engineer, Exec, Other] | [No] | [Segment analysis: do role groups differ on Q3-Q5?] |

Bias Checklist

Before fielding, check for these common survey biases:

  • Leading questions. Does any question suggest a "correct" answer? Replace "How much did our new onboarding help you?" with "How would you describe your onboarding experience?"
  • Double-barreled questions. Does any question ask two things at once? "How easy and enjoyable was onboarding?" should be split into two questions.
  • Order bias. Could the order of response options influence choices? Randomize option order for non-sequential choices (see the sketch after this list).
  • Acquiescence bias. Are there too many agree/disagree questions in a row? Mix in negatively worded items or use specific behavioral questions instead.
  • Survivorship bias. Are you only surveying active users? If you want to understand churn, you need to reach churned users too.
  • Social desirability bias. Could respondents feel pressure to answer in a socially acceptable way? Use anonymous surveys for sensitive topics.
  • Recency bias. Are you asking about experiences that happened too long ago? Survey within 7 days of the experience for accurate recall.
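If your survey tool lacks a randomize-choices setting (many, including Typeform and SurveyMonkey, have one built in), a per-respondent shuffle is trivial. A minimal sketch using the filled example's options; anchor options like "None" and "Other" stay pinned last:

```python
import random

options = [
    "Creating your account",
    "Setting up your profile",
    "Creating your first project",
    "Inviting teammates",
]
anchored = ["None were confusing", "Other (please specify)"]  # never shuffled

random.shuffle(options)    # re-run per respondent
print(options + anchored)  # presentation order for this respondent
```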

Analysis Plan

Define how you will analyze each question before fielding. This prevents post-hoc data mining.

| Question | Analysis Method | Comparison Groups | Success Metric |
| --- | --- | --- | --- |
| Q3 (Likert matrix) | Mean score per step | Free vs. Paid users | Identify steps with mean < 3.0 |
| Q4 (Single choice) | Frequency distribution | By role | Top 2 most-selected steps |
| Q5 (Likert) | Distribution + mean | By usage frequency | Mean > 3.5 for "confident" |
| Q6 (Open-ended) | Thematic coding (2 coders) | N/A | Top 5 themes by frequency |
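If responses export to a flat file, the plan above is a few lines of pandas. A minimal sketch; the CSV name and column names are hypothetical placeholders for whatever your survey tool emits:

```python
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # one row per respondent

# Q3: mean Likert score per onboarding step, flagging steps below the 3.0 threshold
steps = ["q3_account", "q3_profile", "q3_project", "q3_invite"]
means = df[steps].mean().sort_values()
print(means[means < 3.0])

# Q4: frequency distribution of "most confusing step", cross-tabbed by role (Q8)
print(df["q4_most_confusing"].value_counts(normalize=True))
print(pd.crosstab(df["q4_most_confusing"], df["q8_role"], normalize="columns"))
```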

Reporting Template

| Section | Content |
| --- | --- |
| Executive summary | 3-5 bullet points: key findings and recommended actions |
| Methodology | Sample size, response rate, fielding dates, confidence interval |
| Key findings | Charts and tables for each core question with narrative |
| Segment differences | Notable differences by role, plan type, or usage frequency |
| Verbatim highlights | 5-10 representative open-ended quotes |
| Recommendations | 2-4 specific actions with expected impact |
| Limitations | Response bias, sample size issues, or generalizability caveats |

Filled Example: Post-Onboarding Satisfaction Survey

Survey Overview

| Field | Details |
| --- | --- |
| Survey Name | TaskFlow Post-Onboarding Experience Survey |
| Research Question | Which onboarding steps cause the most confusion for new users? |
| Decision It Informs | Prioritize onboarding redesign backlog for Q2 2026 |
| Owner | Maria Chen, Senior PM |
| Target Population | Users who completed signup between Feb 1-28, 2026 |
| Sample Size Target | 200 responses (from ~2,200 eligible users) |
| Fielding Period | March 3-14, 2026 |
| Survey Tool | Typeform (email distribution) |

Questions

Q1 (Screening). "When did you sign up for TaskFlow?"

  • Before February 2026 [screen out]
  • February 2026
  • I don't remember [screen out]

Q2 (Warm-up). "How often do you currently use TaskFlow?"

  • Daily
  • A few times a week
  • Weekly
  • Less than weekly
  • I stopped using it

Q3 (Core - Matrix). "How would you rate your experience with each onboarding step?"

Scale: Very Difficult (1) - Difficult (2) - Neutral (3) - Easy (4) - Very Easy (5)

  • Creating your account
  • Setting up your profile
  • Creating your first project
  • Inviting teammates

Q4 (Core). "Which onboarding step was most confusing?"

  • Creating your account
  • Setting up your profile
  • Creating your first project
  • Inviting teammates
  • None were confusing
  • Other (please specify)

Q5 (Core). "After completing onboarding, how confident did you feel using TaskFlow on your own?"

Scale: Not at all confident (1) - Slightly (2) - Moderately (3) - Very (4) - Extremely (5)

Q6 (Core). "Did you skip any onboarding steps?"

  • Yes (which ones? [checkboxes])
  • No

Q7 (Open-ended). "What one thing would have made your onboarding experience better?"

Q8 (Demographic). "What best describes your role?"

  • Product Manager
  • Designer
  • Engineer
  • Executive / Founder
  • Operations
  • Other

Results Summary (after fielding)

  • Response rate: 11.2% (247 responses from 2,204 emails sent)
  • Most confusing step: "Inviting teammates" (selected by 41% of respondents)
  • Average confidence after onboarding: 3.2 / 5.0 (moderately confident)
  • Skip rate: 38% of respondents skipped at least one step. "Invite teammates" was skipped by 29%.
  • Top open-ended theme: "I wanted to explore the product first before being forced to invite people" (67 mentions)
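The reporting template asks for confidence intervals, not just point estimates. A minimal sketch for the headline figure above, assuming a simple random sample and using the normal approximation:

```python
import math

n, p = 247, 0.41  # respondents; share who selected "Inviting teammates"
se = math.sqrt(p * (1 - p) / n)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"41% +/- {1.96 * se:.1%} -> [{lo:.1%}, {hi:.1%}]")  # roughly 35% to 47%
```

In the report, quote the interval ("35-47% of new users") rather than the bare 41%.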

Key Takeaways

  • Define the research question and decision before writing any survey questions
  • Keep surveys to 8-12 questions and 3-5 minutes completion time
  • Write the analysis plan before fielding. If you do not know how to analyze a question, remove it
  • Check for common biases: leading questions, survivorship bias, double-barreled questions
  • Report findings with confidence intervals and segment breakdowns, not just overall averages

About This Template

Created by: Tim Adair

Last Updated: 2026-03-04

Version: 1.0.0

License: Free for personal and commercial use

Frequently Asked Questions

How many responses do I need for reliable results?
For a standard product survey with 5-point Likert scales, 100-200 responses give you a margin of error of roughly 7-10% at 95% confidence (worst case, for a 50/50 proportion). For segmented analysis (comparing free vs. paid, mobile vs. desktop), you need 50+ responses per segment. If your population is small (e.g., 50 Enterprise accounts), survey everyone (census) and skip statistical significance testing.
Should I offer an incentive?
Incentives increase response rates but can attract low-quality responses from people who want the reward but do not care about the questions. For customer surveys, a raffle (e.g., 5x $50 gift cards among all respondents) works well. The incentive should be enough to motivate but not enough to attract professional survey takers. Avoid per-response payments.
How do I handle low response rates?
If your response rate is below 5%, check three things: email deliverability (are your emails landing in spam?), subject line (does it clearly state the purpose and time required?), and timing (did you send on a weekday morning?). Send one reminder 3-5 days after the initial invitation. Two reminders is the maximum before you annoy users. If the rate is still low, consider an in-app intercept survey instead.
When should I use a survey versus user interviews?
Surveys answer "how many" and "how much" questions: they quantify known issues across a population. Interviews answer "why" and "how" questions: they uncover unknown problems and motivations. The [Product Discovery Handbook](/discovery-guide) recommends using interviews first to identify themes, then surveys to validate the prevalence of those themes across your user base.
How do I analyze open-ended responses?
Thematic coding. Read all responses once to identify recurring themes (typically 5-10 themes emerge). Then re-read each response and tag it with 1-3 themes. Count the frequency of each theme. Have a second person independently code a sample of 30-50 responses and compare agreement (inter-rater reliability). If two coders disagree on more than 20% of responses, your theme definitions are too vague.
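A quick way to run that agreement check; the coder labels below are made up. For a statistic that corrects for chance agreement, Cohen's kappa (e.g., sklearn.metrics.cohen_kappa_score) is the usual choice:

```python
# Hypothetical theme labels from two coders, one per response, same sample order.
coder_a = ["invites", "speed", "invites", "docs", "speed"]
coder_b = ["invites", "speed", "docs", "docs", "speed"]

matches = sum(a == b for a, b in zip(coder_a, coder_b))
agreement = matches / len(coder_a)
print(f"Inter-rater agreement: {agreement:.0%}")  # 80%; below 80%, tighten theme definitions
```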
