What This Template Is For
In-app surveys give you feedback at the moment of experience rather than days later in an email. Response rates for in-app microsurveys (1-3 questions) run 15-30%, compared to 2-5% for email surveys. But poorly timed or poorly targeted surveys interrupt the user experience and train users to dismiss every prompt. The difference between useful signal and annoying noise is targeting, timing, and question design.
This template helps you plan in-app surveys that collect actionable data without degrading the user experience. It covers survey types, trigger logic, targeting rules, question design, sample rate management, and analysis frameworks. Use it alongside the Product Discovery Handbook to understand where in-app surveys fit in your broader research practice.
For post-calculation or post-action feedback specifically, see how the NPS Calculator structures scoring and follow-up questions.
How to Use This Template
- Start with Survey Strategy. Define what you need to learn and why an in-app survey is the right method.
- Choose your survey type and format.
- Design the trigger logic and targeting rules.
- Write and validate the questions.
- Set sample rates to control survey fatigue.
- Plan the analysis framework before launching.
The Template
Survey Strategy
- ☐ Define the research question you are trying to answer
- ☐ Confirm that an in-app survey is the right method (vs. interviews, analytics, email survey)
- ☐ Identify the user segment that can answer this question
- ☐ Define the sample size needed for statistical significance
- ☐ Set a launch date and end date (surveys should not run indefinitely)
| Field | Value |
|---|---|
| Research Question | [What do you need to learn?] |
| Survey Type | [NPS / CSAT / CES / Feature feedback / Churn reason] |
| Target Segment | [Who should see this?] |
| Sample Size Needed | [N responses for significance] |
| Run Period | [Start date] to [End date] |
| Owner | [PM name] |
Survey Types
- ☐ Select the survey type that matches your research question
- ☐ Define the response format (numeric scale, multiple choice, open text)
| Survey Type | When to Use | Format | Typical Questions |
|---|---|---|---|
| NPS | Measure overall loyalty quarterly | 0-10 scale + open text | "How likely to recommend?" + "Why?" |
| CSAT | After specific interactions | 1-5 stars or emoji scale | "How satisfied with [feature]?" |
| CES | After support or complex workflows | 1-7 scale | "How easy was it to [action]?" |
| Feature feedback | After feature use (first 3 times) | Thumbs up/down + open text | "Was this useful?" + "How to improve?" |
| Churn reason | On cancellation or downgrade | Multiple choice + open text | "Why are you leaving?" |
| PMF survey | Monthly to active users | Multiple choice | "How disappointed if this product disappeared?" |
Trigger Logic
- ☐ Define the exact event or condition that triggers the survey
- ☐ Set a delay after the trigger (do not interrupt mid-action)
- ☐ Define the presentation format (modal, slide-in, inline, bottom bar)
- ☐ Plan the dismiss behavior (X button, click outside, swipe away)
- ☐ Set re-trigger rules (show again if dismissed? after how long?)
| Parameter | Value |
|---|---|
| Trigger Event | [Completed action X / Visited page Y / N sessions] |
| Delay After Trigger | [X seconds after page load / after action complete] |
| Presentation | [Modal / Slide-in / Inline / Bottom bar] |
| Dismiss Behavior | [X button / Click outside / Auto-dismiss after X sec] |
| Re-trigger After Dismiss | [Never / After X days / After X sessions] |
| Cooldown Between Surveys | [X days minimum between any survey for same user] |
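The dismiss, re-trigger, and cooldown rules above can be sketched as a single gating check. This is a minimal illustration, not a prescribed implementation: the field names, the 30-day default cooldown, and the "never re-trigger after dismiss" default are assumptions you would replace with the values from your own table.

```python
from datetime import datetime, timedelta

def should_trigger(now, last_any_survey_at, last_dismissed_at,
                   already_responded,
                   cooldown_days=30, retrigger_days=None):
    """Return True if the survey may be shown after its trigger event.

    All parameter names and defaults are illustrative assumptions.
    retrigger_days=None models "never re-trigger after dismiss".
    """
    if already_responded:
        return False  # never re-ask someone who already answered
    if last_any_survey_at and now - last_any_survey_at < timedelta(days=cooldown_days):
        return False  # global cooldown between any two surveys
    if last_dismissed_at:
        if retrigger_days is None:
            return False  # dismissed once, never shown again
        if now - last_dismissed_at < timedelta(days=retrigger_days):
            return False  # re-trigger window has not elapsed yet
    return True
```

The trigger delay (e.g. "5 seconds after page load") stays on the client; this check is only the eligibility decision that runs before the survey is queued.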
Targeting Rules
- ☐ Define inclusion criteria (who sees the survey)
- ☐ Define exclusion criteria (who should never see it)
- ☐ Set a global survey fatigue limit per user
- ☐ Plan for segment-level sample rates
| Rule Type | Condition | Rationale |
|---|---|---|
| Include | Users who completed [action] in last [X] days | They have relevant experience |
| Include | Users on [plan] tier | Segment-specific feedback |
| Exclude | Users who saw any survey in last [X] days | Prevent fatigue |
| Exclude | Users in first [X] days of signup | Let them onboard first |
| Exclude | Users already responded to this survey | No duplicates |
| Sample Rate | Show to [X]% of eligible users | Control volume |
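One practical detail of the sample-rate row: if you sample with a fresh random number on every page load, the same user can flicker in and out of the sample. A common fix is deterministic sampling, hashing the user and survey IDs so each user lands in a stable bucket. A small sketch, with hypothetical IDs:

```python
import hashlib

def in_sample(user_id, survey_id, sample_rate):
    """Deterministic sampling: hash user+survey so a user's
    in/out decision is stable across sessions.

    sample_rate is a fraction, e.g. 0.4 for 40% of eligible users.
    """
    digest = hashlib.sha256(f"{survey_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return bucket < sample_rate
```

Run the include/exclude rules first; only users who pass every rule should reach the sampling check.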
Question Design
- ☐ Limit to 1-3 questions per survey (microsurvey format)
- ☐ Lead with the quantitative question (scale, choice), follow with open text
- ☐ Write questions in plain language (no jargon, no double negatives)
- ☐ Test questions with 5 team members before launching
- ☐ Define the skip logic (if answer is X, show follow-up Y)
| Question # | Text | Format | Skip Logic | Required |
|---|---|---|---|---|
| Q1 | [Quantitative question] | [Scale / Choice / Thumbs] | None | Yes |
| Q2 | [Follow-up based on Q1] | [Open text / Choice] | Show if Q1 = [value] | No |
| Q3 | [Optional deep-dive] | [Open text] | Show if Q2 answered | No |
Question quality checklist:
- ☐ Each question asks about one thing only (no compound questions)
- ☐ Scale anchors are clear ("1 = Very Difficult, 7 = Very Easy")
- ☐ Multiple choice options are mutually exclusive and collectively exhaustive
- ☐ Open text fields have placeholder text suggesting the level of detail expected
- ☐ Questions do not lead the respondent toward a particular answer
Sample Rate Management
- ☐ Calculate the sample rate needed to reach your target response count within the run period
- ☐ Set a global per-user cooldown (recommended: 30 days between surveys)
- ☐ Cap concurrent surveys (recommended: 1 survey running per user segment at a time)
- ☐ Monitor response rate daily for the first week and adjust sample rate if needed
| Parameter | Value | Calculation |
|---|---|---|
| Eligible users per day | [N] | From analytics |
| Target responses | [N] | For statistical significance |
| Expected response rate | [%] | Based on survey type |
| Required impressions | [N] | Target / response rate |
| Run period | [X days] | Campaign duration |
| Sample rate | [%] | Required impressions / (eligible per day × run period) |
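The table's arithmetic reduces to one formula. Here it is as a small function with a worked example; the numbers in the comment (100 eligible users/day, 200 target responses, 20% expected response rate, 31 days) are illustrative, not recommendations:

```python
def required_sample_rate(eligible_per_day, target_responses,
                         expected_response_rate, run_days):
    """Sample rate = required impressions / available impressions,
    capped at 100%. expected_response_rate is a fraction (0.20 = 20%)."""
    required_impressions = target_responses / expected_response_rate
    available_impressions = eligible_per_day * run_days
    return min(1.0, required_impressions / available_impressions)

# Example: 200 responses at a 20% response rate over 31 days,
# with 100 eligible users per day:
#   required impressions  = 200 / 0.20 = 1,000
#   available impressions = 100 * 31  = 3,100
#   sample rate           = 1,000 / 3,100 ≈ 32%
```

If the function returns 1.0, you cannot hit the target in the run period: extend the run, widen targeting, or lower the target.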
Analysis Framework
- ☐ Define how you will analyze quantitative responses (averages, distributions, segments)
- ☐ Define how you will analyze open text responses (coding themes, sentiment)
- ☐ Plan the segmentation cuts (by plan, tenure, usage level, feature adoption)
- ☐ Set the decision threshold (what score or theme triggers action?)
- ☐ Define who reviews results and how often
| Analysis | Method | Tool | Frequency |
|---|---|---|---|
| Quantitative | Average + distribution + trend | Spreadsheet or BI tool | Weekly |
| Open text | Theme coding (manual or AI) | Spreadsheet + tags | Weekly |
| Segmentation | Break scores by plan, tenure, feature use | BI tool | Bi-weekly |
| Action mapping | Map themes to product backlog items | [Backlog tool] | Bi-weekly |
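The quantitative row above (average + distribution) needs no BI tool to prototype. A minimal sketch for a 1-5 CSAT question; the "top-2-box" metric (share of 4s and 5s) is a common convention, included here as an assumption rather than part of this template:

```python
from collections import Counter

def csat_summary(scores):
    """Average, top-2-box share, and distribution for 1-5 CSAT scores."""
    dist = Counter(scores)
    avg = sum(scores) / len(scores)
    top2 = sum(1 for s in scores if s >= 4) / len(scores)  # share rating 4 or 5
    return {"avg": round(avg, 2),
            "top2_pct": round(top2 * 100, 1),
            "distribution": {k: dist[k] for k in sorted(dist)}}
```

Report the distribution alongside the average: a 3.5 average from all 3s and 4s means something very different from a 3.5 split between 1s and 5s.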
The Product Analytics Handbook covers segmented analysis techniques that apply directly to survey data interpretation.
Response Protocol
- ☐ Define who is responsible for reviewing responses
- ☐ Set SLA for closing the loop with respondents who leave contact info
- ☐ Plan how findings flow into the product backlog
- ☐ Create a template for sharing survey results with the team
Filled Example: SaaS Onboarding CSAT Survey
Strategy
| Field | Value |
|---|---|
| Research Question | How satisfied are new users with the onboarding flow? |
| Survey Type | CSAT (5-star) |
| Target Segment | Users who completed onboarding in the last 48 hours |
| Sample Size | 200 responses |
| Run Period | March 1 - March 31 |
Questions
| # | Text | Format | Skip Logic |
|---|---|---|---|
| Q1 | "How would you rate your setup experience?" | 5-star scale | None |
| Q2 | "What was the hardest part?" | Multiple choice (5 options + Other) | Show if Q1 <= 3 stars |
| Q3 | "Any suggestions to improve setup?" | Open text | Show if Q2 answered |
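The skip logic in this example table can be expressed directly: Q2 appears only for low Q1 scores, and Q3 only once Q2 is answered. A sketch, assuming `answers` maps question IDs to the user's responses (stars for Q1):

```python
def visible_questions(answers):
    """Return which of Q1-Q3 to show, per the example's skip logic:
    Q2 if Q1 <= 3 stars, Q3 if Q2 was answered."""
    shown = ["Q1"]
    q1 = answers.get("Q1")
    if q1 is not None and q1 <= 3:
        shown.append("Q2")
        if answers.get("Q2") is not None:
            shown.append("Q3")
    return shown
```

Keeping the branching this shallow is deliberate: with 1-3 questions, skip logic deeper than one level usually means the survey is trying to do too much.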
Targeting
- Include: Users who completed onboarding step 5 in the last 48 hours
- Exclude: Users who saw any survey in the last 30 days
- Exclude: Internal team accounts
- Sample rate: 40% of eligible users
- Presentation: Slide-in from bottom-right, 5 seconds after onboarding completion screen loads
- Dismiss: X button, do not re-trigger
Key Takeaways
- Microsurveys (1-3 questions) outperform long surveys by 3-5x on response rate. Ask less, learn more.
- Trigger surveys after relevant actions, not on random page loads. Context drives quality responses.
- Set global cooldowns (30+ days) and sample rates to prevent survey fatigue across your user base.
- Design the analysis framework before launching. If you do not know how you will use the data, do not collect it.
- Close the loop. Users who give feedback and see no change stop responding to future surveys.
About This Template
Created by: Tim Adair
Last Updated: 3/5/2026
Version: 1.0.0
License: Free for personal and commercial use
