What This Template Is For
Intercept surveys appear inside your product at a specific moment in the user's workflow. Unlike emailed surveys (which suffer from recall bias and low response rates), intercept surveys capture feedback at the point of experience, when the user's memory is fresh and their context is clear.
This template helps you design intercept surveys that collect high-quality feedback without annoying users. It covers trigger design (when to show the survey), question design (what to ask and how to ask it), targeting rules (who sees it), and frequency controls (how to prevent survey fatigue). The output is a survey specification your engineering team or survey tool can implement.
The key constraint is brevity. An intercept survey should take 15-30 seconds to complete. If your research question requires 10+ questions, you need a full survey, not an intercept. Intercept surveys are best for single-point-in-time feedback: "How easy was that?" "Did you find what you needed?" "How likely are you to recommend this feature?"
For deeper qualitative understanding, follow up survey responses with customer interviews. The Product Discovery Handbook covers how to combine quantitative survey data with qualitative research methods.
When to Use This Template
- After a user completes a key workflow. Measure satisfaction, effort, or task success at the exact moment the experience ends.
- After onboarding. Ask new users about their first experience before they forget the friction points.
- Before a major redesign. Baseline current satisfaction on the workflows you plan to change.
- After launching a new feature. Collect initial reactions from early users.
- When you need to measure a specific metric over time. Run a recurring intercept survey (monthly or quarterly) to track NPS, CSAT, or CES trends.
How to Use This Template
- Define the research question. What do you need to learn?
- Choose the trigger. When in the user's workflow should the survey appear?
- Write the questions. Keep it to 1-3 questions maximum.
- Set targeting and frequency rules. Who sees it, and how often?
- Implement and monitor. Launch, watch response rates, and iterate.
The Template
Part 1: Survey Specification
| Field | Details |
|---|---|
| Survey Name | [Internal name for tracking, e.g., "Post-export CSAT Q1 2026"] |
| Research Question | [What decision will this survey inform?] |
| Metric | [NPS / CSAT / CES / Custom / Open-ended] |
| Owner | [PM name] |
| Launch Date | [YYYY-MM-DD] |
| End Date | [YYYY-MM-DD or "Ongoing"] |
| Target Responses | [Minimum sample for statistical confidence, e.g., 200+] |
| Tool | [Pendo / Sprig / Hotjar / Appcues / Custom / Intercom] |
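A quick way to sanity-check the Target Responses field is to compute the margin of error your sample gives you. This is a minimal sketch assuming a simple random sample and the worst-case proportion (p = 0.5) at 95% confidence; real intercept samples are rarely perfectly random, so treat the result as a rough floor.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate margin of error for a proportion from n responses.

    z = 1.96 corresponds to 95% confidence; p = 0.5 is the worst case.
    """
    return z * math.sqrt(p * (1 - p) / n)

# 200 responses -> roughly +/- 7 percentage points at 95% confidence
print(round(margin_of_error(200) * 100, 1))  # 6.9
```

If you need to detect smaller shifts (say, a 3-point CSAT change), the formula shows you need roughly four times the responses to halve the margin of error.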
Part 2: Trigger Design
Define exactly when the survey appears.
| Field | Details |
|---|---|
| Trigger Event | [The specific user action that triggers the survey, e.g., "User clicks 'Export Complete' button" or "User views dashboard for 30+ seconds"] |
| Delay After Trigger | [Seconds to wait after the trigger, e.g., "2 seconds" to avoid interrupting the flow] |
| Page / Screen | [Where the survey appears, e.g., "Export confirmation page" or "Dashboard"] |
| Placement | [Bottom-right popover / Center modal / Inline / Slide-in from right] |
| Dismissal | [X button always visible / Auto-dismiss after 30 seconds / Persist until answered or dismissed] |
Trigger type options:
| Trigger Type | When to Use | Example |
|---|---|---|
| Post-action | After completing a specific task | Survey after exporting a report |
| Post-session | After a set number of actions or time | Survey after 5th session or 10 minutes |
| Event-based | After encountering a specific state | Survey after seeing an error message |
| Time-based | After N days since signup or feature adoption | Survey 7 days after first using a feature |
| Exit-intent | When user is about to leave | Survey on mouse moving to close tab (web) |
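The trigger fields above reduce to a small config plus one check: did the trigger event occur, and has the delay elapsed? This sketch uses hypothetical names (`TriggerConfig`, `should_fire`); your survey tool will have its own equivalents.

```python
from dataclasses import dataclass

@dataclass
class TriggerConfig:
    event: str            # e.g. "export_complete"
    delay_seconds: float  # wait after the trigger so you don't interrupt the flow
    placement: str        # e.g. "bottom_right_popover"

def should_fire(config: TriggerConfig, observed_event: str,
                seconds_since_event: float) -> bool:
    """Fire only on the configured event, once the delay has elapsed."""
    return (observed_event == config.event
            and seconds_since_event >= config.delay_seconds)

cfg = TriggerConfig(event="export_complete", delay_seconds=2.0,
                    placement="bottom_right_popover")
print(should_fire(cfg, "export_complete", 2.5))  # True
print(should_fire(cfg, "export_complete", 0.5))  # False (still inside the delay)
```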
Part 3: Targeting Rules
| Field | Details |
|---|---|
| User Segment | [Which users see this? e.g., "Free tier users who signed up > 7 days ago"] |
| Include Criteria | [Positive conditions, e.g., "Has completed the workflow at least once before"] |
| Exclude Criteria | [Negative conditions, e.g., "Has been shown any survey in the last 14 days"] |
| Sample Rate | [% of eligible users who see it, e.g., "25%" to reduce fatigue while maintaining sample] |
| Max Shows Per User | [Lifetime or per-period cap, e.g., "Once per user per quarter"] |
Frequency control rules (apply globally across all intercept surveys):
- ☐ No user sees more than 1 survey per 14-day period
- ☐ No user sees the same survey more than once per quarter
- ☐ Users who dismissed a survey are excluded from all surveys for 30 days
- ☐ Users who completed the survey are excluded from that survey permanently
- ☐ New users (< 3 days since signup) are excluded from all surveys except onboarding surveys
Part 4: Question Design
Question 1 (Required): Rating or Scale
| Field | Details |
|---|---|
| Question Text | [e.g., "How easy was it to export your report?"] |
| Scale Type | [1-5 stars / 1-7 Likert / 0-10 NPS / Thumbs up/down / 1-5 smiley faces] |
| Scale Labels | [e.g., 1 = "Very difficult", 5 = "Very easy"] |
Question 2 (Optional): Follow-Up
| Field | Details |
|---|---|
| Condition | [When to show, e.g., "Only if Q1 rating is 1-3" or "Always"] |
| Question Text | [e.g., "What made it difficult?" or "What would you improve?"] |
| Format | [Open text (1-2 sentences) / Multiple choice / Single select] |
| Options (if MC) | [List options] |
Question 3 (Optional): Single Additional Question
| Field | Details |
|---|---|
| Condition | [When to show] |
| Question Text | |
| Format | |
Design rules:
- Maximum 3 questions total. One rating + one follow-up is the ideal pattern.
- Show follow-up questions conditionally. If the user rates 4-5, skip the follow-up. If they rate 1-3, ask what went wrong.
- Use the same scale consistently across all intercept surveys so you can compare scores.
- End with a "Thank you" screen and optional link to a feedback board or community.
Part 5: Results Template
Response Summary
| Metric | Value |
|---|---|
| Total eligible users | [#] |
| Surveys shown | [#] |
| Responses collected | [#] |
| Response rate | [%] |
| Dismissal rate | [%] |
| Average rating (Q1) | [/5 or /10] |
| Median rating | |
| NPS Score (if applicable) | [-100 to +100] |
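If you run an NPS survey, the score in the table above is computed from the 0-10 ratings as the percentage of promoters (9-10) minus the percentage of detractors (0-6), giving a value between -100 and +100:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / n

# 4 promoters, 4 passives (7-8), 2 detractors out of 10 -> NPS of 20
print(nps([10, 9, 9, 10, 7, 8, 7, 8, 5, 3]))  # 20.0
```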
Rating Distribution
| Rating | Count | % | Visual |
|---|---|---|---|
| 5 | [bar] | ||
| 4 | [bar] | ||
| 3 | [bar] | ||
| 2 | [bar] | ||
| 1 | [bar] |
Open-Ended Response Themes (Q2)
| Theme | Count | Example Response | Action |
|---|---|---|---|
| [e.g., "Export takes too long"] | [#] | ["It took 3 minutes to export a 10-page report"] | [Proposed action] |
| [e.g., "Format options missing"] | [#] | ["I need to export as CSV, not just PDF"] | [Proposed action] |
Segment Comparison (if applicable)
| Segment | Avg Rating | Response Rate | N |
|---|---|---|---|
| [e.g., Free tier] | |||
| [e.g., Pro tier] | |||
| [e.g., New users < 30 days] | |||
| [e.g., Power users 50+ sessions] |
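Filling in the segment table is a simple group-by over the raw responses. A sketch, assuming your tool exports responses as (segment, rating) pairs; the field names are illustrative, not any particular vendor's export format:

```python
from collections import defaultdict

def segment_averages(responses: list[tuple[str, int]]) -> dict:
    """Average Q1 rating and response count (N) per segment."""
    totals = defaultdict(lambda: [0, 0])  # segment -> [rating sum, count]
    for segment, rating in responses:
        totals[segment][0] += rating
        totals[segment][1] += 1
    return {seg: (s / n, n) for seg, (s, n) in totals.items()}

data = [("free", 2), ("free", 3), ("pro", 5), ("pro", 4)]
print(segment_averages(data))  # {'free': (2.5, 2), 'pro': (4.5, 2)}
```

Even this toy output illustrates the takeaway below: a blended 3.5 average would hide the gap between the free tier (2.5) and the pro tier (4.5).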
Part 6: Action Plan
| Finding | Severity | Proposed Action | Owner | Timeline |
|---|---|---|---|---|
| [e.g., "42% of 1-2 raters cited export speed"] | High | Optimize export pipeline, target < 30 sec | Eng | Q2 Sprint 3 |
Filled Example: Post-Onboarding CES Survey
Context. A B2B project management tool wants to measure the onboarding experience for new users. They deploy a Customer Effort Score (CES) survey that triggers after a new user completes their first project.
Survey Spec (Example)
| Field | Details |
|---|---|
| Survey Name | Post-Onboarding CES Q1 2026 |
| Trigger Event | User creates their first project and adds at least one task |
| Delay | 5 seconds after the "Project created" confirmation |
| Placement | Bottom-right slide-in |
| Targeting | Users who signed up in the last 30 days, first project only |
| Frequency | Once per user, lifetime |
| Sample Rate | 100% (all eligible new users) |
Questions (Example)
Q1: "How easy was it to set up your first project?" (1-5 scale: Very Difficult to Very Easy)
Q2 (conditional, if Q1 is 1-3): "What was the hardest part?" (Open text)
Results (Example, 4-Week Collection)
| Metric | Value |
|---|---|
| Surveys shown | 342 |
| Responses | 198 |
| Response rate | 57.9% |
| Average CES | 3.4 / 5 |
| % rating 4-5 (easy) | 52% |
| % rating 1-2 (difficult) | 19% |
Top themes from low-raters:
- "Confused about the difference between projects and workspaces" (12 responses)
- "Could not figure out how to invite my team" (8 responses)
- "The template picker had too many options" (6 responses)
Action: Redesign the project creation flow to include an inline "invite team" step and reduce the template picker from 24 options to 6 (with a "See all" link).
Key Takeaways
- Intercept surveys beat email surveys on response rate (40-60% vs 10-15%) and data quality (real-time context vs recall bias). Use them for in-product feedback.
- One question is often enough. A single CES or CSAT rating with a conditional follow-up text field gives you both the metric and the context. Adding more questions drops completion rates sharply.
- Frequency controls are not optional. Users who see surveys too often develop "survey blindness" and start dismissing them without reading. Enforce the 14-day minimum gap globally.
- Conditional follow-ups are the highest-value pattern. Show the open-ended question only to low raters. They have the most to tell you, and their frustration is freshest. High raters rarely provide actionable text.
- Compare segments, not just averages. A 3.8 average might mask the fact that new users rate 2.5 while power users rate 4.5. Segment by tenure, plan tier, and use case.
- Feed results into your hypothesis backlog. Each low-satisfaction theme is a hypothesis to investigate further through customer interviews or usability tests. The Product Discovery Handbook shows how to build this feedback loop.
About This Template
Created by: Tim Adair
Last Updated: 3/5/2026
Version: 1.0.0
License: Free for personal and commercial use
