
Intercept Survey Template

Free intercept survey template for product teams. Design in-product surveys that capture user feedback at the moment of experience without disrupting workflows.

By Tim Adair • Last updated 2026-03-05


What This Template Is For

Intercept surveys appear inside your product at a specific moment in the user's workflow. Unlike emailed surveys (which suffer from recall bias and low response rates), intercept surveys capture feedback at the point of experience, when the user's memory is fresh and their context is clear.

This template helps you design intercept surveys that collect high-quality feedback without annoying users. It covers trigger design (when to show the survey), question design (what to ask and how to ask it), targeting rules (who sees it), and frequency controls (how to prevent survey fatigue). The output is a survey specification your engineering team or survey tool can implement.

The key constraint is brevity. An intercept survey should take 15-30 seconds to complete. If your research question requires 10+ questions, you need a full survey, not an intercept. Intercept surveys are best for single-point-in-time feedback: "How easy was that?" "Did you find what you needed?" "How likely are you to recommend this feature?"

For deeper qualitative understanding, follow up survey responses with customer interviews. The Product Discovery Handbook covers how to combine quantitative survey data with qualitative research methods.

When to Use This Template

  • After a user completes a key workflow. Measure satisfaction, effort, or task success at the exact moment the experience ends.
  • After onboarding. Ask new users about their first experience before they forget the friction points.
  • Before a major redesign. Baseline current satisfaction on the workflows you plan to change.
  • After launching a new feature. Collect initial reactions from early users.
  • When you need to measure a specific metric over time. Run a recurring intercept survey (monthly or quarterly) to track NPS, CSAT, or CES trends.

How to Use This Template

  1. Define the research question. What do you need to learn?
  2. Choose the trigger. When in the user's workflow should the survey appear?
  3. Write the questions. Keep it to 1-3 questions maximum.
  4. Set targeting and frequency rules. Who sees it, and how often?
  5. Implement and monitor. Launch, watch response rates, and iterate.

The Template

Part 1: Survey Specification

| Field | Details |
| --- | --- |
| Survey Name | [Internal name for tracking, e.g., "Post-export CSAT Q1 2026"] |
| Research Question | [What decision will this survey inform?] |
| Metric | [NPS / CSAT / CES / Custom / Open-ended] |
| Owner | [PM name] |
| Launch Date | [YYYY-MM-DD] |
| End Date | [YYYY-MM-DD or "Ongoing"] |
| Target Responses | [Minimum sample for statistical confidence, e.g., 200+] |
| Tool | [Pendo / Sprig / Hotjar / Appcues / Custom / Intercom] |

Part 2: Trigger Design

Define exactly when the survey appears.

| Field | Details |
| --- | --- |
| Trigger Event | [The specific user action that triggers the survey, e.g., "User clicks 'Export Complete' button" or "User views dashboard for 30+ seconds"] |
| Delay After Trigger | [Seconds to wait after the trigger, e.g., "2 seconds" to avoid interrupting the flow] |
| Page / Screen | [Where the survey appears, e.g., "Export confirmation page" or "Dashboard"] |
| Placement | [Bottom-right popover / Center modal / Inline / Slide-in from right] |
| Dismissal | [X button always visible / Auto-dismiss after 30 seconds / Persist until answered or dismissed] |
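To make the trigger fields concrete, here is a minimal sketch of how they combine into a single show/hide decision. The spec keys and values are illustrative, not tied to any particular survey tool:

```python
# Hypothetical trigger spec mirroring the fields above; names are
# illustrative, not taken from any specific survey tool's API.
TRIGGER_SPEC = {
    "trigger_event": "export_complete",   # Trigger Event
    "delay_seconds": 2,                   # Delay After Trigger
    "page": "/export/confirmation",       # Page / Screen
    "placement": "bottom-right-popover",  # Placement
}

def should_show_survey(event: str, seconds_since_event: float, current_page: str) -> bool:
    """True once the trigger event has fired, the delay has elapsed,
    and the user is still on the page the survey targets."""
    return (event == TRIGGER_SPEC["trigger_event"]
            and seconds_since_event >= TRIGGER_SPEC["delay_seconds"]
            and current_page == TRIGGER_SPEC["page"])
```

The page check matters: if the user navigates away during the delay, the survey should not follow them, because the context it is asking about is gone.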

Trigger type options:

| Trigger Type | When to Use | Example |
| --- | --- | --- |
| Post-action | After completing a specific task | Survey after exporting a report |
| Post-session | After a set number of actions or time | Survey after 5th session or 10 minutes |
| Event-based | After encountering a specific state | Survey after seeing an error message |
| Time-based | After N days since signup or feature adoption | Survey 7 days after first using a feature |
| Exit-intent | When user is about to leave | Survey on mouse moving to close tab (web) |

Part 3: Targeting Rules

| Field | Details |
| --- | --- |
| User Segment | [Which users see this? e.g., "Free tier users who signed up > 7 days ago"] |
| Include Criteria | [Positive conditions, e.g., "Has completed the workflow at least once before"] |
| Exclude Criteria | [Negative conditions, e.g., "Has been shown any survey in the last 14 days"] |
| Sample Rate | [% of eligible users who see it, e.g., "25%" to reduce fatigue while maintaining sample] |
| Max Shows Per User | [Lifetime or per-period cap, e.g., "Once per user per quarter"] |

Frequency control rules (apply globally across all intercept surveys):

  • No user sees more than 1 survey per 14-day period
  • No user sees the same survey more than once per quarter
  • Users who dismissed a survey are excluded from all surveys for 30 days
  • Users who completed the survey are excluded from that survey permanently
  • New users (< 3 days since signup) are excluded from all surveys except onboarding surveys

Part 4: Question Design

Question 1 (Required): Rating or Scale

| Field | Details |
| --- | --- |
| Question Text | [e.g., "How easy was it to export your report?"] |
| Scale Type | [1-5 stars / 1-7 Likert / 0-10 NPS / Thumbs up/down / 1-5 smiley faces] |
| Scale Labels | [e.g., 1 = "Very difficult", 5 = "Very easy"] |

Question 2 (Optional): Follow-Up

| Field | Details |
| --- | --- |
| Condition | [When to show, e.g., "Only if Q1 rating is 1-3" or "Always"] |
| Question Text | [e.g., "What made it difficult?" or "What would you improve?"] |
| Format | [Open text (1-2 sentences) / Multiple choice / Single select] |
| Options (if MC) | [List options] |

Question 3 (Optional): Single Additional Question

| Field | Details |
| --- | --- |
| Condition | [When to show] |
| Question Text | |
| Format | |

Design rules:

  • Maximum 3 questions total. One rating + one follow-up is the ideal pattern.
  • Show follow-up questions conditionally. If the user rates 4-5, skip the follow-up. If they rate 1-3, ask what went wrong.
  • Use the same scale consistently across all intercept surveys so you can compare scores.
  • End with a "Thank you" screen and optional link to a feedback board or community.
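The conditional-routing rule is a one-line branch. A minimal sketch (the threshold and question text are the examples suggested above, not fixed constants of any tool):

```python
LOW_RATING_MAX = 3  # on a 1-5 scale, ratings 1-3 get the follow-up

def next_step(q1_rating: int) -> str:
    """Route low raters to the open-ended follow-up; high raters
    go straight to the thank-you screen."""
    if q1_rating <= LOW_RATING_MAX:
        return "follow_up"  # e.g., "What made it difficult?"
    return "thank_you"
```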

Part 5: Results Template

Response Summary

| Metric | Value |
| --- | --- |
| Total eligible users | [#] |
| Surveys shown | [#] |
| Responses collected | [#] |
| Response rate | [%] |
| Dismissal rate | [%] |
| Average rating (Q1) | [/5 or /10] |
| Median rating | |
| NPS Score (if applicable) | [-100 to +100] |
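Most of these summary metrics are one-liners once the raw responses are exported. A sketch, assuming ratings arrive as a plain list (function names are illustrative):

```python
import statistics

def summarize(surveys_shown: int, ratings: list[int]) -> dict:
    """Response rate, average, and median for the summary table above."""
    n = len(ratings)
    return {
        "responses": n,
        "response_rate_pct": round(100 * n / surveys_shown, 1),
        "average": round(statistics.mean(ratings), 2),
        "median": statistics.median(ratings),
    }

def nps(scores: list[int]) -> int:
    """Standard NPS on 0-10 answers: % promoters (9-10)
    minus % detractors (0-6), giving a score from -100 to +100."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))
```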

Rating Distribution

| Rating | Count | % | Visual |
| --- | --- | --- | --- |
| 5 | | | [bar] |
| 4 | | | [bar] |
| 3 | | | [bar] |
| 2 | | | [bar] |
| 1 | | | [bar] |

Open-Ended Response Themes (Q2)

| Theme | Count | Example Response | Action |
| --- | --- | --- | --- |
| [e.g., "Export takes too long"] | | ["It took 3 minutes to export a 10-page report"] | |
| [e.g., "Format options missing"] | | ["I need to export as CSV, not just PDF"] | |

Segment Comparison (if applicable)

| Segment | Avg Rating | Response Rate | N |
| --- | --- | --- | --- |
| [e.g., Free tier] | | | |
| [e.g., Pro tier] | | | |
| [e.g., New users < 30 days] | | | |
| [e.g., Power users 50+ sessions] | | | |

Part 6: Action Plan

| Finding | Severity | Proposed Action | Owner | Timeline |
| --- | --- | --- | --- | --- |
| [e.g., "42% of 1-2 raters cited export speed"] | High | Optimize export pipeline, target < 30 sec | Eng | Q2 Sprint 3 |

Filled Example: Post-Onboarding CES Survey

Context. A B2B project management tool wants to measure the onboarding experience for new users. They deploy a Customer Effort Score (CES) survey that triggers after a new user completes their first project.

Survey Spec (Example)

| Field | Details |
| --- | --- |
| Survey Name | Post-Onboarding CES Q1 2026 |
| Trigger Event | User creates their first project and adds at least one task |
| Delay | 5 seconds after the "Project created" confirmation |
| Placement | Bottom-right slide-in |
| Targeting | Users who signed up in the last 30 days, first project only |
| Frequency | Once per user, lifetime |
| Sample Rate | 100% (all eligible new users) |

Questions (Example)

Q1: "How easy was it to set up your first project?" (1-5 scale: Very Difficult to Very Easy)

Q2 (conditional, if Q1 is 1-3): "What was the hardest part?" (Open text)

Results (Example, 4-Week Collection)

| Metric | Value |
| --- | --- |
| Surveys shown | 342 |
| Responses | 198 |
| Response rate | 57.9% |
| Average CES | 3.4 / 5 |
| % rating 4-5 (easy) | 52% |
| % rating 1-2 (difficult) | 19% |

Top themes from low-raters:

  1. "Confused about the difference between projects and workspaces" (12 responses)
  2. "Could not figure out how to invite my team" (8 responses)
  3. "The template picker had too many options" (6 responses)

Action: Redesign the project creation flow to include an inline "invite team" step and reduce the template picker from 24 options to 6 (with a "See all" link).


Key Takeaways

  • Intercept surveys beat email surveys on response rate (40-60% vs 10-15%) and data quality (real-time context vs recall bias). Use them for in-product feedback.
  • One question is often enough. A single CES or CSAT rating with a conditional follow-up text field gives you both the metric and the context. Adding more questions drops completion rates sharply.
  • Frequency controls are not optional. Users who see surveys too often develop "survey blindness" and start dismissing them without reading. Enforce the 14-day minimum gap globally.
  • Conditional follow-ups are the highest-value pattern. Show the open-ended question only to low raters. They have the most to tell you, and their frustration is freshest. High raters rarely provide actionable text.
  • Compare segments, not just averages. A 3.8 average might mask the fact that new users rate 2.5 while power users rate 4.5. Segment by tenure, plan tier, and use case.
  • Feed results into your hypothesis backlog. Each low-satisfaction theme is a hypothesis to investigate further through customer interviews or usability tests. The Product Discovery Handbook shows how to build this feedback loop.

About This Template

Created by: Tim Adair

Last Updated: 2026-03-05

Version: 1.0.0

License: Free for personal and commercial use

Frequently Asked Questions

**What is the best scale for intercept surveys?**
It depends on the metric. For satisfaction (CSAT), use a 1-5 scale with labeled endpoints ("Very Dissatisfied" to "Very Satisfied"). For effort (CES), use a 1-5 or 1-7 scale with labeled endpoints ("Very Difficult" to "Very Easy"). For loyalty ([NPS](/glossary/nps-net-promoter-score)), use the standard 0-10 scale. For quick binary feedback, thumbs up/down is effective. Consistency matters more than the specific scale: pick one and use it across all surveys so you can compare over time.
**How many responses do I need for reliable results?**
For a rating average, 100+ responses gives you a margin of error around +/- 0.2 on a 5-point scale. For segmented analysis (comparing free vs pro users), you need 50+ per segment. For open-ended themes, 30+ responses typically reveal the major themes. If your product has low traffic, run the survey longer rather than lowering the sample rate.
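The +/- 0.2 figure follows from the normal approximation for a sample mean. A quick sketch (assumes a rating standard deviation near 1.0, which is typical for 5-point CSAT/CES data):

```python
import math
import statistics

def margin_of_error(ratings: list[float], z: float = 1.96) -> float:
    """95% margin of error for the mean rating (normal approximation):
    z * (sample standard deviation) / sqrt(n)."""
    return z * statistics.stdev(ratings) / math.sqrt(len(ratings))
```

With 100 responses and a standard deviation of about 1.0, this gives 1.96 × 1.0 / 10 ≈ 0.2, matching the rule of thumb above; quadrupling the sample halves the margin.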
**How do I prevent survey fatigue?**
Three rules: (1) enforce a minimum gap between surveys per user (14 days is the standard), (2) set a sample rate below 100% for high-traffic touchpoints (25-50%), and (3) cap the total number of surveys any user sees per quarter (2-3 maximum). If you run multiple intercept surveys across different product areas, maintain a central registry so the frequency controls apply globally.
**Should I incentivize responses to intercept surveys?**
Generally no. Intercept surveys already achieve 40-60% response rates because they appear in context with minimal effort. Incentives introduce response bias (people answering for the reward, not because they have genuine feedback). If your response rate is below 30%, the problem is usually timing, targeting, or survey length, not lack of incentive. Fix the trigger and simplify the questions before adding incentives.
**Can I use intercept surveys for NPS measurement?**
Yes, but with caveats. In-product NPS surveys tend to skew higher than email NPS surveys because they capture users who are actively engaged (by definition, they are using the product). This is fine as long as you compare in-product NPS to in-product NPS over time, not to industry benchmarks based on email surveys. Run the NPS intercept on a recurring schedule (quarterly) with consistent targeting to track trends.
