Template · Free · ⏱️ 45-60 minutes

In-App Survey Template

Free in-app survey template for product teams. Covers survey types, trigger logic, question design, targeting rules, response analysis, and sample rate management.

By Tim Adair · Last updated 2026-03-05

What This Template Is For

In-app surveys give you feedback at the moment of experience rather than days later in an email. Response rates for in-app microsurveys (1-3 questions) run 15-30%, compared to 2-5% for email surveys. But poorly timed or poorly targeted surveys interrupt the user experience and train users to dismiss every prompt. The difference between useful signal and annoying noise is targeting, timing, and question design.

This template helps you plan in-app surveys that collect actionable data without degrading the user experience. It covers survey types, trigger logic, targeting rules, question design, sample rate management, and analysis frameworks. Use it alongside the Product Discovery Handbook to understand where in-app surveys fit in your broader research practice.

For post-calculation or post-action feedback specifically, see how the NPS Calculator structures scoring and follow-up questions.


How to Use This Template

  1. Start with Survey Strategy. Define what you need to learn and why an in-app survey is the right method.
  2. Choose your survey type and format.
  3. Design the trigger logic and targeting rules.
  4. Write and validate the questions.
  5. Set sample rates to control survey fatigue.
  6. Plan the analysis framework before launching.

The Template

Survey Strategy

  • Define the research question you are trying to answer
  • Confirm that an in-app survey is the right method (vs. interviews, analytics, email survey)
  • Identify the user segment that can answer this question
  • Define the sample size needed for statistical significance
  • Set a launch date and end date (surveys should not run indefinitely)

| Field | Value |
| --- | --- |
| Research Question | [What do you need to learn?] |
| Survey Type | [NPS / CSAT / CES / Feature feedback / Churn reason] |
| Target Segment | [Who should see this?] |
| Sample Size Needed | [N responses for significance] |
| Run Period | [Start date] to [End date] |
| Owner | [PM name] |
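To fill in the Sample Size Needed row, the standard proportion-based estimate works for most microsurveys. A minimal sketch, assuming you want a ±margin on a proportion at ~95% confidence; the z-value and p = 0.5 defaults are the usual conservative choices, not something this template prescribes:

```python
import math

def sample_size(margin: float = 0.05, z: float = 1.96, p: float = 0.5) -> int:
    """Minimum responses to estimate a proportion within +/- margin
    at ~95% confidence (z = 1.96). p = 0.5 is the conservative worst
    case when the true proportion is unknown."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# A +/-5% margin needs ~385 responses; +/-10% needs ~97.
```

If 385 responses is out of reach for your segment, widen the margin or extend the run period rather than over-reading a small sample.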

Survey Types

  • Select the survey type that matches your research question
  • Define the response format (numeric scale, multiple choice, open text)

| Survey Type | When to Use | Format | Typical Questions |
| --- | --- | --- | --- |
| NPS | Measure overall loyalty quarterly | 0-10 scale + open text | "How likely to recommend?" + "Why?" |
| CSAT | After specific interactions | 1-5 stars or emoji scale | "How satisfied with [feature]?" |
| CES | After support or complex workflows | 1-7 scale | "How easy was it to [action]?" |
| Feature feedback | After feature use (first 3 times) | Thumbs up/down + open text | "Was this useful?" + "How to improve?" |
| Churn reason | On cancellation or downgrade | Multiple choice + open text | "Why are you leaving?" |
| PMF survey | Monthly to active users | Multiple choice | "How disappointed if this product disappeared?" |

Trigger Logic

  • Define the exact event or condition that triggers the survey
  • Set a delay after the trigger (do not interrupt mid-action)
  • Define the presentation format (modal, slide-in, inline, bottom bar)
  • Plan the dismiss behavior (X button, click outside, swipe away)
  • Set re-trigger rules (show again if dismissed? after how long?)

| Parameter | Value |
| --- | --- |
| Trigger Event | [Completed action X / Visited page Y / N sessions] |
| Delay After Trigger | [X seconds after page load / after action complete] |
| Presentation | [Modal / Slide-in / Inline / Bottom bar] |
| Dismiss Behavior | [X button / Click outside / Auto-dismiss after X sec] |
| Re-trigger After Dismiss | [Never / After X days / After X sessions] |
| Cooldown Between Surveys | [X days minimum between any survey for same user] |
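The cooldown and re-trigger rows reduce to a single gate check. A sketch under stated assumptions: the field names and the 30/14-day windows are illustrative, not a real survey-tool API.

```python
from datetime import datetime, timedelta

COOLDOWN_DAYS = 30    # minimum gap between any two surveys for one user
RETRIGGER_DAYS = 14   # wait after a dismissal before showing again

def may_show(now: datetime, last_survey_at=None, dismissed_at=None) -> bool:
    """True if neither the global cooldown nor a recent dismissal
    blocks showing this survey."""
    if last_survey_at and now - last_survey_at < timedelta(days=COOLDOWN_DAYS):
        return False  # global cross-survey cooldown still active
    if dismissed_at and now - dismissed_at < timedelta(days=RETRIGGER_DAYS):
        return False  # user dismissed recently; respect it
    return True
```

Run this gate before any trigger-event logic so an eligible event can never override fatigue protection.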

Targeting Rules

  • Define inclusion criteria (who sees the survey)
  • Define exclusion criteria (who should never see it)
  • Set a global survey fatigue limit per user
  • Plan for segment-level sample rates

| Rule Type | Condition | Rationale |
| --- | --- | --- |
| Include | Users who completed [action] in last [X] days | They have relevant experience |
| Include | Users on [plan] tier | Segment-specific feedback |
| Exclude | Users who saw any survey in last [X] days | Prevent fatigue |
| Exclude | Users in first [X] days of signup | Let them onboard first |
| Exclude | Users who already responded to this survey | No duplicates |
| Sample Rate | Show to [X]% of eligible users | Control volume |
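The Sample Rate row is easiest to enforce with deterministic hashing, so each user gets a stable in-or-out decision per survey instead of flickering between page loads. A sketch, assuming string user and survey ids; the hashing scheme is one reasonable choice, not a prescribed one:

```python
import hashlib

def in_sample(user_id: str, survey_id: str, rate: float) -> bool:
    """Stable per-(user, survey) sampling: hash both ids to a value
    in [0, 1) and compare against the sample rate. The same inputs
    always produce the same decision."""
    digest = hashlib.sha256(f"{survey_id}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0x100000000 < rate
```

Keying the hash on both ids means a user excluded from one survey is not automatically excluded from the next, which keeps samples independent across campaigns.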

Question Design

  • Limit to 1-3 questions per survey (microsurvey format)
  • Lead with the quantitative question (scale, choice), follow with open text
  • Write questions in plain language (no jargon, no double negatives)
  • Test questions with 5 team members before launching
  • Define the skip logic (if answer is X, show follow-up Y)

| Question # | Text | Format | Skip Logic | Required |
| --- | --- | --- | --- | --- |
| Q1 | [Quantitative question] | [Scale / Choice / Thumbs] | None | Yes |
| Q2 | [Follow-up based on Q1] | [Open text / Choice] | Show if Q1 = [value] | No |
| Q3 | [Optional deep-dive] | [Open text] | Show if Q2 answered | No |

Question quality checklist:

  • Each question asks about one thing only (no compound questions)
  • Scale anchors are clear ("1 = Very Difficult, 7 = Very Easy")
  • Multiple choice options are mutually exclusive and collectively exhaustive
  • Open text fields have placeholder text suggesting the level of detail expected
  • Questions do not lead the respondent toward a particular answer
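The skip-logic column amounts to a tiny state machine. A sketch with assumed question ids and an assumed "low score" threshold of 3 on a 5-point scale; substitute your own cutoff:

```python
def next_question(answers: dict):
    """Given the answers collected so far, return the next question id
    to show, or None when the survey is done. Ids and the <= 3
    threshold are illustrative assumptions."""
    if "Q1" not in answers:
        return "Q1"                                   # quantitative question first
    if "Q2" not in answers:
        return "Q2" if answers["Q1"] <= 3 else None   # follow up on low scores only
    if "Q3" not in answers:
        return "Q3" if answers["Q2"] else None        # deep-dive only if Q2 answered
    return None
```

Keeping the logic in one pure function makes it trivial to unit-test every branch before the survey ships.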

Sample Rate Management

  • Calculate the sample rate needed to reach your target response count within the run period
  • Set a global per-user cooldown (recommended: 30 days between surveys)
  • Cap concurrent surveys (recommended: 1 survey running per user segment at a time)
  • Monitor response rate daily for the first week and adjust sample rate if needed

| Parameter | Value | Calculation |
| --- | --- | --- |
| Eligible users per day | [N] | From analytics |
| Target responses | [N] | For statistical significance |
| Expected response rate | [%] | Based on survey type |
| Required impressions | [N] | Target / response rate |
| Run period | [X days] | Campaign duration |
| Sample rate | [%] | Required impressions / (eligible per day × run period) |
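The bottom row of the table is the whole calculation; as code, it is a direct transcription of that formula, capped at 100%:

```python
import math

def required_sample_rate(target_responses: int, response_rate: float,
                         eligible_per_day: int, run_days: int) -> float:
    """Sample rate = required impressions / total eligible impressions,
    where required impressions = target responses / response rate."""
    impressions = math.ceil(target_responses / response_rate)
    return min(1.0, impressions / (eligible_per_day * run_days))

# e.g. 200 responses at a 20% response rate needs 1,000 impressions;
# with 100 eligible users/day over 31 days, that's a ~32% sample rate.
```

If the function returns 1.0, you cannot hit the target in the run period; extend the run period or relax the targeting before lowering the significance bar.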

Analysis Framework

  • Define how you will analyze quantitative responses (averages, distributions, segments)
  • Define how you will analyze open text responses (coding themes, sentiment)
  • Plan the segmentation cuts (by plan, tenure, usage level, feature adoption)
  • Set the decision threshold (what score or theme triggers action?)
  • Define who reviews results and how often

| Analysis | Method | Tool | Frequency |
| --- | --- | --- | --- |
| Quantitative | Average + distribution + trend | Spreadsheet or BI tool | Weekly |
| Open text | Theme coding (manual or AI) | Spreadsheet + tags | Weekly |
| Segmentation | Break scores by plan, tenure, feature use | BI tool | Bi-weekly |
| Action mapping | Map themes to product backlog items | [Backlog tool] | Bi-weekly |
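For the weekly quantitative cut, a plain stdlib summary covers the average, the distribution, and the share of low scores; a spreadsheet or BI tool does the same job. The <= 3 low-score cutoff is an assumption for a 5-point scale, not part of the template:

```python
from collections import Counter
from statistics import mean

def summarize_scores(scores: list) -> dict:
    """Average, per-score distribution, and share of low (<= 3)
    responses for a batch of 1-5 scale answers."""
    dist = Counter(scores)
    return {
        "n": len(scores),
        "average": round(mean(scores), 2),
        "distribution": {s: dist[s] for s in sorted(dist)},
        "pct_low": sum(1 for s in scores if s <= 3) / len(scores),
    }
```

Report the distribution alongside the average: a 3.6 made of mostly 4s reads very differently from a 3.6 made of 5s and 1s.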

The Product Analytics Handbook covers segmented analysis techniques that apply directly to survey data interpretation.


Response Protocol

  • Define who is responsible for reviewing responses
  • Set SLA for closing the loop with respondents who leave contact info
  • Plan how findings flow into the product backlog
  • Create a template for sharing survey results with the team

Filled Example: SaaS Onboarding CSAT Survey

Strategy

| Field | Value |
| --- | --- |
| Research Question | How satisfied are new users with the onboarding flow? |
| Survey Type | CSAT (5-star) |
| Target Segment | Users who completed onboarding in the last 48 hours |
| Sample Size | 200 responses |
| Run Period | March 1 - March 31 |

Questions

| # | Text | Format | Skip Logic |
| --- | --- | --- | --- |
| Q1 | "How would you rate your setup experience?" | 5-star scale | None |
| Q2 | "What was the hardest part?" | Multiple choice (5 options + Other) | Show if Q1 <= 3 stars |
| Q3 | "Any suggestions to improve setup?" | Open text | Show if Q2 answered |

Targeting

  • Include: Users who completed onboarding step 5 in the last 48 hours
  • Exclude: Users who saw any survey in the last 30 days
  • Exclude: Internal team accounts
  • Sample rate: 40% of eligible users
  • Presentation: Slide-in from bottom-right, 5 seconds after onboarding completion screen loads
  • Dismiss: X button, do not re-trigger

Key Takeaways

  • Microsurveys (1-3 questions) outperform long surveys by 3-5x on response rate. Ask less, learn more
  • Trigger surveys after relevant actions, not on random page loads. Context drives quality responses
  • Set global cooldowns (30+ days) and sample rates to prevent survey fatigue across your user base
  • Design the analysis framework before launching. If you do not know how you will use the data, do not collect it
  • Close the loop. Users who give feedback and see no change stop responding to future surveys

About This Template

Created by: Tim Adair

Last Updated: 2026-03-05

Version: 1.0.0

License: Free for personal and commercial use

Frequently Asked Questions

What response rate should I expect from in-app surveys?
Microsurveys (1-2 questions) with contextual triggers typically achieve 15-30% response rates. Longer surveys (5+ questions) drop to 5-10%. The biggest driver of response rate is timing: surveys shown immediately after a relevant action outperform surveys shown on random page loads by 3-5x. Format also matters. A single thumbs-up/down question in a bottom bar gets higher response rates than a modal with a 10-point scale. Start with the simplest format that answers your [product-market fit](/glossary/product-market-fit) question.
How do I avoid survey fatigue?
Three rules. First, set a global cooldown of at least 30 days between any two surveys for the same user. Second, never run more than one survey per user segment simultaneously. Third, use sample rates so only a fraction of eligible users see each survey. Most teams over-survey their power users because those users trigger the most events. Protect your most active users from survey overload by capping impressions regardless of trigger frequency.
Should I use modals or inline prompts?
Modals get higher visibility but are more interruptive. Use modals for high-priority surveys (NPS, churn reasons) that you need every eligible user to see. Use inline prompts or slide-ins for feature feedback that should not break the user's flow. Bottom-bar prompts are a good compromise: visible but not blocking. Match the survey importance to the interruption level. The [PLG Handbook](/plg-guide) discusses how to balance in-product communication without degrading the experience.
When should I NOT use an in-app survey?
Do not use in-app surveys when you need deep qualitative understanding (use interviews instead), when the topic is sensitive (users will not share salary or personal info in a popup), when you need responses from churned users (they are not in the app), or when you do not have a clear research question. Running a "general feedback" survey with no hypothesis is a waste of user attention and your analysis time.
