What This Template Is For
Customer interviews are the highest-signal research method available to product teams. Surveys tell you what people claim. Analytics tell you what people do. Interviews tell you why. But most product teams conduct interviews poorly: they ask leading questions, talk to the wrong people, fail to synthesize findings, and let insights decay in forgotten Notion pages.
This template structures the entire customer interview process: planning who to talk to, preparing questions that surface genuine insights, running the interview without biasing responses, and synthesizing findings into actionable product decisions. It includes stakeholder-specific question banks because what you ask a VP is different from what you ask an end user.
Use this template when you are doing discovery for a new feature, validating a product direction, investigating churn, understanding a new market segment, or building a business case for an investment. It works for prospect interviews (people who have not bought yet) and customer interviews (people who are already using your product).
The Product Discovery Handbook covers the full discovery process including when and how to use interviews. For quantitative validation after interviews, the RICE Calculator helps prioritize the opportunities you uncover. The Jobs to Be Done Framework provides a lens for structuring interview questions around customer outcomes. The Product Analytics Handbook covers how to combine interview insights with behavioral data.
How to Use This Template
- Define your research question. What decision will these interviews inform? Be specific.
- Select your interview participants. Choose 8-12 people across the relevant segments and roles.
- Prepare your interview guide. Customize the question bank for each stakeholder type.
- Run the interviews. Follow the structure but stay flexible. The best insights come from follow-up questions.
- Synthesize findings. Use the synthesis framework to identify patterns, not just individual stories.
- Share and act. Distribute findings to the team and connect them to specific product decisions.
The Customer Interview Guide Template
1. Research Plan
| Field | Details |
|---|---|
| Research question | [What specific question are these interviews designed to answer?] |
| Product decision this informs | [What will you do differently based on what you learn?] |
| Author | [Name] |
| Date | [Date] |
| Timeline | [Start date to findings delivery] |
| Number of interviews planned | [8-12 is the standard; fewer for narrow questions, more for broad exploration] |
Research scope.
| In Scope | Out of Scope |
|---|---|
| [e.g., Understanding why mid-market customers churn at month 6] | [e.g., Pricing sensitivity (separate research)] |
| [e.g., Identifying unmet needs in the onboarding experience] | [e.g., Competitive feature comparison (use other methods)] |
Hypotheses to test. [List 2-4 hypotheses you want to validate or invalidate.]
- [e.g., Customers churn because they never fully onboard, not because of missing features]
- [e.g., The primary buyer and the primary user have different success criteria]
- [e.g., Customers would pay more for a dedicated support tier]
2. Participant Selection
| Segment | Target Count | Selection Criteria | Recruitment Channel |
|---|---|---|---|
| [e.g., Active power users] | [3] | [Daily active, 6+ months tenure, >80% feature adoption] | [In-app prompt, CS introduction] |
| [e.g., Recent churners] | [3] | [Cancelled within last 90 days, was active before cancelling] | [Email outreach from CS] |
| [e.g., Prospects who did not buy] | [3] | [Completed demo/trial but did not convert] | [Sales team introduction] |
| [e.g., New customers (0-3 months)] | [3] | [Signed within last quarter, still in onboarding] | [CS introduction] |
Participant diversity checklist.
- ☐ Multiple company sizes represented (SMB, mid-market, enterprise)
- ☐ Multiple industries represented (if your product spans industries)
- ☐ Both buyers and end users included
- ☐ Mix of satisfied and dissatisfied customers (not just happy ones)
- ☐ Geographic diversity (if relevant to your research question)
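The segment targets above can be tracked programmatically as recruiting progresses. Here is a minimal Python sketch; the segment names and counts are hypothetical placeholders, not part of the template:

```python
# Hypothetical coverage check: segment targets from the participant table
# vs. recruits confirmed so far. Segment names are illustrative.
targets = {"power_users": 3, "recent_churners": 3,
           "lost_prospects": 3, "new_customers": 3}
confirmed = {"power_users": 3, "recent_churners": 1,
             "lost_prospects": 3, "new_customers": 2}

def recruiting_gaps(targets, confirmed):
    """Return the segments still short of their target count."""
    return {seg: n - confirmed.get(seg, 0)
            for seg, n in targets.items()
            if confirmed.get(seg, 0) < n}

print(recruiting_gaps(targets, confirmed))
# → {'recent_churners': 2, 'new_customers': 1}
```

A check like this makes it obvious when one segment (often churners, who are hardest to recruit) is lagging, so you can rebalance outreach before interviews start.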
Recruitment script.
> Subject: Quick 30-min conversation about [topic]
>
> Hi [Name],
>
> I am [Your Name], a product manager at [Company]. We are researching [topic area, not product] and I would love to hear about your experience.
>
> This is a 30-minute conversation. I am not selling anything. I am genuinely trying to understand [specific area] so we can make better decisions.
>
> Would you be open to a call this week or next? I am happy to work around your schedule.
>
> [Optional: We can share a summary of the research findings as a thank-you.]
3. Interview Structure
Total time: 30-45 minutes.
| Phase | Duration | Purpose |
|---|---|---|
| Warm-up | 3-5 min | Build rapport, set expectations, get consent to record |
| Context | 5-7 min | Understand their role, responsibilities, and environment |
| Core exploration | 15-20 min | Investigate the research question in depth |
| Specific probes | 5-10 min | Follow up on signals from the core section |
| Wrap-up | 3-5 min | Ask for referrals, thank them, explain next steps |
Ground rules for the interviewer.
- ☐ Listen more than you talk (target: 80% participant, 20% you)
- ☐ Never pitch your product or defend a design decision
- ☐ Ask about past behavior ("Tell me about the last time..."), not hypothetical future ("Would you...?")
- ☐ Follow up on emotional language ("You said that was frustrating. Tell me more about that.")
- ☐ Embrace silence. Count to 5 after they finish before asking the next question
- ☐ Do not ask "Would you use feature X?" (people cannot predict their own behavior)
4. Question Bank
Warm-Up Questions
- "Tell me about your role. What does a typical week look like for you?"
- "How long have you been in this position? What were you doing before?"
- "What are the biggest challenges your team is facing right now?"
Context Questions
- "Walk me through how your team currently handles [process related to your product]."
- "How many people are involved in [process]? What are their roles?"
- "What tools do you use for [process] today? How did you end up with those tools?"
- "If you could change one thing about how [process] works today, what would it be?"
Discovery Questions (Past Behavior)
- "Tell me about the last time [problem] happened. Walk me through what you did."
- "When did you first realize [problem] was an issue? What triggered that realization?"
- "What have you tried to solve this problem? What worked and what did not?"
- "How much time does your team spend on [problem area] each week?"
- "What happens when [problem] is not addressed? What are the consequences?"
Decision-Making Questions
- "Walk me through the last time you evaluated a tool or solution for [area]. How did that process work?"
- "Who else was involved in that decision? What did each person care about?"
- "What criteria mattered most when you were evaluating options?"
- "Was there a specific moment when you decided to move forward (or not)? What triggered that?"
- "What would have made that decision easier or faster?"
Value and Outcomes Questions
- "If [problem] were completely solved, what would change for you personally? For your team?"
- "How would you measure whether a solution is working? What metrics would you look at?"
- "What would make you recommend a solution to a colleague in your position?"
Switching and Competition Questions
- "Have you looked at alternatives to what you use today? What did you think of them?"
- "What would need to be true for you to switch away from your current approach?"
- "What is the biggest barrier to changing how you do [process] today?"
Forward-Looking Questions (Use Sparingly)
- "What concerns would you have about changing your current approach?"
- "If I could wave a magic wand and fix one thing about [process], what would it be?"
5. Stakeholder-Specific Question Guides
For Executive Buyers (VP+)
Focus: Business outcomes, ROI, strategic priorities.
- "What are your top 3 priorities for [department] this year?"
- "How do you measure success for [process area]? What numbers do you report to the board?"
- "What would solving [problem] mean for your ability to hit your goals this year?"
- "When you evaluate tools, what matters most: speed of deployment, total cost, or depth of capability?"
- "Who on your team would own the implementation if you moved forward?"
For Technical Evaluators (Engineers, Architects)
Focus: Integration, security, scalability, implementation effort.
- "Walk me through your current architecture for [area]. Where does [process] sit?"
- "What integrations are essential? Which tools must it connect with?"
- "What are your security and compliance requirements? SOC 2, GDPR, specific industry standards?"
- "If you were evaluating a tool for [area], what would you test first?"
- "What is the most common technical reason you have rejected a tool in the past?"
For End Users (Day-to-Day Operators)
Focus: Workflow, usability, daily pain points.
- "Show me how you do [task] today. Walk me through it step by step."
- "What is the most annoying part of [task]? Why?"
- "How often do you do [task]? How long does it take each time?"
- "What workarounds have you created to deal with limitations in your current tools?"
- "If you had an extra hour in your day from not having to deal with [problem], what would you do with it?"
6. Note-Taking Framework
During the interview, capture notes in this structure.
| Timestamp | Quote or Observation | Category | Signal Strength |
|---|---|---|---|
| [MM:SS] | [Direct quote or paraphrased observation] | [Pain / Need / Behavior / Context / Surprise] | [Strong / Moderate / Weak] |
Signal strength guide.
| Strength | Criteria |
|---|---|
| Strong | Participant described a specific past event with emotional language and detailed recall |
| Moderate | Participant described a general pattern ("We usually..." / "It often...") |
| Weak | Participant speculated about hypothetical behavior ("I would probably...") |
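If your team captures notes digitally, the note-taking table maps naturally onto a small data structure. A minimal Python sketch, with illustrative field values that are not part of the template:

```python
from dataclasses import dataclass

@dataclass
class Note:
    """One row of the note-taking table."""
    timestamp: str   # "MM:SS" into the recording
    text: str        # direct quote or paraphrased observation
    category: str    # Pain / Need / Behavior / Context / Surprise
    strength: str    # Strong / Moderate / Weak

def strong_signals(notes):
    """Keep only notes backed by specific past events (Strong signals)."""
    return [n for n in notes if n.strength == "Strong"]

# Hypothetical notes from one interview:
notes = [
    Note("04:12", "I spend 2 hours every Monday copying data", "Pain", "Strong"),
    Note("11:30", "I would probably use an export button", "Need", "Weak"),
]
print(len(strong_signals(notes)))  # → 1
```

Filtering to strong signals before synthesis keeps speculative "I would probably..." statements from being weighted the same as detailed accounts of past behavior.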
7. Synthesis Framework
After completing all interviews, synthesize using this structure.
Pattern Identification
| Pattern | Frequency | Example Quotes | Segments Affected |
|---|---|---|---|
| [e.g., Manual data entry is the #1 time sink] | [8/12 participants] | ["I spend 2 hours every Monday copying data between spreadsheets"] | [End users, SMB and mid-market] |
| [e.g., Buyers cannot quantify the cost of the current process] | [6/12] | ["I know it's expensive but I can't put a number on it"] | [Executive buyers] |
| [Pattern] | [X/12] | [Quote] | [Segments] |
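The frequency column should count distinct participants, not total mentions, so one talkative interviewee cannot inflate a pattern. A sketch of that tally, assuming notes have been tagged with a pattern label per participant (the tags and IDs below are hypothetical):

```python
from collections import Counter

# Hypothetical (participant_id, pattern_tag) pairs pulled from each
# interview's tagged note sheet.
tagged = [
    ("p1", "manual-data-entry"), ("p2", "manual-data-entry"),
    ("p1", "cost-invisible"),    ("p3", "manual-data-entry"),
    ("p1", "manual-data-entry"),  # repeat mention by p1, counted once
]

def pattern_frequency(tagged_notes):
    """Count distinct participants per pattern (deduplicate repeat
    mentions by the same participant before counting)."""
    seen = {(pid, tag) for pid, tag in tagged_notes}
    return Counter(tag for _, tag in seen)

freq = pattern_frequency(tagged)
print(freq["manual-data-entry"])  # → 3 (three distinct participants)
```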
Hypothesis Validation
| Hypothesis | Supported? | Evidence | Confidence |
|---|---|---|---|
| [Hypothesis 1] | Yes / No / Partially | [Summary of evidence] | [High / Medium / Low] |
| [Hypothesis 2] | [Result] | [Evidence] | [Confidence] |
Key Insights
- Insight 1: [Clear statement of what you learned, with evidence count]
- Insight 2: [Insight]
- Insight 3: [Insight]
Recommended Actions
| Action | Based On | Impact | Effort | Priority |
|---|---|---|---|---|
| [e.g., Simplify onboarding flow] | [Insight 1, 8/12 participants] | [High] | [Medium] | [P1] |
| [Action 2] | [Insight] | [Impact] | [Effort] | [Priority] |
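One way to order the actions table is to convert the Impact and Effort labels into a sortable score. The weights below are an illustrative assumption, not part of the template (a RICE score, mentioned in the introduction, is a more complete alternative):

```python
# Hypothetical impact-over-effort scoring for the recommended actions
# table. Higher score = do first. The 3/2/1 scale is illustrative.
SCALE = {"High": 3, "Medium": 2, "Low": 1}

def score(action):
    """Simple impact-over-effort ratio."""
    return SCALE[action["impact"]] / SCALE[action["effort"]]

actions = [
    {"name": "Simplify onboarding flow", "impact": "High", "effort": "Medium"},
    {"name": "Rebuild reporting module", "impact": "Medium", "effort": "High"},
]
ranked = sorted(actions, key=score, reverse=True)
print(ranked[0]["name"])  # → Simplify onboarding flow
```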
Filled Example: TaskFlow Project Management Tool
Research Plan
Research question: Why do mid-market customers (50-200 employees) stop expanding beyond the initial team after successful onboarding?
Hypotheses: (1) The product works well for one team but does not support cross-team workflows. (2) The buyer does not have visibility into other teams' needs. (3) IT has concerns about proliferating SaaS tools.
Key Findings (after 10 interviews)
Pattern 1 (8/10): Cross-team expansion stalls because each team has different workflow requirements, and TaskFlow's templates are too rigid to accommodate them. Teams outside the initial deployment try it, find it does not match their process, and revert to their existing tools.
Pattern 2 (6/10): The champion who bought TaskFlow does not have relationships with leaders in other departments. Expansion requires a new champion in each department, and there is no mechanism to create them.
Pattern 3 (5/10): IT prefers to standardize on one project management tool, but TaskFlow's admin controls (SSO, user provisioning, audit logs) are insufficient for company-wide deployment.
Recommended actions: (1) Build customizable workflow templates per team type (P1, engineering-led). (2) Create a "champion kit" that the original buyer can share with peers in other departments (P1, PMM-led). (3) Invest in enterprise admin features: SSO, SCIM, audit logs (P2, engineering-led).
Key Takeaways
- Define your research question before scheduling a single interview. "Learn about customers" is not a research question
- Ask about past behavior, not hypothetical future. "Tell me about the last time..." produces reliable data. "Would you use..." does not
- Interview both happy and unhappy customers. If you only talk to fans, you learn nothing new
- Synthesize patterns across interviews, not just individual stories. One person's complaint is an anecdote. Eight people's complaint is a pattern
- Connect every insight to a specific product decision. Insights that do not lead to action are wasted effort
About This Template
Created by: Tim Adair
Last Updated: 3/5/2026
Version: 1.0.0
License: Free for personal and commercial use
