
Customer Interviews Gone Wrong: 8 Mistakes That Poison Your Data

Specific anti-patterns in customer interviews that lead to wrong conclusions, with before-and-after examples of bad versus good questions.

By Tim Adair • Published 2025-10-24 • Last updated 2026-02-12

Last year I reviewed the research notes from a PM who was convinced their product needed a Gantt chart feature. Their evidence: 11 out of 15 customer interviews mentioned wanting Gantt charts. Slam dunk, right?

I read the interview transcripts. Every single one contained some variation of the question: "Would you find a Gantt chart feature useful?" Eleven people said yes. Of course they did. If you ask someone whether a free feature would be useful, the polite answer is always yes.

When we re-ran the interviews without mentioning Gantt charts and instead asked "Walk me through how you plan and track work across teams," zero people described a need for Gantt charts. They described needing better status visibility, dependency tracking, and deadline notifications. The solution space was much wider than Gantt charts.

Those first 15 interviews were not just unhelpful. They were actively harmful. They created false confidence in the wrong direction. Here are eight mistakes that produce this kind of poisoned data, and how to avoid each one.

Mistake 1: Leading Questions

The problem: You frame the question in a way that suggests the answer you want to hear.

Bad: "Don't you think it would be easier if you could filter reports by date range?"

Good: "Walk me through what happens when you need to analyze data from a specific time period."

Why it matters: People are naturally agreeable in conversation, especially when talking to someone from the company whose product they use. A leading question gives them a socially easy path to "yes" regardless of whether they actually feel that way.

The fix: Eliminate opinion words ("easier," "better," "useful") from your questions. Ask about behaviors, not preferences. Past behavior is the most reliable predictor of future behavior, and it is much harder to fabricate on the spot.

Mistake 2: Asking About the Future

The problem: You ask people what they would do in hypothetical scenarios.

Bad: "If we built a mobile app, how often would you use it?"

Good: "When was the last time you needed to access [product] and were not at your desk? What happened?"

Why it matters: Humans are terrible at predicting their own future behavior. Research in behavioral economics, notably Daniel Kahneman's work on cognitive biases, has shown this repeatedly. People overestimate how much they would use new features by 3-5x. They say they would pay for something they would actually never buy. They imagine a disciplined future self that does not exist.

Rob Fitzpatrick's The Mom Test nails this: never ask someone if they would use your product or feature. Ask about their actual past behavior. The past is data. The future is fiction.

Mistake 3: Confirmation Bias Framing

The problem: You only talk to people who support your hypothesis, or you only hear the things that confirm what you already believe.

Bad approach: Interview 15 users who requested the feature, conclude everyone wants it.

Good approach: Interview 5 users who requested it, 5 who did not, and 5 who churned. Compare their needs.

Why it matters: If your interview sample consists only of people who already expressed interest in a feature, you are measuring demand within a self-selected group. That is like polling people at a pizza restaurant about whether they like pizza.

The most valuable interviews are with people who do not want what you are building. They will tell you about the problems you are not solving and the alternatives they are using instead.

Mistake 4: The Product Demo Disguised as Research

The problem: You show the user your prototype or mockup and ask them what they think. What you call "research" is actually a feedback session on a predetermined solution.

Bad: "Here is our new dashboard design. What do you think?"

Good: "How do you currently track [metric]? Show me the tools and reports you use."

Why it matters: The moment you show a solution, you anchor the conversation to that solution. The user evaluates what you showed them instead of describing their actual needs. You lose the opportunity to discover that the real problem is upstream of what you designed.

There is a place for prototype testing. But it comes after discovery, not instead of it. Use customer development conversations to understand the problem space first. Then test solutions with separate usability studies.

Mistake 5: Insufficient Silence

The problem: You fill every pause with your own talking. The user starts to answer, pauses to think, and you jump in with a follow-up or a reframe.

Bad: User pauses for 3 seconds. You: "Or maybe you could describe it another way?"

Good: User pauses for 3 seconds. You: [Wait 7 more seconds in silence.]

Why it matters: The most honest, revealing answers come after the easy answers run out. A user's first response is often the socially acceptable answer. Their second response, after an uncomfortable pause, is closer to the truth. If you fill the silence, you never get the second response.

A practical rule: after the user finishes speaking, count to five silently before asking your next question. In those five seconds, about 40% of the time, they will add something more honest or more specific than their initial answer.

Mistake 6: Asking "Why?" Too Directly

The problem: You ask "why" and get a rationalized, post-hoc explanation instead of the real reason.

Bad: "Why did you stop using the reporting feature?"

Good: "Tell me about the last time you needed a report. What happened, step by step?"

Why it matters: When you ask someone why they did something, they construct a logical narrative that may have nothing to do with the actual reason. "I stopped using reporting because it was too complex" sounds reasonable. The real reason might be that their manager stopped asking for reports, or they found a workaround in a spreadsheet, or they forgot the feature existed.

The "why" is hidden in the "what happened." Ask people to walk you through specific incidents chronologically. The reasons emerge from the story without the user needing to self-analyze.

This approach aligns with the Jobs to Be Done framework, which focuses on the circumstances that led to a decision rather than abstract preferences.

Mistake 7: One-Size-Fits-All Interview Script

The problem: You use the same questions for every user regardless of their role, experience level, or relationship with your product.

Bad approach: Same 10 questions for a power user who has been on the platform for 3 years and a new user who signed up last week.

Good approach: A core set of 3-4 questions, with branches that adapt based on user type and what they share.

Why it matters: A power user and a new user have fundamentally different perspectives on your product. The power user can tell you about advanced workflows, missing features, and long-term value. The new user can tell you about first impressions, onboarding friction, and time-to-value. Asking both groups the same questions wastes half the interview.

The fix is to have a research question (what you want to learn) that stays constant, but let the interview questions (what you ask) vary by participant. Prepare 2-3 question variants for each segment and decide which to use based on who you are talking to.

Mistake 8: Not Triangulating

The problem: You treat interview data as ground truth without cross-referencing it against behavioral data or other research methods.

Bad approach: "8 out of 10 users said they want feature X, so we are building it."

Good approach: "8 out of 10 users said they want feature X. Usage data shows only 2 of those 8 have tried the existing workaround. Let's dig deeper into what they actually need."

Why it matters: What people say and what people do are often different. This is not because people are dishonest. It is because self-report is an inherently limited data collection method. People forget, rationalize, and unconsciously present themselves in a favorable light.

The fix is to combine qualitative research with quantitative signals. For the methods and frameworks to do this well, see the user research methods guide. Use interviews to generate hypotheses and behavioral data to validate them. Or use behavioral data to identify patterns and interviews to understand the mechanisms behind them. Before triangulating, you also need to know how to distinguish signal from noise in the raw feedback itself; the guide to what good product feedback actually looks like covers how to weight different sources and build a system that surfaces real problems.
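The cross-check in the good approach above is simple enough to automate: join the list of interviewees who claimed to want the feature against product analytics showing who actually tried the existing workaround. A minimal Python sketch; the user IDs, the claim map, and the workaround set are all invented for illustration, not pulled from a real system:

```python
# Hypothetical data: interview findings vs. usage logs.
interview_claims = {
    "u1": True, "u2": True, "u3": True, "u4": True,
    "u5": True, "u6": True, "u7": True, "u8": True,
    "u9": False, "u10": False,
}  # user_id -> said they want feature X in the interview

# From usage logs: users who actually tried the existing workaround.
tried_workaround = {"u1", "u3"}

claimants = {u for u, wants in interview_claims.items() if wants}
validated = claimants & tried_workaround    # claim backed by behavior
unvalidated = claimants - tried_workaround  # claim with no behavioral signal

print(f"{len(claimants)}/{len(interview_claims)} interviewees want the feature,")
print(f"but only {len(validated)} have tried the existing workaround.")
print(f"Dig deeper with: {sorted(unvalidated)}")
```

The unvalidated group is not lying; they are the people whose stated want has no behavioral evidence yet, which makes them the right candidates for follow-up interviews rather than a reason to start building.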

A Better Interview Structure

Based on these eight mistakes, here is a structure that avoids the worst traps:

Opening (2 minutes): Explain the purpose (learning, not selling), get permission to record, establish that there are no wrong answers.

Context (5 minutes): Understand their role, their goals, and their current tools. No product-specific questions yet.

Stories (15 minutes): Ask them to walk through specific recent incidents related to the problem space. "Tell me about the last time you needed to [do the thing your product helps with]. What happened?" Follow the story with "and then what?" and "what did you do about that?"

Deeper probing (10 minutes): Based on what they shared, dig into the moments of friction, frustration, or workaround. "You mentioned you exported to a spreadsheet. How often do you do that? What would happen if you couldn't?"

Open space (5 minutes): "Is there anything else about [problem space] that I should have asked about but didn't?"

Close (3 minutes): Thank them. Ask if they know anyone else you should talk to (snowball sampling).

Notice what is missing: no questions about your product, no prototypes, no "would you use" hypotheticals. Those belong in separate research activities.

How Many Interviews Are Enough?

The common question: "How many interviews do I need before I can act?"

The honest answer: it depends on what you are deciding. For a low-stakes feature tweak, 5 interviews give you enough directional signal. For a major product pivot, 20-30 interviews with diverse segments are the minimum.

A useful heuristic: stop interviewing when you start hearing the same themes repeated by new participants. In research methodology this is called "saturation." For most product questions, you will reach saturation between 8 and 15 interviews. If you are at interview 20 and still hearing novel insights, your sample is too homogeneous. You need to broaden your participant criteria.
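The saturation heuristic above can be made concrete: tag each interview with the themes it raised, and flag saturation when the last few interviews introduced nothing new. A sketch in Python; the theme labels and the window size of three are illustrative assumptions, not a standard from the research literature:

```python
def saturated(theme_sets, window=3):
    """True if the last `window` interviews introduced no new themes."""
    seen, new_counts = set(), []
    for themes in theme_sets:
        new_counts.append(len(themes - seen))  # themes not heard before
        seen |= themes
    return len(new_counts) >= window and all(c == 0 for c in new_counts[-window:])

# Example: themes tagged per interview, in the order conducted.
theme_sets = [
    {"status visibility", "deadline alerts"},
    {"dependency tracking"},
    {"status visibility", "deadline alerts"},
    {"dependency tracking", "status visibility"},
    {"deadline alerts"},
]

print(saturated(theme_sets))      # True: last three interviews added nothing new
print(saturated(theme_sets[:3]))  # False: too early to tell
```

The hard part in practice is consistent tagging, not the arithmetic; if two note-takers label the same complaint differently, the counter will keep reporting "new" themes long past actual saturation.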

One practical note: schedule interviews in batches of 5. After each batch, review your notes and adjust your questions based on what you have learned. This iterative approach is more efficient than scheduling 15 interviews with a static script.

The Hardest Part

The hardest part of customer interviews is not asking the right questions. It is hearing answers you do not want to hear and updating your beliefs accordingly.

If every interview confirms what you already thought, either you have perfect product intuition or (more likely) you are unconsciously steering the conversation. The best interviews leave you slightly uncomfortable. They challenge assumptions, reveal needs you did not anticipate, and occasionally demolish the feature idea you were excited about.

That discomfort is the point. It means you are learning.

Tim Adair

Strategic executive leader and author of all content on IdeaPlan. Background in product management, organizational development, and AI product strategy.
