
Minimum Viable Product (MVP)

Definition

A minimum viable product (MVP) is the smallest version of a product that can be released to test a key business hypothesis with real users. The concept was popularized by Eric Ries in The Lean Startup, building on Steve Blank's customer development methodology. An MVP is not a half-baked product or a feature-incomplete version of the final vision. It is a deliberate experiment designed to maximize validated learning with minimum effort.

The key insight behind MVPs is that most product failures are not engineering failures. They are market failures. Teams build products nobody wants. The MVP approach forces teams to test the riskiest assumptions before investing months of development. The Product Discovery Handbook covers how discovery practices feed into MVP scoping, and the MVP roadmap template provides a planning format for teams building their first version. For founders evaluating ideas before building, the Founder Fit Assessment helps validate whether the idea aligns with your skills and market.

Why It Matters for Product Managers

MVPs are the PM's primary tool for managing uncertainty. In early-stage products, almost every assumption is unvalidated: the problem exists, users will pay, the solution works, the market is big enough. Building a full product based on unvalidated assumptions is the most expensive mistake a PM can make.

MVPs matter for three reasons. First, they reduce the cost of being wrong. A 4-week MVP that invalidates a hypothesis saves 6-12 months of wasted development. That is not just time saved. It is opportunity cost recovered: the team can pursue better ideas sooner.

Second, MVPs create evidence for decisions. Instead of debating whether users want feature X, the team can build a thin version and measure actual behavior. Data from real users is more persuasive than any amount of internal speculation. This evidence feeds directly into prioritization decisions for the roadmap.

Third, MVPs build organizational learning muscle. Teams that ship MVPs regularly develop faster iteration cycles, better customer intuition, and higher tolerance for experimentation. This compounds: a team that runs 12 experiments per year learns 12x faster than one that ships a single big release.

How It Works in Practice

Building and testing an MVP involves seven stages:

  1. Identify the riskiest assumption. Every product idea rests on multiple assumptions: the problem is real, users will switch from their current solution, the product can deliver value at a sustainable cost. List all assumptions, then rank by risk (probability of being wrong multiplied by impact if wrong). The assumption that tops this list is what the MVP should test. The RICE Calculator can help rank which assumptions to test first.
  2. Define the hypothesis. Write a testable statement: "We believe [target user] will [take specific action] because [reason], and we will know this is true when [measurable outcome]." Vague hypotheses like "users will like the product" are untestable. Specific ones like "30% of freelance designers who see the landing page will sign up for the waitlist" are actionable.
  3. Choose the right MVP type. Match the MVP type to the hypothesis:
     - Landing page: tests demand before building anything
     - Wizard of Oz: appears automated but is manually fulfilled behind the scenes
     - Concierge: delivers the service manually to a small group
     - Single-feature: one core feature built end-to-end
     - Piecemeal: assembled from existing tools (Typeform + Zapier + Airtable)
  4. Scope to the core value. Cut every feature that does not directly test the hypothesis. No user settings, no admin panel, no edge-case handling, no polish. If the core value proposition does not resonate with a rough version, a polished version will not save it.
  5. Set success criteria before building. Define what constitutes a pass, fail, or ambiguous result. For example: "15%+ signup rate = proceed, below 5% = pivot, 5-15% = iterate on messaging." Deciding criteria after seeing results introduces confirmation bias.
  6. Build within a timebox. Set a hard deadline (2-6 weeks for most software MVPs). If the scope does not fit the timebox, cut scope, not time. The sprint planning guide covers how to estimate and commit to realistic delivery targets.
  7. Measure, learn, decide. Collect quantitative data against success criteria. Conduct 5-10 user interviews to understand the "why" behind the numbers. Then decide: iterate (the hypothesis shows promise but execution needs work), pivot (the hypothesis is invalidated), or scale (the hypothesis is validated and the product is ready for growth). Document learnings regardless of outcome.
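The ranking in stage 1 and the pre-set decision rule in stage 5 can be sketched in a few lines of Python. This is an illustrative sketch, not a prescribed tool; the assumption names, probabilities, impact scores, and threshold values are all hypothetical examples.

```python
# Hypothetical sketch: rank assumptions by risk score and classify an
# MVP result against decision thresholds set BEFORE the test runs.

def rank_assumptions(assumptions):
    """Sort by risk = P(being wrong) * impact if wrong, riskiest first."""
    return sorted(assumptions, key=lambda a: a["p_wrong"] * a["impact"], reverse=True)

def classify_result(observed_rate, pass_bar=0.15, fail_bar=0.05):
    """Apply success criteria defined before seeing the data."""
    if observed_rate >= pass_bar:
        return "proceed"
    if observed_rate < fail_bar:
        return "pivot"
    return "iterate"

# Example assumption list (invented figures for illustration only)
assumptions = [
    {"name": "users will pay",        "p_wrong": 0.6, "impact": 9},
    {"name": "problem exists",        "p_wrong": 0.3, "impact": 10},
    {"name": "solution is feasible",  "p_wrong": 0.2, "impact": 7},
]

riskiest = rank_assumptions(assumptions)[0]
print(riskiest["name"])       # "users will pay" (0.6 * 9 = 5.4 tops the list)
print(classify_result(0.08))  # "iterate" (falls between the 5% and 15% bars)
```

Writing `classify_result` before launch is the point: the thresholds are committed to in code, so the team cannot quietly move the bar after seeing a 3% conversion rate.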

Implementation Checklist

  • List all assumptions underlying the product idea (problem, user, value, market size, willingness to pay)
  • Rank assumptions by risk (likelihood of being wrong × impact if wrong)
  • Write a testable hypothesis for the riskiest assumption
  • Choose an MVP type that matches the hypothesis (landing page, Wizard of Oz, concierge, single-feature, piecemeal)
  • Define success, failure, and ambiguous result thresholds before building
  • Set a timebox (2-6 weeks) and commit to shipping within it
  • Cut every feature that does not directly test the hypothesis
  • Identify 20-30 target users to recruit for the test
  • Build instrumentation to measure the success criteria (analytics, surveys, interviews)
  • Launch to the target audience and collect data for at least 1-2 weeks
  • Conduct 5-10 user interviews to understand the data
  • Document the learnings and make an explicit iterate/pivot/scale decision

Common Mistakes

1. Building too much

The most common MVP failure is scope creep. Teams add "just one more feature" because it "only takes a few days." Each addition delays learning and increases sunk cost bias. The MVP becomes a V1 that took 4 months instead of 4 weeks. Strict timeboxing and a single-hypothesis focus prevent this.

2. Not defining success criteria before launch

Without pre-defined metrics, teams interpret ambiguous results as positive because they want the idea to work. A 3% conversion rate feels like validation when you are excited about the product. It feels like failure when you defined 10% as the bar beforehand. Always set the bar before you see the data.

3. Confusing MVP with "bad product"

An MVP is not permission to ship broken software. It is permission to ship incomplete software. The core feature must work well. Users should have a good experience with the thing the MVP does, even if it does not do much. A landing page MVP should look professional. A single-feature MVP should have that one feature working reliably.

4. Testing with the wrong audience

An MVP tested on friends, family, or early adopter enthusiasts produces false positives. Early adopters will tolerate anything novel. The real test is whether the early majority (pragmatic buyers who need the product to work reliably) find value. Recruit test users who match the actual target persona, not just anyone willing to try.

5. Skipping the "learn" step

Teams build the MVP, look at the signup numbers, and immediately start building V2. The learning step requires talking to users: why did some sign up? Why did others bounce? What did they expect versus what they got? The qualitative data often matters more than the quantitative data for deciding what to do next.

6. One-and-done MVP thinking

An MVP is not a single event. It is the start of a build-measure-learn loop that continues throughout the product lifecycle. Teams that treat the MVP as a phase ("we did our MVP, now we are in growth mode") miss the point. The MVP mindset of testing assumptions before investing applies to every major product decision, not just the initial launch.

Measuring Success

Track these metrics to evaluate whether your MVP process is effective:

  • Hypothesis validation rate. What percentage of MVP tests produce a clear pass or fail (not ambiguous)? Higher clarity means better hypothesis design. Target: 70%+ of tests produce actionable results.
  • Time from idea to test. How long from identifying a hypothesis to having real user data? Shorter is better. Target: 2-6 weeks for software MVPs.
  • Cost per learning. Total investment (engineering time, marketing spend, opportunity cost) divided by the number of validated or invalidated hypotheses. Lower is better. Track this to justify MVP investment to leadership.
  • Pivot rate. What percentage of MVPs result in a pivot decision? Too low (under 20%) suggests the team is only testing safe hypotheses. Too high (over 60%) suggests poor problem selection.
  • PMF progression. Track the Sean Ellis survey score across successive MVP iterations. It should trend upward as the team iterates toward fit. Use the PMF Calculator to evaluate each iteration.
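The five process metrics above can be computed from a simple log of completed tests. The sketch below uses invented figures and field names purely for illustration; the Sean Ellis score is the share of surveyed users who would be "very disappointed" if the product went away (40%+ is the conventional fit signal).

```python
# Hypothetical sketch: computing MVP process metrics from a test log.
# All data below is made up for illustration.

tests = [
    {"outcome": "validated",   "decision": "scale",   "cost": 12_000},
    {"outcome": "invalidated", "decision": "pivot",   "cost": 8_000},
    {"outcome": "ambiguous",   "decision": "iterate", "cost": 5_000},
    {"outcome": "invalidated", "decision": "pivot",   "cost": 9_000},
]

# Hypothesis validation rate: share of tests with a clear pass/fail (target 70%+)
clear = [t for t in tests if t["outcome"] != "ambiguous"]
validation_rate = len(clear) / len(tests)

# Cost per learning: total spend divided by validated/invalidated hypotheses
cost_per_learning = sum(t["cost"] for t in tests) / len(clear)

# Pivot rate: healthy range is roughly 20-60% of tests
pivot_rate = sum(t["decision"] == "pivot" for t in tests) / len(tests)

# Sean Ellis signal: "How disappointed would you be if you could no
# longer use the product?" — count the "very disappointed" responses.
responses = ["very", "somewhat", "very", "not", "very"]
pmf_score = responses.count("very") / len(responses)

print(validation_rate)  # 0.75
print(pivot_rate)       # 0.5
print(pmf_score)        # 0.6
```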

Related Concepts

Lean Startup is the methodology that popularized MVPs and the build-measure-learn loop. Customer Development is Steve Blank's framework for systematic hypothesis testing that precedes and informs MVP design. Product-Market Fit is the state an MVP aims to achieve or validate. Fake Door Test is a specific MVP technique that tests demand by showing a feature that does not yet exist. Prototype is a design artifact used to test usability, while an MVP tests business viability. Minimum Lovable Product (MLP) extends the MVP concept by adding a minimum bar for user delight, useful in competitive markets where viability alone is insufficient.


Frequently Asked Questions

What is a minimum viable product?
A minimum viable product (MVP) is the smallest version of a product that can be released to test a key business hypothesis with real users. The term was popularized by Eric Ries in The Lean Startup. An MVP is not a half-baked product or a prototype. It is a deliberate experiment designed to maximize learning about customers with the least effort. The goal is to validate (or invalidate) the riskiest assumption before investing in full development.
What is the difference between an MVP and a prototype?
A prototype is a model used to explore and communicate a design concept, typically tested internally or with a small group. An MVP is a real product released to real users to test a business hypothesis. Prototypes test usability and desirability. MVPs test viability and demand. A prototype might never be released. An MVP is always released, even if to a small audience.
What are the different types of MVPs?
Common MVP types include: landing page MVP (tests demand with a signup form before building anything), Wizard of Oz MVP (users interact with what appears to be a real product but the backend is manual), concierge MVP (the service is delivered manually to a handful of users), single-feature MVP (one core feature built end-to-end), and piecemeal MVP (assembled from existing tools like Typeform, Zapier, and Airtable). Each type matches a different risk level and learning objective.
How long should it take to build an MVP?
Most software MVPs should take 2-6 weeks. If it takes longer, the scope is too broad. The point of an MVP is speed of learning, not completeness. Hardware MVPs take longer (8-16 weeks) due to physical prototyping constraints. A useful rule: if you are not embarrassed by the first version, you launched too late.
What is the biggest mistake teams make with MVPs?
The most common mistake is building too much. Teams add features 'just in case' or polish the UI before validating the core value proposition. The second biggest mistake is not defining success criteria before launch. Without pre-defined metrics, teams interpret ambiguous results as validation because they want the idea to work.
How do you decide what to include in an MVP?
Start with the riskiest assumption and include only what is needed to test it. Ask for each feature: 'If we remove this, can we still test the hypothesis?' If yes, cut it. A useful exercise is to list every feature, rank them by necessity for the hypothesis test, and draw a line at the minimum set. Everything below the line is deferred.
What is the difference between MVP and MLP (Minimum Lovable Product)?
An MVP focuses on testing whether the product solves a real problem. An MLP (Minimum Lovable Product) goes further by ensuring the first version also delivers a delightful experience. The MLP concept argues that in competitive markets, merely viable is not enough to retain users. In practice, most teams start with MVP thinking for initial validation and apply MLP thinking when refining for retention.
How do you measure MVP success?
Measure against the pre-defined success criteria for your hypothesis. Common MVP metrics include: signup or waitlist conversion rate (demand validation), activation rate (value delivery), D7 retention (problem-solution fit), willingness to pay (revenue model validation), and the Sean Ellis survey (overall product-market fit signal). The specific metric depends on what your MVP is testing.
Should you charge for an MVP?
If your hypothesis involves willingness to pay, then yes. Charging is the strongest validation signal. Users will sign up for free products they never use, but they will only pay for products that solve a real problem. Even a small charge ($5-10/month) separates genuine demand from casual interest. If your hypothesis is about something else (demand, usability), free is fine for the first test.
When should you pivot vs iterate on an MVP?
Iterate when the core hypothesis shows promise but execution needs improvement (e.g., users love the concept but churn due to poor onboarding). Pivot when the core hypothesis is invalidated (e.g., the problem is not painful enough, or the target audience does not exist in sufficient numbers). Set clear pivot criteria in advance: 'If metric X stays below Y after Z iterations, we pivot.' The decision framework in the when-to-pivot guide covers this in detail.