Quick Answer (TL;DR)
A minimum viable product (MVP) is the smallest version of a product that delivers enough value to attract early users and generate real feedback. The concept comes from Eric Ries's Lean Startup methodology: build the simplest thing that tests your riskiest assumption, measure what happens, and iterate. An MVP is not a half-baked product. It is a focused product that does one thing well enough that people will use it over their current workaround.
What Is a Minimum Viable Product?
The term "minimum viable product" was popularized by Eric Ries in The Lean Startup (2011), though Frank Robinson coined it in 2001. The core idea is simple: instead of spending months building a full product based on assumptions, ship something small and learn from real users as fast as possible.
The "minimum" part means you build only what is necessary to test your hypothesis. The "viable" part means it still needs to work well enough that someone would actually use it. These two constraints create productive tension. Too minimal and nobody cares. Too polished and you wasted time building features nobody wanted.
A good MVP answers one question: Does anyone actually want this?
This is distinct from a prototype, which answers "Could this work technically?" and from a proof of concept, which answers "Is this feasible?" The MVP answers the market question.
For a deeper look at where MVPs fit within the broader discipline, see our guide to product management.
A Brief History of the MVP
Before the Lean Startup movement, most software companies followed waterfall development. Teams would spend 12-18 months building a product, launch it, and hope for the best. Failure rates were staggering. CB Insights data consistently shows that "no market need" is the top reason startups fail, accounting for roughly 35% of post-mortems.
The MVP concept emerged as a direct response to this waste. Ries, building on Steve Blank's Customer Development framework and Toyota's lean manufacturing principles, argued that startups are not smaller versions of large companies. They are organizations searching for a repeatable, scalable business model. The fastest way to find that model is to run cheap experiments.
The Build-Measure-Learn loop became the operating system for an entire generation of startups. Build something small. Measure whether people use it. Learn what to do next. Repeat.
Types of MVPs
Not every MVP requires writing code. The right type depends on what you are trying to learn.
Landing Page MVP (Smoke Test)
Create a landing page that describes your product and includes a signup form or pre-order button. If people sign up, you have evidence of demand. Buffer famously tested pricing this way before writing a single line of application code. Joel Gascoigne put up a landing page describing the product, added a pricing page behind it, and measured how many people clicked through to see prices. That was enough signal to start building.
Use the TAM Calculator to estimate whether the market you are targeting is large enough before committing to a landing page test.
Concierge MVP
Deliver the value of your product manually, person by person, instead of building automation. Zappos did this: Nick Swinmurn photographed shoes at local stores, posted them online, and when someone ordered, he bought the shoes at retail and shipped them. No warehouse, no inventory system, no supplier relationships. Just a guy testing whether people would buy shoes online.
This approach works well when you need to understand the user workflow deeply before automating it.
Wizard of Oz MVP
Similar to the concierge approach, but the user thinks they are interacting with a real product. Behind the scenes, a human is doing the work manually. This is common in AI products where building the real model takes months. You can test whether users value the output of an "AI feature" by having a person produce the results while you gauge demand and willingness to pay.
The key difference from a concierge MVP: the user does not know a human is involved. They interact with what looks like a software product. This lets you test the full user experience, including whether the interface makes sense and whether users trust the output.
Single-Feature MVP
Build one feature and ship it. Not three features. Not "the platform." One thing, done well. This is the most common software MVP. Instagram launched as a photo-sharing app with filters. That was it. No stories, no reels, no shopping, no messaging. Just filtered photos.
When deciding which single feature to build, the Jobs to Be Done framework helps you identify the core job your user is hiring your product to do.
Piecemeal MVP
Assemble your MVP from existing tools and services without custom development. Use Typeform for intake, Zapier for automation, Airtable for a database, and Stripe for payments. You can test a surprising amount of product logic without writing code. This approach works best when speed matters more than a polished user experience.
Choosing the Right MVP Type
The type of MVP you choose depends on what you need to learn and what resources you have.
If your biggest risk is demand (will anyone want this?), use a landing page MVP. It is the cheapest and fastest option.
If your biggest risk is value delivery (can we actually solve this problem?), use a concierge or Wizard of Oz MVP. You will learn whether the solution works before investing in engineering.
If your biggest risk is usability (will people figure out how to use this?), build a single-feature MVP with a real interface. You need to observe people interacting with the actual product.
If you are not sure which risk is biggest, start with the cheapest option (landing page) and work your way up. You can always build more later. You cannot un-spend engineering months.
How to Build an MVP: A Step-by-Step Process
Step 1: Identify Your Riskiest Assumption
Every product idea rests on assumptions. Your MVP should test the assumption that, if wrong, kills the entire idea. Common risky assumptions include:
- Demand exists. Will people pay for this? Will they even sign up for free?
- The problem is painful enough. Do people care enough to switch from their current solution?
- We can deliver the value. Can we actually solve this problem at a price point that works?
Write your assumption as a falsifiable hypothesis: "We believe [target user] will [take action] because [reason]." If you cannot articulate this clearly, you are not ready to build.
Step 2: Define Success Criteria
Before building anything, decide what "success" looks like. Pick 2-3 metrics that will tell you whether your hypothesis is correct. Common MVP metrics include:
- Signup rate. What percentage of landing page visitors sign up?
- Activation rate. What percentage of signups complete the core action?
- Retention. Do users come back after the first session?
- Willingness to pay. Will users enter payment information?
Use the RICE Calculator to score and prioritize which features make the cut for your MVP scope.
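If you want to run the RICE math by hand rather than in a calculator, the standard formula is score = (Reach × Impact × Confidence) / Effort. The sketch below applies it to a hypothetical feature list; the feature names, scales, and numbers are illustrative assumptions, not recommendations from this guide.

```python
# RICE scoring sketch. Conventions assumed here: Reach in users per quarter,
# Impact on a 0.25-3 scale, Confidence as 0-1, Effort in person-weeks.
# All features and numbers below are hypothetical.

def rice_score(reach, impact, confidence, effort):
    """Score = (Reach * Impact * Confidence) / Effort."""
    return (reach * impact * confidence) / effort

candidates = [
    # (name, reach, impact, confidence, effort)
    ("photo upload", 500, 2.0, 0.8, 2),
    ("search",       300, 1.0, 0.5, 4),
    ("dark mode",    200, 0.5, 0.9, 1),
]

# Highest score first: these are the features that make the MVP cut.
ranked = sorted(candidates, key=lambda f: rice_score(*f[1:]), reverse=True)
for name, *params in ranked:
    print(f"{name}: {rice_score(*params):.1f}")
```

With these made-up inputs, "photo upload" wins by a wide margin, which is the point of the exercise: the formula makes the implicit trade-off between reach and effort explicit.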
Step 3: Scope Ruthlessly
List every feature you think the MVP needs. Then cut it in half. Then cut it in half again. You should be left with 3-5 features maximum. If the list is longer than that, you are building a V1, not an MVP.
A useful exercise: for each feature, ask "Can a user get value from the product without this?" If yes, cut it. You can always add it in the next iteration.
The MVP Roadmap Template provides a ready-made structure for planning your MVP phases, timelines, and success metrics.
Step 4: Build and Ship
Set a hard deadline. Two weeks is a good default for a software MVP. The deadline forces you to make scope decisions that you would otherwise defer. If a feature does not fit in the timeline, it does not ship in the MVP.
Ship to a small group first. Your initial users should be people who feel the problem acutely and will give you honest feedback. 10-50 users is plenty for an MVP. You are not optimizing for scale. You are optimizing for learning.
Step 5: Measure and Decide
After launch, resist the urge to immediately start building new features. Instead, watch the data. Talk to users. Look at the metrics you defined in Step 2.
Three outcomes are possible:
- Signal is strong. Users are signing up, activating, and returning. Invest more. Start building toward product-market fit.
- Signal is mixed. Some metrics hit, others did not. Dig into the qualitative data. Talk to the users who churned. Find out what is missing.
- Signal is weak. Almost nobody signed up, or those who did never came back. Time to decide whether to pivot or persevere.
Real MVP Examples
Dropbox: The Video MVP
Drew Houston struggled to market Dropbox because explaining it required people to understand file syncing, which most people in 2007 did not. Instead of building the full product and hoping users would figure it out, Houston recorded a 3-minute demo video showing how Dropbox worked. The video went viral on Hacker News, and the beta waiting list grew from 5,000 to 75,000 signups overnight. Houston had his demand signal without shipping a single feature to users.
What it tested. Whether people wanted seamless file syncing badly enough to sign up for a waitlist.
Airbnb: The Event MVP
Brian Chesky and Joe Gebbia could not afford rent. A design conference was coming to San Francisco, and hotels were sold out. They bought three air mattresses, put up a simple website (airbedandbreakfast.com), and listed their apartment. Three people booked. It was enough to prove that strangers would pay to sleep in someone else's home.
What it tested. Whether travelers would accept a non-hotel lodging option from a stranger.
Zappos: The Concierge MVP
Nick Swinmurn did not build an e-commerce platform. He took photos of shoes at local stores, posted them on a basic website, and fulfilled orders by buying shoes at retail. The MVP tested the core assumption: people will buy shoes online without trying them on first.
What it tested. Whether the convenience of online shopping outweighed the inability to try shoes on.
Buffer: The Pricing MVP
Joel Gascoigne wanted to know if people would pay for a social media scheduling tool. He built a two-page website. Page one described the product. Page two showed three pricing tiers. If you clicked a pricing tier, you landed on a "not ready yet, give us your email" form. People clicking a paid tier was a strong signal of willingness to pay.
What it tested. Not just demand (would people sign up?) but monetization (would people pay, and at what price point?).
Common MVP Mistakes
Mistake 1: Building Too Much
The most common failure mode. Teams rationalize feature after feature into the MVP scope. "We can't launch without search." "Users will expect dark mode." "We need an admin dashboard." Before long, the MVP is a six-month project and you still have not validated demand.
Fix: set a time constraint, not a feature constraint. Ship whatever you can build in 2-4 weeks.
Mistake 2: Minimum Viable but Not Minimum Quality
"Viable" does not mean buggy, slow, or confusing. Your MVP can have a limited feature set, but the features you ship need to work correctly and deliver real value. Users will forgive a missing feature. They will not forgive a broken one.
Mistake 3: No Success Metrics
Building and launching without defining what success looks like is just shipping for the sake of shipping. If you do not know what you are measuring, you cannot learn anything. Define your hypotheses and metrics before you write code.
Mistake 4: Wrong Audience
Launching your MVP to "everyone" guarantees weak signal. Early adopters are the only audience that matters for an MVP. These are people who feel the pain so acutely that they will tolerate a rough product to solve it. Find those people specifically.
The Founder Fit Assessment helps you evaluate whether your skills and resources match the market you are targeting.
Mistake 5: Ignoring the Data
Some founders build an MVP, get negative results, and push forward anyway because they "believe in the vision." MVPs are experiments. If the experiment fails, update your beliefs. That might mean pivoting the idea, changing the audience, or adjusting the value proposition. It should rarely mean ignoring the results.
Measuring MVP Success
Track these metrics from day one:
Acquisition. How many people visit your site or hear about your product? Track traffic sources to understand which channels work.
Activation. What percentage of visitors sign up? What percentage of signups complete the core action (upload a file, send a message, create a project)? Activation rate tells you whether people understand and get value from your product quickly.
Retention. Do users come back? Check day-1, day-7, and day-30 retention. If nobody returns after the first session, you have an activation or value problem.
Revenue (if applicable). Are users willing to pay? Even if you are not charging yet, you can test willingness to pay with pricing page click-throughs, "upgrade" button clicks, or direct questions in user interviews.
Referral. Are users telling others? Organic word-of-mouth is a strong signal that you are solving a real problem. Ask new signups how they heard about you.
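Each of these metrics reduces to a simple ratio over raw event counts. A minimal sketch, using hypothetical numbers purely for illustration:

```python
# Computing the MVP funnel metrics described above from raw counts.
# All numbers are hypothetical.

visitors = 1200        # acquisition: unique visitors to the site
signups = 180          # created an account
activated = 90         # completed the core action (e.g. uploaded a file)
retained_day7 = 36     # activated users who returned within 7 days
paid_clicks = 27       # clicked a pricing tier or "upgrade" button

signup_rate = signups / visitors               # visitors -> signups
activation_rate = activated / signups          # signups -> core action
day7_retention = retained_day7 / activated     # came back after day 1
willingness_to_pay = paid_clicks / activated   # showed purchase intent

for name, value in [
    ("signup rate", signup_rate),
    ("activation rate", activation_rate),
    ("day-7 retention", day7_retention),
    ("willingness to pay", willingness_to_pay),
]:
    print(f"{name}: {value:.1%}")
```

Note that each rate uses the previous funnel stage as its denominator; dividing everything by raw visitors hides where the actual leak is.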
The Sean Ellis Test
One of the most reliable MVP success signals comes from a single survey question: "How would you feel if you could no longer use this product?" If 40% or more of your users say "very disappointed," you are approaching product-market fit. Below 25%, the product is not yet delivering enough value. Between 25% and 40%, you are in the zone where focused iteration can push you over the threshold.
Send this survey after users have had at least two weeks with the product. Asking too early captures novelty, not real value.
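Scoring the survey is a matter of counting the share of "very disappointed" responses against the thresholds above. A sketch with hypothetical survey data:

```python
# Scoring the Sean Ellis test: what fraction of respondents would be
# "very disappointed" without the product? Responses are hypothetical.
from collections import Counter

responses = (
    ["very disappointed"] * 42
    + ["somewhat disappointed"] * 38
    + ["not disappointed"] * 20
)

counts = Counter(responses)
pmf_score = counts["very disappointed"] / len(responses)

# Thresholds from the Sean Ellis benchmark described above.
if pmf_score >= 0.40:
    verdict = "approaching product-market fit"
elif pmf_score >= 0.25:
    verdict = "in the focused-iteration zone"
else:
    verdict = "not delivering enough value yet"

print(f"{pmf_score:.0%} very disappointed -> {verdict}")
```

Only survey users who have actually used the product (at least two weeks, per the advice above); including dormant signups in the denominator will drag the score down for the wrong reason.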
Setting Up a Learning Cadence
Do not treat the MVP launch as a single event with a single measurement. Set up a weekly cadence:
- Monday. Review quantitative metrics (signups, activation, retention, revenue).
- Wednesday. Conduct 2-3 user interviews or review support tickets.
- Friday. Synthesize findings and decide: build next iteration, pivot direction, or run another week of observation.
This cadence prevents the two failure modes: measuring too infrequently (waiting months to "see how it goes") and reacting too quickly (pivoting after one bad day of signups).
When NOT to Build an MVP
MVPs are not always the right approach. Skip the MVP if:
The market is well-understood and competitive. If you are building a CRM in 2026, users already know what a CRM should look like. A bare-bones MVP will get compared to Salesforce and HubSpot. In crowded markets, you often need a differentiated V1 rather than a generic MVP.
Regulatory or safety constraints exist. Medical devices, financial products, and safety-critical software have compliance requirements that do not allow "move fast and learn." In regulated industries, the minimum viable approach still applies, but the "viable" bar is much higher.
The core value requires scale. Social networks and marketplaces suffer from the cold-start problem. A social network with 10 users is not viable. In these cases, you might need to fake scale (pre-populate content, use concierge approaches) or find a niche where small numbers still deliver value.
You are extending an existing product. If you already have users and revenue, you have direct access to customer feedback. Run experiments within your existing product rather than building a standalone MVP.
Trust and safety are the core value proposition. Financial services, healthcare, and enterprise security products cannot ship "minimum" trust. If a password manager loses one user's credentials during an MVP test, the company is dead. In these domains, the "viable" threshold is essentially a production-grade product for the core security or safety function.
From MVP to Product-Market Fit
A successful MVP is not an endpoint. It is the starting line. The goal after a positive MVP result is to iterate toward product-market fit. That means systematically improving activation, retention, and satisfaction until users would be "very disappointed" if the product went away (Sean Ellis's 40% benchmark).
Use a Now-Next-Later roadmap to plan your post-MVP iterations without overcommitting to a rigid timeline. This format keeps you focused on what to build next while staying flexible enough to respond to what you learn.
The Product Launch Playbook covers the full journey from MVP through general availability, including go-to-market strategy, beta programs, and scaling operations.
The Post-MVP Iteration Loop
Once you have positive signal from your MVP, the next phase follows a tighter version of the same Build-Measure-Learn loop:
- Identify the biggest drop-off. Where in the user journey are people leaving? Signup to activation? Activation to retention? Fix the biggest leak first.
- Form a hypothesis. "We believe adding an onboarding tutorial will increase activation from 30% to 50% because new users do not understand how to complete the core workflow."
- Build the smallest fix. Not a full onboarding system. Maybe a three-step tooltip sequence. Ship it in a few days.
- Measure the result. Did activation improve? By how much? Did it affect retention downstream?
- Repeat. Move to the next biggest drop-off.
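Step 1 of the loop, finding the biggest drop-off, can be mechanized: compute the loss between each adjacent pair of funnel stages and take the worst one. A minimal sketch, with hypothetical stage counts:

```python
# Finding the biggest leak in the user journey, per step 1 of the loop.
# Stage names and counts are hypothetical.

funnel = [
    ("visit", 1000),
    ("signup", 200),
    ("activate", 60),
    ("return in week 2", 45),
]

# Fraction of users lost between each adjacent pair of stages.
drops = []
for (stage_a, count_a), (stage_b, count_b) in zip(funnel, funnel[1:]):
    drops.append((stage_a, stage_b, 1 - count_b / count_a))

worst = max(drops, key=lambda d: d[2])
print(f"Biggest leak: {worst[0]} -> {worst[1]} loses {worst[2]:.0%} of users")
```

In this made-up funnel the visit-to-signup step loses the most users, so the first hypothesis should target the landing page or signup flow rather than onboarding.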
This cycle should repeat weekly or biweekly. The goal is to reach product-market fit within 3-6 months of your MVP launch. If you are not making measurable progress toward the 40% Sean Ellis threshold within that window, reassess whether you are solving the right problem for the right audience.
Key Takeaways
- An MVP tests your riskiest assumption with the smallest possible investment. It is an experiment, not a product launch.
- Not every MVP requires code. Landing pages, manual concierge services, and video demos are all valid approaches.
- Define success criteria before you build. If you do not know what you are measuring, you cannot learn anything.
- Scope ruthlessly. If your MVP takes longer than 8 weeks, you are building too much.
- The data decides what happens next. Strong signal means invest more. Weak signal means pivot or kill the idea. Mixed signal means dig deeper into the qualitative feedback.
- MVPs are wrong for regulated industries, mature competitive markets, and products that require network effects to deliver value. Adapt the approach to your context.