
Minimum Viable Product: How to Build One That Tests Your Assumptions

An MVP isn't a miniature product. It's a test. Here are the types, the steps, and the real examples that show what good MVPs look like.
Gregory Shepard, Founder and CEO of Startup Science
May 14, 2026
8 min read

In 1999, Nick Swinmurn walked into shoe stores, photographed their inventory, and posted the pictures on a basic website. When someone placed an order, he drove back to the store, bought the shoes at full retail, and shipped them himself. There was no warehouse, no supplier deal, no inventory system. That was the Zappos MVP, and it tested one thing: will people buy shoes online without trying them on first? The answer was yes, and Zappos eventually sold to Amazon for $1.2 billion. The MVP that launched a billion-dollar company was a guy with a camera and a car. A minimum viable product is a test, not a miniature version of your dream. One specific test of one specific assumption. Most founders working through the Startup Science founder program get this wrong. They spend months building features when they should spend weeks proving demand.

What a Minimum Viable Product Actually Is

A minimum viable product is the smallest thing you can put in front of real users to learn whether your core assumption holds up. That's it. It's an experiment with a clear hypothesis, not a prototype, beta, or scaled-down product roadmap.

The word "viable" does a lot of heavy lifting in that definition. Your MVP has to work well enough that someone will actually use it, pay for it, or at least give you honest feedback about it. A broken landing page with a signup form doesn't count, but a polished app with 40 features doesn't count either, because you've already committed resources before learning anything.

The startup lifecycle has distinct phases, and the MVP belongs squarely in Phase 2. You've already identified a problem worth solving. Now you need proof that your proposed solution resonates with real people. Everything else (scaling, optimizing, expanding) comes later.

Why Most MVPs Fail Before They Launch

The number one reason MVPs fail isn't bad execution. It's testing the wrong thing. Founders fall in love with their solution and forget that the MVP exists to challenge their assumptions, not confirm them.

Here's what I see constantly. A founder has an idea for a platform that connects freelance designers with small businesses. Instead of testing whether small businesses actually struggle to find designers (the assumption), they spend four months building a matching algorithm. They launch. Nobody signs up. They blame marketing. The real problem? Small businesses already use Fiverr and referrals. The assumption was wrong from the start, and no amount of engineering could fix that.

Other common failure modes:

  • Building for investors instead of users. Your MVP should impress customers, not pitch decks. Investors care about traction, and traction comes from solving real problems.
  • Skipping the "minimum" part. Every feature you add before validation is a bet you're placing with no data. Three features means three bets. Ten features means you're gambling.
  • Ignoring the feedback loop. An MVP without a structured way to collect and interpret user behavior is just a soft launch. You need to know what you're measuring before you ship anything.
  • Confusing interest with commitment. People saying "that sounds cool" at a dinner party isn't validation. People entering a credit card number is validation.

Types of MVPs (and When to Use Each One)

Not every MVP requires code. In fact, some of the best MVPs in startup history involved zero engineering. The format you choose depends on what assumption you're testing.

Concierge MVP. You deliver the service manually to a small group of users. No automation, no platform. You're testing whether the outcome you promise is something people actually want. This works well for service-based businesses and marketplace concepts where you need to understand both sides of the transaction.

Wizard of Oz MVP. It looks automated to the user, but you're doing everything by hand behind the scenes. Zappos started this way. Nick Swinmurn photographed shoes at local stores and posted them online. When someone ordered, he bought the shoes at retail and shipped them. He didn't build inventory management or supplier relationships. He tested one thing: will people buy shoes online without trying them on first?

Landing Page MVP. A single page that describes your product and asks for a signup, pre-order, or email address. Buffer validated its entire pricing model this way. Joel Gascoigne put up a page describing the product with a pricing table. When people clicked a plan, they saw a message saying the product wasn't ready yet and were asked for their email. He tested willingness to pay before writing a single line of product code.

Video MVP. Dropbox famously used a three-minute demo video to validate demand. Drew Houston couldn't easily demonstrate the product's value in text because syncing files across devices was hard to explain but easy to show. The video drove the beta waiting list to 75,000 signups overnight. No product needed.

Single-Feature MVP. You build one feature, the one that represents your core value proposition, and ship it. Nothing else. No settings page, no profile customization, no integrations. This is the right choice when your assumption is specifically about the user experience of your solution, not just about demand.

How to Build a Minimum Viable Product in Five Steps

  1. Write down your riskiest assumption. Identify the single belief that, if wrong, makes everything else irrelevant.
  2. Choose the cheapest way to test it. Pick the lowest-cost, fastest experiment that gives you a real signal.
  3. Define your success metric before launch. Set a concrete threshold so results can't be rationalized after the fact.
  4. Build and ship in under two weeks. Scope ruthlessly. If it takes longer than two weeks, you scoped too much.
  5. Measure, interpret, decide. Compare results to your threshold. Pivot, persevere, or kill the idea.

If you're working through the process of building a startup, here's the sequence that actually works. I've watched hundreds of founders go through this, and the ones who follow the discipline consistently outperform the ones who skip steps.

Step 1: Write down your riskiest assumption. Not your product idea. Your assumption. "Busy parents will pay $30/month for pre-planned grocery lists" is an assumption. "A meal planning app" is a product idea. The assumption is what you're testing. Be specific. If you can't articulate the assumption in one sentence, you aren't ready to build anything.

Step 2: Choose the cheapest way to test that assumption. Go back to the MVP types above. Which format lets you test your specific assumption with the least time and money? A founder once told me, "AI didn't replace me. It replaced my excuses." Today, you can build landing pages in hours, create demo videos in a day, and set up manual service delivery in a weekend. There's no reason to spend months on this step.

Step 3: Define your success metric before you launch. What number, at what threshold, tells you the assumption is validated? "50 signups in two weeks" is a success metric. "Good feedback" is not. Write it down. Share it with someone who'll hold you accountable. Your traction metrics at the MVP stage should be simple and binary. Did you hit the number or didn't you?

Step 4: Build it and ship it fast. Two weeks is a reasonable timeline for most MVPs. If your MVP takes longer than a month, you're building too much. Cut features. Simplify the design. Use off-the-shelf tools. The goal isn't to impress anyone with your engineering. The goal is to learn something true about your market.

Step 5: Measure, interpret, decide. Did you hit your success metric? If yes, you've validated the assumption and can move to the next one. If no, you need to figure out why. Was the assumption wrong? Was your execution flawed? Did you reach the wrong audience? Each answer leads to a different next step. Don't just iterate blindly. Diagnose first.
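The decide step stays honest when the threshold is written down as data before launch, not argued about after. Here's a minimal sketch of that discipline in Python. Everything in it is illustrative: the metric, the numbers, and the "half the threshold" cutoff for an ambiguous result are hypothetical examples, not a standard from any real launch.

```python
# Illustrative sketch: pre-register a success threshold, then decide.
# All names, numbers, and cutoffs here are hypothetical examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class Experiment:
    assumption: str   # the one belief being tested
    metric: str       # what you count
    threshold: int    # pass bar, written down before launch
    window_days: int  # how long the test runs

def decide(exp: Experiment, observed: int) -> str:
    """Compare observed results to the pre-registered threshold."""
    if observed >= exp.threshold:
        return "validated: move to the next assumption"
    # Assumed heuristic: a near miss suggests execution problems
    # (audience, copy, price); a far miss suggests the assumption
    # itself is wrong. Diagnose before iterating.
    if observed >= exp.threshold // 2:
        return "ambiguous: diagnose execution before pivoting"
    return "not validated: the assumption is likely wrong"

exp = Experiment(
    assumption="Busy parents will pay $30/month for pre-planned grocery lists",
    metric="paid signups",
    threshold=50,
    window_days=14,
)

print(decide(exp, observed=62))
```

The point of the sketch is that `threshold` is frozen before any results come in, so "changing your success criteria after seeing the results" becomes impossible by construction.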

Real Examples That Show What Good MVPs Look Like

The Zappos story is worth studying closely because it illustrates something counterintuitive. Nick Swinmurn's MVP didn't test whether he could build an e-commerce platform. It tested whether consumer behavior would shift. The entire risk was on the demand side, so he built a demand-side test. He didn't invest in supply chain, warehousing, or technology until after he knew people would buy.

Buffer's approach was different because the risk was different. Joel Gascoigne already knew people wanted to schedule social media posts. Competing tools existed. His risk was whether people would pay for a simpler, cleaner version. So his MVP tested pricing specifically. The landing page with a pricing tier that led to a "not ready yet" message was brilliant because it isolated the exact variable that mattered.

Dropbox had a technical risk problem. The product was genuinely hard to build, and Drew Houston needed to know if demand justified the investment. A landing page wouldn't have worked because the product concept was too abstract to describe in text. A video demonstration communicated the value instantly and generated measurable demand (those 75,000 signups) without requiring a finished product.

Notice the pattern. Each founder identified their specific risk, then chose the MVP format that tested that risk directly. They didn't copy each other's approach. They matched the experiment to the hypothesis.

When to Move Beyond the MVP

This is where founders get stuck, and the failure mode goes in both directions. Some move on too quickly (before they've actually validated anything). Others hold on too long, and this second failure mode doesn't get enough attention.

The "Stuck in MVP Mode" Trap

Too many startups are still running on their MVP two or three years after launch. They validated the core assumption, found paying customers, and then never made the transition to building a real product. They're still patching together manual processes, duct-taping workarounds, and apologizing to customers for missing features that competitors shipped a year ago.

In the startup lifecycle, the MVP belongs in Phase 2 (Product). By Phase 3 (Go-to-Market), you should be building a product that can support real sales at volume. By Phase 4 (Standardization), you need systems, reliability, and a product that doesn't require the founder to manually fix things every week. If you're in Phase 3 or 4 and your product still looks and feels like a prototype, you've stayed in MVP mode too long.

When You're Ready to Graduate

You're ready to move past the MVP when three things are true:

  1. Your core assumption has been validated with real user behavior. Not surveys. Not interviews. Behavior. People signed up, paid, used the product, came back, or referred others.
  2. You can articulate what you learned and what it means. "People liked it" isn't a learning. "42% of trial users converted to paid within 7 days, compared to our 25% target" is a learning.
  3. You have a clear hypothesis for the next phase. Validation doesn't mean you're done testing. It means you've answered one question and you know what question comes next.

The signs you've stayed too long are equally clear: customers keep asking for the same features you've been "planning to build" for months. Your churn rate stays high because the product can't keep up with expectations. Your engineering team spends more time on support tickets than on product development. New sales stall because prospects compare your MVP against competitors' full products and walk away.

The MVP was a vehicle for learning. Once you've learned what you needed to learn, retire it. Build the real thing. The discipline that made you ship a lean MVP in two weeks is the same discipline you need to build a complete product on a real timeline, with real architecture, real infrastructure, and real user experience standards. Staying lean doesn't mean staying scrappy forever.

If you're putting together your startup business plan, the MVP results should be the foundation. Not projections, not assumptions, but evidence from actual market contact. The plan becomes dramatically more credible when you can point to real numbers from real users.

The MVP is a learning milestone, not a product milestone. The founders who treat it that way build companies that survive. The ones who treat it as a miniature product launch build things that look impressive and teach them nothing. And the ones who never graduate from it build companies that stall.

Frequently Asked Questions

How long should it take to build a minimum viable product?

Most MVPs should take two to four weeks. If yours is taking longer than a month, you're probably building too much. The point is speed of learning, not quality of output. Use no-code tools, manual processes, or existing platforms to compress the timeline. The faster you get your assumption in front of real users, the less money you burn on untested ideas.

What's the difference between an MVP and a prototype?

A prototype demonstrates how something works. An MVP tests whether anyone cares. Prototypes are for internal use, investor demos, and design validation. MVPs go in front of real users who don't know you and don't owe you polite feedback. The distinction matters because prototypes don't generate market data. MVPs do.

Can I build a minimum viable product without writing code?

Yes, and in many cases you should. Zappos validated online shoe sales without building an e-commerce platform. Buffer validated its pricing model with a landing page. If your riskiest assumption is about demand, willingness to pay, or user behavior, you can test it with landing pages, manual service delivery, video demos, or pre-order campaigns. Save the engineering for after you've proven the assumption.

How do I know if my MVP results are good enough to keep going?

Define your success metric before you launch, not after. Set a specific number (signups, conversions, retention rate) and a specific timeframe. Compare your results against that pre-set target. If you hit it, your assumption holds and you can invest more. If you miss it, diagnose whether the problem is the assumption itself or your execution of the test. Changing your success criteria after seeing the results defeats the entire purpose.

Should my MVP be free or paid?

If your assumption involves willingness to pay (and it usually should), charge money. Free signups tell you people are curious. Paid signups tell you people find your solution worth spending on. Those are very different signals. You can offer a trial period or a money-back guarantee to reduce friction, but getting someone to enter payment information is one of the strongest forms of validation you can get at this stage.

About the Author
Gregory Shepard, Founder and CEO of Startup Science
Gregory Shepard
Founder and Chief Executive Officer
Built and sold 12 companies. Four private equity awards for exits between $25M and $1B. Author of The Startup Lifecycle, host of a Forbes podcast, and TEDx speaker. Knows how to build, scale, and exit.