Eric Ries Needs a Better Editor (MVPs Explained)

Eric Ries’ The Lean Startup is a very successful book—and an even more successful movement—amongst product developers. Even if those in product development haven’t read the book, they talk about concepts introduced and popularized in it, such as the ‘Build-Measure-Learn’ cycle and Minimum Viable Products (MVPs).

When defining the ‘Build-Measure-Learn’ cycle, Ries explains that “the fundamental activity of a startup is to turn ideas into products, measure how customers respond, and then learn whether to pivot or persevere.” An MVP, he says, is “that version of the product that enables a full turn of the Build-Measure-Learn loop with a minimum amount of effort and the least amount of development time.”

Whenever I work with a new team, I inevitably hear the phrase “let’s just build an MVP”. Teams resort to this strategy when they are given work that either has a lot of unknowns, doesn’t clearly solve a user problem, or doesn’t align with the goals of their company. What these teams often mean, though, is “let’s build whatever we can build in the time frame that someone has set for us (or that we have set for ourselves) that gets us closer to our grand vision.” The MVP often ends up being a small Version One.

Others in product development have noticed this too, and have proposed a variety of alternatives to ‘MVP’ that aim to capture the nuance of Ries’ original definition. Minimum Valuable Product, Minimum Viable Experiment, and Riskiest Assumption Test are all alternatives that have been proposed in the last couple of years. In his post Death of the Minimum Viable Product!, Steve Cohn, CEO of Validately, argues that the word ‘product’ causes confusion amongst product people because they interpret it as a ‘releasable product’. His analysis is spot on.

If you look past Ries’ definition of an MVP in The Lean Startup and instead focus on the examples he uses, it becomes clear that an MVP is something you can use to run an experiment. Ries gives several examples of MVPs, my favorite of which involves Food on the Table. Food on the Table’s product helps users plan meals by comparing their recipes to items on sale at their local grocery stores. The founders of Food on the Table experimented with solutions with customers for months without ever building software. They compared local grocery store sales and meal preferences for each individual customer, printed out recipes and meal plans, and, at first, went shopping with each customer until they had so many customers that the team had to build software to keep up with demand.

It’s fair to say that the printed-out paper the founders provided was, in fact, a product. The problem is that software—or even hardware—developers don’t tend to create and keep version numbers when they move from paper or physical prototypes to software. Calling the printed-out paper that the founders provided to their customers in the beginning a ‘version’ of their product is a lovely idea. But when Ries wrote the definition of MVP, he didn’t mean for ‘version’ to span the entire development process from “I have an idea” to a marketable, sellable product. Because of this, the word ‘version’ didn’t fit the meaning that readers typically attach to it.

How, then, can we rewrite the definition to capture the nuance that Ries illustrates through examples?

Since an MVP is something that can be used to test out a hypothesis, we could start by saying that rather than an MVP being a “version of the product”, it’s a “testable solution to a problem.” Understanding an MVP in this way frees us up to imagine all of the potential mediums that a solution might be, such as paper, digital, or physical representations.

Let’s move on to the next part of Ries’ definition: “that enables a full turn of the Build-Measure-Learn loop”. Building could, again, involve any medium. So let’s focus on ‘Measure’. To measure something in a meaningful way, a metric needs to be SMART: Specific, Measurable, Actionable, Realistic, and Time-bound. Whatever is being tested must have an objective right or wrong answer within a defined period of time. That way, it’s easy to decide whether to continue with or change the course of your product. If a test doesn’t have a SMART metric, it’s too easy for teams to learn nothing from the test and instead succumb to the sunk-cost fallacy, in which they continue to build simply because they have put time and effort into the product. I’d therefore propose that this section of the definition be changed to “used in an experiment that can generate an objective right or wrong answer in a given amount of time”.
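The objective, time-bound decision rule described above can be sketched as a tiny function. The metric here (sign-ups against a target by a deadline), the thresholds, and all names are hypothetical, invented only to illustrate the pass-or-fail-by-a-date idea, not anything prescribed by Ries:

```python
from datetime import date

def evaluate_experiment(signups: int, target: int,
                        deadline: date, today: date) -> str:
    """Decide what to do based on a SMART metric.

    Hypothetical metric: 'at least `target` sign-ups by `deadline`'.
    Because the metric is time-bound, no verdict is issued early;
    once the deadline passes, the answer is objectively right or wrong.
    """
    if today < deadline:
        return "keep measuring"   # time-bound: don't decide early
    # Objective pass/fail: no room for "let's keep building anyway"
    return "persevere" if signups >= target else "pivot"
```

The point of the sketch is that the decision comes out of the data and the calendar, not out of how much effort the team has already invested.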

The end of the definition, “with a minimum amount of effort and the least amount of development time”, follows standard lean practices. There’s nothing inherently wrong with or confusing about the sentence. But for the clarity of the entire definition, let’s shorten it to “with the least amount of time and development waste.”

When we pull it all together, we arrive at a definition of MVP that is much clearer and captures the nuance of what Ries was trying to teach: “The MVP is a testable solution to a problem used in an experiment that can generate an objective right or wrong answer in a given amount of time with the least amount of time and development waste.”

By being more specific in the definition, I’m hopeful that teams will not resort to saying “let’s just build an MVP” when they are caught in a situation in which they are asked to build something without understanding which problem they are solving. Rather, they will help their stakeholders and teammates discover user problems, and then use their MVPs to validate whether those problems are indeed worth solving.


Originally written for the blog on L4

Hey, What’s Your Problem?

There are a lot of failed products out there, but we never hear about them*. No one markets their failures; they only market their successes. But these failures do exist, and they’ve failed because they didn’t solve a real user problem. It’s not that the product wasn’t innovative or well built. Rather, it’s that earlier in the development process, no one asked enough questions, both of users and of themselves.

At the first sign of a problem, we have a tendency to jump right to finding a solution for it. The urge to fix what appears to be broken is prevalent throughout our lives.

Imagine this: You’re talking to a friend about some problem you have, and you get halfway through describing your problem when your friend pipes in with, “Why don’t you just do X?” You start to explain to your friend that that won’t work because of Y. Your friend responds by telling you that you should instead do Z.

This cycle continues until you both get tired, and you leave the conversation unsatisfied, feeling as though you volleyed words back and forth but never really heard one another. You are left trying to figure out how to solve your problem on your own.

Users experience a similar feeling of dissatisfaction when we put products in front of them that might be lovely solutions to a problem, but don’t quite solve their problem. If we’re lucky, users will tell us how we got it wrong. In most cases, though, they’ll just stop using our product, and we’re stuck trying to figure out why.

But this doesn’t have to happen, so long as you approach your product with a clean slate and no pre-defined solution. Have no agenda other than to truly understand what problems are worthy of your efforts.

Re-thinking Product Management: A New Approach

The first step in shifting toward this alternative method of product development is understanding the difference between a problem and a solution. A problem is a goal or objective that a user would like to achieve, but can’t. A solution is a product or process that can help a user achieve that goal or objective.

Oftentimes, when I ask a team which problems they’re solving, I hear them describe a solution that they want to build, but attempt to frame it as the problem: “Sally can’t take pictures.” “Michael wants to edit photos.” “Lucy needs more font options.” When this solution-focused work is taken to teams, it jeopardizes any alternative, innovative solutions the team could develop.

Imagine the solutions that you could come up with if you knew that what Sally really wanted to do was save and share memories of her life, rather than focusing on the fact that she can’t take pictures. What if you knew that the real reason that Michael wants to edit his photos is because he has an unsteady hand and always takes blurry pictures?

The second step in shifting toward this new methodology is to dig deeper when talking to users and collaborating with your team. Consider, for example, the issue that Sally can’t take pictures. If that “problem” is brought to your team, the conversation that follows might sound something like this:

Sally can’t take pictures.
Okay, so we need to put a camera in the app.

Well, she already has a camera in the OS. Do we really need to add a camera?
Okay, we’ll link out to the existing camera and build a way for her to see the shutter inside our app.

We shouldn’t need to link to existing software. That seems like a disjointed experience. Are there any other solutions?
Well, we could put a camera in the app.


But what if you found your inner two-year-old, and asked why?

Sally can’t take pictures.
Why does she need to?

Because she has kids.
Why does she want to take pictures of them?

She wants to remember what they looked like and did as children so that she can show them when they get older.
Why does she want pictures?

Because she wants to easily share these moments with her family members.

In that exchange, we learn so much more about our user. It’s then possible for us to stop focusing solely on adding a camera to our product and instead figure out a way for Sally to share her memories with family members.

It’s common in product development to get caught up in the solutions we love or the ideas we have. This method, unfortunately, leads to an abundance of features and products that users don’t want, can’t use, and/or don’t need. If we focus on understanding the underlying problems that people have, we can build stronger, more meaningful, and more successful products we feel good about.


*There’s a museum of the most popular product failures, and at least one website for those that probably never made it to your computer screen.


Originally written for the blog on L4