Webinar Series Part 4 of 4: The Real MVP

Minimum Viable Product (MVP) is a term I hear often and, unfortunately, hear used incorrectly. It’s understandable, though: the definition that Eric Ries popularized in his book, The Lean Startup, is a bit confusing. As a lean practitioner for almost ten years, I offer a revised definition in hopes that product teams can start using MVPs to truly run the Build-Measure-Learn loops that The Lean Startup encourages.

At my current company, Globant, I’ve created a curriculum and have been leading a webinar series to help product managers and others on the team focus on doing work that helps them solve customer problems and validate that they have solved them. In this installment, The Real MVP, we discuss an alternate definition of the term Minimum Viable Product, and articulate ways that teams can start executing MVPs to maximize learning.

Webinar Series Part 3 of 4: Metrics That Matter

Tracking metrics is easy. Add a Google Analytics snippet and you’ll get visits, time on page, and bounce rate within hours. The question is, what changes will you make to your product when you know that information? Making data-driven decisions is all the rage, and it matters what data you look at to help you make those decisions.

At my current company, Globant, I’ve created a curriculum and have been leading a webinar series to help product managers and others on the team focus on doing work that helps them solve customer problems and validate that they have solved them. In this installment, Metrics That Matter, we discuss the difference between vanity and actionable metrics and how to get to actionable metrics that help you move your product forward.

Product Manager vs. Product Owner: Roles explained

People often contact me wanting to learn more about product management. Either they want to be a product manager or they want to hire one. As someone who has been in the industry a long time, I am acutely aware of how confusing the role can be for leaders, for teams, and for those transitioning into it. What does a product manager do? Why are they necessary? And what’s a product owner? Compounding the confusion, people in the industry often use several terms interchangeably even though they have unique meanings. Product manager, product owner, product strategist, product leader—what’s the difference? It’s nuanced, but by defining a few key priorities and responsibilities of each role, we can help clear up the confusion.

Product Manager

A product manager makes decisions about what a development team should be working on. They do that by:

  • listening to users to identify and understand their problems
  • learning about and digging into business goals
  • seeking out recommended approaches and solutions from their development teams

Product managers internalize all of that information, weigh the costs and benefits of moving in a particular direction, and then decide which way to head. People tend to see a product manager’s output as a product vision, focused user problems for the team to solve, and/or a roadmap.

Product managers are valuable because they don’t have an incentive which aligns them with any specific company department. Product managers want to solve user problems, they want their business to achieve its goals, and they want their development teams’ work to be feasible (and awesome). Their empathy is spread equally. It’s this lack of bias, or at least the attempt to squelch the bias, that allows them to make excellent decisions.

Product Owner

Originally, a product owner was a position on a Scrum team. This role would be filled by anyone in the organization who had a vested interest in the development and success of a product and could be the “voice of the customer” for the Scrum team. If the team, for instance, is building an app to accept customer lead information, a marketing professional could be the product owner. The marketer understands the needs of customers who complete a lead form and understands the business requirements, and thus could help define the order of the tasks or projects for the development team.

In software companies, the role of product owner is often filled by product managers because teams need someone who is dedicated to helping them move work forward. If product managers take on this role, they tend to be focused more on the delivery aspects of software development than on the strategy or research functions of product management.

Product Strategist

Product strategist is a title often used in consulting. Because consulting or development agencies work with clients, they really need to have a product owner in the client company who can make the final decision and be the voice of the customer, particularly if the engagement is about simply delivering completed software.

These agencies often want to expand their reach and have more influence earlier in the process, so the role of strategist was created. This person will work with clients to help them become clearer about what they want to build. Sometimes that will involve talking to customers, but other times it will mean conducting workshops to help clients articulate what’s in their heads.

Product Leader

Some people—like me—use this term simply to avoid painting ourselves into a box. We like to say that we can do all the things in all the ways because we’re really flexible and can lead people down a certain path. I’ll admit, it’s kind of a made-up term. Sorry if we confuse you.


I hope that a more detailed explanation of each role has made the distinctions clearer. Remember: a lot of people use these terms interchangeably rather than being specific about what they want or need. So when you hear one of these terms, the best thing you can do is ask people what they mean or what they hope the person in the role will do. If they can’t clarify this for you, use these explanations to ask them questions and help them narrow it down.

Webinar Series Part 2 of 4: Making the Most of User Interviews

In product management, we’re often told to talk to users. Doing so can be intimidating and scary if we’ve never done it before. The elusive “User Interview” can bring up a lot of emotion, both for the interviewer and the interviewee, and also for the stakeholders who are depending on the team to help ship a new product to market.

At my current company, Globant, I’ve created a curriculum and workshop series to help clients and internal team members learn a different way of approaching product development. To get the word out, we created a four-part webinar series to illustrate what you might learn in the workshops. The second one, Making the Most of User Interviews, provides some actionable tips for folks in product development to start talking to their users today.

Webinar Series Part 1 of 4: Hey, What’s Your Problem?

In product management, we often get presented with solutions:

Sally wants to take a picture.
John needs to call a cab.

Unfortunately, simply building these solutions means we sometimes miss relieving real pain for our users.

At my current company, Globant, I’ve created a curriculum and workshop series to help clients and internal team members learn a different way of approaching product development. To get the word out, we created a four-part webinar series to illustrate what you might learn in the workshops. The first one, Hey, What’s Your Problem? discusses ways to approach product development by focusing on user problems.


Eric Ries Needs a Better Editor (MVPs Explained)

Eric Ries’ The Lean Startup is a very successful book—and an even more successful movement—amongst product developers. Even if those in product development haven’t read the book, they talk about concepts introduced and popularized in it, such as the ‘Build-Measure-Learn’ cycle and Minimum Viable Products (MVPs).

When defining the ‘Build-Measure-Learn’ cycle, Ries explains that “the fundamental activity of a startup is to turn ideas into products, measure how customers respond, and then learn whether to pivot or persevere.” An MVP, he says, is “that version of the product that enables a full turn of the Build-Measure-Learn loop with a minimum amount of effort and the least amount of development time.”

Whenever I work with a new team, I inevitably hear the phrase “let’s just build an MVP”. Teams resort to this strategy when they are given work that either has a lot of unknowns, doesn’t clearly solve a user problem, or doesn’t align with the goals of their company. What these teams often mean, though, is “let’s build whatever we can build in the time frame that someone has set for us (or that we have set for ourselves) that gets us closer to our grand vision.” The MVP often ends up being a small Version One.

Others in product development have noticed this too, and have proposed a variety of alternatives to ‘MVP’ as a way to capture the nuance of Ries’ definition of a product. Minimum Valuable Product, Minimum Viable Experiment, and Riskiest Assumption Tests are all alternatives that have been proposed in the last couple of years. In his post Death of the Minimum Viable Product!, Steve Cohn, CEO of Validately, shows that the use of the word ‘product’ caused confusion amongst product people because they interpreted it as a ‘releasable product’. His analysis is spot on.

If you look past Ries’ definition of an MVP in The Lean Startup and instead focus on the examples he uses, it becomes clear that an MVP is something you can use to run an experiment. Ries gives several examples of MVPs, my favorite of which involves Food on the Table. Food on the Table’s product helps users plan meals by comparing their recipes to items on sale at their local grocery stores. The founders of Food on the Table experimented with solutions with customers for months without ever building software. They compared local grocery store sales and meal preferences for each individual customer, printed out recipes and meal plans, and, at first, went shopping with each customer until they had so many customers that the team had to build software to keep up with demand.

It’s fair to say that the printed-out paper the founders provided was, in fact, a product. The problem is that software—or even hardware—developers don’t tend to create and keep version numbers when they move from paper or physical prototypes to software. Calling the printed-out paper that the founders provided to their customers in the beginning a ‘version’ of their product is a lovely idea. But when Ries wrote the definition of MVP, he didn’t mean for ‘version’ to span the entire development process from “I have an idea” to a marketable, sellable product. Because of this, ‘version’ doesn’t carry the meaning that readers typically assign to the word.

How, then, can we rewrite the definition to capture the nuance that Ries illustrates through examples?

Since an MVP is something that can be used to test out a hypothesis, we could start by saying that rather than an MVP being a “version of the product”, it’s a “testable solution to a problem.” Understanding an MVP in this way frees us up to imagine all of the potential mediums that a solution might be, such as paper, digital, or physical representations.

Let’s move on to the next part of Ries’ definition: “that enables a full turn of the Build-Measure-Learn loop”. Building could, again, involve any medium, so let’s focus on ‘Measure’. To measure something in a meaningful way, a measurement needs to follow the SMART methodology: Specific, Measurable, Actionable, Realistic and Time-Bound. Whatever is being tested must have an objective right or wrong answer within a defined period of time. That way, it’s easy to decide whether to continue with or change the course of your product. If a test doesn’t have a SMART metric, it’s too easy for teams to learn nothing from the test and instead succumb to the sunk-cost fallacy, continuing to build simply because they have already put time and effort into the product. I’d therefore propose that this section of the definition be changed to “used in an experiment that can generate an objective right or wrong answer in a given amount of time”.
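To make the idea of an objective, time-bound answer concrete, here is a minimal sketch of a pass/fail criterion for an experiment. Every name, date, and threshold below is hypothetical, chosen only to illustrate the decision rule:

```python
from datetime import date

# A hypothetical MVP experiment with a SMART criterion baked in.
experiment = {
    "hypothesis": "visitors will pre-order from a landing page",
    "metric": "preorder_conversion",   # Specific and Measurable
    "success_threshold": 0.05,         # the objective right/wrong line
    "deadline": date(2024, 3, 1),      # Time-Bound
}

def evaluate(observed_rate, evaluated_on, experiment):
    """Turn a measurement into a pivot-or-persevere decision."""
    if evaluated_on > experiment["deadline"]:
        # Past the window: the experiment failed to produce a timely answer.
        return "inconclusive"
    if observed_rate >= experiment["success_threshold"]:
        return "persevere"
    return "pivot"

decision = evaluate(0.08, date(2024, 2, 20), experiment)
```

Because the threshold and deadline are fixed up front, the team can’t argue the result after the fact: an observed rate of 0.08 before the deadline yields “persevere”, and anything below the threshold yields “pivot”.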

The end of the definition, “with a minimum amount of effort and the least amount of development time”, follows standard lean practices. There’s nothing inherently wrong with or confusing about the sentence. But for the clarity of the entire definition, let’s shorten it to “with the least amount of time and development waste.”

When we pull it all together, we arrive at a definition of MVP that is much clearer and captures the nuance of what Ries was trying to teach: “The MVP is a testable solution to a problem used in an experiment that can generate an objective right or wrong answer in a given amount of time with the least amount of time and development waste.”

By being more specific in the definition, I’m hopeful that teams will not resort to saying “let’s just build an MVP” when they are caught in a situation in which they are asked to build something without understanding which problem they are solving. Rather, they will help their stakeholders and teammates discover user problems, and then use their MVPs to validate whether those problems are indeed worth solving.


Originally written for the blog on L4 Digital.com

Excerpt from: A Beginner’s Guide to User Interviews

I believe user interviewing is a core skill for all product managers in the industry. It’s hard to learn without guidance, so I make sure to teach it in all of my classes and when I coach new product managers. To get folks started, I wrote another post about user interviewing for General Assembly. Below is an excerpt.

We all know the products that are successful; the ones that seemed to come out of nowhere and then changed the way we go about our lives — like the smartphone, ridesharing, or turn-by-turn navigation. But there are a lot of products that didn’t make it. Though there are many reasons why products aren’t successful, one that often comes up is that people don’t see value in what the product does. Or, they don’t see enough value to pay for it.

One way to avoid going down that path is by conducting user interviews. This is where product teams go out into the world and talk to people who fit their product or service’s personas, observe their behavior, and ask them questions.

Read the full post to learn about:

  • How to Find Users to Interview

  • How to Ensure a Successful User Interview

  • User Interviews at General Assembly

Metrics that Matter

Company after company claims that its teams “make data-driven decisions”. But when you ask them which metrics they are currently tracking, they typically respond with Daily Active Users (DAU) and downloads. Both of these metrics are popular because they make it easy for companies to compare themselves to their competitors (a key fundraising tactic for startups) and because they are super easy to track. Simply add Google Analytics to your product, log into its pre-made dashboard, and find your answers instantly! Unfortunately, it’s impossible to make any product decisions based solely on these metrics because they’re vanity metrics. If we are going to make “data-driven decisions,” we need to analyze metrics that will help us make decisions; we need actionable metrics.

There are three types of metrics product managers should care about: usage, conversion, and revenue.

Usage Metrics


Simply tracking the number of users is not enough. Rather, it’s important to take a more nuanced look at your users. A detailed look will make it harder to compare your company to others, but it will give you more useful information about what to do next with your product. The metric I suggest most often is repeat usage, which can tell you a lot about where to dig in your user interviews. If users are not coming back to your product, you have a product-market fit problem: either you haven’t found your market yet, or you are not solving a big enough problem for the market you found. In that case, building new features or making small optimizations probably won’t get you anywhere. You should instead go back and validate your value proposition, ensuring both that your existing product delivers on it and that you have identified a customer segment that benefits from what you’ve built.

Another way to look at repeat usage is by measuring your retention rate. This metric can be calculated in many different ways, but the most common method in analytics applications is rolling retention. Rolling retention tells you whether or not users have returned to your product in a given time period. (If you want to learn how to calculate retention, this post at AppLift explains it very simply.) Retention is an excellent metric to use once you have an established product because you’re most likely gaining and losing a lot of users each month. By using retention along with cohort analysis, you can determine whether your newer users are returning to your product, rather than mixing them into the same data as your established users.
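As a rough sketch of the idea, rolling retention for a single cohort can be computed from an event log like this. The users, dates, and cutoffs below are invented for illustration:

```python
from datetime import date

# Hypothetical activity log: (user_id, day the user was active).
events = [
    ("alice", date(2024, 1, 3)), ("alice", date(2024, 1, 20)),
    ("bob",   date(2024, 1, 5)),
    ("carol", date(2024, 1, 2)), ("carol", date(2024, 1, 28)),
]

def rolling_retention(events, cohort_start, cohort_end, check_after):
    """Share of users first seen in [cohort_start, cohort_end] who
    came back on or after `check_after` (rolling retention)."""
    first_seen = {}
    for user, day in events:
        first_seen[user] = min(day, first_seen.get(user, day))
    cohort = {u for u, d in first_seen.items() if cohort_start <= d <= cohort_end}
    returned = {u for u, d in events if u in cohort and d >= check_after}
    return len(returned) / len(cohort) if cohort else 0.0

# Of the three users first seen in the first week of January,
# two (alice and carol) returned after January 15.
rate = rolling_retention(events, date(2024, 1, 1), date(2024, 1, 7), date(2024, 1, 15))
```

Grouping users by when they were first seen is exactly the cohort analysis described above: you can run the same function over each week’s cohort and compare newer cohorts to older ones.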


Conversion Metrics

Conversion metrics are the most versatile and powerful for making day-to-day decisions. A conversion is the number or percentage of people who complete a given task within your product. The most impactful conversion metrics are based on the value proposition of your product. What did you promise your users? What did you tell them they would be able to achieve with your product? That is the conversion you should be tracking.

If we look at some examples of popular products and their value propositions, we can deduce what some of their conversion metrics might be. Evernote, for example, has put forth this value proposition: “Get organized. Work smarter. Remember everything.” One of their conversion metrics might be as simple as “a note created.” But if they find that users who create five or more notes per day use more tags to stay organized and thus stay paid customers longer, they might make their conversions both the creation of the fifth note and that tags were added.

That brings us to funnels. Funnels are a popular way of using conversion metrics. With a funnel, you look at how many users continued through a series of actions, like adding notes in Evernote. By examining which actions users complete before and after meaningful conversions, you can determine which actions in your product lead to more engaged users or more revenue. You can then find other ways within your product to encourage users down those paths.
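One way to compute a funnel over conversion events is sketched below. The step names and user data are hypothetical, loosely echoing the Evernote example; a user counts at a step only if they completed that step and every step before it:

```python
# Hypothetical per-user action log; step and user names are illustrative.
funnel_steps = ["opened_app", "created_note", "added_tag"]
user_actions = {
    "alice": {"opened_app", "created_note", "added_tag"},
    "bob":   {"opened_app", "created_note"},
    "carol": {"opened_app"},
    "dave":  {"opened_app", "created_note", "added_tag"},
}

def funnel_conversion(steps, actions_by_user):
    """Count users who completed each step AND all preceding steps."""
    counts = []
    for i in range(len(steps)):
        required = set(steps[: i + 1])
        counts.append(sum(1 for acts in actions_by_user.values() if required <= acts))
    return counts

counts = funnel_conversion(funnel_steps, user_actions)
```

Here all four users opened the app, three created a note, and two went on to add a tag, so the drop-off between each pair of steps tells you where to focus.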


Revenue Metrics

At the end of the day, your company and product only exist if people buy your product. Simply tracking total revenue, however, is way too broad a metric for making product decisions. A better approach for a product team is to measure the amount of each sale, or the revenue generated by a particular group of users. People have many different reasons for why and how they spend their money. By focusing on segments rather than the aggregate, you can isolate different groups’ motivations for buying and modify your product to provide them with something that they feel is worth spending their money on.
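A minimal sketch of this segment-level view might look like the following, where the segments and sale amounts are invented for illustration:

```python
from collections import defaultdict

# Hypothetical sales records: (customer_segment, sale_amount).
sales = [
    ("student", 5.0), ("student", 5.0),
    ("business", 50.0), ("business", 75.0), ("business", 60.0),
]

def average_sale_by_segment(sales):
    """Average sale size per segment, instead of one aggregate revenue number."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for segment, amount in sales:
        totals[segment] += amount
        counts[segment] += 1
    return {segment: totals[segment] / counts[segment] for segment in totals}

averages = average_sale_by_segment(sales)
```

The aggregate here would hide the fact that business customers spend roughly twelve times what students do per sale, which is exactly the kind of signal segment-level metrics surface.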


Everyone loves to say they are data-driven. What people really mean when they say that is that they want to use data to help them make decisions. To do that, product people need to focus on tracking actionable metrics and leave the vanity metrics for press releases and board review slide decks.


Originally written for the blog on L4 Digital.com

Making the Most of User Interviews

My colleague at L4 Digital, Lisa Kream, interviewed me to learn more about how to conduct user interviews. The post was originally written for the blog on L4 Digital.com.


Lisa: When you begin a new project, one of the very first things that you and your team do is conduct user interviews. User interviews help your team answer some very crucial questions: Who will be using our product? Why do they need to use it? What’s their problem, and how can we solve it?

The answers to these questions ultimately determine how successful you and your team will be, as users’ problems inform the very vision and direction of your product. Conducting a meaningful user interview, then, in which you truly learn about your users and listen to their needs, is of the utmost importance.

That is, unfortunately, all the insight that I have to offer. I know that user interviews are a crucial part of the development process, but my expertise ends there.

To further explore why they’re important, and to provide guidance around conducting better, more productive user interviews, I’ve conducted an interview of my own. I sat down with Tricia Cervenan, a Senior Product Manager at L4 Digital, to learn more about this process. Tricia brings ten years of stellar product management experience to the table, and is more qualified than most (and definitely more qualified than I) to help you get the most out of your user interviews.


Lisa: Thanks for joining me, Tricia! Firstly, can you explain in a bit more detail what a user interview is?

Tricia: When those of us in product development start a new project, we have assumptions about the kind of problems users have and what a solution to those problems might be. As user-focused practitioners, we need to validate that the problems we think users have are real. We do this through the user interview. User interviews were introduced in the 1990s as “context interviews” by Wixon, Holtzblatt and Knox. In these interviews, you observe user behavior in a natural environment, and ask users questions in an attempt to understand and verify what motivates their behavior.

The term “user interview” is a bit broader than “context interview”, and really implies that users will be asked a series of questions. It’s important to ensure that the spirit of the context interview is maintained, however, and that questions are focused on previous or observed behavior rather than what-if scenarios that ask users to predict what they will do in the future.


Lisa: Why bother conducting a user interview?

Tricia: We have assumptions about our users’ problems, and we need to validate whether those assumptions are accurate. Oftentimes, teams will build directly off their assumptions without having first proven that a problem is real by observing user behavior and asking questions.

Where quantitative data tells us that there is a problem, user interviews and qualitative research tell us what the problem is and why it exists. If we see one of our conversion metrics decline, digging into the quantitative data will only tell us so much. And if we know and understand our market, we can even make inferences about what might have changed. But without talking to people, we can’t truly understand how they feel about the change. The last thing we want is to be forced to run a hundred tests when a few conversations with users could narrow the field of possible and probable solutions down to something much more manageable (and time-efficient).


Lisa: What if you conduct a bad user interview? What are the consequences for your team and your product?

Tricia: The biggest risk of a poorly conducted interview is the potential for introducing bias into our data. Bias leads us to believe we understand which problems users have when, in reality, this may not be true. There are many biases that come into play, and the ones I’ve encountered most are confirmation bias and diagnosis bias. Confirmation bias shows up when we focus our line of questioning on the behavior we expect to see or on proving our assumptions. Diagnosis bias occurs when we start interviewing, make a judgment, and then spend the rest of the interview asking questions that confirm that judgment. By injecting bias into our research, we don’t get a truthful answer as to whether a problem is real and whether it’s worth solving.

Lisa: How can our readers conduct better user interviews? What are your top three tips?

Tricia: First, build rapport with your interviewee. Remember that you’re asking someone to be a bit vulnerable and tell you why they’re making the decisions they are. You’re also asking them to be honest. Not everyone is introspective enough to reveal that much about themselves and how they feel without trusting the person on the other side. Start interviews by easing in and assuring participants that there is no right or wrong answer, that they won’t hurt your feelings, and that you’re simply trying to understand how they use a product. Then follow up with softball questions that are easy for them to answer.

  • What types of technology do you own?
  • Tell me about your favorite app. Why do you like it?
  • Tell me about the last time you went grocery/clothes/car/furniture/etc. shopping.

The goal is to get them to start opening up early so that when you get to the meatier questions, they feel more comfortable revealing their deeper feelings.

Second, stop talking. When you ask people to answer questions, you need to give them the space to do so. When you hear a pause after someone has stopped talking, count to five; more often than not, they will continue talking, both because they have more to say and to fill the silence. One of your goals during the interview is for users to reveal insight into their behavior, and your silence leaves room in the conversation for them to do so.

Third, don’t pitch your product or idea. We’re not salespeople on a grassroots mission to get five more people to use our product. We are researchers looking to learn from our users so that we can build solutions that solve their problems. Pitching our ideas only serves to make us feel better about our assumptions; it doesn’t prove or disprove them. Pause and reevaluate your line of questioning if you find yourself saying things like:

  • The product does that. Here’s how you do it.
  • We’re planning to build feature X. Would that be something you would like?
  • We’ve built this great new product that could make your life easier. Does that sound like something you’d be interested in?


I love talking about how to do user interviews. If you want to read more, I’ve also written a post for General Assembly on this topic.

Hey, What’s Your Problem?

There are a lot of failed products out there, but we never hear about them*. No one markets their failures; they only market their successes. But these failures do exist, and they’ve failed because they didn’t solve a real user problem. It’s not that the product wasn’t innovative or well built. Rather, it’s that earlier in the development process, no one asked enough questions, both of users and of themselves.

At the first sign of a problem, we have a tendency to jump right to finding a solution for it. The urge to fix what appears to be broken is prevalent throughout our lives.

Imagine this: You’re talking to a friend about some problem you have, and you get halfway through describing your problem when your friend pipes in with, “Why don’t you just do X?” You start to explain to your friend that that won’t work because of Y. Your friend responds by telling you that you should instead do Z.

This cycle continues until you both get tired, and you leave the conversation unsatisfied, feeling as though you volleyed words back and forth but never really heard one another. You are left trying to figure out how to solve your problem on your own.

Users experience a similar feeling of dissatisfaction when we put products in front of them that might be lovely solutions to a problem, but don’t quite solve their problem. If we’re lucky, users will tell us how we got it wrong. In most cases, though, they’ll just stop using our product, and we’re stuck trying to figure out why.

But this doesn’t have to happen, so long as you approach your product with a clean slate and no pre-defined solution. Have no agenda other than to truly understand what problems are worthy of your efforts.

Re-thinking Product Management: A New Approach

The first step in shifting toward this alternative method of product development is understanding the difference between a problem and a solution. A problem is a goal or objective that a user would like to achieve, but can’t. A solution is a product or process that can help a user achieve that goal or objective.

Oftentimes, when I ask a team which problems they’re solving, I hear them describe a solution that they want to build, but attempt to frame it as the problem: “Sally can’t take pictures.” “Michael wants to edit photos.” “Lucy needs more font options.” When this solution-focused work is taken to teams, it jeopardizes any alternative, innovative solutions the team could develop.

Imagine the solutions that you could come up with if you knew that what Sally really wanted to do was save and share memories of her life, rather than focusing on the fact that she can’t take pictures. What if you knew that the real reason that Michael wants to edit his photos is because he has an unsteady hand and always takes blurry pictures?

The second step in shifting toward this new methodology is to dig deeper when talking to users and collaborating with your team. Consider, for example, the issue that Sally can’t take pictures. If that “problem” is brought to your team, the conversation that follows might sound something like this:

Sally can’t take pictures.
Okay, so we need to put a camera in the app.

Well, she already has a camera in the OS. Do we really need to add a camera?
Okay, we’ll link out to the existing camera and build a way for her to see the shutter inside our app.

We shouldn’t need to link to existing software. That seems like a disjointed experience. Are there any other solutions?
Well, we could put a camera in the app.


But what if you found your inner two-year-old, and asked why?

Sally can’t take pictures.
Why does she need to?

Because she has kids.
Why does she want to take pictures of them?

She wants to remember what they looked like and did as children so that she can show them when they get older.
Why does she want pictures?

Because she wants to easily share these moments with her family members.

In that exchange, we learn so much more about our user. It’s then possible for us to stop focusing solely on adding a camera to our product and instead figure out a way for Sally to share her memories with family members.

It’s common in product development to get caught up in the solutions we love or the ideas we have. This method, unfortunately, leads to an abundance of features and products that users don’t want, can’t use, and/or don’t need. If we focus on understanding the underlying problems that people have, we can build stronger, more meaningful, and more successful products we feel good about.


*There’s a museum of the most popular product failures, and at least one website for those that probably never made it to your computer screen.


Originally written for the blog on L4 Digital.com