Adding features to your product roadmap can feel like a gamble. You’re devoting time and resources, hoping your efforts will pay off with increased revenue or customer satisfaction.
But not all feature requests are exciting, and not all features that seem promising will prove widely impactful. Some projects are far less glamorous but still mission-critical to the product’s core functionality. Other features may delight customers but do little to move the needle on overall business goals.
A decision-making framework can make feature prioritization more objective, helping you focus on the right questions as you map out your feature pipeline. Your process to prioritize features should be easy to understand, align with your product’s complexity and maturity, and incorporate the data you have available.
Here, we explain three common frameworks product teams use to determine feature prioritization and why you might choose one method over another.
The MoSCoW method
The MoSCoW method creates a hierarchy for your feature requests based on their importance to the product. This framework divides features into four categories:
- Must Have (M) features are required for the product’s success
- Should Have (S) features are important, but not necessary
- Could Have (C) features would be “nice to have”
- Will Not Have (W) features aren’t a priority for the project
If a feature is a Must Have (M), you’ll prioritize it sooner than a Should Have (S) or Could Have (C). The framework also helps with resource allocation, since you’ll devote resources to the Must Have (M) features within a release first, before moving to other features.
Pros of the MoSCoW method:
The MoSCoW method is easy to understand and implement. As you review a list of features, you flag or tag them in your project management tool. Then, based on their MoSCoW designation, you add features to your product roadmap.
The MoSCoW framework also helps ensure that you build a minimum viable product (MVP), since you’re prioritizing the Must Have (M) features.
Cons of the MoSCoW method:
Stakeholders have to agree on the definition of each category. What some may consider Should Have (S), others may consider Could Have (C). For example, if a competitive product has the feature, does that make it a Must Have (M) for the product’s success?
You’ll need a clear guide — otherwise, categorization will seem subjective. The method loses its effectiveness without a baseline consensus.
The Kano Model
The Kano Model prioritizes features based on customer satisfaction. Developed by researcher Noriaki Kano, it categorizes features into three groups:
- Basic Features: Essential functions that customers expect from the product
- Performance Features: Features that make products easier to use or increase customer satisfaction
- Delighters: Features that are unexpected and make the customers happy
Under the Kano Model, customers will be dissatisfied if a product lacks Basic Features, so these should be considered table stakes and necessary. Performance Features and Delighters can increase revenue and market share, making them good to have but not necessarily essential.
The Kano Model can work really well for new products if you’re conducting a lot of interviews with potential users. You can gain an understanding of what users deem critical before launching.
Pros of the Kano Model
Because the Kano framework puts users front and center, it works well for products that aren’t overly complex. With more users and varying use cases, it becomes harder to determine what’s a Basic Feature versus a Performance Feature. What’s Basic to some may be Performance or Delighters to others.
Product teams that use the Kano method deeply understand customer needs based on their research and conversations with customers. They assume that by taking this customer-centric approach, revenue will follow.
Cons of the Kano Model
You need access to a lot of customer data in order to use the Kano method. Interviews are time-consuming to conduct, and surveys take effort to design and distribute. If you don’t have a clear understanding of what users want, you can’t assign features to the appropriate categories.
The RICE method
The RICE method is the most quantitative of the three frameworks, assigning a score to each feature based on the following criteria:
- Reach: How many users will be affected
- Impact: The anticipated positive impact of each feature
- Confidence: How much confidence you have in reach, impact, and ability to deliver
- Effort: The amount of work involved to deliver the feature
To use the RICE method, you first develop a point system and assign a numeric value to each criterion for every feature. The RICE score is calculated by multiplying Reach, Impact, and Confidence, then dividing by Effort: RICE = (Reach × Impact × Confidence) / Effort. Based on how you assign points to the variables, you’ll decide what a “good” RICE score is and let the scores determine your prioritization.
For example, Intercom describes the following RICE point system:
- Reach: The number of users impacted (i.e. 2,000)
- Impact: Massive Impact = 3 points, High Impact = 2 points, Medium Impact = 1 point, Low Impact = 0.5 points, Minimal Impact = 0.25 points
- Confidence: High = 100%, Medium = 80%, Low = 50%
- Effort: Estimated in “person-months,” or the amount of work one person can do in one month (two weeks of planning plus six weeks of design would be 2 person-months)
While you may be able to estimate the number of users impacted based on your own analytics, you’ll need to develop criteria for the others. You may opt to assign different points for Impact than Intercom’s example, like using whole numbers on a scale of 1 to 5 points instead of decimals. Or you might use 75% instead of 80% as a measure of Medium Confidence.
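To make the calculation concrete, here is a minimal sketch of RICE scoring in Python. The point values follow the Intercom-style system described above, but the feature names, reach numbers, and estimates are made-up examples, not data from any real product.

```python
# A minimal RICE scoring sketch. The point values mirror the
# Intercom-style system above; the features are illustrative only.

def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach * Impact * Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# Each feature: (reach in users, impact points, confidence %, effort in person-months)
features = {
    "export_to_csv": (2000, 1.0, 0.8, 2),  # medium impact, medium confidence
    "sso_login":     (500,  3.0, 1.0, 4),  # massive impact, high confidence
    "dark_mode":     (3000, 0.5, 0.5, 1),  # low impact, low confidence
}

scores = {name: rice_score(*args) for name, args in features.items()}

# Sort highest score first to produce a prioritized list
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.0f}")
```

Note how a high-impact feature (sso_login) can still rank last once its four person-months of effort divide down the score, which is exactly the tradeoff the framework is designed to surface.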
Pros of the RICE method
The RICE calculation puts a clear number on each project. In other methods, teams may be forced to choose between multiple Must Have (M) features (MoSCoW method) or Basic Features (Kano method). With the RICE framework, the points allow for more granular ranking, letting you sort features according to their unique scores.
The RICE framework also takes resources into account. A feature with Medium Impact and Medium Confidence that requires a lot of Effort isn’t as good a bet as a feature with High Impact and Confidence and lower Effort. Considerations like this are reflected directly in the points of the RICE framework, making it easy to justify decisions.
Cons of the RICE method
Since you need to assign points to each feature request, RICE can be time-consuming. You risk putting a lot of time into calculating points for small feature requests. You’re also collecting a lot of data, some of which may be slightly more subjective, such as Impact and Confidence. If points aren’t applied consistently, the RICE method is less valuable.
Core considerations for feature prioritization
Product management has come a long way from guesswork. With any framework you use for feature prioritization, you can collect a lot of background information to support your decision.
There are some additional things to keep in mind as you develop a framework for prioritizing features.
Data doesn’t lie. You can assess how much customers currently use different features by looking at analytics or data within your product. Adding new features to highly used areas makes you more likely to delight the customer. On the other hand, an area might be underutilized because it’s missing some basic functionality that the customers are expecting, and spotting those opportunities can help inform priorities.
Understanding customer expectations is an important part of assessing feature value. You can collect this information via in-app or email surveys, asking customers what they like or want to see improved. You can also collect feedback when customers cancel or leave after completing a free trial to learn how the product didn’t meet their expectations.
With more mature or complex products, you may also group feature requests into a larger project. For example, let’s say you’re using the MoSCoW method and have an HR product. The product has three core areas of functionality: time tracking, employee reviews, and applicant tracking.
You may have Must Have (M) features for all three areas, but choose to tackle only time tracking in one release. You would prioritize the Must Have (M) features, plus a few Should Haves (S). Concentrating changes in a single functional area is a better use of resources and makes end-user adoption easier.
Though harder to measure, you also have to consider the costs of not doing a project or adding a new feature. Are you losing sales? Do competitors offer the feature, making customers consider it essential and leaving you at a competitive disadvantage? The loss in revenue or potential revenue may push a feature further up the product roadmap.
Whether you’re using a framework that has a direct calculation for resources (like RICE) or not, you have to consider your available resources. Your team has bandwidth, and every feature is a tradeoff.
If you hire additional team members, you have to consider ramp-up time, and if you lose people, you may have to re-prioritize. You’ll also have to think about the start and end dates of each project, knowing that projects may compete for resources and it won’t always be possible to tackle two things simultaneously.
Of course, the best feature prioritization framework for your startup is the one that makes sense for your team and your business. That may be one of the methods described — or it may be one you create, using a combination of factors. You may take the basics of MoSCoW and add in some points based on resources and effort from RICE.
Whatever framework you implement, make sure you have guidance that explains your categories, points, and/or data sources. Ultimately, the framework should make feature prioritization easier if you’re applying it consistently.
Anna Burgess Yang