Best Prioritization Frameworks for Product Managers (RICE, ICE, MoSCoW)

Compare the best product prioritization frameworks: RICE, ICE, MoSCoW, Kano, and weighted scoring. Includes when to use each, templates, and real examples.

prioritization · frameworks · decision-making · 12 min read

Why Prioritization Is Hard

Every product team has more ideas than capacity. Prioritization is choosing which few things to do from the many you could do. It's hard because you're making tradeoffs between:

  • Uncertain outcomes
  • Competing stakeholder interests
  • Different time horizons

Frameworks help by providing structure and shared vocabulary. They don't eliminate subjectivity—someone still has to score the factors. But they make the reasoning explicit and comparable across projects.


RICE: The Industry Standard

RICE scores projects by:

Factor       Description        Scale
Reach        Users affected     Number
Impact       Effect per user    0.25–3
Confidence   Certainty          Percentage
Effort       Investment         Person-weeks

Formula: (R × I × C) ÷ E = Score

Intercom developed RICE, and it's become the default at many tech companies. It works well for comparing features on a roadmap because it forces you to estimate each component explicitly.
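
As a worked example, here's a minimal sketch of the calculation in Python; the backlog items and all estimates are hypothetical.

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Score = (Reach × Impact × Confidence) ÷ Effort."""
    return (reach * impact * confidence) / effort

# Hypothetical backlog items: (name, reach per quarter, impact 0.25-3,
# confidence 0-1, effort in person-weeks).
backlog = [
    ("Onboarding checklist", 5000, 1.0, 0.8, 4),
    ("Dark mode", 2000, 0.5, 1.0, 2),
    ("AI assistant", 8000, 2.0, 0.5, 12),
]

# Rank highest score first.
for name, reach, impact, confidence, effort in sorted(
    backlog, key=lambda f: -rice_score(*f[1:])
):
    print(f"{name}: {rice_score(reach, impact, confidence, effort):.0f}")
```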

The Confidence Multiplier

The Confidence multiplier is underappreciated: a feature scored at 50% confidence gets half the score of one you're certain about, which penalizes speculative ideas appropriately.

When RICE Struggles

RICE struggles with strategic bets that have low confidence but high upside. A moonshot might score lower than an optimization even if it's the right thing to do.

Use RICE for the roadmap, but don't let it override strategic conviction.


ICE: Quick and Flexible

ICE scores Impact, Confidence, and Ease on 1-10 scales, then multiplies them. Sean Ellis popularized it for growth experiments.

Strengths

  • Simpler than RICE—no Reach calculation
  • Fast to apply
  • Great for rapid prioritization

Weaknesses

  • Two people might score the same feature very differently
  • Less rigorous for longer projects

Use ICE when you need speed and have a small group who calibrates similarly. ICE is better for experiments than features because experiments are fast and cheap. If your ICE score is wrong, you'll learn quickly.
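
A minimal sketch of ICE in practice, assuming a small batch of growth experiments; the experiments and scores below are invented for illustration.

```python
# Hypothetical experiments scored 1-10 on Impact, Confidence, Ease.
experiments = {
    "Referral prompt after checkout": (7, 6, 8),
    "Redesign pricing page": (8, 4, 3),
    "Resend abandoned-cart email": (5, 8, 9),
}

def ice_score(impact: int, confidence: int, ease: int) -> int:
    # ICE multiplies the three 1-10 scores; higher is better.
    return impact * confidence * ease

for name, scores in sorted(experiments.items(), key=lambda kv: -ice_score(*kv[1])):
    print(f"{ice_score(*scores):>4}  {name}")
```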


MoSCoW: Scope Management

MoSCoW categorizes requirements:

Category     Meaning
Must have    Non-negotiable—product doesn't function without it
Should have  Important but not blocking
Could have   Nice to have
Won't have   Explicitly out of scope

It's not a scoring system—it's a scoping tool. Use it when defining MVPs or negotiating fixed-deadline releases.
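
Because MoSCoW is a label rather than a score, it maps naturally onto a simple category per backlog item. A minimal sketch, with hypothetical requirements:

```python
from collections import defaultdict
from enum import Enum

class MoSCoW(Enum):
    MUST = "Must have"
    SHOULD = "Should have"
    COULD = "Could have"
    WONT = "Won't have"

# Hypothetical scoping for a checkout MVP.
requirements = [
    ("Accept card payments", MoSCoW.MUST),
    ("Order confirmation email", MoSCoW.SHOULD),
    ("Gift wrapping option", MoSCoW.COULD),
    ("Loyalty points", MoSCoW.WONT),  # explicitly documented as out of scope
]

by_category = defaultdict(list)
for item, category in requirements:
    by_category[category].append(item)

for category in MoSCoW:
    print(f"{category.value}: {', '.join(by_category[category]) or '(none)'}")
```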

The Power of MoSCoW

Stakeholders who want everything suddenly have to choose when faced with Must vs. Should. The "Won't have" category is particularly valuable—documenting what's out of scope prevents scope creep.

MoSCoW breaks down when everything is "Must have." That's a failure of discipline, not of the framework. A real must-have is something the product cannot function without; most features are Should or Could.


Kano: Customer Satisfaction

Kano categorizes features by their effect on customer satisfaction:

Type               Description                        Example
Basic needs        Expected; dissatisfying if absent  A car having brakes
Performance needs  More is better (linear)            Fuel efficiency
Delighters         Exciting surprises                 Tesla's easter eggs

Strategic Application

  • A product that only does Basic needs is table stakes—no differentiation
  • A product that only invests in Delighters might frustrate users with missing basics
  • The best products nail basics, invest in performance, and sprinkle delighters

Use Kano when thinking about competitive positioning: What are table stakes in your market? Where can you differentiate?


Value vs. Effort: The 2x2 Matrix

Plot features on a 2x2 grid:

              Low Effort      High Effort
High Value    Quick Wins ✅   Big Bets 🎯
Low Value     Fill-ins 🤷     Money Pits ❌

This matrix is great for visual communication with stakeholders. It's intuitive and generates productive discussion.
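
A minimal sketch of the quadrant assignment, assuming rough 1-10 estimates for value and effort split at the midpoint; the features are hypothetical.

```python
def quadrant(value: float, effort: float, midpoint: float = 5.0) -> str:
    """Map rough 1-10 value/effort estimates onto the 2x2 grid."""
    if value >= midpoint:
        return "Quick Win" if effort < midpoint else "Big Bet"
    return "Fill-in" if effort < midpoint else "Money Pit"

# Hypothetical features: (name, value, effort)
for name, value, effort in [
    ("New onboarding flow", 9, 3),
    ("SSO support", 8, 7),
    ("Settings page polish", 3, 2),
    ("Legacy rewrite", 2, 9),
]:
    print(f"{name}: {quadrant(value, effort)}")
```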

Limitations

The weakness is that everything clusters in the middle. Value and effort are both uncertain, so precise placement is false precision.

Use the matrix for discussion, not as a decision algorithm.


Weighted Scoring: Customizable

Create your own framework by:

  1. Identifying the factors that matter for your context
  2. Weighting them
  3. Scoring each option

A growth team might weight revenue impact heavily; a platform team might weight technical debt reduction.
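
A minimal sketch of the mechanics, using hypothetical factors, weights, and scores:

```python
# Hypothetical weights for a growth team; weights should sum to 1.0
# and come out of the alignment discussion described below.
weights = {"revenue_impact": 0.5, "user_engagement": 0.3, "tech_debt_reduction": 0.2}

# Each option scored 1-10 per factor.
options = {
    "Annual billing discount": {"revenue_impact": 9, "user_engagement": 3, "tech_debt_reduction": 1},
    "Activation email series": {"revenue_impact": 5, "user_engagement": 8, "tech_debt_reduction": 2},
}

def weighted_score(scores: dict[str, float]) -> float:
    # Sum of weight × score across all factors.
    return sum(weights[factor] * score for factor, score in scores.items())

for name, scores in sorted(options.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.1f}")
```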

Weighted scoring is flexible but requires alignment on weights. If stakeholders disagree on whether revenue or engagement matters more, the scoring debate becomes a proxy for the strategic debate. That's actually useful—it surfaces the real disagreement.


Opportunity Scoring

Tony Ulwick's framework from Outcome-Driven Innovation:

  1. Survey users on importance and satisfaction for each job-to-be-done
  2. Calculate: Opportunity = Importance + (Importance - Satisfaction)
  3. High scores = important jobs with low satisfaction = best opportunities

This approach grounds prioritization in user data rather than internal opinion. It's more rigorous but requires running the research. Use it for strategic planning when you have time to invest.
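
A minimal sketch of the calculation, using hypothetical survey averages on 1-10 scales; note that Ulwick's original formula clamps the satisfaction gap at zero, as flagged in the comment.

```python
# Hypothetical survey averages per job-to-be-done, both on 1-10 scales.
jobs = {
    "Export reports to spreadsheet": {"importance": 9, "satisfaction": 3},
    "Customize dashboard layout": {"importance": 6, "satisfaction": 7},
}

def opportunity(importance: float, satisfaction: float) -> float:
    # Opportunity = Importance + (Importance - Satisfaction), as above.
    # Ulwick's original variant clamps the gap at zero:
    # importance + max(importance - satisfaction, 0).
    return importance + (importance - satisfaction)

for job, scores in sorted(jobs.items(), key=lambda kv: -opportunity(**kv[1])):
    print(f"{job}: {opportunity(**scores):.0f}")
```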


When Frameworks Fail

Frameworks fail when:

  • You have strategic information they can't capture (a partnership, regulatory change, founder conviction)
  • The inputs are garbage (nobody knows the effort, impact is made up)
  • Gaming the system matters more than honest assessment

They also fail for truly innovative work. RICE can't tell you whether to build AWS. These are conviction-driven bets where the usual inputs don't apply. Frameworks are for managing known trade-offs, not discovering unknowns.


Framework-Free Prioritization

Sometimes the best prioritization is no framework. A skilled PM looks at the options, synthesizes what they know about users, business, and technology, and makes a judgment call.

The Danger

The danger is post-hoc rationalization—deciding what you want to build and then justifying it.

Good judgment-based prioritization is explicit about reasoning even if not framework-structured: "I'm prioritizing X because Y, even though Z argues against it."


Making Frameworks Work

  1. Pick one framework and use it consistently so your team develops calibration
  2. Compare scores across projects to spot outliers and challenge assumptions
  3. Revisit estimates after shipping—did the impact match the prediction?
  4. Expect your estimates to improve over time as that feedback accumulates

Remember: frameworks are communication tools. A RICE score isn't truth; it's a structured argument. The value is in the conversation it enables, not the number itself.
