Overview
Prioritization is the single most important skill a product manager can master. With infinite feature requests and finite engineering capacity, the framework you choose directly affects what ships. And what doesn't.
Three frameworks dominate the PM space: RICE, ICE, and MoSCoW. Each takes a fundamentally different approach to the same problem, and this guide breaks down when each one shines and where it falls short. (For hands-on tools, the RICE Calculator lets you score features interactively, and our prioritization guide covers the full decision-making process.)
Quick Comparison
| Dimension | RICE | ICE | MoSCoW |
|---|---|---|---|
| Scoring type | Numeric (formula) | Numeric (average) | Categorical (buckets) |
| Factors | Reach, Impact, Confidence, Effort | Impact, Confidence, Ease | Must, Should, Could, Won't |
| Setup time | Medium (needs data) | Low (gut + data) | Low (workshop) |
| Best for | Data-driven teams, growth features | Fast screening, early-stage | Stakeholder alignment, fixed scope |
| Team size | 5+ people | 1-10 people | Any size |
| Objectivity | High | Medium | Low (consensus-based) |
| Granularity | High (continuous scores) | Medium (0-10 scale) | Low (4 buckets) |
RICE Scoring: Deep Dive
RICE was developed at Intercom and scores features using a formula:
RICE Score = (Reach x Impact x Confidence) / Effort
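To make the formula concrete, here's a minimal scoring helper in Python. It's a sketch, not Intercom's implementation; the parameter scales (impact on a 0.25-3 scale, confidence as a fraction, effort in person-months) follow common RICE conventions, and the example values are made up.

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Compute a RICE score.

    reach      -- users affected per time period (e.g., per quarter)
    impact     -- per-user impact, commonly on a 0.25-3 scale
    confidence -- how sure you are of the estimates, as a fraction (0.0-1.0)
    effort     -- person-months required
    """
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort

# Hypothetical feature: 4,000 users/quarter, medium impact, 80% confidence, 2 person-months
print(rice_score(reach=4000, impact=1, confidence=0.8, effort=2))  # 1600.0
```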
Strengths
- Most objective of the three. Forces you to quantify reach and effort with real data
- Reduces bias because each dimension is scored independently
- Scales well across large backlogs (100+ items) where you need clear rank-ordering
- Confidence factor explicitly discounts uncertain estimates, a dimension MoSCoW ignores entirely
Weaknesses
- Slow to set up. Requires data on reach (how many users per quarter?) and effort (person-months)
- False precision. Teams treat the numeric output as gospel when inputs are often estimates
- Ignores strategic alignment. A high-RICE feature may not match your product vision
- Effort estimation is hard. Engineering estimates are notoriously unreliable
When to Use RICE
- You have usage analytics to estimate reach accurately
- Your team is 5+ PMs/engineers and needs a shared, defensible scoring system
- You're prioritizing growth features where reach and impact are measurable
- You want to reduce HiPPO bias (Highest Paid Person's Opinion)
ICE Scoring: Deep Dive
ICE was popularized by Sean Ellis (of "growth hacking" fame) and scores features on three dimensions:
ICE Score = (Impact + Confidence + Ease) / 3
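A minimal sketch of that averaging math, using the 0-10 scale mentioned in the comparison table (the function name is ours, not part of any official ICE tooling):

```python
def ice_score(impact: float, confidence: float, ease: float) -> float:
    """Average the three 0-10 scores into a single ICE score."""
    return (impact + confidence + ease) / 3

# Two very different bets can land on the same average (see the weaknesses below).
print(ice_score(10, 1, 10))  # 7.0
print(ice_score(7, 7, 7))    # 7.0
```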
Strengths
- Fast. You can score a backlog of 50 items in under an hour
- Low data requirement. Works well with gut feeling supplemented by light data
- Great for experiments. Originally designed for growth experiments where speed matters
- Easy to explain to non-PM stakeholders
Weaknesses
- Highly subjective. Without guardrails, one person's "8 Impact" is another's "5"
- No reach dimension. A feature that impacts 100 users can score the same as one impacting 100,000
- Ease ≠ Effort. "Easy to build" and "low effort" can mean different things
- Averaging masks tradeoffs. A 10/1/10 and a 7/7/7 both score 7, but they're very different bets
When to Use ICE
- You're at an early-stage startup where speed of decision beats precision
- You're running growth experiments and need to quickly rank 20+ test ideas
- You have a small team (1-3 PMs) and don't need organizational consensus
- You want a lightweight screen before applying a more rigorous framework
MoSCoW: Deep Dive
MoSCoW was created by Dai Clegg while working on rapid application development at Oracle and later formalized within the DSDM framework. It categorizes features into four buckets:
- Must Have. Non-negotiable for launch; the product fails without these
- Should Have. Important but not critical; can be delayed to the next cycle
- Could Have. Nice-to-have; included only if time and resources allow
- Won't Have (this time). Explicitly out of scope for this cycle
Strengths
- Stakeholder alignment. Everyone in the room agrees on what "must" ship
- Clear communication. Executives instantly understand "Must / Should / Could / Won't"
- Scope management. Explicitly saying "Won't Have" prevents scope creep
- Works for any team size. From solo PMs to 50-person program teams
Weaknesses
- Everything becomes a Must. Without discipline, stakeholders push everything into Must Have
- No ranking within buckets. You know something is a "Should" but not whether it's the first or last Should
- Consensus-driven. Can be slow and political in large organizations
- Ignores effort. A Must Have that takes 6 months isn't differentiated from one that takes 2 days
When to Use MoSCoW
- You're planning a fixed-scope release (e.g., v2.0 launch, quarterly release)
- You need executive/stakeholder buy-in on priorities
- Your team is cross-functional and needs shared language across PM, engineering, design, and business
- You're doing sprint planning and need to triage quickly
Decision Matrix: Which Framework to Choose
Choose RICE when:
- You have quantitative data on user reach and feature impact
- You need to defend priorities to skeptical stakeholders with numbers
- Your backlog has 50+ items that need precise rank-ordering
- You're working on mature products where you can measure outcomes
Choose ICE when:
- You need to move fast and can't spend hours gathering data
- You're evaluating growth experiments or quick wins
- Your team is small and trusts each other's judgment
- You want a first pass to narrow the list before a deeper analysis
Choose MoSCoW when:
- You need organizational alignment more than numeric precision
- You're planning a specific release with a fixed timeline
- Stakeholder buy-in is the bottleneck, not lack of data
- You need to explicitly de-scope features (Won't Have is powerful)
Combining Frameworks
The most effective teams don't pick just one. Here's a powerful combination:
- Start with MoSCoW to align the organization on what's in and out of scope
- Apply RICE or ICE to rank features within the "Must Have" and "Should Have" buckets
- Re-evaluate quarterly as new data changes your confidence scores
This gives you both strategic alignment (MoSCoW) and tactical precision (RICE/ICE).
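Here's a rough sketch of that workflow in Python, assuming each feature already has a MoSCoW bucket from the alignment workshop and a RICE score from your scoring pass. The Feature class, bucket labels, and field names are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass

MOSCOW_ORDER = {"Must": 0, "Should": 1, "Could": 2, "Won't": 3}

@dataclass
class Feature:
    name: str
    bucket: str        # MoSCoW bucket agreed in the alignment workshop
    rice_score: float  # computed separately, e.g., with rice_score() above

def prioritize(features: list[Feature]) -> list[Feature]:
    """Order by MoSCoW bucket first, then by RICE score within each bucket."""
    in_scope = [f for f in features if f.bucket != "Won't"]
    return sorted(in_scope, key=lambda f: (MOSCOW_ORDER[f.bucket], -f.rice_score))

backlog = [
    Feature("SSO login", "Must", 1600.0),
    Feature("Dark mode", "Could", 320.0),
    Feature("Onboarding checklist", "Must", 900.0),
    Feature("CSV export", "Should", 450.0),
]
for f in prioritize(backlog):
    print(f.bucket, f.name, f.rice_score)
```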
Framework Comparison Cheat Sheet
- Speed of setup: ICE > MoSCoW > RICE
- Objectivity: RICE > ICE > MoSCoW
- Stakeholder communication: MoSCoW > ICE > RICE
- Scalability (large backlogs): RICE > ICE > MoSCoW
- Works without data: MoSCoW > ICE > RICE
- Prevents bias: RICE > MoSCoW > ICE
Bottom Line
There's no universally "best" framework, only the best framework for your context. If you're a data-rich growth team, start with RICE: read the RICE framework guide for implementation details and try the RICE Calculator to score your backlog. If you're an early-stage team optimizing for speed, use ICE. If you need to align a room of stakeholders, use MoSCoW.
The biggest mistake PMs make isn't choosing the wrong framework. It's not choosing one at all.