Prioritization · Beginner · 18 min read

RICE Framework: Score and Prioritize Features

Learn the RICE prioritization framework with the scoring formula, worked examples, a spreadsheet template, and common mistakes to avoid.

Best for: Product managers who need a quantitative, repeatable method for prioritizing features and initiatives
Published 2024-04-15 · Updated 2026-03-20

Quick Answer (TL;DR)

RICE is a prioritization framework that scores features using four factors: Reach (how many users are affected), Impact (how much each user is affected), Confidence (how sure you are of your estimates), and Effort (how much work it takes). The formula is (Reach x Impact x Confidence) / Effort = RICE Score. Higher scores indicate higher priority. It was popularized by Intercom and is one of the most widely adopted quantitative prioritization methods in product management. Try the free RICE Calculator to score your own backlog items instantly. For help choosing between prioritization frameworks, see our RICE vs ICE vs MoSCoW comparison or RICE vs WSJF analysis.


What Is the RICE Prioritization Framework?

The RICE framework is a scoring model that helps product teams make objective decisions about which features, projects, or initiatives to pursue. Developed and popularized by Sean McBride at Intercom, RICE replaces gut-feel prioritization with a structured, repeatable formula that considers both the potential upside and the cost of each initiative.

RICE stands for:

  • Reach. How many people will this impact in a given time period?
  • Impact. How much will it impact each person?
  • Confidence. How confident are you in your estimates?
  • Effort. How much time and resources will it take?

The beauty of RICE lies in its simplicity. By reducing prioritization to a single numerical score, it gives teams a common language for comparing wildly different initiatives, from a small UX tweak to a major platform overhaul.

The RICE Formula Explained

The core formula is straightforward:

RICE Score = (Reach x Impact x Confidence) / Effort
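In code, the formula is a one-liner. Here's a minimal sketch (the function name and signature are illustrative, not from any particular library):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Compute a RICE score.

    reach      -- people affected per time period (e.g. users/quarter)
    impact     -- 0.25, 0.5, 1, 2, or 3
    confidence -- decimal between 0 and 1 (e.g. 0.8 for 80%)
    effort     -- person-months (must be positive)
    """
    if effort <= 0:
        raise ValueError("Effort must be positive")
    return (reach * impact * confidence) / effort

# Example: 6,000 users/quarter, High impact, 80% confidence, 3 person-months
print(rice_score(6000, 2, 0.8, 3))  # 3200.0
```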

Let's break down each component with precise definitions so your team scores consistently.

Reach

Reach measures how many users or customers will be affected by an initiative within a defined time period (typically one quarter). Use real data wherever possible.

How to estimate Reach:

  • Pull from product analytics: DAU/MAU data, funnel conversion rates, segment sizes
  • Use customer support ticket volume for pain-point-driven features
  • Reference market research for new-market initiatives

Examples:

| Initiative | Reach Estimate | Source |
| --- | --- | --- |
| Redesign onboarding flow | 5,000 new signups/quarter | Signup analytics |
| Add CSV export | 800 users requesting/quarter | Support tickets + feature requests |
| Mobile app push notifications | 12,000 active mobile users/quarter | Mobile analytics |
| Enterprise SSO integration | 50 enterprise accounts/quarter | Sales pipeline |

Always express Reach as a number of people or accounts per time period. Avoid vague terms like "a lot" or "most users."

Impact

Impact measures how much this initiative will move the needle for each person reached. Since individual impact is harder to quantify than reach, RICE uses a standardized scale:

| Score | Label | Meaning |
| --- | --- | --- |
| 3 | Massive | Transforms the user experience or eliminates a critical blocker |
| 2 | High | Significant improvement that meaningfully changes behavior |
| 1 | Medium | Noticeable improvement |
| 0.5 | Low | Minor improvement |
| 0.25 | Minimal | Barely noticeable |

Guidelines for scoring Impact:

  • 3 (Massive): Slack adding threaded messages. It fundamentally changed how teams communicated and reduced noise in channels.
  • 2 (High): Spotify adding offline downloads. A significant feature that changed user behavior and drove subscriptions.
  • 1 (Medium): Adding keyboard shortcuts to an existing workflow. Helpful, used regularly, but not a major change.
  • 0.5 (Low): A tooltip that clarifies a confusing label.
  • 0.25 (Minimal): A color change on a non-critical UI element.

Tie Impact to a specific metric you're trying to move: activation rate, retention, NPS, revenue, or time-to-value.

Confidence

Confidence is a percentage that reflects how sure you are about your Reach and Impact estimates. This is the factor that keeps teams honest. It penalizes wishful thinking.

| Score | Label | Criteria |
| --- | --- | --- |
| 100% | High | Backed by quantitative data (analytics, A/B test results, large-sample research) |
| 80% | Medium | Supported by qualitative data (user interviews, surveys, competitive analysis) |
| 50% | Low | Based on intuition, anecdotal feedback, or very small sample sizes |

Rules of thumb:

  • If you have strong analytics data supporting both Reach and Impact, use 100%.
  • If you have user interviews or survey data but limited quantitative evidence, use 80%.
  • If you're largely guessing based on gut instinct or a single customer request, use 50%.
  • Never go below 50%. If your confidence is lower than 50%, do more research before scoring rather than simply assigning a low confidence number.
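These rules of thumb can be encoded so a scoring script rejects anything without a recognized evidence tier. A sketch, with illustrative tier names:

```python
# Map the evidence you have to the confidence tier it justifies.
# Tier names are illustrative; the 100/80/50 values follow the rules above.
CONFIDENCE_TIERS = {
    "quantitative": 1.0,  # analytics, A/B tests, large-sample research
    "qualitative": 0.8,   # interviews, surveys, competitive analysis
    "anecdotal": 0.5,     # gut instinct, single customer request
}

def confidence_for(evidence: str) -> float:
    try:
        return CONFIDENCE_TIERS[evidence]
    except KeyError:
        # Anything below 50% means "do more research", not "assign a lower number"
        raise ValueError(f"Unknown evidence type {evidence!r}; do more research first")

print(confidence_for("qualitative"))  # 0.8
```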

Effort

Effort is measured in person-months (or person-weeks, or story points; just be consistent across all initiatives). This is the total effort across all disciplines: engineering, design, QA, data science, marketing, and anything else required.

How to estimate Effort:

  • Break initiatives into rough work packages
  • Get time estimates from each discipline involved
  • Include QA, documentation, and rollout effort
  • Round up to account for unknowns

Examples:

| Initiative | Engineering | Design | QA | Total Effort |
| --- | --- | --- | --- | --- |
| Redesign onboarding | 2 months | 1 month | 0.5 months | 3.5 person-months |
| CSV export | 0.5 months | 0.25 months | 0.25 months | 1 person-month |
| Push notifications | 1.5 months | 0.5 months | 0.5 months | 2.5 person-months |
| Enterprise SSO | 3 months | 0.5 months | 1 month | 4.5 person-months |

Step-by-Step: How to Run a RICE Scoring Session

Step 1: Prepare Your Candidate List

Gather all features, projects, and initiatives being considered. Aim for 10-25 items. Too few and you don't need a framework; too many and the session becomes exhausting.

Step 2: Align on Definitions

Before scoring, ensure everyone agrees on:

  • The time period for Reach (usually one quarter)
  • The unit of measurement for Effort (person-months is standard)
  • The metric that Impact is measured against (activation, retention, revenue, etc.)
  • The confidence thresholds and what evidence is required for each level

Step 3: Score Each Initiative

Work through each initiative as a team. For each one:

  1. State the initiative clearly
  2. Discuss and agree on Reach (use data, not opinion)
  3. Discuss and agree on Impact (reference the 3/2/1/0.5/0.25 scale)
  4. Discuss and agree on Confidence (what evidence do you have?)
  5. Discuss and agree on Effort (get input from engineering and design leads)
  6. Calculate the RICE score

Step 4: Rank and Discuss

Sort all initiatives by RICE score from highest to lowest. Then have a critical discussion:

  • Do the top items align with your strategy?
  • Are there any surprises in the ranking?
  • Do any scores feel wrong? If so, revisit the individual components.

Step 5: Make Decisions

Use the RICE scores as a strong input to your prioritization, not the final word. Adjust for strategic considerations, dependencies, and team capacity.

Real-World RICE Scoring Example

Imagine you're a product manager at a B2B SaaS company with 10,000 active users. Your team has four initiatives to compare:

| Initiative | Reach | Impact | Confidence | Effort | RICE Score |
| --- | --- | --- | --- | --- | --- |
| Smart search with filters | 6,000/quarter | 2 (High) | 80% | 3 person-months | 3,200 |
| Bulk action toolbar | 4,000/quarter | 1 (Medium) | 100% | 1 person-month | 4,000 |
| Dashboard customization | 8,000/quarter | 1 (Medium) | 50% | 4 person-months | 1,000 |
| Slack integration | 2,000/quarter | 2 (High) | 80% | 2 person-months | 1,600 |

Calculations:

  • Smart search: (6,000 x 2 x 0.8) / 3 = 3,200
  • Bulk action toolbar: (4,000 x 1 x 1.0) / 1 = 4,000
  • Dashboard customization: (8,000 x 1 x 0.5) / 4 = 1,000
  • Slack integration: (2,000 x 2 x 0.8) / 2 = 1,600
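The calculations above can be reproduced and ranked with a few lines of Python (names and numbers come straight from the table above):

```python
# (name, reach per quarter, impact, confidence as decimal, effort in person-months)
initiatives = [
    ("Smart search with filters", 6000, 2, 0.8, 3),
    ("Bulk action toolbar",       4000, 1, 1.0, 1),
    ("Dashboard customization",   8000, 1, 0.5, 4),
    ("Slack integration",         2000, 2, 0.8, 2),
]

# Score each initiative with the RICE formula, then sort highest first
scored = [
    (name, (reach * impact * conf) / effort)
    for name, reach, impact, conf, effort in initiatives
]
scored.sort(key=lambda item: item[1], reverse=True)

for name, score in scored:
    print(f"{name}: {score:,.0f}")
# Bulk action toolbar: 4,000
# Smart search with filters: 3,200
# Slack integration: 1,600
# Dashboard customization: 1,000
```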

The bulk action toolbar wins despite having lower reach and impact than some alternatives because it's fast to build and the team has high confidence in the estimates. Dashboard customization, despite reaching the most users, ranks last because the low confidence score and high effort drag it down.

When to Use RICE (and When Not To)

RICE Works Best When:

  • You have a large backlog of competing features and need a structured way to compare them
  • Your team tends toward opinion-based prioritization and needs a more objective framework
  • You have access to product analytics and customer data to inform Reach and Impact estimates
  • You're prioritizing within a single product where reach and effort are comparable across initiatives

RICE Is Less Effective When:

  • You're working on a brand-new product with no user data (Reach and Impact become pure guesses). In that case, run your concept through the Idea Validator first to test viability before scoring
  • The initiatives are vastly different in nature (comparing a bug fix to a new product line doesn't produce meaningful scores)
  • Strategic alignment matters more than incremental optimization (RICE doesn't account for vision or market positioning)
  • You need to factor in risk, urgency, or dependencies that RICE doesn't capture

For a detailed side-by-side comparison of scoring methods, see RICE vs. ICE vs. MoSCoW.

RICE vs. Other Prioritization Frameworks

| Factor | RICE | MoSCoW | Weighted Scoring | Kano Model | Value vs. Effort |
| --- | --- | --- | --- | --- | --- |
| Quantitative | Yes | No | Yes | Partially | Partially |
| Accounts for reach | Yes | No | Optional | No | No |
| Accounts for confidence | Yes | No | No | No | No |
| Ease of use | Medium | Easy | Medium | Hard | Easy |
| Best for | Feature backlogs | Release planning | Complex criteria | Customer delight | Quick triage |
| Stakeholder buy-in | High (data-driven) | High (simple) | Medium | Low | Medium |
| Handles strategic alignment | No | Somewhat | Yes (custom criteria) | No | No |

Common Mistakes and Pitfalls

1. Inflating Impact Scores

Teams consistently overestimate Impact because they're emotionally attached to their ideas. Combat this by requiring a written justification for any Impact score of 2 or 3, tied to a specific metric and evidence.

2. Ignoring the Confidence Factor

Some teams set Confidence to 100% for everything, which defeats the purpose. Enforce the rule: if you don't have quantitative data, you can't score above 80%. If you don't have qualitative data, you can't score above 50%.

3. Inconsistent Effort Estimates

One team measures Effort in story points, another in weeks, another in "t-shirt sizes." Pick one unit and stick with it. Person-months is the most universally understood.

4. Scoring in a Vacuum

Never let one person score all initiatives alone. RICE works best when engineers estimate Effort, data analysts inform Reach, and product managers calibrate Impact. Cross-functional input reduces bias.

5. Treating RICE Scores as Gospel

The score is an input to your decision, not the decision itself. A feature with a RICE score of 500 might still be the right thing to build if it's strategically critical. Use RICE to inform, not to dictate.

6. Not Revisiting Scores

Conditions change. A feature you scored six months ago may have very different Reach, Impact, or Effort numbers today. Re-score your top candidates at the start of each planning cycle.

Best Practices for RICE Implementation

Calibrate as a Team

Before your first scoring session, score 3-5 past features that have already shipped. Compare the predicted RICE scores to actual outcomes. This calibration exercise helps the team develop shared intuitions for what "Impact: 2" or "Reach: 5,000" actually means.

Document Your Assumptions

For every initiative, record why you chose each score. "Reach: 6,000 because our funnel shows 6,000 users hit the search page per quarter" is far more valuable than just "6,000." When you revisit scores later, you'll know whether the assumptions still hold.

Use a Spreadsheet or Tool

RICE scoring is best done in a shared spreadsheet or purpose-built tool like IdeaPlan where everyone can see the inputs, challenge assumptions, and track scores over time. Use the RICE Scoring Template to structure your scoring session with pre-built formulas, a participant voting grid, and assumption documentation. Transparency builds trust in the process.

Set a Minimum Confidence Threshold

Establish a rule: no initiative with Confidence below 50% goes into the final ranking. Instead, those items go onto a "research needed" list. This creates a healthy pipeline where discovery work feeds into prioritization.

Combine RICE with Strategic Themes

RICE optimizes for incremental value. To ensure you're also investing in long-term bets, layer strategic themes on top: allocate 70% of capacity to high-RICE items and 30% to strategic initiatives that might not score well on RICE but are critical for your long-term vision.

Review and Iterate

After shipping a high-scoring feature, compare predicted Reach and Impact against actual results. Did 6,000 users really use the new search? Did activation increase as expected? This feedback loop makes your future RICE estimates more accurate over time.

Getting Started with RICE Today

  1. Pick your top 10-15 backlog items that are candidates for the next quarter
  2. Gather data on user counts, support tickets, and usage patterns for each
  3. Schedule a 90-minute session with your PM, engineering lead, designer, and data analyst
  4. Walk through each initiative using the RICE components
  5. Rank by score and discuss whether the ranking aligns with your strategy
  6. Commit to a plan and document your reasoning

RICE won't solve every prioritization challenge, but it will give your team a shared vocabulary and a repeatable process for making better decisions. The framework's real power isn't in the formula itself. It's in the structured conversations it forces your team to have about reach, impact, confidence, and effort. Once you've scored your backlog, use the results to feed your product roadmap. For a side-by-side look at how RICE stacks up against every major scoring method, see our best prioritization frameworks list.

RICE Score Example: Step-by-Step Walkthrough

Here's a detailed walkthrough of scoring a single feature so you can see exactly how the math and reasoning work.

Feature: Add in-app onboarding checklist for new users

Step 1: Estimate Reach. Your signup analytics show 3,200 new users per quarter. The checklist would appear for every new user during their first session. Reach = 3,200 users/quarter.

Step 2: Score Impact. Based on competitor benchmarks and your own activation data, onboarding checklists typically increase 7-day retention by 15-25%. Users who complete onboarding are 3x more likely to convert to paid. This is a high-impact change that meaningfully shifts behavior. Impact = 2 (High).

Step 3: Set Confidence. You ran 8 user interviews and reviewed analytics on your current drop-off points, but you haven't A/B tested a checklist yet. Qualitative data supports the idea, but you're estimating the magnitude. Confidence = 80%.

Step 4: Estimate Effort. Engineering scopes the work at 2 weeks of frontend development, 1 week of backend API changes, 3 days of design, and 2 days of QA. Total: about 1.5 person-months. Effort = 1.5 person-months.

Step 5: Calculate.

(3,200 x 2 x 0.8) / 1.5 = 3,413

Record your assumptions alongside the score. Six months from now, you'll want to know why you chose Impact = 2 instead of 1.
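The walkthrough above can be captured as a small record so the assumptions travel with the score (field names are illustrative):

```python
feature = {
    "name": "In-app onboarding checklist",
    "reach": 3200,      # new users/quarter, from signup analytics
    "impact": 2,        # High: checklists typically lift 7-day retention 15-25%
    "confidence": 0.8,  # qualitative: 8 interviews + drop-off analytics, no A/B test yet
    "effort": 1.5,      # person-months: 2w frontend + 1w backend + 3d design + 2d QA
}

score = (feature["reach"] * feature["impact"] * feature["confidence"]) / feature["effort"]
print(round(score))  # 3413
```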

Use the RICE Calculator to run these numbers automatically and compare multiple features side by side.

RICE vs ICE vs WSJF: Quick Comparison

These three frameworks solve related but different problems. Here's when each one fits.

RICE is best when you have data. The Reach dimension forces you to quantify how many users a feature affects, which kills pet projects that sound exciting but impact a tiny segment. The Confidence factor penalizes guesswork. Use RICE when you have product analytics, a backlog of 10+ items, and need to justify priorities to stakeholders with numbers. For a full breakdown, see our RICE vs ICE vs MoSCoW comparison.

ICE is best when you need speed. ICE drops Reach in favor of a simpler three-factor model (Impact, Confidence, Ease), each scored 1-10. You can score 20 experiment ideas in 15 minutes. The trade-off is subjectivity: without an explicit Reach dimension, "impact" becomes whatever the most persuasive person in the room says it is. Use ICE for growth experiments, weekly growth meetings, or as a fast first pass before applying RICE to the shortlist.

WSJF is best when timing matters. Weighted Shortest Job First adds a "cost of delay" dimension that neither RICE nor ICE captures. A compliance deadline, a competitor launch window, or a contract renewal date all create real costs if you wait. Use WSJF in SAFe environments, regulated industries, or any situation where delaying a feature has a measurable financial or strategic penalty. See RICE vs WSJF for a deeper analysis.

| Dimension | RICE | ICE | WSJF |
| --- | --- | --- | --- |
| Accounts for reach | Yes | No | No |
| Accounts for confidence | Yes (percentage) | Yes (1-10 scale) | No |
| Accounts for time sensitivity | No | No | Yes (cost of delay) |
| Speed to score 20 items | 60-90 min | 15-30 min | 60-90 min |
| Data required | Moderate (analytics) | Low (gut + light data) | Moderate (delay costs) |

RICE Spreadsheet Setup Guide

You don't need special software to run RICE. A shared spreadsheet works. Here's how to set one up in Google Sheets or Excel.

Column layout:

| Column | Header | Format | Notes |
| --- | --- | --- | --- |
| A | Feature Name | Text | Keep descriptions under 10 words |
| B | Reach (users/quarter) | Number | Pull from analytics, not gut feel |
| C | Impact (0.25-3) | Number | Use the 5-point scale: 0.25, 0.5, 1, 2, 3 |
| D | Confidence (%) | Percentage | 50%, 80%, or 100% only |
| E | Effort (person-months) | Number | Include eng, design, QA |
| F | RICE Score | Formula | =(B2*C2*D2)/E2 |
| G | Assumptions | Text | Document why you chose each score |

Setup tips:

  1. Lock the Impact column to valid values. Use data validation to restrict column C to 0.25, 0.5, 1, 2, or 3. This prevents creative scoring like "Impact: 2.7."
  2. Add conditional formatting to column F. Green for scores in the top quartile, yellow for middle, red for bottom quartile. This makes the ranking visually obvious.
  3. Create an "Assumptions" column (column G). This is the most important column in the sheet. Without it, scores become meaningless numbers after two weeks.
  4. Sort by RICE score descending after scoring all items. Then review the top 5 and bottom 5 as a sanity check.
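If you want to sanity-check rows before they enter the sheet, the data-validation rules above can be mirrored in plain Python (a sketch; a real spreadsheet would enforce these via built-in data validation):

```python
# The spreadsheet's validation rules, mirrored as a row checker
VALID_IMPACT = {0.25, 0.5, 1, 2, 3}
VALID_CONFIDENCE = {0.5, 0.8, 1.0}

def validate_row(reach, impact, confidence, effort):
    """Return a list of validation errors; an empty list means the row is valid."""
    errors = []
    if reach <= 0:
        errors.append("Reach must be a positive user count")
    if impact not in VALID_IMPACT:
        errors.append(f"Impact {impact} is not one of {sorted(VALID_IMPACT)}")
    if confidence not in VALID_CONFIDENCE:
        errors.append("Confidence must be 50%, 80%, or 100%")
    if effort <= 0:
        errors.append("Effort must be positive")
    return errors

print(validate_row(3200, 2.7, 0.8, 1.5))  # flags the creative "Impact: 2.7"
print(validate_row(3200, 2, 0.8, 1.5))    # []
```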

For a ready-to-use version with pre-built formulas and a voting grid, download the RICE Scoring Template. If you prefer to score interactively rather than in a spreadsheet, the RICE Calculator does the math for you and lets you compare items visually.


Frequently Asked Questions

What does RICE stand for in product management?
RICE stands for Reach, Impact, Confidence, and Effort. It's a prioritization scoring framework developed at Intercom that helps product managers rank features and initiatives by calculating a score: (Reach × Impact × Confidence) ÷ Effort.

How do you calculate a RICE score?
Multiply the number of people reached per quarter (Reach) by the impact level (0.25 to 3), by the confidence percentage (as a decimal, e.g., 0.8 for 80%), then divide by the estimated effort in person-months. Higher scores indicate higher priority.

When should you use RICE vs. other prioritization frameworks?
Use RICE when you have quantitative data on reach and impact and need to compare many features objectively. Use MoSCoW for stakeholder alignment sessions, ICE for quick estimates, and Kano for understanding customer satisfaction drivers.

What is a good RICE score?
RICE scores are relative, not absolute. A "good" score depends on your other initiatives. Compare scores within the same product context. A feature with a score of 10 is twice the priority of one scoring 5, but scores aren't meaningful in isolation.

What is the difference between RICE and ICE prioritization?
RICE uses four factors (Reach, Impact, Confidence, Effort) and produces a formula-based score. ICE uses three factors (Impact, Confidence, Ease) scored on 1-10 scales. RICE is more rigorous because it quantifies how many users a feature reaches, while ICE is faster and better suited for growth experiments where you need to score ideas quickly.

How do you set up a RICE scoring spreadsheet?
Create columns for Feature Name, Reach (users/quarter), Impact (0.25-3 scale), Confidence (50%, 80%, or 100%), Effort (person-months), and a RICE Score formula column that multiplies Reach x Impact x Confidence and divides by Effort. Add an Assumptions column to document your reasoning. Use data validation to restrict Impact to the five standard values.
