Quick Answer (TL;DR)
RICE is a prioritization framework that scores features using four factors: Reach (how many users are affected), Impact (how much each user is affected), Confidence (how sure you are of your estimates), and Effort (how much work it takes). The formula is (Reach x Impact x Confidence) / Effort = RICE Score. Higher scores indicate higher priority. It was popularized by Intercom and is one of the most widely adopted quantitative prioritization methods in product management. Try the free RICE Calculator to score your own backlog items instantly. For help choosing between prioritization frameworks, see our RICE vs ICE vs MoSCoW comparison or RICE vs WSJF analysis.
What Is the RICE Prioritization Framework?
The RICE framework is a scoring model that helps product teams make objective decisions about which features, projects, or initiatives to pursue. Developed at Intercom and popularized by product manager Sean McBride, RICE replaces gut-feel prioritization with a structured, repeatable formula that considers both the potential upside and the cost of each initiative.
RICE stands for:
- Reach. How many people will this impact in a given time period?
- Impact. How much will it impact each person?
- Confidence. How confident are you in your estimates?
- Effort. How much time and resources will it take?
The beauty of RICE lies in its simplicity. By reducing prioritization to a single numerical score, it gives teams a common language for comparing wildly different initiatives, from a small UX tweak to a major platform overhaul.
The RICE Formula Explained
The core formula is straightforward:
RICE Score = (Reach x Impact x Confidence) / Effort
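The formula is easy to express in code. Here's a minimal Python sketch (the function name and signature are my own, not part of any standard RICE tooling):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Compute a RICE score.

    reach      -- people affected per time period (e.g. users/quarter)
    impact     -- 0.25, 0.5, 1, 2, or 3 on the standard scale
    confidence -- a fraction: 0.5, 0.8, or 1.0
    effort     -- total person-months across all disciplines
    """
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort

# A feature reaching 6,000 users/quarter, high impact, 80% confidence,
# 3 person-months of effort:
print(rice_score(6000, 2, 0.8, 3))  # 3200.0
```

Note that Confidence enters the formula as a fraction (0.8, not 80), so the score scales down proportionally as certainty drops.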
Let's break down each component with precise definitions so your team scores consistently.
Reach
Reach measures how many users or customers will be affected by an initiative within a defined time period (typically one quarter). Use real data wherever possible.
How to estimate Reach:
- Pull from product analytics: DAU/MAU data, funnel conversion rates, segment sizes
- Use customer support ticket volume for pain-point-driven features
- Reference market research for new-market initiatives
Examples:
| Initiative | Reach Estimate | Source |
|---|---|---|
| Redesign onboarding flow | 5,000 new signups/quarter | Signup analytics |
| Add CSV export | 800 users requesting/quarter | Support tickets + feature requests |
| Mobile app push notifications | 12,000 active mobile users/quarter | Mobile analytics |
| Enterprise SSO integration | 50 enterprise accounts/quarter | Sales pipeline |
Always express Reach as a number of people or accounts per time period. Avoid vague terms like "a lot" or "most users."
Impact
Impact measures how much this initiative will move the needle for each person reached. Since individual impact is harder to quantify than reach, RICE uses a standardized scale:
| Score | Label | Meaning |
|---|---|---|
| 3 | Massive | Transforms the user experience or eliminates a critical blocker |
| 2 | High | Significant improvement that meaningfully changes behavior |
| 1 | Medium | Noticeable improvement |
| 0.5 | Low | Minor improvement |
| 0.25 | Minimal | Barely noticeable |
Guidelines for scoring Impact:
- 3 (Massive): Slack adding threaded messages. It fundamentally changed how teams communicated and reduced noise in channels.
- 2 (High): Spotify adding offline downloads. A significant feature that changed user behavior and drove subscriptions.
- 1 (Medium): Adding keyboard shortcuts to an existing workflow. Helpful, used regularly, but not a major change.
- 0.5 (Low): A tooltip that clarifies a confusing label.
- 0.25 (Minimal): A color change on a non-critical UI element.
Tie Impact to a specific metric you're trying to move: activation rate, retention, NPS, revenue, or time-to-value.
Confidence
Confidence is a percentage that reflects how sure you are about your Reach and Impact estimates. This is the factor that keeps teams honest. It penalizes wishful thinking.
| Score | Label | Criteria |
|---|---|---|
| 100% | High | Backed by quantitative data (analytics, A/B test results, large sample research) |
| 80% | Medium | Supported by qualitative data (user interviews, surveys, competitive analysis) |
| 50% | Low | Based on intuition, anecdotal feedback, or very small sample sizes |
Rules of thumb:
- If you have strong analytics data supporting both Reach and Impact, use 100%.
- If you have user interviews or survey data but limited quantitative evidence, use 80%.
- If you're largely guessing based on gut instinct or a single customer request, use 50%.
- Never go below 50%. If your confidence is lower than 50%, you need to do more research before scoring rather than simply assigning a low confidence number.
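These rules of thumb can be encoded as a simple lookup so the team scores consistently. A hypothetical helper (the evidence labels are my own shorthand, not official RICE terminology):

```python
# Map evidence quality to the standard RICE confidence levels.
CONFIDENCE_LEVELS = {
    "quantitative": 1.0,  # analytics, A/B tests, large-sample research
    "qualitative": 0.8,   # interviews, surveys, competitive analysis
    "intuition": 0.5,     # gut feel, anecdotes, tiny samples
}

def confidence_for(evidence: str) -> float:
    """Return the confidence fraction for a given evidence type."""
    try:
        return CONFIDENCE_LEVELS[evidence]
    except KeyError:
        raise ValueError(
            f"unknown evidence type {evidence!r}; if confidence would "
            "fall below 50%, do more research instead of scoring"
        )

print(confidence_for("qualitative"))  # 0.8
```

Restricting confidence to three discrete values keeps debates short: the only question is what kind of evidence you have, not whether you "feel" 67% sure.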
Effort
Effort is measured in person-months (or person-weeks, or story points; just be consistent across all initiatives). This is the total effort across all disciplines: engineering, design, QA, data science, marketing, and anything else required.
How to estimate Effort:
- Break initiatives into rough work packages
- Get time estimates from each discipline involved
- Include QA, documentation, and rollout effort
- Round up to account for unknowns
Examples:
| Initiative | Engineering | Design | QA | Total Effort |
|---|---|---|---|---|
| Redesign onboarding | 2 months | 1 month | 0.5 months | 3.5 person-months |
| CSV export | 0.5 months | 0.25 months | 0.25 months | 1 person-month |
| Push notifications | 1.5 months | 0.5 months | 0.5 months | 2.5 person-months |
| Enterprise SSO | 3 months | 0.5 months | 1 month | 4.5 person-months |
Step-by-Step: How to Run a RICE Scoring Session
Step 1: Prepare Your Candidate List
Gather all features, projects, and initiatives being considered. Aim for 10-25 items; too few and you don't need a framework, too many and the session becomes exhausting.
Step 2: Align on Definitions
Before scoring, ensure everyone agrees on:
- The time period for Reach (usually one quarter)
- The unit of measurement for Effort (person-months is standard)
- The metric that Impact is measured against (activation, retention, revenue, etc.)
- The confidence thresholds and what evidence is required for each level
Step 3: Score Each Initiative
Work through each initiative as a team. For each one:
- State the initiative clearly
- Discuss and agree on Reach (use data, not opinion)
- Discuss and agree on Impact (reference the 3/2/1/0.5/0.25 scale)
- Discuss and agree on Confidence (what evidence do you have?)
- Discuss and agree on Effort (get input from engineering and design leads)
- Calculate the RICE score
Step 4: Rank and Discuss
Sort all initiatives by RICE score from highest to lowest. Then have a critical discussion:
- Do the top items align with your strategy?
- Are there any surprises in the ranking?
- Do any scores feel wrong? If so, revisit the individual components.
Step 5: Make Decisions
Use the RICE scores as a strong input to your prioritization, not the final word. Adjust for strategic considerations, dependencies, and team capacity.
Real-World RICE Scoring Example
Imagine you're a product manager at a B2B SaaS company with 10,000 active users. Your team has four initiatives to compare:
| Initiative | Reach | Impact | Confidence | Effort | RICE Score |
|---|---|---|---|---|---|
| Smart search with filters | 6,000/quarter | 2 (High) | 80% | 3 person-months | 3,200 |
| Bulk action toolbar | 4,000/quarter | 1 (Medium) | 100% | 1 person-month | 4,000 |
| Dashboard customization | 8,000/quarter | 1 (Medium) | 50% | 4 person-months | 1,000 |
| Slack integration | 2,000/quarter | 2 (High) | 80% | 2 person-months | 1,600 |
Calculations:
- Smart search: (6,000 x 2 x 0.8) / 3 = 3,200
- Bulk action toolbar: (4,000 x 1 x 1.0) / 1 = 4,000
- Dashboard customization: (8,000 x 1 x 0.5) / 4 = 1,000
- Slack integration: (2,000 x 2 x 0.8) / 2 = 1,600
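The same calculation and ranking can be reproduced in a few lines of Python (a sketch; the data is lifted straight from the table above):

```python
# Score and rank the four example initiatives: (name, reach, impact,
# confidence, effort in person-months).
initiatives = [
    ("Smart search with filters", 6000, 2, 0.8, 3),
    ("Bulk action toolbar",       4000, 1, 1.0, 1),
    ("Dashboard customization",   8000, 1, 0.5, 4),
    ("Slack integration",         2000, 2, 0.8, 2),
]

scored = sorted(
    ((name, reach * impact * conf / effort)
     for name, reach, impact, conf, effort in initiatives),
    key=lambda item: item[1],
    reverse=True,
)

for name, score in scored:
    print(f"{name}: {score:,.0f}")
# Bulk action toolbar: 4,000
# Smart search with filters: 3,200
# Slack integration: 1,600
# Dashboard customization: 1,000
```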
The bulk action toolbar wins despite having lower reach and impact than some alternatives because it's fast to build and the team has high confidence in the estimates. Dashboard customization, despite reaching the most users, ranks last because the low confidence score and high effort drag it down.
When to Use RICE (and When Not To)
RICE Works Best When:
- You have a large backlog of competing features and need a structured way to compare them
- Your team tends toward opinion-based prioritization and needs a more objective framework
- You have access to product analytics and customer data to inform Reach and Impact estimates
- You're prioritizing within a single product where reach and effort are comparable across initiatives
RICE Is Less Effective When:
- You're working on a brand-new product with no user data (Reach and Impact become pure guesses). In that case, run your concept through the Idea Validator first to test viability before scoring
- The initiatives are vastly different in nature (comparing a bug fix to a new product line doesn't produce meaningful scores)
- Strategic alignment matters more than incremental optimization (RICE doesn't account for vision or market positioning)
- You need to factor in risk, urgency, or dependencies that RICE doesn't capture
For a detailed side-by-side comparison of scoring methods, see RICE vs. ICE vs. MoSCoW.
RICE vs. Other Prioritization Frameworks
| Factor | RICE | MoSCoW | Weighted Scoring | Kano Model | Value vs. Effort |
|---|---|---|---|---|---|
| Quantitative | Yes | No | Yes | Partially | Partially |
| Accounts for reach | Yes | No | Optional | No | No |
| Accounts for confidence | Yes | No | No | No | No |
| Ease of use | Medium | Easy | Medium | Hard | Easy |
| Best for | Feature backlogs | Release planning | Complex criteria | Customer delight | Quick triage |
| Stakeholder buy-in | High (data-driven) | High (simple) | Medium | Low | Medium |
| Handles strategic alignment | No | Somewhat | Yes (custom criteria) | No | No |
Common Mistakes and Pitfalls
1. Inflating Impact Scores
Teams consistently overestimate Impact because they're emotionally attached to their ideas. Combat this by requiring a written justification for any Impact score of 2 or 3, tied to a specific metric and evidence.
2. Ignoring the Confidence Factor
Some teams set Confidence to 100% for everything, which defeats the purpose. Enforce the rule: if you don't have quantitative data, you can't score above 80%. If you don't have qualitative data, you can't score above 50%.
3. Inconsistent Effort Estimates
One team measures Effort in story points, another in weeks, another in "t-shirt sizes." Pick one unit and stick with it. Person-months is the most universally understood.
4. Scoring in a Vacuum
Never let one person score all initiatives alone. RICE works best when engineers estimate Effort, data analysts inform Reach, and product managers calibrate Impact. Cross-functional input reduces bias.
5. Treating RICE Scores as Gospel
The score is an input to your decision, not the decision itself. A feature with a RICE score of 500 might still be the right thing to build if it's strategically critical. Use RICE to inform, not to dictate.
6. Not Revisiting Scores
Conditions change. A feature you scored six months ago may have very different Reach, Impact, or Effort numbers today. Re-score your top candidates at the start of each planning cycle.
Best Practices for RICE Implementation
Calibrate as a Team
Before your first scoring session, score 3-5 past features that have already shipped. Compare the predicted RICE scores to actual outcomes. This calibration exercise helps the team develop shared intuitions for what "Impact: 2" or "Reach: 5,000" actually means.
Document Your Assumptions
For every initiative, record why you chose each score. "Reach: 6,000 because our funnel shows 6,000 users hit the search page per quarter" is far more valuable than just "6,000." When you revisit scores later, you'll know whether the assumptions still hold.
Use a Spreadsheet or Tool
RICE scoring is best done in a shared spreadsheet or purpose-built tool like IdeaPlan where everyone can see the inputs, challenge assumptions, and track scores over time. Use the RICE Scoring Template to structure your scoring session with pre-built formulas, a participant voting grid, and assumption documentation. Transparency builds trust in the process.
Set a Minimum Confidence Threshold
Establish a rule: no initiative with Confidence below 50% goes into the final ranking. Instead, those items go onto a "research needed" list. This creates a healthy pipeline where discovery work feeds into prioritization.
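This gate is straightforward to automate. A sketch of the triage step, assuming items are stored as simple dicts (the field names and example features are illustrative):

```python
# Split scored items into a final ranking and a "research needed" list
# based on the minimum confidence threshold.
MIN_CONFIDENCE = 0.5

def triage(items):
    ranked, research_needed = [], []
    for item in items:
        if item["confidence"] >= MIN_CONFIDENCE:
            ranked.append(item)
        else:
            research_needed.append(item)
    ranked.sort(key=lambda i: i["rice"], reverse=True)
    return ranked, research_needed

items = [
    {"name": "CSV export", "confidence": 0.8, "rice": 640},
    {"name": "AI assistant", "confidence": 0.3, "rice": 2400},
]
ranked, research = triage(items)
print([i["name"] for i in ranked])    # ['CSV export']
print([i["name"] for i in research])  # ['AI assistant']
```

Note how the hypothetical "AI assistant" has the higher raw score but still lands on the research list: a big number built on guesswork shouldn't outrank a modest number built on evidence.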
Combine RICE with Strategic Themes
RICE optimizes for incremental value. To ensure you're also investing in long-term bets, layer strategic themes on top: allocate 70% of capacity to high-RICE items and 30% to strategic initiatives that might not score well on RICE but are critical for your long-term vision.
Review and Iterate
After shipping a high-scoring feature, compare predicted Reach and Impact against actual results. Did 6,000 users really use the new search? Did activation increase as expected? This feedback loop makes your future RICE estimates more accurate over time.
Getting Started with RICE Today
- Pick your top 10-15 backlog items that are candidates for the next quarter
- Gather data on user counts, support tickets, and usage patterns for each
- Schedule a 90-minute session with your PM, engineering lead, designer, and data analyst
- Walk through each initiative using the RICE components
- Rank by score and discuss whether the ranking aligns with your strategy
- Commit to a plan and document your reasoning
RICE won't solve every prioritization challenge, but it will give your team a shared vocabulary and a repeatable process for making better decisions. The framework's real power isn't in the formula itself. It's in the structured conversations it forces your team to have about reach, impact, confidence, and effort. Once you've scored your backlog, use the results to feed your product roadmap. For a side-by-side look at how RICE stacks up against every major scoring method, see our best prioritization frameworks list.
RICE Score Example: Step-by-Step Walkthrough
Here's a detailed walkthrough of scoring a single feature so you can see exactly how the math and reasoning work.
Feature: Add in-app onboarding checklist for new users
Step 1: Estimate Reach. Your signup analytics show 3,200 new users per quarter. The checklist would appear for every new user during their first session. Reach = 3,200 users/quarter.
Step 2: Score Impact. Based on competitor benchmarks and your own activation data, onboarding checklists typically increase 7-day retention by 15-25%. Users who complete onboarding are 3x more likely to convert to paid. This is a high-impact change that meaningfully shifts behavior. Impact = 2 (High).
Step 3: Set Confidence. You ran 8 user interviews and reviewed analytics on your current drop-off points, but you haven't A/B tested a checklist yet. Qualitative data supports the idea, but you're estimating the magnitude. Confidence = 80%.
Step 4: Estimate Effort. Engineering scopes the work at 2 weeks of frontend development, 1 week of backend API changes, 3 days of design, and 2 days of QA. Total: about 1.5 person-months. Effort = 1.5 person-months.
Step 5: Calculate.
(3,200 x 2 x 0.8) / 1.5 = 3,413
Record your assumptions alongside the score. Six months from now, you'll want to know why you chose Impact = 2 instead of 1.
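One way to keep the score and its assumptions together is a small record type. A Python sketch using the walkthrough's numbers (the class and field names are my own, not from any standard tool):

```python
from dataclasses import dataclass

@dataclass
class ScoredFeature:
    name: str
    reach: int          # users/quarter
    impact: float       # 0.25-3 scale
    confidence: float   # 0.5, 0.8, or 1.0
    effort: float       # person-months
    assumptions: str    # why you chose these numbers

    @property
    def rice(self) -> float:
        return self.reach * self.impact * self.confidence / self.effort

checklist = ScoredFeature(
    name="In-app onboarding checklist",
    reach=3200,
    impact=2,
    confidence=0.8,
    effort=1.5,
    assumptions="Reach from signup analytics; impact from competitor "
                "benchmarks on 7-day retention; no A/B test yet, so 80%.",
)
print(round(checklist.rice))  # 3413
```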
Use the RICE Calculator to run these numbers automatically and compare multiple features side by side.
RICE vs ICE vs WSJF: Quick Comparison
These three frameworks solve related but different problems. Here's when each one fits.
RICE is best when you have data. The Reach dimension forces you to quantify how many users a feature affects, which kills pet projects that sound exciting but impact a tiny segment. The Confidence factor penalizes guesswork. Use RICE when you have product analytics, a backlog of 10+ items, and need to justify priorities to stakeholders with numbers. For a full breakdown, see our RICE vs ICE vs MoSCoW comparison.
ICE is best when you need speed. ICE drops Reach in favor of a simpler three-factor model (Impact, Confidence, Ease), each scored 1-10. You can score 20 experiment ideas in 15 minutes. The trade-off is subjectivity: without an explicit Reach dimension, "impact" becomes whatever the most persuasive person in the room says it is. Use ICE for growth experiments, weekly growth meetings, or as a fast first pass before applying RICE to the shortlist.
WSJF is best when timing matters. Weighted Shortest Job First adds a "cost of delay" dimension that neither RICE nor ICE captures. A compliance deadline, a competitor launch window, or a contract renewal date all create real costs if you wait. Use WSJF in SAFe environments, regulated industries, or any situation where delaying a feature has a measurable financial or strategic penalty. See RICE vs WSJF for a deeper analysis.
| Dimension | RICE | ICE | WSJF |
|---|---|---|---|
| Accounts for reach | Yes | No | No |
| Accounts for confidence | Yes (percentage) | Yes (1-10 scale) | No |
| Accounts for time sensitivity | No | No | Yes (cost of delay) |
| Speed to score 20 items | 60-90 min | 15-30 min | 60-90 min |
| Data required | Moderate (analytics) | Low (gut + light data) | Moderate (delay costs) |
RICE Spreadsheet Setup Guide
You don't need special software to run RICE. A shared spreadsheet works. Here's how to set one up in Google Sheets or Excel.
Column layout:
| Column | Header | Format | Notes |
|---|---|---|---|
| A | Feature Name | Text | Keep descriptions under 10 words |
| B | Reach (users/quarter) | Number | Pull from analytics, not gut feel |
| C | Impact (0.25-3) | Number | Use the 5-point scale: 0.25, 0.5, 1, 2, 3 |
| D | Confidence (%) | Percentage | 50%, 80%, or 100% only |
| E | Effort (person-months) | Number | Include eng, design, QA |
| F | RICE Score | Formula | `=(B2*C2*D2)/E2` |
| G | Assumptions | Text | Document why you chose each score |
Setup tips:
- Lock the Impact column to valid values. Use data validation to restrict column C to 0.25, 0.5, 1, 2, or 3. This prevents creative scoring like "Impact: 2.7."
- Add conditional formatting to column F. Green for scores in the top quartile, yellow for middle, red for bottom quartile. This makes the ranking visually obvious.
- Create an "Assumptions" column (column G). This is the most important column in the sheet. Without it, scores become meaningless numbers after two weeks.
- Sort by RICE score descending after scoring all items. Then review the top 5 and bottom 5 as a sanity check.
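If you'd rather prototype the conditional-formatting rule in code before wiring it into the sheet, the quartile bucketing looks like this (the scores below are made up for illustration):

```python
import statistics

# Bucket RICE scores into top, middle, and bottom quartiles, mirroring
# the spreadsheet's green/yellow/red conditional formatting.
scores = {"A": 4000, "B": 3200, "C": 1600, "D": 1000,
          "E": 800, "F": 450, "G": 300, "H": 120}

q1, _, q3 = statistics.quantiles(scores.values(), n=4)

def bucket(score: float) -> str:
    if score >= q3:
        return "green"   # top quartile
    if score >= q1:
        return "yellow"  # middle
    return "red"         # bottom quartile

for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(name, score, bucket(score))
```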
For a ready-to-use version with pre-built formulas and a voting grid, download the RICE Scoring Template. If you prefer to score interactively rather than in a spreadsheet, the RICE Calculator does the math for you and lets you compare items visually.
Explore More
- Prioritization for Director/VP Product Managers - Director and VP-level prioritization: portfolio allocation, organizational capacity planning, and strategic investment decisions across product lines.
- Prioritization for Mid-Level Product Managers - Advance your prioritization skills as a mid-level PM.
- Prioritization for New Product Managers - Learn prioritization fundamentals as a new PM.
- Prioritization for Senior Product Managers - Master senior-level prioritization.