
RICE Prioritization Framework: The Complete Guide to Scoring and Ranking Features

Master the RICE framework with scoring formulas, real examples, and step-by-step instructions to prioritize your product backlog effectively.

Best for: Product managers who need a quantitative, repeatable method for prioritizing features and initiatives
By Tim Adair • Published 2026-02-08

Quick Answer (TL;DR)

RICE is a prioritization framework that scores features using four factors: Reach (how many users are affected), Impact (how much each user is affected), Confidence (how sure you are of your estimates), and Effort (how much work it takes). The formula is (Reach × Impact × Confidence) / Effort = RICE Score. Higher scores indicate higher priority. It was popularized by Intercom and is one of the most widely adopted quantitative prioritization methods in product management.


What Is the RICE Prioritization Framework?

The RICE framework is a scoring model that helps product teams make objective decisions about which features, projects, or initiatives to pursue. Developed and popularized by Sean McBride at Intercom, RICE replaces gut-feel prioritization with a structured, repeatable formula that considers both the potential upside and the cost of each initiative.

RICE stands for:

  • Reach -- How many people will this impact in a given time period?
  • Impact -- How much will it impact each person?
  • Confidence -- How confident are you in your estimates?
  • Effort -- How much time and resources will it take?

The beauty of RICE lies in its simplicity. By reducing prioritization to a single numerical score, it gives teams a common language for comparing wildly different initiatives -- from a small UX tweak to a major platform overhaul.

The RICE Formula Explained

The core formula is straightforward:

RICE Score = (Reach × Impact × Confidence) / Effort
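
If it helps to see the arithmetic as code, here is a minimal Python sketch of the formula (the function name and the input validation are illustrative, not part of the framework itself):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """(Reach × Impact × Confidence) / Effort.

    reach: people affected per time period (e.g., users/quarter)
    impact: 0.25, 0.5, 1, 2, or 3
    confidence: a decimal, e.g., 0.8 for 80%
    effort: person-months; must be positive
    """
    if effort <= 0:
        raise ValueError("Effort must be positive")
    return (reach * impact * confidence) / effort

print(rice_score(6000, 2, 0.8, 3))  # 3200.0
```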

Let's break down each component with precise definitions so your team scores consistently.

Reach

Reach measures how many users or customers will be affected by an initiative within a defined time period (typically one quarter). Use real data wherever possible.

How to estimate Reach:

  • Pull from product analytics: DAU/MAU data, funnel conversion rates, segment sizes
  • Use customer support ticket volume for pain-point-driven features
  • Reference market research for new-market initiatives

Examples:

| Initiative | Reach Estimate | Source |
| --- | --- | --- |
| Redesign onboarding flow | 5,000 new signups/quarter | Signup analytics |
| Add CSV export | 800 users requesting/quarter | Support tickets + feature requests |
| Mobile app push notifications | 12,000 active mobile users/quarter | Mobile analytics |
| Enterprise SSO integration | 50 enterprise accounts/quarter | Sales pipeline |

Always express Reach as a number of people or accounts per time period. Avoid vague terms like "a lot" or "most users."
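
As a rough sketch of how a Reach estimate might be derived from funnel data -- every number below is a hypothetical placeholder:

```python
# Estimating quarterly Reach for an onboarding redesign from funnel analytics.
monthly_signup_page_visitors = 10_000   # from web analytics (hypothetical)
signup_conversion_rate = 0.17           # fraction completing signup (hypothetical)
months_per_quarter = 3

reach = monthly_signup_page_visitors * signup_conversion_rate * months_per_quarter
print(f"Reach: {reach:,.0f} new signups/quarter")  # Reach: 5,100 new signups/quarter
```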

Impact

Impact measures how much this initiative will move the needle for each person reached. Since individual impact is harder to quantify than reach, RICE uses a standardized scale:

| Score | Label | Meaning |
| --- | --- | --- |
| 3 | Massive | Transforms the user experience or eliminates a critical blocker |
| 2 | High | Significant improvement that meaningfully changes behavior |
| 1 | Medium | Noticeable improvement |
| 0.5 | Low | Minor improvement |
| 0.25 | Minimal | Barely noticeable |

Guidelines for scoring Impact:

  • 3 (Massive): Slack adding threaded messages -- it fundamentally changed how teams communicated and reduced noise in channels.
  • 2 (High): Spotify adding offline downloads -- a significant feature that changed user behavior and drove subscriptions.
  • 1 (Medium): Adding keyboard shortcuts to an existing workflow -- helpful, used regularly, but not transformative.
  • 0.5 (Low): A tooltip that clarifies a confusing label.
  • 0.25 (Minimal): A color change on a non-critical UI element.

Tie Impact to a specific metric you're trying to move: activation rate, retention, NPS, revenue, or time-to-value.
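
One lightweight way to keep sessions consistent is to encode the scale as a lookup table, so nobody quietly invents an "Impact: 2.5". A Python sketch (the names are illustrative):

```python
# The standardized Impact scale above, encoded so a scoring session
# can reject values that are off the scale.
IMPACT_SCALE = {
    "massive": 3.0,
    "high": 2.0,
    "medium": 1.0,
    "low": 0.5,
    "minimal": 0.25,
}

def impact_score(label: str) -> float:
    """Return the Impact score for a label, or raise if it's off the scale."""
    try:
        return IMPACT_SCALE[label.lower()]
    except KeyError:
        raise ValueError(f"Impact must be one of: {', '.join(IMPACT_SCALE)}") from None

print(impact_score("High"))  # 2.0
```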

Confidence

Confidence is a percentage that reflects how sure you are about your Reach and Impact estimates. This is the factor that keeps teams honest -- it penalizes wishful thinking.

| Score | Label | Criteria |
| --- | --- | --- |
| 100% | High | Backed by quantitative data (analytics, A/B test results, large-sample research) |
| 80% | Medium | Supported by qualitative data (user interviews, surveys, competitive analysis) |
| 50% | Low | Based on intuition, anecdotal feedback, or very small sample sizes |

Rules of thumb:

  • If you have strong analytics data supporting both Reach and Impact, use 100%.
  • If you have user interviews or survey data but limited quantitative evidence, use 80%.
  • If you're largely guessing based on gut instinct or a single customer request, use 50%.
  • Never go below 50%. If your confidence is lower than 50%, you need to do more research before scoring -- not just assign a low confidence number.
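
These rules are easy to encode so the 50% floor is enforced rather than remembered. A sketch, assuming the three evidence tiers above (the tier names are illustrative):

```python
# Confidence tiers from the table above, with the "never go below 50%"
# floor enforced: weaker evidence means more research, not a lower score.
CONFIDENCE_TIERS = {
    "quantitative": 1.0,  # analytics, A/B tests, large-sample research
    "qualitative": 0.8,   # interviews, surveys, competitive analysis
    "intuition": 0.5,     # gut feel, anecdotes, very small samples
}

def confidence_score(evidence: str) -> float:
    score = CONFIDENCE_TIERS.get(evidence)
    if score is None:
        raise ValueError("No recognized evidence tier -- do more research before scoring")
    return score

print(confidence_score("qualitative"))  # 0.8
```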

Effort

Effort is measured in person-months (or person-weeks, or story points -- just be consistent across all initiatives). This is the total effort across all disciplines: engineering, design, QA, data science, marketing, and anything else required.

How to estimate Effort:

  • Break initiatives into rough work packages
  • Get time estimates from each discipline involved
  • Include QA, documentation, and rollout effort
  • Round up to account for unknowns

Examples:

| Initiative | Engineering | Design | QA | Total Effort |
| --- | --- | --- | --- | --- |
| Redesign onboarding | 2 months | 1 month | 0.5 months | 3.5 person-months |
| CSV export | 0.5 months | 0.25 months | 0.25 months | 1 person-month |
| Push notifications | 1.5 months | 0.5 months | 0.5 months | 2.5 person-months |
| Enterprise SSO | 3 months | 0.5 months | 1 month | 4.5 person-months |
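
Totaling per-discipline estimates is trivial to script. A sketch using the onboarding-redesign row from the table above:

```python
# Totaling Effort across disciplines for the onboarding redesign.
estimates = {"engineering": 2.0, "design": 1.0, "qa": 0.5}  # person-months
total_effort = sum(estimates.values())
print(f"Total: {total_effort} person-months")  # Total: 3.5 person-months
```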

Step-by-Step: How to Run a RICE Scoring Session

Step 1: Prepare Your Candidate List

Gather all features, projects, and initiatives being considered. Aim for 10-25 items -- too few and you don't need a framework, too many and the session becomes exhausting.

Step 2: Align on Definitions

Before scoring, ensure everyone agrees on:

  • The time period for Reach (usually one quarter)
  • The unit of measurement for Effort (person-months is standard)
  • The metric that Impact is measured against (activation, retention, revenue, etc.)
  • The confidence thresholds and what evidence is required for each level

Step 3: Score Each Initiative

Work through each initiative as a team. For each one:

  • State the initiative clearly
  • Discuss and agree on Reach (use data, not opinion)
  • Discuss and agree on Impact (reference the 3/2/1/0.5/0.25 scale)
  • Discuss and agree on Confidence (what evidence do you have?)
  • Discuss and agree on Effort (get input from engineering and design leads)
  • Calculate the RICE score

Step 4: Rank and Discuss

Sort all initiatives by RICE score from highest to lowest. Then have a critical discussion:

  • Do the top items align with your strategy?
  • Are there any surprises in the ranking?
  • Do any scores feel wrong? If so, revisit the individual components.

Step 5: Make Decisions

Use the RICE scores as a strong input to your prioritization, not the final word. Adjust for strategic considerations, dependencies, and team capacity.
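
To tie the steps together, here is a minimal Python sketch that scores a candidate list and ranks it. It uses the numbers from the worked example in the next section; the class name and structure are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    reach: float       # people per quarter
    impact: float      # 0.25 to 3, per the Impact scale
    confidence: float  # 0.5 to 1.0, as a decimal
    effort: float      # person-months

    @property
    def rice(self) -> float:
        return (self.reach * self.impact * self.confidence) / self.effort

candidates = [
    Initiative("Smart search with filters", 6000, 2, 0.8, 3),
    Initiative("Bulk action toolbar", 4000, 1, 1.0, 1),
    Initiative("Dashboard customization", 8000, 1, 0.5, 4),
    Initiative("Slack integration", 2000, 2, 0.8, 2),
]

# Sort highest score first, as in Step 4.
for item in sorted(candidates, key=lambda i: i.rice, reverse=True):
    print(f"{item.rice:>7,.0f}  {item.name}")
#   4,000  Bulk action toolbar
#   3,200  Smart search with filters
#   1,600  Slack integration
#   1,000  Dashboard customization
```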

Real-World RICE Scoring Example

Imagine you're a product manager at a B2B SaaS company with 10,000 active users. Your team has four initiatives to compare:

| Initiative | Reach | Impact | Confidence | Effort | RICE Score |
| --- | --- | --- | --- | --- | --- |
| Smart search with filters | 6,000/quarter | 2 (High) | 80% | 3 person-months | 3,200 |
| Bulk action toolbar | 4,000/quarter | 1 (Medium) | 100% | 1 person-month | 4,000 |
| Dashboard customization | 8,000/quarter | 1 (Medium) | 50% | 4 person-months | 1,000 |
| Slack integration | 2,000/quarter | 2 (High) | 80% | 2 person-months | 1,600 |

Calculations:

  • Smart search: (6,000 × 2 × 0.8) / 3 = 3,200
  • Bulk action toolbar: (4,000 × 1 × 1.0) / 1 = 4,000
  • Dashboard customization: (8,000 × 1 × 0.5) / 4 = 1,000
  • Slack integration: (2,000 × 2 × 0.8) / 2 = 1,600

The bulk action toolbar wins despite having lower reach and impact than some alternatives because it's fast to build and the team has high confidence in the estimates. Dashboard customization, despite reaching the most users, ranks last because the low confidence score and high effort drag it down.

When to Use RICE (and When Not To)

RICE Works Best When:

  • You have a large backlog of competing features and need a structured way to compare them
  • Your team tends toward opinion-based prioritization and needs a more objective framework
  • You have access to product analytics and customer data to inform Reach and Impact estimates
  • You're prioritizing within a single product where reach and effort are comparable across initiatives

RICE Is Less Effective When:

  • You're working on a brand-new product with no user data (Reach and Impact become pure guesses)
  • The initiatives are vastly different in nature (comparing a bug fix to a new product line doesn't produce meaningful scores)
  • Strategic alignment matters more than incremental optimization (RICE doesn't account for vision or market positioning)
  • You need to factor in risk, urgency, or dependencies that RICE doesn't capture

RICE vs. Other Prioritization Frameworks

| Factor | RICE | MoSCoW | Weighted Scoring | Kano Model | Value vs. Effort |
| --- | --- | --- | --- | --- | --- |
| Quantitative | Yes | No | Yes | Partially | Partially |
| Accounts for reach | Yes | No | Optional | No | No |
| Accounts for confidence | Yes | No | No | No | No |
| Ease of use | Medium | Easy | Medium | Hard | Easy |
| Best for | Feature backlogs | Release planning | Complex criteria | Customer delight | Quick triage |
| Stakeholder buy-in | High (data-driven) | High (simple) | Medium | Low | Medium |
| Handles strategic alignment | No | Somewhat | Yes (custom criteria) | No | No |

Common Mistakes and Pitfalls

1. Inflating Impact Scores

Teams consistently overestimate Impact because they're emotionally attached to their ideas. Combat this by requiring a written justification for any Impact score of 2 or 3, tied to a specific metric and evidence.

2. Ignoring the Confidence Factor

Some teams set Confidence to 100% for everything, which defeats the purpose. Enforce the rule: if you don't have quantitative data, you can't score above 80%. If you don't have qualitative data, you can't score above 50%.

3. Inconsistent Effort Estimates

One team measures Effort in story points, another in weeks, another in "t-shirt sizes." Pick one unit and stick with it. Person-months is the most universally understood.

4. Scoring in a Vacuum

Never let one person score all initiatives alone. RICE works best when engineers estimate Effort, data analysts inform Reach, and product managers calibrate Impact. Cross-functional input reduces bias.

5. Treating RICE Scores as Gospel

The score is an input to your decision, not the decision itself. A feature with a RICE score of 500 might still be the right thing to build if it's strategically critical. Use RICE to inform, not to dictate.

6. Not Revisiting Scores

Conditions change. A feature you scored six months ago may have very different Reach, Impact, or Effort numbers today. Re-score your top candidates at the start of each planning cycle.

Best Practices for RICE Implementation

Calibrate as a Team

Before your first scoring session, score 3-5 past features that have already shipped. Compare the predicted RICE scores to actual outcomes. This calibration exercise helps the team develop shared intuitions for what "Impact: 2" or "Reach: 5,000" actually means.

Document Your Assumptions

For every initiative, record why you chose each score. "Reach: 6,000 because our funnel shows 6,000 users hit the search page per quarter" is far more valuable than just "6,000." When you revisit scores later, you'll know whether the assumptions still hold.

Use a Spreadsheet or Tool

RICE scoring is best done in a shared spreadsheet or purpose-built tool like IdeaPlan where everyone can see the inputs, challenge assumptions, and track scores over time. Transparency builds trust in the process.

Set a Minimum Confidence Threshold

Establish a rule: no initiative with Confidence below 50% goes into the final ranking. Instead, those items go onto a "research needed" list. This creates a healthy pipeline where discovery work feeds into prioritization.
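
A sketch of that rule, continuing the ranking sketch from the scoring-session section (the `candidates` list and `Initiative` class come from there; the threshold constant is illustrative):

```python
# Items below the confidence floor go to a "research needed" list
# instead of the final ranking.
MIN_CONFIDENCE = 0.5

ranked = [i for i in candidates if i.confidence >= MIN_CONFIDENCE]
research_needed = [i for i in candidates if i.confidence < MIN_CONFIDENCE]
```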

Combine RICE with Strategic Themes

RICE optimizes for incremental value. To ensure you're also investing in long-term bets, layer strategic themes on top: allocate 70% of capacity to high-RICE items and 30% to strategic initiatives that might not score well on RICE but are critical for your long-term vision.

Review and Iterate

After shipping a high-scoring feature, compare predicted Reach and Impact against actual results. Did 6,000 users really use the new search? Did activation increase as expected? This feedback loop makes your future RICE estimates more accurate over time.
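
A small sketch of that check, with hypothetical post-launch numbers:

```python
# Comparing predicted vs. actual Reach a quarter after launch.
# Both figures are illustrative placeholders.
predicted_reach = 6_000
actual_reach = 4_800  # measured from product analytics

error = (actual_reach - predicted_reach) / predicted_reach
print(f"Reach estimate was off by {error:+.0%}")  # Reach estimate was off by -20%
```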

Getting Started with RICE Today

  • Pick your top 10-15 backlog items that are candidates for the next quarter
  • Gather data on user counts, support tickets, and usage patterns for each
  • Schedule a 90-minute session with your PM, engineering lead, designer, and data analyst
  • Walk through each initiative using the RICE components
  • Rank by score and discuss whether the ranking aligns with your strategy
  • Commit to a plan and document your reasoning

RICE won't solve every prioritization challenge, but it will give your team a shared vocabulary and a repeatable process for making better decisions. The framework's real power isn't in the formula itself -- it's in the structured conversations it forces your team to have about reach, impact, confidence, and effort.

Frequently Asked Questions

What does RICE stand for in product management?
RICE stands for Reach, Impact, Confidence, and Effort. It's a prioritization scoring framework developed at Intercom that helps product managers rank features and initiatives by calculating a score: (Reach × Impact × Confidence) ÷ Effort.

How do you calculate a RICE score?
Multiply the number of people reached per quarter (Reach) by the impact level (0.25 to 3), by confidence percentage (as a decimal, e.g., 0.8 for 80%), then divide by the estimated effort in person-months. Higher scores indicate higher priority.

When should you use RICE vs. other prioritization frameworks?
Use RICE when you have quantitative data on reach and impact and need to compare many features objectively. Use MoSCoW for stakeholder alignment sessions, ICE for quick estimates, and Kano for understanding customer satisfaction drivers.

What is a good RICE score?
RICE scores are relative, not absolute -- a "good" score depends on your other initiatives. Compare scores within the same product context. A feature with a score of 10 is twice the priority of one scoring 5, but scores aren't meaningful in isolation.