The Framework Is Not the Answer
Every PM conference has a session on prioritization frameworks. RICE, ICE, MoSCoW, Weighted Scoring, Value vs. Effort — the options are endless. But here is the thing: the framework matters far less than the context.
A RICE score that makes perfect sense at a growth-stage SaaS company is useless at a pre-revenue startup. A weighted scoring model that serves an enterprise PM well would paralyze a team of four. The right prioritization approach depends on your stage, your constraints, and the type of decisions you are actually making.
To make this concrete, here are three fictional PMs — based on composites of real people I have worked with — and how they each approach prioritization in their specific context.
Profile 1: Maya — Early-Stage Startup PM
The context
Maya is the first PM at a 12-person startup. They have $2M in seed funding, 8 months of runway, and about 200 beta users. The product is a collaboration tool for design teams. Maya reports directly to the CEO-founder and works with 4 engineers and 1 designer.
Her prioritization problem
Maya does not have too many features to choose from. She has too many directions the product could go. Should they double down on the file-sharing workflow that beta users love? Build the review-and-approval flow that three enterprise prospects asked about? Or invest in integrations with Figma and Sketch that seem like table stakes?
At this stage, the question is not "which feature next?" It is "which bet gives us the best chance of finding product-market fit before the money runs out?"
Her approach: The One-Metric Focus
Maya uses a single metric as her prioritization filter: weekly active users who complete at least one design review. She chose this metric because it represents the core value prop — if users are reviewing designs in the product weekly, the product is working.
Every feature candidate gets one question: "Will this measurably increase weekly active reviewers within 6 weeks?"
Maya does not use a scoring framework. She does not need one. With 4 engineers and 8 months of runway, every sprint is a strategic bet. She runs rapid hypothesis-driven development cycles: build the smallest version, ship it to beta users, measure whether it moves the one metric.
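Operationally, Maya's filter is just a distinct-user count over a trailing seven-day window. Here is a minimal sketch, assuming a hypothetical in-memory event log of completed reviews (the schema and sample data are illustrative, not her actual pipeline):

```python
from datetime import datetime, timedelta

# Hypothetical event records: (user_id, completed_at) for each design review
# completed in the product. Schema and data are illustrative.
events = [
    ("u1", datetime(2024, 3, 4)),
    ("u2", datetime(2024, 3, 5)),
    ("u1", datetime(2024, 3, 6)),
]

def weekly_active_reviewers(events, as_of):
    """Count distinct users who completed >= 1 design review in the past 7 days."""
    window_start = as_of - timedelta(days=7)
    return len({user for user, ts in events if window_start < ts <= as_of})

print(weekly_active_reviewers(events, as_of=datetime(2024, 3, 7)))  # -> 2
```

Every shipped experiment gets judged against the before-and-after of this one number.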
What makes this approach work at this stage
With 4 engineers and 8 months of runway, Maya can only place one bet at a time, so a single metric gives her fast, unambiguous decisions without framework overhead. Everyone on the team can recite the metric, which keeps the founder, the engineers, and the designer aligned on why each sprint exists.
What would break this approach
If Maya had 50 engineers and 500 feature requests, a single metric would not help her allocate across multiple teams and timelines; at that scale she would need a real framework. The one-metric filter is a stepping stone, not a permanent system.
Profile 2: Jordan — Growth-Stage PM
The context
Jordan is one of 6 PMs at a B2B SaaS company with 150 employees, $30M ARR, and 2,000 customers. Jordan owns the "onboarding and activation" product area with a team of 8 engineers and 2 designers. The company has product-market fit and is focused on efficient growth.
His prioritization problem
Jordan has more validated ideas than capacity. His backlog contains 40 items: improvements to the onboarding flow, new activation triggers, integrations requested by sales, accessibility improvements, and technical debt from three years of rapid building. His team can realistically ship 12-15 things per quarter.
The question is not "what should we build?" It is "in what order should we build the things we already know are valuable?"
His approach: Modified RICE with Team Input
Jordan uses the RICE framework as a starting point, but with two modifications:
Modification 1: Engineers own the Effort estimates. Instead of the PM guessing effort, Jordan runs a quarterly estimation session where engineers t-shirt size every item in the backlog. The session takes 90 minutes and produces dramatically more accurate estimates than PM guesswork.
Modification 2: Confidence is earned, not assumed. Jordan requires specific evidence for each confidence score:
| Confidence level | Evidence required |
|---|---|
| High (100%) | Customer research + quantitative validation (A/B test, analytics data) |
| Medium (80%) | Customer research OR quantitative data, but not both |
| Low (50%) | PM intuition or single anecdote |
This prevents the common RICE failure mode where PMs give everything 80% confidence because it feels reasonable. You can run these numbers through the RICE calculator to see how confidence adjustments change the ranking.
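To make the mechanics concrete, here is a minimal sketch of the modified RICE calculation. The evidence-to-confidence mapping mirrors the table above; the backlog items, their numbers, and the t-shirt-size-to-effort mapping are illustrative assumptions, not Jordan's real data:

```python
# Evidence-based confidence (Modification 2): scores are looked up, not guessed.
CONFIDENCE = {"research+quant": 1.0, "research_or_quant": 0.8, "intuition": 0.5}

# Assumed mapping from engineers' t-shirt sizes (Modification 1) to person-months.
EFFORT = {"XS": 0.25, "S": 0.5, "M": 1.0, "L": 2.0, "XL": 4.0}

def rice(reach, impact, evidence, size):
    """RICE = (Reach * Impact * Confidence) / Effort."""
    return reach * impact * CONFIDENCE[evidence] / EFFORT[size]

backlog = [
    ("In-app onboarding checklist", 2000, 2, "research+quant", "M"),
    ("Salesforce integration", 300, 3, "intuition", "L"),
    ("Activation email triggers", 1500, 1, "research_or_quant", "S"),
]

for name, *args in sorted(backlog, key=lambda item: -rice(*item[1:])):
    print(f"{rice(*args):8.1f}  {name}")
```

Note how the intuition-backed item sinks in the ranking even with a high impact score; that is the evidence requirement doing its job.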
The quarterly planning ritual ties the two modifications together: engineers t-shirt size the backlog, Jordan scores Reach, Impact, and evidence-backed Confidence for each item, and the resulting RICE ranking sets the default build order for the quarter.
What makes this approach work at this stage
Jordan's problem is sequencing, not discovery: the ideas are already validated, so a scoring framework that orders 40 items against a capacity of 12-15 is exactly the right tool. The two modifications keep the scores honest: engineers trust effort numbers they produced themselves, and confidence has to be earned with evidence.
What would break this approach
If Jordan were managing across multiple teams or stakeholder groups with competing priorities, RICE alone would not resolve conflicts. Political alignment and executive decision-making would be needed alongside the framework. RICE is a tool for within-team prioritization, not for cross-organizational negotiation.
Profile 3: Priya — Enterprise PM
The context
Priya is a Senior PM at a public enterprise software company with 3,000 employees. She manages a platform area that other product teams build on. Her "customers" are both external users (large enterprises) and internal teams (5 product teams that depend on her platform). She has 20 engineers and 4 designers.
Her prioritization problem
Priya's prioritization challenges are multi-dimensional: external enterprise customers want features, five internal product teams need platform work, engineering needs capacity for technical health, and executives push strategic initiatives. She cannot optimize for one dimension without under-serving the others.
Her approach: Portfolio Allocation + Weighted Scoring Within Buckets
Priya uses a two-level system:
Level 1: Portfolio allocation (quarterly)
She allocates her team's capacity across four buckets:
| Bucket | % of capacity | Decision maker |
|---|---|---|
| Customer-facing features | 40% | Priya + customer advisory board |
| Internal platform work | 25% | Priya + dependent PM teams |
| Technical health | 20% | Engineering lead |
| Strategic initiatives | 15% | VP of Product |
These percentages are negotiated each quarter based on business context. If the company is pushing for enterprise compliance certifications, the customer-facing bucket grows. If platform stability is degrading, the technical health bucket grows.
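The Level 1 arithmetic is simple but worth making explicit. A sketch, assuming capacity is measured in engineer-weeks over a 12-week quarter (the unit and quarter length are my assumptions; the percentages come from the table above):

```python
# 20 engineers * 12 weeks = 240 engineer-weeks of quarterly capacity.
engineers, weeks_per_quarter = 20, 12
capacity = engineers * weeks_per_quarter

buckets = {
    "customer_facing": 0.40,
    "internal_platform": 0.25,
    "technical_health": 0.20,
    "strategic": 0.15,
}

for bucket, share in buckets.items():
    print(f"{bucket:18} {share * capacity:6.0f} engineer-weeks")
```

Fixing the bucket sizes first means the hard negotiation happens once per quarter, at the allocation level, instead of item by item.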
Level 2: Weighted scoring within each bucket
Within each bucket, Priya uses a weighted scoring model with criteria specific to that bucket:
Customer-facing features are scored on: revenue impact (30%), customer breadth (25%), competitive necessity (20%), effort (15%), strategic alignment (10%).
Internal platform work is scored on: number of teams unblocked (40%), effort (30%), architectural impact (30%).
Technical health items are scored by the engineering lead using their own criteria (severity, blast radius, fix difficulty).
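A weighted score is just the sum of weight times raw score across criteria. Here is a sketch for the customer-facing bucket, using the weights from the text; the 1-5 scoring scale, the example item, and the convention that lower effort earns a higher score are illustrative assumptions:

```python
CUSTOMER_WEIGHTS = {
    "revenue_impact": 0.30,
    "customer_breadth": 0.25,
    "competitive_necessity": 0.20,
    "effort": 0.15,          # assumed: scored so lower effort earns a higher score
    "strategic_alignment": 0.10,
}

def weighted_score(scores, weights):
    """Sum of weight * raw score (1-5 scale) for each criterion."""
    return sum(weights[c] * scores[c] for c in weights)

# Hypothetical feature candidate with 1-5 raw scores per criterion.
sso_audit_logs = {
    "revenue_impact": 5, "customer_breadth": 4, "competitive_necessity": 5,
    "effort": 2, "strategic_alignment": 3,
}
print(f"{weighted_score(sso_audit_logs, CUSTOMER_WEIGHTS):.2f}")  # -> 4.10
```

Because each bucket has its own criteria and weights, items only ever compete against peers in the same bucket, never across buckets.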
What makes this approach work at this stage
With 20 engineers and four competing sources of demand, the two-level system makes trade-offs explicit: stakeholders argue about bucket sizes once a quarter instead of relitigating every item, and the scoring within each bucket gives dependent teams and customers a transparent, predictable answer about where their request stands.
What would break this approach
At a startup. The overhead of maintaining four scoring models, running quarterly allocation negotiations, and managing multi-stakeholder input would consume more time than the decisions warrant. This system makes sense when you have 20 engineers and 4 dimensions of demand. It is overkill for a team of 5.
The Meta-Lesson
The right prioritization approach is determined by three factors:
1. Team size
| Team size | Approach |
|---|---|
| 1-8 engineers | Single metric focus or simple rank-ordering |
| 8-20 engineers | Scoring framework (RICE, ICE, weighted) |
| 20+ engineers | Portfolio allocation + scoring within buckets |
2. Decision frequency
If you are making prioritization decisions weekly (early stage), you need a lightweight system. If you are making them quarterly (enterprise), you can afford a heavier process because the decision has to hold for 3 months.
3. Stakeholder complexity
Solo PM with one team and one founder: just decide. PM with multiple stakeholder groups: you need a system that creates transparency and perceived fairness.
How to Choose Your Approach
Ask yourself: How many engineers are you prioritizing for? How often do your prioritization decisions need to change? And how many stakeholder groups need to see the decision as fair? Your answers map directly onto the three factors above.
The worst prioritization approach is the one you do not actually use. A simple system applied consistently beats a sophisticated system applied sporadically. Start with the lightest-weight approach that works for your context, and add complexity only when the lightweight approach starts to fail.