What This Template Is For
Stack ranking forces a strict 1-through-N ordering of your features with no ties allowed. Unlike scoring frameworks, which can hand identical scores to very different items, stack ranking requires an explicit tradeoff decision for every pair of features. The result is a definitive ordered list: position 1 is your most important feature, and every position after it reflects a deliberate decision to rank that feature below everything above it.
This template walks your team through a structured stack ranking process. It includes criteria definition, individual ranking, team calibration, and a final consensus rank. Stack ranking works best for small-to-medium backlogs (8 to 25 items) where you need a clear build order. For larger backlogs, start with a scoring framework like RICE to filter down to a shortlist, then stack rank the top candidates.
Stack ranking is a useful complement to other prioritization methods. Where the RICE vs ICE vs MoSCoW comparison shows trade-offs between scoring systems, stack ranking removes the ambiguity of tied scores entirely. Use it when your team needs to make hard calls about what ships first, second, and third. The Product Strategy Handbook covers how to connect these ranking decisions to your broader product direction.
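If you do need that pre-filter, the arithmetic is straightforward. Here is a minimal sketch using the standard RICE formula (reach × impact × confidence, divided by effort); the feature names and input numbers are placeholders for illustration, not data from the example later in this guide.

```python
# Illustrative RICE pre-filter. Feature names and inputs are placeholders.
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Standard RICE score: reach x impact x confidence, divided by effort."""
    return (reach * impact * confidence) / effort

backlog = {
    "Feature A": rice_score(reach=4000, impact=2.0, confidence=0.8, effort=3),
    "Feature B": rice_score(reach=1500, impact=3.0, confidence=0.5, effort=2),
    "Feature C": rice_score(reach=600, impact=1.0, confidence=0.9, effort=1),
}

# Keep only the top candidates (the 8-25 items you will actually stack rank).
shortlist = sorted(backlog, key=backlog.get, reverse=True)[:25]
print(shortlist)  # highest-scoring features first
```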
When to Use This Template
- Your scoring framework produced too many ties and the team cannot decide what ships next
- You need to present a definitive build order to leadership for roadmap approval
- The team has 8 to 25 features and needs to commit to a sequence, not just a tier
- You want to force explicit tradeoff conversations that scoring alone does not create
- Release planning requires a strict cut line (items above the line ship, items below do not)
- Quarterly planning needs a clear priority order for resource allocation decisions
How to Use This Template
Step 1: Define ranking criteria. Agree on 2 to 4 criteria that define "higher priority" for this ranking exercise. Common criteria include strategic alignment, customer demand, revenue impact, and implementation risk.
Step 2: List features. Add every feature under consideration. Keep the list between 8 and 25 items. If you have more, pre-filter using a scoring framework first.
Step 3: Individual ranking. Each team member independently ranks all features from 1 (highest priority) to N. No ties allowed. This forces each person to make explicit tradeoff decisions.
Step 4: Calibration discussion. Compare individual rankings. Discuss features whose individual ranks are 5 or more positions apart; these are the items where the team disagrees most and needs alignment.
Step 5: Consensus ranking. Produce a single team ranking. Draw a cut line showing what fits in the current cycle. Document the rationale for the top 5 and any features that shifted significantly during calibration.
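To make Steps 3 through 5 concrete, here is a minimal Python sketch of the calibration math, assuming a simple dictionary of hypothetical rankings. Average rank suggests a draft consensus order, and the spread between a feature's best and worst individual ranks tells you which items need calibration discussion.

```python
from statistics import mean, variance

# Hypothetical fragment of a team's individual rankings (1 = highest priority).
# In practice every person ranks every feature 1 to N with no ties.
rankings = {
    "Feature A": {"PM": 1, "Eng": 2, "Design": 3, "Sales": 1},
    "Feature B": {"PM": 4, "Eng": 3, "Design": 9, "Sales": 2},
    "Feature C": {"PM": 7, "Eng": 8, "Design": 4, "Sales": 5},
}

DIVERGENCE_THRESHOLD = 5  # flag features whose ranks are 5+ positions apart

for feature, ranks in rankings.items():
    values = list(ranks.values())
    spread = max(values) - min(values)
    flag = "  <- discuss in calibration" if spread >= DIVERGENCE_THRESHOLD else ""
    print(f"{feature}: avg rank {mean(values):.2f}, "
          f"sample variance {variance(values):.1f}, spread {spread}{flag}")

# Average rank gives a starting point for the consensus ranking in Step 5.
draft_order = sorted(rankings, key=lambda f: mean(rankings[f].values()))
print("Draft consensus order:", draft_order)
```

Treat the draft order as a starting point only; the final consensus ranking in Step 5 should come out of the calibration conversation, not the averages.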
The Template
# Stack Ranking Template
## Ranking Criteria
Define 2-4 criteria for "higher priority." Weight each criterion if needed.
| Criterion | Weight | Description |
|-------------------------|--------|-------------------------------------------|
| | | |
| | | |
| | | |
## Feature List
| # | Feature Name | Owner | Notes |
|---|-------------------------|---------------|------------------------------|
| 1 | | | |
| 2 | | | |
| 3 | | | |
| 4 | | | |
| 5 | | | |
| 6 | | | |
| 7 | | | |
| 8 | | | |
| 9 | | | |
| 10| | | |
## Individual Rankings
| Feature | Person A | Person B | Person C | Person D | Avg Rank | Variance |
|-------------------|----------|----------|----------|----------|----------|----------|
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |
## Calibration Notes
Features whose individual ranks are 5 or more positions apart:
| Feature           | Best Rank | Worst Rank | Discussion Notes            |
|-------------------|-----------|------------|-----------------------------|
| | | | |
| | | | |
## Final Consensus Ranking
| Rank | Feature | Rationale |
|------|----------------------|---------------------------------------|
| 1 | | |
| 2 | | |
| 3 | | |
| 4 | | |
| 5 | | |
| ---- | --- CUT LINE --- | Items below may not ship this cycle |
| 6 | | |
| 7 | | |
| 8 | | |
## Decision Record
- **Date:**
- **Participants:**
- **Cycle:** [Q2 2026 / Sprint 14 / etc.]
- **Capacity:** [X features above cut line]
- **Key tradeoff:**
Filled Example: B2B Collaboration Platform
# Stack Ranking: CollabSpace Q2 Feature Priorities
## Ranking Criteria
| Criterion | Weight | Description |
|-------------------------|--------|-------------------------------------------------|
| Customer demand | 40% | Frequency in support tickets and sales requests |
| Revenue impact | 30% | Expected effect on expansion revenue and churn |
| Strategic alignment | 20% | Fits our "enterprise-ready" Q2 theme |
| Implementation risk | 10% | Lower risk = higher ranking (inverse) |
## Individual Rankings
| Feature | PM | Eng Lead | Design | Sales | Avg Rank | Variance |
|------------------------------|------|----------|--------|-------|----------|----------|
| SSO/SAML integration | 1 | 2 | 3 | 1 | 1.75 | 0.9 |
| Role-based access controls | 2 | 1 | 2 | 3 | 2.00 | 0.7 |
| Guest user access             | 3    | 4        | 1      | 5     | 3.25     | 2.9      |
| Audit log                     | 4    | 3        | 7      | 2     | 4.00     | 4.7      |
| Custom branding               | 7    | 8        | 4      | 4     | 5.75     | 4.3      |
| Advanced search               | 5    | 5        | 5      | 8     | 5.75     | 2.3      |
| API rate limit dashboard | 6 | 6 | 8 | 7 | 6.75 | 0.9 |
| Slack digest notifications | 8 | 7 | 6 | 6 | 6.75 | 0.9 |
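In this table, Avg Rank is the mean of the four individual ranks and Variance is the sample variance of those ranks. For SSO/SAML integration, for example: (1 + 2 + 3 + 1) / 4 = 1.75 for the average, and ((1 - 1.75)² + (2 - 1.75)² + (3 - 1.75)² + (1 - 1.75)²) / 3 ≈ 0.9 for the variance.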
## Calibration Notes
| Feature           | Best Rank | Worst Rank | Discussion Notes                                 |
|-------------------|-----------|------------|--------------------------------------------------|
| Guest user access | 1 (Des) | 5 (Sales) | Design sees it as core UX. Sales says enterprise buyers do not ask for it. Agreed: rank 4. |
| Audit log | 2 (Sales)| 7 (Des) | Sales has 3 enterprise deals blocked on audit log. Design deprioritized because of low user interaction. Agreed: rank 3. |
| Custom branding | 4 (Des) | 8 (Eng) | Design sees brand control as high value. Eng says high effort for low strategic fit. Agreed: rank 7. |
## Final Consensus Ranking
| Rank | Feature | Rationale |
|------|----------------------------|--------------------------------------------------|
| 1 | SSO/SAML integration | Top enterprise blocker. 5 deals pending |
| 2 | Role-based access controls | Pairs with SSO for enterprise security story |
| 3 | Audit log | 3 enterprise deals blocked. Fast to build |
| 4 | Guest user access | Enables external collaboration use case |
| 5 | Advanced search | Consistent mid-rank, universal user value |
| ---- | --- CUT LINE --- | Below items move to Q3 |
| 6 | Slack digest notifications | Nice to have but not enterprise-blocking |
| 7 | Custom branding | High effort relative to strategic impact |
| 8 | API rate limit dashboard | Developer-facing, small audience |
## Decision Record
- **Date:** 2026-03-05
- **Participants:** PM, Eng Lead, Design Lead, Sales Lead
- **Cycle:** Q2 2026
- **Capacity:** 5 features above cut line
- **Key tradeoff:** Custom branding dropped to 7 despite strong design push. Enterprise security (SSO + RBAC + audit) took priority based on revenue pipeline data.
Key Takeaways
- Stack ranking eliminates ties by forcing a strict 1-through-N order with no duplicates allowed
- The method works best for 8 to 25 items. For larger backlogs, pre-filter with a scoring framework first
- Individual rankings before group discussion reduce groupthink and surface genuine disagreements
- High-divergence items (whose individual ranks are 5 or more positions apart) are the most valuable to discuss
- The cut line is the most important output. It clearly separates "ships this cycle" from "does not ship"
- Document your rationale. Six months from now, you will want to know why feature X ranked above feature Y
