Quick Answer (TL;DR)
MoSCoW is a prioritization technique that sorts requirements into four categories: Must have (non-negotiable for launch), Should have (important but not critical), Could have (nice-to-have if time allows), and Won't have (explicitly out of scope for now). It was created by Dai Clegg at Oracle and is widely used in agile development, particularly in DSDM. MoSCoW excels at building stakeholder consensus and setting clear expectations about what will and won't be delivered.
What Is the MoSCoW Prioritization Method?
MoSCoW is a prioritization technique that categorizes features, requirements, or user stories into four distinct buckets based on their importance to a specific release or time period. Unlike numerical scoring systems, MoSCoW uses plain language categories that anyone -- from engineers to executives -- can understand immediately.
The name is an acronym (the lowercase "o"s are added for pronunciation):

- **M** -- Must have
- **S** -- Should have
- **C** -- Could have
- **W** -- Won't have (this time)
MoSCoW was originally developed by Dai Clegg while working at Oracle in 1994 and later became a core technique in the Dynamic Systems Development Method (DSDM). Today it's used across agile, waterfall, and hybrid development environments because of its simplicity and effectiveness at driving alignment.
The Four MoSCoW Categories Explained
Must Have
Must-have requirements are non-negotiable. If any Must-have item is missing, the release is a failure. The product doesn't work, the launch can't happen, or a legal/compliance obligation is unmet.
The test: Ask yourself, "If we ship without this, does the product fundamentally not work for its intended purpose?" If yes, it's a Must have.
Examples:

- Payment processing for an e-commerce checkout -- without it, there is no product
- Login and authentication for a product built around user accounts
- A compliance or legal requirement the product cannot ship without

Guidelines:

- Keep the list ruthlessly short; Must-haves should consume no more than 60% of available capacity
- If a workaround exists, even a clumsy one, the item is not a Must have
- Tie each Must have to a concrete failure: what breaks, or what obligation goes unmet, if it's missing
Should Have
Should-have requirements are important but not essential for this release. The product works without them, but it's significantly less valuable. These are features you fully intend to include and would be disappointed to cut.
The test: "The product works without this, but it's materially worse, and users will notice the gap."
Examples:

- An activity feed that improves team awareness but has a notification workaround
- Data export that users can approximate with copy-paste in the interim
- Performance improvements beyond the minimum acceptable baseline

Guidelines:

- Plan to ship most of your Should-haves; cut them only under genuine schedule pressure
- A Should have typically has a workaround, even an awkward one
- Revisit borderline Should-haves mid-cycle -- they are the most likely to change category
Could Have
Could-have requirements are desirable but not necessary. They're genuine improvements that users would appreciate, but their absence won't significantly impact the product's success. These are the first items to be cut when time or resources are constrained.
The test: "Users would like this, but they won't complain if it's missing."
Examples:

- Custom themes, colors, and other cosmetic personalization
- Keyboard shortcuts for power users
- Setup templates that speed onboarding but aren't required for it

Guidelines:

- Treat Could-haves as your schedule buffer: they absorb overruns from Must-haves and Should-haves
- Never promise Could-haves to stakeholders or customers
- Only start a Could have once the Must-haves are on track
Won't Have (This Time)
Won't-have items are explicitly out of scope for this release. This category is arguably the most important because it sets clear boundaries and prevents scope creep. These items are acknowledged as valid ideas but are consciously deferred.
The test: "We agree this is valuable, but we're choosing not to do it now."
Examples:

- Cross-product search that requires significant technical groundwork
- An analytics dashboard waiting on data infrastructure that doesn't exist yet
- Guest access whose scope is simply too large for this release

Guidelines:

- Record the reason each item was deferred so the next planning cycle starts informed
- Get explicit stakeholder acknowledgment of the Won't-have list -- silence is not agreement
- Revisit the list each cycle; Won't have means "not now," not "never"
How to Run a MoSCoW Prioritization Session
Before the Session
1. Define the Scope
Establish what you're prioritizing for: a specific release, a quarter, an MVP, or a sprint. The timeframe matters because it determines resource constraints, which drive the categorization.
2. Prepare the Candidate List
Gather all requirements, features, and user stories being considered. Write each one clearly enough that all participants understand it. Include effort estimates if available -- they'll be critical for validating that your Must-haves fit within capacity.
3. Invite the Right People
A MoSCoW session works best with 5-8 participants:

- A product manager or product owner to facilitate
- An engineering lead who can speak to effort and feasibility
- A design lead who can speak to the user experience
- One or two key business stakeholders (e.g. sales, support, or marketing)
4. Set Ground Rules
Establish these rules before you start:

- Every item gets exactly one category -- no "Must/Should" compromises
- "Must have" means the category test passes, not that someone wants it badly
- No categorization is final until the Must-haves are validated against capacity
- Dissent is documented, not relitigated; the facilitator makes the final call
During the Session
Step 1: Review Capacity (10 minutes)
Start by establishing how much total effort is available for the release. If you have a team of 5 engineers for a 6-week sprint, that's 30 person-weeks. This number is the constraint everything else is measured against.
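The capacity arithmetic is simple enough to script. A minimal Python sketch, using the illustrative figures from the text:

```python
# Total effort available for a release, in person-weeks.
def capacity_person_weeks(engineers: int, weeks: int) -> int:
    return engineers * weeks

# The example above: a team of 5 engineers for a 6-week sprint.
print(capacity_person_weeks(5, 6))  # 30 person-weeks
```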
Step 2: Walk Through Each Item (60-90 minutes)
For each item on the list:

1. Read the item aloud and confirm everyone understands it the same way.
2. Have its owner propose a category with a one-sentence rationale.
3. Apply the relevant category test (e.g. "Does the product fundamentally not work without this?").
4. Discuss briefly, decide, and record the category, rationale, and effort estimate.
Step 3: Validate the Must-Haves (15 minutes)
After categorizing everything, add up the estimated effort for all Must-haves. If it exceeds 60% of capacity, you have a problem. Go back and challenge each Must have:

- "If we shipped without this, would the release genuinely fail?"
- "Is there a workaround, even a clumsy or manual one?"
- "Could a reduced version of this item deliver the essential part?"
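The validation step can be sketched in a few lines of Python. The item names and effort figures here are hypothetical; only the 60% threshold comes from the rule above:

```python
# Check whether Must-haves fit within 60% of release capacity.
# Items and effort numbers are hypothetical examples.
MUST_HAVE_THRESHOLD = 0.60

def must_have_share(items, capacity):
    """Fraction of capacity consumed by Must-have items."""
    must_effort = sum(effort for category, effort in items if category == "Must")
    return must_effort / capacity

items = [
    ("Must", 8),    # e.g. core workflow
    ("Must", 12),   # e.g. role-based permissions
    ("Should", 6),  # e.g. activity feed
]
share = must_have_share(items, capacity=30)
print(f"Must-haves: {share:.0%} of capacity")  # 20/30 -> 67%, over the threshold
if share > MUST_HAVE_THRESHOLD:
    print("Over the 60% rule -- go back and challenge each Must have.")
```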
Step 4: Review and Confirm (15 minutes)
Read back the full list by category. Confirm everyone agrees with the final categorization. Document any dissenting views and the rationale for the final decision.
After the Session

Share the categorized list with all stakeholders, including the Won't-haves and the rationale behind each decision, and get explicit acknowledgment of what is out of scope. Schedule a mid-cycle check-in to revisit categories as estimates and priorities shift.
Real-World MoSCoW Example: Launching a Team Collaboration Feature
Imagine you're a product manager at a project management company (similar to Asana or Monday.com), and your team is planning a new "Team Workspaces" feature for Q2. You have 8 weeks and 4 engineers.
| Requirement | Category | Rationale |
|---|---|---|
| Create and name a workspace | Must have | Core functionality -- without this the feature doesn't exist |
| Invite team members by email | Must have | A workspace with no way to add people is useless |
| Role-based permissions (admin, member) | Must have | Security requirement from enterprise customers |
| Shared task board within workspace | Must have | Primary use case for workspaces |
| Activity feed showing team actions | Should have | Important for awareness, but teams can use notifications |
| Workspace-level file storage | Should have | Users expect it, but can use external tools temporarily |
| Custom workspace themes/colors | Could have | Nice personalization, zero functional impact |
| Workspace templates for quick setup | Could have | Helpful for adoption, but not critical for launch |
| Cross-workspace search | Won't have | Complex technically, deferred to Q3 |
| Workspace analytics dashboard | Won't have | Requires data infrastructure work, planned for Q4 |
| Guest access for external collaborators | Won't have | Important, but scope is too large for this release |
Effort check: The four Must-haves are estimated at 18 person-weeks. Available capacity is 32 person-weeks (4 engineers x 8 weeks). Must-haves represent 56% of capacity -- under the 60% threshold. The two Should-haves are estimated at 8 person-weeks, leaving 6 person-weeks for Could-haves and buffer.
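The effort check above can be verified with a quick script (all figures taken from the example):

```python
# Effort check for the Team Workspaces example.
engineers, weeks = 4, 8
capacity = engineers * weeks                 # 32 person-weeks
must_effort, should_effort = 18, 8

must_share = must_effort / capacity
remaining = capacity - must_effort - should_effort

print(f"Must-haves: {must_share:.0%} of capacity")      # 56%, under the 60% threshold
print(f"Left for Could-haves and buffer: {remaining} person-weeks")  # 6
```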
MoSCoW for Different Contexts
MVP Planning
MoSCoW is exceptionally effective for defining MVPs. When Airbnb was building their initial platform, the Must-haves were brutally simple: hosts can list a space, guests can browse listings, guests can book and pay. Everything else -- reviews, messaging, photography services -- was Should-have or later.
For MVP planning, be aggressive about limiting Must-haves. If your MVP has more than 5-7 Must-have items, your scope is probably too large.
Sprint Planning
In sprint-level MoSCoW, the categories take on a more immediate flavor:

- **Must have**: the sprint commitment -- the sprint fails without it
- **Should have**: expected to land, but can slip to the next sprint without drama
- **Could have**: pulled in only if the team runs ahead of schedule
- **Won't have**: explicitly out of this sprint and back to the backlog
Quarterly/Annual Planning
At a higher level, MoSCoW helps align leadership on strategic investments:

- **Must have**: the strategic bets the company is committing to this period
- **Should have**: investments leadership expects to fund if capacity allows
- **Could have**: opportunistic work teams may pick up alongside the bets
- **Won't have**: initiatives explicitly deferred, so teams stop lobbying for them mid-cycle
Common Mistakes and Pitfalls
1. Everything Is a Must Have
This is the most common MoSCoW failure. When stakeholders label everything as Must have, the framework loses all value. Combat this by enforcing the 60% rule: Must-haves cannot exceed 60% of available capacity. If they do, force a re-evaluation.
2. Skipping the Won't-Have Category
Teams often neglect Won't-haves because it feels negative. But this is the category that prevents scope creep and misaligned expectations. Always explicitly list what you're not doing and ensure stakeholders acknowledge it.
3. No Effort Estimates
Without effort estimates, you can't validate whether your Must-haves are achievable. A MoSCoW list where Must-haves consume 150% of capacity is not a plan -- it's a wish list.
4. Confusing Must Have with "Stakeholder Wants It Badly"
The loudest voice in the room doesn't determine what's a Must have. Apply the test: "Does the product literally not work or not launch without this?" If the answer is no, it's a Should have at most, regardless of who's asking.
5. Not Revisiting Categories Mid-Cycle
Priorities change. Halfway through a release, a Should have might become a Must have due to a competitor move, or a Must have might turn out to require double the estimated effort. Schedule mid-cycle MoSCoW check-ins to adapt.
6. Using MoSCoW for Long Backlogs
MoSCoW works best for 15-40 items tied to a specific release or time period. If you're trying to MoSCoW 200 backlog items, use a scoring framework like RICE first to narrow the list, then apply MoSCoW to the shortlist.
MoSCoW vs. Other Prioritization Frameworks
| Factor | MoSCoW | RICE | Weighted Scoring | Kano | Value vs. Effort |
|---|---|---|---|---|---|
| Output | Categories | Numerical score | Numerical score | Feature categories | 2x2 matrix |
| Quantitative | No | Yes | Yes | Partially | Partially |
| Stakeholder alignment | Excellent | Good | Moderate | Poor | Good |
| Ease of use | Very easy | Medium | Medium | Hard | Very easy |
| Best for | Release planning, MVPs | Large backlogs | Complex multi-criteria decisions | Understanding user delight | Quick triage |
| Handles dependencies | Somewhat | No | No | No | No |
| Team size needed | 5-8 people | 2-4 people | 3-5 people | Requires user surveys | 2-3 people |
Best Practices for Effective MoSCoW
Use MoSCoW with Time-Boxing
MoSCoW works best when paired with a fixed timebox -- a sprint, a release, a quarter. The constraint forces real trade-offs. Without a deadline, everything eventually becomes a Must have "for someday."
Make the Must-Have Test Rigorous
Create a written definition of what qualifies as a Must have for your team. Post it in the room during MoSCoW sessions. Example criteria:

- The release cannot legally or contractually ship without it
- A core user journey is impossible without it
- There is no workaround, even a manual one
Document Everything
For each item, record:

- The final category
- A one-sentence rationale
- The effort estimate used during validation
- Any dissenting views and how they were resolved
Involve Customers Indirectly
Don't invite customers to your MoSCoW session, but do bring customer evidence. Support ticket volumes, NPS verbatims, churn reasons, and usage data should inform categorization. When someone pushes for Must have, ask: "What customer data supports this?"
Re-Use Won't-Haves as Input for Next Cycle
Start your next planning cycle by reviewing the Won't-have list from the previous cycle. Some items will have become more urgent; others will have become irrelevant. This creates a natural feedback loop that ensures good ideas don't get permanently lost.
Combine with RICE for Large Backlogs
For teams with large backlogs (50+ items), use RICE scoring first to rank everything by quantitative score. Then take the top 20-30 items and apply MoSCoW to determine what makes it into the next release. This two-step approach gives you both analytical rigor and stakeholder alignment.
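A minimal sketch of the two-step approach. The backlog items and RICE inputs below are hypothetical; the RICE formula itself is (Reach x Impact x Confidence) / Effort:

```python
# Step 1: score a large backlog with RICE.
# Step 2: shortlist the top items for a MoSCoW session.
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach * Impact * Confidence) / Effort."""
    return reach * impact * confidence / effort

# Hypothetical backlog with illustrative RICE inputs.
backlog = {
    "Cross-workspace search": rice_score(4000, 2.0, 0.8, 6),
    "Workspace templates":    rice_score(1500, 1.0, 0.9, 2),
    "Custom themes":          rice_score(900,  0.5, 1.0, 1),
}

# Take the top-scoring items into the MoSCoW session.
shortlist = sorted(backlog, key=backlog.get, reverse=True)[:2]
print(shortlist)
```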
Getting Started with MoSCoW
MoSCoW's greatest strength is its simplicity. It doesn't require complex formulas, specialized tools, or data science expertise. It requires only that your team has an honest conversation about what truly matters for the next release -- and what doesn't. That conversation, more than any framework, is what drives great product decisions.