Prioritization • Beginner • 13 min read

MoSCoW Prioritization Method: The Complete Guide for Product Teams

Learn the MoSCoW prioritization method with real examples, session templates, and stakeholder alignment tips for effective product planning.

Best for: Product teams who need a simple, collaborative method for aligning stakeholders on feature priorities
By Tim Adair • Published 2026-02-08

Quick Answer (TL;DR)

MoSCoW is a prioritization technique that sorts requirements into four categories: Must have (non-negotiable for launch), Should have (important but not critical), Could have (nice-to-have if time allows), and Won't have (explicitly out of scope for now). It was created by Dai Clegg at Oracle and is widely used in agile development, particularly in DSDM. MoSCoW excels at building stakeholder consensus and setting clear expectations about what will and won't be delivered.


What Is the MoSCoW Prioritization Method?

MoSCoW is a prioritization technique that categorizes features, requirements, or user stories into four distinct buckets based on their importance to a specific release or time period. Unlike numerical scoring systems, MoSCoW uses plain language categories that anyone -- from engineers to executives -- can understand immediately.

The name is an acronym (the lowercase "o"s are added for pronunciation):

  • M -- Must have
  • S -- Should have
  • C -- Could have
  • W -- Won't have (this time)

MoSCoW was originally developed by Dai Clegg while working at Oracle in 1994 and later became a core technique in the Dynamic Systems Development Method (DSDM). Today it's used across agile, waterfall, and hybrid development environments because of its simplicity and effectiveness at driving alignment.

    The Four MoSCoW Categories Explained

    Must Have

    Must-have requirements are non-negotiable. If any Must-have item is missing, the release is a failure. The product doesn't work, the launch can't happen, or a legal/compliance obligation is unmet.

    The test: Ask yourself, "If we ship without this, does the product fundamentally not work for its intended purpose?" If yes, it's a Must have.

    Examples:

  • User authentication for a banking app (security requirement)
  • Payment processing for an e-commerce checkout
  • GDPR consent management for a product launching in the EU
  • Core data migration for an enterprise platform switch

    Guidelines:

  • Must-haves should represent no more than 60% of the total effort in a release. If everything is a Must have, nothing is.
  • Every Must have should have a clear rationale: compliance, contractual obligation, or the product literally breaks without it.
  • Challenge every Must have with: "What happens if we ship without this?" If the answer isn't catastrophic, downgrade it.

    Should Have

    Should-have requirements are important but not essential for this release. The product works without them, but it's significantly less valuable. These are features you fully intend to include and would be disappointed to cut.

    The test: "The product works without this, but it's materially worse, and users will notice the gap."

    Examples:

  • Search filters on a product catalog (the catalog works, but it's harder to use)
  • Email notifications for status updates (users can check manually, but it's inconvenient)
  • Bulk editing capability (users can edit one-by-one, but it's slow for power users)
  • Dashboard with key metrics (users can query data manually, but it's not self-serve)

    Guidelines:

  • Should-haves are the first things to cut when you run out of time, but they should be scheduled for the very next release.
  • They typically represent 20-30% of total effort.
  • If a Should have keeps getting bumped from release to release, either promote it to Must have or reconsider whether it matters.

    Could Have

    Could-have requirements are desirable but not important. They're genuine improvements that users would appreciate, but their absence won't significantly impact the product's success. These are the first items to be cut when time or resources are constrained.

    The test: "Users would like this, but they won't complain if it's missing."

    Examples:

  • Dark mode for a productivity app
  • Animated transitions between screens
  • Optional integrations with third-party tools
  • Advanced customization options for power users

    Guidelines:

  • Could-haves typically represent 10-20% of total effort.
  • They serve as a buffer. When Must-haves take longer than expected (and they always do), Could-haves get cut first without pain.
  • Track them in your backlog -- they often become Should-haves or Must-haves in future releases as the product matures.

    Won't Have (This Time)

    Won't-have items are explicitly out of scope for this release. This category is arguably the most important because it sets clear boundaries and prevents scope creep. These items are acknowledged as valid ideas but are consciously deferred.

    The test: "We agree this is valuable, but we're choosing not to do it now."

    Examples:

  • Mobile app when you're focused on web first
  • AI-powered recommendations in an MVP
  • Multi-language support for a product launching in a single market
  • White-label capabilities for an early-stage product

    Guidelines:

  • Always include Won't-haves in your MoSCoW list. They prevent "but I thought we were going to build that" conversations later.
  • Won't-have doesn't mean "never." It means "not this time." Make sure stakeholders understand the distinction.
  • These items form the starting backlog for future release planning.

    How to Run a MoSCoW Prioritization Session

    Before the Session

    1. Define the Scope

    Establish what you're prioritizing for: a specific release, a quarter, an MVP, or a sprint. The timeframe matters because it determines resource constraints, which drive the categorization.

    2. Prepare the Candidate List

    Gather all requirements, features, and user stories being considered. Write each one clearly enough that all participants understand it. Include effort estimates if available -- they'll be critical for validating that your Must-haves fit within capacity.

    3. Invite the Right People

    A MoSCoW session works best with 5-8 participants:

  • Product manager (facilitator)
  • Engineering lead (effort reality check)
  • Design lead (user experience perspective)
  • Key stakeholder or sponsor (business priorities)
  • QA lead (quality and risk perspective)
  • Optionally: customer success, sales, or marketing representative

    4. Set Ground Rules

    Establish these rules before you start:

  • Must-haves cannot exceed 60% of available capacity
  • Everyone gets equal voice regardless of seniority
  • "Must have" requires a concrete rationale, not just strong opinions
  • The facilitator has final call on categorization if consensus can't be reached

    During the Session

    Step 1: Review Capacity (10 minutes)

    Start by establishing how much total effort is available for the release. If you have a team of 5 engineers for a 6-week sprint, that's 30 person-weeks. This number is the constraint everything else is measured against.
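The capacity arithmetic is worth making explicit before the session starts. A minimal sketch using the example figures above (5 engineers, 6 weeks), together with the 60% Must-have ceiling described later in this guide:

```python
# Capacity check for a MoSCoW session (figures from the example above).
engineers = 5
weeks = 6

capacity = engineers * weeks       # total person-weeks available
must_have_budget = 0.6 * capacity  # 60% ceiling on Must-have effort

print(f"Capacity: {capacity} person-weeks")                  # → 30
print(f"Must-have budget: {must_have_budget:.0f} person-weeks")  # → 18
```

Writing the budget down as a single number gives the group a concrete constraint to test every "Must have" claim against.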

    Step 2: Walk Through Each Item (60-90 minutes)

    For each item on the list:

  • The product manager describes the item and its rationale
  • Open discussion: Which category does this belong in?
  • If there's disagreement, each person briefly states their case
  • The facilitator proposes a category; the group confirms or escalates
  • Record the category and the reasoning

    Step 3: Validate the Must-Haves (15 minutes)

    After categorizing everything, add up the estimated effort for all Must-haves. If it exceeds 60% of capacity, you have a problem. Go back and challenge each Must have:

  • Can this be descoped to reduce effort while keeping the Must-have core?
  • Is this really a Must have, or is it a strongly-felt Should have?
  • Can this be split into a Must-have MVP and a Should-have enhancement?

    Step 4: Review and Confirm (15 minutes)

    Read back the full list by category. Confirm everyone agrees with the final categorization. Document any dissenting views and the rationale for the final decision.

    After the Session

  • Share the categorized list with all stakeholders within 24 hours
  • Include the rationale for each Must-have item
  • Schedule a check-in midway through the release to reassess Should-haves and Could-haves based on progress
  • Track which Could-haves and Won't-haves get promoted in future cycles

    Real-World MoSCoW Example: Launching a Team Collaboration Feature

    Imagine you're a product manager at a project management tool (similar to Asana or Monday.com), and your team is planning a new "Team Workspaces" feature for Q2. You have 8 weeks and 4 engineers.

    | Requirement | Category | Rationale |
    |---|---|---|
    | Create and name a workspace | Must have | Core functionality -- without this the feature doesn't exist |
    | Invite team members by email | Must have | A workspace with no way to add people is useless |
    | Role-based permissions (admin, member) | Must have | Security requirement from enterprise customers |
    | Shared task board within workspace | Must have | Primary use case for workspaces |
    | Activity feed showing team actions | Should have | Important for awareness, but teams can use notifications |
    | Workspace-level file storage | Should have | Users expect it, but can use external tools temporarily |
    | Custom workspace themes/colors | Could have | Nice personalization, zero functional impact |
    | Workspace templates for quick setup | Could have | Helpful for adoption, but not critical for launch |
    | Cross-workspace search | Won't have | Complex technically, deferred to Q3 |
    | Workspace analytics dashboard | Won't have | Requires data infrastructure work, planned for Q4 |
    | Guest access for external collaborators | Won't have | Important, but scope is too large for this release |

    Effort check: The four Must-haves are estimated at 18 person-weeks. Available capacity is 32 person-weeks (4 engineers x 8 weeks). Must-haves represent 56% of capacity -- under the 60% threshold. The two Should-haves are estimated at 8 person-weeks, leaving 6 person-weeks for Could-haves and buffer.
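The effort check above is simple enough to script as part of your planning notes. A minimal sketch using the estimates from this example (the sums, not per-item breakdowns, since those aren't given here):

```python
# Effort validation for the Team Workspaces example.
capacity = 4 * 8   # 4 engineers x 8 weeks = 32 person-weeks

must_haves = 18    # person-weeks, sum of the four Must-have estimates
should_haves = 8   # person-weeks, sum of the two Should-have estimates

must_share = must_haves / capacity
print(f"Must-haves use {must_share:.0%} of capacity")  # → 56%
assert must_share <= 0.60, "Must-haves exceed the 60% threshold -- re-scope"

remaining = capacity - must_haves - should_haves
print(f"{remaining} person-weeks left for Could-haves and buffer")  # → 6
```

If the assertion fails, that's your signal to descope, downgrade, or split Must-have items as described in Step 3 above.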

    MoSCoW for Different Contexts

    MVP Planning

    MoSCoW is exceptionally effective for defining MVPs. When Airbnb was building their initial platform, the Must-haves were brutally simple: hosts can list a space, guests can browse listings, guests can book and pay. Everything else -- reviews, messaging, photography services -- was Should-have or later.

    For MVP planning, be aggressive about limiting Must-haves. If your MVP has more than 5-7 Must-have items, your scope is probably too large.

    Sprint Planning

    In sprint-level MoSCoW, the categories take on a more immediate flavor:

  • Must have: Committed sprint goals -- what the team has agreed to deliver
  • Should have: Stretch goals that are likely achievable
  • Could have: Items pulled in only if the sprint goes faster than expected
  • Won't have: Backlog items explicitly not in this sprint

    Quarterly/Annual Planning

    At a higher level, MoSCoW helps align leadership on strategic investments:

  • Must have: Commitments to major customers, compliance deadlines, critical infrastructure
  • Should have: Key product improvements that drive growth metrics
  • Could have: Experimental features, nice-to-have improvements
  • Won't have: Initiatives deferred to future quarters (and why)

    Common Mistakes and Pitfalls

    1. Everything Is a Must Have

    This is the most common MoSCoW failure. When stakeholders label everything as Must have, the framework loses all value. Combat this by enforcing the 60% rule: Must-haves cannot exceed 60% of available capacity. If they do, force a re-evaluation.

    2. Skipping the Won't-Have Category

    Teams often neglect Won't-haves because it feels negative. But this is the category that prevents scope creep and misaligned expectations. Always explicitly list what you're not doing and ensure stakeholders acknowledge it.

    3. No Effort Estimates

    Without effort estimates, you can't validate whether your Must-haves are achievable. A MoSCoW list where Must-haves consume 150% of capacity is not a plan -- it's a wish list.

    4. Confusing Must Have with "Stakeholder Wants It Badly"

    The loudest voice in the room doesn't determine what's a Must have. Apply the test: "Does the product literally not work or not launch without this?" If the answer is no, it's a Should have at most, regardless of who's asking.

    5. Not Revisiting Categories Mid-Cycle

    Priorities change. Halfway through a release, a Should have might become a Must have due to a competitor move, or a Must have might turn out to require double the estimated effort. Schedule mid-cycle MoSCoW check-ins to adapt.

    6. Using MoSCoW for Long Backlogs

    MoSCoW works best for 15-40 items tied to a specific release or time period. If you're trying to MoSCoW 200 backlog items, use a scoring framework like RICE first to narrow the list, then apply MoSCoW to the shortlist.

    MoSCoW vs. Other Prioritization Frameworks

    | Factor | MoSCoW | RICE | Weighted Scoring | Kano | Value vs. Effort |
    |---|---|---|---|---|---|
    | Output | Categories | Numerical score | Numerical score | Feature categories | 2x2 matrix |
    | Quantitative | No | Yes | Yes | Partially | Partially |
    | Stakeholder alignment | Excellent | Good | Moderate | Poor | Good |
    | Ease of use | Very easy | Medium | Medium | Hard | Very easy |
    | Best for | Release planning, MVPs | Large backlogs | Complex multi-criteria decisions | Understanding user delight | Quick triage |
    | Handles dependencies | Somewhat | No | No | No | No |
    | Team size needed | 5-8 people | 2-4 people | 3-5 people | Requires user surveys | 2-3 people |

    Best Practices for Effective MoSCoW

    Use MoSCoW with Time-Boxing

    MoSCoW works best when paired with a fixed timebox -- a sprint, a release, a quarter. The constraint forces real trade-offs. Without a deadline, everything eventually becomes a Must have "for someday."

    Make the Must-Have Test Rigorous

    Create a written definition of what qualifies as a Must have for your team. Post it in the room during MoSCoW sessions. Example criteria:

  • The product is unusable or unlaunchable without it
  • A contractual or legal obligation requires it
  • A critical revenue stream depends on it
  • A safety or security requirement mandates it

    Document Everything

    For each item, record:

  • The category assignment
  • Who advocated for which category
  • The rationale for the final decision
  • Any conditions that would change the category (e.g., "Should have, but becomes Must have if competitor X launches this feature")

    Involve Customers Indirectly

    Don't invite customers to your MoSCoW session, but do bring customer evidence. Support ticket volumes, NPS verbatims, churn reasons, and usage data should inform categorization. When someone pushes for Must have, ask: "What customer data supports this?"

    Re-Use Won't-Haves as Input for Next Cycle

    Start your next planning cycle by reviewing the Won't-have list from the previous cycle. Some items will have become more urgent; others will have become irrelevant. This creates a natural feedback loop that ensures good ideas don't get permanently lost.

    Combine with RICE for Large Backlogs

    For teams with large backlogs (50+ items), use RICE scoring first to rank everything by quantitative score. Then take the top 20-30 items and apply MoSCoW to determine what makes it into the next release. This two-step approach gives you both analytical rigor and stakeholder alignment.
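The two-step flow can be sketched as a ranking pass followed by a shortlist. The items and scores below are hypothetical, used only to illustrate the mechanics; the formula is the standard RICE score (Reach × Impact × Confidence / Effort):

```python
# Step 1: rank a large backlog by RICE score.
# Step 2: hand the top N items to a MoSCoW session.

def rice_score(reach, impact, confidence, effort):
    """Standard RICE formula: (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort

# Hypothetical backlog: (name, reach, impact, confidence, effort)
backlog = [
    ("Bulk editing", 4000, 2.0, 0.8, 6),
    ("Dark mode",    9000, 0.5, 0.9, 3),
    ("SSO support",  1200, 3.0, 1.0, 8),
    ("CSV export",   3000, 1.0, 0.8, 2),
]

ranked = sorted(backlog, key=lambda item: rice_score(*item[1:]), reverse=True)
shortlist = ranked[:3]  # take the top N into the MoSCoW session
for name, *scores in shortlist:
    print(name, round(rice_score(*scores), 1))
```

Note that the RICE pass only narrows the list; the Must/Should/Could/Won't decisions still happen collaboratively in the session, where capacity and stakeholder context come into play.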

    Getting Started with MoSCoW

  • Pick an upcoming release or sprint as your scope
  • List all candidate requirements (aim for 15-30 items)
  • Get rough effort estimates for each item from your engineering team
  • Calculate your total available capacity for the time period
  • Schedule a 2-hour session with your cross-functional team
  • Walk through each item using the Must/Should/Could/Won't framework
  • Validate that Must-haves are under 60% of capacity
  • Share the results with all stakeholders within 24 hours

    MoSCoW's greatest strength is its simplicity. It doesn't require complex formulas, specialized tools, or data science expertise. It requires only that your team has an honest conversation about what truly matters for the next release -- and what doesn't. That conversation, more than any framework, is what drives great product decisions.

    Frequently Asked Questions

    What does MoSCoW stand for?
    MoSCoW stands for Must have, Should have, Could have, and Won't have (this time). The capital letters spell MoSCoW, with the lowercase 'o's added for readability. It was created by Dai Clegg while working at Oracle.

    What is the difference between MoSCoW and RICE?
    MoSCoW is a qualitative, collaborative method for categorizing features into priority buckets during group sessions. RICE is a quantitative scoring formula that calculates individual priority scores. MoSCoW is better for stakeholder alignment; RICE is better for data-driven backlog ranking.

    How do you run a MoSCoW prioritization session?
    Gather stakeholders, list all features or requirements, then collaboratively assign each to Must, Should, Could, or Won't. Keep Must-have effort at no more than 60% of capacity, with Should-haves and Could-haves filling the remainder. The key is that 'Must' items are non-negotiable for the release -- if any are missing, the release fails.