Prioritization · Beginner · 8 min read

Buy a Feature: Customer Prioritization Game (2026)


Buy a Feature gives customers fake currency to spend on the features they want most. Surfaces real demand and forces tradeoffs no survey captures.

Best for: Customer advisory boards, user research sessions, B2B feature prioritization with paying customers.
Published 2026-05-12


TL;DR

Buy a Feature is a customer prioritization game where participants spend fake money on real product decisions. Each person gets a budget that covers only a fraction of the total feature cost. That constraint forces them to make genuine tradeoffs, negotiate with other participants, and reveal what they actually want rather than what sounds good in a survey. The game was created by Luke Hohmann and published in Innovation Games (2006). It remains one of the fastest ways to extract honest customer priorities in a group setting.


What Is Buy a Feature?

Buy a Feature is a structured workshop activity in which a group of customers or stakeholders receives a fixed amount of fake currency and uses it to "purchase" features from a pre-priced list. No single participant has enough budget to buy everything. That scarcity is the whole point.

Luke Hohmann developed the game at The Innovation Games Company and documented it in his 2006 book, Innovation Games: Creating Breakthrough Products Through Collaborative Play. The core insight is that people reveal true preferences when they must choose between options, not when they are asked to rank items on a survey where everything tends to score as "very important."

The game runs in one to two hours, accommodates groups of five to thirty participants, and produces a clearer demand signal than NPS comments or feature-request spreadsheets typically provide.


How the Game Works

The mechanics are deliberately simple so facilitators can run the session without extensive training.

Step 1: Define the feature list. Select eight to fifteen candidate features. Each should be concrete enough that participants can grasp its value in one or two sentences. Avoid vague items like "improved performance." Prefer "sub-100ms search response time across all catalogs."

Step 2: Price each feature. Assign a dollar value to each feature that reflects relative engineering effort or business cost. A minor UI improvement might cost $10. A full API integration might cost $80. The absolute numbers are not important. The relative ratios are. Mix cheap and expensive items so participants face real allocation decisions.

Step 3: Distribute budgets. Give each participant roughly 30 to 50 percent of the total feature cost. If all features sum to $200, each person receives $60 to $100. The exact percentage depends on your group size and how aggressively you want to force tradeoffs. Tighter budgets create more coalition-forming and negotiation.
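The pricing and budget math in Steps 2 and 3 can be sketched in a few lines of Python. The feature names, prices, and the 40 percent fraction below are illustrative assumptions, not part of the game's rules:

```python
def participant_budget(prices, fraction=0.4):
    """Give each participant a fixed fraction (30-50%) of the total feature cost."""
    total = sum(prices.values())
    return round(total * fraction)

# Illustrative feature list: relative ratios matter, absolute numbers do not.
prices = {
    "Minor UI improvement": 10,
    "Bulk export": 30,
    "Full API integration": 80,
}

total_cost = sum(prices.values())
budget = participant_budget(prices, fraction=0.4)

print(f"Total feature cost: ${total_cost}")      # $120
print(f"Per-participant budget (40%): ${budget}")  # $48
```

Lowering the fraction toward 30 percent tightens the constraint and pushes participants toward pooling and negotiation.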

Step 4: Open the market. Participants spend their money independently or by pooling funds with others to buy expensive features neither could afford alone. Pooling is encouraged. It surfaces which features generate enough shared enthusiasm to justify collective investment.

Step 5: Discuss as you go. The best facilitators ask participants to narrate their choices in real time. "Why are you buying Feature C over Feature A?" generates qualitative data that makes the final tally meaningful.

Step 6: Tally and debrief. After spending stops, count the money each feature attracted. Fully funded features earned broad customer support. Features that attracted partial funding sparked interest but faced budget competition. Unfunded features failed to excite anyone enough to spend on them. The debrief conversation after the tally is where the most valuable insight lives.
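The tally in Step 6 reduces to a simple classification of each feature by how much money it attracted relative to its price. A minimal sketch, with illustrative prices and spending totals:

```python
def tally(prices, spending):
    """Classify each feature as fully funded, partially funded, or unfunded."""
    results = {}
    for feature, price in prices.items():
        collected = spending.get(feature, 0)
        if collected >= price:
            status = "fully funded"
        elif collected > 0:
            status = "partially funded"
        else:
            status = "unfunded"
        results[feature] = (collected, status)
    return results

# Hypothetical session data for illustration.
prices = {"Slack integration": 20, "SSO": 60, "Offline mode": 80}
spending = {"Slack integration": 60, "SSO": 45}

for feature, (collected, status) in tally(prices, spending).items():
    print(f"{feature}: ${collected} of ${prices[feature]} -> {status}")
```

Note that a feature can collect more than its price when several participants buy it independently; that overfunding is itself a useful enthusiasm signal to raise in the debrief.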


Why It Works

Three mechanisms make this game more revealing than standard research methods.

Forced tradeoffs. Surveys let respondents say "yes" to everything. A constrained budget eliminates that option. Participants must make choices that reflect actual priorities, just as their companies do when approving roadmaps under real-world constraints.

Social negotiation. When two participants both want an expensive feature but neither can fund it alone, they must convince each other to pool money. That negotiation exposes the strength of conviction behind each preference. Someone who makes a persuasive case for pooling reveals far more about feature value than a checkbox on a form.

Willingness-to-pay signal. The pricing mechanism creates a rough analog to economic demand. Customers who consistently fund a $60 feature over several cheaper ones are telling you that specific capability is worth significant investment. That signal is difficult to capture any other way in qualitative research.


Worked Example

A B2B SaaS company runs Buy a Feature with 12 customers from its advisory board. The product team has shortlisted eight features for an upcoming planning cycle.

| Feature | Price | Total Collected | Status |
| --- | --- | --- | --- |
| Bulk CSV export | $10 | $30 | Fully funded |
| Slack integration | $20 | $60 | Fully funded |
| Custom role permissions | $30 | $20 | Partially funded |
| AI-generated summaries | $40 | $90 | Fully funded |
| White-label reporting | $50 | $30 | Partially funded |
| SSO / SAML support | $60 | $180 | Fully funded, coalition |
| Mobile app (iOS + Android) | $70 | $20 | Barely funded |
| Offline mode | $80 | $0 | Unfunded |

Each of the 12 participants received $40, giving the group a combined $480 against a total feature cost of $360. Although the pooled budget nominally exceeded the total cost, spending concentrated on the most popular features: every dollar that overfunded SSO or the AI summaries was a dollar unavailable to lift other items to their price, which forced genuine choices.

SSO attracted a coalition of eight participants who pooled money after three enterprise customers lobbied for it in the room. AI-generated summaries sold out in the first five minutes as customers bought it before anyone else could. Offline mode received no bids, and when the facilitator asked why, participants said they work exclusively in corporate environments with reliable Wi-Fi. That negative signal eliminated a feature that had internal advocates on the engineering team.

The PM left with four fully funded features, two to investigate further (the "nearly funded" items often deserve a follow-up conversation about what would push them to full commitment), and two dropped with customer-backed rationale.


When to Use Buy a Feature

This format works best in specific contexts.

Customer advisory boards. If you have a group of paying customers in a room for a day, Buy a Feature fits naturally into the agenda. These participants have skin in the game and bring operational context that makes their choices meaningful.

B2B product discovery. Enterprise product teams often face long sales cycles where features drive purchasing decisions. Running the game with prospects or existing customers helps distinguish must-have capabilities from nice-to-haves during a renewal or expansion conversation.

Internal stakeholder workshops. The format also runs with internal stakeholders when you need to align executives, sales, support, and engineering on a shared roadmap. Replace "fake money" with a fixed number of votes or points. The coalition dynamics remain intact.

Narrowing a large backlog. When you have twenty-plus candidate features and need to cut the list before doing deeper sizing or RICE scoring, Buy a Feature quickly separates the top tier from the rest.


When NOT to Use It

Buy a Feature is a blunt instrument. Do not reach for it in these situations.

Early-stage products with no users. The game requires participants who understand the problem domain well enough to assign value to candidate solutions. If you are still discovering the problem, run customer interviews first.

Abstract or technical features. "Upgrade our infrastructure to Kubernetes" means nothing to most customers. Features must be expressible in terms of user outcomes. If you cannot explain a feature in one sentence that a customer cares about, price it out of the game.

Politically charged internal sessions. When senior executives dominate a room, junior participants anchor their spending on what the HiPPO (the highest-paid person in the room) buys. The game loses validity. Either run separate sessions by seniority level or use anonymous digital tools where purchases are hidden until the round closes.


Common Pitfalls

Too-generous budgets. If participants can afford most of the feature list, you have eliminated the core constraint. Run the math before the session. Each participant's budget should cover no more than half of the total feature cost; 30 to 50 percent is the usual range.

Features that are too abstract. Review every item on your feature list and ask: "Can a customer immediately connect this to something they would do differently tomorrow?" If not, rewrite or remove it.

Non-representative participants. Twelve customers from your largest enterprise account will give you enterprise priorities. If your growth segment is mid-market, their spending patterns may mislead your roadmap. Match participant profile to target segment.

No follow-through after the game. Customers who invest time in a prioritization workshop expect to see their input reflected in your roadmap. If you collect the data and then build based on internal intuition anyway, you will erode trust with your most engaged users. Close the loop with a brief summary of what you heard and how it influenced your decisions.


Buy a Feature vs. Alternatives

| Method | What It Measures | What It Misses |
| --- | --- | --- |
| MoSCoW | Stakeholder consensus on must-have vs. nice-to-have | No budget constraint, no negotiation signal |
| RICE Scoring | Quantitative impact-to-effort ratio | No direct customer voice |
| Kano Model | Feature satisfaction curves (delighters vs. basics) | No prioritization among features in the same category |
| Buy a Feature | Relative customer demand with economic signal | No effort weighting, no business model inputs |

MoSCoW sessions work well for internal stakeholder alignment, but they do not force anyone to make real tradeoffs. Every participant can put everything in "Must Have" without penalty. Buy a Feature eliminates that behavior by design. Use the MoSCoW Tool when you need fast internal alignment. Use Buy a Feature when you need honest customer signal.

RICE is better suited to scoring a curated list of already-vetted ideas against each other with quantitative inputs. Buy a Feature is better suited to the earlier question of which ideas deserve to be on that list at all.

The Kano Model answers a different question entirely. It maps features to satisfaction curves, identifying which items are baseline expectations versus unexpected delighters. Run Kano first if you want to understand the nature of satisfaction. Run Buy a Feature if you want to understand relative demand within a set of potential investments. The two methods complement each other well in a multi-session research program.


Tools That Help

Running Buy a Feature in person is straightforward: print a feature card deck, hand out envelopes of play money, and run the session around a table. Remote sessions require a digital surface.

The Feature Prioritization Matrix helps after the game when you need to layer effort and confidence scores on top of the raw customer demand signal. Pair it with Buy a Feature results to build a weighted view that incorporates both customer pull and internal feasibility.

For MoSCoW-style categorization of your funded and unfunded features, the MoSCoW Tool provides an interactive way to move items across buckets with your team after the customer session wraps.

Miro and Mural both offer templates for remote Buy a Feature sessions using sticky notes as currency. Slido's Q&A point-allocation feature works for larger groups where individual negotiation is impractical.


Running Your First Session

The fastest way to learn the format is to run it small. Take a real backlog of six to eight items, invite three to five customers to a 45-minute video call, and use a shared Miro board where each person has a fixed pool of colored sticky notes representing money. The session will be imperfect. That is fine. You will leave with clearer customer signal than you had before, and you will know exactly what to tighten for the next round.

Frequently Asked Questions

Who created Buy a Feature?
Luke Hohmann at The Innovation Games Company. Published in Innovation Games: Creating Breakthrough Products Through Collaborative Play (2006).
How much fake money do I give each participant?
Distribute roughly half the total feature cost. If features sum to $100, give each person $30-50. This forces trade-offs and coalition forming.
How do I price features?
Price each feature by relative engineering effort or business-value-adjusted cost. Higher-effort features cost more. Mix small low-cost items with big expensive ones.
Should I run it in person or remote?
Originally in person with physical play money. Remote sessions work via Miro, Mural, or Slido boards with point allocations. In-person sessions create richer negotiation.
When does Buy a Feature fail?
When participants do not represent real users, when features are too abstract to value, or when the budget is too generous (no tradeoffs forced).
