A painted door test places a button, link, or menu item for a feature that does not exist yet. When users click it, you measure demand. If enough users click, you have validated interest before writing a single line of production code. If nobody clicks, you just saved months of engineering effort.
How It Works
Add the feature entry point to your product UI exactly as it would appear if the feature existed. A navigation item, a CTA button, a card on a dashboard. When users click, show a message: "This feature is coming soon. Sign up to be notified when it launches." Capture their email and track the click-through rate.
The key metric is click-through rate relative to exposure. If 1,000 users see the button and 80 click it (8%), that is strong demand. If 5 click it (0.5%), the market has spoken.
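The arithmetic is simple; a quick sketch using the numbers above:

```python
def click_through_rate(clicks: int, exposures: int) -> float:
    """Fraction of exposed users who clicked the painted door."""
    if exposures <= 0:
        raise ValueError("need at least one exposure")
    return clicks / exposures

print(f"{click_through_rate(80, 1000):.1%}")  # 8.0% -> strong demand
print(f"{click_through_rate(5, 1000):.1%}")   # 0.5% -> the market has spoken
```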
When to Use Painted Door Tests
Before building expensive features. Any feature that takes more than 2-3 engineering sprints should be validated first. A painted door test takes a designer half a day and an engineer a few hours.
When debating between options. If your team cannot agree on which of three features to build next, run painted door tests for all three simultaneously. Let user behavior settle the debate. Feed the results into your RICE scoring as the reach and confidence inputs.
When stakeholders push pet features. A painted door test turns opinion-based arguments into data. "Let us test demand before we commit two sprints" is hard to argue against.
The assumption mapper helps identify which product assumptions need validation and which testing method fits best.
Setting Up the Test
Step 1: Design the entry point. It should look real. Users must believe the feature exists or is about to launch. Half-baked mockups skew results.
Step 2: Define your success threshold before launching. "If 5% of exposed users click within two weeks, we build it." Set this number before seeing results to prevent post-hoc rationalization.
Step 3: Run the test for 1-2 weeks, extending it if needed to reach at least 500 exposures. Below 500 exposures, the data is too noisy to trust.
Step 4: Measure and decide. If above threshold, move to prototyping. If below, kill the idea or reframe the value proposition and test again.
Ethical Considerations
Be transparent. When users click, tell them the feature is coming soon. Do not make them feel tricked. Some teams add "Help us prioritize: tell us why you are interested" as a follow-up question. This adds qualitative context to the quantitative click data.
Never run painted door tests on features where failure to deliver could damage trust. Testing a "Delete my account" button that does not work is destructive. Testing a "New analytics dashboard" that is not built yet is fine.
Use the OST Builder to map which opportunities deserve painted door tests versus other validation methods. The prioritization guide covers how to integrate test results into your backlog.