What This Template Is For
Benchmarking tells you where your product stands relative to the competition. Not in the abstract sense of "we are better" or "they are ahead," but in measurable, dimension-by-dimension comparisons that reveal specific strengths to protect and specific gaps to close.
This template provides a structured format for running a benchmarking study across four dimensions: feature coverage, user experience quality, performance, and pricing/value. Each dimension has a scoring rubric, a data collection method, and a synthesis format. The output is a benchmarking scorecard you can share with your team and stakeholders to inform roadmap decisions.
This is not the same as a competitive landscape overview (which focuses on positioning and market strategy). This template is about hands-on evaluation: you will sign up for competitors' products, use them, measure their performance, and score them on consistent criteria. For a broader view of how competitors position themselves and where market gaps exist, see the competitive landscape overview template. For turning your findings into prioritized roadmap items, the RICE framework helps you score the gaps by impact and effort.
When to Use This Template
- Before a major redesign. Understand what the current competitive bar looks like before you set your design targets.
- During annual or quarterly planning. Benchmarking data helps your team decide where to invest. "We are 2 points behind on onboarding and 1 point ahead on reporting" is more useful than "let's improve the product."
- When entering a new market or category. If you are expanding into a space with established players, benchmarking shows you the minimum viable quality bar.
- After losing deals to a specific competitor. Instead of guessing why, benchmark the two products side by side and identify the concrete differences.
- When building a case for investment. Benchmarking data gives leadership a clear picture of competitive gaps that justifies engineering and design spend.
How to Use This Template
- Select competitors. Choose 3-5 direct competitors. Include at least one market leader and one emerging challenger.
- Define dimensions and criteria. Use the default dimensions below or customize them for your market. Each dimension should have 4-6 specific criteria.
- Collect data. Sign up for each product. Complete key workflows. Measure load times. Review pricing pages. Take screenshots.
- Score. Rate each product on each criterion using the 1-5 scale. Be consistent. Have a second evaluator score independently to reduce bias.
- Synthesize. Calculate dimension scores and overall scores (see the sketch after this list). Identify your biggest gaps and biggest leads. Translate gaps into roadmap recommendations.
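If you keep the scores in a spreadsheet export, the synthesis arithmetic is easy to script. A minimal sketch in Python; the products, criteria, scores, and weights below are illustrative placeholders, not part of the template:

```python
# Minimal synthesis sketch: dimension averages and a weighted overall score.
# All products, criteria, scores, and weights here are illustrative.

scores = {
    # product: {dimension: [per-criterion scores on the 1-5 scale]}
    "Your Product": {
        "Feature Coverage": [3, 4, 3, 2, 4, 3],
        "UX Quality":       [2, 3, 3, 3, 3, 3],
    },
    "Competitor A": {
        "Feature Coverage": [5, 4, 5, 4, 4, 5],
        "UX Quality":       [4, 4, 4, 4, 4, 4],
    },
}

# Dimension weights must sum to 1.0 (i.e., 100%).
weights = {"Feature Coverage": 0.5, "UX Quality": 0.5}

for product, dims in scores.items():
    # Dimension score = mean of that dimension's criterion scores.
    dim_avgs = {dim: sum(vals) / len(vals) for dim, vals in dims.items()}
    # Overall score = weighted sum of the dimension averages.
    total = sum(weights[dim] * avg for dim, avg in dim_avgs.items())
    for dim, avg in dim_avgs.items():
        print(f"{product} | {dim}: {avg:.2f}")
    print(f"{product} | Weighted Total: {total:.2f}")
```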
The Template
Study Setup
| Field | Details |
|---|---|
| Your Product | [Product name and version evaluated] |
| Competitors Evaluated | [List 3-5 competitor names] |
| Date of Evaluation | [YYYY-MM-DD] |
| Evaluator(s) | [Names and roles] |
| Key Workflows Tested | [List 3-5 core user workflows, e.g., "Onboarding," "Create project," "Invite team member"] |
| Target User Persona | [Who would be using these products?] |
Dimension 1: Feature Coverage
Rate how well each product supports the key capabilities your target users need.
Scoring scale:
- 5 = Full support with advanced options
- 4 = Full support with basic options
- 3 = Partial support or requires workarounds
- 2 = Minimal support or via third-party integration only
- 1 = Not supported
| Criterion | Your Product | Competitor A | Competitor B | Competitor C |
|---|---|---|---|---|
| [Core workflow 1] | /5 | /5 | /5 | /5 |
| [Core workflow 2] | /5 | /5 | /5 | /5 |
| [Core workflow 3] | /5 | /5 | /5 | /5 |
| [Integration ecosystem] | /5 | /5 | /5 | /5 |
| [Customization/config] | /5 | /5 | /5 | /5 |
| [Collaboration features] | /5 | /5 | /5 | /5 |
| Dimension Average | /5 | /5 | /5 | /5 |
Dimension 2: User Experience Quality
Evaluate the end-to-end experience of each product across key UX criteria.
Scoring scale:
- 5 = Exceptional. Intuitive, delightful, no friction.
- 4 = Good. Minor rough edges but effective.
- 3 = Adequate. Gets the job done but requires learning.
- 2 = Below average. Confusing or frustrating in places.
- 1 = Poor. Significant usability problems.
| Criterion | Your Product | Competitor A | Competitor B | Competitor C |
|---|---|---|---|---|
| Onboarding / first-run experience | /5 | /5 | /5 | /5 |
| Navigation and information architecture | /5 | /5 | /5 | /5 |
| Task completion efficiency | /5 | /5 | /5 | /5 |
| Visual design and consistency | /5 | /5 | /5 | /5 |
| Error handling and recovery | /5 | /5 | /5 | /5 |
| Mobile / responsive experience | /5 | /5 | /5 | /5 |
| Dimension Average | /5 | /5 | /5 | /5 |
Dimension 3: Performance
Measure objective performance metrics for each product.
| Metric | Your Product | Competitor A | Competitor B | Competitor C |
|---|---|---|---|---|
| Initial page load (seconds) | | | | |
| Time to interactive (seconds) | | | | |
| Key workflow completion time (seconds) | | | | |
| API response time (P50, ms) | | | | |
| Uptime (last 90 days, %) | | | | |
| Mobile Lighthouse score | | | | |
Performance score (convert the raw metrics to a 1-5 rating; a ranking sketch follows the score table below):
- 5 = Top quartile on all metrics
- 4 = Above average on most metrics
- 3 = Average across metrics
- 2 = Below average on most metrics
- 1 = Significantly slower than competitors
| | Your Product | Competitor A | Competitor B | Competitor C |
|---|---|---|---|---|
| Performance Score | /5 | /5 | /5 | /5 |
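The "top quartile" rubric leaves room for judgment. One reproducible interpretation is to rank the products on each metric and map the average rank onto the 1-5 scale. A rough Python sketch; the metric names and values are illustrative (the page-load figures echo the filled example later in this template), and rank-based mapping is an assumption, not the only valid conversion:

```python
# Rough sketch: turn raw performance metrics into a 1-5 score by ranking
# products per metric. One interpretation of the quartile rubric above;
# all metric values are illustrative.

metrics = {
    # metric name: (lower_is_better, {product: measured value})
    "page_load_s": (True,  {"Your Product": 3.5, "A": 1.2, "B": 2.8, "C": 4.0}),
    "uptime_pct":  (False, {"Your Product": 99.9, "A": 99.95, "B": 99.5, "C": 99.8}),
}

products = ["Your Product", "A", "B", "C"]
rank_sums = {p: 0.0 for p in products}

for lower_is_better, values in metrics.values():
    # Rank products on this metric: rank 0 = best, rank n-1 = worst.
    ordered = sorted(products, key=lambda p: values[p], reverse=not lower_is_better)
    for rank, product in enumerate(ordered):
        rank_sums[product] += rank

for product in products:
    avg_rank = rank_sums[product] / len(metrics)    # 0 (best) .. n-1 (worst)
    score = 5 - 4 * avg_rank / (len(products) - 1)  # linear map onto 5..1
    print(f"{product}: performance score {score:.1f}")
```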
Dimension 4: Pricing and Value
Compare pricing models and perceived value.
| Criterion | Your Product | Competitor A | Competitor B | Competitor C |
|---|---|---|---|---|
| Free tier available? | Yes/No | Yes/No | Yes/No | Yes/No |
| Entry-level price (monthly) | $ | $ | $ | $ |
| Mid-tier price (monthly) | $ | $ | $ | $ |
| Enterprise pricing model | | | | |
| Price per seat (mid-tier) | $ | $ | $ | $ |
| Feature parity at entry price | /5 | /5 | /5 | /5 |
| Perceived value for money | /5 | /5 | /5 | /5 |
| Pricing Score | /5 | /5 | /5 | /5 |
Benchmarking Scorecard (Summary)
| Dimension | Weight | Your Product | Competitor A | Competitor B | Competitor C |
|---|---|---|---|---|---|
| Feature Coverage | [%] | /5 | /5 | /5 | /5 |
| UX Quality | [%] | /5 | /5 | /5 | /5 |
| Performance | [%] | /5 | /5 | /5 | /5 |
| Pricing & Value | [%] | /5 | /5 | /5 | /5 |
| Weighted Total | 100% | /5 | /5 | /5 | /5 |
Gap Analysis
| Gap | Your Score | Best Competitor Score | Delta | Impact | Effort to Close | Priority |
|---|---|---|---|---|---|---|
| [e.g., "Onboarding experience"] | 2.5 | 4.5 | -2.0 | High | Medium | P1 |
| [e.g., "Mobile responsive"] | 3.0 | 4.0 | -1.0 | Medium | High | P2 |
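If the criterion scores are already structured, the first four columns of this table can be generated rather than hand-built; impact, effort, and priority still need human judgment. A minimal Python sketch using the example rows above:

```python
# Minimal gap-analysis sketch: delta against the best competitor per criterion.
# Scores mirror the example rows above; criterion names are placeholders.

criterion_scores = {
    # criterion: (your score, {competitor: score})
    "Onboarding experience": (2.5, {"A": 4.5, "B": 4.0, "C": 3.5}),
    "Mobile responsive":     (3.0, {"A": 3.5, "B": 4.0, "C": 3.0}),
}

gaps = []
for criterion, (yours, rivals) in criterion_scores.items():
    best = max(rivals.values())
    gaps.append((criterion, yours, best, yours - best))

# Most negative deltas first: these are the priority candidates.
for criterion, yours, best, delta in sorted(gaps, key=lambda g: g[3]):
    print(f"{criterion}: you {yours}, best competitor {best}, delta {delta:+.1f}")
```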
Recommendations
| # | Recommendation | Gap Addressed | Expected Impact | Timeline |
|---|---|---|---|---|
| 1 | ||||
| 2 | ||||
| 3 |
Filled Example: Project Management SaaS Benchmarking
Context. A mid-stage project management tool (400 customers, Series A) is benchmarking against Asana, Linear, and Monday.com before a major product redesign in Q3.
Scorecard (Example)
| Dimension | Weight | OurPM | Asana | Linear | Monday |
|---|---|---|---|---|---|
| Feature Coverage | 30% | 3.2 | 4.5 | 3.8 | 4.2 |
| UX Quality | 35% | 2.8 | 4.0 | 4.7 | 3.5 |
| Performance | 15% | 3.5 | 3.8 | 4.8 | 3.2 |
| Pricing & Value | 20% | 4.0 | 3.0 | 3.5 | 3.2 |
| Weighted Total | 100% | 3.27 | 3.92 | 4.21 | 3.61 |
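Each weighted total is the weight-adjusted sum of the dimension scores above it; OurPM's, for example, is 0.30 × 3.2 + 0.35 × 2.8 + 0.15 × 3.5 + 0.20 × 4.0 ≈ 3.27.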
Key Findings (Example)
Biggest gaps:
- UX Quality (-1.9 vs Linear). Linear's keyboard-first design and minimal UI set a high bar. OurPM's onboarding scored 2/5 versus Linear's 5/5. Users described Linear as "immediately productive" and OurPM as "takes a week to figure out."
- Feature Coverage (-1.3 vs Asana). Asana's workflow automation and custom fields create a gap in enterprise use cases. OurPM lacks rule-based automations entirely.
Biggest leads:
- Pricing & Value (+1.0 vs Asana). OurPM's per-seat price is 40% lower than Asana at similar feature levels. This is a defensible advantage for cost-sensitive mid-market buyers.
Gap-to-Roadmap Translation (Example)
| Gap | Roadmap Item | Effort | Q3 Target |
|---|---|---|---|
| Onboarding (2.0 vs 5.0) | Redesign first-run flow with interactive tutorial | 3 sprints | Score 4.0+ |
| Automations (1.0 vs 4.5) | Ship rule-based workflow automations v1 | 4 sprints | Score 3.0+ |
| Page load (3.5s vs 1.2s) | Performance sprint: lazy loading, bundle optimization | 2 sprints | < 2.0s |
Key Takeaways
- Benchmark against 3-5 competitors, not the entire market. Include the market leader (your aspirational target), your closest rival (deals you lose to), and an emerging player (the future threat).
- Score independently before discussing. Have two evaluators score blind, then average the scores. This reduces the bias of anchoring to the first number stated.
- Weight dimensions by what your users care about, not what your team cares about. If your users consistently mention UX quality in customer interviews, weight that dimension higher.
- Translate gaps into roadmap items with specific targets. "Improve onboarding" is vague. "Bring onboarding score from 2.0 to 4.0 by Q3" is actionable. Use a prioritization framework to stack-rank the gaps.
- Re-run the benchmarking study every 6-12 months. Competitors ship features. Your improvements change the landscape. A benchmarking study is a snapshot, not a permanent record. Track trends in your research repository.
- Screenshot everything during evaluation. Visual evidence of competitor strengths and your own weaknesses makes the case for investment more convincing than numbers alone.
About This Template
Created by: Tim Adair
Last Updated: 3/5/2026
Version: 1.0.0
License: Free for personal and commercial use
