TL;DR
Product metrics tell you whether your product is delivering value and whether that value is growing. This guide covers the metric hierarchy every PM needs, the 14 metrics that matter most, three proven frameworks for organizing them, the tools that do the math for you, and a step-by-step process for building a metrics practice that drives decisions rather than just reporting.
The short version: one North Star, three to five input metrics, weekly review, monthly cohort analysis. Everything else is context.
What Are Product Metrics?
Product metrics are quantitative signals that tell you whether users are getting value from your product and whether the business is growing as a result. They span acquisition (are people finding you?), activation (are they getting value quickly?), retention (are they coming back?), revenue (are they paying and expanding?), and referral (are they telling others?).
The distinction between a metric and a KPI matters here. A metric is any number you can measure. A KPI (Key Performance Indicator) is a metric you have decided is important enough to track and act on. Every KPI is a metric. Most metrics are not KPIs. One of the most common mistakes product teams make is treating every metric as a KPI and building dashboards with 40 charts that nobody looks at.
The glossary entry on leading vs lagging metrics captures the core tension well: leading metrics predict future outcomes, lagging metrics confirm past ones. Build your KPI set around both.
Why Metrics Matter
Teams that operate without clear metrics make slower decisions. They run experiments without knowing what they are trying to move. They ship features without measuring whether those features worked. They present roadmaps to stakeholders without evidence that the last roadmap delivered results.
The data on this is consistent across product organizations: teams with defined North Star metrics and weekly metric reviews ship faster, align more easily across functions, and catch retention problems earlier. The complete guide to product metrics covers this in depth, including how to structure metric reviews and set targets that actually stick.
None of this requires a data science team. The calculators linked throughout this guide handle the math. What it requires is discipline: pick the metrics, instrument them, review them on a cadence, and act on what they tell you.
The Metric Hierarchy
Every product metrics system needs three layers.
Layer 1: The North Star Metric. One number that captures the core value your product delivers to users. Not revenue (that's a lagging outcome). Not signups (that's a vanity metric). The North Star is the action users take when they get real value. For Spotify it's time spent listening. For Airbnb it's nights booked. For Slack it's messages sent.
Use the North Star Finder tool to stress-test your candidates. A good North Star satisfies three criteria: it reflects user value (not just activity), it leads revenue by weeks or months, and the whole team can influence it.
Layer 2: Input Metrics (3-5). These are the levers that move your North Star. Activation rate, feature adoption, invite rate, session frequency. Each input metric should have a clear owner and a clear experiment backlog. When the North Star dips, you look to the input metrics to diagnose why.
Layer 3: Health Metrics (5-10). These don't need to go up. They need to stay within acceptable ranges. Page load time, error rate, support ticket volume, revenue churn. Think of them as guardrails: you don't optimize for them, but breaching them means something is wrong. The metric tree concept is a useful mental model for mapping how input metrics connect to the North Star and how health metrics sit alongside them.
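The guardrail idea can be sketched in a few lines of Python. The metric names and ranges below are hypothetical examples, not recommended thresholds:

```python
# Health metrics as guardrails: each has an acceptable (min, max) range.
# Metric names and thresholds here are illustrative, not benchmarks.
HEALTH_GUARDRAILS = {
    "p95_page_load_ms": (0, 2000),
    "error_rate_pct": (0.0, 1.0),
    "weekly_support_tickets": (0, 150),
    "monthly_revenue_churn_pct": (0.0, 2.0),
}

def breached_guardrails(observed):
    """Return the names of health metrics outside their acceptable range."""
    return [
        name for name, value in observed.items()
        if not (HEALTH_GUARDRAILS[name][0] <= value <= HEALTH_GUARDRAILS[name][1])
    ]
```

Alert-based monitoring then reduces to running this check on each refresh and paging the metric's owner only when the list is non-empty.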
The Core Metrics
North Star Metric
The North Star Metric is covered in depth in the North Star metric guide and the metric definition page. The key point for this guide: choose it carefully, change it rarely, and make sure everyone on the team can name it and explain why it matters.
Customer Acquisition Cost (CAC)
CAC is total sales and marketing spend divided by new customers acquired in a period. It tells you how expensive growth is. A rising CAC with flat or declining revenue per customer is an early warning sign. CAC alone is not the problem; CAC relative to lifetime value is what matters. Benchmark: most healthy SaaS businesses target a CAC payback period under 12 months. The CAC payback period metric covers the calculation in detail.
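In code, CAC and its payback period reduce to two divisions. The figures below are made-up examples:

```python
def cac(sales_marketing_spend, new_customers):
    """Customer Acquisition Cost: total S&M spend / customers acquired in the period."""
    return sales_marketing_spend / new_customers

def cac_payback_months(cac_value, monthly_margin_per_customer):
    """Months of gross margin needed to earn back the cost of one customer."""
    return cac_value / monthly_margin_per_customer

# Hypothetical quarter: $120,000 spend, 100 new customers, $50/month gross margin each.
acquisition_cost = cac(120_000, 100)                # 1200.0
payback = cac_payback_months(acquisition_cost, 50)  # 24.0 months, above the 12-month target
```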
Lifetime Value (LTV)
LTV is the total revenue you expect from an average customer over their entire relationship with your product. The simplest formula is average revenue per user per period divided by the churn rate for the same period. LTV is sensitive to your churn assumptions, so model conservatively. Use the LTV calculator to run scenarios.
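A minimal sketch of the formula, with hypothetical inputs, makes the churn sensitivity obvious:

```python
def ltv(monthly_arpu, monthly_churn_rate):
    """Simple LTV: ARPU per month / monthly churn rate (1/churn = avg lifetime in months)."""
    return monthly_arpu / monthly_churn_rate

ltv(40, 0.02)  # 2000.0 -- 2% churn implies a 50-month average lifetime
ltv(40, 0.04)  # 1000.0 -- doubling churn halves LTV
```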
LTV:CAC Ratio
The LTV:CAC ratio is the single most important unit economics metric for SaaS products. A ratio below 1.0 means you lose money on every customer. A ratio of 3:1 is considered healthy for growth-stage SaaS. Above 5:1 suggests you are underinvesting in growth. The LTV-CAC Calculator makes the math easy.
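The zones described above can be encoded directly; the thresholds follow the rules of thumb in this section:

```python
def ltv_cac_ratio(ltv_value, cac_value):
    return ltv_value / cac_value

def interpret_ratio(ratio):
    """Map an LTV:CAC ratio to the rule-of-thumb zones above."""
    if ratio < 1.0:
        return "losing money on every customer"
    if ratio < 3.0:
        return "below the 3:1 healthy benchmark"
    if ratio <= 5.0:
        return "healthy"
    return "possibly underinvesting in growth"

interpret_ratio(ltv_cac_ratio(2000, 500))  # "healthy" (ratio of 4.0)
```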
Activation Rate
Activation rate measures the percentage of new users who reach your defined activation event. Activation is the moment a user first experiences the core value of your product. It is not the same as signup. Signing up is the beginning of a funnel. Activation is the first meaningful outcome.
Benchmark: consumer apps target 25-40% activation in the first session. B2B SaaS targets 40-60% activation within the first week. Teams working on activation improvement should pair this metric with time to first key action and onboarding completion rate.
D1, D7, D30 Retention
Retention is the single best leading indicator of product-market fit. If users come back, the product is delivering value. If they don't, no amount of acquisition spend will fix the business.
The three benchmarks that matter:
- Day 1 retention: Did users return after their first session? Consumer apps average 25-40%. Below 15% means a serious onboarding problem.
- Day 7 retention: Did users return in the first week? This filters out curious one-time visitors. Healthy consumer apps see 15-25% D7 retention.
- Day 30 retention: Did users form a habit? D30 above 20% for consumer apps and above 40% for SaaS is the zone where growth becomes efficient.
Analyze retention by cohort using cohort retention curves to distinguish genuine improvement from mix effects. New cohorts should retain better than older ones if your product is improving.
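A minimal sketch of the day-N calculation from raw signup and activity dates (toy data, not a production query; real pipelines often count any activity in a window around day N rather than the exact day):

```python
from datetime import date, timedelta

def day_n_retention(signups, activity, n):
    """Share of users active exactly n days after their signup date.

    signups: {user_id: signup_date}; activity: {user_id: set of active dates}.
    """
    returned = sum(
        1 for user, signed_up in signups.items()
        if signed_up + timedelta(days=n) in activity.get(user, set())
    )
    return returned / len(signups)

signups = {"u1": date(2024, 1, 1), "u2": date(2024, 1, 1)}
activity = {"u1": {date(2024, 1, 2), date(2024, 1, 8)}, "u2": set()}
day_n_retention(signups, activity, 1)  # 0.5 -- only u1 came back on day 1
```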
Net Promoter Score (NPS)
NPS measures customer loyalty through a single question: "How likely are you to recommend this product to a colleague or friend?" Scores range from -100 to +100. B2B SaaS median NPS is around 30-40. Above 50 is excellent. Below 0 signals serious satisfaction problems.
NPS is a lagging, low-frequency signal. It confirms direction but does not tell you what to fix. Pair it with qualitative follow-ups from detractors. The NPS calculator handles the promoter-minus-detractor arithmetic. The glossary entry on NPS explains the calculation and benchmarks in full.
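The arithmetic itself is simple: the percentage of promoters (scores 9-10) minus the percentage of detractors (0-6). A sketch with made-up survey responses:

```python
def nps(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6), on a -100..100 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

nps([10, 9, 9, 8, 7, 6, 4, 2])  # 3 promoters, 3 detractors of 8 -> 0
```

Note that passives (7-8) lower the score only by growing the denominator, which is why a pile of lukewarm responses still drags NPS toward zero.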
Monthly Active Users (MAU) and Daily Active Users (DAU)
MAU and DAU measure the breadth of engagement. MAU counts users active at least once in 30 days. DAU counts users active at least once in 24 hours. Neither number means much alone.
The ratio that matters is the DAU/MAU ratio (stickiness). For apps where daily use is natural (messaging, social, productivity), target above 20%. World-class daily apps (WhatsApp, TikTok) exceed 60%. B2B tools with weekly-use patterns should use WAU/MAU instead. The WAU/MAU ratio captures that signal, with the Weekly Active Users metric as the numerator.
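The ratio itself is one division; the counts below are hypothetical:

```python
def stickiness(dau, mau):
    """DAU/MAU: the share of monthly users who show up on an average day."""
    return dau / mau

stickiness(12_000, 50_000)  # 0.24 -- clears the 20% bar for daily-use apps
```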
Churn Rate
Customer churn rate is the percentage of customers who cancel in a given period. Monthly churn above 2% compounds into serious retention problems. Revenue churn can be more informative than logo churn: expansion revenue from retained customers can make net figures look healthy while customer churn runs high. The revenue churn rate metric covers the distinction.
Use the churn calculator to model the long-term impact of different churn rates on revenue. A one-percentage-point reduction in monthly churn, compounded over 24 months, has a larger impact than most product teams expect.
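The compounding claim is easy to verify with a sketch (a hypothetical 1,000-customer base, flat churn):

```python
def customers_remaining(starting, monthly_churn, months):
    """Customers left after compounding a flat monthly churn rate."""
    return starting * (1 - monthly_churn) ** months

# One percentage point of monthly churn, compounded over 24 months:
round(customers_remaining(1000, 0.03, 24))  # 481
round(customers_remaining(1000, 0.02, 24))  # 616 -- ~28% more customers retained
```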
Monthly Recurring Revenue (MRR)
MRR is the normalized monthly revenue from all active subscriptions. It is the foundational revenue metric for subscription businesses. MRR breaks down into components: new MRR from new customers, expansion MRR from upgrades, churned MRR from cancellations, and contraction MRR from downgrades. Each component needs its own owner and improvement strategy.
The MRR calculator handles the decomposition. The MRR growth rate is the trend metric that most investors and boards track.
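The decomposition is additive, which a short sketch makes concrete (all figures hypothetical):

```python
def net_new_mrr(new, expansion, churned, contraction):
    """Month-over-month MRR movement: gains minus losses."""
    return new + expansion - churned - contraction

def end_of_month_mrr(start, new, expansion, churned, contraction):
    return start + net_new_mrr(new, expansion, churned, contraction)

# Hypothetical month: $100k start, $12k new, $5k expansion, $4k churned, $1k contraction.
end_of_month_mrr(100_000, 12_000, 5_000, 4_000, 1_000)  # 112000
```

Tracking the four components separately is what makes the "own each lever" advice above operational: the same net number can hide very different mixes of gains and losses.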
Annual Recurring Revenue (ARR)
ARR is MRR multiplied by 12. It is the standard metric for reporting to investors and boards because annual figures match the timescale of planning and valuation. The ARR/MRR glossary entry covers when to use each.
Net Revenue Retention (NRR)
NRR measures how much revenue you retain from existing customers including expansion. NRR above 100% means your existing customer base is growing even without new customer acquisition. This is the signal that distinguishes excellent SaaS businesses from average ones. Median NRR for top-quartile SaaS companies is around 120-130%.
The NRR calculator computes the result from starting MRR, expansion, contraction, and churn. The Gross Revenue Retention metric is the companion figure: GRR excludes expansion (floor of 0%, ceiling of 100%) and isolates pure retention.
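Both retention figures come from the same four inputs; a sketch with hypothetical numbers shows the difference expansion makes:

```python
def nrr(start_mrr, expansion, contraction, churned):
    """Net revenue retention for a period, existing customers only."""
    return (start_mrr + expansion - contraction - churned) / start_mrr

def grr(start_mrr, contraction, churned):
    """Gross revenue retention: same losses, but no credit for expansion."""
    return (start_mrr - contraction - churned) / start_mrr

nrr(100_000, 15_000, 2_000, 5_000)  # 1.08 -> 108%, the base is growing on its own
grr(100_000, 2_000, 5_000)          # 0.93 -> 93% pure retention
```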
Quick Ratio
The Quick Ratio measures growth efficiency: (new MRR + expansion MRR) divided by (churned MRR + contraction MRR). A quick ratio of 4 means you add $4 in new revenue for every $1 you lose. Below 1 means the business is shrinking. The Quick Ratio calculator computes this in seconds.
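One division, sketched with hypothetical MRR components:

```python
def quick_ratio(new_mrr, expansion_mrr, churned_mrr, contraction_mrr):
    """Growth efficiency: MRR gained / MRR lost in the same period."""
    return (new_mrr + expansion_mrr) / (churned_mrr + contraction_mrr)

quick_ratio(30_000, 10_000, 8_000, 2_000)  # 4.0 -- $4 added per $1 lost
```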
Cohort Retention
Cohort analysis groups users by the period they joined and tracks how each group behaves over time. It is the tool that separates real retention improvement from mix effects. The cohort retention metric and the cohort retention curve both explain why aggregate retention numbers mislead when your user base is growing fast. Use the cohort analysis template to structure your first analysis.
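A minimal sketch of a cohort table built from toy data: users grouped by signup month (0-indexed), with retention computed per month-offset:

```python
from collections import defaultdict

def cohort_retention_table(signup_month, active_months, horizon):
    """Return {cohort_month: [retention at offsets 0..horizon]}.

    signup_month: {user: month index of signup}
    active_months: {user: set of month indices with any activity}
    """
    cohorts = defaultdict(list)
    for user, month in signup_month.items():
        cohorts[month].append(user)
    return {
        cohort: [
            sum(1 for u in users if cohort + k in active_months.get(u, set())) / len(users)
            for k in range(horizon + 1)
        ]
        for cohort, users in sorted(cohorts.items())
    }

signup_month = {"a": 0, "b": 0, "c": 1}
active_months = {"a": {0, 1, 2}, "b": {0}, "c": {1, 2}}
cohort_retention_table(signup_month, active_months, 2)
# {0: [1.0, 0.5, 0.5], 1: [1.0, 1.0, 0.0]}
```

Reading down a column (the same offset across cohorts) is what reveals whether newer cohorts retain better than older ones.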
The Frameworks
AARRR: Pirate Metrics
The AARRR Pirate Metrics framework organizes metrics into five stages of the user lifecycle: Acquisition, Activation, Retention, Referral, Revenue. It was coined by Dave McClure and remains one of the most widely used metric frameworks in product management because it forces teams to measure the whole funnel, not just the top.
Each stage has distinct metrics:
- Acquisition: traffic by source, signup rate, CAC
- Activation: activation rate, time to value, onboarding completion
- Retention: D1/D7/D30, cohort curves, DAU/MAU
- Referral: viral coefficient, referral rate, invites sent per user
- Revenue: MRR, LTV, NRR, quick ratio
Use the AARRR calculator to compute and benchmark your funnel across all five stages. The pirate metrics glossary entry and the pirate metrics page complement the framework overview.
HEART Framework
The HEART framework was developed at Google to measure user experience quality across five dimensions: Happiness, Engagement, Adoption, Retention, Task Success. It is particularly useful for large products with multiple features where AARRR's funnel model is too linear.
HEART encourages teams to pair each dimension with a goal, a signal, and a metric. The goal-signal-metric structure prevents the common mistake of measuring activity (clicks, sessions) instead of the outcome those activities represent. HEART is complementary to AARRR: use HEART to evaluate specific features and flows, use AARRR to evaluate the overall business funnel. The HEART framework glossary entry covers how to implement it.
North Star Framework
The North Star Framework structures a company's metric system around one primary output metric (the North Star) and the input metrics that drive it. It is particularly useful for aligning cross-functional teams because it gives everyone a single shared definition of success.
The framework's power is in the input metric layer. Once you have a North Star, you map the three to five levers that most directly move it. Those levers become the objectives for individual product teams. When a team's work does not connect to any lever, it is a signal that the work is not a priority. The North Star metric definition page and the North Star Finder tool support this work.
Tools That Do the Math
You should not be calculating these metrics by hand or in spreadsheets when product decisions are on the line. These calculators give you instant answers:
- AARRR Calculator: Five-stage funnel analysis with benchmarks
- NPS Calculator: Weighted NPS from raw survey responses
- LTV Calculator: Lifetime value with churn-adjusted scenarios
- LTV-CAC Calculator: Unit economics ratio with payback period
- Churn Calculator: Monthly/annual churn with revenue impact modeling
- MRR Calculator: MRR decomposed into new, expansion, and churn
- NRR Calculator: Net revenue retention from component inputs
- North Star Finder: Guided selection process for your North Star
- Quick Ratio Calculator: Growth efficiency ratio
For reporting, the product metrics report template and the SaaS metrics report template give you a structured starting point. The KPI dashboard template and the executive dashboard template help you communicate metrics upward.
The Step-by-Step Process
Step 1: Define Your North Star
Start with the value exchange. What does your user get from your product that they could not easily get elsewhere? What is the action that indicates they received that value? That action is your North Star candidate.
Test it against three questions: Does it go up when users get more value? Does it lead revenue by at least four weeks? Can every team in the company influence it? If yes to all three, you have your North Star.
Step 2: Identify Input Metrics
Work backward from the North Star. What behaviors or events, if increased, would directly move the North Star? List them. Cluster them into three to five distinct levers. Each lever becomes an input metric with an owner.
Common input metric categories: acquisition quality (activation rate of new signups), engagement depth (core action frequency per active user), expansion behavior (invite rate, multi-seat adoption), and value realization speed (time to first key action).
Step 3: Instrument Everything
You cannot act on metrics you cannot measure. Audit your instrumentation. Every activation event, core action, and churn event should be tracked. If you are missing instrumentation, build it before building features. The feature adoption rate metric and feature usage frequency are metrics that require explicit event tracking to be meaningful.
Step 4: Build Your Dashboard
One dashboard per audience. The executive dashboard has five numbers: North Star, MRR, NRR, CAC payback period, and D30 retention. The product team dashboard has the North Star, three to five input metrics, and health guardrails. The engineering dashboard has error rate, uptime, and page load time.
Separate dashboards prevent the situation where every stakeholder looks at a different number and draws a different conclusion about how the product is doing.
Step 5: Set Targets and Review Cadences
Every metric on your dashboard needs a target and a review cadence. Targets should be specific and time-bound. Not "improve NPS" but "increase NPS from 32 to 45 by Q3." Review cadences: North Star and input metrics weekly, health metrics on alert-based monitoring, cohort analysis monthly, NPS quarterly.
The product-market fit score metric is worth reviewing quarterly as a longitudinal check on whether your product is still serving its core audience.
Step 6: Act on What You See
A metric that does not drive a decision is a decoration. Every weekly review should end with one of three outcomes: things are on track (no action required), things are off track and we know why (action underway), or things are off track and we do not know why (diagnosis sprint initiated).
When a metric moves unexpectedly, check the adjacent metrics in your hierarchy first. A drop in Day 7 retention may trace to a drop in activation rate, which traces to a broken onboarding step. Work down the hierarchy before assuming the problem is fundamental.
Common Mistakes
Tracking vanity metrics instead of actionable ones. Total registered users, page views, and app downloads tell you almost nothing about value delivery. Replace them with activated users, pages per session in core workflows, and app opens per week.
No cohort analysis. Aggregate retention numbers hide the trend. If your product is improving, recent cohorts should retain better than older ones. If they do not, you are not actually improving retention. Use retention by cohort every month, not just when something looks wrong.
Ignoring the LTV:CAC ratio until Series B. Unit economics matter at every stage. Knowing your CAC and CAC payback period early helps you make better decisions about channel mix and pricing long before investors ask about it.
Setting targets without understanding baselines. A 10% improvement target is meaningless without knowing where you start. Spend two weeks establishing baselines before setting quarterly targets.
Measuring output instead of outcome. Shipping 30 features in a quarter is not a product metric. It is an activity metric. Measure whether users adopted those features, whether adoption improved retention, and whether retention improved revenue.
No owner for each metric. A metric without an owner is a metric that will not improve. Every input metric should have a named PM responsible for it, an experiment backlog tied to it, and a quarterly target attached to it.
Mixing pre-PMF and post-PMF metrics. Early-stage products should focus on a small number of qualitative signals and early retention patterns. Applying growth-stage frameworks like AARRR before you have product-market fit creates false precision around the wrong questions. The product-market fit score helps you assess which stage you are in.
Metrics Stack by Stage
Pre-PMF
At this stage, you are trying to answer one question: do users get enough value to come back? Track a small number of metrics closely.
- Activation rate: Is the signup-to-value journey working?
- Day 7 retention: Are users forming a habit?
- NPS and qualitative exit surveys: Why are people leaving?
- Time to value: How long until users get their first outcome?
Avoid building complex dashboards before PMF. The signal-to-noise ratio is too low. Focus on talking to users, watching sessions, and tracking the one or two behaviors that correlate with retention.
Growth Stage (Post-PMF)
Once retention is stable and you have a repeatable acquisition channel, expand your metric stack.
- Full AARRR funnel: Acquisition channels, activation rate, D1/D7/D30 retention, referral rate, MRR
- Unit economics: CAC, LTV, LTV:CAC ratio, payback period
- Engagement depth: DAU/MAU, core action frequency, power user percentage
- Revenue composition: New MRR, expansion MRR, churned MRR
This is when a North Star Metric pays the biggest dividends. Growth is expensive, and having a single shared definition of success keeps teams aligned across acquisition, product, and monetization.
Scale Stage
At scale, the metric stack adds complexity around efficiency and expansion.
- NRR as the primary health metric: Above 120% means existing customers are growing faster than they churn
- Cohort analysis by segment: Not all customers retain equally. Know which segments are stickiest.
- Revenue per employee: A scale-stage efficiency metric the board will track
- Quick ratio: Growth efficiency as you add paid channels and sales motion
The SaaS playbook covers stage-specific priorities in depth, including when to shift from acquisition-led to expansion-led growth.
Closing
The goal of a product metrics practice is not to have a great dashboard. It is to make faster, better decisions. The best metric stacks are small, stable, and tied directly to the value the product delivers.
Start with your North Star. Add three to five input metrics. Set targets. Review weekly. Run experiments against the metrics that matter, and ignore the ones that don't.
The tools and frameworks in this guide give you everything you need to build that practice. The North Star Finder, the AARRR calculator, and the product metrics report template are the right starting points. Build the system, instrument the events, and let the data tell you where to focus.
For a deeper treatment of any specific metric, every metric linked in this guide has a dedicated page with formulas, benchmarks, and worked examples. For the framework overviews, the AARRR Pirate Metrics framework and HEART framework pages are the right next stops.