Metrics · Intermediate · 12 min read

HEART Framework: A Product Manager's Guide

Learn how to use Google's HEART framework to define and track meaningful UX metrics across Happiness, Engagement, Adoption, Retention, and Task Success.

Best for: Product managers and UX teams who need a structured approach to defining, tracking, and improving user experience metrics beyond basic engagement numbers
By Tim Adair · Published 2026-02-19

Quick Answer (TL;DR)

HEART is a UX metrics framework developed at Google by Kerry Rodden, Hilary Hutchinson, and Xin Fu. It organizes user experience measurement into five dimensions: Happiness (satisfaction, NPS, perceived ease of use), Engagement (frequency and depth of interaction), Adoption (new users starting to use a product or feature), Retention (users who keep coming back), and Task Success (efficiency, error rate, completion rate). For each dimension, teams follow a Goals, Signals, Metrics (GSM) process to connect business objectives to measurable user behavior. The framework works because it forces teams to be explicit about what "good UX" means in quantifiable terms.


Why UX Metrics Need Structure

Most product teams track metrics, but few track them in a way that captures the full picture of user experience. Common failure modes:

  • Vanity metrics only. Pageviews and DAU tell you volume, not quality. A user who visits your app daily but can't complete their core task is not a success story.
  • Metric sprawl. Teams track 40 metrics in a dashboard no one reads. Without a framework to organize metrics, everything gets measured and nothing gets managed.
  • Missing dimensions. A product might track retention rigorously but have no signal for whether users are actually satisfied. Users can be retained through switching costs while being deeply unhappy.

HEART solves these problems by providing a structured way to choose what to measure and why. It complements growth-focused frameworks like the AARRR Calculator by adding explicit UX quality dimensions that funnel metrics miss.

The Five Dimensions

Happiness

Happiness captures subjective user attitudes: satisfaction, perceived ease of use, and willingness to recommend. It's the only HEART dimension that relies on self-reported data rather than behavioral data.

Why it matters: Behavioral metrics can mislead. High engagement might mean users love your product, or it might mean your product is confusing and users are struggling. Happiness metrics disambiguate: they tell you how users feel about the experience, separate from what they do.

Common metrics:

  • Net Promoter Score (NPS). Use the NPS Calculator to compute and benchmark your score.
  • Customer Satisfaction Score (CSAT) on specific flows
  • System Usability Scale (SUS) for periodic UX assessments
  • In-app satisfaction surveys (e.g., "How easy was it to complete this task?" on a 1-5 scale)
  • App store rating trends

Pitfalls: Survey fatigue. If you survey too frequently, response rates drop and self-selection bias increases. Sample strategically: survey after key flows, not on every session.
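One way to "sample strategically" is to gate survey prompts behind three checks: the user just finished a key flow, they haven't been surveyed recently, and a random sample cap. A minimal sketch, assuming hypothetical flow names and thresholds (none of these come from the HEART framework itself):

```python
import random

# Assumed flow names and thresholds -- tune these for your product.
KEY_FLOWS = {"report_builder", "data_export"}
SAMPLE_RATE = 0.25       # survey at most 25% of key-flow completions
COOLDOWN_DAYS = 30       # don't re-survey the same user for a month

def should_survey(flow: str, days_since_last_survey: int, rng=random.random) -> bool:
    """Return True if this completion should trigger a post-task survey."""
    if flow not in KEY_FLOWS:
        return False                    # never survey outside key flows
    if days_since_last_survey < COOLDOWN_DAYS:
        return False                    # respect the per-user cooldown
    return rng() < SAMPLE_RATE          # sample a fraction of the rest
```

The per-user cooldown is what protects response rates: each user sees at most one prompt per window, so fatigue (and the self-selection bias it causes) stays bounded.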

Engagement

Engagement measures the depth and frequency of user interaction with your product. It goes beyond "did they visit?" to "how deeply did they interact?"

Why it matters: Engagement is an early indicator of value delivery. Users who engage deeply are finding value. Users who engage shallowly may be at risk. Engagement metrics also help you identify your most valuable features and your dead ones.

Common metrics:

  • Sessions per user per week
  • Actions per session (clicks, edits, submissions)
  • Feature-specific usage rates (% of users who used feature X in the last 7 days)
  • Time spent on core workflows (not total time in app, which can indicate confusion)
  • Content creation/consumption ratio (for UGC products)

Pitfalls: "Time in app" is often a poor engagement metric. More time can mean deep engagement or it can mean the user is lost. Measure time on core tasks specifically, not overall session duration.
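A feature-specific usage rate like the one listed above ("% of users who used feature X in the last 7 days") reduces to a small set computation over an event log. A sketch, assuming a simple `(user, feature, day)` event tuple rather than any particular analytics tool's schema:

```python
from datetime import date, timedelta

def feature_usage_rate(events, feature, active_users, as_of, window_days=7):
    """Share of active users who used `feature` in the trailing window.

    events: iterable of (user_id, feature_name, event_date) tuples.
    """
    cutoff = as_of - timedelta(days=window_days)
    used = {u for (u, f, d) in events
            if f == feature and cutoff < d <= as_of}
    # Intersect with the active base so the denominator is comparable
    # across periods (a rate, not a raw count).
    return len(used & active_users) / len(active_users) if active_users else 0.0
```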

Adoption

Adoption measures how many new users start using your product or a specific feature within it. It's about the top of the experience funnel: are people trying this?

Why it matters: A new feature that nobody adopts is a failed feature, regardless of how well it works for the few who find it. Adoption metrics reveal distribution and discoverability problems. They also help you distinguish growth (new users) from depth (existing users doing more).

Common metrics:

  • New user signups or activations per period
  • Feature adoption rate (% of eligible users who used a new feature within 14 days of release)
  • Upgrade conversions (free to paid)
  • New user activation rate (% who completed the onboarding milestone)
  • First-time usage of a specific workflow

Pitfalls: High adoption with low retention is a red flag. People tried it but didn't find it valuable enough to come back. Always pair adoption with retention metrics.
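The "feature adoption rate within 14 days of release" metric above can be sketched as follows, assuming you have each user's first-use date (the field names are illustrative):

```python
from datetime import date, timedelta

def adoption_rate(first_use_by_user, eligible_users, release_day, window_days=14):
    """Share of eligible users whose first use falls within the window.

    first_use_by_user: {user_id: date of first feature use}.
    """
    deadline = release_day + timedelta(days=window_days)
    adopters = {u for u, d in first_use_by_user.items()
                if u in eligible_users and release_day <= d <= deadline}
    # Denominator is *eligible* users, not all users: adoption should
    # only be judged against people who could have used the feature.
    return len(adopters) / len(eligible_users) if eligible_users else 0.0
```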

Retention

Retention measures whether users come back. It's the most reliable indicator of product value because users who repeatedly return are getting something they can't easily get elsewhere.

Why it matters: Retention is the foundation of sustainable growth. You can acquire users all day, but if they leave after a week, you're filling a leaky bucket. Retention metrics also connect directly to revenue in subscription businesses. Read more about retention as a core metric.

Common metrics:

  • Day 1, Day 7, Day 30 retention rates
  • Weekly or monthly active user retention (cohorted)
  • Churn rate (inverse of retention)
  • Reactivation rate (dormant users who return)
  • Feature retention (% who used a feature in week 1 and again in week 4)

Pitfalls: Retention windows matter. A daily productivity tool should measure daily retention. A tax preparation tool used once a year needs annual retention. Match the measurement window to the natural usage frequency of your product.
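Cohorted Day-N retention, as listed above, is a straightforward lookup once signups and activity are event data. A sketch, assuming a strict "active exactly N days after signup" definition (some teams instead use "active on or after Day N"; match your own convention):

```python
from datetime import date, timedelta

def day_n_retention(signups, activity, n):
    """Day-N retention for a cohort.

    signups: {user_id: signup_date}; activity: set of (user_id, active_date).
    """
    cohort = list(signups)
    retained = [u for u in cohort
                if (u, signups[u] + timedelta(days=n)) in activity]
    return len(retained) / len(cohort) if cohort else 0.0
```

The `n` parameter is exactly where the "match the window to natural usage frequency" advice lands: a daily tool tracks `n=1` and `n=7`; an annual tool needs `n` on the order of 365.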

Task Success

Task Success measures whether users can accomplish what they came to do, and how efficiently. It bridges the gap between user intent and product capability.

Why it matters: A product can have high engagement and good retention while still having significant usability problems. Users might work around friction points rather than abandoning the product entirely. Task success metrics reveal these friction points.

Common metrics:

  • Task completion rate (% who successfully completed a defined workflow)
  • Time-on-task (how long it takes to complete a core action)
  • Error rate (% of attempts that result in errors or retries)
  • Search success rate (% of searches that lead to a clicked result)
  • Funnel drop-off rate per step (where do users abandon a multi-step flow?)

Pitfalls: You need to define what "success" looks like for each task before measuring it. If your product has 20 different workflows, you can't track task success for all of them. Pick the 3-5 core tasks that define your product's value.
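The per-step funnel drop-off metric above is simple arithmetic over the count of users reaching each step. A minimal sketch:

```python
def step_dropoff(counts):
    """Fraction of users lost at each transition of a multi-step flow.

    counts: users reaching each step, in order, e.g. [1000, 800, 600].
    Returns one drop-off rate per transition.
    """
    return [round(1 - after / before, 4) if before else 0.0
            for before, after in zip(counts, counts[1:])]
```

The largest value in the result is the step to investigate first: in the test below, step 2 -> 3 loses 25% of users, five times the loss of step 3 -> 4.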

The Goals-Signals-Metrics (GSM) Process

The HEART dimensions tell you what categories to measure. The GSM process tells you how. For each dimension you choose to track, work through three steps:

Step 1: Define Goals

What is the desired outcome for this dimension? Goals should be specific to your product and your current strategic focus.

| Dimension | Weak Goal | Strong Goal |
| --- | --- | --- |
| Happiness | Users are satisfied | Users rate the report builder workflow 4.5/5 or above |
| Engagement | Users are engaged | Power users (top 20%) create 5+ reports per week |
| Adoption | People try the new feature | 40% of eligible users try the new AI search within 14 days |
| Retention | Users keep coming back | Week 4 retention for new cohorts exceeds 55% |
| Task Success | Users can do what they need | 90% of users complete their first report in under 10 minutes |

Step 2: Identify Signals

What user behaviors would indicate progress toward or away from the goal? Signals are observable actions or events, not metrics yet.

For the Happiness goal above (report builder rated 4.5+):

  • Users submitting positive ratings after completing a report
  • Users sharing reports with teammates (a proxy for confidence in the output)
  • Users contacting support about report builder issues (a negative signal)

Step 3: Choose Metrics

Turn signals into specific, measurable numbers that can be tracked over time. Good metrics are:

  • Specific. "Post-task CSAT score for report builder" not "overall satisfaction"
  • Comparable. Use rates and ratios, not raw counts, to compare across time periods and cohorts
  • Actionable. If the metric moves, you know what to investigate

| Signal | Metric |
| --- | --- |
| Users submitting positive ratings | Average post-task CSAT score (1-5 scale), measured weekly |
| Users sharing reports | Report share rate (% of created reports that are shared within 24 hours) |
| Users contacting support | Support ticket rate per 1,000 report creation sessions |
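Two of the metrics above reduce to a few lines of arithmetic over event data. A sketch, assuming illustrative record shapes (a `(created_at, shared_at)` pair per report, with `shared_at` set to `None` if never shared):

```python
from datetime import datetime, timedelta

def share_rate(reports):
    """% of created reports shared within 24 hours of creation."""
    shared = sum(1 for created, shared_at in reports
                 if shared_at is not None
                 and shared_at - created <= timedelta(hours=24))
    return shared / len(reports) if reports else 0.0

def ticket_rate_per_1000(tickets, sessions):
    """Support tickets per 1,000 report creation sessions."""
    return 1000 * tickets / sessions if sessions else 0.0
```

Both are rates rather than raw counts, which keeps them comparable across weeks and cohorts, per the "Comparable" criterion above.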

A Practical SaaS Example

Context: You're a PM at a B2B analytics platform. Your team just shipped a natural language query feature (users can type questions in plain English instead of writing SQL). You need to define success metrics.

Step 1: Choose relevant HEART dimensions. For a new feature, Adoption and Task Success are most critical. Happiness matters for long-term retention. Engagement and Retention are less relevant in the first 30 days. Focus on three dimensions:

Adoption GSM:

  • Goal: 50% of active SQL users try natural language query within 30 days
  • Signal: User submits at least one natural language query
  • Metric: NL query adoption rate = (users with 1+ NL query) / (users with 1+ SQL query in same period)

Task Success GSM:

  • Goal: Users get accurate, useful results at least 80% of the time
  • Signal: User accepts the generated query (runs it) vs. abandons or modifies it significantly
  • Metric: Query acceptance rate = (NL queries run without modification) / (total NL queries submitted). Fallback rate = (NL queries followed by a SQL query within 60 seconds) / (total NL queries)

Happiness GSM:

  • Goal: Users rate NL query as "helpful" or "very helpful" at a rate above 75%
  • Signal: Users responding to the inline "Was this helpful?" prompt
  • Metric: Helpfulness rate = (helpful + very helpful responses) / (total responses). Track weekly trend.

This gives the team three clear metrics to monitor at launch, each connected to a specific outcome. If adoption is high but task success is low, the feature is discoverable but not useful. If task success is high but adoption is low, the feature works but users don't know it exists. The metrics tell you where to invest next.
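The three launch metrics defined above come down to simple ratios over event counts. A sketch with hypothetical inputs (the event schema is an assumption, not part of the example's product):

```python
def nl_adoption_rate(nl_users, sql_users):
    """Users with 1+ NL query / users with 1+ SQL query, same period."""
    return len(nl_users) / len(sql_users) if sql_users else 0.0

def acceptance_rate(run_unmodified, total_nl_queries):
    """NL queries run without modification / total NL queries."""
    return run_unmodified / total_nl_queries if total_nl_queries else 0.0

def helpfulness_rate(helpful, very_helpful, total_responses):
    """(helpful + very helpful) / total survey responses."""
    return (helpful + very_helpful) / total_responses if total_responses else 0.0
```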

Choosing Which Dimensions to Track

You don't need all five. In fact, tracking all five creates dashboard noise and dilutes focus. Choose based on your product's maturity and current strategic priorities:

| Product Stage | Primary Dimensions | Why |
| --- | --- | --- |
| Pre-launch / Early | Adoption, Task Success | Can people find it and use it successfully? |
| Growth phase | Engagement, Retention | Are users finding ongoing value? |
| Mature product | Happiness, Retention | Are users satisfied enough to stay and recommend? |
| Major redesign | Task Success, Happiness | Did the redesign make things better or worse? |
| New feature launch | Adoption, Task Success, Happiness | The full new-feature evaluation stack |

Integrating HEART with Other Frameworks

HEART focuses on UX quality. It doesn't cover business outcomes (revenue, LTV, CAC) or growth mechanics (acquisition funnels, referral loops). Use it alongside other frameworks:

  • HEART + AARRR. Use the AARRR Calculator for funnel and growth metrics. Overlay HEART dimensions on each AARRR stage to add experience quality to your growth model. Acquisition has Adoption metrics. Activation has Task Success metrics. Retention has Retention metrics (the dimensions overlap here, but HEART adds Happiness and Engagement depth).
  • HEART + NPS. Net Promoter Score is one metric within the Happiness dimension. HEART provides the broader context: if NPS drops, which other dimensions also declined? That tells you whether the issue is satisfaction (Happiness), product quality (Task Success), or value decay (Engagement).
  • HEART + OKRs. HEART metrics make strong OKR key results because they're specific, measurable, and connected to user outcomes. "Increase Day 7 Retention from 52% to 60%" is a HEART-derived key result. The Product Analytics Handbook covers how to build analytics systems that support HEART measurement.

Common Mistakes

Jumping to metrics without the GSM process. Teams that skip Goals and Signals end up tracking whatever their analytics tool makes easy to measure, not what actually matters. "We track DAU because we can" is not a strategy. Start with what outcome you want, then work backward to what to measure.

Treating all dimensions equally. If you're tracking Happiness, Engagement, Adoption, Retention, and Task Success with equal weight, you're not prioritizing. At any given time, 1-2 dimensions should be your primary focus, with others monitored but not actively optimized.

Using HEART for individual features in isolation. HEART works best at the product level or major feature area level. Applying it to a single button or micro-interaction creates measurement overhead without proportional insight. Save HEART for experiences that meaningfully impact user outcomes.

Confusing engagement with value. High engagement can indicate value, habit, confusion, or even addiction. Always pair Engagement with Happiness to understand whether users are engaged because they want to be or because they have to be. A tool that requires 15 clicks to do what should take 3 will show high "engagement" while delivering poor UX.

Not revisiting goals when strategy changes. HEART metrics tied to outdated goals become vanity metrics. When your product strategy shifts (e.g., from growth to monetization), revisit which HEART dimensions and specific metrics still align with what matters.

Limitations

HEART requires instrumentation investment. You need event tracking, survey infrastructure, and cohort analysis capabilities. Teams without mature analytics tooling may find it difficult to measure all dimensions well. Start with the dimensions you can measure today and add infrastructure for the others incrementally.

The framework also doesn't prescribe what to do when metrics move. If Day 7 Retention drops from 55% to 48%, HEART tells you there's a problem in the Retention dimension but not why or what to fix. Pair HEART measurement with qualitative research (user interviews, session recordings) to diagnose root causes.

Finally, HEART was designed for web-scale consumer products at Google. B2B SaaS products, products with very small user bases, or products with infrequent usage patterns may need to adapt the dimensions. Monthly active products can't meaningfully measure weekly engagement, for example. Adjust measurement windows and signal definitions to fit your product's natural usage cadence.

Frequently Asked Questions

What is the HEART framework?
The HEART framework is a UX metrics system developed by Kerry Rodden, Hilary Hutchinson, and Xin Fu at Google. It organizes user experience metrics into five categories: Happiness (user satisfaction), Engagement (depth of interaction), Adoption (new user acquisition), Retention (returning users), and Task Success (efficiency and completion). For each category, teams define Goals, Signals, and Metrics to create a focused measurement plan.
How is HEART different from AARRR (Pirate Metrics)?
AARRR (Acquisition, Activation, Revenue, Retention, Referral) is a business-oriented funnel focused on growth and monetization. HEART is a UX-oriented framework focused on the quality of the user experience. They complement each other: AARRR tells you whether the business is growing, while HEART tells you whether users are having a good experience. Many teams track both.
Do you need to track all five HEART dimensions?
No. The framework's creators explicitly recommend choosing the dimensions most relevant to your product and current goals. A mature product might focus on Retention and Happiness. A new product might prioritize Adoption and Task Success. Tracking all five creates noise and dilutes focus. Pick 2-3 dimensions per product area.
What is the Goals-Signals-Metrics process in HEART?
For each HEART dimension you choose to track, follow three steps. Goals: what is the desired user experience outcome? Signals: what user behaviors would indicate success or failure? Metrics: what specific, measurable numbers can you extract from those signals? This process prevents the common mistake of jumping straight to metrics without connecting them to actual UX outcomes.
Can the HEART framework be applied to B2B SaaS products?
Yes. B2B products often under-invest in UX measurement because they rely on contract renewals rather than consumer-style retention signals. HEART is valuable for B2B because it surfaces experience quality issues before they show up as churn. Adapt the dimensions: Happiness might be CSAT or NPS, Engagement might be feature adoption depth, and Task Success might be time-to-complete core workflows.