
The Complete Guide to Product Metrics: What to Measure and Why

A thorough guide to product metrics covering metric frameworks (AARRR, HEART, North Star), metric trees, targets, review cadence, and common measurement mistakes.

By Tim Adair • Published 2025-05-08 • Updated 2026-02-12

Quick Answer (TL;DR)

Product metrics tell you whether your product is working — not just whether it is functioning, but whether it is delivering value to users and growing the business. The challenge is not finding things to measure. It is choosing the right things to measure and building the discipline to act on what the data tells you. This guide covers the three most widely used metric frameworks, how to build a metric tree, how to set targets, and the mistakes that cause teams to measure the wrong things.

Summary: Measure outcomes, not output. Track leading indicators, not just lagging ones. Choose 3-5 metrics that matter, ignore the rest, and review weekly.

Key Steps:

  • Choose a metric framework (AARRR, HEART, or North Star) that fits your product's stage
  • Build a metric tree that connects your North Star to actionable input metrics
  • Set targets based on baselines and benchmarks, then review weekly
    Time Required: 2-4 hours for initial metric setup, 1 hour per week for reviews

    Best For: Product managers, growth leads, product analysts, and anyone responsible for measuring product health


    Table of Contents

  • Why Metrics Matter (and Why They're Hard)
  • Types of Metrics
  • The Three Major Metric Frameworks
  • Building a Metric Tree
  • Choosing Your North Star Metric
  • Setting Targets and Thresholds
  • The Metric Review Cadence
  • Metrics by Product Stage
  • Common Measurement Mistakes
  • Building a Metrics Culture
  • Key Takeaways

    Why Metrics Matter (and Why They're Hard)

    Metrics serve three purposes for product teams:

  • Decision-making: Should we invest more in onboarding or retention? Metrics provide the evidence to decide.
  • Accountability: Did the feature we shipped actually improve the outcome we targeted? Metrics provide the answer.
  • Communication: How is the product performing? Metrics provide a shared language that executives, engineers, and designers can all understand.

    The problem is not a lack of data. Modern analytics tools produce an overwhelming amount of data. The problem is knowing which data points actually matter.

    The Measurement Trap

    Most product teams measure too many things and act on too few. They build dashboards with 50 charts, review them in a weekly meeting, nod thoughtfully, and then make decisions based on the loudest voice in the room. The dashboard becomes decoration.

    Effective measurement requires discipline in three areas:

  • Selection: Choose 3-5 metrics that directly connect to your product strategy. Ignore everything else.
  • Interpretation: Understand what a metric is telling you, what it is not telling you, and what additional context you need.
  • Action: Define in advance what you will do if a metric goes up, goes down, or stays flat. If a metric cannot trigger a decision, stop tracking it.

    Types of Metrics

    Before diving into frameworks, it helps to understand the categories of metrics and how they relate to each other.

    Input vs. Output Metrics

    Output metrics measure results: revenue, churn rate, NPS. They are important but hard to influence directly. You cannot tell the team to "increase revenue" without telling them how.

    Input metrics measure the activities and behaviors that drive results: onboarding completion rate, feature adoption rate, session frequency. Input metrics are actionable because the team can directly influence them through product changes.

    Example chain:

    Input metric        → Intermediate metric    → Output metric
    Onboarding          → 7-day retention        → Monthly recurring
    completion rate       rate                     revenue (MRR)
    (85% → 92%)          (45% → 55%)              ($100K → $120K)

    The team can directly improve onboarding completion rate by redesigning the onboarding flow. That improvement cascades through retention and eventually hits revenue. But telling the team "improve MRR" without this chain is like telling a pilot "fly higher" without explaining which controls to use.

    Leading vs. Lagging Metrics

    Leading metrics predict future outcomes. Lagging metrics confirm what has already happened.

    Leading (Predictive)   | Lagging (Confirmatory)
    Activation rate        | Monthly recurring revenue
    Feature adoption rate  | Churn rate
    Weekly active users    | Annual revenue
    NPS                    | Customer lifetime value
    Setup completion rate  | Net revenue retention

    Product teams should focus primarily on leading metrics because they provide early signal and can be influenced. Lagging metrics are essential for business reporting but too slow for day-to-day product decisions.

    Vanity vs. Actionable Metrics

    Vanity metrics go up and to the right but do not inform decisions: total registered users, total page views, number of downloads. They feel good but tell you nothing about product health.

    Actionable metrics change based on product decisions: daily active users, retention by cohort, conversion rate. They move when you ship something, and they tell you whether the thing you shipped is working.

    The test: If a metric only goes up, it is probably a vanity metric. Total users can only increase (barring account deletion). Retention rate, on the other hand, can go up or down based on product quality.


    The Three Major Metric Frameworks

    1. AARRR (Pirate Metrics)

    The AARRR framework, created by Dave McClure, tracks the user lifecycle through five stages:

    Stage       | Question                                | Example Metrics
    Acquisition | How do users find us?                   | Website visitors, signup rate, traffic by source
    Activation  | Do users have a good first experience?  | Onboarding completion, time to first key action, activation rate
    Retention   | Do users come back?                     | Day 7 retention, Day 30 retention, DAU/MAU ratio
    Revenue     | Do users pay?                           | MRR, ARPU, LTV
    Referral    | Do users tell others?                   | Referral rate, viral coefficient, NPS

    When to use AARRR: Early-stage and growth-stage products. AARRR is useful because it covers the entire user lifecycle and helps you identify where the biggest drop-off is. If you are losing 80% of users between Acquisition and Activation, that is where you should focus — not on Referral.

    Strengths: Simple, intuitive, covers the full funnel. Works for most B2C and B2B SaaS products.

    Weaknesses: Lacks a user satisfaction dimension. Does not distinguish between different types of engagement. Revenue and Referral metrics can take months to move, making them poor choices for weekly tracking.

    Use IdeaPlan's AARRR Calculator to map your product's metrics to the pirate framework.

    2. HEART Framework

    The HEART framework, developed by Google's research team, measures user experience across five dimensions:

    Dimension    | What It Measures                       | Example Metrics
    Happiness    | User satisfaction and attitude         | NPS, CSAT, survey ratings
    Engagement   | Depth and frequency of use             | Session duration, sessions per user, feature usage frequency
    Adoption     | New users of a product or feature      | Feature adoption rate, signup rate
    Retention    | Users who return over time             | Day 7 retention, retention by cohort
    Task Success | Ability to complete tasks efficiently  | Task completion rate, error rate, time on task

    When to use HEART: When you want to measure user experience quality specifically, not just business outcomes. HEART works well for teams that have a dedicated UX function and want to quantify the impact of design improvements.

    Strengths: Explicitly includes user satisfaction and task success — dimensions that AARRR misses. Each dimension can be measured with Goals, Signals, and Metrics (GSM), providing a structured way to define what "success" means.

    Weaknesses: Can produce too many metrics if you measure all five dimensions. Most teams should pick 2-3 HEART dimensions per quarter based on their strategic focus.

    For a deeper dive, see the HEART Framework glossary entry.

    3. North Star Framework

    The North Star framework centers the team around a single metric that captures the core value the product delivers to users.

    The structure:

                  ┌──────────────────┐
                  │   NORTH STAR     │
                  │   METRIC         │
                  │                  │
                  │ "Weekly active   │
                  │  teams with 3+   │
                  │  shared views"   │
                  └───────┬──────────┘
                          │
             ┌────────────┼────────────┐
             │            │            │
        ┌────▼────┐  ┌────▼────┐  ┌───▼─────┐
        │ Input 1 │  │ Input 2 │  │ Input 3 │
        │ New team│  │ Views   │  │ Team    │
        │ setups  │  │ created │  │ invites │
        │ per week│  │ per team│  │ accepted│
        └─────────┘  └─────────┘  └─────────┘

    How it works: Identify a single metric that reflects the core value your product provides. Then identify 3-5 input metrics that the team can directly influence. Focus daily work on moving the input metrics, which collectively drive the North Star.

    When to use North Star: When you need organizational alignment around a single definition of success. Particularly useful for larger teams where different groups might otherwise optimize for different (conflicting) outcomes.

    Strengths: Powerful alignment tool. Forces clarity about what "success" means. The input metric structure makes abstract goals concrete.

    Weaknesses: Choosing the wrong North Star can lead the team in the wrong direction. A North Star that is too lagging (like revenue) does not give teams actionable guidance. A North Star that is too narrow (like a single feature's usage) misses the big picture.

    Use IdeaPlan's North Star Finder to identify the right North Star for your product.

    Choosing a Framework

    Factor             | AARRR                          | HEART                        | North Star
    Best for           | Full funnel visibility         | UX quality measurement       | Organizational alignment
    Complexity         | Medium (5 stages)              | Medium (5 dimensions)        | Low (1 metric + inputs)
    Action orientation | High (identifies funnel leaks) | Medium (measures experience) | High (focuses daily work)
    Product stage      | Early to growth                | Growth to maturity           | Growth to maturity

    Many teams combine frameworks. For example: use a North Star metric for organizational alignment, track AARRR stages for funnel health, and use HEART dimensions for quarterly UX evaluations.


    Building a Metric Tree

    A metric tree connects your top-level business outcome to the product behaviors that drive it. It makes the causal chain explicit, so every team member understands how their work connects to business results.

    The Structure

    Level 1: Business Outcome
      └─ Revenue, Profit, Market Share
    
    Level 2: Product Outcomes
      └─ Retention, Expansion, Acquisition
    
    Level 3: User Behaviors
      └─ Feature usage, session frequency, task completion
    
    Level 4: Product Actions
      └─ Specific features, flows, and improvements

    Example: SaaS Product Metric Tree

    Revenue Growth (+20% YoY)
    ├── New Revenue
    │   ├── Signups → Activation rate (65%)
    │   │   ├── Onboarding completion (85%)
    │   │   ├── Time to first key action (< 10 min)
    │   │   └── First session duration (> 5 min)
    │   └── Trial → Paid conversion (12%)
    │       ├── Feature discovery rate (40%)
    │       └── Invite sent during trial (30%)
    ├── Expansion Revenue
    │   ├── Seat expansion rate (15%)
    │   │   └── Team invite acceptance (70%)
    │   └── Upsell rate (8%)
    │       └── Pro feature engagement (25%)
    └── Retained Revenue
        ├── Monthly retention (96%)
        │   ├── Weekly active usage (3+ days)
        │   ├── Core action frequency (5+ per week)
        │   └── Customer health score (> 70)
        └── Net revenue retention (115%)
            └── Expansion > Contraction + Churn

    How to Build Your Metric Tree

  • Start with the business outcome your leadership cares about. Usually revenue, profit, or growth rate.
  • Decompose into product outcomes. What product behaviors drive that business outcome? Acquisition, activation, retention, expansion.
  • Decompose further into user behaviors. What specific actions do users take that indicate healthy product usage?
  • Connect to product levers. What specific features, flows, or improvements can the team build to influence those behaviors?

    The tree should be readable top-to-bottom ("revenue is driven by new revenue, expansion, and retention") and bottom-to-top ("improving onboarding completion will increase activation, which drives new revenue").
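
    If you want the tree to live somewhere other than a diagram, a minimal sketch in Python might look like the following. The node names, targets, and current values are hypothetical, borrowed loosely from the SaaS example above; this is an illustration of the structure, not a prescribed implementation.

    from dataclasses import dataclass, field

    @dataclass
    class MetricNode:
        name: str
        target: float | None = None       # target value, if one has been set
        current: float | None = None      # latest observed value
        children: list["MetricNode"] = field(default_factory=list)

        def trace(self, indent: int = 0) -> None:
            """Print the tree top-to-bottom so the causal chain is easy to follow."""
            status = ""
            if self.target is not None and self.current is not None:
                status = f"  ({self.current:.0%} vs target {self.target:.0%})"
            print("  " * indent + self.name + status)
            for child in self.children:
                child.trace(indent + 1)

    # Hypothetical slice of the tree above.
    tree = MetricNode("Revenue growth", children=[
        MetricNode("New revenue", children=[
            MetricNode("Activation rate", target=0.65, current=0.58, children=[
                MetricNode("Onboarding completion", target=0.85, current=0.81),
            ]),
        ]),
        MetricNode("Retained revenue", children=[
            MetricNode("Monthly retention", target=0.96, current=0.95),
        ]),
    ])

    tree.trace()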

    Guardrail Metrics

    For every metric you try to improve, define a guardrail metric that should not degrade. Guardrails prevent you from optimizing one metric at the expense of another.

    Metric You're Optimizing | Guardrail Metric
    Signup conversion rate   | Customer acquisition cost
    Feature adoption rate    | Support ticket volume
    Session duration         | User satisfaction (CSAT)
    Activation rate          | Day 30 retention
    Revenue per user         | Churn rate

    Example: If you A/B test a more aggressive onboarding flow that increases activation by 15% but also increases support tickets by 40%, the guardrail metric tells you the "win" is actually a problem.
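
    A guardrail check can be written directly into your experiment review. The sketch below is illustrative Python with made-up numbers, not a prescribed implementation; the tolerance is something you agree on before the test starts.

    # Illustrative ship/no-ship check: the target metric must improve AND the
    # guardrail must not degrade beyond a tolerance agreed before the test.
    def evaluate_experiment(target_lift: float, guardrail_change: float,
                            guardrail_tolerance: float = 0.05) -> str:
        if target_lift <= 0:
            return "No ship: target metric did not improve"
        if guardrail_change > guardrail_tolerance:
            return "No ship: guardrail degraded beyond tolerance"
        return "Ship: target improved and guardrail held"

    # Activation up 15%, but support tickets up 40% -- the guardrail catches it.
    print(evaluate_experiment(target_lift=0.15, guardrail_change=0.40))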


    Choosing Your North Star Metric

    Your North Star metric is the single metric that best captures the core value your product delivers. It is not a business metric (revenue). It is a user value metric that correlates with business success over time.

    Criteria for a Good North Star

  • Reflects user value: It goes up when users get more value from the product.
  • Leading indicator of revenue: When it improves, revenue eventually follows.
  • Actionable: The team can influence it through product work.
  • Understandable: Everyone in the company can explain what it means.
  • Measurable: You can track it accurately with available instrumentation.

    North Star Examples by Product Type

    Product Type       | Company    | North Star Metric
    Social media       | Facebook   | Daily active users
    Music streaming    | Spotify    | Time spent listening
    Marketplace        | Airbnb     | Nights booked
    Messaging          | Slack      | Daily active users who send messages
    Collaboration      | Figma      | Weekly active editors
    E-commerce         | Amazon     | Purchases per month
    Project management | Asana      | Weekly active teams with tasks
    B2B SaaS           | Salesforce | Records created/updated per week

    The North Star Selection Process

  • List candidates. Brainstorm 5-10 metrics that could serve as your North Star.
  • Score against criteria. Rate each candidate on the five criteria above (1-5 scale).
  • Check leading indicator strength. For each candidate, analyze the historical correlation with revenue. Does it actually predict business success?
  • Test for actionability. Can your team directly influence this metric through product work? If it is driven primarily by marketing spend or market conditions, it is the wrong choice.
  • Get buy-in. Present your recommendation to leadership with the analysis. The North Star only works if the entire organization rallies around it.

    IdeaPlan's North Star Finder walks you through this process interactively.
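
    To make the scoring step concrete, here is a small illustrative Python sketch. The candidate metrics and their 1-5 scores are hypothetical; the real value is in the discussion that produces the scores, not the arithmetic.

    # Hypothetical candidates scored 1-5 against the five criteria above.
    CRITERIA = ["user value", "leads revenue", "actionable", "understandable", "measurable"]

    candidates = {
        "Weekly active teams with 3+ shared views": [5, 4, 4, 4, 4],
        "Monthly recurring revenue":                [2, 5, 2, 5, 5],
        "Total registered users":                   [1, 2, 3, 5, 5],
    }

    for name, scores in sorted(candidates.items(), key=lambda kv: -sum(kv[1])):
        detail = ", ".join(f"{c}: {s}" for c, s in zip(CRITERIA, scores))
        print(f"{sum(scores):>2}  {name}  ({detail})")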


    Setting Targets and Thresholds

    A metric without a target is trivia. You need to know what "good" looks like before you can evaluate whether you are on track.

    How to Set Targets

    Method 1: Baseline + Improvement

    Measure your current performance (baseline) and set a target based on a realistic improvement percentage.

    Current activation rate: 45%
    Target improvement: 20% relative increase
    New target: 54%

    This works when you have reliable baseline data and the metric has been stable.

    Method 2: Benchmark Comparison

    Compare your metrics to industry benchmarks and set targets based on where you want to be relative to peers.

    Metric           | Your Current | Industry Median | Top Quartile | Your Target
    Activation rate  | 45%          | 50%             | 65%          | 55%
    Day 7 retention  | 35%          | 40%             | 55%          | 45%
    NPS              | 32           | 36              | 50           | 40
    Trial conversion | 8%           | 10%             | 15%          | 12%

    Method 3: Bottoms-Up Modeling

    Model the target from first principles based on planned product changes.

    Current onboarding has 7 steps.
    Step 3 has a 40% drop-off.
    If we redesign Step 3 to reduce drop-off to 20%,
    onboarding completion should improve from 55% to roughly 73% (55% × 0.80 / 0.60).
    Target: 65% (a conservative estimate, allowing for users who would have dropped off at a later step anyway).

    This works when you have a specific initiative and can model its expected impact.
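
    One way to sanity-check this kind of model is to treat overall completion as the product of per-step pass rates, so improving one step scales completion proportionally. A minimal Python sketch using the illustrative numbers above:

    # Overall completion scales with the pass rate of the step you improve:
    # 55% completion with a 60% pass rate at Step 3 becomes ~73% at an 80% pass rate.
    def completion_after_fix(current_completion: float,
                             step_pass_before: float,
                             step_pass_after: float) -> float:
        return current_completion * (step_pass_after / step_pass_before)

    projected = completion_after_fix(0.55, step_pass_before=0.60, step_pass_after=0.80)
    print(f"Projected onboarding completion: {projected:.0%}")  # ~73%
    # Publish a target below the projection (e.g. 65%) to leave room for error.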

    Thresholds: Green, Yellow, Red

    For each metric, define three thresholds:

    Threshold | Definition                                          | Action
    Green     | On track or above target                            | Continue current plan
    Yellow    | Below target but within acceptable range            | Investigate root cause, adjust tactics
    Red       | Significantly below target or trending dangerously  | Escalate, reprioritize, intervene

    Example for activation rate (target: 55%):

  • Green: >= 53%
  • Yellow: 48-52%
  • Red: < 48%
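
    If you publish thresholds, you can also encode them so the weekly review applies them consistently. A minimal sketch, assuming the activation-rate bands above:

    # Illustrative status check using the activation-rate bands above.
    def status(value: float, green_at: float = 0.53, red_below: float = 0.48) -> str:
        if value >= green_at:
            return "green"
        if value < red_below:
            return "red"
        return "yellow"

    print(status(0.50))  # yellow -> investigate root cause, adjust tactics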

    The Metric Review Cadence

    Metrics are only useful if the team reviews them regularly and acts on what they find.

    Weekly Metric Review (30 minutes)

    Who: PM, engineering lead, design lead, data analyst

    Agenda:

  • North Star check (5 min): Current value, trend, and comparison to target. Green/yellow/red?
  • Input metrics review (10 min): Each input metric's current value and trend. Any anomalies?
  • Experiment results (10 min): Any active experiments to review? What are the results? What decisions should we make?
  • Actions (5 min): What are we doing this week based on what we see?

    Key discipline: Do not turn the metric review into a status meeting. The question is not "what are we working on?" but "what is the data telling us and what should we do about it?"

    Monthly Metric Deep Dive (60 minutes)

    Who: Product team + stakeholders

    Agenda:

  • Metric performance summary (10 min): How did our key metrics perform this month?
  • Cohort analysis (15 min): Are newer cohorts performing better or worse than older ones? This tells you whether recent product changes are improving the user experience.
  • Funnel analysis (15 min): Where is the biggest drop-off in our user funnel? Has it changed?
  • Segment analysis (10 min): Are there meaningful differences between user segments (plan type, company size, acquisition channel)?
  • Priorities (10 min): Based on this analysis, should we adjust our focus?

    Quarterly Metric Reset (2 hours)

    Who: PM, leadership, data team

    Agenda:

  • Review quarterly targets: Did we hit them? Why or why not?
  • Update the metric tree: Have we learned anything that changes how we think about metric relationships?
  • Set next quarter's targets: Based on current performance and planned initiatives
  • Evaluate metric selection: Are we tracking the right things? Should any metrics be added or removed?

    Metrics by Product Stage

    Pre-Product-Market Fit

    At this stage, the only metric that matters is retention. If the retention curve flattens (a core of users keeps coming back), you have signal. If it keeps declining toward zero, you do not have product-market fit yet.

    Track:

  • Retention by cohort (the single most important chart)
  • Activation rate
  • Qualitative user feedback (not yet quantifiable in many cases)

    Ignore (for now):

  • Revenue metrics (too early)
  • Vanity metrics (total users, page views)
  • Operational metrics (load time, error rate) — unless they are causing retention problems

    Growth Stage

    You have product-market fit. Now you need to grow efficiently.

    Track:

  • North Star metric + 3-5 input metrics
  • Full AARRR funnel with conversion rates at each stage
  • LTV:CAC ratio (should be > 3:1 for healthy SaaS)
  • Net revenue retention (should be > 100%, ideally > 110%)
  • Quick ratio ((new MRR + expansion MRR) / (churned MRR + contraction MRR); should be > 4)

    Focus on: The biggest leak in your funnel. If 60% of signups never activate, that is where you will get the most impact.
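
    If you compute these health metrics by hand each month, a small script keeps the definitions consistent. The sketch below uses illustrative numbers and one common definition of net revenue retention; your finance team's definitions may differ.

    # Illustrative numbers; NRR uses one common definition and yours may vary.
    def saas_health(new_mrr: float, expansion_mrr: float, churned_mrr: float,
                    contraction_mrr: float, starting_mrr: float,
                    ltv: float, cac: float) -> tuple[float, float, float]:
        ltv_to_cac = ltv / cac                                                     # want > 3
        quick_ratio = (new_mrr + expansion_mrr) / (churned_mrr + contraction_mrr)  # want > 4
        nrr = (starting_mrr + expansion_mrr - churned_mrr - contraction_mrr) / starting_mrr  # want > 1.0
        return ltv_to_cac, quick_ratio, nrr

    ltv_cac, quick, nrr = saas_health(new_mrr=40_000, expansion_mrr=15_000,
                                      churned_mrr=8_000, contraction_mrr=2_000,
                                      starting_mrr=500_000, ltv=9_000, cac=2_500)
    print(f"LTV:CAC {ltv_cac:.1f}, quick ratio {quick:.1f}, NRR {nrr:.0%}")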

    Maturity Stage

    Growth rates have normalized. Focus shifts to efficiency, expansion, and defending market position.

    Track:

  • Revenue efficiency metrics: Rule of 40, gross margin, revenue per employee
  • Expansion metrics: expansion MRR, upsell rate, seat growth
  • Platform health: API call volume, ecosystem growth, integration usage
  • Competitive metrics: Win rate, NPS relative to competitors

    Common Measurement Mistakes

    Mistake 1: Measuring Output Instead of Outcomes

    The problem: The team tracks "features shipped" or "story points completed" rather than the impact those features had on users. Shipping more features is not the goal. Moving metrics is.

    Instead: For every feature you ship, define the metric it should move and the expected magnitude. After shipping, measure whether it actually moved. A team that ships 3 features and moves the activation rate by 15% is outperforming a team that ships 10 features and moves nothing.

    Mistake 2: Survivorship Bias in Metrics

    The problem: Your metrics only reflect the behavior of users who are still around. You measure average session duration for active users and see it increasing — but you are not counting the users who left because the product was not valuable to them.

    Instead: Always include a cohort view. Measure metrics by the cohort of users who signed up in the same week/month. This reveals whether the user experience is improving for new users or whether you are just left with die-hard fans as everyone else churns out.
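
    A cohort view is straightforward to produce from an events table. The sketch below uses pandas and assumes hypothetical columns (user_id, signup_date, event_date); the key point is that every signup stays in its cohort's denominator, whether or not the user is still active.

    import pandas as pd

    def cohort_retention(events: pd.DataFrame) -> pd.DataFrame:
        """Rows: weekly signup cohorts. Columns: weeks since signup. Values: share active."""
        df = events.copy()
        df["cohort_week"] = df["signup_date"].dt.to_period("W")
        df["weeks_since_signup"] = (df["event_date"] - df["signup_date"]).dt.days // 7
        active = df.groupby(["cohort_week", "weeks_since_signup"])["user_id"].nunique()
        cohort_size = df.groupby("cohort_week")["user_id"].nunique()
        # Every signup stays in its cohort's denominator, active or not.
        return active.div(cohort_size, level="cohort_week").unstack("weeks_since_signup")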

    Mistake 3: Optimizing a Local Maximum

    The problem: The team relentlessly optimizes a single metric and succeeds — but at the cost of other important metrics. Conversion rate goes up, but customer quality goes down. Engagement increases, but satisfaction drops.

    Instead: Always pair your target metric with a guardrail. "Increase conversion rate from 8% to 12% while maintaining Day-30 retention above 40%."

    Mistake 4: Confusing Correlation with Causation

    The problem: "Users who complete onboarding have 2x higher retention. Therefore, forcing all users through onboarding will double retention." No. Users who complete onboarding may simply be more motivated to begin with. Forcing unmotivated users through onboarding will not make them motivated.

    Instead: Use controlled experiments (A/B testing) to establish causal relationships. Correlation analysis identifies hypotheses. Experiments validate them.

    Mistake 5: Dashboard Overload

    The problem: The team has 12 dashboards with 200 charts. Nobody looks at them regularly. When someone does look, they cannot find the signal in the noise.

    Instead: Build three dashboards:

  • Executive dashboard: 5-7 key business metrics. One page.
  • Product health dashboard: North Star + input metrics + guardrails. One page.
  • Experiment dashboard: Active experiments with results. Updated per experiment.

    Everything else goes into an ad hoc analysis tool for when you need to dig deeper.

    Mistake 6: Not Segmenting

    The problem: You look at aggregate metrics across all users. Everything looks fine. But enterprise users are churning while SMB users are growing. The aggregate number hides a critical problem.

    Instead: Segment metrics by plan type, company size, acquisition channel, geography, and user persona. The aggregate metric is the average of very different realities. The segments reveal which realities need attention.
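
    Segmenting is usually a one-line groupby once the segment attribute is on the record. The sketch below is illustrative Python with hypothetical segment and churned fields, showing how an aggregate rate can look healthy while one segment quietly deteriorates.

    import pandas as pd

    # Hypothetical customer records: 50 enterprise, 450 SMB.
    customers = pd.DataFrame({
        "segment": ["enterprise"] * 50 + ["smb"] * 450,
        "churned": [True] * 10 + [False] * 40 + [True] * 9 + [False] * 441,
    })

    print(f"Aggregate churn: {customers['churned'].mean():.1%}")  # ~3.8% -- looks fine
    print(customers.groupby("segment")["churned"].mean().sort_values(ascending=False))
    # enterprise ~20% vs smb ~2% -- the problem only shows up once you segment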

    Mistake 7: Changing Metrics Too Often

    The problem: Every quarter, the team picks new metrics to focus on. Last quarter it was activation. This quarter it is engagement. Next quarter it will be something else. No metric gets enough sustained attention to actually improve.

    Instead: Commit to a North Star and input metrics for at least 2-3 quarters. Change only when the strategy fundamentally changes or when you have clearly solved the problem the metric was tracking.


    Building a Metrics Culture

    Metrics are a cultural choice, not a technical one. You can install the best analytics tools and still have a team that ignores data.

    Principle 1: Start Every Conversation with Data

    When someone proposes a feature, ask: "What metric will this move, and by how much?" When someone reports a problem, ask: "How do we know this is a problem? What does the data show?"

    This is not about being difficult. It is about building the habit of evidence-based decision-making. Over time, the team will start asking these questions of themselves before bringing ideas to the group.

    Principle 2: Celebrate Learning, Not Just Winning

    When an experiment shows that a feature did not move the target metric, that is valuable information. Celebrate it. "We just saved ourselves 6 weeks of engineering effort by discovering that this approach does not work." Teams that punish negative experiment results stop running experiments — and start guessing.

    Principle 3: Make Metrics Visible

    Put your North Star metric and input metrics on a physical or virtual wall that the team sees every day. When the metric goes up, everyone knows. When it goes down, everyone knows. Visibility creates shared accountability.

    Principle 4: Empower the Team to Act

    If the team sees a metric declining but needs to go through three layers of approval to run an experiment, the metrics are decoration. Give product teams the authority to act on what they see in the data. Define a boundary ("you can run experiments that affect up to 10% of users without approval") and let the team move.

    Principle 5: Invest in Instrumentation

    Metrics are only as good as the data behind them. Invest in proper event tracking, data quality checks, and analytics tooling. A team that does not trust its data will not use its data. If you are starting from scratch, the product analytics setup guide covers how to define your event taxonomy and get useful data within two weeks. For a comparison of the major platforms, see best product analytics tools for 2026.


    Key Takeaways

  • Product metrics should measure outcomes (value delivered to users) not outputs (features shipped). The question is not "what did we build?" but "what impact did it have?"
  • Choose a metric framework that fits your product stage: AARRR for full-funnel visibility, HEART for user experience quality, North Star for organizational alignment. Most mature teams use elements of all three.
  • Build a metric tree that connects business outcomes to user behaviors to product actions. Every team member should be able to trace their work to its expected metric impact.
  • Track 3-5 primary metrics at any given time. For each metric you optimize, define a guardrail metric that should not degrade.
  • Leading metrics (feature adoption, activation rate) are more actionable than lagging metrics (revenue, churn). Focus daily work on leading indicators.
  • Set targets using baseline data, industry benchmarks, or bottoms-up modeling. A metric without a target is trivia.
  • Review metrics weekly (30-minute check), monthly (60-minute deep dive), and quarterly (2-hour reset). The cadence matters as much as the metrics themselves.
  • The biggest measurement mistake is tracking output instead of outcomes. The second biggest is not segmenting. The third is changing metrics too often.

    Next Steps:

  • Define your North Star metric using the North Star Finder
  • Build a metric tree connecting your North Star to 3-5 actionable input metrics
  • Set up a weekly 30-minute metric review with your product trio

    Related Guides:

  • Building a Product Experimentation Culture
  • Continuous Discovery Habits
  • User Research Methods

    About This Guide

    Last Updated: February 12, 2026

    Reading Time: 28 minutes

    Expertise Level: Intermediate to Advanced

    Citation: Adair, Tim. "The Complete Guide to Product Metrics: What to Measure and Why." IdeaPlan, 2026. https://ideaplan.io/guides/the-complete-guide-to-product-metrics

    Tim Adair

    Strategic executive leader and author of all content on IdeaPlan. Background in product management, organizational development, and AI product strategy.

    Frequently Asked Questions

    What is a North Star metric?
    A North Star metric is the single metric that best captures the core value your product delivers to users. It serves as a unifying measure that aligns the entire product team around a shared definition of success. For example, Spotify's North Star metric is 'time spent listening,' which directly reflects user value. A good North Star correlates with both user satisfaction and long-term business growth.
    What is the difference between a leading and a lagging metric?
    A lagging metric measures an outcome that has already happened (revenue, churn rate, NPS). A leading metric measures an activity or behavior that predicts a future outcome (feature adoption rate, onboarding completion, weekly active usage). Product teams should focus on leading metrics because they can be influenced directly, while lagging metrics can only be observed after the fact.
    How many metrics should a product team track?
    A product team should actively track 3-5 primary metrics at any given time. You may monitor 10-15 metrics in your dashboards for context, but the metrics you actively try to move in a given quarter should number no more than five. Tracking too many metrics creates diffusion of focus. Track one North Star, 2-3 leading input metrics that drive it, and 1-2 guardrail metrics to ensure you are not causing harm.