What This Template Is For
Churn is a trailing indicator. By the time a customer cancels, the decision was made weeks or months earlier. The goal of churn prevention is to detect the signals that precede cancellation and intervene before the customer has mentally checked out. Most teams wait until the renewal conversation to discover problems. By then, the customer has already evaluated alternatives, built a business case for switching, and started a migration plan.
This template helps you build a systematic approach to churn prevention: define early warning signals, score account risk, design intervention playbooks for each risk level, and conduct post-churn analysis to prevent the same failure pattern from repeating. For a deeper understanding of churn mechanics, see the churn rate metric definition and the customer retention rate metric.
The Product Analytics Handbook covers how to instrument the product usage signals that power churn prediction. If you are working on the broader retention strategy, the PLG Handbook has a full chapter on retention loops and habit formation.
How to Use This Template
- Start with the early warning signals section. List every signal that has historically preceded churn at your company. Pull from support tickets, exit interviews, and usage data.
- Build the risk scoring model. Assign weights based on how predictive each signal has been. Start simple with 3-5 signals and refine over time.
- Design intervention playbooks for each risk level. The response to a "yellow" account should be different from a "red" account.
- Document the escalation path. Who gets involved when a high-value account is at risk?
- After every churn event, complete the post-churn analysis. This is the feedback loop that makes the system smarter over time.
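The weighting step above can be grounded in historical data: for each candidate signal, measure how often it actually preceded churn. A minimal sketch, assuming a simple list of historical account records (field names and sample data are illustrative, not from any real system):

```python
# Hypothetical sketch: estimating how predictive each warning signal has been,
# from historical account records. Field names and data are illustrative.

def signal_reliability(accounts, signal_key):
    """Return (precision, coverage) for one warning signal.

    precision: of accounts where the signal fired, what fraction churned?
    coverage:  of accounts that churned, what fraction showed the signal?
    """
    fired = [a for a in accounts if a[signal_key]]
    churned = [a for a in accounts if a["churned"]]
    precision = sum(a["churned"] for a in fired) / len(fired) if fired else 0.0
    coverage = sum(a[signal_key] for a in churned) / len(churned) if churned else 0.0
    return precision, coverage

history = [
    {"usage_decline": True,  "champion_left": False, "churned": True},
    {"usage_decline": True,  "champion_left": True,  "churned": True},
    {"usage_decline": False, "champion_left": True,  "churned": False},
    {"usage_decline": False, "champion_left": False, "churned": False},
    {"usage_decline": True,  "champion_left": False, "churned": False},
]

for signal in ("usage_decline", "champion_left"):
    p, c = signal_reliability(history, signal)
    print(f"{signal}: precision={p:.2f}, coverage={c:.2f}")
```

Signals with high precision deserve heavier weights in the scoring model; signals with high coverage but low precision are better treated as prompts for a human look rather than automatic escalation.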
The Template
Early Warning Signals
| Signal | Source | Lead Time | Reliability |
|---|---|---|---|
| [Product usage decline: e.g., DAU drops >30% over 2 weeks] | [Analytics] | [30-60 days] | [High / Medium / Low] |
| [Feature adoption stall: e.g., stopped using core feature] | [Analytics] | [60-90 days] | [High / Medium / Low] |
| [Support ticket escalation: e.g., 3+ frustrated tickets in 30 days] | [Helpdesk] | [30-45 days] | [High / Medium / Low] |
| [Champion departure: e.g., primary contact left the company] | [CRM / LinkedIn] | [60-120 days] | [High / Medium / Low] |
| [Payment failure or billing dispute] | [Billing] | [14-30 days] | [High / Medium / Low] |
| [Competitor evaluation: e.g., spotted in competitor trial] | [Sales intel] | [30-60 days] | [High / Medium / Low] |
| [NPS/CSAT decline: e.g., dropped from Promoter to Passive] | [Survey] | [60-90 days] | [High / Medium / Low] |
| [Contract utilization drop: e.g., using <30% of purchased capacity] | [Billing + Analytics] | [90+ days] | [High / Medium / Low] |
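The first row of the table can be sketched as a simple analytics check. This is a hypothetical implementation assuming you can pull a daily DAU series per account; the threshold and window are the example values from the table, not fixed recommendations:

```python
# Hypothetical sketch: flagging the "product usage decline" signal from the
# table above (DAU drops >30% over 2 weeks). Thresholds and data shapes
# are illustrative.

def usage_decline_fired(daily_dau, threshold=0.30, window=14):
    """Compare mean DAU over the most recent `window` days against the
    preceding `window` days; fire if it dropped by more than `threshold`."""
    if len(daily_dau) < 2 * window:
        return False  # not enough history to evaluate
    recent = sum(daily_dau[-window:]) / window
    baseline = sum(daily_dau[-2 * window:-window]) / window
    if baseline == 0:
        return False  # avoid dividing by zero for dormant accounts
    return (baseline - recent) / baseline > threshold

# 14 days around 100 DAU, then 14 days around 60 DAU: a 40% drop.
dau = [100] * 14 + [60] * 14
print(usage_decline_fired(dau))  # 0.40 > 0.30, so the signal fires: True
```

Comparing window averages rather than single days keeps the signal from firing on holiday dips or one-day outages.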
Risk Scoring Model
| Risk Level | Score Range | Definition | Account Count | Action Required |
|---|---|---|---|---|
| Green | 80-100 | Healthy. Active usage, positive sentiment, no warning signals | [N accounts] | Standard CS cadence |
| Yellow | 50-79 | At risk. 1-2 warning signals present | [N accounts] | Proactive outreach within 7 days |
| Orange | 25-49 | High risk. 3+ warning signals or champion departure | [N accounts] | Intervention plan within 48 hours |
| Red | 0-24 | Critical. Active churn threat or competitor evaluation confirmed | [N accounts] | Executive escalation within 24 hours |
Score Calculation
| Signal | Weight | Green (3 pts) | Yellow (2 pts) | Orange (1 pt) | Red (0 pts) |
|---|---|---|---|---|---|
| Product usage trend | [X%] | Increasing | Stable | Declining | Inactive |
| Feature adoption | [X%] | Using 3+ features | Using 2 features | Using 1 feature | Minimal |
| Support sentiment | [X%] | Positive | Neutral | Frustrated | Threatening |
| NPS/CSAT | [X%] | 9-10 | 7-8 | 5-6 | 0-4 |
| Champion status | [X%] | Active and engaged | Responsive | Disengaged | Departed |
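The table above can be turned into a single 0-100 score: each signal earns 0-3 points, points are combined by weight, and the total maps to a risk band. A minimal sketch, with illustrative placeholder weights standing in for the [X%] cells:

```python
# Hypothetical sketch of the scoring model above. Weights are illustrative
# placeholders for the [X%] cells and must sum to 1.0.

WEIGHTS = {
    "usage_trend": 0.30,
    "feature_adoption": 0.20,
    "support_sentiment": 0.20,
    "nps": 0.15,
    "champion_status": 0.15,
}

def health_score(points):
    """points: dict of signal -> 0..3 (Red=0 ... Green=3).
    Returns 0-100: weighted average of points, scaled by the 3-point max."""
    total = sum(WEIGHTS[s] * p for s, p in points.items())
    return round(100 * total / 3)

def risk_band(score):
    """Map a 0-100 score to the bands in the Risk Scoring Model table."""
    if score >= 80:
        return "Green"
    if score >= 50:
        return "Yellow"
    if score >= 25:
        return "Orange"
    return "Red"

account = {
    "usage_trend": 1,        # declining
    "feature_adoption": 2,   # using 2 features
    "support_sentiment": 1,  # frustrated
    "nps": 2,                # passive (7-8)
    "champion_status": 0,    # departed
}
score = health_score(account)
print(score, risk_band(score))  # -> 40 Orange
```

Note that this example account lands in Orange both by score (40) and by the table's qualitative definition (multiple warning signals plus a departed champion), which is a useful sanity check when you calibrate your own weights.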
Intervention Playbooks
Yellow (At Risk): Proactive Outreach
- ☐ CSM reviews account health data and prepares talking points
- ☐ Schedule check-in call within 7 days (position as "value review," not "are you leaving?")
- ☐ Identify unused features that could address the underlying issue
- ☐ Share relevant best practices, case studies, or new features
- ☐ Document findings and update health score
- ☐ Schedule follow-up in 14 days
Orange (High Risk): Structured Intervention
- ☐ CSM + CS Director review account within 48 hours
- ☐ Prepare intervention plan: specific actions to address root cause
- ☐ Executive sponsor outreach (VP or Director level contact)
- ☐ Offer concessions if appropriate (training, implementation support, feature request prioritization)
- ☐ Create 30-day recovery plan with measurable milestones
- ☐ Weekly check-ins until status improves to Yellow or Green
Red (Critical): Save Attempt
- ☐ CS Director + VP CS review within 24 hours
- ☐ Executive-to-executive outreach (your C-level to their C-level)
- ☐ Root cause analysis: what failed and can it be fixed?
- ☐ Prepare retention offer (discount, contract restructure, dedicated support)
- ☐ If churn is confirmed, begin graceful offboarding (data export, transition support)
- ☐ Schedule post-churn analysis within 5 business days
At-Risk Account Tracker
| Account | ARR | Risk Level | Primary Signal | CSM | Intervention Started | Status |
|---|---|---|---|---|---|---|
| [Account 1] | $[Amount] | [Red/Orange/Yellow] | [Signal] | [Name] | [Date] | [Active / Resolved / Churned] |
| [Account 2] | $[Amount] | [Red/Orange/Yellow] | [Signal] | [Name] | [Date] | [Active / Resolved / Churned] |
| [Account 3] | $[Amount] | [Red/Orange/Yellow] | [Signal] | [Name] | [Date] | [Active / Resolved / Churned] |
Post-Churn Analysis
| Field | Details |
|---|---|
| Account | [Name] |
| ARR lost | $[Amount] |
| Customer tenure | [X months] |
| Primary churn reason | [Price / Product gap / Poor support / Champion left / Acquired / Went to competitor] |
| Secondary factors | [List] |
| Warning signals present | [Which signals fired and when?] |
| Intervention attempted? | [Yes/No. If yes, what was tried?] |
| Could this have been prevented? | [Yes/No. If yes, what should have been done differently?] |
| Systemic issue? | [Is this a one-off or a pattern? If pattern, what needs to change?] |
- ☐ Post-churn interview conducted (or exit survey sent)
- ☐ Findings shared with product team (if product gap)
- ☐ Findings shared with CS team (if process gap)
- ☐ Early warning signals updated based on this case
- ☐ Playbook updated if intervention was insufficient
Filled Example: B2B SaaS Analytics Platform
Early Warning Signals (Validated)
| Signal | Source | Lead Time | Reliability |
|---|---|---|---|
| DAU drops >40% over 3 weeks | Product analytics | 45 days | High |
| Zero new dashboards created in 30 days | Product analytics | 60 days | High |
| 3+ support tickets with negative sentiment in 30 days | Zendesk | 30 days | Medium |
| Primary champion changes roles or leaves | LinkedIn alerts + CRM | 90 days | High |
| Account asks for data export | Support tickets | 14 days | Very High |
Post-Churn Analysis Example
| Field | Details |
|---|---|
| Account | DataCorp Inc. |
| ARR lost | $48,000 |
| Customer tenure | 14 months |
| Primary churn reason | Product gap: no real-time streaming analytics. Switched to Mixpanel. |
| Warning signals present | DAU declined 52% over 6 weeks (detected). Champion posted about evaluating alternatives on LinkedIn (missed). |
| Intervention attempted? | Yes. CSM called at week 4 of decline. Customer said "we are evaluating options." Offered roadmap preview and 15% discount. |
| Could this have been prevented? | Partially. The product gap was real and on the roadmap for Q3. Earlier communication about the roadmap timeline may have bought 3 months. |
| Systemic issue? | Yes. Third customer lost to real-time analytics gap in 6 months. Escalated to product leadership for prioritization. |
Common Mistakes to Avoid
- Treating all churn the same. A customer who leaves because they were acquired is not the same as one who leaves because of a product gap. Categorize churn reasons and focus prevention efforts on the causes you can control.
- Relying on a single signal. No single metric predicts churn reliably. Use a weighted combination of 4-6 signals to reduce false positives and false negatives.
- Intervening too late. If your first churn prevention action is a discount offer during the renewal call, you have failed. Effective prevention starts 60-90 days before renewal.
- Not closing the feedback loop. Post-churn analysis is useless if the findings stay in a spreadsheet. Route product gaps to the product team and process gaps to CS leadership.
Key Takeaways
- Churn is a lagging indicator. The decision to leave happens 60-90 days before cancellation.
- Build early warning signals from product usage, support sentiment, champion status, and contract utilization.
- Design different intervention playbooks for each risk level. A yellow account needs a different response than a red account.
- Complete post-churn analysis for every lost account and route findings to the teams that can act on them.
- The most preventable churn comes from poor onboarding, not product gaps. Fix the first 30 days first.
About This Template
Created by: Tim Adair
Last Updated: 3/4/2026
Version: 1.0.0
License: Free for personal and commercial use
