What This Template Is For
Fraud costs the global payments industry over $30 billion annually, and that figure keeps climbing as transaction volumes grow. For product teams building or improving fraud detection, the core challenge is balancing protection against friction: block too aggressively and you lose legitimate customers; screen too leniently and you absorb chargebacks and reputational damage.
This template helps product managers document fraud detection system requirements in a structured way. It covers rule-based detection, machine learning models, alert workflows, false positive management, and reporting. Whether you are building a fraud system from scratch or specifying improvements to an existing one, this document ensures you cover the full detection lifecycle. Pair it with your transaction monitoring template for the operational rules layer and your risk assessment for broader product risk analysis.
To understand how AI and ML models fit into product requirements, the AI PM Handbook covers evaluation, deployment, and monitoring patterns that apply directly to fraud models.
How to Use This Template
- Copy the template into your documentation system.
- Start with the Fraud Landscape section to align your team on the types of fraud you are targeting.
- Document your detection rules and model requirements. Work with your data science team on the ML sections.
- Define alert workflows with your operations or trust-and-safety team. They handle the alerts daily.
- Set explicit targets for detection rate, false positive rate, and review latency. Without targets, you cannot measure success.
- Review the completed document with engineering, data science, operations, and legal before development starts.
The Template
System Overview
| Field | Details |
|---|---|
| System Name | [e.g., Fraud Detection Service v2] |
| Author | [PM name] |
| Date | [Date] |
| Status | Draft / In Review / Approved |
| Scope | [Which transactions, products, or user actions are in scope] |
| Stakeholders | [PM, Engineering, Data Science, Trust & Safety, Legal, Finance] |
Objective. [One sentence describing what the fraud detection system should achieve.]
Fraud Landscape
- ☐ Primary fraud types identified and prioritized
| Fraud Type | Prevalence | Current Impact | Detection Priority |
|---|---|---|---|
| Card-not-present (CNP) fraud | [High/Med/Low] | [$X/month in chargebacks] | [P0/P1/P2] |
| Account takeover (ATO) | [High/Med/Low] | [Impact description] | [P0/P1/P2] |
| Synthetic identity fraud | [High/Med/Low] | [Impact description] | [P0/P1/P2] |
| Friendly fraud (first-party) | [High/Med/Low] | [Impact description] | [P0/P1/P2] |
| Promo/coupon abuse | [High/Med/Low] | [Impact description] | [P0/P1/P2] |
| Money laundering | [High/Med/Low] | [Impact description] | [P0/P1/P2] |
| [Custom type] | [High/Med/Low] | [Impact description] | [P0/P1/P2] |
Detection Rules (Rule Engine)
- ☐ Rule categories defined (velocity, amount, geography, device, behavior)
- ☐ Each rule has a unique ID, description, trigger condition, and action
- ☐ Rule priority and conflict resolution logic documented
- ☐ Rule tuning process defined (who can modify, approval workflow, testing)
| Rule ID | Category | Trigger Condition | Action | Confidence |
|---|---|---|---|---|
| R-001 | Velocity | > [X] transactions in [Y] minutes from same card | Block + Alert | High |
| R-002 | Geography | Transaction country differs from account country | Flag for review | Medium |
| R-003 | Amount | Single transaction > [X] for new account (< 30 days) | Flag for review | Medium |
| R-004 | Device | New device + high-value transaction | Step-up auth | Medium |
| R-005 | Behavior | [Pattern description] | [Action] | [Confidence] |
| R-006 | [Category] | [Condition] | [Action] | [Level] |
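To make the rule rows concrete, here is a minimal sketch of the R-001 velocity check. It assumes transactions arrive with a card identifier and a timestamp; the threshold constants stand in for the template's [X]/[Y] placeholders and are illustrative, not recommendations:

```python
from collections import defaultdict, deque

# Illustrative thresholds; tune these per the rule-tuning process above.
MAX_TXNS = 5          # "[X] transactions"
WINDOW_SECONDS = 600  # "[Y] minutes" = 10 min

_recent = defaultdict(deque)  # card_id -> timestamps of recent transactions


def velocity_triggered(card_id: str, ts: float) -> bool:
    """Return True if this transaction pushes the card over the velocity limit."""
    window = _recent[card_id]
    window.append(ts)
    # Drop timestamps that have aged out of the sliding window.
    while window and ts - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_TXNS
```

A production rule engine would evaluate many such predicates against shared state (a cache or stream processor), but the shape is the same: a trigger condition over recent history that maps to an action from the table.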
Machine Learning Model Requirements
- ☐ Model objective defined (classification: fraudulent vs. legitimate)
- ☐ Training data requirements documented (volume, labeling, freshness)
- ☐ Feature set defined (transaction, user, device, behavioral features)
- ☐ Model performance targets set (precision, recall, AUC)
- ☐ Model retraining schedule defined
- ☐ Model monitoring and drift detection in place
- ☐ Explainability requirements defined (for analyst review and regulatory needs)
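One common way to satisfy the drift-detection checkbox is the Population Stability Index (PSI) over binned model scores. The sketch below assumes you have aligned bin proportions for a reference window and the current window; the 0.2 rule of thumb is a widely used convention, not a universal threshold:

```python
import math


def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned score distributions.

    Inputs are per-bin proportions (each summing to 1) over the same bins.
    Conventional reading: PSI > 0.2 suggests meaningful drift worth
    investigating before the next scheduled retrain.
    """
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0  # skip empty bins rather than divide by zero
    )
```

Identical distributions score 0; the further the live score distribution moves from the training-time reference, the larger the PSI.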
Model Performance Targets
| Metric | Target | Current Baseline |
|---|---|---|
| Precision (fraud class) | > [X]% | [Current value] |
| Recall (fraud class) | > [X]% | [Current value] |
| False positive rate | < [X]% | [Current value] |
| AUC-ROC | > [X] | [Current value] |
| Inference latency (P95) | < [X]ms | [Current value] |
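The first three targets in the table derive directly from confusion-matrix counts, which analysts' final decisions provide as labels. A small sketch, with illustrative counts rather than real baselines:

```python
def fraud_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Precision and recall on the fraud class, plus false positive rate.

    tp: flagged and actually fraud      fp: flagged but legitimate
    fn: missed fraud                    tn: correctly passed legitimate
    """
    return {
        "precision": tp / (tp + fp),               # of what we flag, how much is fraud
        "recall": tp / (tp + fn),                  # of all fraud, how much we catch
        "false_positive_rate": fp / (fp + tn),     # legitimate traffic we disrupt
    }


# Example with made-up counts from one review period:
m = fraud_metrics(tp=89, fp=11, fn=24, tn=9876)
```

Note that the false positive rate denominator is all legitimate transactions, so even a small percentage can mean a large absolute number of declined customers at volume.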
Feature Categories
| Category | Example Features | Source |
|---|---|---|
| Transaction | Amount, currency, merchant category, time of day | Payment events |
| User | Account age, historical transaction count, KYC level | User service |
| Device | Device fingerprint, IP geolocation, browser type | Device SDK |
| Behavioral | Session duration, navigation pattern, typing speed | Behavioral analytics |
| Network | Shared payment methods, shared devices, email domain | Graph analysis |
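In practice the categories above get flattened into a single feature row per transaction before scoring. The sketch below assumes hypothetical event schemas for the payment, user-service, and device-SDK records; the field names are placeholders to map onto your own sources:

```python
from datetime import datetime, timezone


def build_features(txn: dict, user: dict, device: dict) -> dict:
    """Flatten raw events into one feature row for model scoring.

    txn/user/device field names are illustrative, not a real schema.
    """
    ts = datetime.fromtimestamp(txn["timestamp"], tz=timezone.utc)
    return {
        # Transaction features (from payment events)
        "amount": txn["amount"],
        "hour_of_day": ts.hour,
        "merchant_category": txn["mcc"],
        # User features (from user service)
        "account_age_days": user["account_age_days"],
        "txn_count_90d": user["txn_count_90d"],
        # Device features (from device SDK)
        "is_new_device": device["fingerprint"] not in user["known_fingerprints"],
    }
```

Keeping this assembly in one place matters for the checklist items above: it is where feature freshness, missing-source fallbacks, and train/serve consistency get enforced.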
Alert Workflow
- ☐ Alert priority levels defined (critical, high, medium, low)
- ☐ Alert routing rules documented (auto-block vs. human review)
- ☐ SLA for alert review by priority level
- ☐ Analyst decision options documented (approve, block, escalate, investigate)
- ☐ Escalation path defined for complex or high-value cases
- ☐ Alert fatigue management strategy documented
Alert Priority Matrix
| Priority | Criteria | SLA | Auto-action |
|---|---|---|---|
| Critical | Confirmed ATO or > $[X] potential loss | [X] minutes | Block transaction |
| High | ML score > [X] and amount > $[Y] | [X] hours | Hold transaction |
| Medium | Rule trigger + ML score > [X] | [X] hours | Flag for next review cycle |
| Low | Single rule trigger, low ML score | [X] business days | Log only |
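The priority matrix reduces to a small routing function. This is a sketch with placeholder thresholds standing in for the matrix's [X]/[Y] values; real routing would also carry the auto-action and SLA alongside the priority:

```python
# Placeholder thresholds for the matrix's [X]/[Y] fields; tune before use.
CRITICAL_LOSS = 10_000          # "> $[X] potential loss"
HIGH_SCORE, HIGH_AMOUNT = 0.9, 1_000
MEDIUM_SCORE = 0.7


def alert_priority(ml_score: float, amount: float,
                   confirmed_ato: bool, rule_hits: int) -> str:
    """Map a scored transaction to a priority bucket, top row first."""
    if confirmed_ato or amount > CRITICAL_LOSS:
        return "critical"   # SLA: minutes; auto-action: block
    if ml_score > HIGH_SCORE and amount > HIGH_AMOUNT:
        return "high"       # SLA: hours; auto-action: hold
    if rule_hits >= 1 and ml_score > MEDIUM_SCORE:
        return "medium"     # next review cycle
    return "low"            # log only
```

Evaluating rows strictly top-down keeps the conflict-resolution logic trivial: the first matching row wins.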
False Positive Management
- ☐ False positive rate target defined and tracked
- ☐ Customer communication plan for false declines documented
- ☐ Appeal/unblock process for legitimate customers documented
- ☐ False positive feedback loop to rule engine and ML model defined
- ☐ False positive impact tracked (lost revenue, customer churn, NPS impact)
Reporting and Metrics
- ☐ Daily fraud metrics dashboard defined
- ☐ Monthly fraud report format and distribution list defined
- ☐ Regulatory reporting requirements documented (suspicious activity reports (SARs), etc.)
| Metric | Frequency | Owner | Target |
|---|---|---|---|
| Fraud detection rate | Daily | [Name] | > [X]% |
| False positive rate | Daily | [Name] | < [X]% |
| Mean time to review (critical alerts) | Daily | [Name] | < [X] min |
| Chargeback rate | Monthly | [Name] | < [X]% |
| Fraud loss as % of GMV | Monthly | [Name] | < [X]% |
| Rule effectiveness (precision per rule) | Monthly | [Name] | Review |
Filled Example: E-Commerce Payment Fraud Detection
System Overview
| Field | Details |
|---|---|
| System Name | FraudShield v3 |
| Author | Priya Mehta, Senior PM |
| Date | March 2026 |
| Status | In Review |
| Scope | All card-not-present transactions on web and mobile checkout |
| Stakeholders | PM, Payments Eng, ML Team, Trust & Safety (4 analysts), Legal |
Objective. Reduce chargeback rate from 0.42% to below 0.20% while keeping false positive rate under 3%.
Detection Rules (Excerpt)
| Rule ID | Category | Trigger | Action | Confidence |
|---|---|---|---|---|
| R-001 | Velocity | > 5 transactions in 10 min from same card | Block + alert (critical) | High |
| R-002 | Geography | Shipping country differs from billing country and card country | Flag for review | Medium |
| R-003 | Amount | First transaction > $500 for account < 7 days old | Step-up verification | Medium |
| R-004 | Device | Previously flagged device fingerprint | Block + alert (high) | High |
ML Model (Excerpt)
- ☑ Gradient-boosted tree model (XGBoost) trained on 18 months of labeled transaction data
- ☑ 47 features across transaction, user, device, and behavioral categories
- ☑ Current performance: Precision 89%, Recall 76%, AUC 0.94, P95 latency 12ms
- ☑ Retraining: weekly on rolling 6-month window
- ☑ SHAP values generated for top-10 features per decision for analyst explainability
- ☐ Graph-based features (shared payment networks) planned for v3.1 (Q3 2026)
Key Takeaways
- Combine rule-based detection for known patterns with ML models for evolving threats
- Set explicit targets for detection rate, false positive rate, and review SLA before building
- False positives cost money and trust. Track their impact on revenue and customer satisfaction
- Retrain ML models regularly. Fraud tactics change faster than most product teams expect
- Build feedback loops so analyst decisions improve both rules and models over time
About This Template
Created by: Tim Adair
Last Updated: 3/4/2026
Version: 1.0.0
License: Free for personal and commercial use
