What This Template Is For
A feed ranking algorithm decides which items appear first when a user opens your product. It is often the most consequential piece of code in a content-driven application: the ranking directly controls engagement, retention, and user satisfaction. A bad feed buries relevant content under noise. A good feed surfaces the right thing at the right time without the user having to search for it.
This template provides a structured approach to designing feed ranking systems for SaaS products, internal tools, social platforms, and collaboration apps. It covers the core ranking formula, signal taxonomy, diversity constraints, recency decay functions, abuse countermeasures, and A/B testing methodology. If you need to design the content discovery layer that sits alongside the feed, the Content Discovery Template covers recommendation surfaces and cold-start strategies. For understanding the retrieval layer that feeds candidates into the ranker, the Search Ranking Template covers scoring and relevance models.
The Technical PM Handbook offers guidance on working with ML engineers to scope ranking projects and define success metrics that balance engagement with user wellbeing.
How to Use This Template
- Define the feed's purpose. Is this a social feed (maximize engagement), a work feed (maximize productivity), a news feed (maximize informed decisions), or a notification feed (maximize action completion)? The purpose shapes every ranking decision.
- List the candidate item types. A feed often blends multiple content types: posts, comments, updates, announcements, tasks, recommendations. Each type needs its own scoring considerations.
- Define the signals. Separate them into engagement signals (clicks, likes, comments), quality signals (author reputation, content completeness), relevance signals (topic match, relationship strength), and contextual signals (time of day, device, user state).
- Design the scoring function. Start simple (weighted linear combination of signals) and add complexity only when measurement shows diminishing returns.
- Set diversity and fairness constraints. Without constraints, the feed devolves into showing the same popular content types repeatedly. Define rules for type diversity, author diversity, and freshness.
- Plan the A/B testing framework. Every ranking change should be testable. Define your North Star metric, guardrail metrics, and minimum detectable effect before shipping experiments.
The Template
Feed Context
| Field | Details |
|---|---|
| Product | [Product name] |
| Feed Type | [Social / Work / News / Notification / Marketplace / Mixed] |
| Feed Purpose | [One sentence: what should the feed help users do?] |
| Item Types | [Posts, updates, tasks, recommendations, ads, etc.] |
| Avg Items/Day | [How many new items enter the feed per user per day] |
| Active Users | [DAU] |
| Owner | [PM name] |
| Date | [Date] |
Current state. [How is the feed currently ranked? Reverse chronological? Simple popularity? No ranking?]
Problems.
- [Problem 1: e.g., Users miss important updates buried under low-value posts]
- [Problem 2: e.g., Engagement concentrated on top 10% of content creators]
- [Problem 3: e.g., Feed feels stale within 2 hours of last visit]
Goals.
- [Goal 1: e.g., Increase daily feed sessions from X to Y]
- [Goal 2: e.g., Increase content breadth (unique authors engaged per session)]
- [Goal 3: e.g., Reduce time-to-first-meaningful-action to under X seconds]
Ranking Architecture
Pipeline stages.
1. Candidate Generation
Source pool: all items from followed entities, teams, topics
Window: last [N] hours or [N] items per source
Output: ~[N] candidates
2. Pre-Scoring Filter
Remove: blocked users, muted topics, already-seen items, policy violations
Output: ~[N] candidates
3. Scoring
Apply ranking formula to each candidate
Output: scored and sorted list
4. Post-Scoring Rules
Apply diversity constraints, deduplication, pinned items
Output: final ranked feed
5. Delivery
Paginate (page size [N])
Track impressions for feedback loop
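The five stages above can be sketched end to end. This is an illustrative skeleton, not a production design: the `Item` fields, stage signatures, and diversity rule are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Item:
    id: str
    author: str
    score: float = 0.0

def generate_candidates(sources):
    # Stage 1: pull recent items from every followed source.
    return [item for source in sources for item in source]

def pre_filter(candidates, blocked, seen):
    # Stage 2: drop blocked authors, already-seen items, etc.
    return [c for c in candidates if c.author not in blocked and c.id not in seen]

def score(candidates, scoring_fn):
    # Stage 3: score each candidate and sort descending.
    for c in candidates:
        c.score = scoring_fn(c)
    return sorted(candidates, key=lambda c: c.score, reverse=True)

def apply_rules(ranked, max_consecutive_per_author=2):
    # Stage 4: one example post-scoring rule -- demote items that
    # exceed a consecutive-author streak.
    kept, deferred = [], []
    streak, last_author = 0, None
    for c in ranked:
        streak = streak + 1 if c.author == last_author else 1
        last_author = c.author
        (kept if streak <= max_consecutive_per_author else deferred).append(c)
    return kept + deferred

def paginate(feed, page, page_size=20):
    # Stage 5: deliver one page; impression logging hooks in here.
    return feed[page * page_size:(page + 1) * page_size]
```

In a real system each stage is typically a separate service or module so candidate generation and scoring can scale independently.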
Signal Taxonomy
Engagement signals (how users interact):
| Signal | Type | Weight | Decay | Notes |
|---|---|---|---|---|
| Like / React | Explicit | [0.X] | None | Binary or multi-reaction |
| Comment | Explicit | [0.X] | None | Weighted by comment length |
| Share | Explicit | [0.X] | None | Strongest engagement signal |
| Save / Bookmark | Explicit | [0.X] | None | High intent signal |
| Click / Open | Implicit | [0.X] | 7 days | Must pair with dwell time |
| Dwell time | Implicit | [0.X] | 7 days | > [N] seconds = meaningful |
| Scroll past | Implicit (negative) | [-0.X] | 3 days | Item visible > 2s but no interaction |
| Hide / Not interested | Explicit (negative) | [-0.X] | 30 days | Strong negative signal |
Quality signals (content attributes):
| Signal | Type | Weight | Description |
|---|---|---|---|
| Author reputation | Computed | [0.X] | Based on historical engagement rates |
| Content completeness | Rule-based | [0.X] | Has image, > N words, has link, etc. |
| Originality | Computed | [0.X] | Not a duplicate or near-duplicate |
| Timeliness | Rule-based | [0.X] | Relates to recent events or deadlines |
Relevance signals (user-item fit):
| Signal | Type | Weight | Description |
|---|---|---|---|
| Topic match | Model | [0.X] | User's topic interests vs item topics |
| Author relationship | Graph | [0.X] | Followed, same team, interacted before |
| Collaborative filtering | Model | [0.X] | Users similar to you engaged with this |
| Contextual relevance | Rule | [0.X] | Matches user's current project, task, or role |
Scoring Function
Approach. [Choose one]
- ☐ Weighted linear: score = w1*engagement + w2*quality + w3*relevance + w4*recency
- ☐ Two-stage: pointwise ML model (logistic regression / gradient-boosted tree) for P(engagement), then apply business rules
- ☐ Multi-objective: separate models for P(click), P(like), P(comment), combined with scalarization weights
- ☐ Learning-to-rank: listwise model (LambdaMART) trained on user sessions
Formula (if weighted linear):
score = (engagement_score * 0.3)
+ (quality_score * 0.2)
+ (relevance_score * 0.3)
+ (recency_score * 0.2)
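The weighted linear formula above is a one-liner in practice. A minimal sketch, assuming each component signal has already been normalized to [0, 1] (the weights match the example formula; the function name is made up for illustration):

```python
# Weights from the example formula; tune via A/B tests.
WEIGHTS = {"engagement": 0.3, "quality": 0.2, "relevance": 0.3, "recency": 0.2}

def linear_score(signals: dict) -> float:
    """Weighted linear combination of pre-normalized [0, 1] signals."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)
```

Normalizing each signal before combining matters: if raw dwell time (seconds) is mixed with binary likes, the weights stop meaning anything.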
Recency decay function.
| Function | Formula | Use When |
|---|---|---|
| Linear | 1 - (age_hours / max_hours) | Even decay, simple |
| Exponential | e^(-lambda * age_hours) | Sharp drop-off, favors fresh content |
| Step | 1.0 if < 6h, 0.7 if < 24h, 0.4 if < 72h, 0.1 else | Discrete tiers, easy to tune |
| Log | 1 / (1 + log(1 + age_hours)) | Slow initial decay, gentle long tail |
Chosen function. [Which one and why. Include the specific parameter values.]
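The four decay functions in the table translate directly to code. A sketch with illustrative default parameters (max_hours, lambda, and the step tiers are the example values from the table, not recommendations):

```python
import math

def linear_decay(age_hours, max_hours=72):
    # Even decay to zero at max_hours.
    return max(0.0, 1 - age_hours / max_hours)

def exponential_decay(age_hours, lam=0.05):
    # Sharp drop-off; half-life = ln(2) / lam (~13.9h at lam = 0.05).
    return math.exp(-lam * age_hours)

def step_decay(age_hours):
    # Discrete tiers from the table; easy to tune by hand.
    if age_hours < 6:
        return 1.0
    if age_hours < 24:
        return 0.7
    if age_hours < 72:
        return 0.4
    return 0.1

def log_decay(age_hours):
    # Slow initial decay, gentle long tail.
    return 1 / (1 + math.log(1 + age_hours))
```

Plotting all four over 0-96 hours against real engagement-by-age data is a quick way to pick a starting point before tuning.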
Diversity and Fairness Rules
| Rule | Constraint | Why |
|---|---|---|
| Type diversity | No more than [N] consecutive items of same type | Prevent monoculture (all posts, no tasks) |
| Author diversity | Same author appears max [N] times in first [N] items | Prevent one person dominating the feed |
| Topic diversity | No more than [N]% of feed from same topic | Prevent topic tunneling |
| Freshness floor | At least [N]% of items published in last [N] hours | Prevent stale feed |
| New creator boost | Items from creators with < [N] posts get [N]x score boost | Give new voices visibility |
| Type representation | At least 1 item of each type in first [N] items | Ensure awareness of all content types |
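Diversity rules are usually applied as a re-ranking pass after scoring. As one example, the author-diversity rule ("same author appears max N times in first N items") can be sketched like this (dict-based items and parameter names are assumptions for the example):

```python
def enforce_author_cap(ranked, cap=2, window=20):
    """Defer items beyond the per-author cap within the first `window` slots.

    `ranked` is a score-sorted list of dicts with an "author" key.
    Deferred items keep their relative order after the window.
    """
    head, deferred, counts = [], [], {}
    for item in ranked:
        if len(head) >= window:
            deferred.append(item)
            continue
        author = item["author"]
        if counts.get(author, 0) < cap:
            counts[author] = counts.get(author, 0) + 1
            head.append(item)
        else:
            deferred.append(item)
    return head + deferred
```

The same pattern (count within a window, defer on violation) works for type diversity and topic caps; the freshness floor is the inverse, promoting recent items into the window instead.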
Pinned items.
- [System announcements: always position 1]
- [Admin pinned: positions 1-3, max 1 pinned at a time]
- [User pinned: not applicable / bookmarked items section]
Abuse and Gaming Prevention
| Attack Vector | Detection | Mitigation |
|---|---|---|
| Engagement farming | Anomalous like/comment velocity | Rate limit interactions; downweight items with suspicious engagement patterns |
| Follow/unfollow spam | High follow churn rate | Minimum follow duration before relationship affects ranking |
| Duplicate/near-duplicate | Content similarity hash | Deduplicate and keep highest-scoring version |
| Keyword stuffing | Excessive trending terms in content | Quality score penalty for keyword density above threshold |
| Coordinated activity | Cluster detection on engagement timing | Discount engagement from suspected coordinated groups |
| Self-engagement | Author interacting with own content | Exclude self-interactions from scoring |
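For the duplicate/near-duplicate row, a content similarity hash can be as sophisticated as MinHash or SimHash, but word-shingle Jaccard similarity illustrates the idea. A sketch under assumed thresholds (0.8 is arbitrary; production systems typically use locality-sensitive hashing to avoid the pairwise comparison):

```python
def shingles(text, n=3):
    # Overlapping n-word windows; crude but order-sensitive.
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def jaccard(a, b):
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def dedupe(items, threshold=0.8):
    """Keep the highest-scoring item in each near-duplicate cluster."""
    kept = []
    for item in sorted(items, key=lambda i: i["score"], reverse=True):
        if all(jaccard(item["text"], k["text"]) < threshold for k in kept):
            kept.append(item)
    return kept
```

This matches the mitigation in the table: deduplicate and keep the highest-scoring version.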
A/B Testing Framework
North Star metric. [The single metric that best captures feed health. e.g., "Meaningful interactions per DAU" or "Tasks completed from feed per week"]
Guardrail metrics (must not regress):
- [User satisfaction survey score]
- [DAU retention (D7, D28)]
- [Content creation rate (items per creator per week)]
- [Report/hide rate per 100 impressions]
Experiment methodology.
| Parameter | Value |
|---|---|
| Randomization unit | User |
| Traffic allocation | [5% / 10% / 50% control vs treatment] |
| Minimum experiment duration | [7 days / 14 days] |
| Statistical significance | [p < 0.05, power = 0.80] |
| Minimum detectable effect | [X% change in North Star] |
| Holdout group | [Permanent 5% on current production ranking for long-term comparison] |
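The minimum detectable effect row determines how many users each arm needs before the experiment can conclude. A back-of-envelope two-proportion sample-size estimate, assuming the conventional z values for alpha = 0.05 (two-sided) and power = 0.80 (for real experiments, use a proper power-analysis library rather than this approximation):

```python
import math

def sample_size_per_arm(baseline_rate, mde_relative, z_alpha=1.96, z_beta=0.84):
    """Approximate users per arm to detect a relative lift in a rate metric.

    baseline_rate: current conversion/engagement rate, e.g. 0.12
    mde_relative:  smallest relative change worth detecting, e.g. 0.05 (5%)
    """
    p = baseline_rate
    delta = p * mde_relative                # absolute effect size
    variance = 2 * p * (1 - p)              # pooled variance approximation
    return math.ceil((z_alpha + z_beta) ** 2 * variance / delta ** 2)
```

Detecting a 5% relative lift on a 12% engagement rate needs roughly 46,000 users per arm, which is why small products often need longer experiments or larger MDEs.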
Experiment backlog.
| # | Hypothesis | Treatment | Expected Impact |
|---|---|---|---|
| 1 | [Increasing recency weight will improve freshness perception] | [Recency weight 0.2 to 0.3] | [+5% daily sessions] |
| 2 | [Adding author diversity rule will broaden engagement] | [Max 2 per author in top 20] | [+10% unique authors engaged] |
| 3 | [ML ranker will outperform linear scoring] | [Gradient-boosted model vs linear] | [+8% meaningful interactions] |
Monitoring and Observability
| Metric | Dashboard | Alert Threshold |
|---|---|---|
| Ranking latency (P50, P95, P99) | [Dashboard name] | P95 > [N]ms |
| Items scored per request | [Dashboard name] | Avg < [N] |
| Diversity score (Shannon entropy) | [Dashboard name] | < [N] (feed is too homogeneous) |
| Impression-to-engagement rate | [Dashboard name] | < [N]% (feed quality drop) |
| Error rate | [Dashboard name] | > [N]% |
| Model feature freshness | [Dashboard name] | Feature > [N]h stale |
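The diversity-score row references Shannon entropy. One way to compute it, over whatever label you monitor per served feed (item type, author, topic); the alert threshold depends on your category count, since entropy maxes out at log2(k) for k evenly represented categories:

```python
import math
from collections import Counter

def shannon_entropy(labels):
    """Entropy (in bits) of the label distribution in one served feed.

    0 = fully homogeneous; log2(k) = perfectly even over k categories.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

Alerting on a drop in average per-feed entropy catches regressions where a ranking change quietly collapses the feed into one content type.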
Filled Example: B2B Team Collaboration Feed
Feed Context
| Field | Details |
|---|---|
| Product | TeamSync (B2B collaboration platform) |
| Feed Type | Work feed |
| Feed Purpose | Help team members stay informed on project activity and take action on items that need their attention |
| Item Types | Project updates, task mentions, document changes, comments, announcements, weekly digests |
| Avg Items/Day | 45 per user |
| Active Users | 12,000 DAU |
| Owner | David Kim, PM |
Current state. Reverse chronological feed. Users report missing important updates because they are buried under low-value automated notifications. 67% of feed items are "status changed" events that users ignore.
Goals.
- Reduce missed actionable items from 23% to under 5%
- Increase feed engagement rate from 12% to 25%
- Decrease time-to-action on task mentions from 4.2 hours to under 1 hour
Scoring Function
score = (action_urgency * 0.35)
+ (relationship_strength * 0.25)
+ (content_quality * 0.15)
+ (engagement_velocity * 0.10)
+ (recency * 0.15)
Where:
- action_urgency: 1.0 if user is @mentioned or assigned, 0.7 if direct team project, 0.3 if followed project, 0.1 if org-wide
- relationship_strength: based on collaboration frequency with author (messages, shared tasks, meetings)
- content_quality: human-authored > automated status change. Posts with context (> 50 words) scored higher
- engagement_velocity: how quickly others on the team are engaging with this item
- recency: exponential decay, lambda = 0.05 (half-life ~14 hours)
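Plugging the TeamSync weights into code, a fresh @mention from a close collaborator should clearly outrank a day-old automated status change. A sketch using the example's weights and lambda (the sample signal values are invented for illustration):

```python
import math

def teamsync_score(urgency, relationship, quality, velocity, age_hours):
    # lambda = 0.05 gives a half-life of roughly 14 hours.
    recency = math.exp(-0.05 * age_hours)
    return (urgency * 0.35 + relationship * 0.25 + quality * 0.15
            + velocity * 0.10 + recency * 0.15)
```

This kind of spot check (does the formula rank the items you intuitively expect first?) is worth doing on a handful of hand-picked items before any A/B test.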
Diversity Rules
- No more than 3 consecutive automated status updates
- At least 1 actionable item (mention, assignment, approval request) in first 5 items
- Announcements from leadership pinned for 48 hours
- Same project: max 5 of first 15 items (prevents one active project from flooding)
Common Mistakes to Avoid
- Starting with ML when rules would suffice. A well-tuned weighted linear formula with 5-8 signals outperforms a poorly trained ML model. Start simple, measure, and add complexity when the simple model plateaus. Use the RICE framework to prioritize which ranking improvements deliver the most impact.
- Optimizing for engagement without guardrails. Pure engagement optimization leads to addictive patterns that eventually hurt retention. Always pair engagement metrics with satisfaction surveys and negative-signal tracking (hides, unfollows, report rates).
- Ignoring feed freshness. A feed that shows the same items for 3 visits in a row feels dead. Users stop checking. Apply recency decay aggressively enough that the feed feels different every 4-6 hours.
- Not testing with a holdout. If you iterate on ranking without a permanent holdout group on the previous version, you lose the ability to measure cumulative long-term effects. Keep 5% of users on the baseline ranking indefinitely.
- Shipping without an A/B test. Every ranking change, no matter how small, should be tested. "We are pretty sure this is better" is not evidence. Run the experiment, wait for significance, then ship or revert.
Key Takeaways
- Start with a weighted linear scoring function before investing in ML models
- Define diversity rules to prevent content type and author concentration
- Use exponential recency decay to keep the feed feeling fresh
- Always A/B test ranking changes with a permanent holdout group
- Pair engagement metrics with satisfaction surveys and negative-signal tracking
About This Template
Created by: Tim Adair
Last Updated: 3/5/2026
Version: 1.0.0
License: Free for personal and commercial use
