
Feed Ranking Algorithm Design Template

A structured template for designing feed ranking algorithms. Covers scoring models, signal weighting, diversity rules, recency decay, abuse prevention, and A/B testing.

By Tim Adair • Last updated 2026-03-05


What This Template Is For

A feed ranking algorithm decides which items appear first when a user opens your product. It is the most consequential piece of code in any content-driven application. The ranking directly controls engagement, retention, and user satisfaction. A bad feed buries relevant content under noise. A good feed surfaces the right thing at the right time without the user having to search for it.

This template provides a structured approach to designing feed ranking systems for SaaS products, internal tools, social platforms, and collaboration apps. It covers the core ranking formula, signal taxonomy, diversity constraints, recency decay functions, abuse countermeasures, and A/B testing methodology. If you need to design the content discovery layer that sits alongside the feed, the Content Discovery Template covers recommendation surfaces and cold-start strategies. For understanding the retrieval layer that feeds candidates into the ranker, the Search Ranking Template covers scoring and relevance models.

The Technical PM Handbook offers guidance on working with ML engineers to scope ranking projects and define success metrics that balance engagement with user wellbeing.


How to Use This Template

  1. Define the feed's purpose. Is this a social feed (maximize engagement), a work feed (maximize productivity), a news feed (maximize informed decisions), or a notification feed (maximize action completion)? The purpose shapes every ranking decision.
  2. List the candidate item types. A feed often blends multiple content types: posts, comments, updates, announcements, tasks, recommendations. Each type needs its own scoring considerations.
  3. Define the signals. Separate them into engagement signals (clicks, likes, comments), quality signals (author reputation, content completeness), relevance signals (topic match, relationship strength), and contextual signals (time of day, device, user state).
  4. Design the scoring function. Start simple (weighted linear combination of signals) and add complexity only when measurement shows diminishing returns.
  5. Set diversity and fairness constraints. Without constraints, the feed devolves into showing the same popular content types repeatedly. Define rules for type diversity, author diversity, and freshness.
  6. Plan the A/B testing framework. Every ranking change should be testable. Define your North Star metric, guardrail metrics, and minimum detectable effect before shipping experiments.

The Template

Feed Context

| Field | Details |
| --- | --- |
| Product | [Product name] |
| Feed Type | [Social / Work / News / Notification / Marketplace / Mixed] |
| Feed Purpose | [One sentence: what should the feed help users do?] |
| Item Types | [Posts, updates, tasks, recommendations, ads, etc.] |
| Avg Items/Day | [How many new items enter the feed per user per day] |
| Active Users | [DAU] |
| Owner | [PM name] |
| Date | [Date] |

Current state. [How is the feed currently ranked? Reverse chronological? Simple popularity? No ranking?]

Problems.

  • [Problem 1: e.g., Users miss important updates buried under low-value posts]
  • [Problem 2: e.g., Engagement concentrated on top 10% of content creators]
  • [Problem 3: e.g., Feed feels stale within 2 hours of last visit]

Goals.

  • [Goal 1: e.g., Increase daily feed sessions from X to Y]
  • [Goal 2: e.g., Increase content breadth (unique authors engaged per session)]
  • [Goal 3: e.g., Reduce time-to-first-meaningful-action to under X seconds]

Ranking Architecture

Pipeline stages.

1. Candidate Generation
   Source pool: all items from followed entities, teams, topics
   Window: last [N] hours or [N] items per source
   Output: ~[N] candidates

2. Pre-Scoring Filter
   Remove: blocked users, muted topics, already-seen items, policy violations
   Output: ~[N] candidates

3. Scoring
   Apply ranking formula to each candidate
   Output: scored and sorted list

4. Post-Scoring Rules
   Apply diversity constraints, deduplication, pinned items
   Output: final ranked feed

5. Delivery
   Paginate (page size [N])
   Track impressions for feedback loop
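The five stages above can be sketched as a single pass over pre-fetched items. This is a hedged illustration, not a prescribed schema: the dictionary fields (`source`, `blocked`, `seen`, `pinned`) and the helper name `rank_feed` are assumptions for the example.

```python
def rank_feed(user, items, score_fn, page_size=20):
    # 1. Candidate generation: items from followed sources (assumed pre-fetched)
    candidates = [i for i in items if i["source"] in user["followed"]]
    # 2. Pre-scoring filter: drop blocked authors and already-seen items
    candidates = [i for i in candidates
                  if i["author"] not in user["blocked"]
                  and i["id"] not in user["seen"]]
    # 3. Scoring: apply the ranking formula and sort descending
    scored = sorted(candidates, key=score_fn, reverse=True)
    # 4. Post-scoring rules: pinned items first (diversity rules omitted here)
    pinned = [i for i in scored if i.get("pinned")]
    rest = [i for i in scored if not i.get("pinned")]
    # 5. Delivery: first page only; impression tracking omitted
    return (pinned + rest)[:page_size]
```

In production each stage would be a separate service with its own latency budget; collapsing them into one function is only for readability.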

Signal Taxonomy

Engagement signals (how users interact):

| Signal | Type | Weight | Decay | Notes |
| --- | --- | --- | --- | --- |
| Like / React | Explicit | [0.X] | None | Binary or multi-reaction |
| Comment | Explicit | [0.X] | None | Weighted by comment length |
| Share | Explicit | [0.X] | None | Strongest engagement signal |
| Save / Bookmark | Explicit | [0.X] | None | High intent signal |
| Click / Open | Implicit | [0.X] | 7 days | Must pair with dwell time |
| Dwell time | Implicit | [0.X] | 7 days | > [N] seconds = meaningful |
| Scroll past | Implicit (negative) | [-0.X] | 3 days | Item visible > 2s but no interaction |
| Hide / Not interested | Explicit (negative) | [-0.X] | 30 days | Strong negative signal |

Quality signals (content attributes):

| Signal | Type | Weight | Description |
| --- | --- | --- | --- |
| Author reputation | Computed | [0.X] | Based on historical engagement rates |
| Content completeness | Rule-based | [0.X] | Has image, > N words, has link, etc. |
| Originality | Computed | [0.X] | Not a duplicate or near-duplicate |
| Timeliness | Rule-based | [0.X] | Relates to recent events or deadlines |

Relevance signals (user-item fit):

| Signal | Type | Weight | Description |
| --- | --- | --- | --- |
| Topic match | Model | [0.X] | User's topic interests vs item topics |
| Author relationship | Graph | [0.X] | Followed, same team, interacted before |
| Collaborative filtering | Model | [0.X] | Users similar to you engaged with this |
| Contextual relevance | Rule | [0.X] | Matches user's current project, task, or role |

Scoring Function

Approach. [Choose one]

  • Weighted linear: score = w1*engagement + w2*quality + w3*relevance + w4*recency
  • Two-stage: pointwise ML model (logistic regression / gradient-boosted tree) for P(engagement), then apply business rules
  • Multi-objective: separate models for P(click), P(like), P(comment), combined with scalarization weights
  • Learning-to-rank: listwise model (LambdaMART) trained on user sessions

Formula (if weighted linear):

score = (engagement_score * 0.3)
      + (quality_score * 0.2)
      + (relevance_score * 0.3)
      + (recency_score * 0.2)
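A minimal sketch of this formula, assuming each signal score has already been normalized to [0, 1] upstream. The weights match the illustrative values above; `linear_score` is a hypothetical helper name.

```python
# Illustrative weights from the formula above; tune per product.
WEIGHTS = {
    "engagement": 0.3,
    "quality": 0.2,
    "relevance": 0.3,
    "recency": 0.2,
}

def linear_score(signals: dict) -> float:
    """Weighted sum of signal scores normalized to [0, 1]; missing signals count as 0."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

# e.g. a fresh, highly relevant item with modest engagement scores ~0.71
```

Because the weights sum to 1.0 and each signal is in [0, 1], the final score also lands in [0, 1], which keeps items comparable across sessions.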

Recency decay function.

| Function | Formula | Use When |
| --- | --- | --- |
| Linear | 1 - (age_hours / max_hours) | Even decay, simple |
| Exponential | e^(-lambda * age_hours) | Sharp drop-off, favors fresh content |
| Step | 1.0 if < 6h, 0.7 if < 24h, 0.4 if < 72h, 0.1 else | Discrete tiers, easy to tune |
| Log | 1 / (1 + log(1 + age_hours)) | Slow initial decay, gentle long tail |

Chosen function. [Which one and why. Include the specific parameter values.]
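The four decay options in the table translate directly into functions. The default parameters here (max_hours = 72, lam = 0.1) are placeholders to tune, not recommendations.

```python
import math

def linear_decay(age_hours, max_hours=72):
    # Even decay to 0 at max_hours, clamped so old items never go negative
    return max(0.0, 1 - age_hours / max_hours)

def exponential_decay(age_hours, lam=0.1):
    # Sharp drop-off; half-life is ln(2)/lam hours
    return math.exp(-lam * age_hours)

def step_decay(age_hours):
    # Discrete tiers from the table: easy to tune, easy to explain
    if age_hours < 6:
        return 1.0
    if age_hours < 24:
        return 0.7
    if age_hours < 72:
        return 0.4
    return 0.1

def log_decay(age_hours):
    # Slow initial decay with a gentle long tail
    return 1 / (1 + math.log(1 + age_hours))
```

Plotting all four over a 0-96 hour range against your engagement-by-item-age data is usually the fastest way to pick one.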


Diversity and Fairness Rules

| Rule | Constraint | Why |
| --- | --- | --- |
| Type diversity | No more than [N] consecutive items of same type | Prevent monoculture (all posts, no tasks) |
| Author diversity | Same author appears max [N] times in first [N] items | Prevent one person dominating the feed |
| Topic diversity | No more than [N]% of feed from same topic | Prevent topic tunneling |
| Freshness floor | At least [N]% of items published in last [N] hours | Prevent stale feed |
| New creator boost | Items from creators with < [N] posts get [N]x score boost | Give new voices visibility |
| Type representation | At least 1 item of each type in first [N] items | Ensure awareness of all content types |
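As one example of enforcing these rules, the author-diversity constraint can be applied as a greedy re-rank over the score-sorted list. This sketch simply pushes over-cap items below the window rather than reinserting them optimally; `enforce_author_cap` is a hypothetical helper.

```python
from collections import Counter

def enforce_author_cap(ranked, max_per_author=2, window=20):
    """Cap appearances per author inside the first `window` positions."""
    head, deferred, counts = [], [], Counter()
    for item in ranked:
        if len(head) < window and counts[item["author"]] >= max_per_author:
            # Over the cap inside the window: defer to the end of the feed
            deferred.append(item)
        else:
            head.append(item)
            counts[item["author"]] += 1
    return head + deferred
```

The other rules (type diversity, freshness floor) follow the same pattern: a post-scoring pass that demotes or promotes items without touching their underlying scores.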

Pinned items.

  • [System announcements: always position 1]
  • [Admin pinned: positions 1-3, max 1 pinned at a time]
  • [User pinned: not applicable / bookmarked items section]

Abuse and Gaming Prevention

| Attack Vector | Detection | Mitigation |
| --- | --- | --- |
| Engagement farming | Anomalous like/comment velocity | Rate limit interactions; downweight items with suspicious engagement patterns |
| Follow/unfollow spam | High follow churn rate | Minimum follow duration before relationship affects ranking |
| Duplicate/near-duplicate | Content similarity hash | Deduplicate and keep highest-scoring version |
| Keyword stuffing | Excessive trending terms in content | Quality score penalty for keyword density above threshold |
| Coordinated activity | Cluster detection on engagement timing | Discount engagement from suspected coordinated groups |
| Self-engagement | Author interacting with own content | Exclude self-interactions from scoring |
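For the duplicate/near-duplicate row, a word-shingle Jaccard check illustrates the underlying idea. Production systems typically use MinHash or SimHash at scale; the shingle size and the 0.8 threshold here are assumptions to calibrate.

```python
def shingles(text, k=3):
    """Set of k-word shingles from the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def near_duplicate(a, b, threshold=0.8):
    """True if the two texts share enough shingles (Jaccard similarity)."""
    sa, sb = shingles(a), shingles(b)
    union = sa | sb
    jaccard = len(sa & sb) / len(union) if union else 1.0
    return jaccard >= threshold
```

When a near-duplicate is found, keep the highest-scoring version (per the mitigation column) rather than the earliest, so the canonical item is the one users are most likely to engage with.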

A/B Testing Framework

North Star metric. [The single metric that best captures feed health. e.g., "Meaningful interactions per DAU" or "Tasks completed from feed per week"]

Guardrail metrics (must not regress):

  • [User satisfaction survey score]
  • [DAU retention (D7, D28)]
  • [Content creation rate (items per creator per week)]
  • [Report/hide rate per 100 impressions]

Experiment methodology.

| Parameter | Value |
| --- | --- |
| Randomization unit | User |
| Traffic allocation | [5% / 10% / 50% control vs treatment] |
| Minimum experiment duration | [7 days / 14 days] |
| Statistical significance | [p < 0.05, power = 0.80] |
| Minimum detectable effect | [X% change in North Star] |
| Holdout group | [Permanent 5% on current production ranking for long-term comparison] |
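To size an experiment against the minimum detectable effect, a rough per-arm estimate for a two-proportion test can use the normal approximation with the table's defaults (alpha = 0.05 gives z = 1.96, power = 0.80 gives z = 0.84). Treat this as a planning sketch, not a substitute for your experimentation platform's calculator.

```python
import math

def sample_size_per_arm(baseline_rate, mde_relative, z_alpha=1.96, z_beta=0.84):
    """Approximate users needed per arm to detect a relative lift in a rate metric."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_relative)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# e.g. detecting a 5% relative lift on a 12% engagement rate
n = sample_size_per_arm(0.12, 0.05)
```

Halving the MDE roughly quadruples the required sample, which is why small feeds often have to accept coarser experiments or longer durations.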

Experiment backlog.

| # | Hypothesis | Treatment | Expected Impact |
| --- | --- | --- | --- |
| 1 | [Increasing recency weight will improve freshness perception] | [Recency weight 0.2 to 0.3] | [+5% daily sessions] |
| 2 | [Adding author diversity rule will broaden engagement] | [Max 2 per author in top 20] | [+10% unique authors engaged] |
| 3 | [ML ranker will outperform linear scoring] | [Gradient-boosted model vs linear] | [+8% meaningful interactions] |

Monitoring and Observability

| Metric | Dashboard | Alert Threshold |
| --- | --- | --- |
| Ranking latency (P50, P95, P99) | [Dashboard name] | P95 > [N]ms |
| Items scored per request | [Dashboard name] | Avg < [N] |
| Diversity score (Shannon entropy) | [Dashboard name] | < [N] (feed is too homogeneous) |
| Impression-to-engagement rate | [Dashboard name] | < [N]% (feed quality drop) |
| Error rate | [Dashboard name] | > [N]% |
| Model feature freshness | [Dashboard name] | Feature > [N]h stale |
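The Shannon-entropy diversity score in the table can be computed over the item types actually delivered in a feed page; higher entropy means a more varied feed. The function name is illustrative.

```python
import math
from collections import Counter

def type_entropy(item_types):
    """Shannon entropy (bits) of the item-type distribution in one feed page."""
    counts = Counter(item_types)
    total = len(item_types)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A page of identical types has entropy 0; a 50/50 mix of two types has entropy 1.
```

Alerting on a low entropy floor catches the monoculture failure mode (e.g., a page of nothing but automated status updates) even when engagement metrics look healthy.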

Filled Example: B2B Team Collaboration Feed

Feed Context

| Field | Details |
| --- | --- |
| Product | TeamSync (B2B collaboration platform) |
| Feed Type | Work feed |
| Feed Purpose | Help team members stay informed on project activity and take action on items that need their attention |
| Item Types | Project updates, task mentions, document changes, comments, announcements, weekly digests |
| Avg Items/Day | 45 per user |
| Active Users | 12,000 DAU |
| Owner | David Kim, PM |

Current state. Reverse chronological feed. Users report missing important updates because they are buried under low-value automated notifications. 67% of feed items are "status changed" events that users ignore.

Goals.

  • Reduce missed actionable items from 23% to under 5%
  • Increase feed engagement rate from 12% to 25%
  • Decrease time-to-action on task mentions from 4.2 hours to under 1 hour

Scoring Function

score = (action_urgency * 0.35)
      + (relationship_strength * 0.25)
      + (content_quality * 0.15)
      + (engagement_velocity * 0.10)
      + (recency * 0.15)

Where:

  • action_urgency: 1.0 if user is @mentioned or assigned, 0.7 if direct team project, 0.3 if followed project, 0.1 if org-wide
  • relationship_strength: based on collaboration frequency with author (messages, shared tasks, meetings)
  • content_quality: human-authored > automated status change. Posts with context (> 50 words) scored higher
  • engagement_velocity: how quickly others on the team are engaging with this item
  • recency: exponential decay, lambda = 0.05 (half-life ~14 hours)
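Plugging illustrative values into the TeamSync formula shows how an @mention from a frequent collaborator ranks. Every input except the urgency tier (1.0 for @mentions) and the decay parameter (lambda = 0.05) is an assumed value for the example, not data from the template.

```python
import math

# An item where the user is @mentioned, from a frequent collaborator,
# human-authored with context, posted 14 hours ago (one half-life).
action_urgency = 1.0             # @mentioned tier from the definition above
relationship_strength = 0.8      # assumed: frequent collaborator
content_quality = 0.7            # assumed: human-authored, > 50 words
engagement_velocity = 0.5        # assumed: moderate team engagement
recency = math.exp(-0.05 * 14)   # exponential decay, roughly 0.50

score = (action_urgency * 0.35
         + relationship_strength * 0.25
         + content_quality * 0.15
         + engagement_velocity * 0.10
         + recency * 0.15)       # roughly 0.78
```

Even at one half-life of age, the @mention tier alone contributes 0.35, which is why actionable items stay near the top while automated status changes (urgency 0.1) sink quickly.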

Diversity Rules

  • No more than 3 consecutive automated status updates
  • At least 1 actionable item (mention, assignment, approval request) in first 5 items
  • Announcements from leadership pinned for 48 hours
  • Same project: max 5 of first 15 items (prevents one active project from flooding)

Common Mistakes to Avoid

  • Starting with ML when rules would suffice. A well-tuned weighted linear formula with 5-8 signals outperforms a poorly trained ML model. Start simple, measure, and add complexity when the simple model plateaus. Use the RICE framework to prioritize which ranking improvements deliver the most impact.
  • Optimizing for engagement without guardrails. Pure engagement optimization leads to addictive patterns that eventually hurt retention. Always pair engagement metrics with satisfaction surveys and negative-signal tracking (hides, unfollows, report rates).
  • Ignoring feed freshness. A feed that shows the same items for 3 visits in a row feels dead. Users stop checking. Apply recency decay aggressively enough that the feed feels different every 4-6 hours.
  • Not testing with a holdout. If you iterate on ranking without a permanent holdout group on the previous version, you lose the ability to measure cumulative long-term effects. Keep 5% of users on the baseline ranking indefinitely.
  • Shipping without an A/B test. Every ranking change, no matter how small, should be tested. "We are pretty sure this is better" is not evidence. Run the experiment, wait for significance, then ship or revert.

Key Takeaways

  • Start with a weighted linear scoring function before investing in ML models
  • Define diversity rules to prevent content type and author concentration
  • Use exponential recency decay to keep the feed feeling fresh
  • Always A/B test ranking changes with a permanent holdout group
  • Pair engagement metrics with satisfaction surveys and negative-signal tracking

About This Template

Created by: Tim Adair

Last Updated: 2026-03-05

Version: 1.0.0

License: Free for personal and commercial use

Frequently Asked Questions

Should I start with chronological or algorithmic ranking?
Start with chronological if your feed has fewer than 20 items per day per user. At that volume, users can scan everything, and algorithmic ranking adds confusion without benefit. Switch to algorithmic ranking when daily volume exceeds 20-30 items and users start missing important content. The transition should be gradual: offer a "Recent" toggle alongside the ranked view.
How do I handle the cold-start problem for new items?
Give new items a time-limited ranking boost (e.g., 2x score for the first 6 hours) to ensure they get enough impressions for the algorithm to gather engagement signals. Without this boost, new items from less popular creators never get seen and therefore never accumulate the engagement data needed to rank well.
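One way to implement that boost as a score multiplier; the linear ramp-down after the boost window is an assumption added here (the template only specifies the initial boost), included so items don't fall off a cliff at hour six.

```python
def cold_start_multiplier(age_hours, boost=2.0, boost_window=6.0, ramp=6.0):
    """Score multiplier for new items: full boost, then linear ramp back to 1x."""
    if age_hours < boost_window:
        return boost
    if age_hours < boost_window + ramp:
        # Linear ramp from `boost` down to 1.0 over `ramp` hours
        frac = (age_hours - boost_window) / ramp
        return boost - (boost - 1.0) * frac
    return 1.0
```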
What is the right balance between relevance and recency?
This depends on your feed type. For work feeds, recency should be weighted at 15-25% because actionable items have deadlines. For social feeds, relevance can dominate (40-50%) because the best content from yesterday is still worth seeing today. For news feeds, recency should be 30-40% because information value degrades quickly. Tracking analytics on engagement-by-item-age helps you calibrate.
How do I measure if the feed ranking is "good"?
Combine quantitative and qualitative signals. Quantitative: engagement rate, time-to-action, content breadth, DAU retention. Qualitative: user satisfaction surveys, support tickets about missing content, session replays. The strongest signal is D7/D28 retention: if users keep coming back, the feed is delivering value.
How often should I retrain a feed ranking model?
For rule-based or linear scoring, update weights monthly based on engagement data review. For ML models, retrain weekly or daily depending on data volume and feature freshness. Monitor model performance drift with a control group. If the model's live metrics drop below the control group, something has changed in user behavior and the model needs retraining.
