What This Template Is For
Content discovery is the system that helps users find things they did not know they were looking for. Search handles intent ("I want X"). Discovery handles serendipity ("Show me something interesting"). Both are essential, but discovery is harder to get right because the user has not told you what they want.
This template provides a structured approach to designing content discovery features for SaaS products, media platforms, marketplaces, and learning tools. It covers the four pillars of discovery: algorithmic recommendations, editorial curation, browse/explore patterns, and social signals. If you are building the underlying personalization engine itself, a separate template covers the data pipeline and model architecture in more depth; this one focuses on product-level feature design.
For teams evaluating build-vs-buy decisions on recommendation infrastructure, the Technical PM Handbook covers how to assess technical complexity alongside business value. Understanding embeddings will also help you communicate with engineers about the underlying similarity models.
How to Use This Template
- Start with the content inventory. List every content type your product surfaces and how users currently find each one. Identify the discovery gaps where users are unlikely to encounter relevant content.
- Choose which discovery surfaces to implement. Not every product needs a full recommendation feed. Some benefit more from improved browse/filter patterns or curated collections.
- Define the signals that drive recommendations. Explicit signals (saves, likes, ratings) are strong but sparse. Implicit signals (views, time spent, scroll depth) are noisy but abundant.
- Specify the cold-start strategy. Every user and every new piece of content starts with zero interaction data. Your system must still deliver relevant suggestions.
- Set quality metrics and guardrails. Discovery systems can create engagement traps (showing only what users already like) or diversity problems (surfacing the same popular items to everyone). Define what "good" looks like beyond click-through rate.
- Document the editorial layer. Even the best algorithm needs human curation for launches, seasonal content, and quality control.
The Template
Discovery Context
| Field | Details |
|---|---|
| Product | [Product name] |
| Content Types | [Articles, courses, videos, products, templates, etc.] |
| Catalog Size | [Number of discoverable items] |
| Active Users | [MAU or DAU] |
| Owner | [PM name] |
| Date | [Date] |
| Status | Draft / In Review / Approved / Shipped |
Current state. [How do users discover content today? What are the main paths: search, browse, direct links, email, social?]
Discovery gaps. [What content exists but users rarely find? Where do users drop off without engaging?]
Goals.
- [Goal 1: e.g., Increase content engagement breadth. Average user interacts with X more content types per month]
- [Goal 2: e.g., Reduce time-to-first-relevant-content for new users]
- [Goal 3: e.g., Surface long-tail content that gets < N views per month]
Discovery Surfaces
Define each surface where discovery happens in your product.
Surface 1: [Name, e.g., Home Feed]
| Property | Value |
|---|---|
| Location | [Where in the product: homepage, sidebar, post-action, email] |
| Trigger | [Page load, scroll, action completion, scheduled] |
| Content shown | [What types of content appear here] |
| Layout | [Grid, list, carousel, single card, etc.] |
| Items visible | [How many items shown without scrolling] |
| Refresh behavior | [On every visit, daily, pull-to-refresh, infinite scroll] |
Recommendation strategy.
- [Primary algorithm: collaborative filtering, content-based, popularity, editorial, hybrid]
- [Fallback for cold-start users: popular items, editorial picks, category-based defaults]
Personalization depth.
- ☐ Fully personalized per user
- ☐ Segment-based (user cohorts)
- ☐ Same for all users (editorial/popularity)
- ☐ Hybrid (personalized with editorial slots)
[Repeat for each discovery surface]
Signal Taxonomy
Define the signals your recommendation system uses.
Explicit signals (user-declared intent):
| Signal | Strength | Availability | Decay |
|---|---|---|---|
| Bookmark / Save | Strong | Sparse | None |
| Like / Upvote | Strong | Sparse | None |
| Rating (1-5) | Strong | Very sparse | None |
| Follow topic/author | Strong | Sparse | None |
| "Not interested" | Strong (negative) | Very sparse | None |
Implicit signals (inferred from behavior):
| Signal | Strength | Availability | Decay |
|---|---|---|---|
| View / Open | Medium | Abundant | 30 days |
| Time spent (dwell time) | Medium | Abundant | 30 days |
| Scroll depth | Medium | Abundant | 7 days |
| Share | Strong | Moderate | 90 days |
| Return visit to same content | Strong | Moderate | 60 days |
| Search query | Medium | Moderate | 14 days |
| Click-through from suggestion | Medium | Moderate | 30 days |
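One way to apply decay windows like these is to scale each signal's base weight linearly down to zero over its window. A minimal sketch, where both the weights and the windows are illustrative placeholders loosely mirroring the table above, not recommendations:

```python
# Illustrative base weights and decay windows (days); tune per product.
SIGNALS = {
    "view":   {"weight": 0.5, "decay_days": 30},
    "dwell":  {"weight": 0.5, "decay_days": 30},
    "scroll": {"weight": 0.5, "decay_days": 7},
    "share":  {"weight": 1.0, "decay_days": 90},
    "return": {"weight": 1.0, "decay_days": 60},
}

def signal_weight(signal_type: str, age_days: float) -> float:
    """Linearly decay a signal's base weight to zero over its window."""
    cfg = SIGNALS[signal_type]
    if age_days >= cfg["decay_days"]:
        return 0.0
    return cfg["weight"] * (1 - age_days / cfg["decay_days"])
```

Exponential decay (a half-life per signal) is an equally common choice; the important part is that the decay horizon per signal type is an explicit, documented parameter rather than an accident of the pipeline.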
Contextual signals:
| Signal | Use |
|---|---|
| Time of day | Surface shorter content during work hours, longer on evenings/weekends |
| Device type | Adjust layout and content length for mobile vs desktop |
| User tenure | New users get onboarding content, veterans get advanced content |
| Active project/workspace | Scope suggestions to current context |
Cold-Start Strategy
New user (no interaction history):
| Phase | Duration | Strategy |
|---|---|---|
| First visit | 0 interactions | [Onboarding quiz, popular by segment, editorial picks] |
| Early engagement | 1-5 interactions | [Content-based on first interactions, trending in category] |
| Building profile | 6-20 interactions | [Hybrid: content-based + early collaborative signals] |
| Established | 20+ interactions | [Full personalization: collaborative + content-based + contextual] |
New content (no interaction data):
| Phase | Duration | Strategy |
|---|---|---|
| Just published | 0-24 hours | [Boost to relevant segments based on metadata/tags, editorial slot] |
| Gathering signals | 1-7 days | [Blend metadata-based placement with early engagement metrics] |
| Established | 7+ days | [Standard algorithmic ranking based on accumulated signals] |
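The user-side phases above reduce to a simple threshold lookup. A sketch using the placeholder thresholds from the table (the boundary at exactly 20 interactions is ambiguous in the table and resolved here in favor of "established"):

```python
def user_phase(interaction_count: int) -> str:
    """Map a user's interaction count to a cold-start phase."""
    if interaction_count == 0:
        return "first_visit"   # onboarding quiz, popular by segment, editorial
    if interaction_count <= 5:
        return "early"         # content-based on first interactions
    if interaction_count < 20:
        return "building"      # hybrid: content-based + early collaborative
    return "established"       # full personalization
```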
Diversity and Quality Guardrails
| Guardrail | Rule | Why |
|---|---|---|
| Topic diversity | No more than [N]% of suggestions from same category | Prevents filter bubbles |
| Recency mix | At least [N]% of suggestions published in last [N] days | Surfaces fresh content |
| Author diversity | No more than [N] suggestions from same author | Prevents creator concentration |
| Popularity balance | At least [N]% of suggestions are long-tail (< N views) | Surfaces hidden gems |
| Repeat suppression | Do not show same item within [N] days of dismissal | Respects "not interested" |
| Content quality | Minimum quality score of [N] to be eligible | Filters low-quality items |
| Freshness penalty | Items older than [N] months get [N]% ranking penalty | Prevents stale recommendations |
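Caps like topic and author diversity are often enforced with a greedy re-ranking pass: walk the score-ordered candidates and skip any item whose category has already hit its cap. A minimal sketch, assuming candidates are dicts with a `category` field (an assumption, not a prescribed schema):

```python
def rerank_with_category_cap(candidates, slots, max_per_category):
    """Fill `slots` from score-ordered `candidates`, skipping items
    whose category has already hit `max_per_category`."""
    counts, picked = {}, []
    for item in candidates:  # assumed sorted by score, descending
        cat = item["category"]
        if counts.get(cat, 0) >= max_per_category:
            continue  # cap reached; let a lower-scored category through
        picked.append(item)
        counts[cat] = counts.get(cat, 0) + 1
        if len(picked) == slots:
            break
    return picked
```

The same pass can absorb the other guardrails as extra skip conditions: author caps, repeat suppression against a recent-dismissal set, or a minimum quality score.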
Editorial Curation Layer
| Feature | Description |
|---|---|
| Pinned collections | [Curated sets that appear at fixed positions: "Staff Picks", "Getting Started", seasonal] |
| Featured slot | [N] editorial slots per discovery surface, manually controlled |
| Suppression list | Ability to remove specific items from recommendations globally |
| Boost/bury | Multiplier to increase or decrease an item's ranking score |
| Campaign slots | Time-bound promotions (product launches, events, partnerships) |
| Curation schedule | [How often editorial picks are refreshed: daily, weekly, per launch] |
Curation tools required.
- ☐ Admin UI for managing featured content
- ☐ Scheduling system for time-bound promotions
- ☐ Suppression workflow for flagged/outdated content
- ☐ Analytics on editorial vs algorithmic performance
Measurement Framework
Primary metrics:
| Metric | Definition | Target |
|---|---|---|
| Discovery CTR | Clicks on recommended items / Impressions | [X%] |
| Content breadth | Unique content types engaged per user per month | [N types] |
| Long-tail reach | % of catalog with at least [N] views per month | [X%] |
| Time to first engagement | Median time from login to first content interaction | [N seconds] |
Quality metrics:
| Metric | Definition | Target |
|---|---|---|
| Recommendation satisfaction | Periodic survey: "How useful are these recommendations?" (1-5) | [4+] |
| Diversity score | Shannon entropy of category distribution in impressions | [> N] |
| Serendipity score | % of engaged content from categories user has not engaged before | [X%] |
| Negative feedback rate | "Not interested" / "Hide" actions per 100 impressions | [< N] |
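The diversity score row assumes Shannon entropy of the category distribution across impressions. As a reference sketch, computed in bits:

```python
import math
from collections import Counter

def diversity_score(impression_categories):
    """Shannon entropy (bits) of the category distribution over a list
    of impressions. 0 = everything in one category; log2(k) = uniform
    spread across k categories."""
    counts = Counter(impression_categories)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A reasonable target depends on catalog shape: a product with 8 top-level categories maxes out at 3 bits, so "[> N]" should be set relative to that ceiling, not in the abstract.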
Engagement guardrails (what to watch for):
- ☐ CTR increasing but time-spent decreasing (clickbait pattern)
- ☐ Same items appearing for > 50% of users (popularity bias)
- ☐ New content getting < 10% of impressions (cold-start failure)
- ☐ User satisfaction declining while engagement metrics are flat
Monitor these metrics alongside your search quality metrics in a shared analytics dashboard for a complete picture of how users find content.
Technical Requirements
| Requirement | Specification |
|---|---|
| Latency | Recommendations returned within [N]ms |
| Throughput | [N] recommendation requests per second at peak |
| Freshness | New content eligible for recommendations within [N] minutes |
| Model retraining | [Hourly / Daily / Weekly] |
| A/B testing | [Framework and allocation strategy] |
| Fallback | [What to show if recommendation service is unavailable] |
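The fallback row is worth designing before launch. One common shape is a chain that degrades from personalized to popular to editorial, so a surface never renders empty when the recommendation service is down. A sketch where the fetcher callables are hypothetical stand-ins for your own services:

```python
def recommendations_with_fallback(user_id, fetch_personalized,
                                  fetch_popular, editorial_picks):
    """Try the personalization service, fall back to a popularity list,
    and finally to a static editorial list that is always available."""
    for fetch in (lambda: fetch_personalized(user_id), fetch_popular):
        try:
            recs = fetch()
            if recs:
                return recs
        except Exception:
            continue  # service down or timed out; degrade gracefully
    return editorial_picks
```

In practice the editorial list should be cached client-side or at the edge, since it is the layer that must survive a total backend outage.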
Filled Example: Learning Platform Content Discovery
Discovery Context
| Field | Details |
|---|---|
| Product | TechLearn (online learning platform) |
| Content Types | Courses (420), tutorials (3,200), articles (8,100), videos (1,800), quizzes (950) |
| Catalog Size | 14,470 items |
| Active Users | 85,000 MAU |
| Owner | James Park, PM |
| Date | March 2026 |
| Status | In Review |
Current state. Users find content through category browse (42%), search (31%), direct links from email (18%), and social shares (9%). There is no personalized recommendation surface. The homepage shows the same featured courses to everyone.
Discovery gaps. 62% of tutorials have fewer than 50 views per month. New articles get less than 10% of the traffic of new courses despite high completion rates. Users who complete a course rarely discover related tutorials that deepen that topic.
Goals.
- Increase average content types engaged per user from 1.3 to 2.5 per month
- Surface tutorials and articles alongside courses (increase tutorial views by 40%)
- Reduce time to first relevant content for new users from 2.1 minutes to under 45 seconds
Discovery Surfaces
Surface 1: Personalized Home Feed
| Property | Value |
|---|---|
| Location | Homepage, above the fold |
| Trigger | Page load |
| Content shown | All types (courses, tutorials, articles, videos) |
| Layout | Horizontal carousels grouped by theme |
| Items visible | 4 per carousel, 3 carousels visible |
| Refresh behavior | New on every visit |
Recommendation strategy. Hybrid: collaborative filtering (what users like you also engaged with) for established users; content-based (topic similarity to completed items) for early users; editorial picks for brand-new users.
Surface 2: Post-Completion Suggestions
| Property | Value |
|---|---|
| Location | Course/tutorial completion modal |
| Trigger | Content completion event |
| Content shown | Next-step content: deeper tutorials, related quizzes, adjacent topics |
| Layout | 3-card horizontal layout |
| Items visible | 3 |
| Refresh behavior | Generated per completion |
Recommendation strategy. Sequence-based: items most commonly completed after this one. Weighted toward content types the user has not tried yet.
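"Items most commonly completed after this one" can be approximated from completion logs with a simple next-item co-occurrence count. A sketch under the assumption that logs yield one ordered list of item ids per user (function names are illustrative):

```python
from collections import Counter, defaultdict

def build_next_item_counts(completion_sequences):
    """For each item, count which items users completed immediately after it."""
    following = defaultdict(Counter)
    for seq in completion_sequences:
        for current, nxt in zip(seq, seq[1:]):
            following[current][nxt] += 1
    return following

def suggest_after(item_id, following, k=3):
    """Top-k most common follow-ups for a just-completed item."""
    return [item for item, _ in following[item_id].most_common(k)]
```

The weighting toward untried content types can then be a post-filter or score multiplier applied to this top-k list before rendering the 3-card layout.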
Cold-Start Strategy
New user.
- First visit: 3-question onboarding quiz (role, experience level, goals). Map answers to pre-built learning paths.
- First 5 interactions: Boost content from categories selected during onboarding. Mix in trending items.
- 20+ interactions: Full collaborative filtering with content-based fallback.
New content.
- Published: Auto-assigned to relevant topic clusters based on tags and description similarity.
- First 48 hours: 10% boost in recommendation ranking for topic-matched users.
- 7+ days: Standard algorithmic ranking.
Common Mistakes to Avoid
- Optimizing only for clicks. High CTR can mean clickbait. Measure engagement depth (time spent, completion rate, return visits) alongside click-through to get a true picture of recommendation quality.
- Ignoring the cold-start problem. If your first experience for new users is "we do not know what to show you," you lose them. Always have a non-personalized fallback: editorial picks, popular items, or an onboarding quiz.
- Treating all content types equally. A 2-minute article and a 20-hour course serve different needs. Discovery surfaces should consider content weight and user intent, not just relevance scores.
- No editorial override. Algorithms make mistakes. You need the ability to pin, boost, suppress, and remove content from recommendations. Build curation tools alongside the algorithm, not after launch.
- Measuring recommendation performance in isolation. Discovery, search, browse, and direct links all interact. Use the Search Relevance Template to define quality metrics for search, then measure discovery alongside it to understand the full content access picture.
Key Takeaways
- Discovery complements search by surfacing content users did not know they wanted
- Build a cold-start strategy for both new users and new content before launch
- Set diversity guardrails to prevent filter bubbles and popularity bias
- Measure content breadth and long-tail reach alongside click-through rate
- Start with one high-traffic surface and expand once the pipeline is proven
About This Template
Created by: Tim Adair
Last Updated: 3/5/2026
Version: 1.0.0
License: Free for personal and commercial use
