Template · Free · ⏱️ 1-2 hours

Affinity Mapping Template for User Research

A structured affinity mapping template for organizing qualitative research data into meaningful clusters.

Last updated 2026-03-05


What This Template Is For

Product teams accumulate qualitative data from many sources: customer interviews, support tickets, usability tests, sales call notes, and feedback surveys. The problem is not collecting this data. The problem is making sense of it. Without a structured synthesis method, insights get lost in spreadsheets and Slack threads, and decisions default to whoever remembers the most vivid anecdote.

Affinity mapping (also called affinity diagramming or the KJ Method, after its inventor Jiro Kawakita) is a bottom-up clustering technique. You start with individual data points, written on cards or sticky notes, and silently group them by natural similarity. Themes emerge from the data rather than being imposed on it. This makes it one of the most reliable methods for surfacing patterns across messy qualitative inputs.

The technique is a core activity in product discovery. It pairs well with the Opportunity Solution Tree framework, which uses clustered insights as the starting point for identifying opportunity spaces. If you are running a broader discovery cycle, the Product Discovery Handbook covers how affinity mapping fits into the full research-to-decision pipeline.

For teams that want to validate their clustered themes quantitatively, the experiment hypothesis template provides a structure for turning affinity clusters into testable hypotheses.


How to Use This Template

  1. Gather your raw data. Pull observations, quotes, pain points, and feature requests from your research sources. Each data point should be a single observation written on one card (physical sticky note or digital equivalent).
  2. Aim for 40-100 data points. Fewer than 30 and the clusters will be obvious. More than 150 and the session will take too long. If you have more data, pre-filter to the most relevant subset.
  3. Invite 3-6 participants. Include people who collected the data (researchers, PMs, designers) and one or two people who did not (engineers, stakeholders). Fresh eyes help avoid confirmation bias.
  4. Block 60-90 minutes. The session has three phases: silent sorting (30 min), labeling (15 min), and discussion (15-30 min).
  5. Use a digital whiteboard for remote teams. Miro, FigJam, or MURAL all work. Physical sticky notes on a wall work best for co-located teams.
  6. Follow the silent sorting rule strictly. No talking during the sorting phase. This prevents anchoring, where one person's verbal framing pulls the group toward their interpretation.

The Template

Session Setup

| Field | Details |
| --- | --- |
| Research project | [Project name] |
| Date | [Date] |
| Facilitator | [Name] |
| Participants | [Names and roles] |
| Data sources | [e.g., 12 customer interviews, 45 support tickets, 3 usability tests] |
| Total data points | [Number] |
| Board tool | [Miro / FigJam / Physical wall] |

Pre-Session Preparation

  • Compile all raw data points into individual cards (one observation per card)
  • Write each card as a specific observation, not an interpretation (e.g., "User clicked back button 3 times looking for billing page" not "Navigation is confusing")
  • Include the source on each card (e.g., "Interview #4" or "Support ticket #892")
  • Remove exact duplicates but keep near-duplicates (similar observations from different sources validate each other)
  • Randomize the order of cards so they are not pre-grouped by source
  • Set up the board with all cards spread across the workspace
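The preparation steps above can be scripted for digital boards. This is a minimal sketch, assuming your raw data is a list of (source, observation) pairs; the sample observations are hypothetical, and the dedup rule matches the template's guidance: drop exact duplicates, keep near-duplicates from different sources.

```python
import random

# Hypothetical raw observations; in practice these come from your
# interview notes, support tickets, and usability test logs.
raw_points = [
    ("Interview #4", "User clicked back button 3 times looking for billing page"),
    ("Support ticket #892", "Customer asked where to change invoice email"),
    ("Interview #4", "User clicked back button 3 times looking for billing page"),  # exact duplicate
    ("Usability test #2", "Participant hesitated on the notification settings screen"),
]

# Remove exact duplicates (same source AND same text) but keep
# near-duplicates from different sources, since those corroborate each other.
seen = set()
cards = []
for source, observation in raw_points:
    key = (source, observation)
    if key not in seen:
        seen.add(key)
        cards.append({"source": source, "text": observation})

# Shuffle so cards are not pre-grouped by source when placed on the board.
random.shuffle(cards)

# Each printed line maps to one card on the board, with its source tag.
for card in cards:
    print(f'{card["text"]}  [{card["source"]}]')
```

From here, paste the output into your board tool or use its import feature to create one sticky note per line.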

Phase 1: Silent Sorting (30 minutes)

Goal: Cluster cards into groups based on natural similarity. No labels yet.

Rules:

  • No talking. This is the single most important rule. Silence prevents anchoring bias.
  • Anyone can move any card at any time
  • If two people disagree about where a card belongs, it gets moved back and forth. This is fine. After 2-3 moves, create a duplicate and put it in both groups.
  • Cards that do not fit anywhere form their own group. Do not force them into existing clusters.
  • Aim for 5-12 clusters. Fewer than 5 means your groups are too broad. More than 15 means you are splitting too finely.

Facilitator notes:

  • Start a visible timer (30 minutes)
  • Remind participants of the silent sorting rule
  • Watch for cards that keep moving between groups (these are boundary cases worth discussing later)
  • Call time at 30 minutes even if sorting feels incomplete

Phase 2: Labeling (15 minutes)

Goal: Name each cluster with a descriptive label that captures the shared theme.

  • Talking is now allowed
  • For each cluster, the group discusses what the cards have in common
  • Write a 3-8 word label for each cluster (e.g., "Onboarding friction for new admins" not just "Onboarding")
  • If a cluster is too large (15+ cards), consider splitting it into sub-clusters with more specific labels
  • If two clusters seem redundant, merge them under a single label

Cluster Labels:

| Cluster # | Label | Card Count | Priority (H/M/L) |
| --- | --- | --- | --- |
| 1 | [Label] | [Count] | [H/M/L] |
| 2 | [Label] | [Count] | [H/M/L] |
| 3 | [Label] | [Count] | [H/M/L] |
| 4 | [Label] | [Count] | [H/M/L] |
| 5 | [Label] | [Count] | [H/M/L] |
| 6 | [Label] | [Count] | [H/M/L] |
| 7 | [Label] | [Count] | [H/M/L] |
| 8 | [Label] | [Count] | [H/M/L] |

Phase 3: Discussion and Prioritization (15-30 minutes)

Goal: Identify which clusters represent the most significant opportunities or problems.

  • Review each cluster briefly. The facilitator reads the label and 2-3 representative cards.
  • Mark each cluster as High, Medium, or Low priority based on frequency (how many cards?), severity (how painful?), and breadth (how many user segments?)
  • Identify any surprising clusters (themes the team did not expect)
  • Note disagreements. Clusters where the team cannot agree on priority are signals that more data is needed.
  • Assign owners for the top 3 clusters to investigate further
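If you want the frequency/severity/breadth judgment to be more repeatable across sessions, you can sketch it as a simple score. The weights and H/M/L cutoffs below are illustrative assumptions, not part of the template; tune them to your team.

```python
# Minimal prioritization sketch: score = card count + weighted severity
# (1-5 scale) + weighted breadth (number of user segments affected).
# Weights and thresholds are assumptions for illustration only.
def priority(card_count, severity, segment_breadth):
    score = card_count + 2 * severity + 3 * segment_breadth
    if score >= 25:
        return "H"
    if score >= 15:
        return "M"
    return "L"

# Hypothetical clusters: (card count, severity 1-5, segments affected)
clusters = {
    "Onboarding friction for new admins": (14, 4, 3),
    "Billing page is hard to find": (4, 2, 1),
}
for label, (count, sev, breadth) in clusters.items():
    print(label, "->", priority(count, sev, breadth))
```

Treat the score as a conversation starter, not a verdict: a cluster the score ranks Low can still be High priority if the team knows something the three inputs do not capture.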

Discussion Notes:

| Cluster | Key Insight | Surprising? | Next Step | Owner |
| --- | --- | --- | --- | --- |
| [Label] | [What this cluster tells us] | Yes / No | [Action] | [Name] |
| [Label] | [What this cluster tells us] | Yes / No | [Action] | [Name] |
| [Label] | [What this cluster tells us] | Yes / No | [Action] | [Name] |

Output Summary

| Output | Details |
| --- | --- |
| Total clusters | [Number] |
| High-priority clusters | [List names] |
| Surprising findings | [List] |
| Data gaps identified | [Areas where more research is needed] |
| Next steps | [What happens with these clusters?] |

Filled Example: TaskFlow Onboarding Research Synthesis

Session Setup

| Field | Details |
| --- | --- |
| Research project | TaskFlow New User Onboarding |
| Date | March 5, 2026 |
| Facilitator | Maya Chen (UX Researcher) |
| Participants | Maya (Research), Raj (PM), Sonia (Design), Luke (Eng Lead), Kim (CS) |
| Data sources | 8 new-user interviews, 62 support tickets (first 30 days), 4 usability tests, Heap session recordings |
| Total data points | 87 |
| Board tool | Miro |

Cluster Results

| Cluster # | Label | Card Count | Priority |
| --- | --- | --- | --- |
| 1 | Team invitation flow is broken for SSO orgs | 14 | H |
| 2 | Users cannot find the project template gallery | 11 | H |
| 3 | First task creation has too many required fields | 9 | H |
| 4 | Notification settings overwhelm new users | 8 | M |
| 5 | Role-based permissions are unclear to admins | 7 | M |
| 6 | Mobile app onboarding skips key setup steps | 6 | M |
| 7 | Integration setup requires technical knowledge | 5 | L |
| 8 | Billing page is hard to find for trial users | 4 | L |

Key Decisions

The top three clusters (SSO invitation, template gallery, task creation) accounted for 34 of 87 data points (39%). The team decided to focus Q2 discovery on these three areas. Raj (PM) will write problem statements for each cluster and bring them to the next sprint planning session.

Key Takeaways

  • Start with 40-100 individual data points written as specific observations, not interpretations
  • Enforce silent sorting to prevent anchoring bias from the most vocal participant
  • Aim for 5-12 clusters. Fewer means groups are too broad, more means you are splitting too finely
  • Label clusters with descriptive 3-8 word phrases, not single words
  • Prioritize clusters by frequency, severity, and breadth to focus follow-up research

About This Template

Created by: Tim Adair

Last Updated: 2026-03-05

Version: 1.0.0

License: Free for personal and commercial use

Frequently Asked Questions

How is affinity mapping different from card sorting?
Card sorting is a technique for testing information architecture. Participants sort content labels into categories to reveal how users expect navigation to work. Affinity mapping is broader. It clusters any qualitative data (observations, quotes, pain points) to find patterns. The mechanics are similar (silent sorting of cards into groups), but the inputs and outputs are different.
Can I do affinity mapping alone?
You can, but the results will be weaker. Solo affinity mapping tends to produce clusters that confirm your existing assumptions. The value of the group exercise is that multiple perspectives create clusters you would not see alone. If you must work solo, write the cards one day and sort them the next to create some cognitive distance.
What if the team cannot agree on cluster labels?
Label disagreements usually mean the cluster is too broad and should be split. Try breaking the disputed cluster into two sub-clusters and labeling each one separately. If the team still disagrees, it often means the data is ambiguous and more research is needed for that theme.
How do I handle a data point that fits in two clusters?
Duplicate the card and place it in both clusters. This happens with 5-10% of cards and is normal. During the labeling phase, note which cards are duplicated. They often represent connections between themes that are worth exploring.
Should I combine affinity mapping with other discovery methods?
Yes. Affinity mapping is a synthesis technique, not a data collection method. Pair it with [customer interviews](/templates/customer-interview-template) and usability tests for data collection. Feed the high-priority clusters into an [Opportunity Solution Tree](/frameworks/opportunity-solution-tree) to map opportunities to solutions. The [discovery guide](/discovery-guide) covers the full process from research planning through solution validation.
