What This Template Is For
Product teams accumulate qualitative data from many sources: customer interviews, support tickets, usability tests, sales call notes, and feedback surveys. The problem is not collecting this data. The problem is making sense of it. Without a structured synthesis method, insights get lost in spreadsheets and Slack threads, and decisions default to whoever remembers the most vivid anecdote.
Affinity mapping (also called affinity diagramming or the KJ Method, after its inventor Jiro Kawakita) is a bottom-up clustering technique. You start with individual data points, written on cards or sticky notes, and silently group them by natural similarity. Themes emerge from the data rather than being imposed on it. This makes it one of the most reliable methods for surfacing patterns across messy qualitative inputs.
The technique is a core activity in product discovery. It pairs well with the Opportunity Solution Tree framework, which uses clustered insights as the starting point for identifying opportunity spaces. If you are running a broader discovery cycle, the Product Discovery Handbook covers how affinity mapping fits into the full research-to-decision pipeline.
For teams that want to validate their clustered themes quantitatively, the experiment hypothesis template provides a structure for turning affinity clusters into testable hypotheses.
How to Use This Template
- Gather your raw data. Pull observations, quotes, pain points, and feature requests from your research sources. Each data point should be a single observation written on one card (physical sticky note or digital equivalent).
- Aim for 40-100 data points. Fewer than 30 and the clusters will be obvious. More than 150 and the session will take too long. If you have more data, pre-filter to the most relevant subset.
- Invite 3-6 participants. Include people who collected the data (researchers, PMs, designers) and one or two people who did not (engineers, stakeholders). Fresh eyes help avoid confirmation bias.
- Block 60-90 minutes. The session has three phases: silent sorting (30 min), labeling (15 min), and discussion (15-30 min); the remaining time is buffer for setup and wrap-up.
- Use a digital whiteboard for remote teams. Miro, FigJam, or MURAL all work. Physical sticky notes on a wall work best for co-located teams.
- Follow the silent sorting rule strictly. No talking during the sorting phase. This prevents anchoring, where one person's verbal framing pulls the group toward their interpretation.
The Template
Session Setup
| Field | Details |
|---|---|
| Research project | [Project name] |
| Date | [Date] |
| Facilitator | [Name] |
| Participants | [Names and roles] |
| Data sources | [e.g., 12 customer interviews, 45 support tickets, 3 usability tests] |
| Total data points | [Number] |
| Board tool | [Miro / FigJam / Physical wall] |
Pre-Session Preparation
- ☐ Compile all raw data points into individual cards (one observation per card)
- ☐ Write each card as a specific observation, not an interpretation (e.g., "User clicked back button 3 times looking for billing page" not "Navigation is confusing")
- ☐ Include the source on each card (e.g., "Interview #4" or "Support ticket #892")
- ☐ Remove exact duplicates but keep near-duplicates (similar observations from different sources validate each other)
- ☐ Randomize the order of cards so they are not pre-grouped by source (this step and the deduplication step are easy to script; see the sketch after this checklist)
- ☐ Set up the board with all cards spread across the workspace
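If your raw data lives in a spreadsheet export, the deduplication and shuffling steps are straightforward to script. The sketch below is a minimal example, not part of the method itself; the file name raw_observations.csv and the observation/source column names are assumptions, so adapt them to whatever your research tool exports.

```python
import csv
import random

# Minimal card-prep sketch. Assumes a CSV export named "raw_observations.csv"
# with "observation" and "source" columns -- both names are placeholders.
with open("raw_observations.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Drop exact duplicates only (same text AND same source, ignoring case and
# surrounding whitespace). Near-duplicates from different sources are kept
# because they corroborate each other.
seen = set()
cards = []
for row in rows:
    key = (row["observation"].strip().lower(), row["source"].strip().lower())
    if key not in seen:
        seen.add(key)
        cards.append(row)

# Shuffle so cards land on the board in random order, not grouped by source.
random.shuffle(cards)

# One card per line: the observation plus its source tag.
with open("cards.txt", "w", encoding="utf-8") as f:
    for row in cards:
        f.write(f"{row['observation']} ({row['source']})\n")

print(f"{len(cards)} cards ready from {len(rows)} raw observations")
```

Most board tools can turn a pasted block of lines into individual sticky notes; check your tool's bulk-import option before the session so you are not pasting 87 cards by hand.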
Phase 1: Silent Sorting (30 minutes)
Goal: Cluster cards into groups based on natural similarity. No labels yet.
Rules:
- No talking. This is the single most important rule. Silence prevents anchoring bias.
- Anyone can move any card at any time
- If two people disagree about where a card belongs, it gets moved back and forth. This is fine. After 2-3 moves, create a duplicate and put it in both groups.
- Cards that do not fit anywhere form their own group. Do not force them into existing clusters.
- Aim for 5-12 clusters. Fewer than 5 means your groups are too broad. More than 15 means you are splitting too finely.
Facilitator notes:
- ☐ Start a visible timer (30 minutes)
- ☐ Remind participants of the silent sorting rule
- ☐ Watch for cards that keep moving between groups (these are boundary cases worth discussing later)
- ☐ Call time at 30 minutes even if sorting feels incomplete
Phase 2: Labeling (15 minutes)
Goal: Name each cluster with a descriptive label that captures the shared theme.
- ☐ Talking is now allowed
- ☐ For each cluster, the group discusses what the cards have in common
- ☐ Write a 3-8 word label for each cluster (e.g., "Onboarding friction for new admins" not just "Onboarding")
- ☐ If a cluster is too large (15+ cards), consider splitting it into sub-clusters with more specific labels
- ☐ If two clusters seem redundant, merge them under a single label
Cluster Labels:
| Cluster # | Label | Card Count | Priority (H/M/L) |
|---|---|---|---|
| 1 | [Label] | [Count] | |
| 2 | [Label] | [Count] | |
| 3 | [Label] | [Count] | |
| 4 | [Label] | [Count] | |
| 5 | [Label] | [Count] | |
| 6 | [Label] | [Count] | |
| 7 | [Label] | [Count] | |
| 8 | [Label] | [Count] | |
Phase 3: Discussion and Prioritization (15-30 minutes)
Goal: Identify which clusters represent the most significant opportunities or problems.
- ☐ Review each cluster briefly. The facilitator reads the label and 2-3 representative cards.
- ☐ Mark each cluster as High, Medium, or Low priority based on frequency (how many cards?), severity (how painful?), and breadth (how many user segments?); a rough scoring sketch follows this checklist
- ☐ Identify any surprising clusters (themes the team did not expect)
- ☐ Note disagreements. Clusters where the team cannot agree on priority are signals that more data is needed.
- ☐ Assign owners for the top 3 clusters to investigate further
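If the group stalls on an H/M/L call, a rough numeric score can serve as a tiebreaker. The sketch below is one hypothetical way to weight the three criteria; the 1-3 severity and breadth scales, the weights, and the band cutoffs are all assumptions to tune, not part of the method. The labels and card counts come from the filled example later in this template; the severity and breadth scores are illustrative.

```python
# A rough tiebreaker for H/M/L priority, not a substitute for discussion.
# Severity and breadth are scored 1-3 by the group during Phase 3; frequency
# is the card count normalized against the largest cluster. Weights are
# assumptions -- adjust them to your team's judgment.

clusters = [
    # (label, card_count, severity 1-3, breadth 1-3)
    ("Team invitation flow is broken for SSO orgs", 14, 3, 2),
    ("Users cannot find the project template gallery", 11, 2, 3),
    ("Billing page is hard to find for trial users", 4, 1, 1),
]

max_count = max(count for _, count, _, _ in clusters)

def score(count: int, severity: int, breadth: int) -> float:
    frequency = count / max_count  # 0-1, relative to the largest cluster
    return 0.4 * frequency + 0.4 * (severity / 3) + 0.2 * (breadth / 3)

for label, count, severity, breadth in sorted(
    clusters, key=lambda c: -score(c[1], c[2], c[3])
):
    s = score(count, severity, breadth)
    band = "H" if s >= 0.7 else "M" if s >= 0.45 else "L"
    print(f"{band}  {s:.2f}  {label}")
```

The output is a starting point for the priority column, not a verdict: a cluster with few cards but severe pain can still be high priority, which is why severity carries the same weight as frequency in this sketch.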
Discussion Notes:
| Cluster | Key Insight | Surprising? | Next Step | Owner |
|---|---|---|---|---|
| [Label] | [What this cluster tells us] | Yes / No | [Action] | [Name] |
| [Label] | [What this cluster tells us] | Yes / No | [Action] | [Name] |
| [Label] | [What this cluster tells us] | Yes / No | [Action] | [Name] |
Output Summary
| Output | Details |
|---|---|
| Total clusters | [Number] |
| High-priority clusters | [List names] |
| Surprising findings | [List] |
| Data gaps identified | [Areas where more research is needed] |
| Next steps | [What happens with these clusters?] |
Filled Example: TaskFlow Onboarding Research Synthesis
Session Setup
| Field | Details |
|---|---|
| Research project | TaskFlow New User Onboarding |
| Date | March 5, 2026 |
| Facilitator | Maya Chen (UX Researcher) |
| Participants | Maya (Research), Raj (PM), Sonia (Design), Luke (Eng Lead), Kim (CS) |
| Data sources | 8 new-user interviews, 62 support tickets (first 30 days), 4 usability tests, Heap session recordings |
| Total data points | 87 |
| Board tool | Miro |
Cluster Results
| Cluster # | Label | Card Count | Priority |
|---|---|---|---|
| 1 | Team invitation flow is broken for SSO orgs | 14 | H |
| 2 | Users cannot find the project template gallery | 11 | H |
| 3 | First task creation has too many required fields | 9 | H |
| 4 | Notification settings overwhelm new users | 8 | M |
| 5 | Role-based permissions are unclear to admins | 7 | M |
| 6 | Mobile app onboarding skips key setup steps | 6 | M |
| 7 | Integration setup requires technical knowledge | 5 | L |
| 8 | Billing page is hard to find for trial users | 4 | L |
Key Decisions
The top three clusters (SSO invitation, template gallery, task creation) accounted for 34 of 87 data points (39%). The team decided to focus Q2 discovery on these three areas. Raj (PM) will write problem statements for each cluster and bring them to the next sprint planning session.
Key Takeaways
- Start with 40-100 individual data points written as specific observations, not interpretations
- Enforce silent sorting to prevent anchoring bias from the most vocal participant
- Aim for 5-12 clusters. Fewer means groups are too broad, more means you are splitting too finely
- Label clusters with descriptive 3-8 word phrases, not single words
- Prioritize clusters by frequency, severity, and breadth to focus follow-up research
About This Template
Created by: Tim Adair
Last Updated: March 5, 2026
Version: 1.0.0
License: Free for personal and commercial use
