What This Template Is For
A usability test report communicates what happened when real users tried to use your product, what went wrong, how severe the problems are, and what to fix. Without a structured report, usability findings get lost in Slack threads, condensed into vague summaries ("users found it confusing"), or ignored entirely because the team cannot act on them.
This template structures a usability test report around three things that matter: task success rates (did users complete the task?), severity ratings (how bad is each problem?), and specific recommendations (what exactly should change?). It covers study design, participant profiles, per-task analysis, and a prioritized findings summary. The format works for moderated and unmoderated testing, remote and in-person sessions, and prototype and live product evaluations.
This template pairs with the UX audit template for expert-based evaluation that complements user testing. Teams running structured product discovery use usability tests to validate prototypes before committing to development. For testing designs at scale, the AI UX Audit tool can supplement manual testing with automated heuristic analysis. If you need to plan the test itself rather than report findings, see the customer interview template for qualitative research planning.
How to Use This Template
- Fill in the Study Design section before running the test. This forces clarity on what you are testing, why, and with whom. Share it with stakeholders so expectations are aligned.
- After each session, log raw observations per task (success/failure, time, errors, quotes). Do not interpret yet. Just record what happened.
- After all sessions, analyze patterns across participants. A problem observed by 1 of 5 users may be an edge case. A problem observed by 4 of 5 users is a systematic failure.
- Rate each finding by severity (cosmetic through critical). Severity is based on frequency, impact on task completion, and difficulty of workaround.
- Write recommendations as specific design changes, not vague directives. "Fix the nav" is not actionable. "Move 'Settings' from the hamburger menu to the global nav bar" is.
- Present findings to the team in a 30-minute readout. Focus on the top 5 findings. Share video clips of critical failures. Let the data drive the conversation.
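The cross-participant pattern analysis described above (step 3) can be sketched as a short script. This is a hypothetical illustration: the `observations` structure and the 60% threshold for "systematic" are assumptions, not part of the template.

```python
from collections import Counter

# Hypothetical observation log: (participant, problem) pairs recorded per session.
observations = [
    ("P1", "could not find share button"),
    ("P2", "could not find share button"),
    ("P3", "could not find share button"),
    ("P4", "could not find share button"),
    ("P2", "mistyped date format"),
]

TOTAL_PARTICIPANTS = 5

def classify(problem_counts, total):
    """Label each problem as systematic (majority affected) or a possible edge case."""
    labels = {}
    for problem, n in problem_counts.items():
        labels[problem] = "systematic" if n / total >= 0.6 else "possible edge case"
    return labels

counts = Counter(problem for _, problem in observations)
for problem, label in classify(counts, TOTAL_PARTICIPANTS).items():
    print(f"{problem}: {counts[problem]}/{TOTAL_PARTICIPANTS} -> {label}")
```

The threshold is a judgment call; the point is to make the edge-case-versus-systematic distinction explicit rather than eyeballing it.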
The Template
Study Overview
| Field | Details |
|---|---|
| Product / Feature | [Name] |
| Study Owner | [Name] |
| Date(s) | [Test dates] |
| Method | Moderated / Unmoderated, Remote / In-person |
| Testing tool | [e.g., UserTesting, Lookback, Maze, Zoom] |
| Prototype / Live | [Link to prototype or production URL tested] |
| Test plan | [Link to test plan document] |
Study goal. [1-2 sentences. What question does this study answer?]
Scope. [Which flows, pages, or features were tested. What was explicitly excluded.]
Participants
| # | ID | Role / Segment | Experience Level | Device | Session Length |
|---|---|---|---|---|---|
| 1 | P1 | [e.g., PM, 3 years] | [Novice / Intermediate / Expert] | [Desktop / Mobile / Tablet] | [Minutes] |
| 2 | P2 | [Role] | [Experience] | [Device] | [Minutes] |
| 3 | P3 | [Role] | [Experience] | [Device] | [Minutes] |
| 4 | P4 | [Role] | [Experience] | [Device] | [Minutes] |
| 5 | P5 | [Role] | [Experience] | [Device] | [Minutes] |
Recruitment criteria: [How participants were selected. Screening questions, demographic targets.]
Recruitment source: [e.g., Customer list, UserTesting panel, social media, internal employees]
Tasks
| # | Task Description | Success Criteria | Max Time |
|---|---|---|---|
| T1 | [What the user was asked to do] | [How success was defined] | [Time limit] |
| T2 | [Task description] | [Success criteria] | [Time limit] |
| T3 | [Task description] | [Success criteria] | [Time limit] |
| T4 | [Task description] | [Success criteria] | [Time limit] |
Task Results Summary
| Task | Success Rate | Avg Time | Errors | Difficulty (1-5) |
|---|---|---|---|---|
| T1 | [e.g., 4/5 (80%)] | [e.g., 2:30] | [e.g., 3 total] | [Avg self-reported] |
| T2 | [Rate] | [Time] | [Errors] | [Difficulty] |
| T3 | [Rate] | [Time] | [Errors] | [Difficulty] |
| T4 | [Rate] | [Time] | [Errors] | [Difficulty] |
Task success definitions:
- Success: User completed the task within the time limit without assistance
- Partial success: User completed the task but required a hint, made errors, or exceeded time
- Failure: User could not complete the task or gave up
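Rolling raw session logs up into the Task Results Summary row can be automated with a small helper. A minimal sketch, assuming per-session records with an `outcome` field that follows the definitions above (the data and field names are hypothetical):

```python
# Hypothetical per-session records for one task.
# outcome: "success", "partial", or "failure"; time in seconds; errors per session.
t1_sessions = [
    {"participant": "P1", "outcome": "success", "time": 95,  "errors": 0},
    {"participant": "P2", "outcome": "success", "time": 130, "errors": 1},
    {"participant": "P3", "outcome": "partial", "time": 210, "errors": 2},
    {"participant": "P4", "outcome": "success", "time": 110, "errors": 0},
    {"participant": "P5", "outcome": "failure", "time": 300, "errors": 3},
]

def summarize(sessions):
    """Compute one row of the Task Results Summary from raw session logs."""
    n = len(sessions)
    successes = sum(1 for s in sessions if s["outcome"] == "success")
    avg_time = sum(s["time"] for s in sessions) / n
    total_errors = sum(s["errors"] for s in sessions)
    return {
        "success_rate": f"{successes}/{n} ({successes / n:.0%})",
        "avg_time": f"{int(avg_time // 60)}:{int(avg_time % 60):02d}",
        "errors": f"{total_errors} total",
    }

print(summarize(t1_sessions))
# -> {'success_rate': '3/5 (60%)', 'avg_time': '2:49', 'errors': '6 total'}
```

Note that only strict successes count toward the success rate here; whether partial successes count is a reporting decision you should state explicitly in the report.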
Per-Task Analysis
Task 1: [Task Name]
Success rate: [e.g., 4/5 (80%)]
What went well:
- [Observation about successful behavior]
- [Observation about successful behavior]
Problems observed:
| # | Problem | Participants Affected | Severity |
|---|---|---|---|
| 1 | [Specific problem] | [e.g., P2, P4] | [Critical / Major / Minor / Cosmetic] |
| 2 | [Problem] | [Participants] | [Severity] |
Participant quotes:
"[Direct quote from participant]" (P1)
"[Direct quote from participant]" (P3)
Video clips: [Links to key moments in session recordings]
(Repeat for each task)
Severity Scale
| Rating | Label | Definition |
|---|---|---|
| Critical | Blocks task | User cannot complete the task. No workaround exists. |
| Major | Significant delay | User completes the task but with significant difficulty, errors, or frustration. |
| Minor | Friction | User notices the issue but completes the task with minimal impact. |
| Cosmetic | Polish | Minor visual or wording issue. Does not affect task completion or user satisfaction. |
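Ordering the Findings Summary table consistently (severity first, then frequency as a tie-breaker) is easy to mechanize. A sketch using the severity scale above; the findings data is hypothetical:

```python
# Ranks follow the severity scale; lower rank sorts first.
SEVERITY_RANK = {"Critical": 0, "Major": 1, "Minor": 2, "Cosmetic": 3}

# Hypothetical findings: (description, severity, participants affected out of 5).
findings = [
    ("Terminology confusion", "Major", 3),
    ("Hidden primary button", "Critical", 3),
    ("Off-brand icon color", "Cosmetic", 1),
    ("Empty-roster share flow", "Major", 2),
]

def prioritize(items):
    """Sort for the Findings Summary: by severity, then by frequency descending."""
    return sorted(items, key=lambda f: (SEVERITY_RANK[f[1]], -f[2]))

for rank, (desc, sev, freq) in enumerate(prioritize(findings), 1):
    print(f"{rank}. [{sev}] {freq}/5 - {desc}")
```

Deterministic ordering keeps readouts focused on the top findings instead of debating which problem goes first.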
Findings Summary (Prioritized)
| # | Finding | Severity | Frequency | Tasks Affected | Recommendation |
|---|---|---|---|---|---|
| 1 | [Finding description] | [Severity] | [e.g., 4/5 participants] | [e.g., T1, T3] | [Specific design change] |
| 2 | [Finding] | [Severity] | [Frequency] | [Tasks] | [Recommendation] |
| 3 | [Finding] | [Severity] | [Frequency] | [Tasks] | [Recommendation] |
Positive Findings
| # | What Worked | Evidence |
|---|---|---|
| 1 | [Feature or pattern that users liked or used successfully] | [Quote or observation] |
| 2 | [What worked] | [Evidence] |
Recommendations Summary
Immediate fixes (before next release):
- ☐ [Fix for critical findings]
- ☐ [Fix for critical findings]
Next sprint:
- ☐ [Fix for major findings]
- ☐ [Fix for major findings]
Backlog:
- ☐ [Fix for minor findings]
- ☐ [Fix for cosmetic findings]
Further research needed:
- ☐ [Open question that requires additional testing or data]
Filled Example: SaaS Onboarding Flow
Study Overview
| Field | Details |
|---|---|
| Product / Feature | Acme Analytics: New User Onboarding |
| Study Owner | Sarah Kim |
| Date(s) | February 24-28, 2026 |
| Method | Moderated, Remote |
| Testing tool | Zoom + Figma prototype |
| Prototype | figma.com/proto/onboarding-v2 |
Study goal. Validate whether the redesigned onboarding flow lets a new user go from signup to their first dashboard in under 5 minutes of active time, versus a median time-to-first-dashboard of 4.2 days in production.
Task Results Summary
| Task | Success Rate | Avg Time | Errors | Difficulty (1-5) |
|---|---|---|---|---|
| T1: Complete signup wizard | 5/5 (100%) | 1:45 | 1 total | 1.4 |
| T2: Create first dashboard | 4/5 (80%) | 3:20 | 5 total | 2.8 |
| T3: Add a data source | 2/5 (40%) | 4:50 | 8 total | 4.2 |
| T4: Share dashboard with a teammate | 3/5 (60%) | 2:10 | 4 total | 3.0 |
Top Findings
| # | Finding | Severity | Frequency | Recommendation |
|---|---|---|---|---|
| 1 | The "Add data source" action is hidden behind a settings icon, so users could not find it. | Critical | 3/5 | Move "Add Data Source" to the dashboard toolbar as a primary button with a "+" icon. |
| 2 | Users expected to share via email address but the share flow requires selecting from a team roster that is empty for new accounts. | Major | 2/5 | Add an "Invite by email" option alongside the team roster picker. |
| 3 | "Dashboard" vs "Report" terminology confused 3/5 users who used them interchangeably. | Major | 3/5 | Standardize on "Dashboard" throughout. Remove "Report" label from the nav. |
Key Takeaways
- Five participants typically surface around 80% of usability issues, so do not wait for a large sample
- Rate findings by severity (critical through cosmetic) so the team prioritizes correctly
- Write recommendations as specific design changes, not vague observations
- Include participant quotes and video clips to make findings persuasive
- Separate positive findings from problems. The team needs to know what to preserve
About This Template
Created by: Tim Adair
Last Updated: 3/4/2026
Version: 1.0.0
License: Free for personal and commercial use
