Free template · 2-4 hours

Usability Test Report Template

A usability test findings report template with severity ratings, task success metrics, and prioritized recommendations. Covers study design, participant profiles, task analysis, and actionable findings with a filled example for a SaaS onboarding flow.

By Tim Adair · Last updated 2026-03-04

What This Template Is For

A usability test report communicates what happened when real users tried to use your product, what went wrong, how severe the problems are, and what to fix. Without a structured report, usability findings get lost in Slack threads, condensed into vague summaries ("users found it confusing"), or ignored entirely because the team cannot act on them.

This template structures a usability test report around three things that matter: task success rates (did users complete the task?), severity ratings (how bad is each problem?), and specific recommendations (what exactly should change?). It covers study design, participant profiles, per-task analysis, and a prioritized findings summary. The format works for moderated and unmoderated testing, remote and in-person sessions, and prototype and live product evaluations.

This template pairs with the UX audit template for expert-based evaluation that complements user testing. Teams running structured product discovery use usability tests to validate prototypes before committing to development. For testing designs at scale, the AI UX Audit tool can supplement manual testing with automated heuristic analysis. If you need to plan the test itself rather than report findings, see the customer interview template for qualitative research planning.


How to Use This Template

  1. Fill in the Study Design section before running the test. This forces clarity on what you are testing, why, and with whom. Share it with stakeholders so expectations are aligned.
  2. After each session, log raw observations per task (success/failure, time, errors, quotes). Do not interpret yet. Just record what happened.
  3. After all sessions, analyze patterns across participants; a small tally script (see the sketch after this list) can speed this up. A problem observed by 1 of 5 users may be an edge case. A problem observed by 4 of 5 users is a systematic failure.
  4. Rate each finding by severity (cosmetic through critical). Severity is based on frequency, impact on task completion, and difficulty of workaround.
  5. Write recommendations as specific design changes, not vague directives. "Fix the nav" is not actionable. "Move 'Settings' from the hamburger menu to the global nav bar" is.
  6. Present findings to the team in a 30-minute readout. Focus on the top 5 findings. Share video clips of critical failures. Let the data drive the conversation.
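
If your raw observations live in a spreadsheet, a short script can produce the Task Results Summary for you. The sketch below is illustrative and not part of the template: it assumes a hypothetical sessions.csv export with one row per participant-task pair and columns named participant, task, outcome, time_seconds, and errors.

```python
# Tally a Task Results Summary from a raw observation log.
# Assumed input (illustrative, not part of the template): sessions.csv with
# one row per participant-task pair, e.g.
#   participant,task,outcome,time_seconds,errors
#   P1,T1,success,105,0
# "outcome" follows the template's success / partial / failure definitions.
import csv
from collections import defaultdict

with open("sessions.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Group observations by task ID (T1, T2, ...).
by_task = defaultdict(list)
for row in rows:
    by_task[row["task"]].append(row)

for task, obs in sorted(by_task.items()):
    n = len(obs)
    successes = sum(1 for r in obs if r["outcome"] == "success")
    total_errors = sum(int(r["errors"]) for r in obs)
    avg_seconds = sum(int(r["time_seconds"]) for r in obs) / n
    minutes, seconds = divmod(round(avg_seconds), 60)
    print(f"{task}: {successes}/{n} ({successes / n:.0%}) success, "
          f"avg {minutes}:{seconds:02d}, {total_errors} errors total")
```

The output maps onto the Success Rate, Avg Time, and Errors columns in the Task Results Summary; partial successes and self-reported difficulty can be tallied the same way.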

The Template

Study Overview

| Field | Details |
| --- | --- |
| Product / Feature | [Name] |
| Study Owner | [Name] |
| Date(s) | [Test dates] |
| Method | Moderated / Unmoderated, Remote / In-person |
| Testing tool | [e.g., UserTesting, Lookback, Maze, Zoom] |
| Prototype / Live | [Link to prototype or production URL tested] |
| Test plan | [Link to test plan document] |

Study goal. [1-2 sentences. What question does this study answer?]

Scope. [Which flows, pages, or features were tested. What was explicitly excluded.]


Participants

| # | ID | Role / Segment | Experience Level | Device | Session Length |
| --- | --- | --- | --- | --- | --- |
| 1 | P1 | [e.g., PM, 3 years] | [Novice / Intermediate / Expert] | [Desktop / Mobile / Tablet] | [Minutes] |
| 2 | P2 | [Role] | [Experience] | [Device] | [Minutes] |
| 3 | P3 | [Role] | [Experience] | [Device] | [Minutes] |
| 4 | P4 | [Role] | [Experience] | [Device] | [Minutes] |
| 5 | P5 | [Role] | [Experience] | [Device] | [Minutes] |

Recruitment criteria: [How participants were selected. Screening questions, demographic targets.]

Recruitment source: [e.g., Customer list, UserTesting panel, social media, internal employees]


Tasks

| # | Task Description | Success Criteria | Max Time |
| --- | --- | --- | --- |
| T1 | [What the user was asked to do] | [How success was defined] | [Time limit] |
| T2 | [Task description] | [Success criteria] | [Time limit] |
| T3 | [Task description] | [Success criteria] | [Time limit] |
| T4 | [Task description] | [Success criteria] | [Time limit] |

Task Results Summary

| Task | Success Rate | Avg Time | Errors | Difficulty (1-5) |
| --- | --- | --- | --- | --- |
| T1 | [e.g., 4/5 (80%)] | [e.g., 2:30] | [e.g., 3 total] | [Avg self-reported] |
| T2 | [Rate] | [Time] | [Errors] | [Difficulty] |
| T3 | [Rate] | [Time] | [Errors] | [Difficulty] |
| T4 | [Rate] | [Time] | [Errors] | [Difficulty] |

Task success definitions:

  • Success: User completed the task within the time limit without assistance
  • Partial success: User completed the task but required a hint, made errors, or exceeded time
  • Failure: User could not complete the task or gave up

Per-Task Analysis

Task 1: [Task Name]

Success rate: [e.g., 4/5 (80%)]

What went well:

  • [Observation about successful behavior]
  • [Observation about successful behavior]

Problems observed:

| # | Problem | Participants Affected | Severity |
| --- | --- | --- | --- |
| 1 | [Specific problem] | [e.g., P2, P4] | [Critical / Major / Minor / Cosmetic] |
| 2 | [Problem] | [Participants] | [Severity] |

Participant quotes:

"[Direct quote from participant]" (P1)
"[Direct quote from participant]" (P3)

Video clips: [Links to key moments in session recordings]

(Repeat for each task)


Severity Scale

| Rating | Label | Definition |
| --- | --- | --- |
| Critical | Blocks task | User cannot complete the task. No workaround exists. |
| Major | Significant delay | User completes the task but with significant difficulty, errors, or frustration. |
| Minor | Friction | User notices the issue but completes the task with minimal impact. |
| Cosmetic | Polish | Minor visual or wording issue. Does not affect task completion or user satisfaction. |

Findings Summary (Prioritized)

| # | Finding | Severity | Frequency | Tasks Affected | Recommendation |
| --- | --- | --- | --- | --- | --- |
| 1 | [Finding description] | [Severity] | [e.g., 4/5 participants] | [e.g., T1, T3] | [Specific design change] |
| 2 | [Finding] | [Severity] | [Frequency] | [Tasks] | [Recommendation] |
| 3 | [Finding] | [Severity] | [Frequency] | [Tasks] | [Recommendation] |

Positive Findings

| # | What Worked | Evidence |
| --- | --- | --- |
| 1 | [Feature or pattern that users liked or used successfully] | [Quote or observation] |
| 2 | [What worked] | [Evidence] |

Recommendations Summary

Immediate fixes (before next release):

  • [Fix for critical findings]
  • [Fix for critical findings]

Next sprint:

  • [Fix for major findings]
  • [Fix for major findings]

Backlog:

  • [Fix for minor findings]
  • [Fix for cosmetic findings]

Further research needed:

  • [Open question that requires additional testing or data]

Filled Example: SaaS Onboarding Flow

Study Overview

| Field | Details |
| --- | --- |
| Product / Feature | Acme Analytics: New User Onboarding |
| Study Owner | Sarah Kim |
| Date(s) | February 24-28, 2026 |
| Method | Moderated, Remote |
| Testing tool | Zoom + Figma prototype |
| Prototype | figma.com/proto/onboarding-v2 |

Study goal. Validate whether the redesigned onboarding flow reduces time-to-first-dashboard from 4.2 days (production) to under 5 minutes (prototype).

Task Results Summary

| Task | Success Rate | Avg Time | Errors | Difficulty (1-5) |
| --- | --- | --- | --- | --- |
| T1: Complete signup wizard | 5/5 (100%) | 1:45 | 1 total | 1.4 |
| T2: Create first dashboard | 4/5 (80%) | 3:20 | 5 total | 2.8 |
| T3: Add a data source | 2/5 (40%) | 4:50 | 8 total | 4.2 |
| T4: Share dashboard with a teammate | 3/5 (60%) | 2:10 | 4 total | 3.0 |

Top Findings

| # | Finding | Severity | Frequency | Recommendation |
| --- | --- | --- | --- | --- |
| 1 | The "Add data source" button is hidden behind a settings icon. 3/5 users could not find it. | Critical | 3/5 | Move "Add Data Source" to the dashboard toolbar as a primary button with a "+" icon. |
| 2 | Users expected to share via an email address, but the share flow requires selecting from a team roster that is empty for new accounts. | Major | 2/5 | Add an "Invite by email" option alongside the team roster picker. |
| 3 | "Dashboard" vs. "Report" terminology confused 3/5 users, who used the terms interchangeably. | Major | 3/5 | Standardize on "Dashboard" throughout. Remove the "Report" label from the nav. |

Key Takeaways

  • Five participants catch 80% of usability issues. Do not wait for a large sample
  • Rate findings by severity (critical through cosmetic) so the team prioritizes correctly
  • Write recommendations as specific design changes, not vague observations
  • Include participant quotes and video clips to make findings persuasive
  • Separate positive findings from problems. The team needs to know what to preserve

About This Template

Created by: Tim Adair

Last Updated: 2026-03-04

Version: 1.0.0

License: Free for personal and commercial use

Frequently Asked Questions

How many participants do I need for a usability test?
Five participants catch approximately 80% of usability issues (Nielsen, 2000). This applies to qualitative usability testing where the goal is finding problems, not statistical significance. If you are testing multiple distinct user segments (e.g., admins vs. members), recruit 3-5 per segment. For quantitative metrics (task success rates with confidence intervals), you need 20+ participants.
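The five-user figure comes from a simple discovery model: if each participant uncovers roughly the same proportion L of the problems, the expected share found by n participants is 1 − (1 − L)^n. Nielsen and Landauer's published estimate is L ≈ 0.31, which puts n = 5 in the ballpark of the 80% rule of thumb above. A quick sketch of the curve (the 0.31 value is their estimate, not data from your own study):

```python
# Expected share of usability problems found after n participants,
# per the Nielsen/Landauer discovery model: found(n) = 1 - (1 - L)^n.
# L = 0.31 is their published per-participant discovery rate; treat it as a
# rough planning number, not a guarantee for any particular study.
L = 0.31
for n in range(1, 11):
    print(f"{n:2d} participants: {1 - (1 - L) ** n:.0%} of problems found")
```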
How do I decide between moderated and unmoderated testing?
Moderated testing is better for complex flows, prototype testing, and when you need to ask follow-up questions. Unmoderated testing is better for simple tasks, large sample sizes, and when you need results faster. Start with moderated testing for early-stage discovery. Switch to unmoderated for validation at scale. The [product discovery handbook](/discovery-guide) covers when to use each method in the research lifecycle.
What is the difference between usability testing and a UX audit?
Usability testing observes real users attempting real tasks. A [UX audit](/templates/ux-audit-template) is an expert review against heuristics. They catch different problems. Audits catch technical violations (contrast, keyboard traps, inconsistency). Usability tests catch mental model mismatches (users expect the feature to work differently than it does). Run both for thorough coverage.
How do I handle findings that contradict each other?
Sometimes one user loves a feature and another hates it. Look for patterns in user segments. A power user and a novice may have opposite reactions to the same interface. Document both perspectives and note the segment. If the finding is split 50/50 with no segment pattern, flag it as "inconclusive" and recommend A/B testing with the [experimentation](/glossary/a-b-testing) approach.
Should I share video clips or just a written report?
Both. The written report is the system of record. Video clips are the persuasion tool. A 30-second clip of a user struggling to find the "Add Data Source" button is more convincing than a paragraph describing the problem. Include 2-3 clips in your readout presentation. Link to full session recordings for stakeholders who want more context.
