What is User Acceptance Testing?
User acceptance testing (UAT) is a testing phase where actual end users or their representatives validate that a product or feature meets their requirements before it goes to production. It is the final quality gate before release.
UAT answers a simple question: "Does this do what we agreed it would do?" It compares the built product against the acceptance criteria defined during planning. If the product passes UAT, it is approved for release. If not, it goes back for fixes.
Why UAT Matters
Internal testing catches bugs. UAT catches misunderstandings. A feature can pass all automated tests and still fail UAT because the team interpreted a requirement differently than the user expected.
UAT also builds stakeholder confidence. When a key customer or internal champion has personally verified a feature before launch, they become an advocate rather than a critic.
UAT vs QA Testing vs Usability Testing
These three testing types answer different questions. PMs need to know when each one applies.
QA testing verifies technical correctness. Does the code execute as designed? Does the API return the correct status code? QA engineers write test cases against the technical specification and file bugs when behavior deviates. QA can be automated. UAT cannot.
UAT verifies business correctness. Does the product meet the user's actual need? Even a feature that passes every QA test can fail here, because QA validates against the spec and the spec itself may have missed what the user meant. UAT requires real users or domain experts, not engineers.
Usability testing evaluates design quality. Is the interface intuitive? Can users complete tasks without confusion? Usability testing typically happens during design iteration, weeks or months before release. UAT happens right before release.
The key distinction: QA asks "does it work?" UAT asks "does it do what we agreed?" Usability testing asks "is it easy to use?" All three can pass independently. A product can work correctly, match requirements, and still be confusing to use.
How to Run UAT
Define test scenarios based on real workflows. Do not give testers a list of buttons to click. Give them tasks: "Import your Q3 sales data and generate a performance report." This mimics real usage and reveals issues that scripted testing misses.
Select the right testers. Ideal UAT testers are actual users or stakeholders who understand the domain. Internal team members who are too familiar with the product will not catch the same issues as users approaching it fresh.
Provide a structured feedback mechanism. Use a form or tracking system where testers log issues with severity, steps to reproduce, and expected vs. actual behavior. Unstructured "it does not work" feedback is hard to act on.
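The structured fields above map naturally onto a simple issue record. A minimal sketch, assuming Python as the tracking script's language; the field names here are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class UATIssue:
    # Illustrative field names; adapt to whatever your tracking tool uses.
    scenario: str            # which test scenario was being run
    severity: str            # "critical", "high", or "low"
    steps_to_reproduce: list # ordered steps that trigger the issue
    expected: str            # what the tester expected to happen
    actual: str              # what actually happened
    screenshots: list = field(default_factory=list)

# Hypothetical example entry
issue = UATIssue(
    scenario="Import Q3 sales data and generate a performance report",
    severity="high",
    steps_to_reproduce=["Upload sales CSV", "Click Import"],
    expected="Data appears in the report builder",
    actual="Import hangs at 90%",
)
```

Even if testers fill in a form rather than a script, enforcing these fields up front is what turns "it does not work" into something an engineer can reproduce.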
Set clear pass/fail criteria. Define how many critical issues block release and how many minor issues are acceptable. Without criteria, UAT becomes an endless cycle of feedback and fixes.
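Exit criteria can be encoded as a simple go/no-go check. A sketch, assuming issues are tracked as dicts with a `severity` field; the thresholds are examples (zero critical issues, fewer than three high-severity issues), not universal values:

```python
def release_gate(issues, max_critical=0, max_high=2):
    """Return True if UAT findings allow release.

    `issues` is a list of dicts with a "severity" key
    ("critical", "high", or "low"). Default thresholds are
    examples: zero critical, fewer than 3 high-severity issues.
    """
    critical = sum(1 for i in issues if i["severity"] == "critical")
    high = sum(1 for i in issues if i["severity"] == "high")
    return critical <= max_critical and high <= max_high

findings = [
    {"id": 1, "severity": "high"},
    {"id": 2, "severity": "low"},
    {"id": 3, "severity": "low"},
]
print(release_gate(findings))  # → True: no critical issues, one high
```

The point is not the code but the discipline: the thresholds are agreed before UAT starts, so the go/no-go decision is mechanical rather than negotiated under deadline pressure.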
UAT Checklist for SaaS Products
This is a practical checklist PMs can adapt for their next UAT cycle.
Before UAT starts:
- Write test scenarios based on user stories, not technical specs
- Recruit 5-8 testers who match your target persona
- Set up a staging environment that mirrors production data
- Create a feedback form with fields for: scenario tested, pass/fail, severity, steps to reproduce, screenshots
- Define exit criteria (e.g., zero critical issues, fewer than 3 high-severity issues)
- Brief testers on goals, timeline, and how to submit feedback
During UAT:
- Give testers realistic tasks, not scripted click paths
- Allow testers to explore beyond defined scenarios. Unscripted usage often reveals the most interesting issues
- Track issue discovery rate. If testers stop finding issues after day 1, the coverage is probably too narrow
- Triage issues daily. Classify as critical (blocks release), high (fix before launch if possible), or low (backlog)
After UAT:
- Fix all critical and high-severity issues
- Re-test fixed issues with the same testers
- Get formal sign-off from the stakeholder or customer representative
- Document lessons learned for the next UAT cycle
- Archive test results as part of release management records
UAT for Remote and Distributed Teams
Running UAT with distributed teams requires extra structure. Set up a shared Slack channel or Teams thread dedicated to the UAT cycle. Record a 5-minute video walkthrough of the feature so testers start with the same context. Use asynchronous test sessions with clear deadlines rather than requiring everyone online at the same time.
For enterprise products with customer-facing UAT, provide testers with a structured test plan document and a video call during the first session to answer questions. The initial investment in onboarding testers saves time on clarification later.
UAT in Practice
Enterprise software companies like SAP and Oracle formalize UAT as a contractual milestone. Customers sign off on UAT before the implementation is considered complete. This process ensures that the delivered product meets the agreed requirements.
At Atlassian, internal employees participate in "dogfooding" programs that function as extended UAT. Teams use pre-release versions of their own products for daily work, catching issues that external testers would eventually find.
Common Pitfalls
- Skipping UAT for speed. "We tested it internally, it is fine" leads to post-launch surprises. Always validate with real users.
- UAT as the only testing. UAT should complement automated tests and QA, not replace them. Do not waste UAT testers' time on basic bugs.
- Wrong testers. Developers testing their own code is not UAT. The whole point is fresh eyes from the user's perspective.
- No time to fix issues. Schedule UAT early enough that there is time to address findings before the release date.
- Treating UAT as a checkbox. Some teams rush through UAT to hit a deadline. If you are not willing to delay the release based on UAT findings, you are not actually doing UAT. You are doing theater.
- No re-testing after fixes. Fixing an issue without verifying the fix defeats the purpose. Always re-test with the original tester who reported it.
Metrics for UAT Effectiveness
Track these numbers across UAT cycles to improve your process over time:
- Defect escape rate. Percentage of production bugs that should have been caught in UAT. Lower is better. If users report issues that your UAT scenarios covered, the test execution was weak.
- UAT cycle time. Days from UAT start to sign-off. Shorter cycles with fewer issues indicate better upstream quality.
- Issue severity distribution. A healthy UAT finds mostly low-severity issues. If you are consistently finding critical bugs in UAT, your QA process needs work.
- Tester coverage. Number of unique testers and scenarios covered. A single tester running 20 scenarios catches fewer issues than 5 testers running 4 scenarios each, because each tester brings a different perspective.
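Of these, defect escape rate is the easiest to compute. One way to calculate it, assuming you count defects found during UAT versus defects reported after release (the counts below are hypothetical):

```python
def defect_escape_rate(found_in_uat, found_in_production):
    """Fraction of total known defects that escaped UAT into production.

    Lower is better; a rising rate across cycles signals weak
    UAT coverage or execution.
    """
    total = found_in_uat + found_in_production
    return found_in_production / total if total else 0.0

# Hypothetical cycle: 18 defects caught in UAT, 2 reported by users
print(defect_escape_rate(found_in_uat=18, found_in_production=2))  # → 0.1
```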
Use the RICE framework to prioritize UAT findings when you cannot fix everything before launch. Score each issue by Reach (how many users hit it), Impact (how bad is the experience), Confidence (how sure are you it is a real problem), and Effort (how hard is the fix).
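The standard RICE formula multiplies the first three factors and divides by effort. A sketch for ranking UAT findings; the issue names and scores are hypothetical:

```python
def rice_score(reach, impact, confidence, effort):
    """Standard RICE formula: (Reach * Impact * Confidence) / Effort.

    Reach: users affected per period; Impact: commonly a 0.25-3 scale;
    Confidence: 0-1; Effort: person-weeks (must be > 0).
    """
    return (reach * impact * confidence) / effort

# Hypothetical UAT findings scored for pre-launch triage
issues = {
    "import hangs on large files": rice_score(500, 2, 0.8, 1),
    "typo in settings page": rice_score(2000, 0.25, 1.0, 0.5),
    "report totals off by one": rice_score(50, 3, 0.5, 2),
}
ranked = sorted(issues, key=issues.get, reverse=True)  # fix highest first
```

Note how the low-effort typo outranks the harder import bug despite its tiny per-user impact; dividing by effort is what surfaces cheap wins before launch.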
Related Concepts
UAT validates against acceptance criteria and is a component of the Definition of Done. It feeds into release management as a go/no-go input. Beta testing is a broader form of UAT with a larger group. Usability testing evaluates design quality, while UAT evaluates requirements satisfaction. For SaaS products, the product launch process depends on UAT sign-off before the go-to-market plan kicks in.