What Is User Acceptance Testing (UAT)? A Guide for Product Teams
User acceptance testing (UAT) is the final testing phase where real users validate that software meets business requirements before launch. Unlike internal QA, UAT tests whether the product works in real-world conditions — not just whether it is bug-free. This guide covers UAT meaning, how it differs from QA, who performs it, the main types, and a practical 6-step process for product teams.
What Is User Acceptance Testing?
User acceptance testing is the final verification that software is ready for release. Real users — not the team that built it — work through authentic tasks to confirm the product meets business requirements and user expectations. It happens after internal QA (unit, integration, system testing) and before launch. UAT validates user requirements (can real users navigate and complete tasks?) and business requirements (does the product handle real-world use cases and meet acceptance criteria?).
UAT vs QA: What's the Difference?
QA asks "does it work?" UAT asks "does it work for the people who will actually use it?" Internal QA is performed by the QA team before UAT to find and fix technical bugs. UAT is performed by end users, clients, and stakeholders to validate real-world usability. QA ensures a functional, stable product. UAT ensures a viable, user-ready product. Both are necessary. There is also a third discipline — design QA — which checks whether the implementation matches the design spec.
Who Performs UAT?
End users, clients, business stakeholders, product owners, and subject matter experts. The QA team typically facilitates UAT — preparing test plans, setting up staging, writing test cases — but does not perform it. For design-heavy products, include designers in UAT to catch visual discrepancies that functional testers miss.
Types of User Acceptance Testing
Alpha testing (internal, pre-beta), beta testing (limited real users in real conditions), contract acceptance testing (meets agreed specs), operational acceptance testing (ready for day-to-day use), and regulation acceptance testing (legal and compliance requirements). Unit, integration, system, and regression testing are not types of UAT — they happen earlier in the SDLC.
When Does UAT Happen in the SDLC?
UAT sits near the end of the software development lifecycle: requirements, design, development, unit testing, integration testing, system testing, then UAT, then launch. If you run UAT but skip design QA, you catch usability issues but miss visual ones. Design drift does not show up in functional test cases.
How to Run User Acceptance Testing (6 Steps)
1. Define Scope and Acceptance Criteria
Document what you are testing and what "done" looks like. For design-heavy products, include visual acceptance criteria — approved Figma designs, design system tokens, responsive breakpoints.
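If your team tracks things in code, acceptance criteria can live as structured data so sign-off becomes a check rather than a judgment call. A minimal sketch in Python; the criterion IDs and descriptions are illustrative, not a standard format:

```python
# Sketch: acceptance criteria as structured data, so "done" is
# something you can check programmatically. Names are illustrative.
from dataclasses import dataclass

@dataclass
class AcceptanceCriterion:
    id: str
    description: str
    passed: bool = False  # flipped to True once verified in UAT

criteria = [
    AcceptanceCriterion("AC-1", "User can complete checkout in under 5 steps"),
    AcceptanceCriterion("AC-2", "Checkout page matches the approved Figma design"),
    AcceptanceCriterion("AC-3", "Layout holds at 375px, 768px, and 1440px breakpoints"),
]

def ready_for_signoff(criteria):
    """'Done' means every documented criterion has passed."""
    return all(c.passed for c in criteria)
```

Note that the visual criteria (AC-2, AC-3) sit alongside the functional one, which keeps design expectations in scope from day one.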
2. Set Up a Staging Environment
Mirror production as closely as possible — same data, services, and configuration. Use realistic data, not lorem ipsum.
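"Mirror production" is easy to say and easy to drift from. One way to keep yourself honest is a small parity check that diffs staging config against production before UAT starts. A sketch, assuming your config is available as key-value pairs; the keys shown are made up:

```python
# Sketch: flag config keys where staging does not mirror production.
# The config keys below are illustrative examples.
def config_drift(prod, staging):
    """Return the production config keys that staging gets wrong
    (missing, or set to a different value)."""
    return sorted(k for k in prod if staging.get(k) != prod[k])

prod = {"db_version": "pg14", "cdn": "enabled", "payment_mode": "live"}
staging = {"db_version": "pg13", "cdn": "enabled", "payment_mode": "test"}
```

Here `payment_mode` differing is expected (you don't charge real cards in UAT), so in practice you would maintain an allowlist of intentional differences and alert on the rest.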
3. Write Test Scenarios
Base scenarios on real user workflows, not isolated features. Include visual test cases — "Does the checkout page match the approved design?" is a valid UAT scenario.
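A workflow-based scenario can be as simple as a structured record that testers can read as a briefing sheet. A sketch with illustrative field names, combining a functional workflow with the visual check described above:

```python
# Sketch: a UAT scenario written as an ordered workflow, not an
# isolated feature check. Field names are illustrative.
scenario = {
    "id": "UAT-CHECKOUT-01",
    "workflow": "Returning customer buys a saved item",
    "steps": [
        "Log in with an existing account",
        "Add a saved item to the cart",
        "Complete checkout with a stored payment method",
    ],
    "expected": "Order confirmation page appears with correct totals",
    "visual_check": "Checkout page matches the approved design",
}

def format_scenario(s):
    """Render a scenario as a plain-text briefing sheet for testers."""
    lines = [f"{s['id']}: {s['workflow']}"]
    lines += [f"  {i}. {step}" for i, step in enumerate(s["steps"], 1)]
    lines.append(f"  Expected: {s['expected']}")
    lines.append(f"  Visual: {s['visual_check']}")
    return "\n".join(lines)
```

The point of the structure is that every scenario forces you to state the workflow, the expected outcome, and the visual baseline up front.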
4. Recruit and Brief Testers
Choose testers close to the real use case. Share the test plan, timeline, acceptance criteria, and environment access. Balance structured test cases with open exploration.
5. Run Tests and Capture Feedback
Every report needs: what was tested, steps to reproduce, expected vs actual result, screenshots, and environment details. Clear reports get fixed fast — vague ones waste time.
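If you collect feedback through a tool or form, you can enforce that every required field is present before a report enters triage. A minimal sketch of that idea; the field names are illustrative:

```python
# Sketch: a feedback report carrying every field Step 5 calls for,
# with a completeness check before it enters triage. Names are
# illustrative, not a specific tool's schema.
from dataclasses import dataclass, field

@dataclass
class UATReport:
    tested: str                 # what was tested
    steps_to_reproduce: list    # ordered steps
    expected: str               # expected result
    actual: str                 # actual result
    environment: str = ""       # browser, device, OS, build
    screenshots: list = field(default_factory=list)

    def is_actionable(self):
        """A report is actionable only when no essential field is empty."""
        return bool(self.tested and self.steps_to_reproduce
                    and self.expected and self.actual and self.environment)
```

Rejecting incomplete reports at intake is what turns "clear reports get fixed fast" from advice into a process.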
6. Triage, Fix, Retest, Sign Off
Categorize issues as blockers, high, medium, or post-launch. A fix is not done until the original scenario passes on retest. Sign off when critical defects are resolved and stakeholders are confident.
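The sign-off gate described above can be expressed as a one-line rule: no blocker or high-severity issue may still be open. A sketch, assuming issues are tracked as (severity, resolved) pairs; the function name is illustrative:

```python
# Sketch of the sign-off gate: critical defects (blocker, high) must
# be resolved and retested; medium and post-launch issues may remain.
SEVERITIES = ("blocker", "high", "medium", "post-launch")
CRITICAL = ("blocker", "high")

def can_sign_off(issues):
    """issues: list of (severity, resolved) pairs from triage.
    Returns True only when every critical issue is resolved."""
    return all(resolved for severity, resolved in issues
               if severity in CRITICAL)
```

Encoding the rule this way makes the sign-off decision auditable: anyone can see exactly which open issues blocked the release.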
Common UAT Mistakes
- No visual baseline — testers have no design reference to compare against
- Feedback in Slack threads — issues get buried and lack context
- Skipping design QA before UAT — testers waste time on pixel-level bugs instead of workflow validation
- Testing on one device — responsive issues only surface on the devices users actually use
- No retest loop — marking issues "fixed" without verifying
User Acceptance Testing Checklist
- Scope and acceptance criteria documented
- Staging environment mirrors production
- Test scenarios based on real user workflows
- Visual test cases included
- Testers recruited, briefed, and given access
- Feedback tool configured
- Triage categories defined
- Retest loop planned
- Sign-off criteria agreed
- Design QA completed before UAT begins
Frequently Asked Questions
What is the difference between UAT and QA testing?
QA testing is performed by the internal QA team to find and fix technical bugs. UAT is performed by end users or stakeholders to validate that the product meets business requirements and works in real-world conditions.
Can you automate user acceptance testing?
Partially. You can automate setup and repetitive checks, but the core value of UAT — real users validating real workflows — requires human judgment.
What is the difference between alpha testing and beta testing?
Alpha testing happens first with internal testers to catch major issues. Beta testing follows with real users in real conditions to validate usability and experience. Alpha stabilizes; beta validates.
Who writes UAT test cases?
The QA team typically writes them in collaboration with product owners and business analysts, mapping test cases to business requirements and real user workflows.
What is the difference between UAT and beta testing?
Beta testing is a type of UAT. UAT is the broader category — any testing where real users validate the product before launch. Beta testing specifically refers to a limited external release.