You Lint Your Code. Why Don't You Lint Your UI?

Your team ships a feature. The code passes five quality gates. Linting, tests, type checks, CI, code review. The PR is clean. Merge. Deploy. Someone opens staging. The button is the wrong shade of blue. Spacing is off by 12px. The font weight is 400 instead of 500. Nobody catches it until a designer sees it in production a week later. This team has world-class code quality. Their UI quality process is "hope someone notices."

The Quality Stack You Already Have

Here's what runs on every commit at a modern engineering team: linting (ESLint, Prettier), type checking (TypeScript), unit tests, integration tests, CI pipelines, code review, and monitoring. Seven layers. Each catches a different class of problem. Some are automated, some are human. All are non-negotiable. Nobody questions whether you should lint your code. It's just how you ship software.
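As a concrete reference point, the automated layers of that stack usually boil down to a short CI workflow. This is a sketch (GitHub Actions), assuming a Node project with npm scripts named `lint`, `typecheck`, and `test`; the names are illustrative, not prescriptive.

```yaml
# Sketch of a typical per-commit quality gate.
# Assumes npm scripts "lint", "typecheck", and "test" exist.
name: quality
on: [push, pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint        # ESLint + Prettier
      - run: npm run typecheck   # tsc --noEmit
      - run: npm test            # unit + integration tests
```

The human layers, code review and monitoring, sit on either side of this file. The point is that every automated layer has a defined place to run.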

The Quality Stack You Don't

Here's what happens when a developer implements a design: designer hands off a Figma file, developer builds it, maybe the designer reviews on staging if they have time, ship. That's the whole process. No automated comparison against the design spec. No structured checklist. No fidelity score. No CI check that flags drift. Seven layers for logic. Zero layers for what users actually see.
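What would a "CI check that flags drift" even look like? One minimal sketch: diff the design-token values the spec calls for against what the implementation actually uses. The token names and values below are hypothetical; in practice the spec side might be exported from Figma and the implemented side read from computed styles.

```typescript
// Hypothetical design-token drift check. Both sides are plain
// token-name -> value maps; anything that disagrees gets flagged.
type TokenSpec = Record<string, string>;

function findDrift(spec: TokenSpec, implemented: TokenSpec): string[] {
  const drift: string[] = [];
  for (const [token, expected] of Object.entries(spec)) {
    const actual = implemented[token];
    if (actual !== expected) {
      drift.push(`${token}: expected ${expected}, got ${actual ?? "missing"}`);
    }
  }
  return drift;
}

// The three misses from the opening anecdote, expressed as token drift
// (illustrative values): wrong blue, spacing off by 12px, wrong weight.
const spec  = { "button.bg": "#1d4ed8", "button.gap": "16px", "button.weight": "500" };
const built = { "button.bg": "#3b82f6", "button.gap": "28px", "button.weight": "400" };
console.log(findDrift(spec, built)); // flags all three mismatches
```

Run in CI, a non-empty result fails the build, which is exactly the structure every other quality layer already has.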

This Isn't a Tooling Problem. It's a Process Problem.

Design drift doesn't happen because teams are sloppy. It happens because there's no defined step in the pipeline where someone checks whether the implementation matches the design. Code review checks logic, not visual fidelity. QA testing checks functionality, not design accuracy. Designers are already working on next sprint's features. The gap exists because nobody owns it. UI quality is where code quality was 15 years ago. The fix isn't a new tool. It's a new step.

What Design QA Actually Looks Like

Design QA is a step, like code review. Not a department. It happens pre-merge and answers one question: does the implementation match the design intent? Four things to check: does it match the design? Is the design system intact? Does it work in motion? Is it accessible? Attach it to PR review, give it an owner, start with a checklist, and timebox it to 10-15 minutes per PR with UI changes.
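One way to make the step concrete is a pull-request template. The wording below is a starting-point sketch of those four checks, not a standard; adapt it to your own design system.

```markdown
## Design QA (PRs with UI changes — timebox: 10-15 min)
- [ ] Matches the design: layout, spacing, color, type vs. the Figma spec
- [ ] Design system intact: existing tokens/components, no one-off values
- [ ] Works in motion: hover, focus, loading, and transition states
- [ ] Accessible: contrast, labels, keyboard navigation
```

A checklist in the PR body gives the step an artifact, which makes it reviewable and gives its owner something to sign off on.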

The Cost of Not Doing It

A design inconsistency caught pre-merge takes about 5 minutes to fix. The same inconsistency caught post-launch takes 45-60 minutes. That's a 9-12x cost multiplier. If a team ships 10 design inconsistencies per sprint, that's ~50 minutes of fixes caught pre-merge versus ~7.5-10 hours caught post-launch. The gap isn't just visual quality. It's engineering time that could go toward building features.
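The sprint-level math above is worth making explicit; the per-issue fix times are the estimates from the paragraph, not measured data.

```typescript
// Sanity-checking the cost math: 10 issues/sprint,
// 5 min each pre-merge vs. 45-60 min each post-launch.
const issuesPerSprint = 10;
const preMergeFixMin = 5;          // minutes per issue, caught in review
const postLaunchFixMin = [45, 60]; // minutes per issue, caught in production

const preMergeTotal = issuesPerSprint * preMergeFixMin;
const postLaunchTotal = postLaunchFixMin.map(m => m * issuesPerSprint);

console.log(preMergeTotal);                                 // 50 minutes
console.log(postLaunchTotal.map(m => m / 60));              // [7.5, 10] hours
console.log(postLaunchFixMin.map(m => m / preMergeFixMin)); // [9, 12] x multiplier
```

The multiplier comes entirely from when the issue is caught, not how hard it is to fix, which is the argument for moving the check pre-merge.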
