Why Code Review Quality Matters More Than Quantity

November 13, 2017

Most teams do code review. Fewer teams do code review well. The difference between perfunctory review (glance, approve) and quality review (understand, critique, improve) determines whether code review provides value or just adds process.

Quality review catches bugs before production. It spreads knowledge across the team. It improves code through constructive feedback. It mentors junior engineers.

Quantity review—reviews that happen because they’re required but without genuine engagement—provides none of these benefits while adding friction.

What Quality Review Looks Like

Actually Understanding the Code

Quality review starts with understanding what the code does and why. This requires:

Reading the description: Pull request descriptions should explain what, why, and how. If they don’t, request better descriptions.

Understanding context: What problem is being solved? What’s the broader system context?

Reading the code: Actually read it. Understand the logic. Don’t skim.

If you can’t explain what the code does, you haven’t reviewed it.

Looking for What Matters

Focus on what affects correctness, maintainability, and security:

Logic errors: Does the code do what it’s supposed to? Are edge cases handled?

Security issues: Input validation? Authorization checks? Secrets exposed?

Architectural concerns: Does this fit the system design? Will it scale? Is it maintainable?

Error handling: What happens when things fail? Are errors handled appropriately?

Testing: Are there tests? Do they test the right things?

Don’t get lost in style nitpicks while missing logic bugs.

Providing Actionable Feedback

Good feedback is specific, constructive, and actionable:

Bad: “This is confusing.”

Good: “The processUserData function does both validation and transformation. Consider splitting into validateUserData and transformUserData for clarity.”

Bad: “This won’t work.”

Good: “This will throw a NullPointerException if user.address is null, which can happen for users who haven’t completed profile setup. Consider a null check or optional handling.”
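
To make that second example concrete, here is a minimal sketch of the optional-handling option. The User and Address classes are hypothetical stand-ins for whatever the real code defines:

```java
import java.util.Optional;

class Address {
    private final String city;
    Address(String city) { this.city = city; }
    String getCity() { return city; }
}

class User {
    // address may be null for users who haven't completed profile setup
    private final Address address;
    User(Address address) { this.address = address; }
    Address getAddress() { return address; }
}

class Example {
    public static void main(String[] args) {
        User user = new User(null); // profile setup not completed

        // Before: user.getAddress().getCity() would throw NullPointerException here.
        // After: a missing address becomes an explicit, handled case.
        String city = Optional.ofNullable(user.getAddress())
                .map(Address::getCity)
                .orElse("unknown");

        System.out.println(city); // prints "unknown"
    }
}
```

A plain null check would address the same point; the review comment names the problem and leaves the choice of fix to the author.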

Actionable feedback tells the author what to change and why.

Distinguishing Severity

Not all feedback is equally important. Distinguish between:

Blocking: Must fix before merge. Correctness issues, security vulnerabilities, obvious bugs.

Suggestions: Would improve the code. Style improvements, alternative approaches, minor refactoring.

Questions: Clarification requests. Not necessarily changes needed.

Label severity clearly, and don’t present style preferences as blocking issues.
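
One lightweight convention is to prefix each comment with its severity so the author can triage at a glance. The comments below are purely illustrative:

```
blocking: this query interpolates user input directly; use a parameterized query.
suggestion: extracting the retry loop into a helper would make this easier to follow.
question: is the 3-attempt limit intentional, or left over from debugging?
```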

Common Review Anti-Patterns

The Rubber Stamp

Glance at code, click approve. No comments, no questions, no engagement.

This provides no value. It’s review theater that satisfies process requirements without achieving review goals.

The Style Police

Obsessing over formatting, naming conventions, and style while ignoring logic, security, and design.

Style matters, but it’s the least important review concern. Automate style enforcement (linters, formatters) and focus human review on what requires judgment.

The Nitpick Festival

Requesting dozens of minor changes that don’t materially improve the code.

Excessive nitpicking creates review fatigue. Authors stop taking feedback seriously. Save feedback for things that matter.

The Delayed Review

Letting pull requests sit for days before reviewing.

Delayed review blocks progress, forces authors to context-switch back to old work, and increases merge conflict likelihood. Review promptly.

The Rewrite Request

“I would have done this completely differently” when the submitted approach works.

Unless the approach is fundamentally wrong, don’t request rewrites based on preference. Save major architectural feedback for design review before implementation.

The Approval Without Reading

Approving because you trust the author, because the diff is large, or because you don’t have time.

Trust but verify. Large diffs deserve more attention, not less. If you don’t have time to review properly, say so.

Making Review Work

Set Expectations

Document what review should cover: correctness, security, architecture, error handling, and test quality.

Document what’s not review’s job: style that linters and formatters already enforce, or relitigating decisions settled in design review.

Clear expectations align reviewers and authors.

Keep Changes Small

Large pull requests are hard to review well. Encourage small, focused changes: one logical change per pull request, split into reviewable pieces when the work is large.

Small changes get better reviews. Better reviews catch more issues.

Require Adequate Descriptions

Pull requests should explain what changed, why, and how: the problem being solved, the approach taken, and anything non-obvious a reviewer needs to know.

Good descriptions enable good reviews. Require them.
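
One way to make good descriptions routine is a pull request template that the hosting tool pre-fills for every new pull request. A minimal sketch, assuming GitHub (saved as .github/PULL_REQUEST_TEMPLATE.md); the headings are illustrative and should match whatever your team agrees to require:

```markdown
## What
<!-- One or two sentences: what does this change do? -->

## Why
<!-- The problem or ticket this addresses, with links. -->

## How
<!-- Notable implementation decisions, trade-offs, anything non-obvious. -->

## Testing
<!-- How this was verified: new tests, manual steps, rollout plan. -->
```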

Respond Quickly

Review latency affects team velocity. Set expectations for turnaround, for example a first response within one business day.

Quick review keeps work flowing.

Make Review a Conversation

Review isn’t an author submitting for judgment. It’s a conversation between engineers.

Authors: Explain non-obvious decisions. Accept feedback graciously. Ask clarifying questions.

Reviewers: Ask questions when confused. Explain reasoning behind suggestions. Accept that reasonable people disagree.

Collaboration produces better outcomes than adversarial review.

Learn from Reviews

Review is a learning opportunity:

Authors: Learn from feedback. Patterns in feedback reveal skill gaps.

Reviewers: Learn from code you review. See different approaches. Understand unfamiliar parts of the system.

Teams: Discuss interesting review findings. Share learnings.

Tooling and Automation

Automate what can be automated to focus human review on what matters:

Linting and formatting: Enforce automatically. Don’t review for style.

Static analysis: Run security scanners, code quality tools automatically.

Test requirements: Require tests to pass before review.

Coverage requirements: Require minimum test coverage.

Automated checks: Type checking, dependency scanning.

Automation handles routine checks. Humans handle judgment.
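
Concretely, these gates usually live in the CI pipeline so they run before a reviewer opens the diff. A minimal sketch, assuming a Gradle-based Java project with the Checkstyle, FindBugs, and JaCoCo plugins applied; substitute the equivalents for your own stack:

```sh
#!/bin/sh
# CI gate: fail fast on everything a machine can check, before human review.
set -e

./gradlew checkstyleMain checkstyleTest     # style: enforced automatically, never reviewed by hand
./gradlew findbugsMain                      # static analysis for common bug patterns
./gradlew test                              # tests must pass before review starts
./gradlew jacocoTestCoverageVerification    # fails if coverage drops below the configured minimum
```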

Key Takeaways