Most teams do code review. Fewer teams do code review well. The difference between perfunctory review (glance, approve) and quality review (understand, critique, improve) determines whether code review provides value or just adds process.
Quality review catches bugs before production. It spreads knowledge across the team. It improves code through constructive feedback. It mentors junior engineers.
Quantity review (review that happens because it's required, without genuine engagement) provides none of these benefits while adding friction.
What Quality Review Looks Like
Actually Understanding the Code
Quality review starts with understanding what the code does and why. This requires:
Reading the description: Pull request descriptions should explain what, why, and how. If they don’t, request better descriptions.
Understanding context: What problem is being solved? What’s the broader system context?
Reading the code: Actually read it. Understand the logic. Don’t skim.
If you can’t explain what the code does, you haven’t reviewed it.
Looking for What Matters
Focus on what affects correctness, maintainability, and security:
Logic errors: Does the code do what it’s supposed to? Are edge cases handled?
Security issues: Input validation? Authorization checks? Secrets exposed?
Architectural concerns: Does this fit the system design? Will it scale? Is it maintainable?
Error handling: What happens when things fail? Are errors handled appropriately?
Testing: Are there tests? Do they test the right things?
Don’t get lost in style nitpicks while missing logic bugs.
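As a hedged illustration (this snippet is invented for the point, not drawn from a real PR): the method below reads cleanly and would pass any style check; the catch that matters is the edge case noted in the comment.

```java
import java.util.List;

public class OrderHistory {
    /** Returns up to the n most recent orders for display. */
    static List<String> recentOrders(List<String> orders, int n) {
        // The review catch is logic, not style: without the Math.min clamp,
        // subList(0, n) throws IndexOutOfBoundsException for any user with
        // fewer than n orders.
        return orders.subList(0, Math.min(n, orders.size()));
    }

    public static void main(String[] args) {
        System.out.println(recentOrders(List.of("o1", "o2"), 5)); // [o1, o2]
    }
}
```

A formatter verifies the layout automatically; only a human reading the logic catches the missing clamp.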
Providing Actionable Feedback
Good feedback is specific, constructive, and actionable:
Bad: “This is confusing.”
Good: “The processUserData function does both validation and transformation. Consider splitting into validateUserData and transformUserData for clarity.”
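As a minimal sketch of that split (the function names come from the feedback above; the email field and its rules are hypothetical):

```java
import java.util.Map;

public class UserDataPipeline {
    /** Validation only: reject bad input, change nothing. */
    static void validateUserData(Map<String, String> raw) {
        String email = raw.get("email");
        if (email == null || !email.contains("@")) {
            throw new IllegalArgumentException("invalid email: " + email);
        }
    }

    /** Transformation only: shape already-validated input for storage. */
    static Map<String, String> transformUserData(Map<String, String> raw) {
        return Map.of("email", raw.get("email").trim().toLowerCase());
    }

    public static void main(String[] args) {
        Map<String, String> raw = Map.of("email", "Dev@Example.COM");
        validateUserData(raw);
        System.out.println(transformUserData(raw)); // {email=dev@example.com}
    }
}
```

Each function now has one job, which is exactly what the feedback asked for.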
Bad: “This won’t work.”
Good: “This will throw a NullPointerException if user.address is null, which can happen for users who haven’t completed profile setup. Consider a null check or Optional handling.”
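And a sketch of the suggested fix using Optional (the User and Address types are invented here; the original comment names only user.address):

```java
import java.util.Optional;

public class ShippingLabel {
    record Address(String city) {}
    record User(String name, Address address) {} // address is null before profile setup

    /** Null-safe: returns a fallback for users who haven't completed setup. */
    static String cityFor(User user) {
        return Optional.ofNullable(user.address())
                       .map(Address::city)
                       .orElse("unknown");
    }

    public static void main(String[] args) {
        System.out.println(cityFor(new User("Ada", null))); // prints: unknown
    }
}
```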
Actionable feedback tells the author what to change and why.
Distinguishing Severity
Not all feedback is equally important. Distinguish between:
Blocking: Must fix before merge. Correctness issues, security vulnerabilities, obvious bugs.
Suggestions: Would improve the code. Style improvements, alternative approaches, minor refactoring.
Questions: Clarification requests. Not necessarily changes needed.
Label severity clearly, for example by prefixing comments with “blocking:”, “suggestion:”, or “question:”. Don’t dress style preferences up as blocking issues.
Common Review Anti-Patterns
The Rubber Stamp
Glance at code, click approve. No comments, no questions, no engagement.
This provides no value. It’s review theater that satisfies process requirements without achieving review goals.
The Style Police
Obsessing over formatting, naming conventions, and style while ignoring logic, security, and design.
Style matters, but it’s the least important review concern. Automate style enforcement (linters, formatters) and focus human review on what requires judgment.
The Nitpick Festival
Requesting dozens of minor changes that don’t materially improve the code.
Excessive nitpicking creates review fatigue. Authors stop taking feedback seriously. Save feedback for things that matter.
The Delayed Review
Letting pull requests sit for days before reviewing.
Delayed review blocks progress, forces authors to context-switch back to old work, and increases merge conflict likelihood. Review promptly.
The Rewrite Request
“I would have done this completely differently” when the submitted approach works.
Unless the approach is fundamentally wrong, don’t request rewrites based on preference. Save major architectural feedback for design review before implementation.
The Approval Without Reading
Approving because you trust the author, because the diff is large, or because you don’t have time.
Trust but verify. Large diffs deserve more attention, not less. If you don’t have time to review properly, say so.
Making Review Work
Set Expectations
Document what review should cover:
- Correctness and logic
- Security considerations
- Test coverage
- Error handling
- Code clarity
Document what’s not review’s job:
- Comprehensive testing (that’s testing)
- Style enforcement (that’s linting)
- Architecture decisions (that’s design review)
Clear expectations align reviewers and authors.
Keep Changes Small
Large pull requests are hard to review well. Encourage small, focused changes:
- One logical change per PR
- Limit to ~400 lines of meaningful changes
- Break large features into incremental PRs
Small changes get better reviews. Better reviews catch more issues.
Require Adequate Descriptions
Pull requests should explain:
- What changed
- Why it changed
- How to test
- Any areas of concern
Good descriptions enable good reviews. Require them.
Respond Quickly
Review latency affects team velocity. Set expectations:
- Acknowledge PRs within hours
- Complete review within 1 business day
- Don’t let PRs sit in review for days
Quick review keeps work flowing.
Make Review a Conversation
Review isn’t an author submitting work for judgment. It’s a conversation between engineers.
Authors: Explain non-obvious decisions. Accept feedback graciously. Ask clarifying questions.
Reviewers: Ask questions when confused. Explain reasoning behind suggestions. Accept that reasonable people disagree.
Collaboration produces better outcomes than adversarial review.
Learn from Reviews
Review is a learning opportunity:
Authors: Learn from feedback. Patterns in feedback reveal skill gaps.
Reviewers: Learn from code you review. See different approaches. Understand unfamiliar parts of the system.
Teams: Discuss interesting review findings. Share learnings.
Tooling and Automation
Automate what can be automated to focus human review on what matters:
Linting and formatting: Enforce automatically. Don’t review for style.
Static analysis: Run security scanners, code quality tools automatically.
Test requirements: Require tests to pass before review.
Coverage requirements: Require minimum test coverage.
Other checks: Type checking, dependency scanning.
Automation handles routine checks. Humans handle judgment.
Key Takeaways
- Quality review requires actually understanding the code, not just glancing
- Focus on logic, security, architecture, and testing rather than style nitpicks
- Provide specific, actionable feedback with clear severity levels
- Avoid anti-patterns: rubber stamps, style policing, delayed reviews, rewrite requests
- Keep changes small for better reviews; require adequate PR descriptions
- Review is a conversation, not judgment; both parties learn
- Automate routine checks to focus human review on what requires judgment