Code review sounds great in theory: a second set of eyes, bugs caught early, knowledge spreading across the team. In practice it usually collapses into one of two failure modes. Either the review is slow and the PR sits for two or three days, or it is so shallow that every PR gets a reflexive “LGTM”.
Over more than a decade on different teams, I have seen that there is a sweet spot between those two, but reaching it takes discipline. Here are the practical habits that actually get me there.
The real goal of review
The goal is not to catch bugs. Your CI and your test suite are better at that. I run code review for three reasons.
1. Knowledge transfer. A second developer gets familiar with that area of the codebase. Your bus factor shrinks.
2. Design feedback. You want to ask “is this the right approach?” early. Architecture decisions get debated in review, not six months later.
3. Consistency. Do the team’s style, patterns, and conventions still hold? Review is what protects that discipline.
Catching bugs is a bonus. The three above are the real reasons.
PR size discipline
Large PRs get bad reviews. When a PR touches 500+ lines, the reviewer’s eyes glaze over and they stop looking carefully.
My rule: a PR stays under 300 added lines; deletions do not count toward the limit.
If it is bigger than that, ask:
Is this really one change, or can it be split? Could the tests land in a separate PR? Are a refactor and a feature mixed together? Is there generated code that could be gitignored instead?
Anything over 500 lines is a strong warning sign. 1000+ usually needs to be split into two or three PRs.
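The rule above is easy to automate as a check a CI job could run. A minimal sketch in Python, assuming git is available; the 300 and 500 thresholds mirror this section, and the base branch name is an assumption:

```python
import subprocess

# Thresholds from the rule above: warn past 300 added lines, flag past 500.
SOFT_LIMIT = 300
HARD_LIMIT = 500

def added_lines(base: str = "main") -> int:
    """Count lines added relative to the base branch, ignoring deletions."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, _deleted, _path = line.split("\t", 2)
        if added != "-":  # "-" marks binary files in numstat output; skip them
            total += int(added)
    return total

def check_size(added: int) -> str:
    """Map an added-line count to a verdict per the rule above."""
    if added > HARD_LIMIT:
        return "split this PR"
    if added > SOFT_LIMIT:
        return "warning: large PR"
    return "ok"
```

Wiring `check_size(added_lines())` into CI turns the size rule from a social convention into a visible signal on every PR.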
Blocking time for review
You need calendar blocks. Without them, review keeps getting pushed and the PR sits for a day or two.
My pattern:
A 30-minute review slot at 10:00, and another 30-minute slot at 14:00.
With two slots, a PR that lands in the morning gets reviewed before lunch, and one that lands in the afternoon gets reviewed the same day.
Team expectation: a first review within 4 hours of opening a PR, the whole review done within 24 hours.
What to actually check
What to look for beyond the PR’s title and description.
Correctness: does the code actually do what the PR claims? Are the edge cases handled? What about null, empty, or invalid input?
Design: is the abstraction at the right level? Is the complexity justified, or is this over-engineered? Does the pattern match the rest of the codebase?
Readability: are the variable and function names meaningful? Are the comments where they should be? (Code should be self-explanatory, and comments should only explain “why”.) Are the functions a reasonable length?
Tests: is there a test for the new behaviour? Does the test actually exercise what you think it does? Are the mocks realistic?
Performance: any N+1 queries? Pointless allocations? Is the right index being used?
Security: any SQL injection risk? XSS risk? Is the authorization check in the right place?
You cannot look at all of these on every PR. Focus where the context demands it.
How I write review comments
I label every comment with its type.
Blocking: this has to be fixed before merge. Critical bug, security issue, or API break.
Suggestion: would be nicer fixed, but not a merge blocker. The author decides.
Question: a real question to understand the code. Not blocking, just clarification.
Praise: something done well. Positive reinforcement matters.
I prefix every comment with its type:
[Blocking] This SQL query is not parameterized, injection risk.
[Suggestion] This function is 50 lines, splitting into 2 might be cleaner.
[Question] What happens if `userId` is negative?
[Praise] Thanks for this refactor, the old code was a mess.
Now the author knows which comments absolutely need a reply. Review iterations move faster.
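The prefix convention is mechanical enough that a script or review bot could act on it, for example to decide which threads must be resolved before merge. A minimal sketch in Python; the helper and its label set are made up here, not part of any real tool:

```python
# Hypothetical helper: classify a review comment by its bracketed prefix
# and decide whether it must be resolved before merge.
BLOCKING = {"blocking"}
KNOWN = {"blocking", "suggestion", "question", "praise"}

def classify(comment: str) -> tuple[str, bool]:
    """Return (comment_type, blocks_merge) for a prefixed review comment."""
    if comment.startswith("[") and "]" in comment:
        label = comment[1:comment.index("]")].strip().lower()
        if label in KNOWN:
            return label, label in BLOCKING
    return "unlabeled", False
```

With this, only [Blocking] threads gate the merge; everything else stays advisory, which is the point of the convention.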
Nits vs important feedback
A nit is a minor style or preference comment. “Should this be snake_case or camelCase?” “This spacing is off.”
Nits slow review down. My recommendation: let a linter catch them automatically.
ESLint, Prettier, SwiftFormat, RuboCop, enforced through a pre-commit hook or a CI check. Automated tooling enforces style so review does not waste cycles arguing about it.
Style arguments are not a good use of team time. Outsource them to the tool.
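One way to outsource style to the tool is a git pre-commit hook. A minimal sketch in Python, assuming Prettier is installed and `npx` is on PATH; swap in whichever linter fits your stack, and call `run_checks()` from the installed hook script:

```python
# Sketch of a git pre-commit hook (install as .git/hooks/pre-commit and
# call sys.exit(run_checks()) at the bottom of the installed script).
# Assumes Prettier via npx; the file extensions are an example choice.
import subprocess

def lintable(paths: list[str]) -> list[str]:
    """Keep only the file types the formatter should see."""
    return [p for p in paths if p.endswith((".js", ".ts", ".tsx"))]

def staged_files() -> list[str]:
    """List files staged for commit (added, copied, or modified)."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()

def run_checks() -> int:
    """Return 0 to allow the commit, non-zero to block it."""
    files = lintable(staged_files())
    if not files:
        return 0
    # --check reports unformatted files without rewriting them.
    return subprocess.run(["npx", "prettier", "--check", *files]).returncode
```

A CI job running the same check catches anyone who skipped the hook, so the two enforcement points back each other up.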
Acknowledge vs change
You are not required to change code for every review comment.
Reviewer: “This variable could have a clearer name.”
Author options:
1. Change it (“good point, renamed”).
2. Explain and keep it (“I kept this name because it matches the API naming in our legacy code”).
3. Acknowledge it and defer (“noted, I’ll address this in a follow-up”).
Replying “fixed” to every comment actually degrades review quality. You want a real conversation. Sometimes the reviewer is wrong, sometimes the author’s context wins.
Async and sync together
Most review is async: open a PR, comments come in, author replies, iterate. That spreads over one to two days.
Sometimes sync is faster. A 15 minute Zoom call beats async when:
The review is architectural and complex.
The thread count is getting out of hand.
There are disagreements that need resolution.
A conversation that would take 5 days in writing can finish in 15 minutes live.
Hybrid approach: async as the default, sync as an escalation.
Pair programming vs review
Pair programming is synchronous live review. Two developers write the code together.
Upside: instant feedback, no async delay. Downside: two developers’ time on the same work.
Complex features, onboarding, and knowledge transfer are good candidates for pairing. Day to day work is better served by PR review.
Hybrid: a big refactor is written pair-programming style, so it is effectively reviewed as it is written. Then a fresh set of eyes reviews the result.
Junior and senior review dynamics
A senior reviewing a junior’s PR is an education opportunity. Detailed explanations, teaching patterns.
A junior reviewing a senior’s PR is just as valuable in the other direction. A junior saying “I did not follow this part” surfaces a fresh perspective. The junior learns, and the senior has their assumptions tested.
Team culture has to support this bidirectional review. Seniors get reviewed too. No top-down-only review.
Review metrics (useful but dangerous)
Things you can measure:
Time to first review.
Total PR time, from open to merge.
Review coverage (how many reviewers, how many comments).
Comments per PR.
Careful: metrics shape behaviour. If you reward “comment count”, people will start leaving pointless comments to hit the number.
My preference: watch the trend of “time to first review”. If it creeps past two days, that is the signal. I do not set absolute targets.
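Tracking that trend takes very little code once you have PR records. A minimal sketch in Python; the `opened_at` and `first_review_at` field names are assumptions for illustration, not any particular platform’s API:

```python
from datetime import datetime
from statistics import median

# Sketch: compute the "time to first review" trend from PR records.
# Each record is assumed to hold ISO-8601 timestamps under the (made-up)
# keys "opened_at" and "first_review_at".

def hours_to_first_review(opened: str, first_review: str) -> float:
    """Hours between a PR opening and its first review comment."""
    delta = datetime.fromisoformat(first_review) - datetime.fromisoformat(opened)
    return delta.total_seconds() / 3600

def median_first_review_hours(prs: list[dict]) -> float:
    """Median time to first review across a batch of PRs."""
    return median(
        hours_to_first_review(pr["opened_at"], pr["first_review_at"]) for pr in prs
    )
```

Plot the median per week; a steady climb toward two days is the signal this section describes, without ever setting a hard target.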
Toxic review patterns
Signs review is going bad.
1. Nitpick competition. Several reviewers leaving 20+ nits on every PR, turning review into an ego contest. That is a culture problem.
2. “Why didn’t you do X” framing. It puts the author on the defensive. “What if we did X?” is more collaborative.
3. Review as gatekeeping. A senior rejecting every PR from a junior. Power dynamics, productivity killer.
4. Bot-like approval. “LGTM” with no real look. You are doing the ritual without the work.
5. Single approver bottleneck. One person reviewing every PR. Single point of failure, slows the whole team.
Talk about these patterns openly in retrospectives. Fix them before they grow.
Takeaway
Code review sits at the core of team velocity. Done wrong it is a bottleneck. Done right it delivers quality and knowledge sharing at the same time.
PR size discipline, allocated review time, structured comments, automated tooling for style. Those four pillars let you review fast and deep.
Then fine-tune to your context. A startup is different from an enterprise. The fundamentals are universal though: the goal is to enable, not to block.