Growing up in the country meant casual jobs were farmhand work or anything else manual. When apple-picking season came around, there was always a need for available hands, which meant pocket money! I never thought it would give me analogies for later life. The first instruction when picking from a tree: do one pass and move on.
When picking apples you look for the ripest fruit first — the ones that will perish if not harvested, the ones most ready for market. With a Royal Gala you’re looking for the deepest red. You work your way around the tree in one pass, selecting as you go.
At the end of that pass, there’s a temptation to look back at the tree. And here’s the thing: you’ll always see more apples you could pick. Not because you missed them — but because you’ve introduced an exclusion bias. You selected the reddest apples, so the ones remaining are now the reddest apples on the tree. Your reference point has shifted. The experienced picker knows to close the bag and move on.
A pull request review has the same failure mode.
The Re-Review Problem
A PR that has been through multiple rounds of feedback is a tree that has already been picked. The obvious problems were identified in the first pass — the broken logic, the missing edge case, the function that does too many things. Each round of iteration raises the floor. What’s left is comparatively fine work. But if a reviewer goes back over it looking for something to find, they’ll find something — not because the code is bad, but because their threshold has recalibrated against what’s already been fixed.
This is how nitpicking compounds. It’s rarely malicious; it’s a cognitive trap. The reviewer feels an implicit obligation to justify their continued involvement. The result is a late-stage PR comment about a variable name or an import order on code that is otherwise correct, well-tested, and ready to ship.
What Automation Should Handle
The clearest principle I’ve landed on for what belongs in a PR comment: if it can be caught by a tool, it shouldn’t be caught by a human.
Code style, import ordering, line length, naming conventions, unused variables, missing semicolons — all of these are automatable. If your team has opinions about them, the right place to encode those opinions is in a linter configuration, a formatter, or a pre-commit hook. Not in a PR review comment on a Friday afternoon.
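As a sketch of what "encode it in tooling" can look like for a Python codebase, here is a hypothetical pre-commit configuration. The repositories and hook ids are real ones from the public pre-commit ecosystem; the pinned versions are placeholders you would replace with whatever matches your toolchain.

```yaml
# Hypothetical .pre-commit-config.yaml; rev values are placeholders.
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.4
    hooks:
      - id: ruff        # lint: unused variables, import order, naming
      - id: ruff-format # formatting: line length, quotes, spacing
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: end-of-file-fixer
      - id: trailing-whitespace
```

Once this runs as a pre-commit hook and a CI check, every style opinion it encodes disappears from review threads for good.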
There’s a secondary benefit to this approach: automation is consistent. A reviewer enforcing formatting conventions manually will apply them differently depending on their mood, their familiarity with the author, and how late in the day it is. A formatter doesn’t. If a standard matters enough to enforce, it matters enough to automate. If it’s not in the linter, it probably isn’t actually a standard — it’s a preference.
This also removes a whole category of defensiveness from code review. When a formatter flags something, nobody takes it personally. When a reviewer flags the same thing manually, the author is left wondering whether this is objective feedback or aesthetic friction. The friction is unnecessary and the doubt it introduces is corrosive over time.
What Belongs in a Review
Strip away everything automation handles, and a PR review should focus on what tools cannot assess: the soundness of what’s being built.
Genuine review comments concern whether the code achieves what it claims to achieve, and whether that thing is the right thing to achieve. More specifically:
Hard failures — code that will demonstrably break: a logic error in a critical path, an unhandled case that causes a crash, a race condition under concurrent load. These are blockers regardless of review round.
Business logic — does the implementation match the requirement? This is the thing a reviewer is often best placed to catch because they may hold context the author doesn’t — a related system that behaves differently, an edge case from a previous incident, a stakeholder constraint that didn’t make it into the ticket.
Design soundness — is the solution structurally appropriate for the problem? Not “I would have done it differently” — that’s almost always noise — but “this approach will create significant problems when we need to extend it for the known use case coming in Q3.” Design feedback needs to be grounded in specific, foreseeable consequences. Abstract architectural preference is not design feedback.
Security — covered separately below.
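The first category is worth illustrating, because a race condition under concurrent load often survives casual reading of the diff. A minimal sketch in Python, with an invented shared counter (the names and thread counts are purely illustrative):

```python
import threading

def increment_unsafe(counter: dict, n: int) -> None:
    # Read-modify-write without a lock: two threads can read the same
    # value, both add one, and one of the updates is silently lost.
    # This is the kind of hard failure that blocks a PR in any round.
    for _ in range(n):
        counter["value"] += 1

def increment_safe(counter: dict, n: int, lock: threading.Lock) -> None:
    # The lock makes the read-modify-write atomic with respect to the
    # other threads, so no update is lost.
    for _ in range(n):
        with lock:
            counter["value"] += 1

def run(worker, *args, threads: int = 8) -> int:
    # Start `threads` workers against one shared counter, wait, return total.
    counter = {"value": 0}
    ts = [threading.Thread(target=worker, args=(counter, *args))
          for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter["value"]

lock = threading.Lock()
assert run(increment_safe, 50_000, lock) == 8 * 50_000
```

The unsafe version may happen to produce the right total on a lightly loaded machine, which is exactly why this class of bug needs a reviewer rather than a linter.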
Security Is a Hard Exception
Everything above is about calibrating the threshold for review comments. Security findings are not subject to that calibration.
An injection vulnerability doesn’t become acceptable because the PR is in its third review round and everyone is tired. An exposed credential doesn’t get a pass because the author has seniority. A broken access control isn’t a nit. These are not points on a spectrum of code quality — they’re a separate category with a different cost function. Shipping a slightly awkward abstraction has a recoverable cost. Shipping an authentication bypass does not.
If you find a security issue in a PR, raise it regardless of where in the review cycle you are. If you’re uncertain whether something is a genuine security concern, treat it as one until you’re certain it isn’t.
The distinction worth holding onto: a security review comment is not “this pattern makes me uncomfortable” — it’s “this code allows behaviour that the application should not permit.” Be specific. Name the attack vector. The specificity matters both for the author’s understanding and for the team’s collective security awareness.
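To make "name the attack vector" concrete, here is a minimal sketch of the difference between an injectable query and a parameterised one, using Python’s sqlite3 module. The table, rows, and function names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

def find_user_vulnerable(name: str):
    # String interpolation places attacker input inside the SQL itself.
    # name = "x' OR '1'='1" rewrites the WHERE clause into a tautology
    # and returns every row. That sentence is the attack vector to name
    # in the review comment.
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameter binding keeps the input as data; it can never become SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
assert find_user_vulnerable(payload) == [("alice",), ("bob",)]  # leaks all rows
assert find_user_safe(payload) == []                            # no such user
```

A review comment that quotes the payload and the leaked result is far harder to defer than "this string formatting makes me uncomfortable."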
If You’re on the Receiving End
PR review is not one-directional. If you find yourself fielding a comment that reads as an unjustified preference — “don’t do X here, do Y” with no explanation of why Y is better in this context — it’s reasonable to ask for the justification. Not defensively; genuinely. “Can you help me understand what the concern with X is here?” is a fair question that either produces useful reasoning or surfaces the fact that there isn’t any.
If a particular pattern recurs — a reviewer consistently blocking on style preferences that aren’t in the agreed standards, repeated feedback on things that should be automated — take it to a team conversation rather than resolving it PR by PR. The right venue is a retrospective or a team meeting where the question is: what are our actual standards for blocking a PR, and what should be automated instead? That conversation, done openly, is more productive than the accumulated friction of a hundred minor disputes in code review threads.
The tree has been picked. Move on.