What's in a Code Review?

What's in a good code review? Where do you start?

There are many different angles we could take on this one, depending on the goals of the organization and the team.

Let's start by laying some groundwork.

What can code review get us?

  • Increased Code Quality – Improved readability, uniformity, understandability, and overall maintainability.
  • Reduced Defects – Fewer logical errors, performance problems, and potential security vulnerabilities.
  • Learning/Knowledge Transfer – Knowledge shared about the codebase, existing patterns, and expectations around quality.
  • Collective Ownership of the Codebase – By suggesting changes and signing off on code they didn't write themselves, reviewers build a feeling of ownership of the codebase across the team as a whole.
  • Better Solutions – At review time you may see new suggestions for how the code could be improved, whether to make it more readable or to achieve the same outcome in a dramatically simpler way.
  • Compliance – Code reviews are mandatory in some contexts. This probably isn't news to you if you're operating in one of them.

What does the data say?

Maintenance is the big winner from code review

Code review does find defects. But the way most organizations practice it, defects account for only around 15% of the total comments [1], while maintainability issues account for around 75% [2]. That's a great outcome, assuming you want the thing you're building, and therefore the code that makes it up, to be around for more than a week. Code review is primarily about improving the quality of the delivered code in terms of total cost of ownership, with the bonus of occasionally finding bugs.

What should code review get us?

Re-framing it in this way, some of the "benefits" of code review are actually patches over flaws in our process that we should be examining.

One of the things we believe code review should get us is better solutions. However, this should be treated as serendipitous rather than as a primary benefit. Better designs and solutions should be sought upstream, before you cut code, and certainly before it's reviewed. If a better solution does surface at review time, take it and implement it, but learn from it: reflect on why it wasn't proposed earlier in the process, at a less costly juncture.

Learning and knowledge transfer can be accomplished via code review, but it should be treated as a bonus. It can and should be accomplished better via pairing. Pairing allows for shorter feedback loops and a higher bandwidth of knowledge imparted in a shorter time frame.

Collective ownership can also be fostered by pairing or mobbing on a piece of code. Nothing quite says "that's mine" like actually having an active hand in creating it.

How to do it well?

The biggest thing you can do to improve your code reviews is to use a checklist. This isn't a joke. Right now your code reviews are probably inconsistent at best. And I don't just mean your team's code reviews; I mean yours personally. I'm willing to bet the review you give at noon on Monday after a great weekend is dramatically different from the end-of-day-Friday, dear-god-please-get-me-out-of-here review, or that the review you give in a good mood differs from the one you give when you've got a bone to pick.

Furthermore, the limited data directly applicable to software shows that reviewers guided by a checklist found more defects in less time than reviewers without one, to the tune of 30% more effective [3]. Given how effective checklists have proven in most other industries, I'd expect that to be a lower bound on how effective they can be.

These checklists can and should be revised as part of your regular iteration cadence. That lets you keep them focused on whatever you're struggling with most at the moment.
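As a concrete starting point, a checklist can live right in the repository as a pull request template, so it shows up in every review. The items below are purely illustrative, not a prescription; swap them for whatever your team struggles with most:

```markdown
<!-- .github/PULL_REQUEST_TEMPLATE.md (hypothetical example items) -->
## Review checklist
- [ ] Names communicate intent (variables, functions, tests)
- [ ] New behavior is covered by tests, and the tests fail without the change
- [ ] Errors are handled, not swallowed
- [ ] No obvious performance traps (N+1 queries, unbounded loops)
- [ ] User-facing input is validated
- [ ] Docs/changelog updated if behavior changed
```

Because the template is versioned alongside the code, updating the checklist during iteration retrospectives is just another pull request.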

Test-driven code review

Test-driven code review has received limited academic coverage that I've found to date, but the available data indicates that test-driving your code review, like test-driving your software development, improves outcomes for all parties [4], although the code review interface (usually GitHub) doesn't always make this natural.

Now, review like an empathetic human being

Your team is made up of humans. Treat them with respect. Assume positive intent. Review code with positive intent. Funnily enough, this is one of the most well-trodden areas of code review writing in the blogosphere, likely because so many folks have had so many bad experiences with code review at so many organizations.

I'm not going to retread this one too much. Check out the further reading section for more on this.

What not to do?

See my post on 9 Common Pull Request Failure Modes.

What's next?

Subscribe to get my code review checklist and updates on how to improve your software development process.

Further reading:

  • How to Make Good Code Reviews Better – Stack Overflow Blog
  • How to Code Review Effectively – The Seeley Coder
  • Code Review Guidelines for Humans
  • How to Do Code Reviews Like a Human (Part One)


^1) A. Bosu, M. Greiler, and C. Bird. 2015. "Characteristics of Useful Code Reviews: An Empirical Study at Microsoft."

^2) A. Bacchelli and C. Bird. 2013. "Expectations, Outcomes, and Challenges of Modern Code Review."

^3) A. Dunsmore, M. Roper, and M. Wood. 2000. "Object-Oriented Inspection in the Face of Delocalisation."

^4) D. Spadini, F. Palomba, T. Baum, S. Hanenberg, M. Bruntink, and A. Bacchelli. 2019. "Test-Driven Code Review: An Empirical Study."