Our circle recently revisited our definition of done. One point stood out: code reviews. They did happen, but often didn't reach their full potential. The question: how could we improve them?
Answer: It's complicated. As always. But bear with me!
A few words about code reviews
From my point of view, code reviews should be an integral part of agile software development. There's an IEEE standard for software reviews that defines several kinds of reviews, but I won't go into much detail here.
For me, the main goal of code reviews should be to improve collaboration within a development team and to improve the quality of the product. Each and every review should be seen as a learning opportunity for both the person reviewing and the author of the change.
Our approach to improvement
It's generally a good idea to know and analyze the status quo before implementing any change. And what better way to measure how code reviews are done than to do actual code reviews? No sooner said than done!
So we created a repository with a small application. Nothing too fancy, a simple calculator for the CLI written in Python, no libraries, no frameworks. Next, we created a bunch of merge requests and let all the devs review them in pairs in a workshop. We gave them around 30 minutes to review the code as they usually would.
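To give a feel for the kind of code under review: the workshop repository isn't reproduced here, but a minimal sketch of such a plain-Python CLI calculator (all names and details are illustrative, not the actual workshop code) might look like this:

```python
# calc.py - a minimal CLI calculator in plain Python, no libraries, no frameworks.
# Illustrative sketch only; the actual workshop code differed.
import sys


def calculate(a: float, op: str, b: float) -> float:
    """Apply a basic arithmetic operator to two numbers."""
    if op == "+":
        return a + b
    if op == "-":
        return a - b
    if op == "*":
        return a * b
    if op == "/":
        if b == 0:
            raise ZeroDivisionError("division by zero")
        return a / b
    raise ValueError(f"unknown operator: {op!r}")


def main(argv: list[str]) -> int:
    """Parse 'NUMBER OPERATOR NUMBER' from the command line and print the result."""
    if len(argv) != 3:
        print("usage: calc.py NUMBER OPERATOR NUMBER", file=sys.stderr)
        return 2
    try:
        result = calculate(float(argv[0]), argv[1], float(argv[2]))
    except (ValueError, ZeroDivisionError) as exc:
        print(f"error: {exc}", file=sys.stderr)
        return 1
    print(result)
    return 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

Even a toy like this offers plenty of review surface: argument parsing, error handling, naming, and edge cases.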
And then came the fun part:
Reviewing the reviews
Each team explained how they did the code review, what they focused on and what kinds of problems they found. The results were rather interesting: there was a set of problems every team found, but also some issues that were unique to each team.
And there's a good reason behind that: the approaches were vastly different. One or two teams started by cloning the repo and checking whether they could get it running. Others analyzed the code to find places where it could crash. Yet another team read the documentation and the tests first, checking whether they were up to date.
They were all doing a code review, and they were all doing it very differently. There's nothing wrong with that. Every approach is good and has its advantages, as long as it's not the only thing that's looked at.
The goal of this workshop was not only to analyze how code reviews had been done so far. We also wanted to establish a common understanding of how code should be reviewed and why: a set of best practices and guidelines. We added some well-established external guidelines into the mix as well.
Some of our Dos and Don'ts
This is a structured, boiled-down version of the best practices and guidelines we established during the workshop.
A review has three phases: preparation, the actual review and post-processing. There are two parties involved: the author and the reviewer.
In general, for everyone
- Every change is an opportunity to do a review
- Avoid reviewing too much code at once. Going beyond 200-400 LOC per hour might have a negative impact on the quality of the reviews
- Don’t assume malice
- Strive to create learning opportunities for both the author and the reviewer
- Be kind
- Let different people review your code
- Review code of others on a regular basis
- Seniors are encouraged to let juniors review their code, too
- Juniors are encouraged to ask to perform reviews
- Code style and conventions can be checked automatically: Consider introducing automated sniffs/checks to eliminate any inconsistencies beforehand
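On that last point: in practice you'd reach for established tools (flake8, Black, pre-commit and the like), but the idea of an automated sniff can be illustrated with a tiny script. Everything here is a hypothetical example, not our actual setup:

```python
# check_style.py - toy illustration of an automated style sniff.
# Real projects should use established linters/formatters; this only
# demonstrates catching mechanical issues before a human ever reviews.
import sys
from pathlib import Path


def find_style_issues(path: Path) -> list[str]:
    """Return a list of human-readable style complaints for one file."""
    issues = []
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        if line != line.rstrip():
            issues.append(f"{path}:{lineno}: trailing whitespace")
        if "\t" in line:
            issues.append(f"{path}:{lineno}: tab character (use spaces)")
    return issues


if __name__ == "__main__":
    all_issues = [i for p in sys.argv[1:] for i in find_style_issues(Path(p))]
    print("\n".join(all_issues))
    sys.exit(1 if all_issues else 0)  # non-zero exit fails the CI check
```

Hooked into CI or a pre-commit hook, checks like this keep style nitpicks out of the human review entirely, so reviewers can spend their attention on design and correctness.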
Before a review, for the author
These are some steps that can be taken in order to prepare the code for the review. They aim to streamline the process and catch some points upfront.
- Review the code yourself
- Ask for a review proactively
- For larger or more fundamental changes, several people may be asked to do reviews. They can do a review together or independently. This has the added benefit of more people knowing about a change and might induce further discussion about the architecture
- Make sure it’s understood what your code tries to achieve
- Add a comment to the ticket for handover
- If a solution is complex: Explain why, ideally directly in the code
- Comments on the change can also be added by the author before a review happens. Those can be used to explain things where comments in the code are not possible (JSON files etc.)
- Alternatively, do pair programming or even group programming
During the review, for the reviewer
These tips aim to optimize the quality of the review and its results. They also help make the review exhaustive: covering edge cases, checking documentation, trying different approaches in your head and identifying opportunities.
- Don’t neglect positive feedback: If something is well-structured, especially elegant etc., point these things out, too
- Ask yourself how you would have solved the problem at hand
- Ask for information if it is not present. Rule of thumb: If the author needs to explain something, the code should be adapted to either explain itself or contain an adequate level of documentation
- If a solution doesn’t appear “right”, try to come up with alternative solutions
- Keep a balance between showing how to do it right and pointing the author in the right direction
- State whether your proposal is necessary (has to be fixed) or a suggestion (might be fixed for better understanding etc.)
- Ask open-ended questions
- Check for edge cases (such as unexpected input, NULL, MAX_INT, MIN_INT, “false”, FALSE, etc.) and try to force the code to fail
- Consider under which conditions the code would not behave correctly anymore (loading too much data, too many users, etc.).
- Encourage simplification where possible
- Look for shortcuts that should not have been taken
- If you deem it necessary, ask someone or multiple people to assist with the review or do the review together. This has the added benefit of inducing discussions about the solution and more people knowing about a certain change
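Applied to something like our workshop calculator, the edge-case advice above translates into small probing experiments: feed the code unexpected input and see what silently goes wrong. The `calculate` function below is a hypothetical stand-in for the code under review:

```python
# Probing a stand-in calculate(a, op, b) with edge cases, the way a
# reviewer might do mentally or in a quick throwaway script.
import math


def calculate(a: float, op: str, b: float) -> float:
    # Hypothetical code under review: naive operator dispatch, no validation.
    ops = {"+": lambda x, y: x + y, "-": lambda x, y: x - y,
           "*": lambda x, y: x * y, "/": lambda x, y: x / y}
    return ops[op](a, b)


# Inputs a reviewer should use to try to force the code to fail:
edge_cases = [
    (1, "/", 0),                         # division by zero -> exception
    (1, "%", 2),                         # unknown operator -> KeyError here
    (float("inf"), "-", float("inf")),   # inf - inf -> nan, silently
    (1e308, "*", 10),                    # overflow to inf, silently
]

for a, op, b in edge_cases:
    try:
        result = calculate(a, op, b)
        note = " (nan!)" if isinstance(result, float) and math.isnan(result) else ""
        print(f"{a} {op} {b} = {result}{note}")
    except Exception as exc:
        print(f"{a} {op} {b} raised {type(exc).__name__}: {exc}")
```

Note that two of the four cases don't raise at all: they silently produce `inf` or `nan`. Exactly these quiet failure modes are what a reviewer asking "under which conditions does this stop behaving correctly?" is hunting for.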
After the review, for both
These three points target post-processing.
- Walk through the review if necessary or helpful
- Do pair programming if necessary or helpful
- Review the following changes as well, if possible. Depending on the size or complexity of the code change, a pragmatic process of solving issues together can be preferable over several disconnected feedback loops
The feedback for the workshop was amazing. People loved the discussions that it sparked. The nice thing about this list is that everyone can agree with it. The whole dev team was involved. Everyone contributed and learned a lot along the way. It was a true collaboration!
Which of these guidelines would you adopt in your process and why? Leave a comment!