Developer Experience/Productivity

Scaling Collective Code Ownership with Code Reviews

Recently, I gave a presentation at SCNA 2018 about scaling collective code ownership in the LinkedIn Flagship product engineering organization. In this presentation, I talked about the relationship between code ownership and code quality, as well as the modern code review process and how it’s practiced at LinkedIn. I based the presentation on empirical research conducted by researchers at Microsoft and in academia, and also shared some data we’ve collected about code reviews at LinkedIn. In this post, I’ll share a summary of the most important points and key takeaways from my presentation.


Need for code ownership

From “Don’t Touch My Code! Examining the Effects of Ownership on Software Quality” and “Code Ownership and Software Quality: A Replication Study,” we know that code without a clear owner or a group of primary contributors is likely to have more bugs. These results are not an absolute rule, of course, and don’t necessarily establish an “ownership law.” For example, the correlation between code ownership and code defects is not as strong in open source projects, according to “Code Ownership in Open-Source Software.” Nonetheless, there does seem to be a link between code ownership and code quality.

Modern code review

“Convergent Contemporary Software Peer Review Practices” outlines the code review practices that are common to commercial and open source projects today. The code review process is asynchronous, lightweight, and tool-assisted, and code is reviewed before being accepted into the codebase. Most code submitted for review gets feedback within a day and is accepted into the main repository within a few days. Code reviews are small and frequent. These results are similar to what we’ve experienced internally at LinkedIn, which is not surprising given that our code review practice is similar to the process described in the source above.

Additionally, the code review process has more uses than just finding defects. According to “Expectations, Outcomes, and Challenges of Modern Code Review,” “Characteristics of Useful Code Reviews: An Empirical Study at Microsoft,” and “Modern Code Review in Open-Source Projects,” most of the feedback provided during a code review is about the evolvability or maintainability of the code, rather than defects in the code. As a result, code review can help spread knowledge across an organization and help scale code ownership.

Code review at LinkedIn


Any contributor at LinkedIn can submit a contribution for review to any repository at LinkedIn. The code review process we have lets us scale code ownership to the organization level. The tooling helps you find the right owners who need to review your contribution; it also lets you know which Slack channel you can find them in. Since we know that context is important when doing a code review, you can choose to apply the submitted changes locally and review them in the IDE. For Java, we favor IntelliJ at LinkedIn. The conversation about the code happens in Review Board, on Slack, or sometimes face-to-face. It’s preferable to use a high-bandwidth channel when there is major confusion around an issue. Once all issues have been addressed and a “Ship It” has been given out, tools help us add review sign-off information to the commit message, and off it goes to trunk and into our continuous deployment pipeline.
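As a rough illustration of that last step (not LinkedIn’s actual tooling), here is a minimal sketch of a Git commit-msg hook that appends review sign-off trailers to the commit message. The trailer names, environment variables, and review URL format are all assumptions made for the example.

```python
#!/usr/bin/env python3
# Hypothetical commit-msg hook: appends review sign-off trailers to the
# commit message before the change lands on trunk. The trailer names,
# environment variables, and review URL are illustrative only.
import os
import sys

def main() -> None:
    msg_path = sys.argv[1]                       # Git passes the message file path
    review_id = os.environ.get("REVIEW_ID")      # e.g., exported by the review tool
    reviewers = os.environ.get("REVIEWERS", "")  # comma-separated "Ship It" givers

    if not review_id:
        return  # nothing to record; leave the message untouched

    with open(msg_path, "a", encoding="utf-8") as msg:
        msg.write(f"\nReviewed-at: https://reviews.example.com/r/{review_id}\n")
        for reviewer in filter(None, (r.strip() for r in reviewers.split(","))):
            msg.write(f"Reviewed-by: {reviewer}\n")

if __name__ == "__main__":
    main()
```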

Setting up a code review practice of your own

If you are ready to set up a code review practice in your organization to help scale collective code ownership, there are some things that you might want to consider.

Code reviews can take 6-10 hours a week, according to surveys across industry and open source (“Process Aspects and Social Dynamics of Contemporary Code Review: Insights from Open Source Development and Industrial Practice at Microsoft”), so make sure that engineers have the time to conduct thorough code reviews. According to a study at Cisco described in “Best Kept Secrets of Peer Code Review,” code review effectiveness drops when reviewers rush the review process.

Implementing a standardized code review system is a culture change, not just a process change. Asking engineers to submit their work for review might not be as simple as you think, especially for engineers who have spent most of their professional careers without code reviews. You might need to consider changes to training and onboarding, and potentially even how you go about hiring and conducting performance evaluations. At LinkedIn, code review participation is mandatory, and participation, as both an author and as a reviewer, is taken into consideration when performing promotion evaluations.

Code reviews are performed by people who have to balance multiple priorities, and there can be a natural tension between people submitting code for review and people doing the code reviews. This is especially the case when each side’s priority is different; the submitter may be focused on delivering a product change quickly, while the reviewer could be more concerned with the long-term quality and health of the software product being modified.

For these reasons, it’s important to instrument the review process and monitor it for trouble spots and bottlenecks. You may find under-resourced teams, design and architecture decisions that are creating bottlenecks, or even interpersonal communication and collaboration issues.
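As a sketch of what that instrumentation could look like, the snippet below computes a couple of simple turnaround metrics from exported review events. The record fields, example data, and the 24-hour threshold are all assumptions; the point is only that timestamps your review tool already records can surface bottlenecks.

```python
# A minimal sketch of instrumenting the review process, assuming you can
# export review events (creation, first feedback, ship) from your review tool.
# The field names, sample data, and thresholds are illustrative, not a real API.
from datetime import datetime, timedelta
from statistics import median

reviews = [
    {"id": 101, "created": datetime(2018, 10, 1, 9, 0),
     "first_feedback": datetime(2018, 10, 1, 13, 30),
     "shipped": datetime(2018, 10, 2, 10, 0)},
    {"id": 102, "created": datetime(2018, 10, 1, 11, 0),
     "first_feedback": datetime(2018, 10, 4, 16, 0),
     "shipped": datetime(2018, 10, 5, 9, 0)},
]

def hours(delta: timedelta) -> float:
    return delta.total_seconds() / 3600

time_to_feedback = [hours(r["first_feedback"] - r["created"]) for r in reviews]
time_to_ship = [hours(r["shipped"] - r["created"]) for r in reviews]

print(f"median hours to first feedback: {median(time_to_feedback):.1f}")
print(f"median hours to ship:           {median(time_to_ship):.1f}")

# Flag potential trouble spots, e.g., reviews waiting more than a day for feedback.
slow = [r["id"] for r in reviews if hours(r["first_feedback"] - r["created"]) > 24]
print(f"reviews waiting more than 24h for first feedback: {slow}")
```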

Since most of the code review feedback is not about defects or defect removal, code reviews are not a substitute for other QA processes, like testing.

Tips for code maintainers

If you already have a code review process in place and you want to focus on improving as a reviewer, you might want to try making the following adjustments.

Take your time. In “Best Kept Secrets of Peer Code Review,” Cohen recommends inspecting at a rate of less than 300 LOC per hour for the best defect detection. Since we know that finding defects can be both the most useful and the most time-consuming feedback to give, take your time.

Hurry up. But at the same time, please respond to any review request in a timely manner. In “Code Reviewing in the Trenches: Understanding Challenges and Best Practices,” an in-depth study of code review practices at Microsoft, the authors identified “receiving feedback in a timely manner” as the number one issue in the code review process.

Don’t do it all at once. Your ability to do a good job reviewing code diminishes with the amount of time you spend on a review. In “Best Kept Secrets of Peer Code Review,” Cohen suggests that the total time spent on any given review should be around 60 minutes and not exceed 90 minutes.
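Taken together, the 300 LOC/hour pacing guideline and the 60-90 minute cap imply some simple arithmetic about how to schedule a review. The sketch below is just that arithmetic; the function and its invocation are illustrative, not part of any cited study’s tooling.

```python
# Back-of-the-envelope review pacing: review at no more than ~300 LOC/hour,
# and cap a single sitting at ~90 minutes. Given a change's size, estimate
# the total review time and how many sittings to split it into.
import math

MAX_LOC_PER_HOUR = 300
MAX_SESSION_MINUTES = 90

def review_plan(loc_changed):
    """Return (estimated review minutes, recommended number of sittings)."""
    minutes = loc_changed / MAX_LOC_PER_HOUR * 60
    sessions = max(1, math.ceil(minutes / MAX_SESSION_MINUTES))
    return minutes, sessions

minutes, sessions = review_plan(800)
print(f"~{minutes:.0f} minutes of review, split across {sessions} sitting(s)")
# ~160 minutes of review, split across 2 sitting(s)
```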

Communicate compassionately. In “Mindful Communication In Code Reviews,” Amy Ciavolino suggests asking open-ended questions and trying to understand the contributor, with the assumption that they did what they did in the code for a good reason.

Focus on useful feedback. Remember that you are a maintainer, so you can always clean up the code later if needed—the contributors are trying to add value to your system with this change.  

Tips for contributors

If you already have a code review process in place and you want to improve as a contributor, you might want to try making the following adjustments.

Take your time. In “Code Reviewing in the Trenches: Understanding Challenges and Best Practices,” MacLeod suggests checking your work before submitting it: build the software, run the tests, and run automatic style checks.
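A minimal sketch of automating that self-check is below. The actual commands depend entirely on your project; the Gradle invocations here are placeholders, not a prescribed toolchain.

```python
# Run a series of pre-submit checks and stop at the first failure.
# The commands below are placeholders; substitute your project's own
# build, test, and style-check invocations.
import subprocess
import sys

CHECKS = [
    ["./gradlew", "build"],           # placeholder: build the software
    ["./gradlew", "test"],            # placeholder: run the tests
    ["./gradlew", "checkstyleMain"],  # placeholder: automatic style checks
]

for cmd in CHECKS:
    print(f"running: {' '.join(cmd)}")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"'{' '.join(cmd)}' failed; fix this before requesting review")

print("all pre-submit checks passed")
```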

Make your changes small. The number-one issue faced by maintainers is large changes. In “Best Kept Secrets of Peer Code Review,” Cohen recommends that changes be kept to around 200 LOC and not exceed 400 LOC.
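One way to keep yourself honest about change size is to check the diff before sending it out. The sketch below counts changed lines against a Git base branch; the branch name is an assumption about your setup, and the thresholds come from the guideline above.

```python
# Warn when a change exceeds the ~200/400 LOC guidelines.
# Assumes a Git workflow with an "origin/master" base branch.
import subprocess
import sys

SOFT_LIMIT = 200   # aim for changes around this size
HARD_LIMIT = 400   # above this, consider splitting the change

def changed_lines(base="origin/master"):
    """Count added plus removed lines relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--numstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, removed, _path = line.split("\t", 2)
        if added != "-":  # binary files report "-" for their counts
            total += int(added) + int(removed)
    return total

if __name__ == "__main__":
    loc = changed_lines()
    if loc > HARD_LIMIT:
        print(f"{loc} lines changed: consider splitting this into smaller reviews")
        sys.exit(1)
    elif loc > SOFT_LIMIT:
        print(f"{loc} lines changed: a smaller change would be easier on your reviewers")
    else:
        print(f"{loc} lines changed: within the recommended size")
```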

Respond to feedback promptly. Act on feedback from the review in a timely manner, because your reviewers need to balance the time they spend on code reviews against their other priorities.

Find the right reviewer. For a code review, someone who’s familiar with the particular code will be able to give you the most useful feedback.

Communicate with compassion. Assume the reviewer has the best intent and is trying hard to help you make this contribution.

Acknowledgements

I would like to thank the LinkedIn communications team for helping me put this talk together, especially Stephen, Anne, and Tom, 8th Light for inviting me to speak at this conference, and a special thank you to Evan Farina, Shane Afsar, and the rest of the LinkedIn NYC-based developer team, who did the research and the analysis that I shared in this talk.