Relationships Ruin Your Code Reviews

In this episode of the Software Engineering Unlocked podcast, hosted by Dr. McKayla, we delve into the critical aspects of code reviews in software development. Dr. McKayla returns to the airwaves to explore how interpersonal relationships influence the outcomes of code reviews. The discussion highlights that nearly 70% of developers feel that their relationship with reviewers affects the review process, impacting the rigor and tone of the feedback.

Key Insights:
  • Emotional dynamics play a significant role during code reviews. Around 30% of developers said they have to review code from colleagues they dislike, which can lead to biased judgments and negative feelings.
  • Despite personal feelings, approximately 76% of developers strive for objectivity to maintain professionalism.
  • The experience level of a developer also influences the depth of code review feedback they get and the manner in which feedback is provided.
  • Reviewers' perceptions of code quality can affect their views on the author's skills or character.

Strategies to Mitigate Bias: The episode outlines multiple strategies to reduce bias in code reviews, such as involving multiple reviewers, standardizing review criteria, and implementing anonymous reviews.
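To make one of these strategies concrete, here is a minimal sketch (not from the episode — the function, the team list, and all names are hypothetical) of how a team might randomize the assignment of two reviewers per change, so that no single relationship dominates the review:

```python
import random

def assign_reviewers(author, team, k=2, seed=None):
    """Randomly pick k reviewers for a change, excluding the author.

    Drawing reviewers at random keeps any single relationship from
    dominating the review, and requiring at least two reviewers
    dilutes individual bias.
    """
    rng = random.Random(seed)  # seed only for reproducible examples
    candidates = [member for member in team if member != author]
    if len(candidates) < k:
        raise ValueError("not enough reviewers on the team")
    return rng.sample(candidates, k)

# Example: pick two reviewers for a change authored by "dana"
team = ["alice", "bob", "carol", "dana", "erin"]
print(assign_reviewers("dana", team, k=2, seed=42))
```

Pairing random assignment with anonymized author names would approximate the anonymous-review idea discussed in the episode, though tooling support for that varies by platform.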

Additional Resources: Visit awesomecodereviews.com to discover Dr. McKayla’s latest article on the top 10 code review techniques and methodologies, including systematic approaches like using checklists and change-impact analysis.

Conclusion: The podcast sheds light on both the positive and negative impacts of human factors in code reviews and emphasizes the need for strategies to minimize bias, enhancing both code quality and team dynamics.

Make code reviews your superpower at awesomecodereviews.com!

Read the whole episode "Relationships Ruin Your Code Reviews" (Transcript)

Michaela: [00:00:00] Hello and welcome to the Software Engineering Unlocked Podcast. I'm your host, Dr. McKayla, and after quite some time of radio silence on my end, I'm so, so happy to be back on air and to talk to you. Today's episode is all about code reviews, and especially the impact that the relationships we have with our colleagues have on the code review process.

Today's episode is very much influenced by a paper called "How Social Interactions Can Affect Modern Code Review." That's research done by a number of people coming from different universities around the globe, so I will link it in the show notes and you can check it out. I found it very interesting because they are actually investigating the impact that our relationships, and the emotions we have for our colleagues, have on our code review process.

And there are actually older papers that also show there's a big bias in how we conduct code reviews. Findings from different studies, a lot of open source studies as well, show that the gender or the age of the author, for example, and how we perceive the person, really influence how we review the code, right?

So there's a lot of bias, often bias that we are not even aware of, in code reviews. And this study now is, [00:01:30] um, it's a study from 2023, so a pretty new study, that looks into the emotions that we have towards our colleagues and whether they influence our judgments, right?

Well, the first thing that the researchers looked at is whether the people that they interviewed found code reviews beneficial, and all of them said yes. I think this is probably also a little bit of a bias of the study, because if you're not interested in code reviews, you're probably not willing to spend time talking about code reviews.

But yeah, it's really good news, right? That all of them said, well, code reviews are very, very important. And then what I found interesting as well are the benefits that people reported, because finding errors was, again, one of the most reported benefits of code reviews.

36 percent of the people said, well, this is the number one reason. Right. Then it was about understanding new features, which helps to really get knowledge sharing across the team. Another reason was that it makes you follow code standards. Yes, that's true. And then 20 percent said that it allows us to make sure that we only ship code that has been reviewed, right?

So it's a safety guard as well. And then this research really dives into [00:03:00] this emotional perception. They did a grounded theory study with around 25 developers, and 70 percent of them indicated that they feel close to the people who review their work, right? And they also feel that it affects the way they review. And there were several camps in that, right? It's not one direction, it's several directions. So some people say, well, they feel like they are stricter because they have a relationship with the person, because they know them, right?

One person said, for example, well, this is my cousin, so obviously I'm a little bit stricter there and I'm looking more closely at what they're doing. And other people said, no, I try to be nicer because I have a relationship with that person, right? I like them, and I try not to be rude and try to protect our relationship.

And most of the people that they interviewed were really aware that those relationships influence how they review code and how they phrase comments, right? So this was the positive relationship, somehow, that they have. But the researchers also investigated what happens if you review code of people that you can't stand, and around 30 percent, so still a significant percentage of people, said, well, yes, I have to review code of people that I don't like, that I can't stand, right? And [00:04:30] a small percentage of those admitted openly that the emotions they have are impacting their judgment. So they feel maybe aggression. One person said, I wish to rewrite his code, or another person said, I didn't want to let his code go to production.

And another person said, if a person was extremely unpleasant to me, I simply ignore his request. Right? So there's a tendency not to review code of people that you don't like, or maybe you let them wait. And I've seen that quite often, right? Not even if you don't like the person, but also if you know the code of the person isn't really good, or they are writing a mess, or they are writing these large PRs, right? That's related to the person: you know what you can expect, and then sometimes people are not willing to take on these requests. This pushback on code of people that we don't like was also reported in open source, but I think there's a different angle here, right? Because in open source, often we don't even know the person that we are reviewing, right? So the biases are coming from maybe the avatar picture that they have, the name, or what we think about the person.

Here, the people are really working with each other. The researchers made sure that the people that they interviewed and talked to [00:06:00] had worked at least six months with their colleagues in their team, right? So they are not new to the team, and they know a little bit who they are talking to. So this is really a team that we are talking about. But the majority of the people, almost 80 percent, said, well, I really try to be objective, right? So even if I know I don't like that person, I try to be objective and I try to see the professional side of things.

I try to separate the code from the person, and all of that, right? So this is a very good attitude. But from other research, you know it doesn't always happen that way, even if you intend it to.

Another interesting aspect that they dived into a little bit was the experience of a developer, right? So if you perceive the other person as senior, as experienced, as knowledgeable, as skilled, quite a few developers indicated that this impacts how they review code. It could be that they, for example, say, well, if I know they know what they're doing, I'm not going so deeply into that code. Whereas if there's a person new to the team, or a junior or an intern, then I really go into the depth of the code and remark on all the nitty-gritty parts as well.

Another aspect that they looked into was that the code that we review [00:07:30] also influences how we see the person, right? So not only does the person influence how we review the code, but also the other way around. If we get poorly written code, or if we see people make the same mistake over and over again, or they make mistakes that we don't think they should be making at this point, it influences our perception of the skills or even of the character of the author, right?

Even though a couple of people said they try not to let that influence their perception, I think it's a natural cycle that's happening, right? It's going back and forth. A last aspect that the researchers looked at here was how we can reduce the bias, right?

So they brainstormed with the developers that they interviewed, not together, but one by one, about what they think could help to reduce these biases, and there were a couple of ideas I want to throw out. Well, I personally think some of them are just not doable, not feasible, but it's good to think about them, right?

And I would love to hear what you think about them. So one of the ideas that evolved out of this research was to involve at least two reviewers in the review process. I think this is a good idea, and a lot of research papers actually suggest two reviewers as [00:09:00] a better number of reviewers than one, right?

But we also have to think about the turnaround times, which we really drastically increase with two reviewers. It's probably double the time of what we normally have. So there's a lot of workload involved, but it's a good measure. Another idea was to discuss the review criteria with the team and have some standardization.

I think this is an excellent way to do that, right? To really have a very concrete way of knowing what we expect, having a shared understanding. This is what I'm also doing a lot in my code review workshops, right? Working with teams to have this shared understanding, have this standardization, have these review criteria in place, and maybe even the review approaches that you use to do the code reviews.

Then another one was conversations. Well, there's not a lot of info given on what that means, right? Having conversations around that, I think, can mean that we are changing the channel, which is also good, right? Not only giving written feedback, but also getting face to face. And there's research, for example by Kruger et al., that shows that if you have this face-to-face conversation, we are reducing our bias, right?

They also talked about reallocation of teams and team extensions, right? They talked about anonymity, doing anonymous code reviews, and getting help from the team [00:10:30] leader. And I think involving somebody else if we have some conflicts and so on can be a good tactic. They also mentioned checking soft skills during onboarding, team hierarchy, and enabling the creator of the pull request to replace the reviewer. I found that a very interesting one. Well, that's it, more or less, from this study. I would like to hear your ideas and your thoughts on these ideas to reduce the bias in code reviews, right?

As I said, from multiple studies we know there is significant bias in code reviews. There are pushbacks, often for reasons that are not valid, right? Or reasons that are not connected to the code, but more to the person, to what we perceive of the person and so on, which is obviously not something that we want. And there are a couple of negative impacts from that, right?

We are missing errors from people that we perceive as really good, or we are unwilling to review work from some people, or we are giving pushback to people that we don't like, and so on. So this is really interfering with our objectivity. What I missed a little bit from this work is a look at the positive things, right?

Because while it's true that our relationships carry this negative bias, there are also positive things that come with these relationships. So for example, I think friendships or mentorships, right? They might be formed through [00:12:00] code reviews. I even had mentorship relationships that were purely based on code reviews, where we never really worked together in any capacity other than doing code reviews together, looking at pull requests together, communicating via pull requests and so on, right?

There's also the admiration that you can have for a person, right? They briefly mentioned that in the study but didn't go really deep into it. You can really learn a lot from people, and not only objectively learn, but also really feel admiration and feel thankful, right? Having a relationship with a person whose code you read, whom you're learning from.

Even though you don't see them on a day-to-day basis, right? Some people are completely remote and maybe communicate only via code reviews, and I think they are forming relationships, not only in a negative way, but also in a positive way. Yeah, so that's it for today.

I hope you liked this episode. But before you go, don't forget to look at my project, awesomecodereviews.com. Last week, I finished a new article about the 10 best code review approaches, which directly fits today's topic. Because if you have concrete, systematic code review approaches, not just ad hoc reviewing of code however we feel is right today, but really a systematic and explicit approach, then we can also reduce the bias in our code reviews. We can have these [00:13:30] shared expectations in our team. So what are those approaches that I'm talking about?

Well, code review checklists are very well known, I think, but you can also do a change impact analysis, do some cross-referencing, or do a data flow or control flow inspection of your code, right? All of that makes code reviews much more systematic, much more explicit. It also means that we have a much more thorough and coherent approach across our team and reduce our bias.

So yeah, hop over to awesomecodereviews.com and check out this article. I will also link it in the show notes, as well as the research article that I talked about today. And yeah, I hope to see you soon. I can't promise when I will be back on here again, because there are a lot of things going on in my life.

Anyway, I will be back, probably in a couple of weeks, with some news on code reviews or developer experience or some other topics related to software engineering. So I'm really looking forward to that, and have a great day. Bye bye.

Copyright 2022 Doctor McKayla