From subjective interviews to binary hires - a story

Mon, Jul 23, 2018 5-minute read

Hiring is not easy, and sometimes we fail to do it right. This is the tale of one such failure and what we learned from it.

We hired our 200th engineer midway through last year. It took us three years to get to that magic number. Our business grew 6600x in the same period, and we launched 18 products - creating a Super App in the process. All of this would have been near impossible without hiring smart and trusting our people to weave their magic.

We’re now expanding across Southeast Asia, and good talent is hard to come by. (On that note, we’re hiring. Check out superapp.is for more.) We’re also running various experiments in an attempt to scale our interview process. This post covers just that: our attempt to spar with the nature of interviews to ensure we get the best to work with us.

In this constant tussle, we encountered a problem that illustrates the nature of hiring and why it’s so complicated.

This is not to claim we’ve cracked how to hire quality folks. Quite the contrary.

If anything, this is a short read on how we messed up, and are working on fixing it.

How it all began

As part of this, we experimented with doing one of our interview rounds remotely: our ‘code pairing interview round’, where the candidate pairs with one of our engineers to solve a problem. It’s almost always done in person, but this time it was over a video call. Here’s the feedback we got from the candidate:

  • It wasn’t pair programming
  • Goals weren’t defined
  • Expected outcomes were unclear
  • Unclear requests and imprecise feedback, such as “add more unit tests” instead of “let’s take another look at file X…”

Overall, the candidate felt the interviewer was not prepared for the interview.

This is important to us because, regardless of whether someone joins GOJEK, the interview process as an ‘experience’ is a metric we track.

Even the ones who don’t make it are ambassadors for us. It also helps us hone the manner in which we approach interviews. Everyone who is part of our recruitment process is also a brand evangelist. And somewhere in this process, we messed up.

We’re not always right, and we don’t claim to have the answers. If we think we’re right all the time, we’re simply not doing it right.

Keeping that in mind, we wanted to get to the bottom of it all. Our India Head Sidu Ponnappa and I caught up with our interview panelist to understand what actually went down.

Finding loose ends

Soon, we realised our panelist was not clear on what he was looking for. He had a list of ‘must haves’ of ‘high importance’, most of them influenced by our engineering principles: Test-Driven Development (TDD), pairing, IDE comfort, and so on. But when we asked what would convert this specific candidate into a clear hire, he lacked answers.

There was some understanding of what ‘good’ meant, but the articulation was missing.

This was an eye-opener. He understood the parameters on which he was eliminating candidates, but not how he was ‘selecting’ a good one. The difference between the two is subtle, but huge.

Back to the drawing board

With the aim of making this more meaningful, we asked the interviewer to do a few iterations of the coding problem himself. This helped us understand what he was looking for in that problem, and gave him a draft of his findings for others to see. At that moment, we had an epiphany: a huge chunk of our panel is composed of high-achieving twenty-something professionals who are bound to have some biases that are hard to shake off. Shedding these only comes with experience and a fair degree of maturity. It seems so obvious in hindsight - but it was stunningly invisible in plain sight. We were wrong, and we had to rectify it.

Interviews are all about converting a lot of subjective thoughts into a binary outcome. This is easier said than done, and achieving standardisation is even harder.

Either we ‘filter’ out a candidate because they fail to demonstrate a trait we highly value (say, listening and arguing constructively during pair programming), or we ‘select’ someone because they demonstrate something we value (say, an optimisation mindset in editor usage).
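
Written down, this distinction is just data. Here’s a minimal sketch in Python - with made-up trait names, not our actual criteria - of how ‘filter’ and ‘select’ collapse subjective observations into a binary outcome:

```python
from dataclasses import dataclass, field

@dataclass
class Rubric:
    # 'Filter' traits: missing any one of these eliminates the candidate.
    must_haves: set[str] = field(default_factory=set)
    # 'Select' traits: demonstrating any of these is positive evidence for a hire.
    selectors: set[str] = field(default_factory=set)

    def decide(self, observed: set[str]) -> bool:
        """Collapse subjective observations into a binary hire/no-hire."""
        if not self.must_haves <= observed:
            return False  # filtered out: a must-have was missing
        # Selected only on positive evidence, never by default.
        return bool(self.selectors & observed)

# Made-up trait names, purely for illustration.
rubric = Rubric(
    must_haves={"listens while pairing", "writes tests first"},
    selectors={"optimises editor workflow", "argues constructively"},
)

print(rubric.decide({"listens while pairing", "writes tests first",
                     "argues constructively"}))          # True
print(rubric.decide({"listens while pairing", "writes tests first"}))  # False: no selector seen
```

The asymmetry is the point: a missing must-have can only reject, and only positive evidence can select - no one gets hired by default.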

In the early stages, an interviewer’s bias doesn’t factor into this process:

We want to see more of ourselves in others, but that bias is a death knell for diversity in an org.

One way to fight this is to write down what an ideal candidate looks like for the role they’re being interviewed for. (Keep refining it as you interact with and learn from candidates.) So, the next time you don’t like a specific candidate, dig deeper - maybe they have more to offer than your own confined definition of what the role requires.

Needless to say, we went back to the rejected candidate, apologised for our shortcomings, and did the entire process all over again. And this process is still a WIP. We’re investing time and effort to make this better. If you have any tips, ideas or suggestions, please ping me.

We’re not always right, but we sure as hell want to be able to correct ourselves. For me, that’s the bigger challenge: failing as fast as possible and remedying it.