
What I Want From A Hiring Take-Home Test

We’re doing a bunch of hiring right now. This process deliberately doesn’t include any kind of “take-home” exercise, no “go away and write some code and send it to us when you’re done”, no “programming challenge” or similar. Instead we use a pair programming exercise to try to assess technical ability.

Why? Because building a good take-home exercise is hard, building a bad take-home exercise is easy, and we’ve not yet come up with something that feels right.

Here are a few properties I feel are absolutely critical and often missed:

  • The test represents the work and skills needed
  • The success criteria are clear
  • The investment required from candidates is not excessive

Representative work

Everybody is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid - attributed to Einstein (almost certainly apocryphally)

“Show me on this here whiteboard how to reverse a linked list” ranks pretty highly on my list of “signs this job interview isn’t for me”. That’s not to say it’s never valuable, that computer science (as opposed to software engineering) isn’t important, or that algorithmic fluency is worthless.

For us, though, and the work we do? I don’t think we have an (explicit) linked list in our entire codebase.
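For reference, the exercise in question looks something like this minimal Python sketch (the Node class and reverse function here are illustrative, not taken from any real codebase):

    class Node:
        """A node in a singly linked list."""
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def reverse(head):
        """Iteratively reverse the list, returning the new head."""
        prev = None
        while head is not None:
            nxt = head.next   # remember the rest of the list
            head.next = prev  # point this node backwards
            prev, head = head, nxt
        return prev

    # Build 1 -> 2 -> 3, reverse it, walk the result: prints 3, 2, 1.
    head = Node(1, Node(2, Node(3)))
    head = reverse(head)
    while head:
        print(head.value)
        head = head.next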

The skills and knowledge you generally need for most things we’re hiring for? Understanding complex systems. Debugging. Building resilient and scalable web services. Writing clean and understandable code.

Testing for these things is hard: much of it is subjective and difficult to define. Make sure you’re assessing the things you actually care about, rather than proxies for “has done computer science”.

For us that’s probably having candidates fix bugs, or identify failure modes, or refactor or write tests; it’s not algorithms or “Big Data” or “build a new thing without context”.
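To make that concrete, here’s a hypothetical sketch of the “fix the bugs” flavour of question, in Python; the paginate function and its off-by-one error are invented for illustration:

    # Hypothetical handout: this function plus its failing test.
    # The candidate's job is to spot and fix the off-by-one error.

    def paginate(items, page, per_page):
        """Return the slice of items for a 1-indexed page."""
        start = (page - 1) * per_page  # the buggy handout had: start = page * per_page
        return items[start:start + per_page]

    def test_first_page_starts_at_the_first_item():
        assert paginate(["a", "b", "c", "d"], page=1, per_page=2) == ["a", "b"]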

Clear Success Criteria and Low Candidate Investment

All too often I’ve seen take-home feedback like “not only did they successfully frob the foo as asked, but they also wrote some really good documentation / added a whole bunch of tests / made it webscale” as praise for going above and beyond the requirements.

Sure, lots of work, including lots of our work, doesn’t have clear binary success criteria. Documentation is great and you should expect it from your colleagues. Testing is great and you should expect it from your colleagues. Your organisation should be explicit about these expectations. But expecting these things of people you employ to build software for you is very different from expecting them of people giving up their own time to interview with you, especially if you never make those expectations clear.

If you don’t just want someone to Code The Thing but also care about the cleanliness of their code, tell them. If you want them to document it, tell them. If you want them to test it, tell them. You might then find, once you’ve made all your unstated expectations explicit, that your task is rather more demanding than you first thought, which brings its own suite of problems.

If you’re unclear about your requirements, or they’re fairly demanding, then you’re particularly open to biasing towards those with more spare time - the probably-young person with no dependents - and to missing out on the awesome engineer with a family, a busy job, and multiple competing offers.

FizzBuzz is a great example of this - either it prints out the right thing or it doesn’t. “Fix the bugs” and “decode the thing” questions tend to have well-defined success criteria; tuning the effort required can be a different matter, though.
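For the avoidance of doubt, the whole exercise fits in a dozen lines of Python, using the standard spec: multiples of three print “Fizz”, multiples of five print “Buzz”, multiples of both print “FizzBuzz”:

    # FizzBuzz: the output is either exactly right or it isn't.
    for n in range(1, 101):
        if n % 15 == 0:
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)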

So?

It’s really easy to construct a bad assessment that fails candidates and biases your hiring in non-obvious ways. Please don’t do that.
