How to Interview Engineers

We do a lot of interviewing at Triplebyte. Indeed, over the last 2 years, I've interviewed just over 900 engineers. Whether this was a good use of my time can be debated! (I sometimes wake up in a cold sweat and doubt it.) But regardless, our goal is to improve how engineers are hired. To that end, we run background-blind interviews, looking at coding skills, not credentials or resumes. After an engineer passes our process, they go straight to the final interview at companies we work with (including Apple, Facebook, Dropbox and Stripe). We interview engineers without knowing their backgrounds, and then get to see how they do across multiple top tech companies. This gives us, I think, some of the best available data on interviewing.

In this blog post, I'm going to present what we've learned so far from this data. Technical interviewing is broken in a lot of ways. It's easy to say this. (And many blog posts do!) The hard part is coming up with what to do about it. My goal for this post is to take on that challenge, and lay out specific advice for hiring managers and CTOs. Interviewing is hard. But I think that many of the problems can be fixed by running a careful process [1].

The Status Quo

Most interview processes include two main steps:
  1. Applicant screening
  2. In-person final interview
The goal of applicant screening is to filter out candidates early and save engineering time in interviews. The screening process usually involves a recruiter scanning a candidate's resume (in about 10 seconds), followed by a 30-minute to 1-hour phone call. Eighteen percent of the companies we work with also use a take-home programming challenge (either in place of or in addition to the phone screen). Interestingly, the screening steps are where the significant majority of candidates are rejected. Across all the companies we work with, over 50% of candidates are rejected on the resume scan alone, and another 30% are rejected on the phone screen or take-home. Screening is also where hiring is at its most capricious: recruiters are overwhelmed with volume and need to make snap decisions, and this is where credentials and pattern matching come into play.

In-person final interviews almost universally consist of a series of 45-minute to 1-hour sessions, each with a different interviewer. The sessions are primarily technical (with one or two at each company focusing on culture fit and soft skills). The final hire/no-hire decisions are made in a decision meeting after the candidate has left, attended by the hiring manager and everyone who interviewed the candidate. Essentially, a candidate needs at least one strong advocate and no strong detractors to be made an offer [2].

Beyond the common format, however, final interviews vary widely.
  • 39% of the companies we work with run interviews with a marker on a whiteboard
  • 52% allow the candidate to use their own computer (the remaining 9% are inconsistent)
  • 55% let interviewers pick their own questions (the remaining 45% use a standard bank of questions)
  • 40% need to see academic CS skills in a candidate to make an offer
  • 15% dislike academic CS (and think that talking about CS is a sign that a candidate will not be productive)
  • 80% let candidates use any language in the interview (the remaining 20% require a specific language)
  • 5% explicitly evaluate language minutiae during the interview
Across all the companies we work with, 22% of final interviews result in a job offer. (This figure comes from asking companies about their internal candidate pipeline. Candidates applying through Triplebyte get offers after 53% of their interviews.) About 65% of offers are accepted (result in a hire). After 1 year, companies are very happy with approximately 30% of hires, and have fired about 5% [3].

False Negatives vs. False Positives

So, what's wrong with the status quo? Fire rates, after all, don't seem to be out of control. To see the problem, consider that there are two ways an interview can fail. An interview can result in a bad engineer being hired and later fired (a false positive). And an interview can disqualify someone who could have done the job well (a false negative). Bad hires are very visible and expensive to a company (in salary, management cost and morale for the entire team). A bad hire sucks the energy from a team. Candidates who could have done the job well but are not given the chance, in contrast, are invisible. Any one case is always debatable. Because of this asymmetry, companies heavily bias their interviews toward rejection.

This effect is strengthened by noise in the process. Judging programming skill in 1 hour is just fundamentally hard. Add to this a dose of pattern matching and a few gut calls as well as the complex soup of company preferences discussed above, and you're left with a very noisy signal.

In order to keep the false positive rate low in the face of this noise, companies have to bias decisions ever further toward rejection. The result is a process that misses good engineers, still often privileges credentials over real skill, and often feels capricious and frustrating to the people involved. If everyone at your company had to re-interview for their current jobs, what percentage would pass? This is a scary question. The answer is almost certainly well under 100%. Candidates are harmed when they are rejected by companies they could have done great work for, and companies are harmed when they can't find the talent they need.

To be clear, I am not saying that companies should lower the bar in interviews. Rejection is the point of interviewing! I'm not even saying that companies are wrong to fear false positives far more than false negatives. Bad hires are expensive. I am arguing that a noisy signal paired with the need to avoid bad hires results in a really high false negative rate, and this harms people. The solution is to improve the signal.

Concrete ways to reduce noise in interviews

1. Decide what skills you're looking for

There is not a single set of skills that defines a good programmer. Rather, there is a sea of diverse skill sets. No engineer can be strong in all of these areas. In fact, at Triplebyte we often see excellent, successful software engineers with entirely disjoint sets of skills. The first step to running a good interview, then, is deciding what skills matter for the role. I recommend you ask yourself the following questions (these are the questions we ask when we onboard a new company at Triplebyte).
  • Do you need fast, iterative programmers, or careful rigorous programmers?
  • Do you want someone motivated by solving technical problems, or building product?
  • Do you need skill with a particular technology, or can a smart programmer learn it on the job?
  • Is academic CS / math / algorithm ability important or irrelevant?
  • Is understanding concurrency / the C memory model / HTTP important?
There are no right answers to these questions. We work with successful companies that come down on both sides of each one. But what is key is making an intentional choice, based on your needs. The anti-pattern to avoid is simply picking interview questions randomly (or letting each interviewer decide). When that happens, company engineering culture can skew in a direction where more and more engineers have a particular skill or approach that may not really be important for the company, and engineers without this skill (but other important skills) are rejected.

2. Ask questions as close as possible to real work

Professional programmers are hired to solve large, sprawling problems over weeks and months. But interviewers don't have weeks or months to evaluate candidates; each interviewer typically has 1 hour. So instead, interviewers look at a candidate's ability to solve small problems quickly, while under duress. This is a different skill. It is correlated (interviews are not completely random), but it's not perfectly correlated. Minimizing this difference is the goal when developing interview questions.

This is achieved by making interview questions as similar as possible to the job you want the candidate to do (or to the skill you're trying to measure). For example, if what you care about is back-end programming, asking the candidate to build a simple API endpoint and then add features is almost certainly a better question than asking them to solve a BFS word chain problem. If you care about algorithm ability, asking the candidate to apply algorithms to a problem (say, build a simple search index, perhaps backed by a BST and a hashmap for improved deletion performance) is almost certainly a better problem than asking them to determine if a point is contained in a concave polygon. And a debugging challenge, where the candidate works in a real codebase, is almost certainly better than asking the candidate to solve a small problem on a whiteboard.
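To make the search-index idea concrete, here is a minimal sketch (in Python) of the kind of scaffolding such a question could start from. This is my own illustration, not a question we actually ask: it backs the index with a plain hash map from token to document ids, and a follow-up part might ask the candidate to add the BST for ordered results or cheaper deletion.

```python
# A minimal inverted index: token -> set of document ids.
# Class and method names are illustrative, not a prescribed answer.
class SearchIndex:
    def __init__(self):
        self.index = {}      # token -> set of doc ids containing it
        self.documents = {}  # doc id -> original text (kept for deletion)

    def add_document(self, doc_id, text):
        self.documents[doc_id] = text
        for token in text.lower().split():
            self.index.setdefault(token, set()).add(doc_id)

    def remove_document(self, doc_id):
        # The stored text tells us exactly which postings to clean up.
        text = self.documents.pop(doc_id, "")
        for token in text.lower().split():
            postings = self.index.get(token)
            if postings is not None:
                postings.discard(doc_id)
                if not postings:
                    del self.index[token]

    def search(self, query):
        # Return ids of documents containing every query token.
        postings = [self.index.get(t, set()) for t in query.lower().split()]
        return set.intersection(*postings) if postings else set()
```

Each method is a natural checkpoint: a candidate can get indexing working, then search, then deletion, and you can evaluate progress at every step.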

That said, there is an argument for doing interviews on whiteboards. As an interviewer, I don't care if an engineer has the Python itertools module memorized. I care if they can think through how to use iterators to solve a problem. By having the candidate work on a whiteboard, I free them from having to get the exact syntax right, and let them focus on the logic. Ultimately I think this argument fails, because there's just not enough justification for the different format. You can get all the benefit by allowing the candidate to work on a computer, but telling them their code does not need to run (or even better, making it an open book interview and letting them look up anything they want with Google).

There is an important caveat to the idea that interview questions should mirror work: an interview question must be free from external dependencies. For example, asking a candidate to write a simple web scraper in Ruby might seem like a good real-world problem. However, if a candidate needs to install Nokogiri (a Ruby parsing library that can be a pain to install) and they end up burning 30 minutes wrestling with native extensions, this becomes a horrible interview. Not only has time been wasted, but the candidate's stress has gone through the roof.

3. Ask multi-part questions that can't be given away

Another good rule of thumb for interview questions is to avoid questions that can be “given away”, i.e. avoid questions where there's some magic piece of information that the candidate could have read on Glassdoor ahead of time that would allow them to answer easily. This obviously rules out brain teasers or any question requiring a leap of insight. But it goes beyond that, and means that questions need to be a series of steps that build on each other, not a single central problem. Another useful way to think about this is to ask yourself whether you can help a candidate who gets stuck and still end the interview with a positive impression. On a one-step question, if you have to give the candidate significant help, they fail. On a multi-part problem, you can help with one step, and the candidate can then ace everything else and do well.

This is important not only because your question will leak onto Glassdoor, but also (and more importantly) because multi-part problems are less noisy. Good candidates will become stressed and get stuck. Being able to help them and see them recover is important. There is significant noise in how well a candidate solves any one nugget of programming logic, based on whether they've seen a similar problem recently, and probably just chance. Multi-part problems smooth out some of that noise. They also give candidates the opportunity to see their effort snowball. Effort applied to one step often helps them solve a subsequent step. This is an important dynamic when doing real work, and capturing it in an interview decreases noise.

To give examples, asking a candidate to implement the game Connect Four in a terminal (a series of multiple steps) is probably a better question than asking a candidate to rotate a matrix (a single step, with some easy giveaways). And implementing k-means clustering (multiple operations that build on each other) is probably better than determining the largest rectangle that can fit under a histogram.
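To show why k-means has this multi-part structure, here is a bare-bones sketch (mine, in pure Python on 2D points, with illustrative function names). Each function is a step the interviewer can help with in isolation: the distance measure, the assignment step, the center update, and the loop that ties them together.

```python
import random

def distance_sq(a, b):
    # Squared Euclidean distance between two 2D points.
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def assign(points, centers):
    # Step 1: put each point in the cluster of its nearest center.
    clusters = [[] for _ in centers]
    for p in points:
        i = min(range(len(centers)), key=lambda c: distance_sq(p, centers[c]))
        clusters[i].append(p)
    return clusters

def update(clusters, old_centers):
    # Step 2: move each center to the mean of its cluster.
    new_centers = []
    for cluster, old in zip(clusters, old_centers):
        if cluster:
            new_centers.append((sum(p[0] for p in cluster) / len(cluster),
                                sum(p[1] for p in cluster) / len(cluster)))
        else:
            new_centers.append(old)  # leave empty clusters in place
    return new_centers

def k_means(points, k, max_iterations=100):
    centers = random.sample(points, k)
    clusters = assign(points, centers)
    for _ in range(max_iterations):
        new_centers = update(clusters, centers)
        if new_centers == centers:  # converged
            break
        centers = new_centers
        clusters = assign(points, centers)
    return centers, clusters
```

A candidate who stalls on the update step can still earn credit for a clean assignment step, which is exactly the noise-smoothing property described above.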

4. Avoid hard questions

If a candidate solves a really hard question well, that tells you a lot about their skill. However, because the question is hard, most candidates will fail to solve it well. The expected amount of information gained from a question, then, is heavily impacted by the difficulty of the question. We find that the optimal difficulty level is significantly easier than most interviewers guess.

This effect is amplified by the fact that there are two sources of signal when interviewing a candidate: whether they give the “correct” answer to a question, and their process / how easily they arrive at that answer. We've gathered data on this at Triplebyte (scoring questions both on whether the candidate reached the correct answer and on how much effort it took them, and then measuring which scores predict success at companies). What we found is a tradeoff. For harder questions, whether the candidate answers correctly carries most of the signal. For easier questions, in contrast, most of the signal is found in the candidate's process and how much they struggle. Considering both sources of signal, the sweet spot is toward the easier end of the spectrum.

The rule of thumb we now follow is that interviewers should be able to solve a problem in 25% of the time they expect candidates to spend. So, if I'm developing a new question for a 1-hour interview, I want my co-workers (with no warning) to be able to answer the question in 15 minutes. Paired with the fact that we use multi-part real-world problems, this means that the optimal interview question is really pretty straightforward and easy.

To be clear, I am not arguing for lowering the bar in terms of pass rate. I am arguing for asking easy questions, including in your evaluation how easily the candidate answered them, and then judging fairly harshly. This is what we find optimizes signal. It has the additional benefit of being lower stress for most applicants.

To give examples, asking a candidate to create a simple command line interface with commands to store and retrieve key-value pairs (and adding functionality if they do well) is probably a better problem than asking a candidate to implement a parser for arithmetic expressions. And a question involving the most common data structures (lists, hashes, maybe trees) is probably better than a question about skiplists, treaps or other more obscure data structures.
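As a sketch of what the key-value CLI question could look like at its core (my code; the command names set/get/quit are an assumption, not a fixed spec):

```python
# A minimal key-value store REPL. Extensions if the candidate does
# well: a delete command, persistence to disk, transactions.
def main():
    store = {}
    while True:
        parts = input("> ").split()
        if not parts:
            continue
        cmd, args = parts[0], parts[1:]
        if cmd == "set" and len(args) == 2:
            store[args[0]] = args[1]
        elif cmd == "get" and len(args) == 1:
            print(store.get(args[0], "(not found)"))
        elif cmd == "quit":
            break
        else:
            print("usage: set <key> <value> | get <key> | quit")

if __name__ == "__main__":
    main()
```

The point is not that the code is hard; it's that the question starts easy and leaves room to layer on difficulty only if the candidate earns it.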

5. Ask every candidate the same questions

Interviews are about comparing candidates. The goal is to sort candidates into those who can contribute well to the company and those who can't (and in the case of hiring for a single position, select the best person who applies). Given this, there is no justification for asking different questions to different candidates. If you evaluate different candidates for the same job in different ways, you are introducing noise.

The reason it continues to be common to select questions in an ad-hoc fashion, I think, is that it's what interviewers prefer. The engineers at tech companies typically don't like interviewing. It's something they do sporadically, and it takes them away from their primary focus. In order to standardize the questions asked of every candidate, the interviewers would need to take more time to learn the questions and talk about scoring and delivery. And they would need to re-do this every time the question changed. Also, always asking the same question is just a little more tedious.

Unfortunately, the only answer here is for the interviewers to put in the effort. Consistency is key to running good interviews, and that means asking every candidate the same questions, and standardizing delivery. There's simply no alternative.

6. Consider running multiple tracks

In conflict with my previous point, consider offering several completely different versions of your interview. The first step when designing an interview is to think about what skills matter. However, some of the answers might be in conflict! It's pretty normal, for example, to want some really mathy engineers, and some very productive / iterative engineers (maybe even for the same role). In this case, consider offering multiple versions of the interview. The key point is that you need to be at enough scale that you can fully standardize each of the tracks. This is what we do at Triplebyte. What we've found is that you can simply ask each candidate which type of interview they'd prefer.

7. Don't let yourself be biased by credentials

Credentials are not meaningless. Engineers who have graduated from MIT or Stanford, or worked at Google or Apple, really are better, as a group, than engineers who have not. The problem is that the vast majority of engineers (myself included) have done neither of these things. So if a company relies on these signals too heavily, it will miss the majority of skilled applicants. Giving credentials some weight in a screening step, then, is not totally irrational. We don't do this at Triplebyte (we do all of our evaluation 100% background blind), but it might make sense elsewhere.

Letting credentials sway final interview decisions, however, does not make sense. And we have data showing that this happens. For a given level of performance on our background-blind process, candidates with a degree from a top school go on to pass their interviews at companies at a 30% higher rate than candidates without the name-brand resume. If interviewers know that a candidate has a degree from MIT, they are more willing to forgive rough spots in the interview.

This is noise, and you should avoid it. The most obvious way is just to strip school and company names from resumes before giving them to your interviewers. Some candidates may mention their school or company, but we do all our interviews without knowing the candidates' backgrounds, and it's actually pretty rare for a candidate to bring it up during technical evaluation.

8. Avoid hazing

One of the ugliest ways interviews can fail is by taking on an aspect of hazing. Interviews are not just about evaluating the skill of a candidate; they're also about a group or team admitting a member. In that second capacity, they can become a rite of passage: yes, the interview is stressful and horrible, but we all did it, so the candidates should too. This can be accentuated when a candidate is doing badly. As an interviewer, it can be frustrating to watch a candidate beat their head against a problem when the answer seems so obvious! You can get short-tempered and frustrated. This, of course, only increases the stress for the applicant, in a downward spiral.

This is something you want to stay a mile away from. The solution is talking about the issue and training the interviewers. One trick that we use, when a candidate is doing really poorly, is to switch from evaluation mode, where the goal is to judge the candidate, to teaching mode, where the goal is to make the candidate understand the answer to the question. Mentally making the switch can help a lot. When you're in teaching mode, there's no reason to withhold information or be anything other than friendly.

9. Make decisions based on max skill, not average or min skill

So far, I've only talked about individual questions, not the final interview decision. My advice here is to try to base the decision on the maximum level of skill that the candidate shows (across the skill areas you care about), not the average level or minimum level.

This is likely what you are already doing, intentionally or not! The way hire/no hire decisions are made is that everyone who interviewed a candidate gets together in a meeting, and an offer is made if at least one person is strongly in favor of hiring, and no one is strongly against. To get one interviewer to be strongly in favor, what a candidate needs to do is ace one section of the interview. Across our data, max skill is the attribute that's most correlated with acing at least one section of a company's interview. However, to be made an offer, a candidate also needs no interviewer to be strongly against them. Strong noes come when a candidate looks really stupid on a question.

Here we find a great deal of noise. There are so many different ways to be a skilled engineer that almost no candidate can master them all. This means that if you ask the right (or wrong) question, any engineer can look stupid. Candidates get offers, then, when at least one interview lines up with an area of strength (max skill) and no area lines up with a significant weakness. The problem is that this is noisy. The same engineer who fails one interview because they looked stupid on a question about networking passes other interviews with flying colors because that topic did not come up.

The best solution, I think, is for companies to focus on max skill, and be a little more comfortable making offers to people who looked bad on parts of the interview. This means looking for strong reasons to say yes, and not worrying so much about technical areas where the candidate was weak. I don't want to be absolute about this. There are of course technical areas that just matter to a company, and deciding that you want a culture where everyone on the team is at a certain level in a certain area may well make sense. But focusing more on max skill does reduce interview noise.

Why do interviews at all?

A final question I should answer is why we do interviews at all. I'm sure some readers have been gritting their teeth, saying “why think so much about a broken system? Just use take-home projects! Or just use trial employment!” After all, some very successful companies use trial employment (where a candidate joins the team for a week), or totally replace in-person interviews with take-home projects. Trial employment makes a lot of sense. Spending a week working beside an engineer (or seeing how they complete a substantial project) almost certainly provides a better measure of their abilities than watching them solve interview problems for 1 hour. However, two problems keep trial employment from replacing standard interviews:
  1. Trial employment is expensive for the company. No company can spend a full week with every person who applies. To decide who makes it to the trial, companies must use some other interview process.
  2. Trial employment (and large take-home projects) are expensive for the candidate. Even when they are paid, not all candidates have the time. An engineer working a full-time job, for example, may simply not be able to take the time off. And even if they can, many won't. If an engineer already has job offers in hand, they are less likely to be willing to take on the uncertainty of a work trial. We see this clearly among Triplebyte candidates. Many of the best candidates (with other offers in hand) will simply not do large projects or work trials.
The result is that trial employment is an excellent option to offer some candidates. I think if you have the scale to support multiple tracks, adding a trial employment track is a great idea. However, it's not viable as a total replacement for interviews.

Talking to candidates about past experience is also sometimes put forward as a replacement for technical interviews. To see if a candidate can do good work in the future, the logic goes, just see what they've done in the past. We've tested this at Triplebyte, and unfortunately we've not had great results. Communication ability (ability to sell yourself) ended up being a stronger signal than technical ability. It's just too common to find well-spoken people who exaggerate their role (taking credit for a team's work), and modest people who downplay what they did. Given enough time and enough questioning, it should be possible to get to the bottom of this. However, we found that within the time limits of a regular interview, talking about past experience is not a general replacement for interviewing. It is a great way to break the ice with a candidate and get a sense of their interests (and judge communication ability and perhaps culture fit). But it's not a viable total replacement for interviews.

Good things about programming interviews!

I want to end this post on a more positive note. For everything that's wrong with interviews, there is a lot that's right about them.

Interviews are direct skill assessment. I have friends who are teachers, who tell me that teacher interviews are basically a measure of communication ability (ability to sell yourself) and a credential. This seems to be true of many, many professions. Silicon Valley is not a perfect meritocracy. But we do at least try to directly measure the skills that matter, and stay open to the idea that anyone with those skills, regardless of background, can be a great engineer. Credential bias often stands in the way of this. But we've been able to mostly overcome this at Triplebyte, and help a lot of people with unconventional backgrounds get great tech jobs. I don't think Triplebyte would be possible, for example, in the legal field. The reliance on credentials is just too high.

Programmers also choose interviews. While this is a very controversial topic (there are certainly programmers who feel differently), when we've run experiments offering different types of evaluation, we find that most programmers still pick a regular interview. And we find that only a minority of programmers are interested in companies that use trial employment or take-home projects. For better or worse, programming interviews seem to be here to stay. Other types of evaluation are great supplements, but they seem unlikely to replace interviews as the primary way engineers are evaluated. To misquote Churchill, “Interviews are the worst way to evaluate engineers, except for all the other ways that have been tried from time to time.”

Conclusion

Interviewing is hard. Human beings are hopelessly complex. On some level, judging human ability in a 4-hour interview is just a fool's errand. I think it's important to stay humble about this. Any interview process is bound to fail a lot of the time. People are just too complex.

But that's not an argument for giving up. Trying to run a meritocratic process is better than not trying. At Triplebyte, our interview is our product. We brainstorm ideas, we test them, and we improve over time. This, I think, is the approach that's needed to improve how engineers are hired. In this post, I've shared some of the big things we've learned over the last two years. I'd love to get feedback, and hear if these ideas are helpful for people. Send me an email at ammon@triplebyte.com.

If you're a company looking for engineers, we'd also love to help you hire. You can send me an email, or check out our companies page.

Thanks to Adora Cheung and Jared Friedman for reading earlier drafts of this post.


[1] I'm limiting this blog post to technical skill assessment. I'll be writing a future post about culture fit, behavioral interviews and non-technical evaluation.

[2] There is of course variation here. At opposite ends of the spectrum we see companies that require a unanimous yes from every interviewer to make a hire, and companies where the hiring manager is solely responsible for the decision.

[3] These numbers are what companies report about their internal candidates. And the numbers vary widely between companies (they report fire rates, for example, as low as 1% and as high as 30%). The numbers are significantly better for Triplebyte candidates. So far, our candidates at companies have received offers after 53% of interviews, and 2% have been fired.

Triplebyte for front-end and mobile engineers

Today, we're launching new versions of the Triplebyte process for front-end and mobile engineers. We started Triplebyte to try to fix some of the problems with programming interviews. Over the last two years, we've built a background-blind interview process, and helped hundreds of engineers get jobs. We've worked with people trying to break into their first job (we helped a pizza delivery person get an engineering job at Instacart), and we've worked with credentialed engineers looking for new opportunities (and helped startups hire their first employee). I'm proud of the process we built. We've convinced major companies to waive their phone screens for our candidates, and globally our candidates receive job offers after 1 out of every 2 interviews they do. (This is about twice the average rate in the industry.) 

But I have a confession to make. Our interviews do not work well for specialists. We built our process by interviewing thousands of engineers, and empirically testing which questions are most predictive of engineering skill. Because most engineers are generalists (and most companies hire primarily generalists), general web engineering has come to dominate what we look for. We do work with front-end and mobile engineers. But until today, we've required that they pass a process dominated by general programming and back-end web concepts.

Today we're changing this. We've spent the last two months repeating the process that we went through when we launched Triplebyte. We've interviewed hundreds of candidates, tested questions, and are now launching background-blind front-end and mobile interviews!

Going deeper

Our new interviews are particularly exciting because they're a big step toward solving a broader problem. One thing I've learned doing 900 background-blind interviews is that skill in one area does not necessarily translate to skill in another (even adjacent) area. We see expert distributed systems folks who do remarkably poorly talking about a simple normalized schema, and strong back-end web developers who choke when talking about JavaScript. It's easy to quip that perhaps these are not skilled engineers. But they are. These are often people who have done important work at successful companies. The truth is that there is no single definition of engineering skill. The field is broader than what any one engineer can master, and as a result everyone will look weak if you ask them the right question. Even among companies hiring generalists, there is no consensus on what skills make up the core of the discipline (everyone seems to think it's whatever they themselves are best at).

This fact is why engineers who go through our process pass their interviews with companies at an elevated rate. Each company has a specific engineering culture, and values a specific set of skills (either explicitly, or in the practices and questions of interviewers that have built up over time). But companies don't have a good way to telegraph this to applicants. All they can do is fail every engineer who applies and has the wrong set of strengths. What we've done so far at Triplebyte is design an interview that covers the most common areas that the companies we work with care about. We then pass anyone in our interview who is strong in any of these areas, and match them with the companies that care about their areas of strength.

Matching in this way has doubled our candidates' offer rate at companies. But to bring this back to our new front-end and mobile interviews, we've so far been limited by the fact that we give every candidate the same interview. We've only been able to match based on the most common skills. The front-end and mobile interviews change this! We're now at a scale where we can break out specialized tracks, and measure broader skills. This is the direction interviewing needs to move, and front-end and mobile are just the beginning. Our candidates already receive offers after 50% of the interviews they do. With broader data, I think we can push this number up. I think a 75% pass rate is possible.

Conclusion

If you want to give our front-end or mobile (or generalist) process a try, you can create an account here. After entering your details, you can pick which track you want to try (you can go back and try multiple as well). The front-end and mobile processes are new. I'm sure we'll be making tweaks / fixing issues. I'd love any feedback you have on the process (or on this blog post). Send me an email at ammon@triplebyte.com.

If you're a company hiring engineers and want to learn more about using Triplebyte, you can get started here.

Bootcamps vs. College

Programming bootcamps seem to make an impossible claim. Instead of spending four years in university, they say, you can learn how to be a software engineer in a three month program. On the face of it, this sounds more like an ad for Trump University than a plausible educational model.

But this is not what we’ve found at Triplebyte. We do interviews with engineers, and match them with startups where they’ll be a good fit. Companies vary widely in what skills they look for, and by mapping these differences, we’re able to help engineers pass more interviews and find jobs they would not have found on their own. Over the last year, we’ve worked with about 100 bootcamp grads, and many have gone on to get jobs at great companies. We do our interviews blind, without knowing a candidate's background, and we regularly get through an interview and give a candidate very positive scores, only to be surprised at the end when we learn that the candidate has only been programming for 6 months.

Bootcamp grads are junior programmers. They have a lot to learn, and represent an investment on the part of a company that hires them. That said, this is also true of recent college graduates. We’ve found bootcamp grads as a group to be better than college grads at web programming and writing clean, modular code, and worse at algorithms and understanding how computers work. All in all, we’ve had roughly equivalent success working with the two groups.

In this post, I'm going to try to shed some light on how this can be true. I’ll dig more into the differences that we see between the two groups, and hopefully explain how some people can become competitive junior programmers in under a year.

The Analysis

Our technical interview at Triplebyte is about two and a half hours long, and is broken into four main parts, focusing on practical programming, web architecture, low-level system understanding, and algorithmic understanding. Not every engineer completes every question (we let programmers focus on their strengths), but the process gives us a good measure of the relative strengths of each engineer in each of these areas.

To get a better idea of how bootcamp grads and college grads compare, I graphed the two groups’ average performance in each of these areas. The y axis is the score on each problem (where 1 = strong no, 2 = weak no, 3 = weak yes, 4 = strong yes). For reference, I also included the entire population of applicants, and all engineers who pass our interview.

The first thing to note about this graph is that bootcamp grads do as well as or better than college grads on practical programming and web system design, and do worse on algorithms and low-level systems. Our practical programming questions are not easy. They require understanding a problem, coming up with an abstraction to solve the problem, and rendering this in code. If anything, our practical programming questions require more on-the-spot thinking than our algorithm problems do. They do not, however, require academic CS or math, or any specific knowledge. This is the crux of the issue. Bootcamp grads match or beat college grads on practical skills, and lose on deep knowledge.

A similar pattern holds on the design questions. Bootcamp grads do better on web questions involving web servers, databases and load balancers. College grads do better on low-level design questions involving bit/bytes, threading, memory allocation, and understanding how a computer actually works.

Triplebyte sees a biased sample of both bootcamp grads and college grads. We do background-blind screening via an online programming test, and only interview engineers who pass this test. Thus we have no way to know what percentage of bootcamp grads and college grads fail early in our process, and the graph above reflects only people who pass our test. Still, a significant number of bootcamp grads pass our test and go on to do as well as college grads on our interviews.

I want to specifically draw attention to the performance of college grads on algorithm problems. They are not only better than bootcamp grads, they are a lot better. They are significantly better than the average programmer making it to our interview (most of whom have 2+ years of experience), and almost as good as the average engineer who passes it. This is interesting. It backs up the assertion that algorithm skills are not used on the job by most programmers, and atrophy over time.

How is this possible?

Our data aside, it still just seems hard to believe that 3 months can compete with a 4-year university degree. The time scales just seem off. The first thing to note is that the difference in instructional time is not as large as it seems. Bootcamps are intense. Students complete 8 hours of work daily, and many stay late and work on the weekends (one popular bootcamp runs 6 days per week). TAs are working with the students during this entire time. What bootcamps lack in duration they perhaps make up in intensity.

The second point is that bootcamps teach practical skills. Traditional CS programs spend significant amounts of time on concepts like NP-completeness and programming in Scheme. Now, I in no sense mean to belittle this material. I love academic CS (and the Cook–Levin theorem). It’s beautiful, fascinating stuff, and I got a lot out of learning it. But it is not directly applicable to what most programmers do most of the time. Bootcamps are able to show outsized results by relentlessly focusing on practical skills. Bootcamp TAs continually critique each student’s coding style. They teach testing. They teach their students how to use their editors. How to use an editor is something that a traditional CS degree program would never think of teaching.

This does not leave bootcamp grads equivalently skilled to university grads. If you want to do hard algorithmic or low-level programming, you’re still better served by a traditional CS education. But it does leave the best of them surprisingly competitive for many entry-level development positions.

Conclusion

There are two ways to interpret the results in this blog post. One way is to say that bootcamps are window dressing. They teach inexperienced programmers what they need to know to look like good programmers, but skimp on the heart of the discipline. However, I think this view is too cynical. The other way to view this post is as evidence that bootcamps focus on totally different areas than CS programs. They focus intensely on the practical skills required to be a productive programmer. These are skills that CS programs expect students to pick up around the edges of their course work. By being this pragmatic and giving students an intense workload, bootcamps are able to match the practical skills of CS grads.

Bootcamp grads don’t make sense for all companies. Just like recent college grads, they are an investment for a company that hires them. They have much to learn. And they are clearly worse at algorithms and low-level systems than engineers with academic training. A database or self-driving car company should probably stick to folks with CS degrees. But the significant majority of companies need programmers to solve practical problems on the web. On this axis, we’ve found bootcamp grads totally competitive.

Triplebyte is one year old. In that time, we’ve both placed bootcamp grads at top companies, and also watched them grow. We’ve watched them learn some of the CS skills that they lack on graduation. We’ve watched them learn about large-scale production systems. We’ve watched them take on leadership positions. It’s really incredible how quickly and how well the best bootcamp grads learn. It’s been a pleasure to work with them, and we’ll definitely keep working with bootcamp grads.

If you’re a bootcamp grad (or a college grad, or anyone else), and are interested in a way to find companies where you’re a strong technical match, give our process a try. I'm also interested in your thoughts on this post! Send me an email at ammon@triplebyte.com.

Thanks to Jared Friedman and Daniel Gackle for reading drafts of this, and Buck Shlegeris for major help writing it.

How to pass a programming interview

This post started as the preparation material we send to our candidates, but we decided to post it publicly.

Being a good programmer has a surprisingly small role in passing programming interviews. To be a productive programmer, you need to be able to solve large, sprawling problems over weeks and months. Each question in an interview, in contrast, lasts less than one hour. To do well in an interview, then, you need to be able to solve small problems quickly, under duress, while explaining your thoughts clearly. This is a different skill [1]. On top of this, interviewers are often poorly trained and inattentive (they would rather be programming), and ask questions far removed from actual work. They bring bias, pattern matching, and a lack of standardization.

Running Triplebyte, I see this clearly. We interview engineers without looking at resumes, and fast-track them to on-sites at YC companies. We’ve interviewed over 1000 programmers in the last nine months. We focus heavily on practical programming, and let candidates pick one of several ways to be evaluated. This means we work with many (very talented) programmers without formal CS training. Many of these people do poorly on interviews. They eat large sprawling problems for breakfast, but they balk at 45-min algorithm challenges.

The good news is that interviewing is a skill that can be learned. We’ve had success teaching candidates to do better on interviews. Indeed, the quality that most correlates with a Triplebyte candidate passing interviews at YC companies is not raw talent, but rather diligence. 

I fundamentally do not believe that good programmers should have to learn special interviewing skills to do well on interviews. But the status quo is what it is. We’re working at Triplebyte to change this. If you’re interested in what we’re doing, we’d love you to check out our process. In the meantime, if you do want to get better at interviewing, this blog post describes how we think you can most effectively do so. 

1. Be enthusiastic

Enthusiasm has a huge impact on interview results. About 50% of the Triplebyte candidates who fail interviews at companies fail for non-technical reasons. This is usually described by the company as a “poor culture fit”. Nine times out of ten, however, culture fit just means enthusiasm for what a company does. Companies want candidates who are excited about their mission. This carries as much weight at many companies as technical skill. This makes sense. Excited employees will be happier and work harder.

The problem is that this can be faked. Some candidates manage to convince every company they talk to that it’s their dream job, while others (who are genuinely excited) fail to convince anyone. We’ve seen this again and again. The solution is for everyone to get better at showing their enthusiasm. This is not permission to lie. But interviewing is like dating. No one wants to be told on a first date that they are one option among many, even though this is usually the case. Similarly, most programmers just want a good job with a good paycheck. But stating this in an interview is a mistake. The best approach is to prepare notes before an interview about what you find exciting about the company, and bring this up with each interviewer when they ask if you have any questions. A good source of ideas is to read the company’s recent blog posts and press releases and note the ones you find exciting.

This idea seems facile. I imagine you are nodding along as you read this. But (as anyone who has ever interviewed can tell you) a surprisingly small percentage of applicants do this. Carefully preparing notes on why you find a company exciting really will increase your pass rate. You can even reference the notes during the interview; bringing prepared notes shows preparation.

2. Study common interview concepts

A large percentage of interview questions feature data structures and algorithms. For better or worse, this is the truth. We gather question details from our candidates who interview at YC companies (we’ll be doing an in-depth analysis of this data in a future article), and algorithm questions make up over 70% of the questions that are asked. You do not need to be an expert, but knowing the following list of algorithms and data structures will help at most companies.

  • Hash tables
  • Linked lists
  • Breadth-first search, depth-first search
  • Quicksort, merge sort
  • Binary search
  • 2D arrays
  • Dynamic arrays
  • Binary search trees
  • Dynamic programming
  • Big-O analysis

Depending on your background, this list may look trivial, or may look totally intimidating. That’s exactly the point. These are concepts that are far more common in interviews than they are in production web programming. If you’re self-taught or years out of school and these concepts are not familiar to you, you will do better in interviews if you study them. Even if you do know these things, refreshing your knowledge will help. A startlingly high percentage of interview questions reduce to breadth-first search or the use of a hash table to count uniques. You need to be able to write a BFS cold, and you need to understand how a hash table is implemented.
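For reference, here is one common shortest-path formulation of BFS, the kind of routine you should be able to write cold (a sketch in Python; the graph-as-adjacency-list representation is just one conventional choice):

```python
from collections import deque

def bfs_path(graph, start, goal):
    # Breadth-first search over an adjacency-list graph.
    # Returns the shortest path from start to goal, or None.
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

# bfs_path({"a": ["b", "c"], "b": ["d"], "c": ["d"]}, "a", "d")
# -> ["a", "b", "d"]
```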

Learning these things is not as hard as many of the people we talk to fear. Algorithms are usually described in academic language, and this can be off-putting. But at its core, nothing on this list is more complicated than the architecture of a modern web app. If you can build a web app (well), you can learn these things. The resource that I recommend is the book The Algorithm Design Manual by Steven Skiena. Chapters 3 through 5 do a great job of going over this material, in a straightforward way. It does use C and some math syntax, but it explains the material well. Coursera also has several good algorithms courses. This one, in particular, focuses on the concepts that are important in interviews.

Studying algorithms and data structures helps not only because the material comes up in interviews, but also because the approach to problems taken in an algorithm course is the same approach that works best in interviews. Studying algorithms will get you in an interview mindset.

3. Get help from your interviewer

Interviewers help candidates. They give hints, they respond to ideas, and they generally guide the process. But they don’t help all candidates equally. Some programmers are able to extract significant help, without the interviewer holding it against them. Others are judged harshly for any hints they are given. You want to be helped.

This comes down to process and communication. If the interviewer likes your process and you communicate well with them, they will not mind helping. You can make this more likely by following a careful process. The steps I recommend are:

  1. Ask questions
  2. Talk through a brute-force solution
  3. Talk through an optimized solution
  4. Write code

After you are asked an interview question, start by clarifying what was asked. This is the time to be pedantic. Clarify every ambiguity you can think of. Ask about edge cases. Bring up specific examples of input, and make sure you are correct about the expected output. Ask questions even if you’re almost sure you know the answers. This is useful because it gives you a chance to come up with edge cases and fully spec the problem (seeing how you handle edge-cases is one of the main things that interviewers look for when evaluating an interview), and also because it gives you a minute to collect your thoughts before you need to start solving the problem.

Next, you should talk through the simplest brute-force solution to the problem that you can think of. You should talk, rather than jump right into coding, because you can move faster when talking, and it’s more engaging for the interviewer. If the interviewer is engaged, they will step in and offer pointers. If you retreat into writing code, however, you'll miss this opportunity. 

Candidates often skip the brute-force step, assuming that the brute-force solution to the problem is too obvious, or wrong. This is a mistake. Make sure that you always give a solution to the problem you’ve been asked (even if it takes exponential time, or an NSA supercomputer). When you’ve described a brute-force solution, ask the interviewer if they would like you to implement it, or come up with a more efficient solution. Normally they will tell you to come up with a more efficient solution.

The process for the more efficient solution is the same as for the brute force. Again talk, don’t write code, and bounce ideas off of the interviewer. Hopefully, the question will be similar to something you’ve seen, and you’ll know the answer. If that is not the case, it’s useful to think of what problems you’ve seen that are most similar, and bring these up with the interviewer. Most interview questions are slightly-obscured applications of classic CS algorithms. The interviewer will often guide you to this algorithm, but only if you begin the process.

Finally, after both you and your interviewer agree that you have a good solution, you should write your code. Depending on the company, this may be on a computer or a whiteboard. But because you’ve already come up with the solution, this should be fairly straightforward. For extra points, ask your interviewer if they would like you to write tests.
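To make this flow concrete, here is how it might play out on a classic question (my example, not one from any particular company): find two numbers in a list that sum to a target. You would talk through the first version, then write the second once the interviewer asks for something faster.

```python
# Brute force: check every pair, O(n^2). Describe this aloud first.
def two_sum_brute(nums, target):
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return (i, j)
    return None

# Optimized: remember seen values in a hash table, O(n). Write this
# once you and the interviewer agree the brute force is correct.
def two_sum(nums, target):
    seen = {}  # value -> index where it appeared
    for j, value in enumerate(nums):
        i = seen.get(target - value)
        if i is not None:
            return (i, j)
        seen[value] = j
    return None
```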

4. Talk about trade-offs

Programming interviews are primarily made up of programming questions, and that is what I have talked about so far. However, you may also encounter system design questions. Companies seem to like these especially for more experienced candidates. In a system design question, the candidate is asked how he or she would design a complex real-world system. Examples include designing Google maps, designing a social network, or designing an API for a bank.

The first observation is that answering system design questions requires some specific knowledge. Obviously no one actually expects you to design Google maps (that took a lot of people a long time). But they do expect you to have some insight into aspects of such a design. The good news is that these questions usually focus on web backends, so you can make a lot of progress by reading about this area. An incomplete list of things to understand is:
  • HTTP (at the protocol level)
  • Databases (indexes, query planning)
  • CDNs
  • Caching (LRU cache, memcached, redis)
  • Load balancers
  • Distributed worker systems
You need to understand these concepts. But more importantly, you need to understand how they fit together to form real systems. The best way to learn this is to read about how other engineers have used the concepts. The blog High Scalability is a great resource for this. It publishes detailed write-ups of the back-end architecture at real companies. You can read about how every concept on the list above is used in real systems.
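As an example of the depth these questions expect, here is a minimal LRU cache sketch (mine, built on Python’s OrderedDict). In a design interview you would mostly describe the idea, a recency-ordered map with O(1) eviction; memcached and redis play the same role at the system level.

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used
```

Being able to explain why each operation is O(1), and what changes when the cache is shared across servers, is exactly the kind of trade-off discussion interviewers are looking for.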

Once you’ve done this reading, answering system design questions is a matter of process. Start at the highest level, and move downward. At each level, ask your interviewer for specifications (should you suggest a simple starting point, or talk about what a mature system might look like?) and talk about several options (applying the ideas from your reading). Discussing tradeoffs in your design is key. Your interviewer cares less about whether your design is good in itself, and more about whether you are able to talk about the trade-offs (positives and negatives) of your decisions. Practice this.

5. Highlight results

The third type of question you may encounter is the experience question. This is where the interviewer asks you to talk about a programming project that you completed in the past. The mistake that many engineers make on this question is to talk about a technically interesting side-project. Many programmers choose to talk about implementing a neural network classifier, or writing a Twitter grammar bot. These are bad choices because it’s very hard for the interviewer to judge their scope. Many candidates exaggerate simple side projects (sometimes ones that never actually worked), and the interviewer has no way to tell if you are doing this.

The solution is to choose a project that produced results, and highlight the results. This often involves picking a less technically interesting project, but it’s worth it. Think (ahead of time) of the programming you’ve done that had the largest real-world impact. If you’ve written an iOS game, and 50k people have downloaded it, the download number makes it a good option. If you’ve written an admin interface during an internship that was deployed to the entire admin staff, the deployment makes it a good thing to talk about. Selecting a practical project will also communicate to the company that you focus on actual work. A programmer too focused on interesting tech is an anti-pattern that companies screen against (these programmers are sometimes not productive).

6. Use a dynamic language, but mention C

I recommend that you use a dynamic language like Python, Ruby or JavaScript during interviews. Of course, you should use whatever language you know best. But we find that many people try interviewing in C, C++ or Java, under the impression that these are the “real” programming languages. Several classic books on interviewing recommend that programmers choose Java or C++. At startups at least, we’ve found that this is bad advice. Candidates do better when using dynamic languages. This is true, I think, because of dynamic languages’ compact syntax, flexible typing, and list and hash literals. They are permissive languages. This can be a liability when writing complex systems (a highly debatable point), but it’s great when trying to cram binary search onto a whiteboard.

No matter what language you use, it’s helpful to mention work in other languages. An anti-pattern that companies screen against is people who only know one language. If you do only know one language, you have to rely on your strength in that language. But if you’ve done work or side-projects in multiple languages, be sure to bring this up when talking to your interviewers. If you have worked in lower-level languages like C, C++, Go, or Rust, talking about this will particularly help.

Java, C# and PHP are a problematic case. As we described in our last blog post, we’ve uncovered bias against these languages in startups. We have data showing that programmers using these languages in the interview pass at a lower rate. This is not fair, but it is the truth. If you have other options, I recommend against using these languages in interviews with startups.

7. Practice, practice, practice

You can get much better at interviewing by practicing answering questions. This is true because interviews are stressful, and stress harms performance. The solution is practice. Interviewing becomes less stressful with exposure. This happens naturally with experience. Even within a single job search, we find that candidates often fail their initial interviews, and then pass more as their confidence builds. If stress is something you struggle with, I recommend that you jumpstart this process by practicing interview stress. Get a list of interview questions (the book Cracking the Coding Interview is one good source) and solve them. Set a 20-minute timer on each question, and race to answer. Practice writing the answers on a whiteboard (not all companies require this, but it’s the worst case, so you should practice it). A pen on paper is a pretty good simulation of a whiteboard. If you have friends who can help you prepare, taking turns interviewing each other is great. Reading a lot of interview questions has the added benefit of providing you ideas to use in actual interviews. A surprising number of questions are re-used (in full or in part).

Even experienced (and stress-free) candidates will benefit from this. Interviewing is a fundamentally different skill from working as a programmer, and it can atrophy. But experienced programmers often (reasonably) feel that they should not have to prepare for interviews. They study less. This is why junior candidates often actually do better on interview questions than experienced candidates. Companies know this, and, paradoxically, some tell us they set lower bars on the programming questions for experienced candidates.

8. Mention credentials

Credentials bias interviewers. Triplebyte candidates who have worked at a top company or studied at a top school go on to pass interviews at a 30% higher rate than programmers who don’t have these credentials (for a given level of performance on our credential-blind screen). I don’t like this. It’s not meritocratic and it sucks, but if you have these credentials, it’s in your interest to make sure that your interviewers know this. You can’t trust that they’ll read your resume.

9. Line up offers

If you’ve ever read fund-raising advice for founders, you’ll know that getting the first VC to make an investment offer is the hardest part. Once you have one offer, more come pouring in. The same is true of job offers. If you already have an offer, be sure to mention this in interviews. Mentioning other offers heavily biases the interviewer in your favor.

This suggests a strategy: make a list of the companies you’re interested in, and set up interviews in reverse order of interest. Doing well early in the process will increase your probability of getting an offer from your number one choice. You should do this.

Conclusion

Passing interviews is a skill. Being a great programmer helps, but it’s only part of the picture. Everyone fails some of their interviews, and preparing properly can help everyone pass more. Enthusiasm is paramount, and research helps with this. As many programmers fail for lacking enthusiasm as fail for technical reasons. Interviewers help candidates during interviews, and if you follow a good process and communicate clearly, they will help you. Practice always helps. Reading lots of interview questions and inuring yourself to interview stress will lead to more offers.

This situation is not ideal. Preparing for interviews is work, and forcing programmers to learn skills other than building great software wastes everyone’s time. Companies should improve their interview processes to be less biased by academic CS, memorized facts, and rehearsed interview skills. This is what we’re doing at Triplebyte. We help programmers get jobs without looking at resumes. We let programmers pick one of several areas in which to be evaluated, and we study and improve our process over time. We’d love to help you get a job at a startup, without jumping through these hoops. You can get started here. But the status quo is what it is. Until this changes, programmers should know how to prepare.

Thanks to Jared Friedman, Emmett Shear, Garry Tan, Alexis Ohanian and Daniel Gackle for reading drafts of this.



Footnote [1]: This is not to say that interview performance does not correlate with programming skill. It does. But the correlation is far weaker than most companies assume, and factors other than programming skill explain a large part of interview variance.


Who Y Combinator Companies Want

If you’re a programmer interested in joining a YC startup, apply to Triplebyte and we’ll match you with the ones you’d be the best fit for.

Companies disagree significantly about the types of programmers they want to hire. After 6 months doing technical interviews and sending the best engineers to Y Combinator companies (and interviewing the founders and CTOs at the top 25), we’ve analyzed our data. There are broad trends, but also a lot of unpredictability. Key takeaways include:

1. The types of programmers that each company looks for often have little to do with what the company needs or does. Rather, they reflect company culture and the backgrounds of the founders. It’s nearly impossible to judge these preferences from the outside. At most companies, however, non-technical recruiters reject 50% of applicants by pattern matching against these preferences. This is a huge frustration for everyone involved.

2. Across the companies we work with there are several notable trends. First, companies are more interested in engineers who are motivated by building a great product, and less interested in engineers with pure technical interests. This is at odds with the way the majority of programmers talk about their motivations. There’s a glut of programmer interest in Machine Learning and AI. Second, companies dislike programmers with enterprise backgrounds. Our data shows that companies are less likely to hire programmers coming from Java or C# backgrounds.

3. These results show extrapolation from insufficient data on the part of many companies. Talent can be found among programmers of all backgrounds. We’re mapping the preferences across all YC companies in more detail, and encouraging companies to consider people they would normally reject. In the meantime, programmers looking for jobs with YC companies may want to focus more on product and be sure to mention experience outside of Java and C#.

The problem

My co-founders and I have been running a recruiting company (Triplebyte) for the last 6 months. We interview programmers, and help the best ones get jobs at YC companies. We do our interviews without looking at resumes (in order to find great people who look bad on paper), and then see feedback on each engineer from multiple companies. This gives us a unique perspective on who YC companies want to hire.

When we started, we imagined a linear talent scale. We thought that most companies would be competing for the same (top 5%) of applicants, and all we had to do was measure this. One of the first people to pass our process really impressed us. He was a superb, intelligent programmer. He solved hard algorithm problems like they were nothing, and understood JavaScript deeply. We introduced him to a company he was excited about, and sat back to watch him get a job. We were startled when he failed his first interview. The company told us they valued process more than raw ability, and he’d not written tests during the interview. He went on to get a bunch of offers from other companies, and one founder told us he was among the best programmers they had ever interviewed.

This lack of agreement is the rule, not the exception. Almost no one passes all their programming interviews. This is true because of randomness in many interview processes (even great people are bad at some things, and an interviewer focusing on this can yield a blocking no), and also because companies look for very different skills. The company that rejected our first candidate ranked testing in the interview above algorithmic ability and JavaScript knowledge.

Mapping these preferences, it was clear, was key to helping engineers find the right startups. If we could route programmers to companies where their skills were valued, everyone would win. To that end, we’ve spent the last two months doing detailed interviews with CTOs and lead recruiters at the top 25 Y Combinator companies. In this blog post I’m going to write about what we learned from talking to these companies and sending them engineers. It’s interesting, and I hope useful for people applying for programming jobs.

Setup

To map the preferences of the top YC companies, we wrote paragraphs describing 9 hypothetical programmers, embodying patterns we’d seen from running 1000+ interviews over the last 6 months. These range from the “Product Programmer” who is more excited about designing a product and talking to users than solving technical challenges (we internally call this the Steve Jobs programmer) to the “Trial and Error Programmer” who programs quickly and is very productive, but takes an ad hoc approach to design. In reality, these profiles are not mutually exclusive (one person can have traits of several).

We then set up meetings with the founders and lead recruiters at the top 25 YC Companies. In the meetings we asked each company to rank the 9 profiles in terms of how excited they were to talk to people with those characteristics.

Results

The grid that follows shows the results[1]. Each row shows the preferences of a single (anonymized) company. Each column is a hypothetical profile. Green squares mean the company wants to interview engineers matching the profile; red squares mean they do not. Empty squares are cases where the founders’ opinions were too nuanced to be rounded to interest or lack of interest.


The first thing that jumps out is the lack of agreement. Indeed, no single company was interested (or uninterested) in all 9 profiles. And no profile was liked (or disliked) by more than 80% of companies. The inter-rater reliability of this data (a measure of the agreement of a group of raters) comes out at 0.09[2]. This is fairly close to 0: company preferences are nearly unpredictable.

The impact of these preferences on programmers, however, is totally predictable. They fail interviews for opaque reasons. Most companies reject a high percentage of applicants during a recruiter call (or resume screen). Across the 25 companies we interviewed, an average of 47% of applicants were rejected in this way (the rate at individual companies went as high as 80%, and as low as 0%). The recruiters doing this rejecting are non-technical. All they can do is reject candidates who don’t match the profile they’ve been taught to look for. We’ve seen this again and again when we intro candidates to companies. Some companies don’t want to talk to Java programmers. Others don’t want academics. Still others only want people conversant in academic CS. We’ve seen that most engineers only have the stomach for a limited number of interviews. Investing time in the wrong companies carries a high opportunity cost.

I don’t want to be too hard on recruiters. Hiring and interviewing are hard, shortcuts must be taken to keep the team sane, and there are legitimate reasons for a company to enforce a specific engineering culture. But from the point of view of programmers applying for jobs, these company preferences are mercurial. Companies don’t advertise their preferences. People who don’t match simply apply, and are rejected (or often never hear back).

Patterns

There is some agreement among companies, however, and it’s interesting.

1. There’s more demand for product-focused programmers than there is for programmers focused on hard technical problems. The “Product Programmer” and “Technical Programmer” profiles are identical, except one is motivated by product design, and the other by solving hard programming problems. There is almost twice as much demand for the product programmer among our companies. And the “Academic Programmer” (hard-problem focused, but without the experience) has still less demand. This is consistent with what we’ve seen introducing engineers to companies. Two large YC companies (both with machine learning teams) have told us that they consider interest in ML a negative signal. It’s noteworthy that this is almost entirely at odds with the motivations that programmers express to us. We see ten times more engineers interested in Machine Learning and AI than we see interested in user testing or UX.

2. (Almost) everyone dislikes enterprise programmers. We don’t agree with this. We’ve seen a bunch of great Java programmers. But it’s what our data shows. The Enterprise Java profile is surpassed in dislikes only by the Academic Programmer. This is in spite of the fact that we explicitly say the Enterprise Programmer is smart and good at their job. In our candidate interview data, this carries over to language choice. Programmers who used Java or C# (when interviewing with us) go on to pass interviews with companies at half the rate of programmers who use Ruby or JavaScript. (The C# pass rate is actually much lower than the Java pass rate, but the C# numbers are not yet statistically significant on their own.) Tangential facts: programmers who use Vim with us pass interviews with companies at a higher rate than programmers who use Emacs, and programmers on Windows pass at a lower rate than programmers on OS X or Linux.

3. Experience matters massively. Notice that the Rusty Experienced Programmer beats both of the junior programmer profiles, in spite of stronger positive language in the junior profiles. It makes sense that there’s more demand for experienced programmers, but the scale of the difference surprised me. One prominent YC company just does not hire recent college grads. And those that do set a higher bar. Among our first group of applicants, experienced people passed company interviews at a rate 8 times higher than junior people. We’ve since improved that, I’ll note. But experience continues to trump most other factors. Recent college grads who have completed at least one internship pass interviews with companies at twice the rate of college grads who have not done internships (if you’re in university now, definitely do an internship). Experience at a particular set of respected companies carries the most weight. Engineers who have worked at Google, Apple, Facebook, Amazon or Microsoft pass interviews at a 30% higher rate than candidates who have not.

Advice

If you’re looking for a job as a programmer, you should pay attention to these results. Product-focused programmers pass more interviews. Correlation is not causation, of course. But company recruiter decisions are driven largely by pattern matching, so there is a strong argument that making yourself look like the candidates companies want will increase your pass rate. You may want to focus more on product when talking to companies (and perhaps focus on companies where you are interested in the product). This is a way to stand out. Similarly, if you’re a C# or Java programmer applying to a startup, it may behoove you to use another language in the interview (or at least talk about other languages and platforms with your interviewer). Interestingly, we did talk to two YC companies that love enterprise programmers. Both were companies with founders who have this background themselves. Reading bios of founders and applying to companies where the CTO shares your background is probably an effective job-search strategy (or you could apply through Triplebyte).

If you run a startup and are struggling to hire, you should pay attention to these results too. Our data clearly shows startups missing strong candidates because of preconceptions about what a good programmer looks like. I think the problem is often extrapolation from limited data. One company we talked to hired two great programmers from PhD programs early on, and now loves academics. Another company had a bad PhD hire, and is now biased against that degree. In most cases, programming skill is orthogonal to everything else. Some companies have legitimate reasons to limit who they hire, but I challenge all founders and hiring managers to ask themselves if they are really in that group. And if you’re hiring, I suggest you try to hire from undervalued profiles. There are great PhDs and enterprise C# programmers interested in startups. Show them some love!

Conclusion

YC Startups disagree strikingly about who’s a good engineer. Each company brings a complex mix of domain requirements, biases, and recruiter preferences. Some of these factors make a lot of sense, others less so. But all of them are frustrating for candidates, who have no way to tell what companies want. They waste everyone’s time.

I’m excited about mapping this. Since we started matching candidates based on company preferences (as well as candidate preferences), we’ve seen a significant increase in interview pass rates. And we only just completed the interviews analyzed in this post. I’m excited to see where this data leads. Our planned next step is to not only interview founders and recruiters at these companies, but also have the engineers who do the bulk of the actual interviewing provide the same data.

Our goal at Triplebyte is to build a better interview process. We want to help programmers poorly served by standard hiring practices. We’d love to have you apply, even if — or especially if — you come from one of the undervalued groups of programmers mentioned in this article. We’d also love to get your thoughts on this post. Send us an email at founders@triplebyte.com.

Thanks to Jared Friedman, Emmett Shear, Daniel Gackle, Greg Brockman and Michael Seibel for reading drafts of this.


Footnotes:

1. Astute readers will notice that there are more than 25 rows in the graph. This is because we’ve recently added these questions to our onboarding flow for new companies we work with. If you run a YC Company, you can log into Triplebyte with your company email address, and add this data (we’ll use it to send you more candidates).

2. I calculated this using Fleiss’ kappa. This measures the agreement between a number of raters, with -1 being perfect disagreement, 0 being the agreement that would result from random coin tosses, and 1 being perfect agreement.
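For the curious, here’s a sketch of the calculation in Python using the statsmodels implementation. The counts are made up for illustration (the real input would come from the grid above); rows are the 9 profiles, and columns are how many of an assumed 10 raters marked each profile interested versus not interested:

    import numpy as np
    from statsmodels.stats.inter_rater import fleiss_kappa

    # Illustrative counts only: each of 9 profiles rated by 10 companies
    # as [interested, not interested].
    table = np.array([
        [6, 4], [5, 5], [7, 3],
        [4, 6], [6, 4], [5, 5],
        [3, 7], [6, 4], [5, 5],
    ])
    print(fleiss_kappa(table))  # about -0.05 here: no better than chance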

A Taxonomy of Programmers

We’ve been interviewing hundreds of programmers and matching them with YC startups. To help intelligently match programmers with companies, we’ve created a number of hypothetical programmer descriptions. These profiles are drawn from patterns we’ve seen in 1000+ technical interviews over the last 6 months. We’ve had success using these profiles to match engineers with companies. If you have any suggestions for additional profiles, we’d love to hear about them in the comments.

Academic Programmer: Candidate has spent most of their career in academia, programming as part of their Master’s/PhD research. They have very high raw intellect and can use it to solve hard programming problems, but their code is idiosyncratic.

Experienced Rusty Programmer: Candidate has a lot of experience, and can talk in depth about different technology stacks and databases, explaining their positives and negatives with fine detail. When programming during an interview, they’re a little rusty. They usually get to the right place but it takes a while.

Trial and Error Programmer: Candidate writes code quickly and cleanly. Their approach seems to involve a lot of trial and error, however. They dive straight into programming problems and seem a little ad hoc, but their speed enables them to solve the problems productively.

Strong Junior Programmer: Candidate is fresh out of college, with some internships and less than a year of full-time work experience. They really impress during a technical interview, have numerous side projects and impressive knowledge of computer science and programming in general. They’re well above the average junior programmer.

Child Prodigy Programmer: Candidate is very young (e.g. 19 years old) and decided to go straight into work, skipping college. They’ve been programming since a very young age and are very impressive in their ability to solve hard technical problems. They’ve also been prolific with side projects and are mature for their age. It’s likely they’ll found a company when they’re older.

Product Programmer: Candidate performs well on technical interviews and will have the respect of other engineers. They’re not motivated by solving technical problems, however. They want to think about the product, talk to customers and have input into how product decisions are made.

Technical Programmer: Candidate is the inverse of the Product Programmer. They interview well and communicate clearly. But they aren’t motivated to think about the user experience or product decisions. They want to sink their teeth into hard technical problems.

Practical Programmer: Candidate solves practical programming problems with ease, even very abstract ones. They aren’t comfortable with computer science terminology though (e.g. data structures, algorithms) and don’t have a deep understanding of how computers work. They are strongest with Ruby/Python/JavaScript, not so much with lower-level languages like C.

Enterprise Programmer: Candidate is strong in academic computer science (algorithms, data structures, complexity analysis), has experience, and solves technical problems well. Their work experience is with large enterprise companies (e.g. Dell/Oracle/IBM). They want to join a startup, although they don’t have experience taking ownership of projects. They program mostly in Java using an IDE such as Eclipse.

Note: If you run a YC Company, you can log into Triplebyte with your company email address, and add your preferences (we’ll use it to send you more candidates).

Take-home interviews

Today we're announcing our second experiment: take-home projects. We're going to try a new way of assessing programming ability by having programmers work on a project on their own time instead of coding during an interview. We know there are benefits and drawbacks to this approach; I'll go into more detail on our thinking below.

Anyone who passes our take-home project assessment will get exactly the same service from us as people who do the regular interviews. We'll work hard to find several YC startups they'd be a great fit for, fast track them through the hiring processes, and handle all logistics of flights/accommodations/scheduling.

The Problem

Several weeks ago, we interviewed a recent college grad. He'd done well on our quiz, had great personal projects, and I was excited to talk to him. As soon as the interview started, however, I could tell that something was wrong. I gave him a programming problem, but he could not get started. He'd start to write one thing, mutter that it was a bad place to start, and go back to something else. He switched languages. His breathing accelerated. He started to shake.

Programming interviews are stressful. Fundamentally, the applicant is being judged. They have to understand the question and produce a working solution in limited time, all while explaining everything they are doing, with no time to stop and gather their thoughts. At its worst, it's adversarial.

Some programmers find that this stress pushes them to do their best in interviews. Others find it debilitating. There are programmers with track records of solving hard problems who simply freeze when subjected to the stress of an interview. They babble. They become unable to program.

This does not mean that they are bad programmers[1]. I gave the fellow in our interview a much harder problem to do on his own time. I assumed that he'd never get back to us. The project was a lot of work. Three days later, however, I had a complete solution in my inbox. We got him back on the phone, and he was able to talk in depth about what he had done, about the underlying algorithms, and about the design trade-offs he'd made. The code was clean. He was clearly a skilled programmer.

The Solution

To solve the problem of interview anxiety, we're adding a second track to our interview process at Triplebyte. Applicants, if they choose, will be able to go through our process by completing programming projects on their own time. They'll still do interviews with us, but rather than doing interview problems, they will just talk about the project they already completed. Those who do well will be matched with Y Combinator companies, just like programmers who go through our regular interview.

The project-based track will require a larger time commitment (and we expect lots of people to stick with the standard track for this reason). However, doing a larger project is almost certainly a better measure of actual ability to do a job than a traditional interview is.

Here's how our process works:
  1. When a candidate books a 45-minute interview, they can indicate that they want to do a project.
  2. Three days before the interview, we'll send them a list of projects, and they'll pick one and start to work on it. We expect them to spend about 3 hours on the project (or as long as they want to spend to show us that they're a good programmer).
  3. During the interview, we'll talk about what they've programmed, go over design choices and give feedback.
People who pass the 45-min interview will go through the same process in the 2-hour final interview. Rather than pick a new project, however, they'll take the same project further, incorporating feedback from the first interview. Those who pass the 2-hour will talk to Harj, get introduced to YC companies, and start new jobs!

I'm particularly excited about being able to see iterative improvements to the project between the two interviews (an important part of doing an actual job). It's an experiment, and I have no idea how it will turn out, but giving people the option to do larger projects and avoid stressful interviews just seems like a good idea. In a few months, after we've done a meaningful number of these interviews, I'll write about how their results compare to our other interviews.

1. The stress of interviewing seems to be different from the stress of performing a job. None of the people we've spoken to who do poorly in interviews report problems performing under deadlines at work, or when a website is down and there's pressure to get it back up.

Three hundred programming interviews in thirty days

We launched Triplebyte one month ago, with the goal of improving the way programmers are hired. Too many companies run interviews the way they always have, with resumes, whiteboards and gut calls. We described our initial ideas about how to do better than this in our manifesto. Well, a little over a month has now passed. In the last 30 days, we've done 300 interviews. We've started to put our ideas into practice, to see what works and what doesn't, and to iterate on our process. In this post, I'm going to talk about what we've learned from the first 300 interviews.

I go into a lot of detail in this post. The key findings are:
  1. Performance on our online programming quiz is a strong predictor of programming interview success
  2. Fizzbuzz-style coding problems are less predictive of ability to do well in a programming interview
  3. Interviews where candidates talk about a past programming project are also not very predictive

Process

Our process has four steps:
  1. Online technical screen.
  2. 15-minute phone call discussing a technical project.
  3. 45-minute screen share interview where the candidate writes code.
  4. 2-hour screen share where they do a larger coding project.
Candidates work on their own computers, using their own dev environments and strongest languages. In both of the longer interviews, they pick the problem or project to work on from a short list. We're looking to find strengths, so the idea is that most candidates should be able to pick something they're comfortable with. We keep the list of options short, however, to help standardize evaluation. We want to have a lot of data on each problem.

We're looking for programming process and understanding, not leaps of insight. We do this by offering help with the design/algorithm of each problem (and not penalizing candidates for accepting it). We evaluate interviews with a score card. For now we go a little overboard, tracking the time to reach a number of milestones in each problem. We also score understanding, whether candidates speak specifically or generally, whether they seem nervous, and a bunch of other things (basically everything we can think of). Most of these, no doubt, are horrible measures of performance. We record them now so that we can figure out which are good measures later.
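Concretely, such a score card might look something like this sketch (the field names are hypothetical illustrations, not our actual schema):

    from dataclasses import dataclass, field

    @dataclass
    class ScoreCard:
        problem: str
        # minutes to reach each milestone, e.g. {"parses input": 8}
        milestone_minutes: dict = field(default_factory=dict)
        understanding: int = 3            # 1 (weak) to 5 (strong)
        speaks_specifically: bool = True
        seemed_nervous: bool = False
        notes: str = ""

    card = ScoreCard("log parser",
                     {"reads file": 4, "correct output": 22},
                     understanding=4, seemed_nervous=True)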

Screening

The first experiment we ran was screening people without looking at resumes. Most job applicants are rejected at the screening stage. The sad truth is that a high percentage of the people applying for any job post on the Internet are bad. To protect the time of their interviewers, companies need a way to filter people early, at the mouth of the hiring funnel. Resumes are the traditional way to do this. However, as Aline Lerner has shown, resumes don't work. Good programmers can't be reliably distinguished from bad ones by looking at their resumes. This is a problem. What the industry needs is a way to screen candidates by looking at their actual ability, not where they went to school or worked in the past[1]. To this end, we tested two screening steps:
  1. A fizzbuzz-like programming assignment. Applicants completed two simple problems. We tracked the time to complete each, and manually graded each on correctness and code quality.
  2. An automated quiz. The questions on the quiz were multiple choice, but involved understanding actual code (e.g., look at a function, and select which of several bugs is present; a hypothetical example follows this list).
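To make the quiz format concrete, here's an invented question in that style (my illustration, not an actual quiz item):

    # Which bug is present in this function?
    def average(numbers):
        total = 0
        for n in numbers:
            total += n
        return total / len(numbers)

    # (a) total is never initialized
    # (b) the loop skips the last element
    # (c) it divides by zero when the list is empty   <-- correct
    # (d) the result is always truncated to an integer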
We then correlated the results of these two steps with success in our subsequent 45-minute technical interview. The following graph shows the correlations after 300 interviews.

Correlation between screening steps and interview decisions


We can see that the quiz is a strong predictor of success in our interviews! Almost a quarter of interview performance (23%) can be explained by the score on the quiz. 15% can be explained by quiz completion time (faster is better). Speed and score are themselves only loosely correlated (being accurate means you're only slightly more likely to be fast). This means that they can be combined into what we're calling the composite score, which has the strongest correlation of all and explains 29% of interview performance![2]
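As a toy illustration of why combining helps (synthetic data, not our actual numbers): two noisy measurements of the same underlying skill each explain part of the variance, and a linear combination of them explains more than either alone.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 300
    skill = rng.normal(size=n)                # latent interview performance
    score = 0.6 * skill + rng.normal(size=n)  # quiz score: one noisy signal
    speed = 0.5 * skill + rng.normal(size=n)  # quiz speed: another noisy signal

    for name, X in [("score alone", score[:, None]),
                    ("speed alone", speed[:, None]),
                    ("composite", np.column_stack([score, speed]))]:
        r2 = LinearRegression().fit(X, skill).score(X, skill)
        print(name, round(r2, 2))  # composite R^2 exceeds either alone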

The fizzbuzz-style coding problems, however, did not perform as well. While the confidence intervals are large, the current data shows less correlation with interview results. I was surprised by this. Intuitively, asking people to actually program feels like the better test of ability, especially because our interviews (the measures we're using to evaluate screening effectiveness) are heavily focused on coding. However, the data shows otherwise. The coding problems were also harder for people to finish. We saw twice the drop-off rate on the coding problems as we saw on the quiz.

Talking versus coding

Before launching, we spoke to a number of smart people with experience in technical hiring to collect ideas for our interviews. The one I liked the most was having candidates talk us through a technical project, including looking at source code. This seemed like it’d be the least adversarial, most candidate-friendly approach.

As soon as we started doing them, however, I saw a problem. Almost everyone was passing. Our filter was not filtering. We tried extending the duration of the interviews to probe deeper, and looking at code over Google Hangouts. Still, the pass rate remained too high.

The problem was that we weren’t getting enough signal from talking about projects to confidently fail people. So we started following up with interviews where we asked people to write code. Suddenly, a significant percentage of the people who had spoken well about impressive-sounding projects failed, in some cases spectacularly, when given relatively simple programming tasks. Conversely, people who spoke about very trivial-sounding projects (or communicated so poorly we had little idea what they had worked on) were among the best at actual programming.

In total we did 90 experience interviews, scoring across several factors (did the person seem smart, did they understand their project well, were they confident, and was the project impressive). Then we correlated our factors with performance in the 45-minute programming interview. Confidence had essentially zero correlation. Impressiveness, smartness and understanding each had about a 20% correlation. In other words, experience interviews underperformed our automated quiz in predicting success at coding.

Now, talking about past experience in more depth may be meaningful. This is how (I think) I know which of my friends are great programmers. But, we found, 45 minutes is not enough time to make talking about coding a reasonable analog for actually coding.

Interview duration and interviewer sentiment

A final test we ran was to look at when during the interview we make decisions. Laszlo Bock, VP of People at Google, has written much about how interviewers often make decisions in the first few minutes of an interview, and spend the rest of the time backing up this decision. I wanted to make sure this was not true for us. To test this, we added a pop-up to our interviewing software, asking us every five minutes during each interview whether the candidate was performing well or poorly. Looking at these sentiments in aggregate, we can tell exactly when during each interview we made the decision.

We found that in 50% of our 45-min interviews, we "decide" (become positive for someone who ends up passing, or negative for someone who does not pass) in the first 20 minutes. In 20%, however, we do not settle on our final sentiment until the last 5 minutes. In the 2-hour interview, the results are similar. We decide 60% in the first 20 minutes (both positively and negatively), but 10% make it almost to the 2-hour mark. (In that case, unfortunately, it's positives turning to negatives, because we can't afford to send people we're unsure about to companies)[3].
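One way to make that computation concrete (my sketch of the idea, not our actual analysis code): treat the decision point as the start of the final unbroken run of the ending sentiment.

    def decision_minute(sentiments, interval=5):
        # sentiments: '+'/'-' readings taken every `interval` minutes,
        # with the first reading at minute `interval`.
        final = sentiments[-1]
        i = len(sentiments)
        while i > 0 and sentiments[i - 1] == final:
            i -= 1                     # walk back through the final run
        return (i + 1) * interval      # minute of the run's first reading

    # nine readings over a 45-minute interview; sentiment settles at minute 10
    print(decision_minute(['-', '+', '+', '+', '+', '+', '+', '+', '+']))  # 10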

Conclusion

It's been a crazy month. Guillaume, Harj and I have spent nearly all our time in interviews. Sometimes, at 10 PM on a Saturday, after a day of interviewing, I wonder why we started this company. But as I write this blog post, I remember. Hiring decisions are important, and too many companies are content to do what they've always done. In our first 30 days, we've come up with a replacement for resume screens, and shown that it works well. We've found that programming experience interviews (used at a bunch of companies) don't work particularly well. And we've written software to help us measure when and why we make decisions.

For now, we're evaluating all of our experiments against our final round interview decisions. This does create some danger of circular reasoning (perhaps we're just carefully describing our own biases). But we have to start somewhere, and basing our evaluations on how people write actual code seems like a good place. The really exciting point comes when we can re-run all this analysis, basing it on actual job performance, rather than interview results. Doing that is why we started this company.

Next, we want to experiment with giving candidates projects to do on their own time (I'm particularly interested in making this an option, to help with interview anxiety), and interviews where candidates are asked to work with an existing codebase. We're also adding harder questions to the quiz, to see if we can improve its effectiveness. We'd love to hear what you think about these ideas. Email us at founders@triplebyte.com.

Thanks to Emmett Shear, Greg Brockman and Robby Walker for reading drafts of this.

An earlier version of this post confused the correlation coefficient R with R^2, and overstated the correlations. Since this post was published, however, a new version of the quiz has increased the correlation of the composite score to 0.69 (0.47 R^2).

1. This is a complex issue. There are good arguments for allowing experienced programmers to skip screening steps, and not have to continually re-prove themselves. At some point, track record should be enough. However, this type of screening can also be done in very bad ways (e.g., only interviewing people who have worked at top companies or come from a few schools). Evaluating experience is something we plan to experiment with, but for now we're focusing on how to directly identify programming ability.

2. It’s worth noting the error bars (showing 95% confidence intervals). The true value for each of the correlations in the graph falls in the range shown with 95% confidence. The error bars are large because our sample is small. However, even comparing the bottom of our confidence interval to Aline Lerner’s results on resume screening (she found a correlation close to 0), shows our quiz is a far better first step in a hiring funnel than resumes are.

3. We're not perfect, and we certainly reject great people. I always like to mention this when talking about rejections. We know this (and think it's true of all interview processes). We're trying to get better.