Triplebyte in New York (July 18, 2017)
We're excited to announce that Triplebyte is now available for engineers and companies in New York! 

Until now we've only been working with companies based in the Bay Area and engineers who want to work there. As we've grown, the biggest request we've had from engineers has been expanding to new locations. The most requested location has been New York: applications to Triplebyte from engineers either based in New York or wanting to relocate there have doubled since the start of this year.

That's why we're excited to make New York the first new location we're expanding to. We're launching with a great initial group of partner New York companies, which we'll be adding to over time. As in the Bay Area, we'll be working with a mix of companies across all sizes and stages. We're working with exciting late-stage startups hiring in New York like WeWork, Peloton, Dropbox and Palantir. We're also partnering with earlier-stage companies working on things like changing online education (Teachable), fixing healthcare by using data to improve the efficiency of clinical trials (Trialspark) and helping us sleep better (Eight). We'll also be working with companies taking an engineering approach to the finance industry like Bridgewater and Jane Street.

If you're an engineer based in New York, or looking to relocate (we'll fly you out for interviews and cover the costs), the first step is completing our programming quiz here.  

If you're a company in New York hiring engineers and you'd like to learn more about working with Triplebyte, you can get started here.

We'll be opening up to more locations throughout the year and are excited to help more engineers find their ideal company!

How to Interview Engineers (June 26, 2017)

We do a lot of interviewing at Triplebyte. Indeed, over the last 2 years, I've interviewed just over 900 engineers. Whether this was a good use of my time can be debated! (I sometimes wake up in a cold sweat and doubt it.) But regardless, our goal is to improve how engineers are hired. To that end, we run background-blind interviews, looking at coding skills, not credentials or resumes. After an engineer passes our process, they go straight to the final interview at companies we work with (including Apple, Facebook, Dropbox and Stripe). We interview engineers without knowing their backgrounds, and then get to see how they do across multiple top tech companies. This gives us, I think, some of the best available data on interviewing.

In this blog post, I'm going to present what we've learned so far from this data. Technical interviewing is broken in a lot of ways. It's easy to say this. (And many blog posts do!) The hard part is coming up with what to do about it. My goal for this post is to take on that challenge, and lay out specific advice for hiring managers and CTOs. Interviewing is hard. But I think that many of the problems can be fixed by running a careful process [1].

The Status Quo

Most interview processes include two main steps:
  1. Applicant screening
  2. In-person final interview
The goal of applicant screening is to filter out candidates early, and save engineering time in interviews. The screening process usually involves a recruiter scanning a candidate's resume (in about 10 seconds), followed by a 30-minute to 1-hour phone call. Eighteen percent of the companies we work with also use a take-home programming challenge (either in place of or in addition to the phone screen). Screening steps, interestingly, are where the significant majority of candidates are rejected. Indeed, across all the companies we work with, over 50% of candidates are rejected on the resume scan alone, and another 30% are rejected on the phone screens / take-home. Screening is also where hiring can be at its most capricious. Recruiters are overwhelmed with volume, and need to make snap decisions. This is where credentials and pattern matching come into play.

In-person final interviews almost-universally consist of a series of 45-minute to 1-hour sessions, each with a different interviewer. The sessions are primarily technical (with one or two at each company focusing on culture fit and soft skills). The final hire/no hire decisions are made in a decision meeting after the candidate has left, with the hiring manager and everyone who interviewed the candidate. Essentially, a candidate needs at least one strong advocate and no strong detractors to be made an offer [2].

Beyond the common format, however, final interviews vary widely.
  • 39% of the companies we work with run interviews with a marker on a whiteboard
  • 52% allow the candidate to use their own computer (the remaining 9% are inconsistent)
  • 55% let interviewers pick their own questions (the remaining 45% use a standard bank of questions)
  • 40% need to see academic CS skills in a candidate to make an offer
  • 15% dislike academic CS (and think that talking about CS is a sign that a candidate will not be productive)
  • 80% let candidates use any language in the interview (the remaining 20% require a specific language)
  • 5% explicitly evaluate language minutiae during the interview
Across all the companies we work with, 22% of final interviews result in a job offer. (This figure comes from asking companies about their internal candidate pipeline. Candidates applying through Triplebyte get offers after 53% of their interviews.) About 65% of offers are accepted (result in a hire). After 1 year, companies are very happy with approximately 30% of hires, and have fired about 5% [3].

False Negatives vs. False Positives

So, what's wrong with the status quo? Fire rates, after all, don't seem to be out of control. To see the problem, consider that there are two ways an interview can fail. An interview can result in a bad engineer being hired and later fired (a false positive). And an interview can disqualify someone who could have done the job well (a false negative). Bad hires are very visible, and expensive to a company (in salary, management cost and morale for the entire team). A bad hire sucks the energy from a team. Candidates who could have done the job well but are not given the chance, in contrast, are invisible. Any one case is always debatable. Because of this asymmetry, companies heavily bias their interviews toward rejection.

This effect is strengthened by noise in the process. Judging programming skill in 1 hour is just fundamentally hard. Add to this a dose of pattern matching and a few gut calls as well as the complex soup of company preferences discussed above, and you're left with a very noisy signal.

In order to keep the false positive rate low in the face of this noise, companies have to bias decisions ever further toward rejection. The result is a process that misses good engineers, still often privileges credentials over real skill, and often feels capricious and frustrating to the people involved. If everyone at your company had to re-interview for their current jobs, what percentage would pass? This is a scary question. The answer is almost certainly well under 100%. Candidates are harmed when they are rejected by companies they could have done great work for, and companies are harmed when they can't find the talent they need.

To be clear, I am not saying the companies should lower the bar in interviews. Rejection is the point of interviewing! I'm not even saying that companies are wrong to fear false positives far more than false negatives. Bad hires are expensive. I am arguing that a noisy signal paired with the need to avoid bad hires results in a really high false negative rate, and this harms people. The solution is to improve the signal.

Concrete ways to reduce noise in interviews

1. Decide what skills you're looking for

There is not a single set of skills that defines a good programmer. Rather, there is a sea of diverse skill sets. No engineer can be strong in all of these areas. In fact, at Triplebyte we often see excellent, successful software engineers with entirely disjoint sets of skills. The first step to running a good interview, then, is deciding what skills matter for the role. I recommend you ask yourself the following questions (these are questions we ask when we onboard a new company at Triplebyte).
  • Do you need fast, iterative programmers, or careful rigorous programmers?
  • Do you want someone motivated by solving technical problems, or building product?
  • Do you need skill with a particular technology, or can a smart programmer learn it on the job?
  • Is academic CS / math / algorithm ability important or irrelevant?
  • Is understanding concurrency / the C memory model / HTTP important?
There are no right answers to these questions. We work with successful companies that come down on both sides of each one. But what is key is making an intentional choice, based on your needs. The anti-pattern to avoid is simply picking interview questions randomly (or letting each interviewer decide). When that happens, company engineering culture can skew in a direction where more and more engineers have a particular skill or approach that may not really be important for the company, and engineers without this skill (but other important skills) are rejected.

2. Ask questions as close as possible to real work

Professional programmers are hired to solve large, sprawling problems over weeks and months. But interviewers don't have weeks or months to evaluate candidates. Each interviewer typically has 1 hour. So instead, interviewers look at a candidate's ability to solve small problems quickly, while under duress. This is a different skill. It is correlated (interviews are not completely random). But it's not perfectly correlated. Minimizing this difference is the goal when developing interview questions.

This is achieved by making interview questions as similar as possible to the job you want the candidate to do (or to the skill you're trying to measure). For example, if what you care about is back-end programming, asking the candidate to build a simple API endpoint and then add features is almost certainly a better question than asking them to solve a BFS word chain problem. If you care about algorithm ability, asking the candidate to apply algorithms to a problem (say, build a simple search index, perhaps backed by a BST and a hashmap for improved deletion performance) is almost certainly a better problem than asking them to determine if a point is contained in a concave polygon. And a debugging challenge, where the candidate works in a real codebase, is almost certainly better than asking the candidate to solve a small problem on a whiteboard.
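To make the search-index example concrete, here's a minimal sketch of the kind of first step a candidate might produce (the class and method names are my own illustration, not a prescribed answer; a real interview would then layer on follow-ups like ranking or multi-term queries):

```python
# A minimal in-memory search index: a hashmap from term -> set of doc ids.
# Deletion is cheap because each posting list is a set.

class SearchIndex:
    def __init__(self):
        self.postings = {}  # term -> set of document ids

    def add(self, doc_id, text):
        for term in text.lower().split():
            self.postings.setdefault(term, set()).add(doc_id)

    def remove(self, doc_id):
        # Discard the doc from every posting list.
        for docs in self.postings.values():
            docs.discard(doc_id)

    def search(self, term):
        return sorted(self.postings.get(term.lower(), set()))

index = SearchIndex()
index.add(1, "the quick brown fox")
index.add(2, "the lazy dog")
print(index.search("the"))    # -> [1, 2]
index.remove(1)
print(index.search("quick"))  # -> []
```

The point of a question like this is that each follow-up (deletion, ranking, phrase queries) exercises a real design decision rather than a memorized trick.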

That said, there is an argument for doing interviews on whiteboards. As an interviewer, I don't care if an engineer has the Python itertools module memorized. I care if they can think through how to use iterators to solve a problem. By having the candidate work on a whiteboard, I free them from having to get the exact syntax right, and let them focus on the logic. Ultimately I think this argument fails, because there's just not enough justification for the different format. You can get all the benefit by allowing the candidate to work on a computer, but telling them their code does not need to run (or even better, making it an open book interview and letting them look up anything they want with Google).

There is an important caveat to the idea that interview questions should mirror work. It is important that an interview question be free from external dependencies. For example, asking a candidate to write a simple web scraper in Ruby might seem like a good real-world problem. However, if a candidate needs to install Nokogiri (a Ruby parsing library that can be a pain to install) and they end up burning 30 minutes wrestling with the native extensions, this becomes a horrible interview. Not only has time been wasted, but the candidate's stress has gone through the roof.

3. Ask multi-part questions that can't be given away

Another good rule of thumb for interview questions is to avoid questions that can be “given away”, i.e. avoid questions where there's some magic piece of information that the candidate could have read on Glassdoor ahead of time that would allow them to answer easily. This obviously rules out brain teasers or any question requiring a leap of insight. But it goes beyond that, and means that questions need to be a series of steps that build on each other, not a single central problem. Another useful way to think about this is to ask yourself whether you can help a candidate who gets stuck, and still end the interview with a positive impression. On a one-step question, if you have to give the candidate significant help, they fail. On a multi-part problem, you can help with one step, and the candidate can then ace everything else and do well.

This is important not only because your question will leak onto Glassdoor, but also (and more importantly) because multi-part problems are less noisy. Good candidates will become stressed and get stuck. Being able to help them and see them recover is important. There is significant noise in how well a candidate solves any one nugget of programming logic, based on whether they've seen a similar problem recently, and probably just chance. Multi-part problems smooth out some of that noise. They also give candidates the opportunity to see their effort snowball. Effort applied to one step often helps them solve a subsequent step. This is an important dynamic when doing real work, and capturing it in an interview decreases noise.

To give examples, asking a candidate to implement the game Connect Four in a terminal (a series of multiple steps) is probably a better question than asking a candidate to rotate a matrix (a single step, with some easy giveaways). And implementing k-means clustering (multiple operations that build on each other) is probably better than determining the largest rectangle that can fit under a histogram.
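K-means is a good illustration of why multi-part questions resist giveaways: the two steps (assign points, recompute centroids) build on each other, and an interviewer can help with either one without handing over the whole answer. A bare-bones sketch (1-D points for brevity; this is my illustration, not a canonical interview solution):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iters):
        # Step 1: assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Step 2: move each centroid to the mean of its cluster
        # (keep the old centroid if a cluster went empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups, near 1.0 and near 10.0:
print(kmeans([1.0, 1.1, 0.9, 10.0, 10.2, 9.8], k=2))
```

A candidate who stalls on the assignment step can still earn credit by nailing the update step, distance function, and termination discussion.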

4. Avoid hard questions

If a candidate solves a really hard question well, that tells you a lot about their skill. However, because the question is hard, most candidates will fail to solve it well. The expected amount of information gained from a question, then, is heavily impacted by the difficulty of the question. We find that the optimal difficulty level is significantly easier than most interviewers guess.

This effect is amplified by the fact that there are two sources of signal when interviewing a candidate: whether they give the “correct” answer to a question, and their process / how easily they arrive at that answer. We've gathered data on this at Triplebyte (scoring questions both on whether the candidate reached the correct answer, and how much effort it took them, and then measuring which scores predict success at companies). What we found is a tradeoff. For harder questions, whether the candidate answers correctly carries most of the signal. For easier questions, in contrast, most of the signal is found in the candidate's process and how much they struggle. Considering both sources of signal, the sweet spot is toward the easier end of the spectrum.

The rule of thumb we now follow is that interviewers should be able to solve a problem in 25% of the time they expect candidates to spend. So, if I'm developing a new question for a 1-hour interview, I want my co-workers (with no warning) to be able to answer the question in 15 minutes. Paired with the fact that we use multi-part real-world problems, this means that the optimal interview question is really pretty straightforward and easy.

To be clear, I am not arguing for lowering the bar in terms of pass rate. I am arguing to ask easy questions, and then including in your evaluation how easily the candidate answered the questions. I'm arguing for asking easy questions, but then judging fairly harshly. This is what we find optimizes signal. It has the additional benefit of being lower stress for most applicants.

To give examples, asking a candidate to create a simple command line interface with commands to store and retrieve key-value pairs (and adding functionality if they do well) is probably a better problem than asking a candidate to implement a parser for arithmetic expressions. And a question involving the most common data structures (lists, hashes, maybe trees) is probably better than a question about skiplists, treaps or other more obscure data structures.
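To show how simple the key-value question really is, here's a minimal sketch of a solution core (command names and the `run_command` helper are illustrative; follow-up steps in the interview might add DELETE, persistence, or transactions):

```python
# Core of a key-value command interpreter. Keeping the logic in a
# pure function makes it easy to test and easy to extend in steps.

def run_command(store, line):
    parts = line.strip().split(maxsplit=2)
    if not parts:
        return ""
    cmd = parts[0].upper()
    if cmd == "SET" and len(parts) == 3:
        store[parts[1]] = parts[2]
        return "OK"
    if cmd == "GET" and len(parts) == 2:
        return store.get(parts[1], "(nil)")
    return "ERR unknown command"

if __name__ == "__main__":
    store = {}
    # Example session (swap in an input() loop for interactive use):
    for line in ["SET name ada", "GET name", "GET missing"]:
        print(run_command(store, line))   # -> OK, ada, (nil)
```

Most candidates can get this far quickly, which is exactly the point: the signal comes from how cleanly they structure it and how they handle the follow-ups, not from whether they finish.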

5. Ask every candidate the same questions

Interviews are about comparing candidates. The goal is to sort candidates into those who can contribute well to the company and those who can't (and in the case of hiring for a single position, select the best person who applies). Given this, there is no justification for asking different questions to different candidates. If you evaluate different candidates for the same job in different ways, you are introducing noise.

The reason it continues to be common to select questions in an ad-hoc fashion, I think, is because it's what interviewers prefer. The engineers at tech companies typically don't like interviewing. It's something they do sporadically, and it takes them away from their primary focus. In order to standardize the questions asked to every candidate, the interviewers would need to take more time to learn the questions and talk about scoring and delivery. And they would need to re-do this every time the question changed. Also, always asking the same question is just a little more tedious.

Unfortunately, the only answer here is for the interviewers to put in the effort. Consistency is key to running good interviews, and that means asking every candidate the same questions, and standardizing delivery. There's simply no alternative.

6. Consider running multiple tracks

In conflict with my previous point, consider offering several completely different versions of your interview. The first step when designing an interview is to think about what skills matter. However, some of the answers might be in conflict! It's pretty normal, for example, to want some really mathy engineers, and some very productive / iterative engineers (maybe even for the same role). In this case, consider offering multiple versions of the interview. The key point is that you need to be at enough scale that you can fully standardize each of the tracks. This is what we do at Triplebyte. What we've found is that you can simply ask each candidate which type of interview they'd prefer.

7. Don't let yourself be biased by credentials

Credentials are not meaningless. Engineers who have graduated from MIT or Stanford, or worked at Google and Apple really are better, as a group, than engineers who did not. The problem is that the vast majority of engineers (myself included) have done neither of these things. So if a company relies on these signals too heavily, they will miss the majority of skilled applicants. Giving credentials some weight in a screening step is not totally irrational. We don't do this at Triplebyte (we do all of our evaluation 100% background blind). But giving some weight to credentials when screening might make sense.

Letting credentials sway final interview decisions, however, does not make sense. And we have data showing that this happens. For a given level of performance on our background-blind process, candidates with a degree from a top school go on to pass their interviews at companies at a 30% higher rate than candidates without the name-brand resume. If interviewers know that a candidate has a degree from MIT, they are more willing to forgive rough spots in the interview.

This is noise, and you should avoid it. The most obvious way is just to strip school and company names from resumes before giving them to your interviewers. Some candidates may mention their school or company, but we do all our interviews without knowing the candidates' backgrounds, and it's actually pretty rare for a candidate to bring it up during technical evaluation.

8. Avoid hazing

One of the ugliest ways interviews can fail is that they can take on an aspect of hazing. They're not just about evaluating the skill of a candidate, they're also about a group or team admitting a member. In that second capacity, they can become a rite of passage. Yes, the interview is stressful and horrible, but we all went through it, so the candidates should too. This can be accentuated when a candidate is doing badly. As an interviewer, it can be frustrating to watch a candidate beat their head against a problem, when the answer seems so obvious! You can get short-tempered and frustrated. This, of course, only increases the stress for the applicant in a downward spiral.

This is something you want to stay a mile away from. The solution is talking about the issue and training the interviewers. One trick that we use, when a candidate is doing really poorly, is to switch from evaluation mode, where the goal is to judge the candidate, to teaching mode, where the goal is to make the candidate understand the answer to the question. Mentally making the switch can help a lot. When you're in teaching mode, there's no reason to withhold information or be anything other than friendly.

9. Make decisions based on max skill, not average or min skill

So far, I've only talked about individual questions, not the final interview decision. My advice here is to try to base the decision on the maximum level of skill that the candidate shows (across the skill areas you care about), not the average level or minimum level.

This is likely what you are already doing, intentionally or not! The way hire/no hire decisions are made is that everyone who interviewed a candidate gets together in a meeting, and an offer is made if at least one person is strongly in favor of hiring, and no one is strongly against. To get one interviewer to be strongly in favor, what a candidate needs to do is ace one section of the interview. Across our data, max skill is the attribute that's most correlated with acing at least one section of a company's interview. However, to be made an offer, a candidate also needs no one to be a strong no against them. Strong noes come when a candidate looks really stupid on a question.

Here we find a great deal of noise. There are so many different ways to be a skilled engineer that almost no candidates can master them all. This means if you ask the right (or wrong) question, any engineer can look stupid. Candidates get offers, then, when at least one interview lines up with an area of strength (max skill) and no areas line up with a significant weakness. The problem is that this is noisy. The same engineer who fails one interview because they looked stupid on a question about networking passes other interviews with flying colors because that topic did not come up.

The best solution, I think, is for companies to focus on max skill, and be a little more comfortable making offers to people who looked bad on parts of the interview. This means looking for strong reasons to say yes, and not worrying so much about technical areas where the candidate was weak. I don't want to be absolute about this. There are of course technical areas that just matter to a company. And deciding that you want to have a culture where everyone on the team is at a certain level in a certain area may well make sense. But focusing more on max skill does reduce interview noise.

Why do interviews at all?

A final question I should answer is: why do interviews at all? I'm sure some readers have been gritting their teeth, and saying “why think so much about a broken system? Just use take-home projects! Or just use trial employment!” After all, some very successful companies use trial employment (where a candidate joins the team for a week), or totally replace in-person interviews with take-home projects. Trial employment makes a lot of sense. Spending a week working beside an engineer (or seeing how they complete a substantial project) almost certainly provides a better measure of their abilities than watching them solve interview problems for 1 hour. However, there are two problems that keep trial employment from replacing standard interviews:
  1. Trial employment is expensive for the company. No company can spend a full week with every person who applies. To decide who makes it to the trial, companies must use some other interview process.
  2. Trial employment (and large take-home projects) is expensive for the candidate. Even when they are paid, not all candidates have the time. An engineer working a full-time job, for example, may simply not be able to take the time off. And even if they can, many won't. If an engineer already has job offers in hand, they are less likely to be willing to take on the uncertainty of a work trial. We see this clearly among Triplebyte candidates. Many of the best candidates (with other offers in hand) will simply not do large projects or work trials.
The result of this is that trial employment is an excellent option to offer some candidates. I think if you have the scale to support multiple tracks, adding a trial employment track is a great idea. However, it's not viable as a total replacement for interviews.

Talking to candidates about past experience is also sometimes put forward as a replacement for technical interviews. To see if a candidate can do good work in the future, the logic goes, just see what they've done in the past. We've tested this at Triplebyte, and unfortunately we've not had great results. Communication ability (ability to sell yourself) ended up being a stronger signal than technical ability. It's just too common to find well-spoken people who exaggerate their role (take credit for a team's work), and modest people who downplay what they did. Given enough time and enough questioning, it should be possible to get to the bottom of this. However, we found that within the time limits of a regular interview, talking about past experience is not a general replacement for interviewing. It is a great way to break the ice with a candidate and get a sense of their interests (and judge communication ability and perhaps culture fit). But it's not a viable total replacement for interviews.

Good things about programming interviews!

I want to end this post on a more positive note. For everything that's wrong with interviews, there is a lot that's right about them.

Interviews are direct skill assessment. I have friends who are teachers, who tell me that teacher interviews are basically a measure of communication ability (ability to sell yourself), and a credential. This seems to be true of many professions. Silicon Valley is not a perfect meritocracy. But we do at least try to directly measure the skills that matter, and stay open to the idea that anyone with those skills, regardless of background, can be a great engineer. Credential bias often stands in the way of this. But we've been able to mostly overcome this at Triplebyte, and help a lot of people with unconventional backgrounds get great tech jobs. I don't think Triplebyte would be possible, for example, in the legal field. The reliance on credentials is just too high.

Programmers also choose interviews. While this is a very controversial topic (there are certainly programmers who feel differently), when we've run experiments offering different types of evaluation, we find that most programmers still pick a regular interview. And we find that only a minority of programmers are interested in companies that use trial employment or take-home projects. For better or worse, programming interviews seem to be here to stay. Other types of evaluation are great supplements, but they seem unlikely to replace interviews as the primary way engineers are evaluated. To misquote Churchill, “Interviews are the worst way to evaluate engineers, except for all the other ways that have been tried from time to time.”


Interviewing is hard. Human beings are hopelessly complex. On some level, judging human ability in a 4-hour interview is just a fool's errand. I think it's important to stay humble about this. Any interview process is bound to fail a lot of the time. People are just too complex.

But that's not an argument for giving up. Trying to run a meritocratic process is better than not trying. At Triplebyte, our interview is our product. We brainstorm ideas, we test them, and we improve over time. This, I think, is the approach that's needed to improve how engineers are hired. In this post, I've shared some of the big things we've learned over the last two years. I'd love to get feedback, and hear if these ideas are helpful for people. Send me an email at ammon@triplebyte.com

If you're a company looking for engineers, we'd also love to help you hire. You can send me an email, or check out our companies page.

Thanks to Adora Cheung and Jared Friedman for reading earlier drafts of this post.

[1] I'm limiting this blog post to technical skill assessment. I'll be writing a future post about culture fit, behavioral interviews and non-technical evaluation.

[2] There is of course variation here. At opposite ends of the spectrum we see companies that require a unanimous yes from every interviewer to make a hire, and companies where the hiring manager is solely responsible for the decision.

[3] These numbers are what companies report about their internal candidates. And the numbers vary widely between companies (they report fire rates, for example, as low as 1% and as high as 30%). The numbers are significantly better for Triplebyte candidates. So far, our candidates at companies have received offers after 53% of interviews, and 2% have been fired.

Ammon Bartram
Triplebyte for front-end and mobile engineers (May 2, 2017)

Today, we're launching new versions of the Triplebyte process for front-end and mobile engineers. We started Triplebyte to try to fix some of the problems with programming interviews. Over the last two years, we've built a background-blind interview process, and helped hundreds of engineers get jobs. We've worked with people trying to break into their first job (we helped a pizza delivery person get an engineering job at Instacart), and we've worked with credentialed engineers looking for new opportunities (and helped startups hire their first employee). I'm proud of the process we built. We've convinced major companies to waive their phone screens for our candidates, and globally our candidates receive job offers after 1 out of every 2 interviews they do. (This is about twice the average rate in the industry.) 

But I have a confession to make. Our interviews do not work well for specialists. We built our process by interviewing thousands of engineers, and empirically testing which questions are most predictive of engineering skill. Because most engineers are generalists (and most companies hire primarily generalists), general web engineering has come to dominate what we look for. We do work with front-end and mobile engineers. But until today, we've required that they pass a process dominated by general programming and back-end web concepts.

Today we're changing this. We've spent the last two months repeating the process that we went through when we launched Triplebyte. We've interviewed hundreds of candidates, tested questions, and are now launching background-blind front-end and mobile interviews!

Going deeper

Our new interviews are particularly exciting because they're a big step toward solving a broader problem. One thing I've learned doing 900 background-blind interviews is that skill in one area does not necessarily translate to skill in another (even adjacent) area. We see expert distributed systems folks who do remarkably poorly talking about a simple normalized schema, and strong back-end web developers who choke when talking about JavaScript. It's easy to quip that perhaps these are not skilled engineers. But they are. These are often people who have done important work at successful companies. The truth is that there is no single definition of engineering skill. The field is broader than what any one engineer can master, and as a result everyone will look weak if you ask them the right question. Even among companies hiring generalists, there is not a consensus on what skills make up the core of the discipline (everyone seems to think it's whatever they themselves are best at).

This fact is why engineers who go through our process pass their interviews with companies at an elevated rate. Each company has a specific engineering culture, and values a specific set of skills (either explicitly, or in the practices and questions of interviewers that have built up over time). But companies don't have a good way to telegraph this to applicants. All they can do is fail every engineer who applies and has the wrong set of strengths. What we've done so far at Triplebyte is design an interview that covers the most common areas that the companies we work with care about. We then pass anyone in our interview who is strong in any of these areas, and match them with the companies that care about their areas of strength.

Matching in this way has doubled our candidates' offer rate at companies. But to bring this back to our new front-end and mobile interviews, we've so far been limited by the fact that we give every candidate the same interview. We've only been able to match based on the most common skills. The front-end and mobile interviews change this! We're now at a scale where we can break out specialized tracks, and measure broader skills. This is the direction interviewing needs to move, and front-end and mobile are just the beginning. Our candidates already receive offers after 50% of the interviews they do. With broader data, I think we can push this number up. I think a 75% pass rate is possible.


If you want to give our front-end or mobile (or generalist) process a try, you can create an account here. After entering your details, you can pick which track you want to try (you can go back and try multiple as well). The front-end and mobile processes are new. I'm sure we'll be making tweaks / fixing issues. I'd love any feedback you have on the process (or on this blog post). Send me an email at ammon@triplebyte.com.

If you're a company hiring engineers and want to learn more about using Triplebyte, you can get started here.
Ammon Bartram
tag:blog.triplebyte.com,2013:Post/1106661 2016-12-14T18:21:14Z 2017-03-11T16:50:32Z Does it Make Sense for Programmers to Move to the Bay Area?

If you’re a programmer considering a move to the Bay Area, you probably know at least two basic facts: 1) tech salaries are higher here than elsewhere, and 2) living here is really expensive. Both facts have been true for a long time, but they have become especially true in the past four years. Since 2012 home prices have risen by about 60% and rents by about 70% in both the San Francisco and San Jose metro areas. The absence of any apparent upper limit to these increases has given rise to a new journalistic subgenre, the Bay Area Housing Horror Story. Maybe you’ve heard about the cheapest house in San Francisco, a $350,000 “decomposing wooden shack” whose interior is “unlivable in its current condition”? Or the tent next to Google X that was renting for $895 a month? Or the guy on Reddit who calculated that it would be cheaper to commute daily to the Bay Area from Las Vegas by plane than to rent an apartment in San Francisco?

It’s easy to hear data and stories like these and conclude that programmers moving to the Bay Area are suckers. After all, salaries have not risen by 70% in the past four years. But what this analysis misses is the extent to which this place and time is exceptional. The Bay Area in the early 21st century has produced an astounding number of successful tech companies. Uber was valued at $60 million in 2011 and at around $68 billion in late 2015 [1]; Stripe at around $500 million in 2012 and $9 billion during its most recent funding round; and Twitch at just under $99 million in September 2013, before Amazon acquired it for $970 million less than a year later. There have been many additional large-scale successes during the current boom, along with hundreds of smaller-scale successes that would be considered enormous in other local economies. Of course, Bay Area companies also fail spectacularly (Theranos, Good Technology). But an outsized percentage of tech's biggest successes happen here, and this creates opportunities that simply don't exist in other locations. 

So does it still make sense for programmers to move to the Bay Area? The answer of course is that it depends. SF is very expensive! And there are many other great places to be a programmer. At Triplebyte, we help engineers around the country (and world) get jobs at top Bay Area companies, so we talk to a lot of people facing exactly this calculation. In this blog post, I'm going to go over the publicly available data as well as our internal data, and try to better answer that question. I am going to focus specifically on people looking to work at tech companies, not people trying to found startups (much has already been written about the latter). 

The Baseline: Salary vs. Rent

To begin answering our question, let’s look at the best available data on salaries and rents in the Bay Area (San Francisco and Silicon Valley) and see how they compare both to national figures and to data for another metro area that attracts elite tech talent. Seattle is the obvious choice for our “other metro area” for a number of reasons. It’s home to leading tech companies, and it’s a West Coast city with high living standards, making it an appealing destination for many people who like the Bay Area. Finally, housing costs there are lower than in the Bay Area, giving it a key competitive advantage when it comes to attracting talent.

One major caveat is in order at the outset. It’s hard to get accurate salary information in any field, and it’s even harder to isolate accurate figures exclusive to the tech industry (as opposed to figures for programmers in any industry) in a specific locality. The chart below shows average salaries for software developers from the U.S. Bureau of Labor Statistics, which collects wage data from employers by location and occupation, and from Indeed, a job-search website that collects data from job listings as well as placements and self-reports. Neither data set is perfect (the BLS data are categorized in ways that make them only approximately suitable for our purposes, and the Indeed data are proprietary and of unknown reliability), but they agree on the basic trends. They also broadly confirm trends in a third data set, self-reported salaries for individual companies and job titles on the site Glassdoor, which are specific enough that their reliability can be assessed (though industry-wide data at Glassdoor seem more suspect).
This data suggests that San Jose (i.e., Silicon Valley) salaries are significantly higher than Seattle salaries. The situation in San Francisco, however, is more complicated. The BLS data shows salaries significantly lower than in San Jose. This is not in line with what we see at Triplebyte. Triplebyte candidates taking jobs in San Francisco earn slightly more than candidates taking jobs on the peninsula (and our average comes out at $139,000, right in line with the San Jose numbers). One explanation of the discrepancy may be that San Francisco has a higher percentage of banks and other non-tech companies that employ programmers but don't compete for top talent. 

In any case, our interpretation of the BLS data is broadly corroborated by the Indeed data, which show that San Jose developer salaries are on average $6,000 higher than San Francisco salaries and that San Francisco developer salaries are on average $27,000 higher than Seattle developer salaries. Salary figures for specific companies self-reported at Glassdoor suggest a similar pattern. For example, according to Glassdoor, software engineers working at companies like Google, Facebook, Twitter, Airbnb, and Uber start out at around $115,000 a year, and earn north of $150,000 as senior engineers, regardless of whether they are located in the Valley or San Francisco. The numbers at each career stage are roughly $15,000-20,000 lower at Amazon and Microsoft in Seattle.

Putting all of these data sources together, then, we can estimate that engineers at top tech companies in the Bay Area stand to make between $15,000 and $33,000 more per year than engineers at top tech companies in Seattle.[2]

What happens when we factor in cost of living? The chart below, derived from Zillow’s August 2016 Local Market Reports on the three metro areas, shows that median rent is about $1400-$1500 a month (or roughly $17,000-$18,000 a year) higher in the Bay Area than in the Seattle metro area [3].
So assuming you’re looking throughout the Bay Area for a good deal and you’re comfortable renting rather than buying a home, as most of us are during the early stages of our careers, higher Bay Area salaries at least cover the costs of higher rents. If you are content to live with roommates or otherwise economize on housing, you could potentially save that extra $15,000-$33,000 annually and take better advantage of your higher Bay Area salary. Along these same lines, an ability to live frugally as your career advances would theoretically pay off more handsomely here than in Seattle.
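The arithmetic here can be sanity-checked in a few lines. The figures below are the rounded estimates from this post (salary premium and Zillow rent gap), not precise data:

```python
# Back-of-the-envelope: does the Bay Area salary premium cover the rent premium?
# All figures are illustrative midpoints from the estimates above, not exact data.

salary_premium_low, salary_premium_high = 15_000, 33_000  # Bay Area vs. Seattle, per year

rent_premium_monthly = 1_450                     # midpoint of the $1,400-$1,500 Zillow gap
rent_premium_annual = rent_premium_monthly * 12  # ~$17,400 per year

# Net annual advantage of the Bay Area after paying the higher rent:
net_low = salary_premium_low - rent_premium_annual    # roughly break-even at the low end
net_high = salary_premium_high - rent_premium_annual  # comfortably ahead at the high end

print(f"Net annual advantage: {net_low:,} to {net_high:,} dollars")
```

On these rough numbers the premium ranges from slightly negative to about $15,000 ahead, which is why economizing on housing is what actually converts the higher salary into savings.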

Buying a House

As our careers and lives progress, however, many of us will want to buy rather than rent homes, whether out of a desire to build equity or out of necessity, because we need room for our families. It is here that the advantages of a place like Seattle become noticeable. The median home value (again, from Zillow) across the San Francisco metro area is $807,800, and the median home value in the San Jose metro area is $948,600. The Seattle metro area’s median home value, at $394,600, is less than half that of either Bay Area metro.
While it might be technically possible to accumulate $200,000 or so for a standard downpayment in the Bay Area during the first decade of a tech career, for many of us this goal will be out of reach well into our 30s. As it happens, stories about Bay Area tech workers relocating elsewhere tend to focus on people in their 30s who have saved enough to buy a house in most parts of the country but not in the Bay Area. Often these developers move—you guessed it—to Seattle.

Beyond the Salary/Housing Baseline

If you see yourself wanting to buy a house relatively early in life, the salary and housing data above indicate that it might make sense to start your career in Seattle rather than San Francisco. However, this analysis does not consider equity and career progression. And on these fronts, the Bay Area tech ecosystem seems to bring benefits.

A 2015 report by Hired found that when engineers from the Bay Area relocate to other areas, they out-earn engineers on the local market. Experience in the Bay Area seems to advance careers. Engineers moving from San Francisco to Seattle make an average of $9,000 more than others who get offers in Seattle. This Bay Area premium is even higher in other cities: $16,000 in Boston, $17,000 in Chicago, and $19,000 in San Diego.

Another data set crowd-sourced by the startup Step in early 2016 suggests that for at least a subset of talented developers, working in the Bay Area brings more equity than working in other locations. The Step study compares total compensation at the two dominant Seattle tech companies, Amazon and Microsoft, with compensation at Silicon Valley leaders Google and Facebook. The study finds that compensation is comparable for junior engineers in both locations, with those in the Valley paid slightly more, roughly as you would predict based on the BLS, Indeed, and Glassdoor data. But with increasing experience, total compensation at Google and Facebook comfortably outpaces that at Amazon and Microsoft, largely as a result of much more generous cash and stock bonuses. For comparable job titles in Step’s Level 3 engineer category, for example, total annual compensation is $180,000 at Amazon, $199,000 at Microsoft, $249,700 at Facebook, and $306,500 at Google. That is, senior engineers in the Bay Area appear to earn a $50K to $126K yearly premium in total comp.

This difference in equity does not apply directly to jobs at smaller startups, where equity is likely to come in the form of illiquid options. But here an argument can be made for the Bay Area as well. Startup equity is high variance. In most cases it's worth little (most early-stage startups fail), but in a percentage of cases the startup succeeds spectacularly, and its equity is highly valuable. If you want to make money from startup equity, it's all about joining a company that succeeds. The Bay Area has both an outsized share of startup successes and simply more startups on the ground (making it easier to be picky and search for a startup that is doing well) [4].

Summing Up

The Bay Area in 2016 is to technology as 1930s Detroit was to automobiles or 14th-century Venice was to the European spice trade, except that these and all other historical analogies are unable to capture the magnitude and speed of local tech growth. It only took Uber about five and a half years to exceed the valuations of all but four of the world's top automakers. In 2016, its eighth year of existence, Airbnb was about 25% more valuable than the world's most valuable hotel company (Hilton, founded in 1919). No precedent exists for growth like this, occurring among so many companies and concentrated in such a small geographical area.

This growth creates opportunity. Startup jobs, big company jobs, drone programming in Clojure: the Bay Area has them all. There are simply more tech companies and tech investors here than in any other single location. However, the tech growth has also raised prices and raised tensions. San Francisco is among the most expensive cities in the US (and many people here are not happy about that). If you get a good job, your salary increase will probably cover the costs. But paying exorbitantly for an apartment may feel burdensome no matter how big your income is.

So should you move to the Bay Area? Not all startups succeed, and not everyone lands a senior engineer position at a name-brand company. And you may have lifestyle ambitions that simply won't fit with what's available here. If a comfortable house with a big lawn is a non-negotiable part of your vision for yourself, you probably shouldn’t move to the Bay Area. You also may just prefer to live somewhere else!

If, however, you're looking to maximize your probability of joining the next Google (or Google itself), moving to the Bay Area probably makes sense. The salaries here do cover the higher cost of living, and if you are able to capitalize on the additional opportunities that are uniquely available here, you could end up doing much more than covering costs. This is the heart of the industry, and it's an exciting time to live and work in the Bay Area.

If you are interested in moving to the Bay Area and joining a successful company or a tiny startup, Triplebyte can help you find a job. Give our process a try here.

[1] Using private valuations as a measure of growth in an industry is dangerous. Selling 1% of a company for $10 million does not necessarily mean that the entire company is worth $1 billion (this is especially true when investors are given liquidation preference). When Uber and Airbnb go public, we may see their valuations go down (perhaps more in line with GM or Hilton). Silicon Valley in general could be in a bubble. This is a reasonable concern. But even at half their current valuations, Bay Area companies represent outstanding success. And while there are negative cases (Groupon, Theranos) there are also positive cases (Facebook, Tesla).

[2] Taxes are a significant caveat to this. CA has a pretty high state income tax, and WA has none. The rates in CA are graduated, making this a little complicated, but putting a $130k income into a CA state tax estimator gives $9,190 in state taxes. State income taxes can also be taken as a deduction on federal income taxes (in some cases), lowering this.
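For readers curious how a graduated schedule works mechanically, here is a sketch. The bracket thresholds and rates below are round, hypothetical numbers chosen for illustration; they are not the actual California schedule:

```python
# Sketch of how a graduated (marginal-rate) income tax is computed.
# Brackets below are HYPOTHETICAL round numbers for illustration only,
# not the actual California tax schedule.
BRACKETS = [
    (0, 0.01),         # 1% on income up to the next threshold
    (10_000, 0.02),
    (30_000, 0.04),
    (50_000, 0.08),
    (100_000, 0.093),  # top rate applies only above $100k
]

def graduated_tax(income, brackets=BRACKETS):
    """Apply each marginal rate only to the slice of income inside its bracket."""
    tax = 0.0
    for i, (lower, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > lower:
            tax += (min(income, upper) - lower) * rate
    return tax

print(graduated_tax(130_000))
```

Each marginal rate applies only to the slice of income inside its bracket, which is why a $130k earner's effective rate ends up well below the top marginal rate.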

[3] Both the rental and median home value numbers used here are for the San Francisco, San Jose and Seattle Metro Areas (each city and surrounding urban area). The numbers change if we just look within the city limits (San Francisco proper is more expensive than San Jose proper). However, the conclusions remain the same.

[4] We find that engineers often under-value startup success (growth rate, revenue) when looking for jobs, and instead place an emphasis on brand recognition, or whether they find the subject area exciting. Now, I don't mean to judge anyone for this; working in an area of passion may be a great choice. But if your goal is to maximize your financial outcome, looking at startups more like an investor and picking a company in a big market on a promising trajectory is likely a winning strategy. The Bay Area, with its large number of startups, is probably the best place to do this.

Mark Lane
tag:blog.triplebyte.com,2013:Post/1102841 2016-11-07T18:59:32Z 2017-07-03T04:48:53Z 12,000 engineers evaluated

We launched Triplebyte with the goal of building the first credentials-blind hiring process for engineers. Our mission is to give anyone who has the right skills the opportunity to work at the best technology companies in the world, regardless of what school they went to or which companies they've worked at.

We've now evaluated over 12,000 engineers without using their resumes. We've done this by designing a two-step process. The first is an online programming test. If you do well on the test, the next step is a technical interview with our interviewing team, where the interviewer knows nothing about your background. We've now interviewed over 2,000 engineers, and 15% made it through to the final step of being introduced to the companies we work with.

To put that into context, companies the size of Airbnb or Dropbox would expect to do technical interviews with approximately 50 engineers a month. We're already interviewing 3x that number every month, and we've built software to track every tiny detail of what happens during these technical interviews. This means we're getting data on how to accurately interview an engineer faster than anyone. In total we've now done over 3,000 hours of technical interviewing, or 127 full days.

Once an engineer makes it through our process, we match them with companies they'll be a good technical fit for. As we wrote before, companies disagree significantly about the types of engineers they want to hire. We're optimizing our matching process for accuracy, so we gather as much data as we can about the technical preferences of each engineering team and use that to match engineers with them. We get this by having the current engineering team complete a technical questionnaire that gives us a fingerprint of their hiring preferences. This matching model is working really well. At partner companies like Dropbox and Cruise, we're seeing offer rates on our candidates of over 60%. That's more than 2x better than the average they see on their own candidates (the industry average is about a 25-30% onsite-to-offer rate, i.e. about 1 in 4 engineers who make it to an onsite will receive an offer), and our candidates go straight to an onsite interview, skipping recruiter and phone screens.

What's really exciting about such a high offer rate is that we're achieving it without doing any culture fit screening. Our process *only* looks at technical skills, and that's the data we use for matching engineers to companies. That suggests the way for companies to hire more engineers is to get better at identifying candidates with the right skills early on, not to do more culture fit screening early on.

We can also beat companies on the most important metric of all: the number of internal engineering hours they have to spend per new engineering hire. Sequoia recently estimated that it takes a company 82.5 total hours to hire an engineer. Around 30-35 of these are engineering hours. We're able to deliver an engineering hire at an average “cost” of 15 internal engineering hours.

We're only able to interview so many engineers each month because our programming test can accurately identify good engineers. After many iterations, our test now has 70% precision (precision means: of the engineers our test identifies as good, what fraction actually are). This means we can identify great programmers with much higher accuracy than a resume screen, and only interview the good ones.
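Concretely, precision is just true positives over all predicted positives. A toy calculation (the counts here are hypothetical, not our actual numbers):

```python
# Precision of a screening test: of the candidates the test passes,
# what fraction actually perform well in the subsequent interview?
# The counts below are hypothetical, for illustration only.

def precision(true_positives, false_positives):
    """Fraction of predicted positives that are genuinely positive."""
    return true_positives / (true_positives + false_positives)

# e.g. the test passes 100 candidates, and 70 of them go on to do well:
p = precision(true_positives=70, false_positives=30)
print(f"precision = {p:.0%}")
```

Note that precision says nothing about the good engineers the test rejects (that's recall); a screen can have high precision while still turning away strong candidates.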

What's unique about the programming test is that we've iterated on the questions we ask using actual data from interview outcomes at the companies we work with (i.e. we correlate performance on specific questions with performance on the onsite interviews at companies like Dropbox). We only keep the questions that have high signal and continually replace low signal questions with new ones. This process is the only way you can build a test that is actually accurate at screening for what companies want. 

The data set we're building is unique and we use it to improve the accuracy of our process over time. The data set is unique because:

  1. We track a lot more data from each interview than a typical company would. We've built specialized software to track data points like how long it takes to complete each section of a problem and what the interviewer is thinking every 5 minutes throughout the interview.
  2. We also get data on how our engineers perform in the onsite interviews at the companies we work with. We use this data to build a predictive model specific to each company, based on our engineering genome, which we continually update over time.
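As a rough illustration of what a per-company predictive model could look like, here is a minimal pure-Python logistic regression trained on interview-feature vectors and onsite outcomes. The feature names, scores, and outcomes are entirely made up, and any real production model would be more involved:

```python
# Sketch: predict a candidate's chance of an onsite offer at one company
# from interview scores, using logistic regression trained by stochastic
# gradient descent. All data below is invented for illustration.
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit weights and bias by minimizing logistic loss with SGD."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))   # predicted offer probability
            err = p - yi                 # gradient of logistic loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability of an offer for feature vector x."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# Hypothetical features: [algorithms score, practical score, web-systems score]
X = [[4, 2, 3], [1, 4, 4], [3, 3, 2], [2, 1, 1], [4, 4, 4], [1, 2, 1]]
y = [1, 1, 1, 0, 1, 0]  # 1 = the candidate received an onsite offer
w, b = train_logistic(X, y)
```

The learned weights act as the company's "fingerprint": a company that weights the algorithms score heavily will produce a very different model than one that weights practical skills.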

It's really exciting for us to see that a credentials blind technical evaluation can work at identifying good engineers and matching them to the right companies.

tag:blog.triplebyte.com,2013:Post/1052313 2016-05-19T16:46:19Z 2017-07-19T15:10:33Z Bootcamps vs. College

Programming bootcamps seem to make an impossible claim. Instead of spending four years in university, they say, you can learn how to be a software engineer in a three month program. On the face of it, this sounds more like an ad for Trump University than a plausible educational model.

But this is not what we’ve found at Triplebyte. We do interviews with engineers, and match them with startups where they’ll be a good fit. Companies vary widely in what skills they look for, and by mapping these differences, we’re able to help engineers pass more interviews and find jobs they would not have found on their own. Over the last year, we’ve worked with about 100 bootcamp grads, and many have gone on to get jobs at great companies. We do our interviews blind, without knowing a candidate's background, and we regularly get through an interview and give a candidate very positive scores, only to be surprised at the end when we learn that the candidate has only been programming for 6 months.

Bootcamp grads are junior programmers. They have a lot to learn, and represent an investment on the part of a company that hires them. That said, this is also true of recent college graduates. We’ve found bootcamp grads as a group to be better than college grads at web programming and writing clean, modular code, and worse at algorithms and understanding how computers work. All in all, we’ve had roughly equivalent success working with the two groups.

In this post, I'm going to try to shed some light on how this can be true. I’ll dig more into the differences that we see between the two groups, and hopefully explain how some people can become competitive junior programmers in under a year.

The Analysis

Our technical interview at Triplebyte is about two and a half hours long, and is broken into four main parts, focusing on practical programming, web architecture, low-level system understanding, and algorithmic understanding. Not every engineer completes every question (we let programmers focus on their strengths), but the process gives us a good measure of the relative strengths of each engineer in each of these areas.

To get a better idea of how bootcamp grads and college grads compare, I graphed the two groups’ average performance in each of these areas. The y axis is the score on each problem (where 1 = strong no, 2 = weak no, 3 = weak yes, 4 = strong yes). For reference, I also included the entire population of applicants, and also all engineers who pass our interview. 
The first thing to note about this graph is that bootcamp grads do as well as or better than college grads on practical programming and web system design, and do worse on algorithms and low-level systems. Our practical programming questions are not easy. They require understanding a problem, coming up with an abstraction to solve the problem, and rendering this in code. If anything, our practical programming questions require more on-the-spot thinking than our algorithm problems do. They do not, however, require academic CS or math, or any specific knowledge. This is the crux of the issue. Bootcamp grads match or beat college grads on practical skills, and lose on deep knowledge.

A similar pattern holds on the design questions. Bootcamp grads do better on web questions involving web servers, databases and load balancers. College grads do better on low-level design questions involving bits and bytes, threading, memory allocation, and understanding how a computer actually works.

Triplebyte sees a biased sample of both bootcamp grads and college grads. We do background-blind screening via an online programming test, and only interview engineers who pass this test. Thus we have no way to know what percentage of bootcamp grads and college grads fail early in our process, and the graph above reflects only people who pass our test. Still, a significant number of bootcamp grads pass our test and go on to do as well as college grads on our interviews.

I want to specifically draw attention to the performance of college grads on algorithm problems. They are not only better than bootcamp grads, they are a lot better. They are significantly better than the average programmer making it to our interview (most of whom have 2+ years of experience), and almost as good as the average engineer we pass. This is interesting. It backs up the assertion that algorithm skills are not used on the job by most programmers, and atrophy over time.

How is this possible?

Our data aside, it still just seems hard to believe that 3 months can compete with a 4-year university degree. The time scales just seem off. The first thing to note is that the difference in instructional time is not as large as it seems. Bootcamps are intense. Students complete 8 hours of work daily, and many stay late and work on the weekends (one popular bootcamp runs 6 days per week). TAs are working with the students during this entire time. What bootcamps lack in duration they perhaps make up in intensity.

The second point is that bootcamps teach practical skills. Traditional CS programs spend significant amounts of time on concepts like NP-completeness and programming in Scheme. Now, I in no sense mean to belittle this material. I love academic CS (and the Cook–Levin theorem). It’s beautiful, fascinating stuff, and I got a lot out of learning it. But it is not directly applicable to what most programmers do most of the time. Bootcamps are able to show outsized results by relentlessly focusing on practical skills. Bootcamp TAs continually critique each student's coding style. They teach testing. They teach their students how to use their editors. How to use an editor is something that a traditional CS degree program would never think of teaching.

This does not leave bootcamp grads equivalently skilled to university grads. If you want to do hard algorithmic or low-level programming, you’re still better served by a traditional CS education. But it does leave the best of them surprisingly competitive for many entry-level development positions.


There are two ways to interpret the results in this blog post. One way is to say that bootcamps are window dressing. They teach inexperienced programmers what they need to know to look like good programmers, but skimp on the heart of the discipline. However, I think this view is too cynical. The other way to view this post is as evidence that bootcamps focus on totally different areas than CS programs. They focus intensely on the practical skills required to be a productive programmer. These are skills that CS programs expect students to pick up around the edges of their course work. By being this pragmatic and giving students an intense workload, bootcamps are able to match the practical skills of CS grads.

Bootcamp grads don’t make sense for all companies. Just like recent college grads, they are an investment for a company that hires them. They have much to learn. And they are clearly worse at algorithms and low-level systems than engineers with academic training. A database or self-driving-car company should probably stick to folks with CS degrees. But the significant majority of companies need programmers to solve practical problems on the web. On this axis, we’ve found bootcamp grads totally competitive.

Triplebyte is one year old. In that time, we’ve both placed bootcamp grads at top companies, and also watched them grow. We’ve watched them learn some of the CS skills that they lack on graduation. We’ve watched them learn about large-scale production systems. We’ve watched them take on leadership positions. It’s really incredible how quickly and how well the best bootcamp grads learn. It’s been a pleasure to work with them, and we’ll definitely keep working with bootcamp grads.

If you’re a bootcamp grad (or a college grad, or anyone else), and are interested in a way to find companies where you’re a strong technical match, give our process a try. I'm also interested in your thoughts on this post! Send me an email at ammon@triplebyte.com.

Thanks to Jared Friedman and Daniel Gackle for reading drafts of this, and Buck Shlegeris for major help writing it.
Ammon Bartram
tag:blog.triplebyte.com,2013:Post/1045715 2016-05-04T17:42:03Z 2017-05-08T18:10:14Z Triplebyte Engineer Genome Project

We launched Triplebyte last year with the goal of building a hiring process focused on evaluating skills and not credentials. Since then we've evaluated over 10,000 engineers without using their resumes, and helped them join companies ranging from three person startups up to Dropbox. Doing thousands of blind technical interviews has forced us to get really good at identifying programming skills directly, and allowed us to work with engineers from a great diversity of backgrounds.

One of the most surprising things we've learned through this process is just how much companies differ in which programming skills they value most. A deliberate, academic programmer, for example, may do extremely well at one company, which thinks that she will be able to tackle tough problems. Other companies instead want fast, intuitive thinkers, and will reject this same engineer on the grounds that she may not be productive enough. Some companies want all their engineers to understand deeply how a computer works. Candidates have no way to know what specific companies prefer, and this results in a large amount of wasted time.  Rejection is also demoralizing and we've seen many engineers, especially those working outside Silicon Valley, start questioning their own abilities after a few failed technical interviews.   

To make the process of finding the right company better for engineers, we're announcing the Triplebyte Engineer Genome project. Using the data we've gathered through our technical interviews, we've mapped out the engineering attributes that technology companies care about most, and measured how the companies we work with weigh these attributes differently. We've used this data to build software that intelligently matches engineers with the companies where they're the best technical fit, and we're using this software with engineers who go through our process.

Intelligent matching with software is how hiring should work. Failed technical interviews are a big loss for both sides. They cost companies their most valuable resource, engineering time. Applicants lose time they could have spent interviewing with another company that would have been a better fit. 

Moving skills assessment and company matching into software also has another important consequence. It increases diversity in the hiring pool. If companies can trust that an applicant has the technical skills they're looking for, it gives them confidence to speak with candidates who lack the usual credentials of attending a top school or working at a prestigious company. 

We've built up the list of engineering attributes in the Triplebyte Engineer Genome by collecting a large amount of data from the thousands of blind technical interviews we've completed ourselves. We've then tracked how the data we've collected about each engineer matches with their interview performance at top technology companies we work with. This has been a huge schlep but it's the only way to build a matching system that actually works.

By evaluating this many engineers and working with over a hundred companies, we've seen how little consensus there is on what a "great engineer" means to any single company.  We’ve calculated statistically the extent to which interviewers at different companies agree about which candidates are good and which are bad (for the statistics nerds, we calculated the inter-rater reliability), and found it to be about the same as the extent to which people agree on which movies on Netflix are best.
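
For the statistics nerds, here is a quick sketch of one standard inter-rater reliability measure, Cohen's kappa, applied to two interviewers' pass/fail verdicts. The verdicts below are invented for illustration; they are not our data, and kappa is only one of several ways to compute this.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    # Observed agreement: fraction of candidates both raters scored the same.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater judged at random with their own base rates.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum((counts_a[k] / n) * (counts_b[k] / n) for k in counts_a)
    return (observed - expected) / (1 - expected)

# Hypothetical verdicts from two interviewers on the same ten candidates.
a = ["pass", "pass", "fail", "pass", "fail", "fail", "pass", "fail", "pass", "fail"]
b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "fail", "fail", "fail"]
print(cohens_kappa(a, b))  # ~0.4
```

A kappa of 1.0 is perfect agreement and 0 is chance-level agreement, which gives a feel for how weak "about the same as agreement on Netflix movies" actually is.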

Recruiting services today avoid tackling this problem altogether. Mapping what companies actually want is a much harder problem than scaling the traditional recruiting agency model of spamming companies and candidates. There are so many different engineering attributes you could conceivably look for that it's hard to narrow them down to a concise list. Even if you could cleanly identify these attributes, assigning the right weight to each one adds another layer of complexity. It's too much to expect a single person or team making hiring decisions to do this well.

Zooming out, it's too much to expect a single company to be capable of solving this problem either. Google arguably does the best job and their data is still limited to (1) engineers who applied to Google (2) finding which attributes are most important for success at Google. Most companies are bad at identifying what a great engineer looks like. Even the famous ones get it badly wrong, like Facebook rejecting WhatsApp founder Jan Koum (they did eventually hire him but the price went up a bit).

Triplebyte is uniquely positioned to fix this. Over the past year, we've collected both quantitative (e.g. time to complete milestones within programming problems) and qualitative (e.g. problem solving approaches or code quality) data from several thousand technical interviews, and have used it to create a list of the engineering skills most important to technology companies - the Triplebyte Engineer Genome. These are:
  1. Applied problem solving
  2. Algorithms knowledge
  3. Professional code
  4. Communication skill
  5. Architecture skill
  6. Low-level systems understanding
  7. Back-end web understanding
By scoring engineers on these criteria and then assigning weights to each company based on empirical observations of its hiring decisions, we can use software to identify engineering skill better than humans can. We're excited about this because it moves us towards removing human biases from the hiring process altogether. Humans making judgement calls about objectively measurable skills introduce bias and hurt diversity. If we want more diversity in tech, this needs to be done with machines crunching objective data.
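
To make the idea concrete, here is a minimal sketch of matching on weighted attribute scores. The scores and per-company weights below are invented for illustration; this is the shape of the approach, not Triplebyte's actual model.

```python
# Hypothetical attribute scores (0-10) for one engineer, keyed by the
# genome attributes listed above.
engineer = {
    "applied_problem_solving": 8,
    "algorithms": 5,
    "professional_code": 9,
    "communication": 7,
    "architecture": 6,
    "low_level_systems": 3,
    "back_end_web": 8,
}

# Invented per-company weights, as if learned from past hiring decisions.
companies = {
    "ProductCo": {"applied_problem_solving": 0.3, "professional_code": 0.3,
                  "communication": 0.2, "back_end_web": 0.2},
    "InfraCo":   {"algorithms": 0.3, "low_level_systems": 0.4,
                  "architecture": 0.3},
}

def match_score(engineer, weights):
    # Weighted sum of the engineer's scores under one company's weights.
    return sum(w * engineer.get(attr, 0) for attr, w in weights.items())

# Rank companies by fit for this engineer, best first.
ranked = sorted(companies, key=lambda c: match_score(engineer, companies[c]),
                reverse=True)
print(ranked[0])  # ProductCo
```

The same engineer scores 8.1 under ProductCo's weights but only 4.5 under InfraCo's, which is exactly why a rejection at one company says so little about fit at another.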

We expect the list of attributes in our Engineer Genome to continue evolving over time as we gather more data on what companies are looking for. We'd welcome your thoughts or feedback on the project and thanks to everyone who has completed the Triplebyte technical evaluation.
tag:blog.triplebyte.com,2013:Post/1009250 2016-03-08T17:38:59Z 2017-07-05T06:01:52Z How to pass a programming interview

This post started as the preparation material we send to our candidates, but we decided to post it publicly.

Being a good programmer has a surprisingly small role in passing programming interviews. To be a productive programmer, you need to be able to solve large, sprawling problems over weeks and months. Each question in an interview, in contrast, lasts less than one hour. To do well in an interview, then, you need to be able to solve small problems quickly, under duress, while explaining your thoughts clearly. This is a different skill [1]. On top of this, interviewers are often poorly trained and inattentive (they would rather be programming), and ask questions far removed from actual work. They bring bias, pattern matching, and a lack of standardization.

Running Triplebyte, I see this clearly. We interview engineers without looking at resumes, and fast-track them to on-sites at YC companies. We’ve interviewed over 1000 programmers in the last nine months. We focus heavily on practical programming, and let candidates pick one of several ways to be evaluated. This means we work with many (very talented) programmers without formal CS training. Many of these people do poorly on interviews. They eat large sprawling problems for breakfast, but they balk at 45-min algorithm challenges.

The good news is that interviewing is a skill that can be learned. We’ve had success teaching candidates to do better on interviews. Indeed, the quality that most correlates with a Triplebyte candidate passing interviews at YC companies is not raw talent, but rather diligence. 

I fundamentally do not believe that good programmers should have to learn special interviewing skills to do well on interviews. But the status quo is what it is. We’re working at Triplebyte to change this. If you’re interested in what we’re doing, we’d love you to check out our process. In the meantime, if you do want to get better at interviewing, this blog post describes how we think you can most effectively do so. 

1. Be enthusiastic

Enthusiasm has a huge impact on interview results. About 50% of the Triplebyte candidates who fail interviews at companies fail for non-technical reasons. This is usually described by the company as a “poor culture fit”. Nine times out of ten, however, culture fit just means enthusiasm for what a company does. Companies want candidates who are excited about their mission. This carries as much weight at many companies as technical skill. This makes sense. Excited employees will be happier and work harder.

The problem is that this can be faked. Some candidates manage to convince every company they talk to that it’s their dream job, while others (who are genuinely excited) fail to convince anyone. We’ve seen this again and again. The solution is for everyone to get better at showing their enthusiasm. This is not permission to lie. But interviewing is like dating. No one wants to be told on a first date that they are one option among many, even though this is usually the case. Similarly, most programmers just want a good job with a good paycheck. But stating this in an interview is a mistake. The best approach is to prepare notes before an interview about what you find exciting about the company, and bring this up with each interviewer when they ask if you have any questions. A good source of ideas is to read the company’s recent blog posts and press releases and note the ones you find exciting.

This idea seems facile. I imagine you are nodding along as you read this. But (as anyone who has ever interviewed can tell you) a surprisingly small percentage of applicants do this. Carefully preparing notes on why you find a company exciting really will increase your pass rate. You can even reference the notes during the interview. Bringing prepared notes shows preparation.

2. Study common interview concepts

A large percentage of interview questions feature data structures and algorithms. For better or worse, this is the truth. We gather question details from our candidates who interview at YC companies (we’ll be doing an in-depth analysis of this data in a future article), and algorithm questions make up over 70% of the questions that are asked. You do not need to be an expert, but knowing the following list of algorithms and data structures will help at most companies.

  • Hash tables
  • Linked lists
  • Breadth-first search, depth-first search
  • Quicksort, merge sort
  • Binary search
  • 2D arrays
  • Dynamic arrays
  • Binary search trees
  • Dynamic programming
  • Big-O analysis

Depending on your background, this list may look trivial, or may look totally intimidating. That’s exactly the point. These are concepts that are far more common in interviews than they are in production web programming. If you’re self-taught or years out of school and these concepts are not familiar to you, you will do better in interviews if you study them. Even if you do know these things, refreshing your knowledge will help. A startlingly high percentage of interview questions reduce to breadth-first search or the use of a hash table to count uniques. You need to be able to write a BFS cold, and you need to understand how a hash table is implemented.
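
To make those two staples concrete, here is what a from-scratch BFS and a hash-table unique count look like in Python (illustrative shapes, not questions from any particular company):

```python
from collections import deque

def bfs_distances(graph, start):
    """Shortest hop counts from start in an unweighted graph (adjacency dict)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in dist:       # the visited check doubles as the distance map
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

def count_uniques(items):
    """Hash table (a plain dict) counting occurrences -- the other staple."""
    counts = {}
    for item in items:
        counts[item] = counts.get(item, 0) + 1
    return counts

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs_distances(graph, "a"))     # {'a': 0, 'b': 1, 'c': 1, 'd': 2}
print(count_uniques("abracadabra"))
```

If you can write both of these cold, without pausing to reconstruct them, a large fraction of interview questions become variations on a theme.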

Learning these things is not as hard as many of the people we talk to fear. Algorithms are usually described in academic language, and this can be off-putting. But at its core, nothing on this list is more complicated than the architecture of a modern web app. If you can build a web app (well), you can learn these things. The resource that I recommend is the book The Algorithm Design Manual by Steven Skiena. Chapters 3 through 5 do a great job of going over this material, in a straightforward way. It does use C and some math syntax, but it explains the material well. Coursera also has several good algorithms courses. This one, in particular, focuses on the concepts that are important in interviews.

Studying algorithms and data structures helps not only because the material comes up in interviews, but also because the approach to problems taken in an algorithm course is the same approach that works best in interviews. Studying algorithms will get you in an interview mindset.

3. Get help from your interviewer

Interviewers help candidates. They give hints, they respond to ideas, and they generally guide the process. But they don’t help all candidates equally. Some programmers are able to extract significant help, without the interviewer holding it against them. Others are judged harshly for any hints they are given. You want to be helped.

This comes down to process and communication. If the interviewer likes your process and you communicate well with them, they will not mind helping. You can make this more likely by following a careful process. The steps I recommend are:

  1. Ask questions
  2. Talk through a brute-force solution
  3. Talk through an optimized solution
  4. Write code

After you are asked an interview question, start by clarifying what was asked. This is the time to be pedantic. Clarify every ambiguity you can think of. Ask about edge cases. Bring up specific examples of input, and make sure you are correct about the expected output. Ask questions even if you’re almost sure you know the answers. This is useful because it gives you a chance to come up with edge cases and fully spec the problem (seeing how you handle edge-cases is one of the main things that interviewers look for when evaluating an interview), and also because it gives you a minute to collect your thoughts before you need to start solving the problem.

Next, you should talk through the simplest brute-force solution to the problem that you can think of. You should talk, rather than jump right into coding, because you can move faster when talking, and it’s more engaging for the interviewer. If the interviewer is engaged, they will step in and offer pointers. If you retreat into writing code, however, you'll miss this opportunity. 

Candidates often skip the brute-force step, assuming that the brute-force solution to the problem is too obvious, or wrong. This is a mistake. Make sure that you always give a solution to the problem you’ve been asked (even if it takes exponential time, or an NSA supercomputer). When you’ve described a brute-force solution, ask the interviewer if they would like you to implement it, or come up with a more efficient solution. Normally they will tell you to come up with a more efficient solution.

The process for the more efficient solution is the same as for the brute force. Again talk, don’t write code, and bounce ideas off of the interviewer. Hopefully, the question will be similar to something you’ve seen, and you’ll know the answer. If that is not the case, it’s useful to think of what problems you’ve seen that are most similar, and bring these up with the interviewer. Most interview questions are slightly-obscured applications of classic CS algorithms. The interviewer will often guide you to this algorithm, but only if you begin the process.

Finally, after both you and your interviewer agree that you have a good solution, you should write your code. Depending on the company, this may be on a computer or a whiteboard. But because you’ve already come up with the solution, this should be fairly straightforward. For extra points, ask your interviewer if they would like you to write tests.
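
To show how the brute-force-then-optimize progression might play out, here is a classic question (two-sum: find two indices whose values add to a target), chosen purely as an illustration:

```python
def two_sum_brute(nums, target):
    # Step 2: the brute-force answer you talk through first. O(n^2), but it
    # proves you can solve the problem as stated.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return (i, j)
    return None

def two_sum_fast(nums, target):
    # Step 3: the optimized version. A hash table of values already seen
    # brings it down to O(n).
    seen = {}
    for i, value in enumerate(nums):
        if target - value in seen:
            return (seen[target - value], i)
        seen[value] = i
    return None

nums = [2, 7, 11, 15]
assert two_sum_brute(nums, 9) == two_sum_fast(nums, 9) == (0, 1)
```

Notice that the optimization is a standard move (trade memory for time via a hash table); interviewers are listening for exactly that kind of transition.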

4. Talk about trade-offs

Programming interviews are primarily made up of programming questions, and that is what I have talked about so far. However, you may also encounter system design questions. Companies seem to like these especially for more experienced candidates. In a system design question, the candidate is asked how he or she would design a complex real-world system. Examples include designing Google maps, designing a social network, or designing an API for a bank.

The first observation is that answering system design questions requires some specific knowledge. Obviously no one actually expects you to design Google maps (that took a lot of people a long time). But they do expect you to have some insight into aspects of such a design. The good news is that these questions usually focus on web backends, so you can make a lot of progress by reading about this area. An incomplete list of things to understand is:
  • HTTP (at the protocol level)
  • Databases (indexes, query planning)
  • CDNs
  • Caching (LRU cache, memcached, redis)
  • Load balancers
  • Distributed worker systems
You need to understand these concepts. But more importantly, you need to understand how they fit together to form real systems. The best way to learn this is to read about how other engineers have used the concepts. The blog High Scalability is a great resource for this. It publishes detailed write-ups of the back-end architecture at real companies. You can read about how every concept on the list above is used in real systems.
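
As one small example from the list, an LRU cache (a recurring building block in these discussions) can be sketched in a few lines with an ordered dict; this is an illustrative toy, not production cache code:

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache: evicts the entry untouched the longest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used entry

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" is now most recently used
cache.put("c", 3)      # evicts "b", the least recently used
print(cache.get("b"))  # None
```

Being able to produce something like this, and then discuss when you would reach for memcached or redis instead, is the level of fluency these questions test.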

Once you’ve done this reading, answering system design questions is a matter of process. Start at the highest level, and move downward. At each level, ask your interviewer for specifications (should you suggest a simple starting point, or talk about what a mature system might look like?) and talk about several options (applying the ideas from your reading). Discussing tradeoffs in your design is key. Your interviewer cares less about whether your design is good in itself, and more about whether you are able to talk about the trade-offs (positives and negatives) of your decisions. Practice this.

5. Highlight results

The third type of question you may encounter is the experience question. This is where the interviewer asks you to talk about a programming project that you completed in the past. The mistake that many engineers make on this question is to talk about a technically interesting side-project. Many programmers choose to talk about implementing a neural network classifier, or writing a Twitter grammar bot. These are bad choices because it’s very hard for the interviewer to judge their scope. Many candidates exaggerate simple side projects (sometimes ones that never actually worked), and the interviewer has no way to tell if you are doing this.

The solution is to choose a project that produced results, and highlight the results. This often involves picking a less technically interesting project, but it’s worth it. Think (ahead of time) of the programming you’ve done that had the largest real-world impact. If you’ve written an iOS game, and 50k people have downloaded it, the download number makes it a good option. If you’ve written an admin interface during an internship that was deployed to the entire admin staff, the deployment makes it a good thing to talk about. Selecting a practical project will also communicate to the company that you focus on actual work. A programmer too focused on interesting tech is an anti-pattern that companies screen against (these programmers are sometimes not productive).

6. Use a dynamic language, but mention C

I recommend that you use a dynamic language like Python, Ruby or JavaScript during interviews. Of course, you should use whatever language you know best. But we find that many people try interviewing in C, C++ or Java, under the impression that these are the “real” programming languages. Several classic books on interviewing recommend that programmers choose Java or C++. At startups at least, we’ve found that this is bad advice. Candidates do better when using dynamic languages. This is true, I think, because of dynamic languages’ compact syntax, flexible typing, and list and hash literals. They are permissive languages. This can be a liability when writing complex systems (a highly debatable point), but it’s great when trying to cram binary search onto a whiteboard.
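
For instance, the binary search just mentioned fits comfortably on a whiteboard in Python:

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if absent."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1      # target is in the upper half
        else:
            hi = mid - 1      # target is in the lower half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
print(binary_search([1, 3, 5, 7, 9], 4))  # -1
```

No type declarations, no boilerplate: every line you write is the algorithm itself, which is the whole advantage under whiteboard conditions.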

No matter what language you use, it’s helpful to mention work in other languages. An anti-pattern that companies screen against is people who only know one language. If you do only know one language, you have to rely on your strength in that language. But if you’ve done work or side-projects in multiple languages, be sure to bring this up when talking to your interviewers. If you have worked in lower-level languages like C, C++, Go, or Rust, talking about this will particularly help.

Java, C# and PHP are a problematic case. As we described in our last blog post, we’ve uncovered bias against these languages in startups. We have data showing that programmers using these languages in the interview pass at a lower rate. This is not fair, but it is the truth. If you have other options, I recommend against using these languages in interviews with startups.

7. Practice, practice, practice

You can get much better at interviewing by practicing answering questions. This is true in part because interviews are stressful, and stress harms performance. The solution is practice. Interviewing becomes less stressful with exposure. This happens naturally with experience. Even within a single job search, we find that candidates often fail their initial interviews, and then pass more as their confidence builds. If stress is something you struggle with, I recommend that you jumpstart this process by practicing interview stress. Get a list of interview questions (the book Cracking the Coding Interview is one good source) and solve them. Set a 20-minute timer on each question, and race to answer. Practice writing the answers on a whiteboard (not all companies require this, but it’s the worst case, so you should practice it). A pen on paper is a pretty good simulation of a whiteboard. If you have friends who can help you prepare, taking turns interviewing each other is great. Reading a lot of interview questions has the added benefit of providing you with ideas to use in actual interviews. A surprising number of questions are re-used (in full or in part).

Even experienced (and stress-free) candidates will benefit from this. Interviewing is a fundamentally different skill from working as a programmer, and it can atrophy. But experienced programmers often (reasonably) feel that they should not have to prepare for interviews. They study less. This is why junior candidates often actually do better on interview questions than experienced candidates. Companies know this, and, paradoxically, some tell us they set lower bars on the programming questions for experienced candidates.

8. Mention credentials

Credentials bias interviewers. Triplebyte candidates who have worked at a top company or studied at a top school go on to pass interviews at a 30% higher rate than programmers who don’t have these credentials (for a given level of performance on our credential-blind screen). I don’t like this. It’s not meritocratic and it sucks, but if you have these credentials, it’s in your interest to make sure that your interviewers know this. You can’t trust that they’ll read your resume.

9. Line up offers

If you’ve ever read fund-raising advice for founders, you’ll know that getting the first VC to make an investment offer is the hardest part. Once you have one offer, more come pouring in. The same is true of job offers. If you already have an offer, be sure to mention this in interviews. Mentioning other offers heavily biases the interviewer in your favor.

This brings up the strategy of making a list of the companies you’re interested in, and setting up interviews in reverse order of interest. Doing well earlier in the process will increase your probability of getting an offer from your number one choice. You should do this.


Passing interviews is a skill. Being a great programmer helps, but it’s only part of the picture. Everyone fails some of their interviews, and preparing properly can help everyone pass more. Enthusiasm is paramount, and research helps with this. As many programmers fail for lacking enthusiasm as fail for technical reasons. Interviewers help candidates during interviews, and if you follow a good process and communicate clearly, they will help you. Practice always helps. Reading lots of interview questions and inuring yourself to interview stress will lead to more offers.

This situation is not ideal. Preparing for interviews is work, and forcing programmers to learn skills other than building great software wastes everyone’s time. Companies should improve their interview processes to be less biased by academic CS, memorized facts, and rehearsed interview processes. This is what we’re doing at Triplebyte. We help programmers get jobs without looking at resumes. We let programmers pick one of several areas in which to be evaluated, and we study and improve our process over time. We’d love to help you get a job at a startup, without jumping through these hoops. You can get started here. But the status quo is what it is. Until this changes, programmers should know how to prepare.

Thanks to Jared Friedman, Emmett Shear, Garry Tan, Alexis Ohanian and Daniel Gackle for reading drafts of this.

Footnote [1]: This is not to say that interview performance does not correlate with programming skill. It does. But the correlation is far weaker than most companies assume, and factors other than programming skill explain a large part of interview variance.

Ammon Bartram
tag:blog.triplebyte.com,2013:Post/1007915 2016-03-05T21:58:23Z 2017-02-15T00:28:06Z Fixing the Inequity of Startup Equity

tl;dr Short stock option exercise windows suck. They force startup employees to make hard decisions, and often rob them of fairly earned compensation. We’ve created docs that companies can use to give their employees 10 years to exercise their options. YC will recommend all their startups use these documents going forward. We’re advising Triplebyte candidates to favor companies making this change, and we’ve already convinced 12 companies to pledge to do this.

Stock options are valuable compensation for startup employees. The high potential upside of these options motivates employees to turn down larger salaries at bigger companies and work at startups. It seems obvious, then, to expect that employees should own their vested options outright, even if they leave the company. Stock options are compensation for work that’s already been done. Returning them to the company when you leave would be inequitable.

Unfortunately, this is exactly what often happens. The industry standard stock option agreement gives employees 90 days after leaving a company to exercise their vested options, or they are returned to the company. Many employees don’t have the money to exercise their options within such a short window and lose them. To fix this we’re announcing three things:

  1. We’ve worked with the team at Ironclad (YC S15) and Nancy Chen at Orrick to create the Triplebyte extended window stock option plan. This is standardized paperwork any company can use to give their employees 10 years from grant date to exercise their options. You can download these directly here or use Ironclad to set up and manage them here.
  2. We’ve been encouraging startups to make this change and are publishing a list of YC companies who offer an extended exercise window here. Currently 14 have already implemented an extended window option plan, and 9 have pledged to do so using our plan. More companies are in the process of deciding and we’ll be updating the list as they do. We’ll be encouraging Triplebyte candidates to weigh this heavily when choosing companies.
  3. Y Combinator has agreed to recommend that its companies use the Triplebyte extended window option plan documents when they form an option plan, beginning with the current Winter 2016 batch.

If you’ve been an employee at a startup, you’ll know this issue is important and causes stress in two ways.

(1) To exercise the options, you need enough money to cover both the exercise price and the taxes you now owe. In the case where the company has performed well and the valuation has increased (i.e. when people care most about their options), this will be more money than you can afford, unless you’re already wealthy.

(2) How do you know if now is the right time to exercise the options? The company may still be years away from a liquidity event, with some uncertainty remaining over its future outcome. Investors in the company are diversified and can absorb this uncertainty. You can only work for one company at a time, with all your eggs likely in this one basket. The stock is also likely illiquid right now so you can’t sell some to recoup your cost of exercising.

So now you’re left with three choices. Give up your options, stay at the company longer or scramble to find a financial solution quickly so you can afford to exercise the options. As Sam Altman wrote, this is an unfair situation and needs to be fixed.

At Triplebyte, we spend a lot of time talking to engineers who are thinking through startup job offers. They’re becoming increasingly savvy about how stock options work, and ask thoughtful questions about the mechanics of options. We also see an outsider’s perspective on this, as we talk to many engineers from non-traditional backgrounds who are based outside of the Valley. Invariably when we get into discussing the details of options, they are surprised by how quickly they’d be forced to exercise their options if they left a company. We agree with them. It’s not a fair situation to put someone in. People should stay at a company because they want to, not because they feel locked in by fear of loss.

There is a growing trend to fix this inequity by increasing the post-termination option exercise window for employees. This “option extension” gives you more time to exercise your options, which increases the likelihood there will be a liquidity event to help you pay the exercise price and the tax triggered upon exercise. Quora, Palantir, Pinterest, Asana and Coinbase have all increased the post-termination option exercise window for their employees. This is the future, and we’re going to accelerate this trend to make it the industry standard to give employees 10 years from receiving their options to decide whether to exercise.

As a company, if you haven’t already implemented a stock option plan, adopting the increased exercise window is simple. You can just use our documents.

If you have an existing option plan in place, amending it requires thought and analysis of the tradeoffs. We’ve created a summary of the gory details on both the business and legal aspects here. Our goal is for founders to use this to have an informed discussion with their counsel and make a decision. What we want to make clear is that it is possible to do this for your existing employees by amending their outstanding options and adding a longer exercise window to them.

Much has been written about this issue in the past but not enough has changed. Most companies continue issuing options with a 90 day window. Employees are often either not sophisticated enough to ask about this issue, or are reluctant to ask a company to incur the expense of paying lawyers to draft new and complicated paperwork.

We’re applying the three forces we believe will make a real change: (1) standardized paperwork accessible to all, (2) public recognition for companies who have made the change, and (3) education for employees about the issue. We expect increasing the exercise window to become a necessary condition for startups who want to hire the best people, which is ultimately what their success depends on.

Thanks to Sam Altman, Carolynn Levy, Jonathan Levy, Jason Boehming and Nancy Chen for reading drafts of this post.

tag:blog.triplebyte.com,2013:Post/1007912 2016-03-05T21:55:25Z 2017-02-27T22:38:41Z Extending Stock Option Exercise Window Guide

We wrote here why we believe giving employees 10 years (i.e. the full term of their options) to exercise their stock options is the future of startup employee compensation.

If you haven’t already approved a stock option plan and would like to set this up, you can just use our documents to set up your plan, or Ironclad to both set up and manage them.

If you have an existing option plan in place, amending it requires thought and analysis of the tradeoffs. We encourage every company to think about this deeply and make an informed decision. What we want to make clear is that it is possible to amend outstanding options held by existing employees and add a longer post-termination exercise window to them.

There has been a lot of discussion on this topic, with arguments made both for and against implementing the extension. When talking with founders who agreed with making the change in principle, but were deterred by speaking with lawyers, we found there to be a lot of misinformation about the key issues in circulation.

Here is a summary of the issues, both business and legal, we most commonly hear brought up by founders thinking through this. Our goal here is to provide the information needed to raise this topic for internal discussion with your team and make a decision.

Business Issues

1) This matters only to the “wrong” kind of employee

Issue: The exercise window matters most to the mercenaries who want to move onto the hot new company each year. The company will lose an important retention mechanism that comes with options.

Thoughts: Having people stay at your company because they feel locked in isn’t good for employee morale. Companies want motivated people working for them, and should find positive ways to incentivize employees to stay. Companies can also implement the option extension in a way that encourages retention, by requiring employees to remain in employment for, say, 2 years before becoming eligible for the extension. This is how Coinbase and Pinterest implemented it.

2) This is worse for employees from a tax perspective

Issue: There are two types of stock options, incentive stock options (ISOs) and nonstatutory stock options (NSOs). Only NSOs can have this extended exercise window, but they also have less favorable tax treatment than ISOs.

Thoughts: It’s true that ISOs have better tax treatment than NSOs but the difference is not as great as commonly thought because of AMT (Alternative Minimum Tax). To understand this, let’s compare how the tax treatments differ at the two most important events, exercising your options and selling your stock.

1. Employee exercises stock option

ISO: Employee now owes AMT (Alternative Minimum Tax) on the difference between the amount they paid to exercise their options (the exercise price) and the fair market value of that stock today. Calculating AMT exactly can be tricky; most likely you’ll pay 28% on the difference.

NSO: Employee owes Ordinary Income Tax (38%) on the difference between the exercise price and fair market value of the stock.

2. Employee sells stock

ISO: Employee owes capital gains tax on the difference between the sale price and the fair market value of the stock at exercise. If the sale happens within a year of exercise, you’ll owe short-term capital gains (38%). If it’s been over a year since exercise and two years since the options were granted, you’ll owe long-term capital gains (20%).

NSO: The capital gains treatment is the same as above.

You can see the tax difference between ISOs and NSOs matters most at the time of exercise. What’s important is that companies don’t have to decide whether employees get the flexibility of an NSO or the preferential tax treatment of an ISO; employees can make this decision themselves. If an employee exercises their options within 90 days of leaving the company, they’ll still get ISO tax treatment. Otherwise they’ll get NSO tax treatment. Each employee can decide whether to trade better tax treatment for the flexibility of having more time to decide whether to exercise.
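To make the comparison concrete, here is a minimal sketch of the two tax paths using the approximate rates quoted above (28% AMT, 38% ordinary income and short-term gains, 20% long-term gains). The share prices and flat rates are illustrative assumptions only; actual liability depends on an employee's full tax situation.

```python
# Illustrative ISO vs. NSO tax comparison, using the approximate
# rates from this post. Not tax advice; real rates vary by person.

def tax_at_exercise(fmv_at_exercise, exercise_price, option_type):
    """Tax owed when exercising, on the spread between FMV and strike."""
    spread = fmv_at_exercise - exercise_price
    rate = 0.28 if option_type == "ISO" else 0.38  # AMT vs. ordinary income
    return spread * rate

def tax_at_sale(sale_price, fmv_at_exercise, long_term):
    """Capital gains tax owed at sale; same treatment for ISOs and NSOs."""
    gain = sale_price - fmv_at_exercise
    rate = 0.20 if long_term else 0.38  # long- vs. short-term gains
    return gain * rate

# Hypothetical example: exercise at $1.00/share when FMV is $5.00,
# then sell at $20.00 after the long-term holding periods are met.
for opt in ("ISO", "NSO"):
    print(opt,
          round(tax_at_exercise(5.0, 1.0, opt), 2),
          round(tax_at_sale(20.0, 5.0, long_term=True), 2))
```

As the post notes, the two paths differ only in the rate applied at exercise; the capital gains leg is identical.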

3) Cap table management

Issue: Increasing the post-termination exercise window means having shareholders who haven’t been actively involved with the company for years. This equity could have been “re-invested” to incentivize new or current employees. This also makes cap table management difficult, putting a burden on the company to keep updated contact information for them all.

Thoughts: Stock options are a reward for work that has already been done. Once vested, employees deserve to own their options outright and have the opportunity to exercise the rest of their vested options. The additional administrative overhead is small in comparison to protecting this right.

4) Dual class of employees

Issue: Extending the exercise window is not possible for existing employees and will only be applicable to new hires. This creates two classes of employees, penalizing the loyal long-time employees whose options did not originally have the option extension.

Thoughts: This is simply not true. It is possible to amend outstanding options held by existing employees to add an option extension. This amendment may convert an incentive stock option (ISO) into a nonstatutory stock option (NSO), and the company may have to comply with the tender offer rules to offer employees this option extension. However, this is completely feasible, and many companies have complied with the tender offer rules to implement this.

5) Better handled on a case by case basis

Issue: This issue is better handled on a case by case basis with each individual employee. Companies should have a personal conversation and find the best solution for each person, rather than offering this to everyone by default.

Thoughts: Option extensions require board and optionee consent. Handling option extensions on a case by case basis is administratively burdensome, because the company has to remember to seek board approval of an option extension each time someone leaves. In addition, a case by case analysis may make the company vulnerable to a claim of discrimination if employees and their options are not treated in the same manner. Finally, if there are multiple case by case option extensions, there comes a point when the tender offer rules will get triggered anyway, because of the number of option extensions that have been offered. This isn’t something a company wants sprung upon them without time to prepare for it.

Legal and Accounting Issues

1) Tender Offer

Giving current employees the decision whether to keep their ISOs without the option extension or have NSOs with the extension is an investment decision, which may require the company to engage in a tender offer when offering the option extension. This tender offer process is relatively easy to implement, but the company has to give its employees at least 20 business days to think it over and decide.

2) Accounting Charge

An option extension will likely result in a higher accounting charge with respect to the options, although many companies have not found this additional charge to be material. The company needs to check with its outside auditors on the accounting implications of an option extension, which will vary depending on the number of options affected by the option extension and the length of the option extension.

3) Taxes

With an option extension, a company can anticipate that most options will be NSOs when exercised. If NSOs are exercised when there is a gain on the exercise date, the company will incur an additional tax cost, because the company has to pay the employer portion of the employment tax on such gain. At the same time, the company gets a deduction equal to the gain recognized on the exercise of an NSO, so the company has to balance the cost of the employment tax against the benefit of a tax deduction. There is no comparable tax cost with the exercise of an ISO.

4) Acquisitions

If the company is acquired, a buyer may occasionally require that the company track down all former employee optionees to obtain their consent to the option treatment as a closing condition. However, the form of the Triplebyte stock plan is drafted to give maximum flexibility in the treatment of options in an acquisition, so optionee consent shouldn’t be required in most acquisitions.

5) IPO

Due to the option extension, there may be more options outstanding at an IPO, which may result in a larger overhang (i.e. a larger amount of outstanding equity). However, it’s possible that the company will have a similar overhang without the option extension, if most optionees exercise their options and become shareholders.

If you have any questions about these, please get in touch. More companies are starting to follow the trend set by Pinterest, Quora, Coinbase, Amplitude and others and we’d like to see this become the standard.

Other Resources

A collection of thoughts on this issue that we’d recommend reading:

Sam Altman: Employee Equity

Adam D’Angelo Quora Post

George Grellas (Founder of Grellas Shah LLP) Hacker News Post

Extending the Option Exercise Period — A Tactical Guide

Thanks to Nancy Chen at Orrick for providing the legal details in this post.

tag:blog.triplebyte.com,2013:Post/1007943 2015-12-09T20:00:00Z 2017-06-08T00:06:41Z Who Y Combinator Companies Want

If you’re a programmer interested in joining a YC startup, apply to Triplebyte and we’ll match you with the ones you’d be the best fit for.

Companies disagree significantly about the types of programmers they want to hire. After 6 months doing technical interviews and sending the best engineers to Y Combinator companies (and interviewing the founders and CTOs at the top 25), we’ve analyzed our data. There are broad trends, but also a lot of unpredictability. Key takeaways include:

1. The types of programmers that each company looks for often have little to do with what the company needs or does. Rather, they reflect company culture and the backgrounds of the founders. It’s nearly impossible to judge these preferences from the outside. At most companies, however, non-technical recruiters reject 50% of applicants by pattern matching against these preferences. This is a huge frustration for everyone involved.

2. Across the companies we work with there are several notable trends. First, companies are more interested in engineers who are motivated by building a great product, and less interested in engineers with purely technical interests. This is at odds with the way the majority of programmers talk about their motivations; there’s a glut of programmer interest in Machine Learning and AI. Second, companies dislike programmers with enterprise backgrounds. Our data shows that companies are less likely to hire programmers coming from Java or C# backgrounds.

3. These results show extrapolation from insufficient data on the part of many companies. Talent can be found among programmers of all backgrounds. We’re mapping the preferences across all YC companies in more detail, and encouraging companies to consider people they would normally reject. In the meantime, programmers looking for jobs with YC companies may want to focus more on product and be sure to mention experience outside of Java and C#.

The problem

My co-founders and I have been running a recruiting company (Triplebyte) for the last 6 months. We interview programmers, and help the best ones get jobs at YC companies. We do our interviews without looking at resumes (in order to find great people who look bad on paper), and then see feedback on each engineer from multiple companies. This gives us a unique perspective on who YC companies want to hire.

When we started, we imagined a linear talent scale. We thought that most companies would be competing for the same (top 5%) of applicants, and all we had to do was measure this. One of the first people to pass our process really impressed us. He was a superb, intelligent programmer. He solved hard algorithm problems like they were nothing, and understood JavaScript deeply. We introduced him to a company he was excited about, and sat back to watch him get a job. We were startled when he failed his first interview. The company told us they valued process more than raw ability, and he’d not written tests during the interview. He went on to get a bunch of offers from other companies, and one founder told us he was among the best programmers they had ever interviewed.

This lack of agreement is the rule, not the exception. Almost no one passes all their programming interviews. This is true because of randomness in many interview processes (even great people are bad at some things, and an interviewer focusing on this can yield a blocking no), and also because companies look for very different skills. The company that rejected our first candidate ranked testing in the interview above algorithmic ability and JavaScript knowledge.

Mapping these preferences, it was clear, was key to helping engineers find the right startups. If we could route programmers to companies where their skills were valued, everyone would win. To that end, we’ve spent the last two months doing detailed interviews with CTOs and lead recruiters at the top 25 Y Combinator companies. In this blog post I’m going to write about what we learned from talking to these companies and sending them engineers. It’s interesting, and I hope useful for people applying for programming jobs.


To map the preferences of the top YC companies, we wrote paragraphs describing 9 hypothetical programmers, embodying patterns we’d seen from running 1000+ interviews over the last 6 months. These range from the “Product Programmer” who is more excited about designing a product and talking to users than solving technical challenges (we internally call this the Steve Jobs programmer) to the “Trial and Error Programmer” who programs quickly and is very productive, but takes an ad hoc approach to design. In reality, these profiles are not mutually exclusive (one person can have traits of several).

We then set up meetings with the founders and lead recruiters at the top 25 YC Companies. In the meetings we asked each company to rank the 9 profiles in terms of how excited they were to talk to people with those characteristics.


The grid that follows shows the results[1]. Each row shows the preferences of a single (anonymized) company. Each column is a hypothetical profile. Green squares mean the company wants to interview engineers matching the profile; red means it does not. Empty squares are cases where the founders’ opinions were too nuanced to be rounded to interest or lack of interest.

The first thing that jumps out is the lack of agreement. Indeed, no single company is interested (or uninterested) in all 9 profiles. And no profile was liked (or disliked) by more than 80% of companies. The inter-rater reliability of this data (a measure of the agreement of a group of raters) comes out at 0.09[2]. This is fairly close to 0: company preferences are close to unpredictable.

The impact of these preferences on programmers, however, is totally predictable. They fail interviews for opaque reasons. Most companies reject a high percentage of applicants during a recruiter call (or resume screen). Across the 25 companies we interviewed, an average of 47% of applicants were rejected in this way (the rate at individual companies went as high as 80%, and as low as 0%). The recruiters doing this rejecting are non-technical. All they can do is reject candidates who don’t match the profile they’ve been taught to look for. We’ve seen this again and again when we intro candidates to companies. Some companies don’t want to talk to Java programmers. Others don’t want academics. Still others only want people conversant in academic CS. We’ve seen that most engineers only have the stomach for a limited number of interviews. Investing time in the wrong companies carries a high opportunity cost.

I don’t want to be too hard on recruiters. Hiring and interviewing are hard, shortcuts must be taken to keep the team sane, and there are legitimate reasons for a company to enforce a specific engineering culture. But from the point of view of programmers applying for jobs, these company preferences are mercurial. Companies don’t advertise their preferences. People who don’t match simply apply, and are rejected (or often never hear back).


There is some agreement among companies, however, and it’s interesting.

1. There’s more demand for product-focused programmers than there is for programmers focused on hard technical problems. The “Product Programmer” and “Technical Programmer” profiles are identical, except one is motivated by product design, and the other by solving hard programming problems. There is almost twice as much demand for the product programmer among our companies. And the “Academic Programmer” (hard-problem focused, but without the experience) has half again the demand. This is consistent with what we’ve seen introducing engineers to companies. Two large YC companies (both with machine learning teams) have told us that they consider interest in ML a negative signal. It’s noteworthy that this is almost entirely at odds with the motivations that programmers express to us. We see ten times more engineers interested in Machine Learning and AI than we see interested in user testing or UX.

2. (Almost) everyone dislikes enterprise programmers. We don’t agree with this. We’ve seen a bunch of great Java programmers. But it’s what our data shows. The Enterprise Java profile is surpassed in dislikes only by the Academic Programmer. This is in spite of the fact that we explicitly say the Enterprise Programmer is smart and good at their job. In our candidate interview data, this carries over to language choice. Programmers who used Java or C# (when interviewing with us) go on to pass interviews with companies at half the rate of programmers who use Ruby or JavaScript. (The C# pass rate is actually much lower than the Java pass rate, but the C# numbers are not yet significant by themselves.) Tangential facts: programmers who use Vim with us pass interviews with companies at a higher rate than programmers who use Emacs, and programmers on Windows pass at a lower rate than programmers on OS X or Linux.

3. Experience matters massively. Notice that the Rusty Experienced Programmer beats both of the junior programmer profiles, in spite of stronger positive language in the junior profiles. It makes sense that there’s more demand for experienced programmers, but the scale of the difference surprised me. One prominent YC company just does not hire recent college grads. And those that do set a higher bar. Among our first group of applicants, experienced people passed company interviews at a rate 8 times higher than junior people. We’ve since improved that, I’ll note. But experience continues to trump most other factors. Recent college grads who have completed at least one internship pass interviews with companies at twice the rate of college grads who have not done internships (if you’re in university now, definitely do an internship). Experience at a particular set of respected companies carries the most weight. Engineers who have worked at Google, Apple, Facebook, Amazon or Microsoft pass interviews at a 30% higher rate than candidates who have not.


If you’re looking for a job as a programmer, you should pay attention to these results. Product focused programmers pass more interviews. Correlation is not causation, of course. But company recruiter decisions are driven largely by pattern matching, so there is a strong argument that making yourself look like candidates who companies want will increase your pass rate. You may want to focus more on product when talking to companies (and perhaps focus on companies where you are interested in the product). This is a way to stand out. Similarly, if you’re a C# or Java programmer applying to a startup, it may behoove you to use another language in the interview (or at least talk about other languages and platforms with your interviewer). Interestingly, we did talk to two YC companies that love enterprise programmers. Both were companies with founders who have this background themselves. Reading bios of founders and applying to companies where the CTO shares your background is probably an effective job-search strategy (or you could apply through Triplebyte).

If you run a startup and are struggling to hire, you should pay attention to these results too. Our data clearly shows startups missing strong candidates because of preconceptions about what a good programmer looks like. I think the problem is often extrapolation from limited data. One company we talked to hired two great programmers from PhD programs early on, and now loves academics. Another company had a bad PhD hire, and is now biased against that degree. In most cases, programming skill is orthogonal to everything else. Some companies have legitimate reasons to limit who they hire, but I challenge all founders and hiring managers to ask themselves if they are really in that group. And if you’re hiring, I suggest you try to hire from undervalued profiles. There are great PhDs and enterprise C# programmers interested in startups. Show them some love!


YC Startups disagree strikingly about who’s a good engineer. Each company brings a complex mix of domain requirements, biases, and recruiter preferences. Some of these factors make a lot of sense, others less so. But all of them are frustrating for candidates, who have no way to tell what companies want. They waste everyone’s time.

I’m excited about mapping this. Since we started matching candidates based on company preferences (as well as candidate preferences), we’ve seen a significant increase in interview pass rates. And we only just completed the interviews analyzed in the post. I’m excited to see what this data does. Our planned next step is to not only interview founders and recruiters at these companies, but also have the engineers who do the bulk of the actual interviewing provide the same data.

Our goal at Triplebyte is to build a better interview process. We want to help programmers poorly served by standard hiring practices. We’d love to have you apply, even if — or especially if — you come from one of the undervalued groups of programmers mentioned in this article. We’d also love to get your thoughts on this post. Send us an email at founders@triplebyte.com.

Thanks to Jared Friedman, Emmett Shear, Daniel Gackle, Greg Brockman and Michael Seibel for reading drafts of this.


1. Astute readers will notice that there are more than 25 rows in the graph. This is because we’ve recently added these questions to our onboarding flow for new companies we work with. If you run a YC Company, you can log into Triplebyte with your company email address, and add this data (we’ll use it to send you more candidates).

2. I calculated this using Fleiss’ kappa. This measures the agreement between a number of raters, with -1 being perfect disagreement, 0 being the agreement that would result from random coin tosses, and 1 being perfect agreement.
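For readers who want to reproduce the statistic, here is a minimal sketch of Fleiss’ kappa. It assumes every subject (profile) is rated by the same number of raters (companies); the empty squares in our grid mean the real calculation had to handle missing ratings, which this sketch does not.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a table of category counts.

    ratings: one row per subject; each row holds the number of raters
    who assigned that subject to each category. All rows must sum to
    the same number of raters.
    """
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    n_categories = len(ratings[0])

    # Proportion of all assignments falling in each category.
    totals = [sum(row[j] for row in ratings) for j in range(n_categories)]
    p_j = [t / (n_subjects * n_raters) for t in totals]

    # Observed per-subject agreement, averaged over subjects.
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ]
    p_bar = sum(p_i) / n_subjects

    # Agreement expected by chance.
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Perfect agreement: 3 raters all pick the same category per subject.
print(fleiss_kappa([[3, 0], [0, 3]]))  # 1.0
```

With our company-by-profile grid collapsed to interested/not-interested counts, this computation gives the 0.09 figure cited above.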

Ammon Bartram
tag:blog.triplebyte.com,2013:Post/1007940 2015-12-08T20:00:00Z 2017-05-24T07:16:51Z A Taxonomy of Programmers

We’ve been interviewing hundreds of programmers and matching them with YC startups. To help intelligently match programmers with companies, we’ve created a number of hypothetical programmer descriptions. These profiles are drawn from patterns we’ve seen in 1000+ technical interviews over the last 6 months. We’ve had success using these profiles to match engineers with companies. If you have any suggestions for additional profiles, we’d love to hear about them in the comments.

Academic Programmer: Candidate has spent most of their career in academia, programming as part of their Master’s/PhD research. They have very high raw intellect and can use it to solve hard programming problems, but their code is idiosyncratic.

Experienced Rusty Programmer: Candidate has a lot of experience, and can talk in depth about different technology stacks and databases, explaining their positives and negatives in fine detail. When programming during an interview, they’re a little rusty. They usually get to the right place but it takes a while.

Trial and Error Programmer: Candidate writes code quickly and cleanly. Their approach seems to involve a lot of trial and error, however. They dive straight into programming problems, and while their approach is a little ad hoc, their speed lets them ultimately solve problems productively.

Strong Junior Programmer: Candidate is fresh out of college, with some internships and less than a year of full-time work experience. They really impress during a technical interview, have numerous side projects and impressive knowledge of computer science and programming in general. They’re well above average compared to other junior programmers.

Child Prodigy Programmer: Candidate is very young (e.g. 19 years old) and decided to go straight into work, skipping college. They’ve been programming since a very young age and are very impressive in their ability to solve hard technical problems. They’ve also been prolific with side projects and are mature for their age. It’s likely they’ll found a company when they’re older.

Product Programmer: Candidate performs well on technical interviews and will have the respect of other engineers. They’re not motivated by solving technical problems, however. They want to think about the product, talk to customers and have input into how product decisions are made.

Technical Programmer: Candidate is the inverse of the Product Programmer. They interview well and communicate clearly. But they aren’t motivated to think about the user experience or product decisions. They want to sink their teeth into hard technical problems.

Practical Programmer: Candidate solves practical programming problems with ease, even very abstract ones. They aren’t comfortable with computer science terminology, though (e.g. data structures, algorithms), and don’t have a deep understanding of how computers work. They are strongest with Ruby/Python/JavaScript, less so with lower-level languages like C.

Enterprise Programmer: Candidate is strong in academic computer science (algorithms, data structures, complexity analysis), has experience, and solves technical problems well. Their working experience is with large enterprise companies (e.g. Dell/Oracle/IBM). They want to join the startup, although they don’t have experience taking ownership of projects. They program mostly in Java using an IDE such as Eclipse.

Note: If you run a YC Company, you can log into Triplebyte with your company email address, and add your preferences (we’ll use it to send you more candidates).

Ammon Bartram
tag:blog.triplebyte.com,2013:Post/935685 2015-11-18T18:43:34Z 2016-03-28T06:09:38Z Gaming the H-1B system (for good)

A recent article in the NY Times exposed how flawed the H-1B lottery process is. A handful of giant outsourcing companies flood the system with applications, making it nearly impossible for startups to hire international engineers.

These companies are gaming the system. But there is a way to turn this game against them, by exploiting the Achilles’ heel in their plan: the H-1B transfer. Getting an H-1B is tough because regardless of your personal merits, you’re in a lottery with thousands of other candidates, and your choice of employer is limited to those willing to play the lottery. There’s no lottery for transferring an H-1B, though. The process is straightforward, with no quota; you just have to find an employer willing to file the paperwork. This gave us an idea.

We're announcing the Triplebyte H-1B transfer program. If you're working on an H-1B at one of these outsourcing companies, apply to Triplebyte and we'll cover all the costs of transferring your H-1B. We'll help you find a startup doing work you're excited about and walk them through the H-1B transfer process, making it a no-brainer for them. We'll also provide you with an immigration lawyer to answer any questions you have, and we'll cover the cost of that too.

We're going to expand the pool of startups doing H-1B transfers so you have the same choice as anyone else. We recently placed an engineer via an H-1B transfer at a startup that wouldn't have considered doing this without our help. Many founders mistakenly assume that applying for and transferring an H-1B are the same process.

Helping great people move here is something that's personally important to us. My life was changed by moving out here to work on my first startup (after a year of struggling with various approaches to getting a visa). My co-founder Guillaume moved here from France to work at Justin.tv and then founded his own startup, Socialcam. We want to see more talented people coming here to work on building the future, not being cheap labor for giant corporations.

Thanks to Theo Negri and Buildzoom for shining a light on this issue in the original story.
tag:blog.triplebyte.com,2013:Post/887699 2015-07-29T18:26:03Z 2017-04-12T11:02:40Z Take-home interviews Today we're announcing our second experiment, take-home projects. We're going to try a new way of assessing programming ability by having programmers work on a project on their own time instead of coding during an interview. We know there are benefits and drawbacks to this approach, I'll go into more detail into our thinking behind this below.

Anyone who passes our take-home project assessment will get exactly the same service from us as people who do the regular interviews. We'll work hard to find several YC startups they'd be a great fit for, fast track them through the hiring processes, and handle all logistics of flights/accommodations/scheduling.

The Problem

Several weeks ago, we interviewed a recent college grad. He'd done well on our quiz, had great personal projects, and I was excited to talk to him. As soon as the interview started, however, I could tell that something was wrong. I gave him a programming problem, but he could not get started. He'd start to write one thing, mutter that it was a bad place to start, and go back to something else. He switched languages. His breathing accelerated. He started to shake.

Programming interviews are stressful. Fundamentally, the applicant is being judged. They have to understand the question and produce a working solution in limited time, all while explaining everything they are doing, with no time to stop and gather their thoughts. At its worst it's adversarial.

Some programmers find that this stress pushes them to do their best in interviews. Others find it debilitating. There are programmers with track records of solving hard problems who simply freeze when subjected to the stress of an interview. They babble. They become unable to program.

This does not mean that they are bad programmers[1]. I gave the fellow in our interview a much harder problem to do on his own time. I assumed that he'd never get back to us. The project was a lot of work. Three days later, however, I had a complete solution in my inbox. We got him back on the phone, and he was able to talk in depth about what he had done, about the underlying algorithms, and about the design trade-offs he'd made. The code was clean. He was clearly a skilled programmer.

The Solution

To solve the problem of interview anxiety, we're adding a second track to our interview process at Triplebyte. Applicants, if they choose, will be able to go through our process by completing programming projects on their own time. They'll still do interviews with us, but rather than doing interview problems, they will just talk about the project they already completed. Those who do well will be matched with Y Combinator companies, just like programmers who go through our regular interview.

The project-based track will require a larger time commitment (and we expect lots of people to stick with the standard track for this reason). However, completing a larger project is almost certainly a better measure of actual ability to do a job than a traditional interview is.

Here's how our process works:
  1. When a candidate books a 45-minute interview, they can indicate that they want to do a project.
  2. Three days before the interview, we'll send them a list of projects, and they'll pick one and start to work on it. We expect them to spend about 3 hours on the project (or as long as they want to spend to show us that they're a good programmer).
  3. During the interview, we'll talk about what they've programmed, go over design choices and give feedback.
People who pass the 45-minute interview will go through the same process in the 2-hour final interview. Rather than pick a new project, however, they'll take the same project further, incorporating feedback from the first interview. Those who pass the 2-hour interview will talk to Harj, get introduced to YC companies, and start new jobs!

I'm particularly excited about being able to see iterative improvement to the project between the two interviews (an important part of doing an actual job). It's an experiment, and I have no idea how it will turn out, but giving people the option to do larger projects and avoid stressful interviews just seems like a good idea. In a few months, after we've done a meaningful number of these interviews, I'll write about how their results compare to our other interviews.

1. The stress of interviewing seems to be different than the stress of performing a job. None of the people we've spoken to who do poorly in interviews report problems performing under deadlines at work, or when a website is down and there's pressure to get it back up.

Ammon Bartram
tag:blog.triplebyte.com,2013:Post/872332 2015-06-23T18:41:04Z 2016-10-12T23:46:37Z Three hundred programming interviews in thirty days

We launched Triplebyte one month ago, with the goal of improving the way programmers are hired. Too many companies run interviews the way they always have, with resumes, white boards and gut calls. We described our initial ideas about how to do better than this in our manifesto. Well, a little over a month has now passed. In the last 30 days, we've done 300 interviews. We've started to put our ideas into practice, to see what works and what doesn't, and to iterate on our process. In this post, I'm going to talk about what we've learned from the first 300 interviews.

I go into a lot of detail in this post. The key findings are:
  1. Performance on our online programming quiz is a strong predictor of programming interview success
  2. Fizzbuzz-style coding problems are less predictive of the ability to do well in a programming interview
  3. Interviews where candidates talk about a past programming project are also not very predictive


Our process has four steps:
  1. Online technical screen.
  2. 15-minute phone call discussing a technical project.
  3. 45-minute screen share interview where the candidate writes code.
  4. 2-hour screen share where they do a larger coding project.
Candidates work on their own computers, using their own dev environments and strongest languages. In both of the longer interviews, they pick the problem or project to work on from a short list. We're looking to find strengths, so the idea is that most candidates should be able to pick something they're comfortable with. We keep the list of options short, however, to help standardize evaluation. We want to have a lot of data on each problem.

We're looking for programming process and understanding, not leaps of insight. We do this by offering help with the design/algorithm of each problem (and not penalizing candidates for it). We evaluate interviews with a score card. For now we go a little overboard, tracking the time to reach a number of milestones in each problem. We also score understanding, whether they speak specifically or generally, whether they seem nervous, and a bunch of other things (basically everything we can think of). Most of these, no doubt, are horrible measures of performance. We record them now so that we can figure out which are good measures later.
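The score card above can be sketched as a small data structure. This is purely illustrative: the field names, factors, and 1-5 scale are my assumptions, not Triplebyte's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class InterviewScorecard:
    """Hypothetical per-interview score card: milestone timings plus
    subjective factor scores, recorded for later analysis."""
    candidate_id: str
    # minutes elapsed when the candidate hit each milestone
    milestone_times: dict = field(default_factory=dict)
    # subjective factors, each scored 1-5
    factors: dict = field(default_factory=dict)

    def record_milestone(self, name: str, minutes: float) -> None:
        self.milestone_times[name] = minutes

    def average_factor_score(self) -> float:
        return sum(self.factors.values()) / len(self.factors)

card = InterviewScorecard("anon-042")
card.record_milestone("working_solution", 25.0)
card.factors = {"understanding": 4, "speaks_specifically": 3, "nervousness": 2}
print(round(card.average_factor_score(), 2))  # → 3.0
```

Recording everything up front, even measures that turn out to be noise, is what makes the later "which of these actually predict performance" analysis possible.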


The first experiment we ran was screening people without looking at resumes. Most job applicants are rejected at the screening stage. The sad truth is that a high percentage of the people applying for any job post on the Internet are bad. To protect the time of their interviewers, companies need a way to filter people early, at the mouth of the hiring funnel. Resumes are the traditional way to do this. However, as Aline Lerner has shown, resumes don't work. Good programmers can't be reliably distinguished from bad ones by looking at their resumes. This is a problem. What the industry needs is a way to screen candidates by looking at their actual ability, not where they went to school or worked in the past[1]. To this end, we tested two screening steps:
  1. A fizzbuzz-like programming assignment. Applicants completed two simple problems. We tracked the time to complete each, and manually graded each on correctness and code quality.
  2. An automated quiz. The questions on the quiz were multiple choice, but involved understanding actual code (e.g., look at a function, and select which of several bugs is present).
We then correlated the results of these two steps with success in our subsequent 45-minute technical interview. The following graph shows the correlations after 300 interviews.

Correlation between screening steps and interview decisions

We can see that the quiz is a strong predictor of success in our interviews! Almost a quarter of interview performance (23%) can be explained by the score on the quiz. 15% can be explained by quiz completion time (faster is better). Speed and score are themselves only loosely correlated (being accurate means you're only slightly more likely to be fast). This means they can be combined into what we're calling the composite score, which has the strongest correlation of all and explains 29% of interview performance![2]
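To make the relationship between correlation and "explained variance" concrete, here's a minimal sketch with made-up numbers: a hand-rolled Pearson correlation, squared to get R^2, and a composite built by averaging the standardized quiz score and speed. The averaging formula is my assumption for illustration; the post doesn't specify how the composite is actually computed.

```python
import statistics

def pearson_r(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def zscores(xs):
    # Standardize so score and speed are on a comparable scale.
    m, s = statistics.fmean(xs), statistics.pstdev(xs)
    return [(x - m) / s for x in xs]

# Made-up sample data: quiz score, quiz speed (negated completion time,
# so higher is better), and interview performance for six candidates.
quiz = [60, 72, 55, 90, 81, 68]
speed = [-40, -25, -45, -20, -30, -35]
interview = [2.1, 3.0, 1.8, 3.8, 3.2, 2.5]

# "Explains X% of performance" is the squared correlation, R^2.
r = pearson_r(quiz, interview)
print(round(r ** 2, 2))

# A simple composite: average the standardized predictors, then correlate.
composite = [(a + b) / 2 for a, b in zip(zscores(quiz), zscores(speed))]
print(round(pearson_r(composite, interview) ** 2, 2))
```

Because score and speed are only loosely correlated, each carries some independent signal, which is why a composite can explain more variance than either alone.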

The fizzbuzz-style coding problems, however, did not perform as well. While the confidence intervals are large, the current data shows less correlation with interview results. I was surprised by this. Intuitively, asking people to actually program feels like the better test of ability, especially because our interviews (the measures we're using to evaluate screening effectiveness) are heavily focused on coding. However, the data shows otherwise. The coding problems were also harder for people to finish. We saw twice the drop-off rate on the coding problems as we saw on the quiz.

Talking versus coding

Before launching, we spoke to a number of smart people with experience in technical hiring to collect ideas for our interviews. The one I liked most was having candidates talk us through a technical project, including looking at source code. This seemed like it'd be the least adversarial, most candidate-friendly approach.

As soon as we started doing them, however, I saw a problem. Almost everyone was passing. Our filter was not filtering. We tried extending the duration of the interviews to probe deeper, and looking at code over Google Hangouts. Still, the pass rate remained too high.

The problem was we weren’t getting enough signal from talking about projects to confidently fail people. So we started following up with interviews where we asked people to write code. Suddenly, a significant percentage of the people who had spoken well about impressive-sounding projects failed, in some cases spectacularly, when given relatively simple programming tasks. Conversely, people who spoke about very trivial sounding projects (or communicated so poorly we had little idea what they had worked on) were among the best at actual programming.

In total we did 90 experience interviews, scoring across several factors (did the person seem smart, did they understand their project well, were they confident, and was the project impressive). Then we correlated our factors with performance in the 45 minute programming interview. Confidence had essentially zero correlation. Impressiveness, smartness and understanding each had about a 20% correlation. In other words, experience interviews underperformed our automated quiz in predicting success at coding.

Now, talking about past experience in more depth may be meaningful. This is how (I think) I know which of my friends are great programmers. But, we found, 45 minutes is not enough time to make talking about coding a reasonable analog for actually coding.

Interview duration, and interviewer sentiment

A final test we ran was to look at when during the interview we make decisions. Laszlo Bock, VP of People at Google, has written much about how interviewers often make decisions in the first few minutes of an interview, and spend the rest of the time backing up that decision. I wanted to make sure this was not true for us. To test this, we added a pop-up to our interviewing software, asking us every five minutes during each interview whether the candidate is performing well or poorly. Looking at these sentiments in aggregate, we can tell exactly when during each interview we made the decision.

We found that in 50% of our 45-min interviews, we "decide" (become positive for someone who ends up passing, or negative for someone who does not pass) in the first 20 minutes. In 20%, however, we do not settle on our final sentiment until the last 5 minutes. In the 2-hour interview, the results are similar. We decide 60% in the first 20 minutes (both positively and negatively), but 10% make it almost to the 2-hour mark. (In that case, unfortunately, it's positives turning to negatives, because we can't afford to send people we're unsure about to companies)[3].
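The "when did we decide" analysis can be sketched as follows. Assuming sentiment is recorded as +1/-1 at each five-minute check-in, the decision point is the first sample after the last flip. The encoding and function name are mine for illustration, not the actual interviewing software.

```python
def decision_minute(sentiments, interval=5):
    """Given interviewer sentiment sampled every `interval` minutes
    (+1 positive, -1 negative), return the minute mark after which
    the sentiment never flips again -- when the decision was 'made'."""
    final = sentiments[-1]
    # Walk backwards to find the last sample that differed from the
    # final sentiment; the decision point is the sample after it.
    for i in range(len(sentiments) - 1, -1, -1):
        if sentiments[i] != final:
            return (i + 2) * interval
    return interval  # never wavered: decided at the first check-in

# A 45-minute interview sampled at minutes 5, 10, ..., 45: the
# interviewer flips negative mid-interview, then settles on positive.
samples = [+1, +1, -1, -1, +1, +1, +1, +1, +1]
print(decision_minute(samples))  # → 25
```

Aggregating this value across many interviews is what lets you say "50% of decisions are made in the first 20 minutes, 20% in the last 5."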


It's been a crazy month. Guillaume, Harj and I have spent nearly all our time in interviews. Sometimes, at 10 PM on a Saturday, after a day of interviewing, I wonder why we started this company. But as I write this blog post, I remember. Hiring decisions are important, and too many companies are content to do what they've always done. In our first 30 days, we've come up with a replacement for resume screens, and shown that it works well. We've found that programming experience interviews (used at a bunch of companies) don't work particularly well. And we've written software to help us measure when and why we make decisions.

For now, we're evaluating all of our experiments against our final round interview decisions. This does create some danger of circular reasoning (perhaps we're just carefully describing our own biases). But we have to start somewhere, and basing our evaluations on how people write actual code seems like a good place. The really exciting point comes when we can re-run all this analysis, basing it on actual job performance, rather than interview results. Doing that is why we started this company.

Next, we want to experiment with giving candidates projects to do on their own time (I'm particularly interested in making this an option, to help with interview anxiety), and interviews where candidates are asked to work with an existing codebase. We're also adding harder questions to the quiz, to see if we can improve its effectiveness. We'd love to hear what you think about these ideas. Email us at founders@triplebyte.com.

Thanks to Emmett Shear, Greg Brockman and Robby Walker for reading drafts of this.

An earlier version of this post confused the correlation coefficient R with R^2, and overstated the correlations. Since this post was published, a new version of the quiz has increased the correlation of the composite score to 0.69 (R^2 = 0.47).

1. This is a complex issue. There are good arguments for allowing experienced programmers to skip screening steps, and not have to continually re-prove themselves. At some point, track record should be enough. However, this type of screening can also be done in very bad ways (e.g., only interviewing people who have worked at top companies or come from a few schools). Evaluating experience is something we plan to experiment with, but for now we're focusing on how to directly identify programming ability.

2. It’s worth noting the error bars (showing 95% confidence intervals). The true value for each of the correlations in the graph falls in the range shown with 95% confidence. The error bars are large because our sample is small. However, even comparing the bottom of our confidence interval to Aline Lerner’s results on resume screening (she found a correlation close to 0), shows our quiz is a far better first step in a hiring funnel than resumes are.
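One common way to put a 95% confidence interval on a correlation from a small sample is a percentile bootstrap. This is a generic sketch with made-up data, not necessarily the method used for the graph above (the post doesn't specify one).

```python
import random
import statistics

def pearson_r(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def bootstrap_ci(xs, ys, n_boot=2000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample candidate pairs with replacement
    # and take the middle 95% of the resampled correlations.
    rng = random.Random(seed)
    n = len(xs)
    rs = []
    for _ in range(n_boot):
        sample = [rng.randrange(n) for _ in range(n)]
        rs.append(pearson_r([xs[i] for i in sample],
                            [ys[i] for i in sample]))
    rs.sort()
    return rs[int(n_boot * alpha / 2)], rs[int(n_boot * (1 - alpha / 2)) - 1]

# Made-up quiz scores and interview ratings for twelve candidates.
quiz = [60, 72, 55, 90, 81, 68, 74, 63, 85, 59, 77, 70]
interview = [2.1, 3.0, 1.8, 3.8, 3.2, 2.5, 2.9, 2.2, 3.5, 2.0, 3.1, 2.6]
lo, hi = bootstrap_ci(quiz, interview)
print(round(lo, 2), round(hi, 2))
```

With only a few hundred interviews, intervals computed this way are wide, which is exactly the caveat the footnote is making.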

3. We're not perfect, and we certainly reject great people. I always like to mention this when talking about rejections. We know this (and think it's true of all interview processes). We're trying to get better.

Ammon Bartram
tag:blog.triplebyte.com,2013:Post/852663 2015-05-07T18:32:23Z 2016-03-15T02:06:00Z Improving the technical hiring process

Guillaume, Ammon and I are excited to announce the launch of our new company, Triplebyte. Our goal is to build a consistent and data-driven process for hiring programmers.

Most companies make up their hiring process as they go along. We certainly did that when hiring at our own startups. This has problems. Resumes are relied on heavily as the first screen, but many great programmers have really bad resumes. Technical interviews are typically run by an interviewer who is unsure which questions to ask or how to evaluate answers. Final hiring decisions are based on gut feeling, which is rarely (i.e. never) measured for accuracy.

This is a manifesto of how we believe technical hiring should work. We want to build a company that specializes in assessing the ability of engineers without relying on the prestige of their resume credentials. Once we've identified great engineers, we're going to help them find great places to work. We'll use the latter to measure how well we're doing at the former.

We're going to do two things differently. First, track decisions as quantitatively as possible. Second, run experiments with our own process. We expect it to change completely over time. Frankly, we'd love to get rid of interviews entirely.

We're starting our first experiment today: blind phone screens. First, we ask a few questions to verify you're a programmer. It's our version of an online FizzBuzz. Once you pass those, we ask you to schedule a 15-minute technical phone call. We only want to talk about one thing: code you've written in the past. That's literally the only thing we'll ask you about. Our hypothesis is that's enough to help good programmers stand out. After that, we'll go deeper into code you've written before over a couple of 45-minute technical interviews via screen share.

Humans are complicated and making decisions about their ability is difficult. We're excited about trying because the potential reward is so large. A better hiring process can significantly reduce bias. It'll open up the opportunity for anyone, from anywhere, to be assessed on their ability. It'll help startups find the programmers they need to build great products. We think this would be a great thing for the world and we're excited to build it!

If you have ideas for other ways we could experiment with our process, or if you think there's a better approach than the one we're taking, we'd love to hear from you. founders@triplebyte.com.