HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Good at programming competitions does not equal good on the job

www.catonmat.net · 372 HN points · 5 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention www.catonmat.net's video "Good at programming competitions does not equal good on the job".
Watch on www.catonmat.net
www.catonmat.net Summary
A few days ago I watched How Computers Learn talk by Peter Norvig. In this talk, Peter talked about how Google did machine learning and at one point he mentioned that at Google they also applied machine learning to hiring. He said that one thing that was surprising to him was that being a winner at programming contests...
Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
This is probably just a variant of Berkson's paradox, similar to Google's observation that success in programming competitions is negatively correlated with job performance: http://www.catonmat.net/blog/programming-competitions-work-p...

The mechanism would work this way: salespeople exhibit multiple traits, and they are promoted based on some combination of those. If a salesperson has outstanding other credentials, they might be promoted despite a poor sales percentile. Those other credentials might actually be better predictors of managerial performance. Conversely, many of the top salespeople might have been promoted on the grounds that they were good at sales, without exhibiting any other skills.

Note that there might still be a positive correlation between sales skills and managerial skills, but due to how promotions are selected, you end up observing a negative correlation in the promoted group.

https://en.wikipedia.org/wiki/Berkson%27s_paradox
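
A minimal simulation of the mechanism described above, with every number invented for illustration (statistics.correlation needs Python 3.10+):

    import random
    from statistics import correlation  # Python 3.10+

    random.seed(0)
    N = 100_000
    sales = [random.gauss(0, 1) for _ in range(N)]  # sales skill
    other = [random.gauss(0, 1) for _ in range(N)]  # other credentials
    # The two traits are drawn independently, so their true correlation is ~0.

    # Promote whoever clears a bar on the combined signal: strength on
    # either trait can compensate for weakness on the other.
    promoted = [i for i in range(N) if sales[i] + other[i] > 2.0]

    print(correlation(sales, other))                  # ~ +0.00
    print(correlation([sales[i] for i in promoted],
                      [other[i] for i in promoted]))  # clearly negative

Conditioning on the combined signal clearing a threshold is what induces the negative correlation among the promoted, even though the traits are independent in the full population.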

citrin_ru
If Google knows that success in programming competitions is negatively correlated with job performance, then why does Google organize Code Jam and invite participants with good results to job interviews? Also, as mentioned above, Google interview questions look like problems from programming competitions.
lkrubner
True, and someone who is a great salesperson probably has skills that are optimized for the context of sales. But another aspect of getting promoted is that a person is given oversight of new kinds of activities, where teams work with different cultures and different rules. I've seen some sales managers succeed by being bullies to their sales teams. But I think it is a disaster when someone attempts to bully a tech team. So what works in one context fails in another. I've tried to describe this previously:

----------------------------

Every industry has certain euphemisms for the least savory aspects of its business. In sales, there is the secretly ugly phrase, “goal-oriented.” That sounds pleasant, doesn’t it? If I point at a woman and I say, “That entrepreneur is goal-oriented,” then you probably think I am complimenting her. But if I point at her and say, “That entrepreneur is a lying, manipulative, soulless psychopath who brutally exploits labor from the eleven-year-olds she employs in her sweatshops in Indonesia,” then you probably think I am insulting her, unless you are a libertarian. And yet both statements mean about the same thing: that she is someone who is willing to do whatever is necessary to ensure the success of her business.

When I read about Milburn online, I’d seen testimonials from his colleagues in which he was often described as a goal-oriented salesperson. That probably meant that he was a master of manipulating other people’s emotions. He knew all the tricks: praise, shame, laughter, anger, promises, guilt, threats.

Whether his use of these tools was conscious or unconscious is, of course, unknowable. But it doesn’t matter much. A lifetime as a sales professional left him with an arsenal of psychological ploys that had become second nature to him.

...Milburn truly had a genius for the strategic use of anger. If he sensed the risk of losing control of the conversation, he would indulge in another outburst. If I were to ever switch over to the Dark Side, I would want to study with him. His techniques were fundamentally dishonest and manipulative, but that is probably what made him so good at sales. And his tactics were probably an effective way to drive a sales team, but I sincerely believed that such tactics were the wrong way to run a software development team. Especially when doing something cutting-edge original, like we were doing, I think open and honest communications were extremely important. (I have worked with many companies where the sales team was both friendly and successful. One does not need to use abusive tactics to have success in sales. Indeed, the sales manager who relies on abuse is typically more interested in aggrandizing their own success, rather than the success of the company they work for.)

https://www.amazon.com/Destroy-Tech-Startup-Easy-Steps/dp/09...

geocar
> similar to Google's observation that success in programming competitions is negatively correlated with job performance

I find it very interesting that Google then interviews with programming challenges...

s2g
I'm pretty convinced half of that is ability to handle arbitrary bullshit, and the rest is just an IQ test with a different name.
We can go further than your statement. Success in "competitive" programming correlates negatively with success on the job:

http://www.catonmat.net/blog/programming-competitions-work-p...

govg
That doesn't really say much about their skill. All Norvig says is that people who are good at competitive programming are used to quickly solving problems and moving on, and that probably isn't good when it comes to real-world engineering. I don't think that attests to their quality in writing code, more to the environment they are used to working in. If there actually were a net loss in hiring such candidates, the major companies would stop using that as a filtering criterion to begin with.
Jul 07, 2017 · 372 points, 166 comments · submitted by jpn
fatjokes
I'm also a former ICPC world finalist and I'm willing to believe this is the case at a BigCo because: 1) top ICPC competitors tend to be extremely socially awkward, and may not work well in groups, and 2) the vast majority of engineering work is very algorithmically simple, so these folks don't get a chance to "shine".

However, I've noticed that they are popular at prop trading firms, where work tends to be done in very small teams or individually. I don't know how their performance correlates with fund performance.

If I were hiring, I'd still prefer to hire at least some top ICPC performers. The hard algorithms are rare---but can make or break your product.

I also think the knowledge learned from programming contests is invaluable. I'd like to be able to discuss bipartite matching or min-cut with my colleagues without eliciting a blank look.

YZF
I used to spend some time on TopCoder and got to be fairly good but not at the top "red" level... There is some transfer to improved code quality but I think it's generally not something that made a huge difference to what I do as a developer.

There are plenty of people who can do really hard algorithms but may need more time or research. Real life doesn't always present you with the same canned problems that are used in competition, and you're not operating under the same constraints. A competition problem typically starts from an algorithm, somewhat like a comp-sci exam question. As a competitor you need to quickly recognize the algorithm and then quickly write an implementation. I'm sure a lot of the best competitive people would draw a blank when presented with a problem that isn't a well-known algorithm, and certainly wouldn't be able to solve it within the time constraints.

Basically it's a game.

What I'd say is that outstanding performance in programming competitions probably correlates to some degree with intelligence. Intelligence correlates to some degree with being a good software developer.

pera
It depends on what position you are trying to fill: if your business has various hard and general algorithmic problems that you need someone to solve, then hiring an ICPC regional champion would probably be a good thing. But this is not always the case...

I wouldn't hire an Olympic medalist runner to do pizza delivery because: 1) the person will probably get really bored (and may quit or perform badly after some time), and 2) the delivered pizzas will probably be a mess inside the box.

I believe programming competitions would do themselves a favor by changing their currently ambiguous name to "algorithmic competitions". Then engineers, who are also programmers, would be OK with it, and we would stop having this kind of thread every few weeks.

That being said, I personally enjoy competitive programming and I do agree that the knowledge you acquire is invaluable :) and I also think that most engineers should practice with online judges now and then to be better at their jobs.

pizza234
> I wouldn't hire an Olympic medalist runner to do pizza delivery because: 1) the person will probably get really bored (and may quit or perform bad after some time)

I think it's a bit more complicated than that. What if Olympic medalists were conditioned to think that delivering pizza is the coolest job that exists? What if their scooters have massaging saddles and 5.1 hifis?

I definitely agree this market (segment) has, in a way, a very misled attitude/approach, but I also think there is an (evil) art to marketing such workplaces.

_0ffh
> Vast majority of engineering work is very algorithmically simple

anecdotal_evidence += 1: Am engineer; the only algorithmically interesting task I've had in the last three months was to quickly find all minimal solutions of a given instance of a certain class of constraint graph problems. That's about average at my job: four algorithmically interesting things a year. And I guess I might even be one of the luckier engineers.

kwillets
While these problems don't come up frequently, I've found that in a typical project anything involving a graph has a good chance of being implemented wrong.
0xfeba
I just match APIs up 60% of my time. 20% in 75% useless meetings. And another 20% yak shaving.
hrktb
To add to point 2), most big companies have a strong bias against 'smart' solutions. If something is too complex to be fully understood by half the engineers:

a) it won't be trusted

b) it won't be maintainable by any random staff, which puts an additional risk on choosing that solution.

The maintainability argument will usually be enough to kill any idea that only a few people are comfortable evolving.

collyw
The interview process would indicate the opposite.
hrktb
To be honest, I think a lot of company representatives (CTOs, hiring managers...) don't accurately understand or voice what they really need.

A lot of them say they want 'A players', but the organization is not ready to value people willing to rock the boat, deeply challenge assumptions, or put higher-ups under scrutiny over their decisions.

Same with hiring super smart people but not wanting to commit to risky bets or to fail often.

In these companies the most demanding people come and go, and more lenient people ('B players' ?) stay to work within the system or try to make things better more incrementally.

lordnacho
Aren't contests normally under huge time pressure? I would think that a lot of people could solve even quite hard problems given ordinary conditions like having lots of time, colleagues, and reference materials available.
jmcgough
At the same time, if you've never seen a solution that uses something like DP, you're not going to see a DP problem and think "oh, I can use DP for this". Part of it is ensuring that programmers have awareness of the tools at their disposal, so they can pull them out when they're under a deadline to get something done on time that's performant.
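
A minimal textbook illustration of the awareness described above (an editorial sketch, not code from the thread): fewest coins summing to a target. A greedy choice of the largest coin fails here; recognizing the overlapping subproblems and caching them, i.e. DP, gets the right answer.

    from functools import lru_cache

    def min_coins(coins, amount):
        """Fewest coins from `coins` summing to `amount` (None if impossible)."""
        @lru_cache(maxsize=None)
        def best(rest):
            if rest == 0:
                return 0
            # Same subproblems recur; the cache makes this O(amount * len(coins)).
            options = [best(rest - c) for c in coins if c <= rest]
            options = [o for o in options if o is not None]
            return min(options) + 1 if options else None
        return best(amount)

    # Greedy (take 12 first) would need 5 coins: 12+1+1+1+1.
    print(min_coins((1, 5, 12), 16))  # 4: 5+5+5+1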
eastWestMath
Companies don't have 2-hour time limits on solving their hard problems. If you run into hard algorithmic problems in your problem domain, you should just have someone with a graduate degree in CS theory. Let them work out a solution that's actually correct, and that they can explain, and then the engineers can implement said algorithm.
derf_
I am also an ACM ICPC finalist (our team was 2nd in the world, 1st in North America in my final year of eligibility).

I think there are a few very specific skills you learn in programming contests that are useful in the real world:

1) Once you understand the problem to be solved, you can simply sit down and write the code to solve it. The ability to turn ideas into code in minutes instead of hours or days is invaluable. It lets you try lots of ideas, because the implementation step is easy. You can pick the ones that work and throw the rest away, and you don't mind throwing them away because they weren't a big investment. You don't realize how useful this is until you start meeting people who can't do it.

Our coach always told us that there is at least one easy problem, and someone's job the moment the contest starts is to identify it and bang it out. If we hadn't solved one problem in the first 20 minutes, we were already losing.

2) You learn to write code that is simple and correct the first time. Idioms that avoid special cases (a sketch of one such idiom follows this comment). Simple data structures ("No pointers!" was common advice, though of course reality was more subtle than that). Understanding when big-O complexity really matters, and when you can use a sub-optimal solution that's easier to code and reason about.

Sure, someone who is smart can write crazy complex code that only they can understand. But crazy complex code doesn't win contests. If you make a mistake you will burn all of your time trying to figure out what you did wrong, and the worse the code is, the worse of a hole you will dig yourself into. Write simple code that doesn't leave much opportunity for mistakes.

There's a scene in Jack Reacher where Tom Cruise's character says, "James Barr is a sniper. He's not the best. He's not the worst. But he trained non-stop for two years. What does training like that do? What does any training do? Skills become reflex, muscle memory, you do without thinking. It also makes people who aren't necessarily smart seem smart by beating some tactical awareness into them." That's what contest programming does.

3) You learn to identify your limits. There's usually (but not always) at least one problem you should just throw away as "too hard" for the time allotted. You won't solve it, so any time invested in it is a waste. In the real world, where you can't just decide not to do your job, that means you know the difference between when you can go fast and when you need to slow down and reflect. When you need to write a test to really know if the code you just wrote is correct. When it's better to step back and simplify the problem, rather than beat your head against it.

Yes, people like to focus on the "hard algorithms", but really this is not what makes good contest programmers. People aren't deriving Ford-Fulkerson on the fly during a contest. They memorize a set of canned algorithms and apply them when necessary. You learn to recognize and value simple, robust solutions to problems. You carry a toolbox of these wherever you go. This toolbox keeps growing long after you stop competing, and it contains a lot more than fancy algorithms.

Now, there is also plenty you don't learn doing contest programming that you still need to know in a real job. But these are the things I learned that have helped me.

I'm still waiting for the day I need to compute the coefficients of a rook polynomial, though.
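
An editorial sketch of one special-case-avoiding idiom of the kind point 2 describes (an illustration, not code from the comment): prefix sums over half-open ranges, which answer range-sum queries with no boundary or empty-range special cases.

    data = [3, 1, 4, 1, 5, 9, 2, 6]

    # prefix[i] == sum(data[:i]); the extra leading 0 removes all edge cases.
    prefix = [0]
    for x in data:
        prefix.append(prefix[-1] + x)

    def range_sum(lo, hi):
        """Sum of data[lo:hi], the half-open range [lo, hi)."""
        return prefix[hi] - prefix[lo]

    print(range_sum(2, 5))  # 4 + 1 + 5 = 10
    print(range_sum(3, 3))  # 0: the empty range just works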

curiousDog
I would disagree that they'd be better at solving hard algorithms as well. The hard algorithms are being solved by the top theory groups at the top 10 schools.

I did a bit of programming contests in school as well and most kids practiced hard and got good at the tricks and patterns of solving those problems. The algorithms were mostly undergrad or grad level at best. They were not breaking new ground.

As to whether top ICPC talent correlates with IQ and capability: most likely, although the same can be said about a really high GPA. People like Tomek Czajka will shine in all areas.

I've worked at a couple of the Big-4 companies before and the best programmers I met there were not competitive programmers.

raverbashing
But as much as you need a good algorithm, you need it to be testable, reliable and predictable.

This involves some teamwork.

richardknop
This is probably true. I agree. I would aim for balanced teams: some top ICPC performers balanced with people who are better at solid design, creating a maintainable codebase, and working with business people to define requirements.
joenot443
I agree for the most part, but I think it really depends on the product your team is building. I can think of at least a couple companies I've worked at where the hard algorithms weren't rare, they were actually non-existent. The product's success relied on user experience, a solid design, and exhaustively tested and maintained code.

The reality at many companies these days is that the algorithmically hard problems are solved in the frameworks and libraries they use; it's simply not necessary for most engineers to understand their inner workings.

I'm curious, would you prefer an engineer with solid software engineering and design knowledge, or one with minimal experience building real software, but very in-depth algorithm and CS theory knowledge?

fatjokes
> I agree for the most part, but I think it really depends on the product your team is building.

I completely agree. However, 1) Google---whose director of research is the originator of the claim under discussion---definitely works on hard algorithms and 2) you never know how your product could grow. A feature which could get dismissed as "impossible" might be implementable by the right talent. Presumably these teams don't need to be full of top algorithm folks.

> I'm curious, would you prefer an engineer with solid software engineering and design knowledge, or one with minimal experience building real software, but very in-depth algorithm and CS theory knowledge?

Depends on the size of my team and what I'm trying to deliver, I suppose. In general I'd aim for a balance in the team, maybe 3/4 engineering/design + 1/4 algorithms. I feel like it's easier to learn design patterns than how to use algorithms creatively.

sltkr
> 1) Google [..] definitely works on hard algorithms

True, but Google also has over 20,000 engineers working on various things. I think only a tiny minority of those are actually working on hard algorithms.

fatjokes
Exactly. They don't just hire programming contest folk. Most of the people they hire are those with engineering experience, etc.
jaggederest
It's like only hiring NASCAR drivers to drive taxis because "they might need to go fast".
BatFastard
Indeed, for taxi drivers it's better if they go slower and entertain the guests along the way.

I have had top algorithm people work for me and they can make magic happen. But they tend to have the quirky personalities of wizards. I once had an amazing coder ask me if he could do some side work so that he could pay off his credit cards; sure, no problem. The next week he came back and told me how he had bought a new $10k telescope...

rkunnamp
Just as there is no such person as a 'good CEO', only 'a good CEO for a particular company at a particular point in time', there is no such person as a 'good programmer', only 'a good programmer for a particular job at a particular point in time'.

Context matters a lot. One does not use an AK-47 to kill a mosquito. It is terrible for that job. But that does not make the AK-47 a terrible weapon.

"Good at programming competitions does not equal good at any programming job" is a more appropriate sentence.

dsfyu404ed
Have you tried killing mosquitos on the back of a deer[1] standing behind a tree at 400yd? Even the AK with its heavy cartridge isn't enough. You want a full-power rifle, especially if the tree is a hardwood. /s
kpil
I think there are personal traits that make people perform well in most situations, but perhaps not all extremes. As continuity is normally good, you try to find those, and change only if you really have to.

A programming competition evaluates almost none of the traits needed to run a marathon as a high-performing team.

lr4444lr
In the chapter of his interview in Coders at Work [0], Norvig says that the strongest correlate in the interview process with success at Google was, paradoxically, having been given the lowest possible score by one of the interviewers. He surmised that this was because, for such a person to have been hired at the end of the process, someone trustworthy must have seen so much potential in the prospective hire that they strongly advocated for an offer, which worked out in spite of that low rating.

Kind of ironic for a company whose product values are so tightly tied to quantitative data.

[0]http://www.apress.com/us/book/9781430219484

ubernostrum
More recently, people who have data across many interviewees and interviewers at multiple companies, and are willing to analyze it (so they can see the ongoing performance of candidates a company passed on!), have pointed out:

http://blog.interviewing.io/you-cant-fix-diversity-in-tech-w...

After looking at thousands of interviews on the platform, we’ve discovered something alarming: interviewee performance from interview to interview varied quite a bit, even for people with a high average performance ... roughly 25% of interviewees are consistent in their performance, but the rest are all over the place. And over a third of people with a high mean (>=3) technical performance bombed at least one interview.

aoeusnth1
This isn't surprising - in fact, it would be surprising if it wasn't the case. This fact says nothing about the ability of Google's hiring bar to distinguish between good and bad hires, except that it is not a perfect signal.
lr4444lr
Totally agree. It just serves as a useful reminder, though, not to fall prey to the fallacy of deliberately chasing the correlational measure as a matter of policy.
andrewla
A qualitative assessment of that factor is hard because Google doesn't hire the people that it doesn't hire.

Another assessment could be that a divergence of opinion among interviewers is itself a positive sign -- programmers with strong controversial opinions who are willing to hold to them even in an interview setting might be better programmers for that.

A less sanguine assessment is that "success at Google" correlates with people who generate controversy around themselves, simply because that is something that creates visibility.

mcv
I once had an online discussion about the difference between competitive programmers and professional programmers, and which were better. Someone argued that competitive programmers were better, because they had to perform in more extreme circumstances. As he put it: they were sent into the forest with a knife to kill a lion.

Everybody else ran with that metaphor. Someone asked what a lion was doing in a forest; don't they live on the savannah? I asked whether he was sure there was a lion; plenty of times I've been sent to kill a lion and ended up having to kill a goat or an elephant instead. Are we even sure it needs killing?

I think that's the difference between a competitive programmer and a professional programmer. The competitive programmer will be much faster with a solution to the given problem, but the professional programmer will solve a better problem.

dsacco
That conclusion seems just as arbitrary as the one you're contesting.
wlesieutre
Mountain lions like forests just fine! Unfortunately for them, we chopped most of the forests down.

Assuming you're in the US and somebody tells you there's a lion around, pretty good odds it's not one of the big orange ones.

EDIT: a Bay Area story published yesterday: http://www.nbcbayarea.com/news/local/Several-Mountain-Lion-S...

mcv
A mountain lion is a very different animal from a lion. Different subfamily and genus, even. I don't live in the US, but when someone tells me "lion", I tend to assume they mean a lion, and not a puma. Though if there's any doubt, that's absolutely something to ask questions about.
NTDF9
I have had a bad experience with an algorithmic topcoder.

The guy has a brilliant mind. But he doesn't understand the big picture or why his code would fail when integrated with big codebases.

He thinks his work is "done" when he has written his tiny bit of code with some queues and trees. Then he leaves it for someone else to integrate and thus really solve the problem.

The best engineers I found were the ones who took ownership of their projects and had the ethic to dig deep. Not the ones who could solve toy algo problems.

bit_logic
Here's a list of companies that don't do this: https://github.com/poteto/hiring-without-whiteboards. I hope it's the beginning of an industry-wide trend away from these types of interviews.
FTA
Look at programming competitions from an assessment perspective: what are they measuring and is what's being measured good or bad for a job?

Ability to work under short time constraints (probably good), to hack out some solution that will work temporarily (good) but probably not solid (bad), to forego much time to consider the implications of design and implementation choices (bad), to develop without communication if solo competition (bad) or without communication outside of the small group of core developers (bad), to build a solution without getting feedback and refinements from stakeholders (bad), and so on.

kazinator
People who enter programming competitions are looking for some sort of glory: to be stars. When they don't get it from the job, they get bored.

I suspect that the people good at programming competitions could easily perform well on the job, if the motivation were there. I don't think it necessarily has anything to do with short-term versus long-term problem-solving focus.

There are plenty of short-term problems that you have to solve on the job to be effective. But you're not solving them in competition against anyone, where procrastinating means losing, so the motivation vaporizes.

Also, since job interviews are like programming competitions, people who are good at programming competitions figure they can easily get a job anywhere, and do so repeatedly. They are not motivated to work hard by job-security concerns.

js8
I would tend to agree. While I was never very good at programming competitions, when I saw some of the winning code on TopCoder, I somehow lost interest. If that's the code I would write if I learned to be better at competing, then I should rather spend the time learning something else.

On the other hand, something like Project Euler or even Advent of Code is very nice, if you do it at your own pace, or for learning a new language.

shortleon
I do think, though, that competitive programmers can become great programmers more often than not.

They practice writing correct for-loops, branches, recursive functions, efficient graph searches, input/output, data structure use, etc. at a very fast pace. Being able to do that requires you to chunk a significant amount of information.

I've found that doing stuff fast and correctly makes you learn and internalize concepts more quickly and deeply.

This is all unfortunately anecdotal; I'm not sure how to google for such research (not just in competitive programming, but on "learning to do stuff fast improves learning rate").

Yet with every instrument I've played and tried to learn, the moment I tried to play fast but correctly (be it drums or piano), I had to improve my technique, internalize the rhythm patterns (more complex rhythms are insanely difficult to play fast), improve my memory, etc. With that came a significant amount of progress.

The same thing happened to me with language. I had been speaking English for 20 years but still had trouble with fluent pronunciation (despite my writing feeling natural); I knew what I wanted to say but somehow my tongue got all tangled. Yet when I tried learning some rap songs with insanely fast diction, my speaking improved to the point where it felt normal.

0x4d464d48
I haven't done a code competition before, but my understanding is that code competitions reward cowboy coding over good engineering practice, i.e. implementing features while paying no heed to maintainability.

When you have that mindset and start dealing with a codebase as monstrous as Google's, it sounds like a recipe for some serious technical debt.

forhayley
And yet Google absolutely loves hiring competitive programmers. Your point is somewhat like saying elite sprinters will not do well in marathons; however, elite sprinters will still be far better than any random person off the street.
potatolicious
It continues to surprise and frustrate me that as an industry we continue to highly prize proxy signals for engineering skill, when engineering skill is so directly measurable.

Why even bother with measuring things that are N-degrees removed from actual engineering, when you can just get people to engineer things?

I know others on HN have been hammering this point home for years, but until something changes it deserves to be repeated ad nauseam: work samples, work samples, work samples, work samples.

Stop with the trivia questions. Stop with the contrived algorithms questions. Ask people to design systems, ask people to defend their designs, ask people to write real runnable code that directly relates to the work they will be doing at your job, ask people to review a real piece of code written at your company, ask people to critique real design produced at your company. Anything but what we're doing right now.

teen
I actually like the current interviewing approach... and I feel like we are a silent majority. I think whiteboarding / coding algorithm and design problems is pretty fair and reflective of engineering skill.
stupidcar
How is whiteboard coding reflective of engineering skill? At what time, during any job you've had, have you needed to instantly write code on a whiteboard, in front of peers, in a matter of minutes? Almost by definition, it's not reflective of engineering as actually practiced in a job.

I'm a far bigger fan of at-home coding exercises. Yes, I know, some people get annoyed at these because they think you're asking them to do free work in their spare time. But what other way is there to test people's coding under circumstances that most adequately mirror those of an actual job? E.g. they have a (relatively) unlimited amount of time, they have access to Google, their IDE, etc.

I'd much rather see what someone can come up with, in response to some novel problem, based on a couple of days of programming, than what they can hack out on a whiteboard based on thirty seconds of thinking. The former seems closer to what coders are actually required to do, day after day.

recursive
> At what time, during any job you've had, have you needed to instantly write code on a whiteboard, in front of peers, in a matter of minutes?

Fairly regularly. Like when I'm explaining to someone how something works after they've asked me about it.

sidlls
You regularly have to instantly write code on a whiteboard for a problem from undergrad CS coursework, one you haven't necessarily worked on before, in a constrained time with your job hanging in the balance?

I'm skeptical.

teen
?? It's not something that's hard to do; it's like the foundation of my career. I use the principles all the time. My whiteboard / verbal / code solutions are based on that.
sidlls
Using the principles all the time is different from regularly reimplementing CS textbook trivia under conditions in which your job is on the line. Where do you work, that you have to constantly re-invent the wheel over and over again under such constraints? I'd consider such a job to be hell. Not hard, just tedious and uninteresting to the point of pain.

I'd also not consider it to be an engineering job. What you describe seems to me to be more like a CAD technician job at an engineering firm.

recursive
No. You just changed the question. Even most whiteboard interviews don't meet your newly narrowed criteria. I just meant to state that I regularly write code on a whiteboard in my real work. You could argue it's time constrained too. At least as much as it would be in an interview.
DavidWoof
> At what time, during any job you've had, have you needed to instantly write code on a whiteboard, in front of peers, in a matter of minutes?

Well, constantly, actually. Although I'm thinking of designs and snippets rather than actual functions. I guess it depends on what you're asking people to whiteboard during the interview. I agree that asking people to whiteboard qsort is silly, but walking through design alternatives with occasional code snippets to illustrate implementation options is a pretty basic skill.

> based on a couple of days programming

Either your company is very well-known and very attractive to candidates, or this is going to incredibly restrict your candidate pool.

I think smaller work samples are a great idea, I think code reviews are a great idea, but asking for two days sounds like a bit much, especially early in the process.

kafkaesq
Although I'm thinking of designs and snippets rather than actual functions.

But that's the thing -- at whiteboard interviews, they don't ask you to produce "designs and snippets". They make you write actual working classes and functions.

tptacek
You're the overwhelming majority, which is frustrating, because the method you're advocating for empirically and clearly does not work well.
teen
according to some random people on the internet?
gautamdivgi
The current approach has a huge false negative rate. It keeps out the bad apples, but it also marks a lot of good/great apples bad. The time invested in studying for these interviews is quite onerous in itself. So I'll say I'm not a fan of the approach.

That being said, we do use this method to interview where I currently work. However, the problems are not some made-up situation or a test of data structures/algorithms, but a set we've encountered in real life.

From what I've heard, the whiteboard method isn't really popular with many companies today either, but everyone is sticking to it until some other alternative which avoids false positives presents itself.

sidlls
It is reflective of skills at internalizing and applying undergrad (and sometimes graduate) level academic trivia in a familiar context. The interview process you and other "silent majority" members think is effective at measuring engineering skill is not, actually.

I really wish you and so many of your colleagues would spend a few weeks with engineers in the physical science engineering disciplines. What we do in this industry is a farce, with respect to engineering.

taejo
I like it... because I'm good at it, so it's easy for me. However, I get no idea of what working at the company is like (beyond what can be gained from asking directly), and the company gains no information about my suitability for that particular job, since they're assessing me on exactly the same criteria as every other job.
thatswrong0
What makes you think that? I consider myself a decently productive engineer, and I've always loathed the fact that I have to study for most of my interviews: studying things that I have almost never used in my actual engineering career, in a situation nothing like actual engineering work.

My favorite interview process (given by the company I currently work at) consisted of one coding prescreen that wasn't terribly difficult, one session where I simply talked with two engineers about past projects I worked on, and then one higher level architecture problem that involved pseudo-code but not some esoteric algorithms.

That interview process, I think, is way more representative of my actual day-to-day work than any algorithm interview I've done. It deemphasizes the ability to figure out / regurgitate previously memorized tricky algorithms on the spot (when do you ever need to do that during the work day?) while emphasizing the ability to communicate how and why you make decisions.

Which is an incredibly important part of engineering, way more important than whatever skill whiteboard algorithm interviews test.

DavidWoof
What's ironic to me is that a lot of people are going to see this video and just start using programming contests as a negative proxy signal for engineering skill.
kenning
I am currently reading "hackers: heroes of the computer revolution" and it seems like this culture of optimizing algorithms ("bumming") comes from the extremely early days of programming, when people were laying the groundwork for all the basic functionality of CS.
kafkaesq
Well it's not just that -- algorithms do matter, and a very small number of programmers do move the needle in this industry through their ability to shine in that area.

But the vast majority of the time -- that's not what your company needs at all. You need someone who's smart and reliable, and more to the point, believes in your mission. And who will go in and do all that far from algorithmic, highly unglamorous stuff that keeps your business from going underwater, day in, day out.

But of course - those skills are difficult to evaluate in a short amount of time. So instead companies go for a skill that can be "measured".

A skill like -- you guessed it -- algorithms.

teen
according to who?? like there are people who post stuff like this all the time, but it's just the anecdotes of a small population. companies have tried many different techniques and this is currently working the best. no top-notch engineer is going to do an 8-hour take-home test for every interview
kafkaesq
companies have tried many different techniques and this is currently working the best.

Is it, now? The best thing you can say about it is probably "it seems to kind of, sort of work" -- at the cost of burning through a whole lot of candidates, and their presumably worthless time.

no top notch engineer is going to do a 8 hour take home test for every interview

Yup, take-homes suck, also. But that doesn't mean that complementOf(8-hour-take-home) is the right answer, either.

misingnoglic
And with a work sample, how do you know:

1) how well they work with others?

2) how long it took them to come up with the solution?

3) if they actually came up with the solution themselves?

kafkaesq
1) how well they work with others?

Whiteboard sessions don't tell you that either. Only whether they got lucky and you picked a problem they already knew (so they can breeze through the solution in front of strangers with confidence). Or whether they can at least suck it up and pretend they enjoy this nonsense for 45 minutes (or 4-6 hours, depending on how crazy your company is about this stuff) to get a job.

2) how long it took them to come up with the solution?

Does it matter? Really now. What matters on the job is did you go the last mile, think about the corner cases and pitfalls, and make sure your solution was cleanly coded and documented for the next guy or gal. Not whether you solved that silly HackerRank problem in 35 minutes as opposed to 45.

3) if they actually came up with the solution themselves

This one's pretty simple to sniff out, actually: "So I'd just be curious, why you did X here and not Y?" There's pretty much no way a bullshitter can answer that question.

BatFastard
Totally agree, and if someone does not have code they can share, they have no passion for programming. Therefore no job with me.
forhayley
Nobody good has time to do work samples. Not to mention that in terms of time invested they are always highly asymmetric.

Do you ever stop to consider why most top successful companies ask algorithm questions? Are they all really just so stupid?

williamsmj
It's not that simple.

Many good engineers, especially those who aren't young men, don't have existing work samples they are able to share. Perhaps, by choice or otherwise, they have a personal life that does not allow them to write code for free. And their employers won't allow them to share work code.

And many good engineers are not willing to spend an unpaid weekend writing code auditioning for a job they aren't yet sure they want.

And, as you know if you've ever worked with a disruptive colleague, or had customers, engineering in a team for a product is much more than just writing code.

potatolicious
I don't mean to propose that people spend their off hours writing work samples - that's unreasonable for all the reasons you've given and more.

What I mean to propose is that we dramatically alter the current on-site interview process used by most companies.

Instead of 5-7 interviews, consisting of 5-7 independent but ultimately equally contrived algorithms questions on a whiteboard, use the several hours you have the candidate on site to produce real, working code that reflects the work your company does.

Which is to say, this requires no more time commitment than the existing interview processes at typical tech companies.

If the work your company does is deeply algorithmic, this will be reflected in the work sample produced via this process. If the work your company does is more heavily UX-oriented, this will be reflected in the work sample produced, also.

Instead of "over the course of a full-day on-site round of interviews, candidate produced several short snippets of hand-scrawled code solving various CS textbook problems, which we will now use to infer general engineering ability"...

... you can say "over the course of a full-day on-site round of interviews, candidate designed and produced a module of code that does [X small thing that company does], which can be assessed with the same rigor and methodology we already apply to our existing work and employees"

sidlls
While I agree that a focus on "work samples" is inappropriate, I must emphatically agree with the parent's broader point.

The way companies in this industry conduct interviews and measure performance is like an aerospace engineering company testing candidates for satellite engineering on their understanding of the Standard Model (physics) or group theory or some such. There's too narrow a focus on what is required of an engineer in the software industry.

softawre
You are allowed to pay somebody for a work sample.
angersock
> don't have existing work samples they are able to share.

This, at least, should be fixable.

Don't work at places that won't let you talk about your work and demonstrate your competence.

pm90
I don't think that's always possible. What if you worked for Healthcare or Security or Finance, where there are strict laws for keeping code/design private?
angersock
What are examples of such laws? There are laws allowing prosecution if you break agreements, sure, but the agreements themselves are what can be fixed by encouraging the market that they're incorrect.
DanBC
In the UK: https://en.wikipedia.org/wiki/Official_Secrets_Act_1989
angersock
That seems targeted at .gov employees, which is a little different.
tptacek
That's not what a work sample is.

A work sample is a (usually, hopefully) standardized piece of work you request from all candidates that mirrors the actual work they'd be doing on the job.

There's no reason a work sample needs to be so onerous that it costs you a weekend. What people forget when this comes up for discussion is that interviews are work. In fact, they are themselves onerous work, since they require you to be on-site and intensely engaged in ways you don't have to be to do the actual work of a job.

A work sample, on the other hand, can be done from your home, with a beer next to your laptop if that's how you roll.

The notion of audition work has definitely been abused in our industry. People get work samples that aren't standardized. They get work samples that are later ignored. Work samples don't have objective scoring rubrics. Some annoying companies assign new features for their products as "work samples".

Done carefully, though, with objective and predictable grading and calibrated to offset in-person interviewing time, they're superior to any process we have.

williamsmj
As I said, many good engineers are not willing to spend unpaid time auditioning for a job they aren't yet sure they want, doing work they can't reuse, by taking a formal standardized test.

It may in fact be a better hiring practice in terms of its ability to predict job performance. But you will _lose good candidates_.

And in particular, you will lose experienced candidates. You are embedding biases toward younger (and therefore less experienced) male coders in your hiring practices if you require what is, from their point of view, free work. (You can see the biases in your "with a beer next to your laptop" remark, for example.)

tptacek
You updated your comment after I replied to it, and so I updated mine.

Source: I ran a work-sample recruiting process for the largest software security company in the US (after it acquired my startup, which was one of the top 5 in the US), and our process most definitely was not biased to younger workers. Work samples drastically improved diversity.

Edit

You've edited your own comment again, which makes you very hard to respond to. If you want to rebut anything I've written so far, can I ask you to copy it into a reply to this comment?

williamsmj
Sorry, new commenter, unfamiliar with the etiquette of the edit button. I'm done.
chucksmash
I agree with you somewhat, but couldn't you just as easily replace "unpaid weekend writing code" with:

- unpaid weekend updating resume

- unpaid weekend practicing algos

- unpaid weekend browsing job sites

I don't think asking job seekers to be interested enough to give a work sample is an undue burden. If that filters out people playing the numbers by applying to every opening... is that such a bad thing?

ng12
It's a problem of scale. Spend half an afternoon skimming through CtCI (you shouldn't need much more as a seasoned developer) and you're good for N job interviews. I'll take that over spending 4*N hours on take-home problems in addition to time spent on-site.
aNoob7000
It depends on how far along in the interview process you've gotten with a potential employer, and whether this is a step away from being hired.

If the potential employer is asking for code samples up front, I think that's a bad sign. Let's talk about the work environment and the expectations of the position first, and then we can focus on my tech skills.

morgante
If I spend a weekend updating my resume, practicing algos, or browsing job listings, then that time is useful for all jobs I'm applying to and interested in.

If I spend my weekend on a work sample assignment, it's only applicable for that one company which might very easily not give me a compelling offer or be somewhere I'd want to work.

At this point, I'll only do work assignments if I'm already very familiar with the company and have reason to suspect they'll give me a compelling offer.

poletopole
Usually the work sample is something simple that doesn't take longer than a day or two. Many interviews that don't use work samples can go on longer than that and accomplish nothing. I was hired after doing a work sample that took a day. I was still paid for my work, regardless of being hired. The work sample wasn't "standardized", just a typical task one of the other developers would have done otherwise. So I would encourage any employers reading to try this process out--it's better for both parties.
desiderantes
>You are embedding biases toward younger (and therefore less experienced) male coders

Younger maybe, but where does the male part come from?

williamsmj
Less likely to have personal obligations to others, e.g. family. Less likely to feel social pressure to do time-consuming emotional labor outside of work.
tptacek
How is the standard interview process, which often requires candidates to submit to multiple rounds of in-person interviews, sometimes even requiring travel, better than a process that people can do mostly from their homes?
bit_logic
I'm an experienced engineer (10+ years) and I would favor this process way more than the current algorithm/DS interview. It's the algorithm/DS interview that really favors young graduates. They already have an advantage because college knowledge is still fresh in their minds. And they often have the free time to grind leetcode for months. Experienced engineers often have families or other priorities and free time is hard to find.
methodover
Honest question:

I'm the tech lead for a startup with just a handful of programmers. I'm interested in hiring someone who can help us come up with well-architected, novel solutions to the problems that we're already solving. The issue is, we're solving them in a way that turns out to be pretty difficult to maintain and change. I'd like a totally new, fresh approach that takes the lessons we've learned and creates something that's much easier to write fixes for, much easier to extend with new features.

I need someone who doesn't just build on top of existing, well-defined codebases/APIs. I need someone who has the ability to engineer a complete, well-rounded, extensible codebase/API himself.

It would seem to be remarkably difficult to test for this ability. Even work samples that can be done in the short time period you describe wouldn't surface it.

hex13
"build on top of existing, well-defined codebases/APIs." is not necessarily easier than "engineer a complete, well-rounded, extensible codebase/API himself."

I think these are two different skills. Some programmers are better at building on top of existing codebases; some programmers are better at building things from scratch.

maintaining vs starting.

tptacek
Short work sample tests don't need to be simple work sample tests. One of Matasano's work samples required candidates to reverse-engineer a binary protocol, build their own client for it, and then use that client to find vulnerabilities. If you had no aptitude or experience with that problem set, finishing it would take a very long time.†

Ultimately the problem you're really describing is the challenge of hiring people you don't really know how to hire. If you know very well how the role you're hiring for is supposed to function, you should be able to generate challenges within whatever constraints your candidates have. If you don't know very well what the capabilities of that role need to be, no interview process is going to work reliably for you.

(we didn't care, by the way: if you did well with that work sample, it mattered very little to us whether you were prepared when you started it).
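
A toy flavor of the kind of task described above, against a made-up wire format (the actual Matasano challenge was of course far more involved):

    import struct

    def parse(buf: bytes):
        # Hypothetical layout: 1-byte type | 4-byte big-endian length | payload.
        msg_type, length = struct.unpack_from(">BI", buf, 0)
        payload = buf[5:5 + length]
        return msg_type, payload

    raw = b"\x02\x00\x00\x00\x05hello"
    print(parse(raw))  # (2, b'hello')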

ryandrake
As an applicant, I'd be all for going the work-sample route, but only if it's paid for and if it were used in lieu of the typical face-to-face hazing session rather than in addition to it. I notice more and more companies are asking for work samples (or "homework assignments" or whatever they want to call them), but they just pile it onto their already onerous process.

The major pain point in interviewing (at least for me) is the massive time sink for a very small chance of success. I'm spending a few hours on the application, then a few more doing phone screens, then a weekend doing a work sample, then I blow a VACATION day (which has monetary value and of which I have only 10 per year), and at the end of it all who knows what my odds even are? Multiply that by, let's say, 10 companies per year, and: I've invested 31 days of my life, have no vacation left, and I still may end up with nothing.

bdcravens
> I still may end up with nothing

What do you end up with if you filter out companies that ask for work samples?

ryandrake
1 additional free weekend for every one I don't do.
bdcravens
I'd definitely say that if a company is handing out work product assignments that take an entire weekend, it's somewhere you don't want to work.
tptacek
I think paying for interviews is problematic for a bunch of reasons.

On the other hand, I agree wholeheartedly with people who reject "homework assignments" layered on top of a standard interview process.

The onus is on people who want to take advantage of work samples to:

* Ensure that the time they're asking of candidates is offset by lowered time demands elsewhere

* Ensure that they're using the work samples objectively, so that people aren't asked to do coding work as part of a crap-shoot application process.

The process we used at Matasano and NCC was less demanding than typical job interview processes. The challenges were simple and self-contained, and when they were completed we could tell you with a decent degree of confidence whether you were likely to be offered a job at your (shortened) on-site interview.

pklausler
As a long-time interviewer, I've learned that a candidate being good at programming competitions means that they're probably good at programming competitions.

It's a weak signal either way for success or failure at interviewing and being able to do the job. Part of the problem, as we've discussed on HN so often recently, is that a programming interview has to waste time with FizzBuzz-style questions just to flag the candidates with great resumes, transcripts, and phone screenings who still actually can't program a computer to solve even a trivial problem.

kafkaesq
who still actually can't program a computer to solve even a trivial problem.

Or who get nervous, and freeze up.

bjacokes
Here are some of the positive qualities I would expect from a competitive programmer compared to the general CS population early in their career:

- knowledgeable at algorithms and data structures

- good at analyzing correctness and edge cases, even on simple non-algorithmic problems (e.g. FizzBuzz)

- accustomed to working hard and learning new concepts. This attribute is not specific to competitive programmers – for example, I'd expect the same from an open source contributor – but it's higher than for a typical college student.

Some negatives I'd expect, which are fixable over the course of their career:

- over-confidence in code, under-testing

- less skilled in OOP, coding style, version control systems, as well as web development or systems code (unless they have specific previous work in these areas)

- sometimes looking down on gruntwork/rote tasks as beneath them (like the view pure mathematicians have of applied math or statistics)

I think that list of positive attributes often outweighs the potential negatives, especially during an internship or in the first year or two of someone's career. After that, I would expect many non-competitive programmers to have picked up some of those advantages (code correctness, learning new concepts).

I've tried to steer my own interviews away from algorithms (especially DP), focusing more on problems that are relatively straightforward while still being complicated enough that someone has to write precise code and identify/fix a few edge cases.

kinkrtyavimoodh
Adding to the good points here: I think coding-competition superstars are also potentially more likely to have a diva complex, while others are more likely to have an impostor complex when they get into a company like Google. The right amount of impostor complex (where it enables you but does not cripple you) is actually very helpful, as it helps you learn and better yourself.
sghiassy
Question:

Does being good at programming competitions make you good at interviewing for programming jobs?

Obviously there's some irony in that question. But I also think there's some truth to it.

xyzzyz
In my opinion, based on my and my friends' experience, being good at programming competitions clearly makes you very good at interviewing.

I don't think that being good at contests makes you bad at the job, though. Consider the following scenario: assume that the skill of being good at contests is completely orthogonal to being good at the job (so the real correlation coefficient is 0, instead of the negative one observed by Norvig). Then, since it's easier to get hired if you're good at contests, the observed correlation coefficient inside a single company will be negative, due to selection bias. Depending on the effect sizes, the selection bias might even make the observed correlation coefficient negative when it's positive in reality (which I think it is).

For a more intuitive explanation, consider a used-car buyer who has a preference for black cars (this corresponds to the algorithm-based interview process of companies like Google or Facebook). Even though a car's color should have no correlation with its quality, the black cars owned by that buyer will tend to be of worse quality than the non-black ones, because the buyer was willing to accept lower quality to get the preferred color.
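
To make the selection-bias mechanism concrete, here is a minimal simulation (illustrative Python; the weights and hiring rate are made up): contest skill and job skill are drawn independently, hiring favors contest skill, and the hired pool nevertheless shows a negative correlation.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    # Two independent traits: the true correlation is zero by construction.
    contest_skill = rng.normal(size=n)
    job_skill = rng.normal(size=n)

    # Hiring weighs contest skill heavily (made-up weights), plus noise;
    # only the top 5% of candidates get in.
    hire_score = 1.0 * contest_skill + 0.5 * job_skill + rng.normal(size=n)
    hired = hire_score > np.quantile(hire_score, 0.95)

    print(np.corrcoef(contest_skill, job_skill)[0, 1])                # ~0.0
    print(np.corrcoef(contest_skill[hired], job_skill[hired])[0, 1])  # < 0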

thaumasiotes
> Depending on the effect sizes, the selection bias might even make observed correlation coefficient negative even if it's positive in reality (which I think it is).

This is a well-known effect with SAT scores. SAT math and verbal scores have a very strong positive correlation in reality; inside almost all colleges they are negatively correlated, because colleges impose threshold effects below and above. (That is, colleges admit students in a narrow band of SAT scores, rather than admitting everyone above a minimum threshold.) You can stylize that into the idea that total SAT score is constant for the students in any given college, which easily explains why the component scores would be negatively correlated within colleges.
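
The banding effect is easy to reproduce the same way (a sketch with invented score distributions, not real SAT data):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000_000

    # Math and verbal scores with a strong positive true correlation (~0.92).
    math_s = rng.normal(size=n)
    verbal = 0.7 * math_s + 0.3 * rng.normal(size=n)

    # A college admits only a narrow band of combined scores.
    total = math_s + verbal
    in_band = (total > 1.0) & (total < 1.5)

    print(np.corrcoef(math_s, verbal)[0, 1])                    # strongly positive
    print(np.corrcoef(math_s[in_band], verbal[in_band])[0, 1])  # negative in the band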

msteffen
As a former ICPC world finalist who later worked at Google, I would say that practicing for programming contests is almost like cheating as far as Google interviews are concerned. The Google interview format is almost identical to the ICPC practice sessions I did in college.

I've seen this video before, and IMO the extreme similarity between programming contest questions and Google interview questions could explain the negative correlation. Specifically, borderline engineers with programming contest experience are more likely to get hired than borderline engineers without programming contest experience, and therefore the set of Google engineers with programming contest experience includes more borderline people than the set of Google engineers without programming contest experience. Thus the slight negative correlation.

opportune
I expect this to be the true root cause of the negative correlation as well. Technical questions are good at rooting out those who aren't good at data structures and algorithms, and at promoting those who are. However, you can be excellent at solving ICPC questions and not know anything about documentation, object-oriented design, OSes, databases, anything web-related, etc. It's a very specific skill, and practicing programming contests optimizes for it.
christophilus
This is the best explanation of the phenomenon I've read. I think you're exactly right.
cbhl
When I was in high school, being good at programming contests meant picking up a bunch of habits that you'd never use at work -- memorizing the same twenty #include lines covering every conceivable STL data structure you'd need; using single-letter variable names; no comments whatsoever. If you and the next person come up with the solution at the same time, you might lose simply because the other person could type faster. You have to un-learn these habits for industry.

Just about anyone can get exposure to a set of representative coding questions (see: the USACO training robot, or Cracking the Coding Interview), but training for these contests means spending XX hours a year under a time limit trying to write code from memory (because you don't have time to look things up in the manual).

paulcole
Also important to remember that good at programming does not necessarily equal good at job.

Excelling as a communicator, being an empathetic person, and having great interpersonal skills are just as important as (perhaps more important than) how well you can code.

agounaris
Winning the dunk competition does not make your team an NBA champion...
rdiddly
Another sports metaphor: Being a good sprinter doesn't make you a good marathoner.
BoiledCabbage
But winning the 3pt shooting contest is a pretty good proxy for shooting ability.
NTDF9
And I'm so glad that the Golden State Warriors didn't just hire Kevin Durant for his 3pt shooting abilities. He'd lose to a lot of people who are better 3pt shooters.
sjg007
So what happened to teamwork? We see dominant teams last 4-5 years and then fizzle. Wooden had 10 championships with 7 in a row. Celtics had 8 in a row of 10 championships.
NTDF9
Exactly! I'd hire a Durant (a high performing team player) over a sharp-shooting 3 pointer genius (fast algo coder)
BoiledCabbage
True, but they do already have the prior two years' winners in Curry and Klay.

In fact, the 3pt shooting contest winners from 5 of the past 7 years were present in the NBA finals this year.

James Jones, Kevin Love, Kyrie Irving, Marco Belinelli, Steph Curry, Klay Thompson, Eric Gordon

That's probably not a coincidence.

aqp
http://ruberik.blogspot.com/2017/07/no-programming-competiti...

It looks like it wasn't "being good at programming competitions" that was negatively correlated with job performance.

It was "participated in programming competitions".

And there are some more "how to interpret machine learning models" caveats in that blog post.

It seems to me the biggest factor in explaining this is that the people who are just below the hiring line but participate in competitions get a bump over the line. Since there are more people just below the line than above it, the "participates" group is bottom-heavy, producing the correlation.
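
A minimal sketch of that bump-over-the-line mechanism (all numbers hypothetical): give competition participants a fixed bump at the hiring bar and compare the hired groups' underlying skill.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 1_000_000

    job_skill = rng.normal(size=n)
    competes = rng.random(n) < 0.2   # 20% did competitions (made-up share)

    # Competing bumps your interview score but, by assumption, not your skill.
    interview = job_skill + 0.5 * competes + rng.normal(size=n)
    hired = interview > 2.0

    print(job_skill[hired & competes].mean())    # lower average skill
    print(job_skill[hired & ~competes].mean())   # higher average skill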

I do a lot of interviews, and it seems to me that lots of people with experience perform below how they "should", because they're not practiced at solving problems from scratch, they work all day on modifying larger systems. Programming competitions would fix that for them, as would most open-source hobby projects.

AndrewStephens
I don't find this surprising. I played around on TopCoder enough to reach the first division, but the problems presented were in no way related to my day-to-day work. While the people that did very well (I never got anywhere once I hit the top grade) might have had an excellent command of very specific data structures and techniques, none of the code would have been very useful in the real world.

If I was interviewing someone, a career as a competitive programmer would not be a detriment but it would not count for very much overall. We are looking for creative thinkers that can work as a team.

CalChris
The market is pretty good at sorting this out and I'm sure programming competition winners are fairly valued. I've seen more Berkeley, Stanford, MIT, CMU and IIT degrees rise to the top than other schools. I haven't seen any top coders. Maybe they're there and I haven't noticed. But I haven't seen them on resumes that I've culled through.

It's probably something you might list on a first job resume but further on down the road, you'd just list your work and your education.

WalterBright
All job interviews rely on proxies for whether someone will be good on the job or not, because the only way to know is to actually hire them.

All proxies are inaccurate.

softawre
Yeah. The best proxy is just to hire somebody as a temp contractor for a month or so if both parties can afford it.
blacksmythe
Also, everyone is convinced that their proxy is the best achievable, although of course they test it only against false positives, not false negatives.
WalterBright
> everyone is convinced that their proxy is the best achievable

In my experience, people merely hoped their process was good enough. And it usually was. The "we only hire the best" is marketing propaganda and they know it.

qdev
My viewpoint is that of a reasonably seasoned developer returning to the job market. I never did programming competitions in university, though I surely was someone who fit the profile (computer science guy with a discrete math bent). As part of my recent prep for a job search, I joined one of the programming competition websites and did a couple of contests.

I posit that one of the dangers of spending a lot of time doing programming competitions and becoming very proficient at them is that, perhaps, you can come to believe that "true" programming, some sort of Platonic ideal of programming, is about coming up with the clever insight that solves an algorithmic puzzle.

But, in fact, a fair bit of _commercial_ programming is down and dirty, with databases, and user interfaces, and a lot of the time is really just shuffling data from one place to another, maybe filtering it or combining it with another set of data.

And that's just at the beginning of your career. Later on in your career, success means being able to work at larger scales in a team. That means organizing the code in a way that supports the efficient development of the codebase by individuals like yourself, by your team, by the development group as a whole... And at the architect level, you perhaps are looking at designing the system to support the efficient operation of the entire organization.

So I can easily believe that success at a programming competition does not correlate with long-term success as a software engineer in commercial software development. The two are really very different.

(Btw, I actually found the competitions that I did to be fun, but mentally exhausting. I'd say go ahead and do them, especially if you have an inclination for those types of problems. Just be prepared to use a different mindset for commercial software development.)

bshanks
I see a lot of comments speculating that there may be something wrong with programming competition winners, but this result might be a statistical curiosity unique to Google. It's possible that for most companies, being good at programming competitions is still positively correlated with job performance. For example, what if most people whom Google hires could have won some programming competitions if they had wanted to (a situation most other companies are probably not in)? In that case, 'won programming competitions' could be data that is a lousy filter for Google, yet a good filter for other companies.
closed
Good point. In essence, one theory people have is that competition winners have over-specialized, but your theory is that there could be floor effects in the range of ability at Google (i.e. the floor is high enough that many people at Google could win competitions).
purpleidea
Completely agree with Peter. I'm shit at those competitions (maybe not the worst ever) but I think I'm pretty good at my job. But I think there are also some who are good at both.
x1798DE
In general, I think there are a lot of things like whiteboard interviews and coding competitions where employers would prefer "good at X means good on the job", but are reasonably happy to settle for "bad at X means bad on the job."
samlittlewood
When I am interviewing:

Candidate having competed in programming competitions - big green flag.

But, if success in competitions is, in their view, their biggest asset - amber flag.

I would extend this to most competitive endeavours.

jpn
This:

https://news.ycombinator.com/item?id=13739329

hajderr
Well, if you're not good at programming contests, would that entail being a contemplative/slow programmer?

I'd rather pick an algorithmist and teach him/her to reflect than a so-called "reflectionist" / "slow coder" and teach him/her how to solve algorithmic problems.

I'd be interested in knowing the guy's (catonmat) own reflections and experiences too.

twii
Depends also on what job I guess?
bluetwo
Overfitting?
williamsmj
The title of the post understates the claim in the link, which is that, "being good at programming competitions correlates negatively with being good on the job".
rquant
No, the statement is "being good at programming competitions correlates negatively with being good on the job, conditional on passing Google interviews", which is a completely different statement.

"Being good at programming competitions" is hugely positively correlated with "being good on the programming job" unconditionally.

sbierwagen
Do you have... any kind of evidence to support that claim?
Klockan
The overwhelming majority of people who can't code would fail horribly both at jobs and competitions, this effect will drown out everything else no matter how the distribution looks for the few percent who can code.
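
That base-rate effect is easy to sketch (invented proportions): even if the two skills are unrelated among people who can code, the non-coders anchor both variables at the bottom and force a strong positive unconditional correlation.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 1_000_000
    can_code = rng.random(n) < 0.1   # assume 10% of people can code at all

    # Among coders the two skills are independent; non-coders bottom out on both.
    contest = np.where(can_code, rng.normal(5, 1, n), rng.normal(0, 0.5, n))
    job = np.where(can_code, rng.normal(5, 1, n), rng.normal(0, 0.5, n))

    print(np.corrcoef(contest, job)[0, 1])                       # strongly positive
    print(np.corrcoef(contest[can_code], job[can_code])[0, 1])   # ~0 among coders
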
williamsmj
I tweeted this link this morning (apparently the first time it's been posted to Twitter, so perhaps how it ended up here on HN).

I did so in response to the CTO of Kaggle tweeting "Super confused why we still use resumes. Get 100x the signal from domain profiles (GitHub, StackOverflow, Kaggle, etc.) & real work samples", which ... where to start [https://twitter.com/benhamner/status/883137638084956160].

adavidoaiei
There are two types of business software. The first is business-to-business: software made to run internally in a company, which the business runs on; for this kind of software you generally use enterprise frameworks. The second is business-to-consumer, where anyone can make an account and use the application; some software-as-a-service products are like that, and there are other types of business-to-consumer web applications too.

When you allow anyone to make an account and use your application, you sometimes can't rely on enterprise technology, and it's better to write everything from scratch or to customize open source solutions.

What they test in algorithm contests: 1) that the algorithm is correct against various test sets; 2) the performance of the algorithm, i.e. running time in milliseconds, for which you need to know how to measure an algorithm's time complexity in big-O notation, e.g. O(N); 3) memory consumed, for which you need to know how much memory your variables allocate (a variable of type byte consumes 8 bits, for example).

I think programmers who have won prizes in algorithm contests are suited to business-to-consumer applications, because that's where they'll find a challenge; this kind of programmer would be bored digging through enterprise frameworks.

pitt1980
I'm not able to watch the video on my current computer,

but it's actually really typical that when something becomes an advantage for being selected into a certain pool,

the success of those in the pool after selection will negatively correlate with that thing.

---------

the really obvious example of this is the hockey birthday thing from Malcolm Gladwell's Outliers

people with the earlier birthdays were more likely to make it past each selection stage in becoming an NHL player

but those with later birthdays who were selected in spite of them were typically more successful after the selection.

---------

The authors contend that the strategy might actually work against a team's success because they found that players born later in the year and drafted later actually had more productive hockey careers.

Deaner said the study showed that men drafted in the second half of the year were about twice as likely to have successful careers in the NHL (reaching benchmarks like 400 games played or 200 points scored) than those born earlier in the year.

"If the team wasn't making this mistake, they probably would have been more successful," he said. "The guys born in the first part of the year are much more likely to be busts."

https://www.nhl.com/news/study-suggests-nhl-has-bias-in-favo...

lordnacho
> the success of the those in the pool after the selection will negatively correlate with that thing

Could be relevant: https://en.wikipedia.org/wiki/Berkson%27s_paradox

pitt1980
Yeah, exactly
MarkMc
That's an excellent point! I can think of another example: In the movie "Hidden Figures" the black, female engineers at NASA are much better than their white, male counterparts simply because it was harder for them to get in. Perhaps the opposite is true today of engineering students at colleges with affirmative action?
paulddraper
It's pretty common for tech conferences to give preference for talks to tech minorities.

Whenever I hear a conference has done this, I subconsciously deprioritize attending a talk given by a minority, since they got in with a lower bar...

Terrible I know, but I'm not sure it is illogical.

kenferry
This suggests that you think there is no perception bias that affirmative action is trying to counteract. You might be interested to know that when people try to correct for bias, they typically undercorrect.

Also, for people coming from more challenging circumstances, getting the same results is literally more impressive.

cma
Alumni preference is at least as big or bigger a factor in admissions than affirmative action. Many of the schools were segregated not so long ago, so you can guess how that affects the alumni pool.
muh_gradle
I am familiar with Outliers. I read the book. But I fail to see how this is relevant in any way.
scythe
It's sort of like Simpson's paradox. It's relevant because the statistical "paradox" (contradiction of intuition) described in Outliers is similar to the one in the video.
kjksf
He probably meant that people who do programming competitions are more likely to pass Google interview (they'll be better at doing algorithmic questions quickly) but are not necessarily better suited to do the actual job.

Just like hockey players born earlier in the year were more likely to be drafted.

balls187
Google's data shows that their interview process produces a low number of false positives, at the risk of producing a high number of false negatives.

That is, despite passing on otherwise talented people, those people who successfully pass Google interviews go on to be successful at Google.

edit to add:

Couldn't find that original article, this article goes on to speak about success predictors: https://www.wired.com/2015/04/hire-like-google/
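
The false-positive/false-negative asymmetry described above is just threshold-setting on a noisy signal. A sketch with invented numbers, showing that a stricter bar trades away false positives for many more false negatives:

    import numpy as np

    rng = np.random.default_rng(4)
    n = 1_000_000

    good = rng.random(n) < 0.3          # 30% would succeed on the job (made up)
    score = good + rng.normal(size=n)   # noisy interview signal

    for bar in (0.5, 1.5):              # modest vs. strict hiring bar
        hired = score > bar
        fp = (hired & ~good).sum() / hired.sum()  # bad hires among the hired
        fn = (good & ~hired).sum() / good.sum()   # good people turned away
        print(bar, round(fp, 3), round(fn, 3))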

dchichkov
Was it a proper double-blind study, substituting rnd() for the interview result to make hiring decisions? And did it establish the correlation a few years down the line?

Because what you've stated ('data shows' ... 'successfully pass go on to be successful') sounds like cargo cult science or pseudoscience to me.

balls187
Google generally isn't known for pseudoscience.
encoderer
Even Google has a PR department dumbing things down.
dchichkov
You've stated 'data shows'. So my question was: was it a proper double-blind study?

Because it is definitely possible to do it properly. Substitute rnd() for the results (or partial results) of the interview, use that for the hiring decision for a subset of candidates, and keep this information confidential. Then establish, a few years down the line, whether parts of your interview process perform any better than randomness.

It's possible to do. Only I don't think this was done. And if it was not done, and the method was some hand-wavy 'data shows' - it would fall under the definition of pseudoscience.

cwyers
You're calling most scientific journals pseudoscience -- even the hard sciences, like astrophysics. Nobody's doing controlled studies of supernovas; they're drawing inferences from observational data. I don't think your definition of science is sustainable.
thedufer
Do you really need to do the double-blind study, though? The median person cannot program at all, so using rnd() couldn't possibly have better than a 50% success rate (certainly lower; 50% is just a round number that I'm sure is larger than the fraction of candidates who can program). Is Google doing worse than that? I kinda doubt it.
sjg007
Yes. You can do it at each stage as well. And we are not talking about hiring a general employee but a CS or programmer. So employ rnd() after a resume screen. Google has already found that GPA and brain teasers had no positive effect. This is similar and it would be an interesting experiment.
register
Definitely. I reached the final stage of a Google interview, which I failed, and I can confirm that of the 5 interviews, three were based on puzzles that I later discovered in books for coding competitions. On two I did a good job working out a solution myself, but I got the third completely wrong. Whoever prepares for these kinds of competitions has a huge advantage in these kinds of interviews.
Cofike
That's how I felt trying to find a job in the bay area. If I wanted to compete with the top talent then I needed to prepare for the interviews and practice those problems.
btilly
Your interpretation is too weak. It is not just "not necessarily better", it is the stronger "on average are probably worse".

The reason is that programming competitions give more of a boost to your odds of being selected than to how well you'll do on the job. So people who otherwise wouldn't have gotten in now will, and will not perform as well as the people that they displaced.

Which is what happened with hockey players. Being born at the right time of year put you in a bucket with people who were slightly younger than you, which improved your performance on the tests but didn't matter once you all grew up. So slightly worse players born at the right time of year displaced slightly better ones born at the wrong time of year, and on average the players who got through despite being born in the latter half of the year were actually better.

sjg007
The hypothesis is that the older kids are bigger and stronger but the younger kids are more skilled. And apparently this holds up all the way to the NHL, where the skill edge overcomes the age edge.
nilkn
To expand on this:

* Prior to officially selecting candidates based on performance in problems derived from programming competitions, candidates who excelled at programming competitions were likely to do well on the job.

* That correlation was observed on a wide scale by employers, so many companies -- Google chief among them -- started incorporating such questions into the official interviews.

* Candidates now observed the change in employer interviewing methods on a wide scale and adapted their preparation methods. This fundamentally changed the pool of people good at programming competition problems in such a way as to reduce the correlation between the original signal (good at algorithmic problems) and the goal (good at the job).

* Overall, widespread acknowledgment -- and all consequent changes in behavior -- of the original correlation between the signal and the goal significantly reduced the quality of the correlation.

warkdarrior
In other words, what gets measured gets managed.

So if you measure hiring candidates by their performance in programming competitions, everyone will manage their own skills towards doing better in competitions.

notduncansmith
Yes, but not just that. I think the bigger trend is that training oneself to excel at those types of problems meant (in addition to other things) one thing about you back then: that you were really into programming. Now that same behavior likely means that you want to get a nice job at one of the big tech companies, as a result of them publicly selecting for that. These are fuzzy indicators to begin with, but they're definitely different fuzz.
saghm
> I think the bigger trend is that training oneself to excel at those types of problems meant (in addition to other things) one thing about you back then: that you were really into programming. Now that same behavior likely means that you want to get a nice job at one of the big tech companies, as a result of them publicly selecting for that.

Thanks for the explanation! The idea being discussed didn't quite click for me until I read this

robrenaud
I think it's simpler than this. I don't think there is much outright gaming of the signal.

Programming interviews and programming competitions are very similar, much more similar than programming interviews and real world software engineering. When you are selecting top programming competition competitors, you are implicitly selecting people who will absolutely smash your (non-design) programming interview questions. This has little to do with their effectiveness as software engineers.

chongli
I favour the simpler explanation: Campbell's law. [0] If programming contests are favoured, then people will optimize for them. Simplify/generalize even further and you get Goodhart's law. [1]

[0] https://en.wikipedia.org/wiki/Campbell%27s_law

[1] https://en.wikipedia.org/wiki/Goodhart%27s_law

Being good at programming competitions correlates negatively with being good on the job - Peter Norvig

http://www.catonmat.net/blog/programming-competitions-work-p...

mining
This is relative to other people who had been hired at Google - there's probably still a positive correlation between those two variables amongst the general population.
icc97
Very interesting. Although I think he's speaking specifically about competition winners, not just being good at them.
boltzmannbrain
Oh cool, I was hesitant to say "positive correlation" without data. Thanks for the link!
minwcnt5
This has been circulated around HN and Reddit several times, and it's disappointing that someone of Norvig's stature would present the data in such a misleading way.

Here's a good explanation posted by "tedsanders" the last time this came up on HN:

""" All of these claims from Google that say competition performance hurts or that GPA doesn't matter are missing one huge thing: selection bias.

Google only sees the performance of the employees that it hires, not the performance of the employees that it doesn't hire. Because of this, the data they analyze is statistically biased: all data is conditioned on being employed by Google. So when Google says things like "GPA is not correlated with job performance" what you should hear is "Given that you were hired by Google, GPA is not correlated with job performance."

In general, when you have some thresholding selection, it will cause artificial negative correlations to show up. Here's a very simple example that I hope illustrates the point: Imagine a world where high school students take only two classes, English and Math, and they receive one of two grades, A or B. Now imagine a college that admits students with at least one A (AB, BA, or AA) and that rejects everyone without an A (BB). Now imagine that there is absolutely zero correlation between Math and English - performance on one is totally independent of the other. However, when the college looks at their data, they will nonetheless see a stark anticorrelation between Math and English grades (because everyone who has a B in one subject always has an A in the other subject, simply because all the BBs are missing from their dataset).

When Google says that programming competitions are negatively correlated with performance and GPA is uncorrelated with performance, what that likely means is that Google's hiring overvalues programming competitions and fairly values GPA. """
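
The toy example in that quote can be checked by direct enumeration (a Python sketch; the setup is exactly as quoted, with equal numbers of students at each grade combination):

    import itertools
    import numpy as np

    # Grades in two independent subjects, uniformly A (1) or B (0).
    students = list(itertools.product([0, 1], repeat=2))           # BB, BA, AB, AA
    admitted = [(m, e) for m, e in students if m == 1 or e == 1]   # drop BB

    math_g, eng_g = zip(*admitted)
    print(np.corrcoef(math_g, eng_g)[0, 1])   # -0.5 among admitted students

Independence in the full population becomes a correlation of -0.5 once the BB students are excluded from the dataset.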

I've also heard people involved in Google's Code Jam competition say that Norvig's study was done a long time ago, and no longer really applies.

jodooshi
I think what you said is true. But the main point implied here, which I didn't mention, is that the mindset and competence required are quite different between programming competitions and real work. After all, being good on the job depends more on reflection, going slowly, and getting things right. ;-)
senderista
https://en.wikipedia.org/wiki/Berkson%27s_paradox
I am not in a hiring position but Mr. Peter Norvig says that being good at programming competitions correlates negatively with being good on the job at Google.

http://www.catonmat.net/blog/programming-competitions-work-p...

jaruche
Very interesting! And also a great talk. But what I understood from it is that this inverse correlation appears only once you pass the Google hiring bar.

What about evaluating a candidate off the street?

That's true; according to Peter Norvig, "Being good at programming competitions correlates negatively with being good on the job": http://www.catonmat.net/blog/programming-competitions-work-p...
mzl
Norvig's claim can quite probably be attributed to selection bias. See http://erikbern.com/2015/04/07/norvigs-claim-that-programmin... for some nice graphs.
minwcnt5
You forgot to add the key qualifier "at Google". That says nothing about how being good at programming competitions correlates with being good on the job in general.

It's also kind of common sense that if you spend a lot of time working on something that's not really related to your industry job, you won't be as good at said industry job as if you had spent that time, say, writing open source machine learning software. People just assume that programming == software engineering, but programming competitions are a lot more similar to math competitions than they are to real-world jobs.
