HN Books @HNBooksMonth

The best books of Hacker News.

Hacker News Comments on
Introduction to Algorithms, 3rd Edition (The MIT Press)

Thomas H. Cormen · 51 HN comments
HN Books has aggregated all Hacker News stories and comments that mention "Introduction to Algorithms, 3rd Edition (The MIT Press)" by Thomas H. Cormen.
Amazon Summary
The latest edition of the essential text and professional reference, with substantial new material on such topics as vEB trees, multithreaded algorithms, dynamic programming, and edge-based flow.

Some books on algorithms are rigorous but incomplete; others cover masses of material but lack rigor. Introduction to Algorithms uniquely combines rigor and comprehensiveness. The book covers a broad range of algorithms in depth, yet makes their design and analysis accessible to all levels of readers. Each chapter is relatively self-contained and can be used as a unit of study. The algorithms are described in English and in a pseudocode designed to be readable by anyone who has done a little programming. The explanations have been kept elementary without sacrificing depth of coverage or mathematical rigor.

The first edition became a widely used text in universities worldwide as well as the standard reference for professionals. The second edition featured new chapters on the role of algorithms, probabilistic analysis and randomized algorithms, and linear programming. The third edition has been revised and updated throughout. It includes two completely new chapters, on van Emde Boas trees and multithreaded algorithms, substantial additions to the chapter on recurrences (now called “Divide-and-Conquer”), and an appendix on matrices. It features improved treatment of dynamic programming and greedy algorithms and a new notion of edge-based flow in the material on flow networks. Many exercises and problems have been added for this edition. The international paperback edition is no longer available; the hardcover is available worldwide.
HN Books Rankings
  • Ranked #3 all time

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this book.
Early in my career, I interviewed at Google. One of the interviewers asked me to recite the algorithm for constructing a convex hull. Since I hadn't done anything related to convex hulls since my algorithms class as a sophomore in college (several years earlier), I couldn't remember all the details. At some point, I said, "I know where in CLRS this is." The interviewer didn't like this, so he said "Pretend you're an engineer at Google. You need to have a convex hull. What do you do?" And I said probably the most correct thing I've said in any interview: "I would go look it up." He really didn't like this answer, but we struggled together for an hour, with him frustrated that I couldn't remember the details of an algorithm I had last seen 6 years earlier and couldn't recite in 60 minutes what took our professor 180 minutes of lectures to cover, and me frustrated that it would have taken me 30 seconds to look up.

I did not get the job at Google.

As a more senior engineer at my current company, I do make sure to use what leverage I have so that other interviewers don't ask pointless questions like this.

I had a very similar experience interviewing with Google. I was asked to do a task that eventually boiled down to a topological sort, and I assumed the question consisted of recognizing that the answer was a topological sort and moving on, since it was over the phone.

However, that was not the case. The interviewer wanted me to code it all out over Google Docs, but I didn't remember the exact algorithm so I basically had to re-figure it out on the fly, which took most of the interview (I even similarly mention "in any real situation I would just look this up", but that didn't help). At the end, I had a bunch of pseudo-C++-code that should do it correctly.

I thought I was done, then the interviewer said she would go copy my code and compile it after the interview to see if I was right, which blew my mind. It was never mentioned previously that the code would actually be compiled and run, and with no syntax highlighting or ability to compile and test the code myself there was zero chance it was ever going to work.

I never heard back, so I'm assuming my code failed and they left it at that. Anyway, I'm much happier now than I think I would have been at Google.
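For reference, the algorithm being re-derived on the fly above can be sketched in a few lines. This is a sketch of Kahn's BFS formulation of topological sort, assuming the graph is given as a dict mapping each node to a list of its successors:

```python
from collections import deque

def topological_sort(graph):
    """Kahn's algorithm: graph maps each node to a list of successors."""
    # Count incoming edges for every node.
    indegree = {node: 0 for node in graph}
    for successors in graph.values():
        for s in successors:
            indegree[s] += 1
    # Start from nodes with no prerequisites.
    queue = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for s in graph[node]:
            indegree[s] -= 1
            if indegree[s] == 0:
                queue.append(s)
    # If some nodes were never emitted, the graph has a cycle.
    if len(order) != len(graph):
        raise ValueError("graph has a cycle")
    return order
```

The DFS-with-finish-times formulation works just as well; this variant is usually easier to reconstruct under interview pressure because the invariant ("only emit nodes whose prerequisites are done") is explicit.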

A passage from one of Jon Bentley's Programming Pearls:

As soon as we settled on the problem to be solved, I ran to my nearest copy of Knuth's _Seminumerical Algorithms_ (having copies of Knuth's three volumes both at home and at work has been well worth the investment). Because I had studied the book carefully a decade earlier, I vaguely recalled that it contained several algorithms for problems like this. After spending a few minutes considering several possible designs that we'll study shortly, I realized that Algorithm S in Knuth's Section 3.4.2 was the ideal solution to my problem.

If Bentley needed to go look up an algorithm that he vaguely recalled studying in the past, just how awful of programmers are the rest of us that look things up, really?

"Never memorize what you can look up in books" is often attributed to Albert Einstein; the long version is "[I do not] carry such information in my mind since it is readily available in books. ...The value of a college education is not the learning of many facts but the training of the mind to think."

The entire concept of asking someone to write code in Google Docs is just insane to me. No one would ever do that on a job, because it's remarkably awful and difficult, and the editor will fight against you every step of the way (auto-capitalization, just to name one thing).

And yet somehow interviewers think that this will give them a good picture of how you'd do work on the job. Baffling.

We ask candidates to do this. However, our questions are far more reasonable than Google’s, and the reason we use Google Docs is specifically because we don’t want candidates worrying about writing fully correct, runnable code during that stage of the interview process. We never try to run any code written in this stage and don’t dock candidates for small syntax mistakes, etc.

That said, it’d be nice to find something more ergonomic and still retain the expectation that the candidate need only write pseudocode.

Yep, it was a struggle even when I thought I was supposed to be writing pseudocode. I've had some really great interviews using coderpad and similar software (including the company I'm at now), and it's such a better representation of how the interviewee actually codes.

I don't get how my interviewer (an engineer) didn't see the problem with it. I know that Google wants to use their own products, but that can't possibly be the norm for other Google interviews.

They're doing it so that they can see your edits live, "proving" that you're not cheating in some way.

But yes, this is asinine. Sharing your desktop through a video conferencing program and using the IDE of your choice would be a far more realistic test, but Google likes to put up hoops to jump through that are smaller than your body.

Why can't you contort yourself like an octopus!? Fail!

Nah, it's because this is the process for phone, not video, interviews, which are somewhat more accessible. And you need the ability to record the code at the end anyway (which docs provides and a video doesn't).
Yeah, Google's interviews are whack. I've been rejected twice -- I think because of flubbing an algorithm question -- and after that I designed a new algorithm for my work that's getting published in OSDI this year, I think.

Their loss.

I had the opposite experience in a Google interview.

I just said, that can be solved with a topological sort, and then we moved on.

But I failed with another interviewer. He kept asking how to prevent hashing from producing collisions. The answer was universal hashing, but I had forgotten about it.

Typical. Universal hashing as a concept is only interesting to cryptographers and people who are completely obsessed with obscure hashmap trivia (i.e. FAANG interviewers).
Man, one time I had an interview at a startup. They asked me some fairly trivial questions and I wrote some pseudocode which they were fine with. Then in the last 5 minutes they were like, "well, let's get this ready to run in a browser." I was shocked and stuttered out something like "wait, this was pseudocode, I'm definitely not going to be able to make this run in the last 5 minutes," and they said something like "oh, OK" and wrapped up.

That was the entire phone screen, and they failed me. This still burns me years later because it was such an obvious communication failure on their part - if you're one of the rare interviews that actually require me to write working code, you better be darn sure to mention that upfront, rather than 5 minutes before the end of the hour! - and yet I was the one who failed.

I made the mistake of answering a Google recruiter without a CS degree and "only" an EE/CompE and 10 years of software experience. The junior interviewer would not proceed past some algorithm question about boxes of pennies by weight, as he wanted a specific algorithm to be used in my solution, which I clearly didn't know. The recruiter then ghosted me, so F off Google for wasting my limited time like this.
Your story reminds me of when I interviewed at Google and did great on four interviews but the fifth one sunk me. I was asked an inheritance question which didn't make much sense (coming from a heavy Perl and Ruby background) but probably would have made more sense if I were a heavy C++ developer. I explained how the question wasn't really an issue in Ruby but the interviewer didn't like that and angrily said "you should have learned this in your compilers class at Georgia Tech!"

I'm glad that the interviewer read my resume closely to learn my alma mater, but not closely enough to realize I had a biology degree from it.

I tried again a year later and got luckier with my allocation of interviewers. I never did end up using any algorithms there.

For a company whose main claim-to-fame is indexing the web (rather than mirroring it in their databases), Google sure doesn't seem to understand that CS learning is done by mentally indexing textbooks (rather than mirroring them in your brain.)
That may be true. But then why hire someone with just look-up skills before hiring someone who really tries, and enjoys the challenge?

I'm interviewing engineers frequently, and although I agree that the question asked by GP is maybe not the best, it still gives a signal about whether someone is willing to power through a problem with minimal guidance and/or ambiguous constraints. That's something I'm willing to find out at the peril of pissing off a few candidates.

> why hire someone with just look-up skills before hiring someone who really tries, and enjoys the challenge

This attitude will keep you from hiring someone who will just "do the right thing," which is to look up stuff that can be looked up, and also persevere when an off-the-shelf solution won't be sufficient. Plenty of engineers will spend time trying to reinvent the wheel when it is totally unnecessary.

True, but in my experience it's easier to teach resourcefulness after hiring than to inject motivation for solving problems somehow.
At some point we had a brilliant junior dev who had a task to add some logic to some ES plugins in Java. When the code review went up, it was all a single Java 8 stream statement. He went out of his way to convert everything to a stream because he enjoyed the challenge I guess. It was 3 pages of nested declarations and inscrutable.

He left eventually, but that person was hard to work with because you had to beat back this kind of shenanigans every step of the way.

I don't really see a hard distinction between meat memory and silicon memory, so I've never bothered to memorize stuff that I could simply look up. To me, it seems almost ridiculously inefficient to carry around large amounts of in-depth data in your brain, which is limited, costs constant energy to maintain, and isn't even particularly good when it comes to fidelity of recollection and searchability.

It's much more efficient to remember the general shape of problems, to remember where you can find further information (books, chapters, etc). Remembering that something is a well-studied problem, and where to find a solution, is straight-up more efficient than remembering that solution.

Dijkstra once remarked that trying to make a computer act like a human brain was far less interesting than seeing what the limits are of a computer as something that is unlike a human brain. It seems the reverse observation is also true - human brains, once freed from rote memorization and computation (that's literally what programming does to us), can tackle much larger, more interesting, and more complex problems.

I work with novel algorithms all day (I'm writing a decompiler.) The more I learn, the more I have confidence in the fact that I have absolutely no hope of inventing an algorithm that doesn't already exist; and that I shouldn't waste my time trying, when instead I could be spending that time digging through the nigh-infinite vault of potential solutions to my problem known as "the output of CS academia."

Software engineers aren't mathematicians. We don't invent algorithms or data structures. That's not our trade skill. And even actual mathematicians don't sit down to solve a real-world problem that no maths they're aware of directly solves—and then produce novel maths, and then use them to solve the problem—all in the very same year, let alone the same hour.

What I can do, as a software engineer, is to take algorithms that exist, and repurpose them or glue them together in novel ways that the designers of those algorithms never thought of, to do something new. Bitcoin, for one example, is a feat of pure software-engineering: it takes four or five existing well-known algorithms, and puts them together in a novel combination to solve a problem. I could maybe invent Bitcoin. But I can't invent Floyd-Warshall, if I don't know about it.

Google doesn't employ computer scientists (i.e. mathematicians who invent algorithms.) Alphabet does, under DeepMind and Waymo; but Google itself only employs—and only needs—software engineers.

Asking a software engineer, under pressure, to derive a novel algorithm, is a bit like asking a chemist, under pressure, to derive a novel class of chemical reaction; or a materials scientist, under pressure, to derive a novel material.

That's not the type of pressure that a real-world person in these jobs will ever be under. And moreover, it's precisely the type of pressure that people with experience in these positions have learned to mentally associate with "going in the wrong direction, toward wasted effort", flinch away from, and to turn around and study the literature instead.

It's ironic that Google expects its employees to all have university degrees. If there's one skill people who've gone to university are guaranteed to know (and to have built up a reliance on), it's consulting the literature.

I think the point is that no one is being asked to derive a novel algorithm. It’s taken for granted that a person with enough experience will have an understanding of broad categories of algorithms and should be able to reason about the small changes to those algorithms that would be necessary for practical application.
Can you offhand write C++ code for a 2-3 Finger Tree, including the changes necessary to make it efficient in a strict language?
I think of it this way (very simplified): there are candidates who like challenges and don't mind imperfect interviewing situations, and I've had my fair share of people who dislike them and would fare better if we just let them work on a real-world problem. We did both for each candidate for some time, but the latter was much more costly and didn't provide as good a signal in the end.

Open-mindedness goes a long way. Also typically our interview questions are set up in a way where it's expected you won't finish but try, which reveals a lot about personality and how problems are attacked.

I don’t think the point is being able to use a specific language or to know a specific algorithm, unless the role requires it. If the goal is to test a person’s knowledge and understanding, then the actual implementation is probably less important than their ability to explain and reason about the problem. OTOH, if the goal is to find out whether the person is highly skilled in a specific language and has a lot of experience with an algorithmic domain, then what you’re asking isn’t unreasonable.
If I had any wit, I would have responded to the convex hull question: "I would look this up on the Internet, assuming there is a search engine good enough to find the algorithm."
Another anecdote of an incompetent interviewer blindly looking for a certain answer:

I once had an interview where, for a pretty long question, one trivial step was to check that one set was a subset of another set. Neither set had any special preconditions, just two plain unordered Java HashSets. Not thinking twice about it, I wrote a simple for loop that checked if every element in the smaller set was present in the bigger set.

When I finished the question, the interviewer started questioning me about the runtime of the for loop. He started hinting that it could be more efficient, which confused me, since it seemed impossible to check every element of a set in faster than O(n) time. I also didn't see how it was useful to think so carefully about the runtime of such a trivial thing, seeing as the "story" of the problem indicated that the sets would never be particularly large and the task wasn't one that would be sensitive to a couple microseconds difference in runtime.

We spent about 20 minutes stuck on this, with me awkwardly repeating that I didn't see how it was possible to be faster and him telling me to just think harder and look at the problem "mathematically": "forget about the code, just think, in math, if you have a set A and a set B, how do you efficiently find if one is a subset of the other?". Eventually we ran out of time for the interview. As we wrapped up, he revealed to me the elusive answer: "Since you're checking if it's a subset, you should use the built-in isSubset() method that Java sets have". Of course, he hadn't conveyed to me at all that he was looking for a specific built-in method, so I thought there was some secret algorithm that I would have to write to make it go faster. I didn't mention that though, and instead replied that even if it were a built-in method I didn't see how it could be faster than O(n) in its implementation. He didn't have a response, and just stammered something like "well it's built in and it's actually the most efficient way" and the interview ended awkwardly. Anyways, I had my doubts, so when I got home I checked.

There is no isSubset() in Java sets.

There is a containsAll(). It's implemented as a while loop that runs in O(n) time, making it no more efficient than writing your own loop.

Naturally, the company ghosted me too.
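For the record, the loop in question is all any implementation can do on plain hash sets. A Python sketch of the same check, with the same O(n) expected bound as Java's containsAll:

```python
def is_subset(small, big):
    # O(len(small)) expected time: each membership test against a
    # hash-based set is O(1) on average. Without extra structure on
    # the inputs, no algorithm can avoid examining every element of
    # the candidate subset.
    return all(item in big for item in small)
```

Python's built-in `set.issubset` does essentially the same element-by-element scan; the built-in buys convenience, not a better asymptotic bound.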

Three years ago I interviewed in house (after several hours of phone screens) at an SF tech company that has since gone public. The position was for a senior software engineer doing mostly backend Rails stuff.

The biggest portion of the in-house interview was some fairly simple algorithm puzzle involving solving some word game given some rules and a list of valid English words. That section of the interview ended with me explaining how hash table lookups are constant time and the interviewer insisting that they are linear time.

To be clear, there was no “gotcha” about collision resolution or anything subtle like that. My claim just came up as an obvious step in my explanation of the running time of my proposed solution and the interviewer jumped on it.

I thought I was quite cordial, but I didn’t back down, and the interviewer seemed quite aggressive. The other interviewer in the room at the time seemed very uncomfortable with the fact that I would question something so basic, but didn’t intervene or express any “opinion” on the “debate.”

I was rejected due to insufficient experience. :)

I interviewed at Google quite a few years ago and had a similar experience, but spread over multiple interviews. I had 5 interviews with slowly harder and harder questions. By the fifth interview I got past the first question pretty fast, so he moved on to a second, harder one that my answer did not seem to impress. After all these interviews they just never called me back. The whole process was senseless. It was as if they wanted to find the point at which I would fail so they could stop. I'll probably hate Google forever after that. I am sure they have great people, but I am always very interested when I meet someone who works there so I can ask them about what they do, and they always seem kinda average: good, not amazing.
My experience at Google was that the secret sauce wasn't that it was full of geniuses. It's that everyone was _reliably_ competent and very smart. It's hard to overstate how valuable that certainty is, and how impressive it is to manage that at scale. It enables an entirely new world of employer-employee relationships, including their famous transparency and policies that could (and would) be abused by people too dumb to understand coordination problems. The cost of this is of course going to be plenty of false negatives. (Bear in mind they were a fifth of their current size when I was there, so I don't know how much this applies anymore)
I'll play devil's advocate and say I don't dislike coding interviews. Most of the time they are done completely wrong however.

They are extremely useful for filtering out candidates who simply cannot program, and who won't be able to no matter how much mentoring time you 'invest' in them. I've heard about interviews for senior engineers that were totally derailed by a simple FizzBuzz. I like wordcount (the wc *nix command) as a warmup/screening question simply because one of two things happens: the candidate comes up with an algorithm for it and test cases, OR starts counting whitespace and derails the implementation with a cascade of if-elses for every new test case that breaks the previous implementation (I've seen if-elses checking for exactly the n cases I suggested...).
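For what it's worth, the answer that avoids the whitespace if-else cascade is a one-flag state machine: a word starts at every non-space character that follows whitespace (or the start of input). A sketch, assuming wc-style line/word/character counts over plain text:

```python
def wc(text):
    """Count lines, words, and characters, in the spirit of *nix wc."""
    lines = text.count("\n")
    chars = len(text)
    words = 0
    in_word = False
    for ch in text:
        if ch.isspace():
            in_word = False
        elif not in_word:
            # A new word begins at each non-space char that follows
            # whitespace (or the start of the input).
            words += 1
            in_word = True
    return lines, words, chars
```

Runs of spaces, leading/trailing whitespace, and empty input all fall out of the single `in_word` flag, with no special cases.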

But honestly I feel there is so much cargo-culting around coding interviews. Especially if you ask for compilable code of well-known (as described by a textbook) algorithms, you ultimately assess rote memorization skills and not engineering.

Convex hull doesn't sound so bad to be honest. If I were using this question I would expect someone to be able to come up with the gift wrapping algorithm with maybe a little bit of help. What I, as an interviewer, would really want to see is whether the candidate can come up with test cases and a way to test whether their solution is actually a convex hull, and then try to break down the problem and come up with an algorithm. I sure wouldn't expect to be able to type what's on the board and have it compile right away (it's a board, not an IDE!), but I would expect the candidate to be able to walk me through the code and run it line by line through some test cases.

I don't dislike coding interviews either. But if I were asked about a Convex Hull, my response would have to be, "What is a convex hull?" otherwise I'd have to guess. I have a computer science degree, did I miss something? Is that common knowledge? Right now I have the power to look it up, but it's a little strange to me that I could be asked about that in an interview and my pass/fail would potentially depend on it (granted when I am the interviewer, my pass/fail rarely depends on the response to the coding questions).

So the next question is, will interviewers explain the problem if you're unfamiliar and not "fail" you if you can explain how you'd approach a solution?

I would never fail someone for asking questions.

I liked the convex hull problem because it's easy to sketch in 2D and explain in case someone doesn't know what it is.

That was a terrible interview experience. Of course anyone would look it up. I used to work at Google and interviewed people as part of my job as a regular software engineer. I occasionally heard about these kinds of stories. Google didn't give much feedback to individual interviewers; no one told them whether their questions were terrible or great. I also saw better interviews.
Well, I probably last used it 10 years ago and still remember it.

All you need to remember is that you sort points by angle from a fixed point on the convex hull, and you can easily work out the rest of the algorithm as well as the proof of correctness.
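That description is essentially Graham's scan. For concreteness, here is a sketch of a close cousin, Andrew's monotone chain, which sorts by coordinates instead of angle and keeps only counter-clockwise turns:

```python
def convex_hull(points):
    """Monotone-chain convex hull for 2D points (tuples).
    Returns hull vertices in counter-clockwise order."""
    points = sorted(set(points))
    if len(points) <= 2:
        return points

    def cross(o, a, b):
        # Positive when the turn o -> a -> b is counter-clockwise.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half_hull(pts):
        hull = []
        for p in pts:
            # Pop while the last two kept points and p make a
            # clockwise (or straight) turn.
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull

    lower = half_hull(points)
    upper = half_hull(reversed(points))
    # Drop each chain's final point: it is the other chain's start.
    return lower[:-1] + upper[:-1]
```

The stack-based pruning (`hull.pop()` on a non-left turn) is the part people forget; the sort, whether by angle or by coordinate, is the part everyone remembers.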

Not knowing it is a relevant signal that you did not seriously attempt to compete in computer science competitions during high school and college and did not otherwise have a burning desire to learn algorithms and data structures nor a specific interest in computational geometry. Whether it's in Google's interest to select for that of course is debatable.

Yeah, I remembered the sort by angle part and the broad strokes after that, but there were some details I forgot so I missed some corner cases.
> Whether it's in Google's interest to select for that of course is debatable.

I'd argue it does not. Computational geometry is a niche. It's not even a niche that's particularly relevant to most of Google's development operations.

In modern software development, knowing the detail of specific algorithms off the top of your head and being able to implement them unaided is, at best, a parlor trick. I doubt very much that it even correlates to one's effectiveness as a developer. Even for development tasks which involve algorithmic work -- which many don't! -- knowing that various algorithms exist, and what they're used for, is much more valuable than having memorized the details of how those algorithms are implemented.

The idea is that you might need to invent a new algorithm where a technique from an existing algorithm is used in a modified way or in a new setting, and then knowing the actual technique can be essential to find an inspiration.

E.g. if you know that convex hulls can be computed by sorting by angle and sweeping, then you might come up with the idea of sorting by angle and sweeping in problems that are unrelated to convex hulls (e.g. determining what is visible from a given point in an environment with obstacles).

In general, if you know a lot of those techniques and heuristics, you will be much more effective at problem solving in domains where these techniques apply.

It is relatively rare for this to be a big factor in routine software development, so that's probably something one should filter for only if interviewing for roles where you might need to design novel algorithms (which Google probably has more than the average company).

Your entire premise is based upon the fact that this wouldn't be discoverable for anyone that didn't already know the algorithm.
One day I will understand why seemingly half of all CS papers seem to be concerned with convex hulls and Voronoi decomposition. What is so fascinating about these two topics as to warrant hundreds, if not thousands of papers? I've never heard of anyone using them anywhere ever for any purpose in industry, yet these topics are a focus of concentrated intellectual study as if they were the cure for cancer.
An entire industry – geospatial – and a variety of related industries depend on high-performance CH, Voronoi, and line simplification algorithms, among others. Not to mention their importance in medical imaging applications and tomography (so, in a sense, they directly contribute to healthcare improvements). This should be self-evident, but I can provide further direct evidence if need be; I've implemented QuickHull, Visvalingam-Whyatt, etc. from the papers, and the resulting libraries see enthusiastic use in a variety of sometimes surprising fields. They get lots of attention because the problem domain is well understood and often resistant to low-hanging optimisation efforts (computational geometry algorithms tend to be difficult to optimise).
> with him frustrated that I couldn't remember the details of an algorithm I last had seen 6 years earlier and couldn't recite in 60 minutes what took our professor 180 minutes of lectures to cover, and me frustrated that would have taken me 30 seconds to look up.

Sounds like a very effective way to filter out people who haven't taken algorithms courses recently and/or don't spend hours every week on algorithm puzzle contests or solving algorithm puzzles for "fun." They could probably save a lot of time by asking your graduation year and looking at your hackerrank score.

“I would probably look it up” is the correct answer unless you’re interviewing for a very specialized role. I had a similar experience interviewing at google (and I regret not walking out), and a much more pronounced experience at MS (where I did walk out and don’t regret it).
“Never memorize something that you can look up.” - Albert Einstein

Exactly my experience, twice, over phone interviews with them. After the second one I actually got some kind of survey question; my answer was to never bother me again with stupid emails about how great my CV looks and how I would be the right person they are looking for, only to then waste a couple of hours of my life with such exercises.

And for what? Anyone who works with Android has to wonder where those algorithm magicians land at Google.

Ok, tell me what's wrong with this algorithm:

Given N points, take the first one, sort all the others by their angle to the first, and return the cycle of N-1 points as your hull.

Extending to more than 2 dimensions left as an exercise for the reader.
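(Spoiler sketch, in case the flaw isn't obvious: the angular sort alone never discards anything, so interior points survive into the output, and the chosen anchor may not even lie on the hull. A tiny counterexample, with a hypothetical `fan_sort_hull` implementing the quoted algorithm literally:)

```python
import math

def fan_sort_hull(points):
    """The flawed algorithm as stated: anchor on the first point and
    sort the remaining points by angle around it, with no pruning."""
    anchor = points[0]
    rest = sorted(points[1:],
                  key=lambda p: math.atan2(p[1] - anchor[1],
                                           p[0] - anchor[0]))
    return [anchor] + rest

# A square plus a point strictly inside it.
pts = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]
hull = fan_sort_hull(pts)
# (1, 1) is interior to the square, yet it appears in the output,
# so the result cannot be the convex hull. Graham scan fixes this
# with its stack pass that pops non-left turns.
```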

I work at Google and have performed plenty of interviews over the years. We're specifically trained not to do this. We have "feedback feedback" in our interview systems specifically for this type of situation.
And yet... I've read countless anecdotes about this kind of behavior, over multiple decades.
Your surprise comes from the fact that you're modeling an institution as a single person when it in reality consists of a hundred thousand people. It's obviously less mental effort to reason about large entities this way, but it's obviously much less accurate.
That's funny because institutions can only be modeled as a statistical average of all of the humans who contribute to a given property of the institution.
Untrue (there's plenty of emergent behavior in large organizations), and ignoring that, irrelevant. An obvious trivial example is looking at the property of consistency in actions: an organization consisting of a hundred thousand people is going to be far less consistent in many sequences of actions than any given person would be.
Given that I gave my feedback about 10 years ago and it keeps happening, something is not right.
> I did not get the job at Google.

Not to troll, but have you considered that you perhaps were not qualified for the job? There are people that can recite details of convex hull construction algorithms.

But it didn’t sound like the job was for algorithm recital performances.
I had several similar experiences through and after college. One in particular was while I was still in school, they had asked a question relative to performance when using views in an SQL database, and my answer wasn't satisfactory to them. Despite the interview going well up until that point, you could feel the tone shift immediately after, like they were just asking the rest of their questions as a formality and wanted me out of there as quick as possible. Didn't get called for a follow up, of course. Found out later that place was actually preying on foreign-based students (who received funding from the university to cover their out-of-state tuition difference if they worked up to 20 hours / week max for a company associated with campus) by making them work 60-80 hour weeks along side school and threatening to fire them if they told anyone, which would cause them to lose their funding and likely have to drop out / move back to their home country.

I received offers from several others; some never called back. Ended up turning all those down though, as I had an interview with someone who helped step through the few mistakes I made, as a learning process, during the interview. One particular question I remember was basically building front-end components with Javascript, and my example had a loop with a write call in it. He mentioned he would compile the markup in the loop, then write it after it was finished, and asked why that would be better or worse. Had to think on it for a minute, but came up with the trade-off: more memory usage but fewer DOM writes, resulting in faster rendering. Worked that into the next coding example too, since it was similar but more complex. After the interview, when they called to schedule a follow up before I even made it home, I realized he was more concerned about whether I would ask for help, learn, and not make the same mistakes again. Worked there, learned a ton, became a senior dev just a few years out of college, and now I manage a suite of products for my current company. I don't think I would have gotten that kind of opportunity had I not had that first manager and job, so I strive to emulate that in my interviews.
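For concreteness, here's a minimal sketch (my own reconstruction, not the actual interview code) of the trade-off he was pointing at: accumulating the markup in memory inside the loop and writing it once, instead of writing to the DOM on every iteration.

```javascript
// Naive version: one write per item. With a real DOM sink, each call
// may trigger a separate reflow.
function renderEach(items, write) {
  for (const item of items) {
    write(`<li>${item}</li>`);
  }
}

// Batched version: build the markup string first (more memory), then
// perform a single write (fewer DOM writes, faster rendering).
function renderBatched(items, write) {
  let markup = "";
  for (const item of items) {
    markup += `<li>${item}</li>`;
  }
  write(markup);
}
```

In a browser, `write` might be something like `html => list.insertAdjacentHTML("beforeend", html)`; here it's a parameter so the trade-off is visible in isolation.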

One of my favorite questions is something like "describe a crisis that occurred and the steps you took to resolve it," and I give an example from one of my many stories of pushing bad code to production, having a server crash, etc, etc. I love the question because it involves no coding, no memorization, no regurgitating of info. Instead, it revolves around the person being a hero, which in my experience usually gets them relaxed, comfortable, and talking. You also learn a lot about their thought process: How did they first notice the issue? What was their process for triaging it? How did they debug it? If something involved was out of their wheelhouse, did they go to someone more experienced for help, or did they search for similar problems online to try and narrow it down? Did they delegate tasks if the workload was too high? Once it was fixed, what procedure did it go through for testing / verification? Did they monitor the issue afterwards to make sure it didn't come back? Once the fix was deployed, did they search for similar issues affecting their other systems? Did it ever come up again in new code, and were they able to identify it before it was deployed?

Obviously not all of these will apply to everyone, but the open-endedness of the question means you can probe a bit here and there throughout their story and gain a ton of relevant insight. And most importantly, it answers the question of "when things get tough, can you keep a cool head and think critically?" I can teach code, data structures, algorithms, whatever, to anyone. But the difference between someone who is always trying to learn and grow, takes initiative, asks for help, etc, and someone who just wants to do the bare minimum is night and day.

A side benefit is that this usually takes up several minutes of an interview, so those minutes can't be spent on BS questions like the one you mentioned.

You suffered so that others would not have to.
Yeah, I interview (and have sat on HC) at Google, and interviewers who ask these types of questions really frustrate me.

If your question requires having previously memorized or being able to come up with some tricky algorithm on the fly in 45 minutes and code a solution using it, your question is probably bad.

I get why they ask them - they're easy to ask, they're easy to score, and when your question inevitably gets burned because someone posted it all over the internet in their blog post titled "How I got a job at Google!!" and then bumped it way up the list on hackerrank or leetcode or whatever (I forget the websites)... it's easier to come up with a new one, because you just pick a new esoteric algorithm and ask that.

And honestly, coming up with interview questions can take a while. When you have 5 or 6 very calibrated questions and all of them get burned and banned in a one quarter span, you now have to spend the better part of a whole workweek, plus dozens more interviews, coming up with new questions and recalibrating.

But, I just refuse to ask those types of questions anymore. They really don't give useful signal. The only three bits of signal you get are "does the candidate already know this algorithm (or are they a supergenius who just figured it out under time pressure)? can they communicate it to another engineer? can they write code?"

Importantly if the answer to the first question is no, then you get zero signal on the other two parts. That basically means as an interviewer you just failed your job.

What on earth is wrong with asking to see the interviewee's code? Skim over it looking for neatness, how they comment, what build procedure is there and quiz them about what you see: anything from language choice, to build reproducibility, from architecture to install. And of course quiz them on algos you see.

All of these 'tricky' exam-style questions don't show a thing about the person sitting in the interview room.

I've hired based on a good discussion alone, without code... And got a great engineer.

You lose out on candidates who work at companies that don't let them show you their current code, and have a home life that occupies their time outside of work.

So, you can ask to see candidate's existing code if you work at a small enough shop and/or don't care about missing out on candidates who don't have shareable code. But if you are trying to hire at a larger scale, it does not work.

Many great hiring schemes work until a certain scale and then they fall apart. Some hiring schemes also severely tilt your candidate pool towards a certain subset of the population. I'm sure whiteboard style interviewing also has similar issues.

In addition to the sibling's excellent "candidates who work at companies that don't let them show you their current code, and have a home life that occupies their time outside of work", here's another category of candidates you lose: people who code differently in their downtime than they do at work.

If you looked over my github, you would see most of my commits are ones that, at work, I wouldn't accept from anyone. The code is sloppy, there are no tests, it looks like someone was just trying to get something working as quickly as possible with no thought for maintainability or understandability. But, of course, that's exactly what I was doing! When I code in my spare time, I'm solving problems that I've run into in my spare time, and I approach it very differently from in my work life. The constraints are different, and so the solutions are different.

My favorite question to ask in software engineering interviews is one that I believe to be un-burnable.

> It's 2140 AD, New York is under water up to X feet high. Buildings have been retrofitted with <magical-ish material> to withstand the water. You are in charge of keeping your building dry. If water gets in and damages the foundation, a few thousand people die or become homeless.

> Design a system that ensures that doesn't happen.

How candidates approach this, how they think about redundancy, how they deal with additional constraints or extra scenarios thrown at them tells a lot about how they approach things. The question is not about software (explicitly) on purpose so that it gets people out of the coding mindset.

With more software-focused systems you always get candidates who start writing code and designing objects and classes and stuff. No, I want you to design a system. Stop writing code.

The reason this question is unburnable is that there's no right answer. You're being asked to show how you work through an abstract problem and design a solution. You aren't being asked for an answer.

So far everyone who did well on that question has turned out to be a great engineer. Whether they were fresh out of college (and learned a lot fast) or they were already experienced.

Seems reminiscent of the tales of early 2000s Google interviews, of the "estimate how many golf balls fit in an airplane" variety.

I'm not convinced this offers a useful selection criterion other than boosting your unconscious bias about who "seems smart", but on the other hand I'm also not convinced it's any worse than the standard modern Leetcode interview.

Funnily that old style of question is far closer to my day-to-day as an engineer than a leetcode algorithms question. Most of my job involves figuring out solutions to fuzzy problems based on unknown constraints, undiscovered requirements, and often unclear end-goals.

"How would you fill this airplane with golf balls?" is a fantastic question. If the candidate doesn't reply with "Why? What are you really trying to achieve?", they're gonna do poorly in modern software development.

Caveat: This does not apply to junior positions where you are expected to bang out code based on super fleshed out requirements and constraints.

I suppose it depends on how you grade the answers. Like, I have bad spatial awareness in terms of how big things like planes are. I genuinely don't really have an idea of how long a commercial airliner is, or how big a golf ball is. I feel like I'd do OK if I could get reasonable approximate values for things like the size of the plane, the balls, the seats, etc. If I also have to supply those values myself, the end result is likely to be off.

The end result being off is fine if I am being graded based on my thought process, but a disaster if I am being graded at having an idea of the size of airplanes before walking into the interview.

Like I said, the point of this question isn’t to seek your domain expertise, the point is to see how you use other people’s domain expertise to create a [software] system.
The way you ask it sure. I'm just not sure that is how everyone asks it, though maybe it is. I've never been asked something quite like that before but some comments I've seen around seem to imply some people want a close to accurate answer.
Commercial airliners vary in size more than an order of magnitude anyway. There are commercial aircraft that are shorter than an A380 is tall.
The fuselage of a 737 is the same diameter as the engine of a 777.
Why? Isn't the better question "how would you solve <the actual thing they are being hired to solve>"?

I hire people to track objects via computer vision. I ask them "hey, here's a system I want, what approaches would you take", and explain that this is a 3 year research project; of course they will be giving simplistic and wrong answers and there is knowledge asymmetry here, but I'll inform you as you go as to what works and what doesn't. "Optical flow". Okay, why? What's the general algorithm? Okay, so it turns out it doesn't work in our situation because X, Y, Z. Any ideas on what you'd look at next? And so on. Because that is exactly how my day-to-day conversations and work go. Bounce ideas off of co-workers, latch onto something that seems promising, explore, maybe it works, maybe it fails. If it fails, what does that tell you about what to try next? Along the way you can have them code a tiny piece of something they mentioned if you want to see some code. But basically you are seeing if they understand the domain you are working in (which is not golf balls on airplanes), if they have some (not comprehensive) knowledge of the domain, and if they understand that book solutions don't necessarily deal with the messiness of the real world and can adapt approaches appropriately (i.e. a 20 sec/frame algorithm is not going to cut it when I need 180 fps).

It bothers me that this is still somewhat unfair due to the vast information asymmetry, but I try to deal with that. And my own biases can creep in. Are they mostly using 1980s style image processing techniques? Do they understand Bayes? Do they throw ML at problems that aren't tractable that way? It's unlikely that their past experience & preferences reflect exactly what we need. What is important - can they adapt, or even better, communicate why my choices are wrong and theirs are right (I don't need a robot to grind out code reflecting my ideas, I need them to figure things out and solve things in a fairly scientific manner).

So those are the questions I try to ask. I have no evidence that I get it right, but I haven't been disappointed with the hires.

I think your approach is pretty optimal, but at larger companies the candidates might be interviewing for the company as a whole and not for a specific team.
Interesting you mention 80’s style image processing techniques. Please take care with these kinds of biases. Often older techniques are superior to ones that are merely fashionable today. I’ve also been around long enough to see the pendulum swing back and forth at least once in the AI realm, never mind tech in general.

Mind you, I’m not saying you’re wrong to discount people who fail to stay abreast in their field, just know that sometimes there is wisdom in ignoring popular trends, or revisiting old techniques that might benefit from a different landscape.

I don't think they were looking for the right answer; I think the goal of that type of question is to see how the candidate approaches it. It's especially valuable with college hires.

Some candidates will simply freeze if they don't know the answer. Some will try to estimate the dimensions and come up with an approximate answer. I think Google wanted to anticipate what a person would do when stuck by asking these types of questions.

Me: "I'm a software developer, so I would let a material science engineer figure it out." You: "You can't do that... All the material science engineers have drowned." Me: "That's not very realistic..." You: "Thank you for your time. Don't call us, we'll call you."
"Ok the material science engineer gave you the fancy material. It was applied. As you know, nothing is perfect and lasts forever. How do you ensure your building doesn't sink?"
I'd let the material science engineer figure that one out too. No way in hell would I risk my ignorance of building maintenance be the cause of thousands of deaths.
The materials engineer can give you a great material. They can't design a system to monitor said material for defects.
S3 bucket had public access enabled... we asked a software engineer to do something that they're not the expert in.
One thing I learned in getting my engineering degree is that it’s unethical to practice engineering in an area you don’t understand. No way would I ever try to apply my software engineering skills to a materials problem
I feel it's only unethical if there are "real" stakes on the line, e.g. where people can get hurt. I don't mind a mechanical engineer writing code for a company that makes the zillionth vapid webapp, nor do I mind a software engineer designing a mechanical watch.
Call uber boat. Have people evacuate. Problem solved. Building gets collected by the garbage collector.
This is an awful question for hiring software engineers. It’d be like asking a building engineer what quicksort is.
Use <magic material> to build a wall around the city? Have the people who maintained each individual building form a team that can keep an eye on sections in shifts? If we have enough magic material, build a double wall system for breaches?

I'm done. Give me my paycheck.

> Use <magic material> to build a wall around the city?

That sounds expensive. Do we need to do that? Does it solve the problem better? Does it maybe create a worse solution? How would you find out?

> Have the people who maintained each individual building form a team that can keep an eye on sections in shifts?

How would you make this less time intensive? Can you use automation?

> If we have enough magic material, build a double wall system for breaches?

Is there a cost-benefit analysis you can run here? How would you find out? Should we keep adding walls ad infinitum or does each additional wall have diminishing returns?

> I'm done.


I can't tell if you're asking these questions to elicit a longer discussion, or if you are doing so because you don't like the idea of a simple solution.
It would probably be cheaper - simply because the perimeter would be shorter than the sum of the perimeters of most of the buildings in the city/district. That is what cities did from at least the Neolithic until the moment cities became undefendable (because of cannons and airstrikes).

But that is probably not the point. In order to build a wall around the city, you need to have the power, or a consensus in society, to deliver that decision. And achieving that in reasonable time might be unrealistic for someone in charge of one building.

It’s unfortunate that this bizarre and arbitrary stab at analytical thinking assessment is still used to evaluate the skill of software developers.

Here’s another idea: have them write software. Not algorithm trivia, but actual software. Keep the scope small so there isn’t an onerous time commitment and have them explain their choices.

I would love to ask questions like that. I'm pretty sure hiring committee would not like it though lol.

I do worry about asking questions that give candidates extremely large advantages if they have certain backgrounds. For example someone coming from mech, civ, or petroleum engineering will get a huge leg up on that question.

It is also worth noting that the structure for interviewing at a company with 30-50k+ engineers is different than the structure you need for interviewing at a company with <<1k engineers.

IMHO, this is not particularly better than an algorithm question.

Why not just summarize the traits of the technical skills you expect from the candidate, lay them out, and come up with appropriate questions for each interview?

A general software engineering interview does not work. But more specific measures can improve the experience.

> Why not just summarize the traits of the technical skills you expect from the candidate, lay them out, and come up with appropriate questions for each interview?

That won't work; it's aimed at a different kind of skill.

The skill the GP is looking for is ability to solve a problem you have never seen before, for a problem that bears little resemblance to anything you've done before, by transferring your existing general problem solving skills. It's a test of your ability to solve new things, which is a capability the company finds useful.

This is a very useful skill, and you can learn to do it better, but it's not a "technical" skill as we usually mean it. However it is one of the things which might be associated with "great engineer".

Listing technical skills and testing them will not tell you if the candidate has developed the above capability.

> It's a test of your ability to solve new things, which is a capability the company finds useful.


I work with early-ish stage startups. There's always a library that solves any given known problem. We don't have the scale or nuance to need to reinvent the wheel.

Where I really need engineers to shine is in solving the parts that don't have a library because we've stumbled onto something new. Or at least something new to the team. Or the way we've cobbled libraries together creates something new.

The important work is the work you haven't done before. We're engineers not line cooks.

November should involve completely different work and a whole new set of problems from February. If you're still doing the same thing you did in February, something's gone wrong.

The GP isn't asking a very good question to identify that skill, is the point. Interviews in other technical industries don't ask these kinds of questions, because they're not really useful as a gauge of, well, anything useful.
That's... part of the plot of the novel "New York 2140". Was that intentional?
> Design a system that ensures that doesn't happen

Well, if we live in a world with magic, I would just use the magic material to make a machine that removes the water magically.

Why is water even encroaching in the first place if the material/barrier is magic? As far as I know, water doesn't have any anti-magic properties.
This is a terrible question. Problem solving ability doesn't exist in a vacuum. Our experiences give us resources to draw from to combine in new ways that allow us to solve novel problems. Asking a software engineer to solve a problem in a dissimilar domain is badly missing the point of screening software engineers.

Sure, you may say that everyone who does well turns out to be a great engineer. I'm sure Google says the same thing about their algorithm trivia tests. Presumably the goal is to move away from these seemingly arbitrary and irrelevant tests. Just replacing one arbitrary and irrelevant test with another isn't improving the state of things.

I disagree. The goal of this question is to see how you utilize domain experts as a resource, add your software expertise, and design a comprehensive solution. It is specifically not a question about what you already know.

In the interview I play the domain expert.

This is exactly what your job will look like: collaborate with domain experts, use your software skills, solve real world problems.

I don’t need an engineer who can build a queue. I need an engineer who can use a queue.

Have you considered asking questions that involve queues?
I think you are doing it right.
Properly testing for a "solution engineer" (which is what you appear to be testing for) would mean posing a question that is a legitimate "software" problem, where the "domain experts" would be in areas such as, say, "electronic payments". The candidate would be expected to demonstrate the ability to devise proposals for a 'functional solution to a business requirement' using the available domain experts.

The key phrase is yours: "real world problems". Your hypothetical misses that by a generous mile.

And I apologize about this but we're discussing actual issues with software recruitment:

One major problem noted by senior engineers subject to these interviews is the competence level of the interviewer. I had this one guy, a "principal engineer", ask me about "Optimistic locks" as his initial query into my knowledge of concurrent systems. It took a lot of self-restraint to not blurt out "You mean optimistic locking?"

It is amazing to me that we somehow managed to hire very good software workers in the 90s without any of these shenanigans. One thing that does stand out from my memory of the 90s: we had senior colleagues (with literal white hair) in senior engineering positions. Go figure.

I think they may also ask them because they want to see how you do at the specific task of taking a problem you remember the pseudocode algorithm for, and actually turning that into working code in a real programming language. You know, the "schlep" part of programming.

I feel like they don't realize that this is the goal they're calibrating these questions toward, though. If they did, they wouldn't require the "from memory" component—instead, they'd break the question into two parts:

1. Tell me what algorithm you would use to solve this problem; or, failing that, tell me what criteria you'd use to select the best algorithm for solving this problem out of a set of candidate algorithms.

2. Here's a journal paper, explaining a particular data structure + set of algorithms that can† be used to solve the problem we're talking about. All the algorithms in the journal paper are in that sort of declarative, math-proof-phrased pseudo-pseudo-code form. Now take this journal paper and turn it into an implementation in your language of choice. Explain your thinking as you go.

† The paper chosen is not the one corresponding to the algorithm the candidate chose; nor is it the one corresponding to the algorithm that's "right" to pick for solving the problem. It is, in fact, randomly-chosen from the pool of sub-optimal solutions to the problem, filtered for the ones that get rarely used in practice and so have no easy non-journal-paper presentations of a reference impl laying about on the web to study. Alternately—if the interviewer has enough time to explain a second problem—it could even be a paper randomly-chosen from the pool of all algorithms papers!


• Both task 1 and task 2 are things "senior" engineers do every day in the real world.

• There's no component of memorization; the step "between" step 1 and step 2, where you'd look up the journal paper corresponding to the algorithm you thought of, is dropped.

• You can't really "burn" either task.

Yup if you are willing to explain the algorithm to the candidate and not dock them any "points" (or whatever) for having to explain the algorithm, then this works fine. I have no qualms about having a discussion about an algorithm that's tricky, and seeing how the candidate works through it.

That's not what I see in practice though. Not remembering the whole algorithm at best ends up wasting half of the interview (code the remainder in 20 minutes??), and at worst gets explicit language from the interviewer like "TC couldn't figure out <insert tricky algorithm> and I had to explain the algorithm to them explicitly, WEAK/BORDERLINE on algorithms".

I should also say, I dislike doing any engineer interviews that are purely talk-about-algorithm without any code, because I've interviewed a disturbingly large number of candidates who cannot write about 10 lines of mostly-bug-free code with a single state variable and like one loop, in 30+ minutes.

> That's not what I see in practice though. Not remembering the whole algorithm at best ends up wasting half of the interview (code the remainder in 20 minutes??), and at worst gets explicit language from the interviewer like "TC couldn't figure out <insert tricky algorithm> and I had to explain the algorithm to them explicitly, WEAK/BORDERLINE on algorithms".

It is kind of hilarious to get an algorithm problem which took eminent computer scientists years to solve the first time. "Why yes, I am Donald Knuth/Edsger Dijkstra/et al. – only smarter, luckier, and much faster!"

As you note, the initial question is basically "how well do you recall the solution to this problem?"

If I liked spending time memorizing long stretches of text, I'd be a doctor, not a software developer.
These are great suggestions!

A few years ago I worked through building a spreadsheet in JavaScript. It was a great introduction to interpreters. I read through Writing an Interpreter in Go by Thorsten Ball [1]. Constraining the interpreter to execute formulas in cells was a straightforward way to approach building one from scratch.

Writing a Pratt parser as part of this forced me to understand how it works. Figuring out how to process a sheet led me into algorithms and structures like directed acyclic graphs (as mentioned in the article). I found myself referencing Introduction to Algorithms and really studying it [2].
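For anyone curious, the DAG part boils down to a topological sort of the cell-dependency graph. Here's a minimal sketch (the names and shape are my own illustration, not code from the talk), where `deps[cell]` lists the cells that `cell`'s formula references:

```javascript
// Return an order in which to recompute cells so that every cell's
// dependencies are evaluated before the cell itself (depth-first
// topological sort). Throws on circular references.
function evalOrder(deps) {
  const order = [];
  const state = {}; // undefined = unvisited, 1 = in progress, 2 = done
  function visit(cell) {
    if (state[cell] === 2) return;
    if (state[cell] === 1) throw new Error(`circular reference at ${cell}`);
    state[cell] = 1;
    for (const d of deps[cell] || []) visit(d); // dependencies first
    state[cell] = 2;
    order.push(cell);
  }
  for (const cell of Object.keys(deps)) visit(cell);
  return order;
}
```

So for `{ C1: ["A1", "B1"], B1: ["A1"], A1: [] }` you get A1 before B1 before C1, and a sheet with `A1: "=B1"`, `B1: "=A1"` is rejected as circular.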

In the end I turned it into a talk at Big Sky Dev Con in Montana. The whole thing was a good experience - from researching how to do it, to sticking it out through the implementation, to distilling it to a 45 minute talk. Be sure to check out the recording [3] and code [4] if you're interested.

Any of these suggestions will lead you down a rabbit hole of learning with a clear objective in sight to keep you motivated to dig deeper.

Thank you for sharing this, Lance! I'm a "recovering actuary/Excel power user/Excel developer (shudder)" currently in Lambda School in their web development track.

I'm really enjoying watching the recording of your talk, and in addition to learning one way to build a spreadsheet, I'm also learning lots of good software development practices orthogonal to the specific project, which is great.

Again, thank you for sharing!

Thanks! Glad you like it!
By the way, I'm curious, how did the company you work(ed?) for in Missoula, MT, get started? I'm always interested when I come across software companies in non-megalopolises.
The three founders live here, and wanted to build a software company. They worked hard to make something people wanted, and raised money from the three Fs (friends, family, and fools). They applied to YC on a lark, actually got in, moved to Mountain View for 3 months, learned a ton and made good connections.

After gaining traction in the market it was possible to raise from angels and then VCs.

I believe you can build and sell software from anywhere if you have the drive and find a way to solve problems that people are willing to pay money to solve.

Here's an article that talks more about it:

Read this book:

It's all in there. Set aside a week to really start digging into it, and get into the habit of learning from a book. I'd say it's worth it.

I did read this in college 12 years back. It did get me excited back then. But then I guess I did not get into putting those algorithms into practice when I first started working (working in smaller companies, building simple web applications). I think it will be a good idea to go back to it and start again. Thanks!
Nice list. Appreciate it. But I see there are a few amazing software books missing from the list such as:

- Clean Code (by "Uncle Bob") []

- Design Patterns (by "Gang of 4") []

- Introduction to Algorithms (by "CLRS") []

Introduction to Algorithms might not be the best thing to start with. The name is very much misleading
Glad to see I'm not the only one who thinks this - it's one of the many areas I need to look at, and saw that recommended elsewhere... took a look on Amazon and got completely intimidated! If 1000+ pages is an introduction, what does the chapter and verse look like?

(I've started reading Grokking Algorithms this week, and it's been a much better introduction - I know it's not as in depth as some other books, but I'm making good progress with it and not drowning in complexity from the get go)

Don't let its size intimidate you. Introduction to Algorithms is quite approachable and its chapters are pretty well self-contained.
Thanks - I'll keep that in mind (although I'm old and attempting to retrain, and have found lots of things more difficult than everyone else appears to!)
Those are all there. With the nicknames as well.
A book about design patterns is a good idea, but the original book by the Gang of Four? I disagree; IMHO that book isn't well written. Now the question is: is there a better book on this topic?
Head First is a much better option IMO
There are a few books you can get:

A lot of "classic" problems are so embedded into CS professors that they don't even see them as problems anymore (lazy caterer, pick's theorem, etc) so if you didn't study these classics explicitly in school you have to discover them on your own.

> Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.

I.e., it's an asymptotic upper bound.

It's also interesting to compare with Ω() (Big Omega, an asymptotic lower bound) and Θ() (Big Theta, which requires both big-O and big-Omega):

A good textbook on this subject is Introduction to Algorithms: .
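A tiny illustration (my own example) of why the distinction matters: big-O bounds a function's growth, while "worst case" picks which input you measure. Counting comparisons in insertion sort shows both at once: roughly n comparisons on sorted input, roughly n²/2 on reversed input. The worst case is Θ(n²); the best case is Θ(n), yet still (vacuously) O(n²).

```javascript
// Insertion sort instrumented to count element comparisons.
function insertionSortComparisons(arr) {
  const a = arr.slice();
  let comparisons = 0;
  for (let i = 1; i < a.length; i++) {
    const key = a[i];
    let j = i - 1;
    while (j >= 0) {
      comparisons++;           // one comparison of key against a[j]
      if (a[j] <= key) break;  // sorted input exits here immediately
      a[j + 1] = a[j];         // shift the larger element right
      j--;
    }
    a[j + 1] = key;
  }
  return { sorted: a, comparisons };
}
```

On n = 100 elements this gives exactly n − 1 = 99 comparisons for already-sorted input and n(n − 1)/2 = 4950 for reversed input.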

What I described is the same thing in layman's terms. Worst case is the colloquial word for upper bound. And in that example n was approaching 1000.

If you want to be a purist, the only fault I see in my definition is that instead of using a generic function I assumed a linear function; but that's for explaining the colloquial use.

I can't recommend courses, because I don't have direct experience of any, but given what you say, my suggestion would be to take a bit of a pause from pragmatic problems and dedicate some time to learning the foundations of computer science, in particular algorithms and data structures. I'd recommend a couple of books: "The Algorithm Design Manual" by Steven Skiena if you want something not too theoretical, or "Introduction to Algorithms" by Cormen, Leiserson, Rivest, and Stein if you want a bit more breadth and theory:

As a second suggestion, I'd recommend to learn a language somewhat different from JavaScript-like (or C-like) languages, something that challenges your mind to think a little differently and understand and create higher order abstractions. There are many choices, but to avoid confusion and being my favourite, I'll point to one: read the "Structure and Interpretation of Computer Programs" by Abelson and Sussman. It teaches Scheme in a gentle and inspiring way but at the same time it teaches how to become a better programmer:

Or if you want it made of dead trees:

I can't recommend it enough. If you read it, do the exercises; don't just read through them.

Maybe it's even better if you start with this, and THEN read the books on algorithms and data structures.

Enjoy your journey!

SICP has continually shown up on my radar, and before this post it's what I was looking at getting into again, but I felt the book might be too "meaty" and maybe out of vogue. I think I just need to delve into it and finish it. A while back I did the first couple of chapters and it seemed to go quite well, though it's not for the faint of heart and really requires hard focus, and I'm okay with that! My hesitation was that I wasn't sure if it was a solid starting point. Looks like it might be!
It can have a bit of a "serious air", but don't let that intimidate you; it's not that academic. Also, you don't need to read it all to grasp the most prominent benefits. Chapters 2 and 3 have a lot of juice (I remember in particular the part on generic operations and the symbolic algebra example in ch. 2, and the streams in ch. 3). If you decide to read it all, by the end you'll learn how to write an interpreter for the Scheme language, and this will ingrain a very deep understanding of how an interpreter (and, to a degree, even a compiler) works. But as I said, write the code yourself while you read the book. It will be easier to follow and much more fun!
Feb 16, 2017 · imakecomments on Algorithms
This is just my opinion and I'm sure it differs from others...

Roughgarden's class is advanced and expects mathematical maturity. You may find his course quite fast and rough if you are a beginner.

Sedgewick's class is much easier. He is a bit boring and tries to use "real life" examples (in some instances) from the physical sciences to make the material relatable. This, in my opinion, detracts from the material. Also, he doesn't always fully explain where he got some of the big-Os here and there.

My advice? Follow MIT's OCW course (it uses CLRS). Supplement it with Algorithms Unlocked, the Khan Academy link in OP and CLRS. If you use those 4 resources and put in the work you'll understand the material.

All 4 sources have Thomas Cormen's DNA in them (he is the C in CLRS), so you'll find them consistent when you read from one source to the other. After reading/hearing the same thing about 4 different times in 4 different ways, it'll begin to click.

Order of easiness is probably Khan Academy > Algorithms Unlocked > MIT Algorithms Course > CLRS.

Algorithms Unlocked is like "pre-CLRS" and Khan Academy's version is the TL;DR version of Algorithms Unlocked.

Hope this helps.

Below are the links,

Great feedback and insight. Appreciated. I'll check out the MIT OCW and Khan Academy first. I'm hardly a beginner, but my skills feel a bit rusty and I want to refresh them.
No problem.
Any particular reason that you linked the MIT course from 2005 instead of the newer version from 2015?
I'm going through Sedgewick's class right now. Is the MIT OCW course math heavy? It lists "Mathematics for Computer Scientists" as a prerequisite. I am somewhat familiar with the material, but not at a very deep level. Should I take that one first?
The main things it assumes from Mathematics for Computer Scientists are knowledge of asymptotic complexity (O(n)) and basic proof-writing skills.
Great post, thanks for the resources.

I find there are a lot of online resources for those looking to learn algorithms and data structures but I've had trouble finding the same breadth and depth of resources surrounding the math behind CS (discrete math, probability, etc.). Any suggestions?

Read "Discrete Mathematics" by Epp. Probably the easiest Discrete Math intro I came across.
> Order of easiness is probably Khan Academy > Algorithms Unlocked > MIT Algorithms Course > CLRS.

Where does Roughgarden's course fit in to this?

I've taken both Stanford's and Princeton's Coursera courses, and powered through the MIT OCW, and I would say this evaluation is spot on.

If you have to pick only one, go with the MIT OCW, and snag a copy of CLRS. I got mine from my local library, and it gave me more than enough time to work through the problem sets from the MOOC.

Feb 15, 2017 · imakecomments on Algorithms
As a follow-up to this resource, I recommend

Algorithms Unlocked:


Both include the same author as the one in this article (Thomas Cormen).

Dec 19, 2016 · gressquel on Skip Lists Done Right
Excellent article, I really love algorithms and data structures. If anyone else is interested like me, I recommend this book
I'm not a big fan of that book. For all its weight, it's still surprisingly informal.
It is also very handy if you need to flatten a small (< 2 kg) household pet.
I always say college was very practical learning, as at different points in time I was able to kill a mouse with both algorithms (this very book) and computer architecture.
It took you 2 books to kill 2 mice, but I'm sure with more observations it would have been obvious that your approach was actually O(logN).
It would be bad to have a book about multiple trees which implemented less than 1 tree in its printed form.
Implemented? I don't think it implements tree, so much as inherits from tree.
On the upside, it is really cheap by weight.
> It is also very handy if you need to flatten a small (< 2 kg) household pet.

or your ego :)

I recommend learning discrete mathematics, then data structures and algorithms.

I cannot stress enough how important mathematical foundations are. They'll make everything else much easier to learn. I haven't read the book, but I've heard good things about this one as a beginner text:

Coursera has multiple offerings on Data Structures / Algorithms -- find one that works best for you.

For instance:

By doing all of those you'll get a good introductory exposure to the topics.

You should also look at a rigorous course offering of Algorithms. MIT has a few online to view.

Some readings for a beginner are: (not beginner level but classic)

After all of this you should be fine with diving into interview books. You'll want to whiteboard solutions and be able to do all the difficult problems. Look into sites like leetcode, glassdoor and be able to do the difficult problems posted there.

Thanks, there are some good resources here. I rushed through CLRS to help prepare for these interviews, but I failed at the phone screen, which was mostly comprised of Leetcode/Hackerrank-type questions. I'll definitely take a look at the Coursera course specialization, but it seems as though these interviews involve a 'programming challenge' type question for the phone screen, with more of a focus on data structures, algorithms and system design for the onsite.
I learned programming without a computer in the 90s with Kernighan & Ritchie's "The C Programming Language", one of the best coding books ever written:

I'd also learn SICP:

and algorithms from either CLRS or Skiena's Algorithm Design Manual

Along with SICP, The Little Schemer would be a great book to work through on paper.

Once that is worked through, Gödel, Escher, Bach would be an especially entertaining and enlightening read.

That's basically your bible of basic data structures & algorithms. There are, of course, many many other domain-specific ones, but that textbook is taught in the vast majority of data structures & algorithms courses, it covers most of what people would expect in an interview or in basic programming, and it teaches you how you might implement them. The book itself uses pseudocode, but if you're a halfway competent programmer you should be able to translate it into your language of choice.

(For a fun exercise, it can be interesting to translate them into a programming platform of choice. For example, how would you implement a priority queue with an SQL database? How would you do reservoir sampling? How do you implement sorting on disk, where the dataset is >>> RAM size? How do you implement a graph algorithm on a cluster of 5000 machines? These are the sort of "Google level" problems that you end up facing in industry, but they require a full understanding - not just familiarity - with what the underlying algorithm is doing.)
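Of the exercises listed above, reservoir sampling is a nice self-contained one. Here's a sketch of the classic "Algorithm R" (my own illustration, not from the comment; it assumes you want k uniform samples from a stream whose length you don't know in advance):

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Pick k items uniformly at random from a stream of unknown
    length, using O(k) memory and a single pass."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)   # fill the reservoir first
        else:
            j = rng.randint(0, i)    # uniform over [0, i], inclusive
            if j < k:
                reservoir[j] = item  # item i survives with probability k/(i+1)
    return reservoir

sample = reservoir_sample(range(1_000_000), 10)
assert len(sample) == 10
```

The point of the exercise is exactly what the comment says: you can only adapt the algorithm to a stream, a database, or a cluster if you understand why the k/(i+1) replacement probability keeps the sample uniform.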

Thank you for the link.

I'm even trying to figure out, for example, maybe I read about a particular data structure in a blog post that I'd like to learn more about. I find the relevant article on Wikipedia, and from there it may or may not take me to specific implementations of that data structure.

Perhaps I'm mistaken but my hunch is Wikipedia won't necessarily be exhaustive (in terms of being able to extrapolate from the article what an actual implementation should look like).

Maybe that part of the question is more of a fishing expedition, as I imagine it's a small fraction of developers / engineers who are ever actually implementing complex data structures.

Also self-taught with an ME degree (also from Rose-Hulman. Hi Keith!).

I'm an engineer at Twitter. Due to the interview process, I have yet to encounter anyone (myself included) who doesn't have a pretty sound understanding of algorithms and data structures. The biggest difference I've noticed between self taught folks and CS majors is comfort level with bit-twiddling. All the high-level stuff I know I've had to learn to do my job. All the algorithm and data structures stuff I've had to learn to pass interviews. Moving bits and bytes around is hard for me because it's so rare for me to encounter it in my day-to-day work.

As for how I learned all this stuff? Read Steve Yegge's Google interview post[1], buy the Algorithms book[2], learn everything Steve suggests you learn.

1: 2:

Through most of my life I always looked at any form of rejection as a form of motivation, and adhered to a sort of "next time, be so damn good they can't ignore you" mindset. I think one can take that too far, to a point where it isn't healthy, but if you channel the emotion the right way it can help you keep moving forward and keep climbing to progressively higher heights.

If you want to take that approach, and say "f%!# it, I'm going to buckle down and work my ass off so I do get the job next time" there are a few concrete steps you can take.

1. Find, read, and do the exercises in two or three of the various popular books on "programming interviews". I'm thinking of books like Cracking the Coding Interview[1], Programming Interviews Exposed[2], Ace the Programming Interview[3], etc.

2. The companies you mentioned are well known for asking lots of detailed questions on fundamental computer science concepts. If doing "big O" analysis and talking about algorithms in detail isn't your forte, get a couple of good algorithms course books and go through them. Personally, I'm a fan of the Robert Sedgewick books[4][5][6][7], and the CLR[8] book is a standard in this area.

3. Look over the many various articles / blogs / etc. written about preparing for Google interviews.

I have never applied to Google myself, so I can't speak to that from first-hand experience, but this Steve Yegge blog post always struck me as being excellent:

4. Take as many interesting Coursera, EdX, Udemy, etc. courses as you can find time for.

5. Write code any chance you can. Get involved in, or start, an open source project (or two). Volunteer to code for a non-profit / charity or something in your area. Write an app for yourself, to fill a need of your own.

6. Make sure you broaden your horizons and challenge yourself. If you've always written, say, Java or C++ or Ruby code, then make an effort to learn Go, or Erlang or Haskell or Prolog.

All of that said, as I've gotten older, I probably feel a little bit less of the "I'll show you!" thing. I've developed more of a stoic approach, and almost a bit of a zen mindset. There's a lot to be said for a sort of calm, peaceful acceptance of things, even when they are negative. There's a lot more one could say about this, but I don't want to get too philosophical here. I'll just point out that you applied to two... TWO.... companies. Out of like a BILLION possible companies you could work for. Ok, maybe not a billion, but certainly millions, or thousands, depending on where you live and your willingness / ability to travel.

My point is, don't put too much weight on what happened with Google or Amazon. The whole "dream companies" thing is a crock of shit, IMO, looking back on it with hindsight. I've worked for two companies in my career that I once thought of as my "dream" destinations, and neither experience was anything special (neither was bad either), and not worth getting all worked up over.

Final last bits of advice.

1. Read Nietzsche

2. Read Ayn Rand

3. Get drunk

4. Listen to some Queensryche

5. Profit???

Points. I am giving them to you. Thank you, sir or madam. I salute you.
Read Nietzsche? Come on now, OP isn't a teenager :)
OK. So Cormen's "Intro to Algorithms"[1] or Knuth[2]... but be prepared to be gobsmacked, and to need a graduate-level math background...



On the other hand if you like wordy, I would quite recommend "Compared to What"[1] by Rawlins...


I've felt the same way for a long time... and the only way to get over it is to practice. Buy a whiteboard and practice at home. Buy some books (e.g. and/or You can also subscribe to

Not exactly an answer to your question, I know. Most startups I know in SF/Bay Area at least do a technical test (either coding or just regular questions) on the spot. It is stressful, but as anything, you can learn how to do it.

Thanks for the recommended books and links. That's actually what I'm currently doing. Studying and practicing. I just wish there was a better way. I feel like I'm back to my school days where your SAT/GRE scores mean more than your body of work. Of course, the difference is that I have to take a test for each company I apply to now.
For points 1 and 2, there's a great set of courses on algorithms[1], taught by a Princeton professor. I started the first one a couple of months ago, but had to drop out because of time constraints. I've re-enrolled for the upcoming session. It has a companion book, Algorithms[2], that I bought, which was a great help. I don't have a copy of CLRS[3], but I've heard it is also a great reference, and it's on my Amazon wishlist. Other than that, MIT's OpenCourseWare[4] also has the entire CS degree online for free. It's funny, reading your post I thought I had sleep-posted or something. You seem to be experiencing exactly what I'm going through. I'm also 27 and don't have a CS degree. I am highly motivated though, and have no problem hacking at this stuff on my own. I've talked to a bunch of CS majors who actually still feel inferior after they graduate, so other than helping the impostor syndrome[5] that I suffer from, I'm not sure a degree would do me a lot of good (with the exception of padding my resume and getting me in the door for an interview). [1] [2] [3] [4] [5]
In India, at least at the IITs, we get assignments and course material by email, or as hardcopies kept at the campus photocopy centre, where students can get copies at cheap rates. We hardly bought textbooks, and the ones we bought were very cheap. I think the most expensive one I bought was $8 at current rates. Don't you guys have libraries? Whenever I needed to consult a textbook I used to go to the library and read it there.

Moreover, we get "Indian" editions of those books written by Americans, and those used to be as cheap as Indian-author textbooks.

On Flipkart (the Indian version of Amazon), Introduction to Algorithms is INR 621, which is roughly $10.

On Amazon the same book is $60.

>Don't you guys have libraries?

The campus library may well only have one copy on reserve for a course with 300 students, or they may not stock the newest edition.

The latter part gets to be a problem in the instance where the textbook publisher decides to publish a new edition every year or two just to ensure that you'll be doing the wrong homework exercises unless you're using the same new copy the professor has.

When more than a student or two has the idea of using the library's copy, it becomes a battle over who will be able to check out the limited number of copies. Sometimes this becomes serious enough that the library will freeze the availability of a book if it's been assigned that semester.

Obviously there is something to be learned from the price differences between the different versions, but there is also a non-trivial difference in quality. I've had International Editions that were missing whole 10-page sequences, had pages out of order, and had different problem sets whose answers contained factual errors. The lower-grade paper is also an obstacle to book reuse.

By this I don't mean to be negative, I just believe the final answer involves more reasonable digital distribution; I'm sick of the myth of "getting the exact same book for cheaper" because I've been burned by it a number of times.

umm... never had any such issue with my books. Maybe I was lucky :)
If you're Mexican/Canadian, a TN Visa might be a better option than H1B since there's no annual limit.

I'm currently in Silicon Valley from Canada on a 2.5 day trip, with 4 interviews in that timeframe, and it's quite fun yet intense.

I'd also say that if you're a self-taught programmer (I'm an EE), it's worthwhile to read a book like Intro to Algorithms[1] cover to cover before you start doing technical interviews. There are just too many CS101 concepts that you'll never run into even after years of programming.


I may be wrong, but I believe that interviewing in the US while you're on a tourist visa/ESTA (presumably your case?) constitutes "seeking immigrant status" and is unlawful. You may want to avoid posting about it in public forums where your identity can be easily traced.
I am really, really not a lawyer, but he says he is seeking a job offer for a TN visa, which is non-immigrant status, so I don't think this is a case of seeking immigrant status.
I was thinking the same thing !
You are (mostly) wrong. A job interview is a perfectly legitimate activity to undertake on a B-1 (business) status. I have done it more than once. At immigration, they ask the purpose of your trip; you say, "job interview;" and they stamp your passport with B-1 (business) rather than B-2 (tourist). It really is as simple as that.
Code Complete 2 [1] was one of the first coding books I've read. As with anything else, it's good to look around (HN is a good place) for people who have problems with the book. I think I learn as much reading the commentary people make about books like that as I do from the book itself.

I think I've listened to every podcast on software engineering radio a few times [2]. The older ones are especially nice because they usually pick a specific topic and cover the high points. I liked that I could listen to it while I was driving, or otherwise not in front of a computer.

It's specific, but Javascript: The Good Parts is probably the most used book I have on my shelf. It has such a perfect amount of usable information in it. It's pretty great. Again, it's definitely worth looking up critiques and counterpoints.

I've also got Introduction to Algorithms, which I use as a reference, sometimes. I switched over to The Algorithm Design Manual [5] after I saw it referenced in an older Steve Yegge post [6]. I read through the intro and it seemed like a book that would be more appropriate from an autodidactic standpoint. I really have no idea if that's going to pan out, since I'm not that far into it, but we'll see, for sure. Doesn't kill me to have an extra algorithms book laying about, though, and I've always got intro to algorithms for cross reference. I've found that I really need to have as many sources available as possible when I'm learning alone. Usually I don't get something until the fifth person describes it from the tenth different angle.

That's most of what I can think of off hand. I really enjoyed The Joy of Clojure [7], though haven't checked out the newer version. Programming Collective Intelligence [8] is a fun book, and is what made me want to go back down the maths route to get more into machine learning.

And of course habitually reading hacker news for an hour or three every night :)

So that's my totally inexpert list of random stuff that I enjoy

[1] [2] [3] [4] [5] [6] [7] [8]

What do you want to learn? Programming or CS? CS is more than just programming, and CS theory is more than just Algorithms & Data Structures.

If you want to learn about Algorithms and Data Structures and you have a strong math background, then CLRS is the book to get:

An undergraduate CS curriculum will mostly cover parts I-VI of the book (that's around 768 pages) plus a few chapters from the "Selected Topics" part (we covered Linear Programming and String Matching). Mind you, this book is very theoretical, and all algorithms are given in pseudocode, so if you don't know any programming language, you might have to go with an algorithms textbook that is more practical. In my DS course we had to implement a Red-Black tree and a binomial heap in Java, and in my Algorithms course we only wrote pseudocode.

Maybe Sedgewick's (Knuth was his PhD advisor!) "Algorithms (4th ed)" will be a better choice for a beginner, as it shows you algorithm implementations in Java: (If you decide to go this route, you might as well take his two Algorithms courses on Coursera, they will really help).

There are also a bunch of Python-based introductions to computer science which have a broader focus than just teaching specific data structures and algorithms. Some of them emphasize proper program design, debugging and problem solving. I haven't read any of them, so I can't vouch for them, but here are a few of the more popular ones:


This book was written to go along with John's edX course:


Oh and btw, there's also the Theory of Computation, which is a major part of CS theory. Here are a few MOOCs and recommended books on the subject:






Sipser's book is probably the best introduction to the theory of computation, and I believe its last chapter deals with Complexity theory as well.


I loved this book very much. It has a very informal and conversational style (don't let it fool you, the problem sets can be HARD).


Once you are familiar with some computation models, it's time to study computational complexity, and this is one of the best books on the subject. It is used for both graduate and undergraduate courses.

As often as this gets recommended, I'd go with Introduction to Algorithms (CLRS)[1]. I'm currently in the process of going through it on my own as I'm taking a semester off, and I find it perfect to study from. It goes in depth, explaining the math behind running times (which some may like about it, while others might not), and it covers a lot of ground. I haven't fully finished it, and I'm only on chapter 6, but so far it's one of my favourite textbooks to study from.


Same here, I came in to post about that exact book.

There's also Pearls of Functional Algorithm Design which I have yet to read.

In general you've got stuff like Introduction to Algorithms ( and the Algorithm Design Manual ( both of which I have seen mentioned as good books. I'm in a grad school-level algorithms class that just started using the former (which is language agnostic, but we are doing our implementations in Java) and it looks pretty good to me.

>"I recently enrolled in a local Uni as a freshman."

Same here.

>"I was curious what kind of structured plan you have, is it just you daily schedule or something more long term?"

It's mostly longer term, on a scale of months to a year, or several years in the case of the degree.

For example, one goal is working through the entirety of Introduction to Algorithms [0]. I've set out dates by section for reading, completing problem sets and reviews. I've written these dates out along with my reasons for pursuing the goal and how I believe I will benefit from achieving it.

That might sound simple, but taking the time to think these things through, and to articulate them into something concrete, was certainly useful to me.

This isn't the kind of thing I'd have done historically, but I've been trying to stay very open-minded to anything that might help me reach the goals I've set. I follow the same process for any significant new goal.

I make a point to take (hand-written) notes on everything I work on. At least once a week, I whiteboard what I've been reading or working on. There are going to be gaps, or things I completely blank on, which become items for further review in the coming week.

I have yet to settle with a calendar or to-do app which I really like, for the moment, all these dates and tasks are either in a notebook I use daily or on my whiteboard for emphasis.

As much as possible, I try to create "artifacts" from the way I'm spending time. This can be notes, a quick summary of what I got from reading a chapter or article or an attempt at implementing something I read about.

I have a solid routine for my days made pretty necessary between my job and small child, but each day isn't entirely planned. I keep a shortlist of things that I work on in the same set of windows (early morning, commute, at work in the gaps, before bed) each day. I've cut out most anything that isn't family, work or study.

One thing I aim to stay strictly consistent about is waking (5:00). This gives me a little "extra" time before I wake my child up.


>> I make a point to take (hand written) notes on everything I work on.

I was not doing this initially, thinking I don't have to write down stuff if I understand it, only to be completely lost and have to read and understand it all over again. Now I write everything down too.

I wish you all the best.

$81 for

No legal DRM free option as far as I can see.

The fifth link was a direct link to the PDF.

TAOCP[1] is the reference for all the types of algorithms it covers, the Cormen book[2] is a reference for its breadth, and the Algorithm Design Manual[3] is quite nice for learning how to design your own.




I guess the question is what do you mean by advanced topics? What direction do you want to go in? The latter book you mentioned seems to cover a number of topics and is probably a good bet.

If you are interested in the web, both these books were good:

Here are a few books that cover some "advanced?" topics that I'd like to read when I have time (would also like to hear other peoples' recommendations on them):

I'm not sure on your background or the quality of these books, but an understanding of data structures, algorithms, and object oriented programming could be considered important:

Although these and other intermediate to advanced topics tend to be covered better in non-language-specific books such as this shotgun blast to the head. Don't worry, it's just an "introduction":

Thanks @poof131 - I'd like to go deeper into algorithms / data manipulation / social network analysis (for my job), and also web programming using python (weekend reading).

I'm currently reading Python for Data Analysis but feel like I can read about how to use a library but it's hard to retain specific syntax use cases if I'm not using those libraries immediately / frequently.

One book I really like is Collective Intelligence (, which has some good examples on social network analysis.

I think the problem is that the "right" way is completely subjective. Is Spring "bad" and Scala "good"? It depends on what you're building and for whom.

Overall, the best way to be a great developer (not just a good coder) is to: 1) Build a strong theoretical foundation. The curriculum of any top tier CS program is a good starting point. Check out Intro to Algorithms

2) Stay on top of new technologies and models. Read Hacker News, Read Reddit, Follow notables on Twitter. Check out @lhazlewood

3) research, debate, practice, implement, review

Yep, that's the current way I do things, but you know how confusing reading Hacker News can be. And what is popular is not always a good code example.

I'm looking for a highly acclaimed open source project, or developer, that most people agree is a good example. Some sort of a "developers' academy" that gives awards for "best design", "most elegant code", etc., like the Oscars; then people could follow those practices. Sadly, at the moment, these things are not taught in school, or anywhere. If I want to learn to make movies, I know exactly which movies to analyse; for coding, I have no clue. There is no guarantee that a popular GitHub project is also well written; it might just be doing its thing right. I hope I managed to point out the problem, and I think as a community we should build some standards that are cross-language and cross-framework, maintained and evolving with time. E.g. "good code principles, with examples, certified by the top 1000 on Stack Overflow, 20 YC CTOs, 10 professors from Stanford, 20 R&D managers from the enterprise world, and 1000 top hackers on Hacker News". Crazy, but this is exactly the film academy, isn't it?

I think that, especially once you start asking for cross-language, cross-framework standards, you're simply asking for the impossible. I know that feeling, wishing some kind of authority could help you determine the One True Way but there is almost certainly no such Way that is equally True and Singular across all possible languages & frameworks.
For algo/datastructures, CLRS ( is the gold standard.

For OS's, Tanenbaum ( is popular.

'Math' is broad - if I can recommend only one book to cover all of Math I'd probably say 'The Road to Reality' ( More practically (for the subset of math most programmers are likely to care about), you'll do fine with one good discrete math book and one linear algebra book. Throw in one each on Stats, Abstract Algebra, Calc (up to ~diffeq), and Real Analysis (in roughly that order) if you're a bit more ambitious ;-)

I just wanted to add that I felt Tanenbaum's Modern Operating Systems was a tad light on some of the core topics (especially when trying to get a good grip on the issues behind concurrency). I personally recommend the book by Silberschatz:

I also think for understanding how operating systems work, nothing beats writing your own! I learned most of the concepts by building a toy OS during the better part of my undergraduate studies. I highly recommend this for people who like coding and are afraid of jumping into the theory too quickly. For example, analyzing memory allocation algorithms is never as interesting as when you have to pick one for your own kernel!

_Concrete Mathematics_ [1] is a good book that is fun to work through. The book can be considered an introduction/prerequisite to Knuth's _The Art of Computer Programming_ series [2], which contains much mathematics. If you would like a book that is not so difficult, _Introduction to Algorithms_ [3] (aka "CLRS") is also highly recommended.




I would recommend Introduction to Algorithms by Cormen. It covers not only complexity analysis but also describes the most important algorithms, data structures, and solutions to classical problems.
Apr 08, 2012 · crop on Coding
I think more important than learning a specific language is getting an understanding of the algorithmic stuff. Try these two books, for example:

I wish I had these books 10 years earlier...

I'll start out with the classic:

"Introduction to Algorithms" a.k.a. "CLRS"

I think this is all (and probably more than) you really "need" if you want to know CS for programming, and an enjoyable read too.

I would strongly urge the OP to not use this book. I used it as my undergrad algorithms book and it is, quite frankly, far too obtuse and hard to understand. At first, I thought it was just me and that other people found the book readable. I eventually realized that the book itself is just bad for autodidacts.

Last month (about 5 years after I took the course), I went through to review some stuff that I'd looked at more recently. Specifically, I had worked out Dijkstra's algorithm on my own and finally understood how the heck it actually worked. I then opened CLRS to check their proof. What I saw blew my mind in terms of sheer obfuscation. Despite having a very clear understanding in my head of the core concept behind the algorithm, I read their proof and was just amazed at how little it actually explained about the simple idea that ties the whole thing together. That by itself isn't a huge deal, because some proofs are just hard to follow. What really upset me was that there was practically no surrounding discussion of the intuition behind the algorithm. Behind all the rigor of mathematics lies very simple and powerful ideas. This book did nothing to emphasize those simple ideas. It's a real travesty to be honest :(
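The "simple idea that ties the whole thing together" can be shown in a few lines. The sketch below is my own minimal version of Dijkstra's algorithm (not CLRS's pseudocode), with the key invariant the commenter is pointing at spelled out in a comment; the adjacency-list representation is an assumption for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source, given non-negative edge
    weights. graph: dict mapping node -> list of (neighbor, weight)."""
    dist = {source: 0}
    heap = [(0, source)]              # (tentative distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                  # stale entry; u was settled closer
        # The core intuition: when u is popped, dist[u] is final.
        # Every unsettled node has a tentative distance >= d, and all
        # edge weights are non-negative, so no later path can beat it.
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

That greedy "closest unsettled node is done" argument is exactly the part that deserves discussion before any formal proof.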

If you're looking for something more accessible, I've heard that The Algorithm Design Manual[1] by Steve Skiena[2] is better. Whatever you choose, though, please stay away from CLRS.



edit: I just found this review of CLRS on Amazon. It's the top comment for this book and sort of gets at the issue:

With that said, this book falls short in one MAJOR area: explanations. Too often explanations are left out and left as exercises, and there are no solutions to the exercises! Or details are replaced by ambiguous statements such as "clearly, this works" or "it is easy to see that this ...". I get the concept of learning by doing, really I do, but there should be some kind of solutions so the student can CHECK his/her understanding of the material, and sometimes the exercises are not about advanced aspects of a concept; sometimes it is the core material. Even if the solution manual only contained a simple answer without the work, not only would it help tremendously but the purpose of doing the exercises would be preserved; that is, the student getting his/her "hands dirty" and working out a problem.

For the love of everything good and pure in this universe, I really wish writers of mathematical books would stop using statements like "clearly this works", "it is easy to see", "it is obvious", etc. While that may be true for you and your brilliant circle of colleagues, everything is not always clear and obvious to your readers. Save all of that ambiguity for your research paper.

A great book should deliver in two areas: it should challenge and it should inform. The challenge is there, no doubt. However, in some ways it fails to inform the reader. The authors should really think about releasing a student solutions manual to help students learn better. I take away two stars for the reasons stated above.

> What really upset me was that there was practically no surrounding discussion of the intuition behind the algorithm. Behind all the rigor of mathematics lies very simple and powerful ideas. This book did nothing to emphasize those simple ideas.

I would just respond to this by saying that the mathematics do express the ideas. It's just a different language than most people are used to.

As far as The Algorithm Design Manual and CLRS are concerned, I would not say that they are comparable.

CLRS is about algorithm analysis and formally proving that algorithms behave in certain ways - like proving that quicksort has a guaranteed worst case of O(n^2) and an average case of O(n log n). Being able to work with proofs such as those is fairly fundamental to being a Computer Scientist in the strict, academic sense. That being said, it should not be used as a book to teach data structures or algorithms without formal proofs having been taught first - without the maths background the book is going to be very hard going.
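To make the quicksort claim concrete, here is a hedged sketch of my own (not the book's pseudocode). With a random pivot the expected running time is O(n log n); a deterministic variant that always picked, say, the first element would hit the O(n^2) worst case on already-sorted input, because every partition would then be maximally unbalanced:

```python
import random

def quicksort(xs):
    """Return a sorted copy of xs.

    Average case O(n log n): a random pivot tends to split the list
    into roughly equal halves. Worst case O(n^2): if every pivot is
    the minimum or maximum, each level of recursion removes only the
    pivot's equals, giving n + (n-1) + ... + 1 comparisons."""
    if len(xs) <= 1:
        return list(xs)
    pivot = random.choice(xs)
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

CLRS proves these bounds formally; the sketch just shows what is being proved.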

The Algorithm Design Manual, on the other hand, is more like a literature review for a collection of algorithms/families of algorithms (75, claims the copy in front of me). For each of them it gives an overview of the class of problems you'd apply the algorithm to, a run-through of the algorithm (some with pseudocode), a discussion of the algorithm and times when the author has used it, and then pointers to more in-depth sources. CLRS and Knuth are cited considerably, which should give you an idea about the relative stature of the two books being compared here.

Ultimately, they're both good books but for different audiences. The Algorithm Design Manual is by far the easier read, not least of which because it's not trying to be an academic text and is replete with anecdotes from the author. It's an excellent overview of the techniques available to the practicing developer - the ones who work in fields where you might need to whip up a quick Bin Packing routine anyway. CLRS is a lot harder to get through, but it will teach you how to prove that your algorithms will do what they should.

When it comes down to it, I'd say that CLRS is for the Computer Scientist while TADM is for the practitioner and I'm glad I own both.

Introduction to Algorithms by Cormen et al would be a good place to start if you're going the theoretical route. Very mathematical and not so practical - I read this myself at uni and I learned a ton.

Go to amazon, search for data structures and algorithms. Those four words should bring up a whole bunch of results. Choose one that's well reviewed that has familiar elements to you.

Also, wikipedia is a great place to start (of course).

Sorry I can't be more specific, but I don't know your educational background or experience. Fwiw I used . The only thing is that I had a decent, recent math background, and if you haven't done "real" math in a long time, it might be one more barrier to learning what you really want to learn.

I find it shocking that "technical/instructive/informative" books are disallowed. I can't begin to fathom a reasonable explanation for that.

To be more helpful, a mathy algorithms text like CLR might slip through the censors and be a good option - depending on whether your friend is comfortable enough with thinking about code abstractly.

I was told that staff has explicitly stated to him that the institutional policy does not encourage inmates to pursue white-collar professions upon re-entry.
I'd start with the bible of algorithms:

Read that puppy cover to cover, and try your hand at implementing some of the data structures and exercises in it. I studied it in college, and reviewed out of it before I interviewed at Google. Some of the best interview prep I did.

One more tip: when you're practicing, practice on a whiteboard, with no reference material, and only try compiling it on a computer after you're absolutely positive it's correct and have tried a number of test cases. Nothing like simulating real interview situations to help you prepare...

The following are books from my student days. These are sources from which I happened to learn the "everything in an hour, but longer to absorb" subject matter. I'd recommend Applied Cryptography as excellent; the other is just good.

Algorithms: Introduction to Algorithms (Cormen, Leiserson, & Rivest)

Security: Applied Cryptography (Schneier)

I cannot find my automata book just now. It's down in the garage, and I want to stay inside where it is warm now. As for concurrency, I got part of that from the Tanenbaum OS book:

But the rest, I actually got from a coworker on the job! We did cover databases and ACID transactions in school, but that wasn't taught very well and I didn't really get it until I was doing real work.

Do some assembly language. It will give you a key advantage over everyone who is too scared to touch it.

Write a compiler and/or interpreter. This can actually be pretty small, and it will also give you an advantage over those too scared of something so seemingly "esoteric."

Shining my tptacek light re: Applied Cryptography

Thomas recommends Practical Cryptography instead.

I actually read Applied Cryptography when it first came out. My experience is that it convinced me that crypto is hard to do right, not that it's easy.

"Practical" will be my next technical read.

It came out in, like, 1995 didn't it? I remember that because all my IRC friends immediately got to work on crazy crypto tools with algorithms and ideas cadged from that book. It definitely didn't teach them that crypto was hard.
> It came out in, like, 1995 didn't it?

That would be about right.

> I remember that because all my IRC friends immediately got to work on crazy crypto tools with algorithms and ideas cadged from that book. It definitely didn't teach them that crypto was hard.

It's one thing to do some fun project. It's another thing to do something for production. It's yet another thing to read such a book and realize it means there's people who know a lot more about this than you. At the same time, it is a fun read for a techie.

I don't have a computer science background, and I recently read TAOCP and Introduction to Algorithms by Cormen et al for an interview with Google. I failed the interview but got through the books, finding them very interesting.

Reading TAOCP was a worthwhile experience, but I feel Introduction to Algorithms is of more practical use because it covers a lot more ground (at less depth) and is a much easier read.

Here's the slightly modified answer I just posted there (even though it's an old question):

Back when he was still doing podcasts, Joel Spolsky answered a similar question, which was partly "Does a good programmer without a CS degree really have a chance to get a job at Fog Creek?"[1] (It's near the end of the page.)

He says that for a good self-taught programmer who began with a high-level language, say PHP or Java, who comes at programming from a practical perspective, there are a few important parts of the CS curriculum the person may have missed out on, and goes on to list some books that would help fill in those gaps.

Off the top of his head he named these books in about this order:

- Structure and Interpretation of Computer Programs[2] (also free online[3])

- C Programming Language[4]

- The Unix Programming Environment[5]

- Introduction to Algorithms[6]

He said that those books covered the aspects of the CS curriculum his company needs in a good programmer, e.g. being able to create algorithms for an uncommon data structure.

Those books all have the added advantage of having exercises, and all being a very pleasant read. SICP is an introduction to many of the big ideas in CS: data structures, streams, recursion, interpretation, compilation, register machines, etc., and their implementation in Scheme (a kind of Lisp). It's a great place to start. The next two focus on implementation details like pointers and memory allocation. They are compact, powerful books. The last, Introduction to Algorithms, seems misleadingly titled, as it is fairly comprehensive and used in both undergraduate and graduate courses. If you work your way through the entire book, chapeau!







What's wrong with less cryptic books like these?


I have read CLRS (the second book in your list); it's really great as an introductory text as it spans a lot of areas. For example, sorting/searching is a chapter in CLRS, while it's a volume in itself in TAOCP (The Art Of Computer Programming). Also, CLRS is written in the style of a textbook, rather than a reference and a newbie would find it easier to follow. And, TAOCP is more of an in-depth reference to practically everything that has been researched in that field so far.

> How many people need this level of depth?

A lot more than you would think - there are plenty of programmers out there who can't tell you the tradeoffs between the different approaches, and they can get away with it, because it appears to work (see thread synchronization, cross-site scripting, database injection, null pointers, pointer overflow, etc.) until somebody demonstrates that it doesn't.
> thread synchronization, cross-site scripting, database injection, null pointers, pointer overflow

Does TAOCP cover any of those?

I would expect it to have plenty of algorithms to implement mutexes with, as well as parallel sort algorithms.

There are also going to be a lot of pointer based tricks that you don't get unless you really understand pointers.

No mention of CLRS? For shame.

Agreed. CLRS > TAOCP.
yeah, for a cs degree equivalent CLRS >> TAOCP, as TAOCP is more of a reference manual on algorithms whereas CLRS is a new way to think about writing programs. Both have their place on your bookshelf though.
Yes, and apparently CLRS's place on your bookshelf is shifted to the right of TAOCP. :)

Don't forget Okasaki's _Purely Functional Data Structures_, though. CLRS's coverage of persistent data structures is pretty weak.
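For readers who haven't met the term: a persistent data structure is one where an update returns a new version while every old version stays valid and unchanged. A minimal illustration of my own (not Okasaki's ML/Haskell code), using an immutable cons list as a stack:

```python
from typing import NamedTuple

class Cons(NamedTuple):
    head: object
    tail: object  # another Cons, or None for the empty list

def push(stack, value):
    # Returns a NEW stack sharing the old one as its tail.
    # The old stack is never mutated - that's persistence.
    return Cons(value, stack)

def to_list(stack):
    """Walk a cons list into a plain Python list, newest first."""
    out = []
    while stack is not None:
        out.append(stack.head)
        stack = stack.tail
    return out
```

After `s1 = push(None, 1)` and `s2 = push(s1, 2)`, both `s1` and `s2` remain usable; the versions share structure rather than copying. Okasaki's book is about making richer structures (queues, heaps, balanced trees) efficient under exactly this constraint.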

Interesting. I hadn't heard of that one. Is the book much better than the thesis that he did?
The book expands on his thesis. I'd highly recommend it, especially if you found his thesis interesting.
No list of books is going to substitute for years of interactive study, but if there is going to be a list, CLRS should be (high) on it. You might not need it to write a Rails app in 21 days, but you'll be a more enlightened programmer if you do.
Buy and read:

Once you've gotten through that, consider borrowing TAOCP from your local library.

Is that actually the best book on basic algorithms or just a book that is virally recommended?
It is used in MIT's Intro Algorithms class. That said, it's very rigorous and requires use of discrete mathematics. It might be worth looking into some lighter reading first to see if algorithms are interesting enough to you personally to warrant working through a full textbook.
It's the best general-purpose algorithms textbook I've seen. There are lots of other books which go more in-depth into specific areas, but you need to cover the basics first.
Here's a really good blog post series using that book and the related MIT lectures:
Jan 24, 2010 · 1331 on Algorithms course material
_Introduction to Algorithms_ (aka "CLRS")
CLRS is the classic textbook for Algorithms... For some topics Kleinberg and Tardos is good. I will suggest reading the CLRS text and keeping K&T for topics you want to read more on; also, the K&T problems are (usually) much harder than CLRS's.
I'd also very highly recommend this book, but anyone looking to buy might be interested to know that the 3rd edition comes out this fall (9/30).

"It includes two completely new chapters, on van Emde Boas trees and multithreaded algorithms, and substantial additions to the chapter on recurrence (now called "Divide-and-Conquer"). It features improved treatment of dynamic programming and greedy algorithms and a new notion of edge-based flow in the material on flow networks."

HN Books is an independent project and is not operated by Y Combinator.