HN Theater

The best talks and videos of Hacker News.

Hacker News Comments on
Tech Talk: Linus Torvalds on git

Google · Youtube · 80 HN points · 72 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Google's video "Tech Talk: Linus Torvalds on git".
Youtube Summary
Linus Torvalds visits Google to share his thoughts on git, the source control management system he created two years ago.
HN Theater Rankings
  • Ranked #26 all time

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
While this is true, I think Git is quite different. Linus has said that he's not really a "VCS guy", so apart from the high-level architecture and the high-level concepts of what Git actually is as a VCS, he hasn't had as much impact on Git's codebase as he has had on Linux's.

Antirez, on the other hand, has had a large part in the continued development of Redis since its inception, so for about 11 years now (since 2009).

AFAIK, apart from UI improvements over the years, Git has _largely_ been the same since inception, in that you can watch a video from as far back as, e.g., this one and not much has really changed since those days (by the time Linus gave that presentation, Junio had already started running the project).

Jun 29, 2020 · Puts on GitHub was down
If you let Github become a single point of failure, maybe you are using git wrong? It's named "Distributed version-control" for a reason. I can really recommend this talk by Linus Torvalds about how git is more of a way of working than a piece of software:
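The distributed design the comment alludes to can be made concrete: a repository can push to any number of remotes, so no single host has to be a point of failure. A minimal sketch, using two local bare repositories as stand-ins for hypothetical hosts (all names made up):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# Two bare repositories stand in for two independent hosts
# (say, GitHub plus a self-hosted mirror).
git init -q --bare hostA.git
git init -q --bare hostB.git

git init -q work && cd work
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "initial commit"
branch=$(git symbolic-ref --short HEAD)

# Any repository can push to any number of remotes;
# no single host is structurally special.
git remote add origin ../hostA.git
git remote add backup ../hostB.git
git push -q origin "$branch"
git push -q backup "$branch"

a=$(git -C ../hostA.git rev-parse "refs/heads/$branch")
b=$(git -C ../hostB.git rev-parse "refs/heads/$branch")
```

If one host disappears, any clone holding the full history can seed a replacement.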
I may use git wrong, but I didn't use GitHub wrong. First, GitHub provides more functionality than a bare git repo. Second, I haven't seen anyone or any company use Git in the way Linus describes in the video, and I believe most companies don't either.
The git aspects of github are not the things people build SPOF on.

It’s the code review tooling, the artifact storage & the deployment pipelines.

A distributed version of those would be awesome...

In the case of git, Linus did what he did with Linux and named it after himself.


I didn’t watch through it again to confirm, but if my memory isn’t failing me, it was this talk where he mentioned his naming strategy.

Branching by cloning was copied from bitkeeper. It was also early git's only branching mechanism. If you listen to Linus's talk when he introduced git at Google, you'll hear him conflate "branch" with "clone" because that's what he was thinking of at the time.
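For what it's worth, branching-by-cloning still works in today's git: a clone is a full repository that can diverge and be fetched from, exactly the role a branch plays. A rough sketch using only local paths (repository and commit names hypothetical):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
export GIT_AUTHOR_NAME=a GIT_AUTHOR_EMAIL=a@example.com
export GIT_COMMITTER_NAME=a GIT_COMMITTER_EMAIL=a@example.com

git init -q upstream
git -C upstream commit -q --allow-empty -m "base"

# The clone is a complete repository that diverges on its own,
# which is exactly what a branch does.
git clone -q upstream feature-work
git -C feature-work commit -q --allow-empty -m "feature work"

# The original repository can pull the divergent history back in.
git -C upstream fetch -q ../feature-work HEAD
fetched=$(git -C upstream rev-list --count FETCH_HEAD)
```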

> 1. Linux source was hosted on BitKeeper before git. It basically was one of the first distributed source control systems (or the first? not sure if anything predated it).

There were several systems which predated it. The most arcane one I know of is Sun's TeamWare[1] which was released in 1993 (BitKeeper was released in 2000). It also was used to develop Solaris for a fairly long time.

Larry McVoy (the author of BitKeeper) worked on TeamWare back when he worked at Sun.

> 2. Linux devs got into conflict with the BitKeeper owner over open-sourcing and reverse engineering, so Linus realized they needed a new system because no other source control system had the features they needed like BitKeeper had (mainly, I understand, the distributed repos).

That is the crux of it. Linus does go over the BitKeeper history in the talk he gave at Google a decade ago on Git[2].

[1]: [2]:

I hope this one is next:

Also, I hope that they adopt flac. Right now they seem to use lossy codecs.

They don’t necessarily use the same codec for YouTube’s master copy and the one that you get served. From what I understand, you can upload whatever codec you want to YouTube, and presumably it gets kept in its original format, so it can be transcoded however they like for serving. Anything else doesn’t make sense. Transcoding to FLAC would only make sense if you uploaded PCM to begin with.
I think a reasonable middle road, since most people won't notice or care about the difference, is to support something lossless like flac but not make it the default.

The people who really care could turn it on (perhaps at the account level), and they'd be happy, and YouTube still saves on bandwidth costs.

320 kbps MP3 is already transparent. FLAC is mostly for archival purposes.
they probably would refer to them as bandwidth saving codecs :D but you're right
Yeah, but 160 kbit Opus is so good that it's rarely worth it to use FLAC. I've done 180° phase cancellations against uncompressed sources on a variety of material, and it is so close to transparent that one would have to have some serious OCD to be unsatisfied with it, and even then could not A/B it in a blind test with better than 50% accuracy except on rare source material, and then just barely.
Opus is great, but it's barely worth skimping there when compared to HD video. Still, archival copies aren't YouTube's forte. It makes sense for them, but I wouldn't convert my personal audio collection with storage so cheap.
I was responding to someone lamenting the fact that they stream in Opus instead of Flac. There is zero advantage to be gained from streaming in Flac. So why the lament? If they store archival quality, they wouldn't ever be streaming it anyway.

Furthermore, you underestimate the amount of bandwidth savings when you multiply it over millions of viewings. It really adds up. It totally makes sense for them to use Opus. They save money, and the viewer loses nothing. Shaving pennies is an effective way for a business to cut expenses. Thousands of small changes, insignificant by themselves, can have a significant impact on EBITDA.

> Makes sense for them,

(i.e. Youtube)

Even though all the music I have on my drives is in FLAC, I can almost never tell it apart from Opus 160 kbps, which is what they're using now (though most of the material is re-encoded from earlier codecs).
Yeah I guess that most people won't be able to notice any difference except for the additional traffic needed. FLAC is probably mostly useful for "masters" you keep and re-encode to the currently best lossy audio format (opus at the moment). This way you avoid generational artifacts.
That's obvious. I assume YT got lossless records from labels for their Music offering.
Jan 16, 2019 · someguy101010 on How to teach Git
I really think showing them how it works visually with a DAG is a great way to show someone how git works. Also, Linus has a great talk about git and how it works here:

(Sorry for mobile link, I'm on my phone RN)

I still recommend people to watch this video from the man himself, Linus Torvalds, if they really want to understand the architecture and data structure behind git -

If I remember correctly, he mentioned that he wrote the MVP in 4 weeks, and the data structures he used are quite simple. I never got a chance to look at the source code, but I guess it's on GitHub (at least as a mirror copy).

It only took 4 weeks plus 30 years of programming experience and a genius mind.
there was great excitement and adoption before github

famous linus talk at google 2007

github founded 2008

git's distributed nature was revolutionary. Although not the first (BitKeeper for one preceded it), it has a mind-numbingly simple design of distributing data by content addressing (data is named by its hash, so it has the same name wherever it is).

> So having run into this problem, folks like Google, Twitter, etc. use monorepos to help address some of this.

I think you’re retroactively claiming that Google actively anticipated this when it originally chose Perforce as its SCM. They may believe that it’s still the best option for them, but as I understand it, to make it work they bought a license to the Perforce source code, forked it, and practically rewrote it.

Here’s a tech talk Linus gave at Google in 2007:

My theory (I wonder if someone can confirm this), is that Google was under pressure at that point with team size and Perforce’s limitations. Git would have been an entirely different direction had they chosen to ditch p4 and instead use git. What would have happened in the Git space earlier if that had happened? Fun to think about... but maybe Go would have had a package manager earlier ;)

Maybe Google’s choice for monorepo was pure chance. However, on many occasions the choice was challenged and these kinds of arguments were (successfully) made in order for it to stay.
> I think you’re retroactively claiming that Google actively anticipated this in their choice at the beginning of using Perforce as an SCM.

Oh I didn't mean to imply exactly that, but really good point. I just meant that it seems like folks don't typically _anticipate_ these issues so much as they're forced into it by ossifying velocity in the face of sudden wild success. I know at least a few examples of this happening -- but you're right, those folks were using Git.

In Google's case, maybe it's simply that their centralized VCS led them down a certain path, their tooling grew out of that & they came to realize some of the benefits along the way. I'd be interested to know too. :)

What would version control look like after Git ?

Git famously was not built for monorepo [1]

I would like to see sub-tree checkouts, sub-tree history, storing giant asset files in the repo (without the git-lfs hack), more consistent commands, and some sort of API through which compilers and build systems can integrate with revision control.


WWgitND, or “What Would git Not Do”.
Jun 07, 2018 · 2 points, 0 comments · submitted by drexlspivey
John Carmack is an absolutely brilliant speaker. Conversational, captivating and effortlessly natural. I could listen to him talk all day about the most arcane bits of graphics development, which I'll never understand but am fascinated by regardless.

His QuakeCon talks are particularly good.

QuakeCon 2011 -

QuakeCon 2012 -

I like his talks because he's always interested in what he's doing, and it tends to make me interested in code again.

Linus Torvalds is another surprisingly good speaker. His talk on git - one of the driest possible topics - was very interesting. There are not many other people I'd sit and listen to talk about SCM.

Linus Torvalds' talk on Git:
May 08, 2017 · hdhzy on Plan Your Commits
IIRC that's almost exactly what Linus said about git: that it's a toolbox for building an SCM rather than an SCM itself [0].


Apr 22, 2017 · 5 points, 1 comments · submitted by dgellow
Could an admin add '2007' to the title and fix Linus's name?
Linus: "You can have people who try to be malicious... they won't succeed."

Linus talked about why git's use of a "strong hash" made it better than other source control options during his talk at Google in 2007.

Edit: the whole talk is good but the discussion of using hashes starts at about 55 min.

> but git doesn't actually just hash the data, it does prepend a type/length field to it.

To me it feels like this would just be a small hurdle? But I don't really know this stuff that well. Can someone with more knowledge share their thoughts?
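A sketch of what the header means in practice: git's object id is the SHA-1 of "<type> <size>\0" followed by the content, so the same bytes claimed as a different type (or length) hash to a completely different name. You can reproduce git's well-known empty-blob id with nothing but sha1sum:

```shell
# Git hashes "<type> <size>\0" + content, not the raw bytes alone.
# The id of the empty blob is a well-known constant:
id=$(printf 'blob 0\0' | sha1sum | cut -d' ' -f1)
echo "$id"     # e69de29bb2d1d6434b8b29ae775ad8c2e48c5391

# The same (empty) content claimed as another type hashes to a
# completely different id, so a collision crafted for one object
# type cannot simply be replayed as a different type.
other=$(printf 'tree 0\0' | sha1sum | cut -d' ' -f1)
echo "$other"  # 4b825dc642cb6eb9a060e54bf8d69288fbee4904
```

The second constant is git's famous empty-tree id, which is exactly why the header is more than "a small hurdle": an attacker's colliding bytes must also carry a valid header for the object type and length git expects.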

I think Linus also argued that SHA-1 is not a security feature for git. Has that been changed?

Yeah what is the attack here?

If you don't have permissions to my repo, that already limits the scope of attackers to people who already have repo access. At that point there are tons of simpler abuse avenues open.

If someone could fork a repo, submit a pull request, and push a sha for an already-existing commit, and that got merged and accepted (but didn't show up in the PR on GitHub), that would certainly be troubling. But at that point I'm well past my understanding of git internals as to how plausible that kind of attack would be...

If the sha matched an existing one, would it be merged as part of the changeset, or just ignored?
If a sha matches, git won't fetch the object because it already has it.
Repo access doesn't stop people from injecting code into your repository. A pull request actually puts objects into your repo, just under a different ref than heads and tags.

1. On GitHub, do a git clone 'repo' --mirror

2. cd into the bare repo.git and do a git show-ref; you will see all the pull requests in that repo.

If any of those pull requests contained a duplicate hash, then in theory they would be colliding with your existing objects. But since git falls back to whatever was there first, I think it would be very challenging indeed to subvert a repo. You'd essentially have to find somebody who's forked a repo with the intention of submitting a PR, say, on a long-running feature branch.

You could then submit your PR based off hashes in their repo before they do, which would probably mean your colliding object would get preference.

It's pretty far-fetched, but the vector is non-zero.

It's easier than that. Someone could create a benign patch and a malicious patch whose hashes collide. But what could they do with that?

For example, let's say it goes (letter##number represents the hashes):

State 1:

Real repo: R3 -> R2 -> R1 -> R0 (note: R3 is the head)

Benign PR: B1 -> B0 -> R1

Malicious PR: M1 -> M0 -> R1 (note: M0 = B0, but the contents are different)

State 2, after merging Benign PR:

Real repo: R4 -> R3 -> R2 -> R1 -> R0, R4 -> B1 -> B0 -> R1

If Malicious PR was merged now, Git would, I imagine, just believe that M1 is a commit that branched off of B0, since that's where the "pointer" is at.

So, yeah, what would this actually accomplish?

OP actually pointed to an interesting reference[1] which briefly touches on the impact of pseudo-random number generation, amongst other things.

Diving a little bit deeper down the same rabbit hole, David A. Wheeler[2] (as he emphatically insists on being cited) also made note of threads with non-deterministic scheduling. Lots of notes on hardware-centric implications as well.

Definitely turns my perspective around on what it means to trust software. I suppose the effort is a proper graduation from masturbating[3] to safe sex, so to speak.




I honestly don't believe it is an exaggeration. Take a look at this video for Linus's thoughts
Linus Torvalds talk about git

Was going to post the same one. Great talk!
> With a DVCS, though, we have a problem. You do have the whole history at any given point, which in turn means you need to have every version of every blob. But because blobs are usually compressed already, they usually don’t compress or diff well at all, and because they tend to be large, this means that your repository can bloat to really huge sizes really fast.

You can shallow-clone a repository in git. This looks more like a question of 'What should be the default mode for source control: centralized or decentralized?'. I think it should be decentralized, since it requires more discipline.
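For reference, a shallow clone really does drop the history the quoted paragraph worries about. A small demonstration with a throwaway local repository (a file:// URL is used because git ignores --depth for plain local paths):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
export GIT_AUTHOR_NAME=a GIT_AUTHOR_EMAIL=a@example.com
export GIT_COMMITTER_NAME=a GIT_COMMITTER_EMAIL=a@example.com

git init -q full && cd full
for i in 1 2 3; do git commit -q --allow-empty -m "commit $i"; done
cd ..

# --depth 1 fetches only the most recent commit, not every
# historical version of every blob.
git clone -q --depth 1 "file://$tmp/full" shallow

deep=$(git -C full rev-list --count HEAD)
shal=$(git -C shallow rev-list --count HEAD)
```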

> Git is so amazingly simple to use that APress, a single publisher, needs to have three different books on how to use it. It’s so simple that Atlassian and GitHub both felt a need to write their own online tutorials to try to clarify the main Git tutorial on the actual Git website. It’s so transparent that developers routinely tell me that the easiest way to learn Git is to start with its file formats and work up to the commands.

This could simply mean that it is an amazing piece of technology that everyone loves to talk about. It could also mean that it is a bit more complex, but that doesn't mean the complexity isn't worth the benefits.

> There is one last major argument that I hear all the time about why DVCSes are superior, which is that they somehow enable more democratic code development

I think the point that was made was that the issue of commit access goes away (actually, that whole talk is pretty interesting).

To be perfectly honest, most companies would do fine with subversion and a very good web based user interface. The simplicity and the advantage of not having to teach people set in their ways may work in its favor.

Sep 30, 2015 · 1 points, 0 comments · submitted by kevando
A video of this wonderful Google tech talk is available on YouTube:

Jun 04, 2015 · timtadh on Tmux has left SourceForge
Agreed. As a high schooler I loved sourceforge. I would talk it up to people and I had a couple of projects that I put up there. I thought it was the best thing since sliced bread.

Then I saw that famous talk by Linus on git in 2007. Since I had never managed to get SVN working properly for me, git was awesome. No server software to install. By the time I wanted to put up another project on the web, GitHub was a thing and I used that. I never looked back; I loved how it was about the code, not how many installer downloads you had.

That for me was the main problem with sourceforge. In the end it was a game (for the devs) to get the most downloads because that was how your projects were judged and ranked. The Github "game" is slightly better and there are multiple ways to play.

> I call that filter bubble. I like using git for local commits, but 15 out of 20 persons here don't even take their laptop with them and I bet that 17 out of 20 won't touch code on the road. Maybe at home, if they have to. A decent centralized VCS would totally do.

It's not only "for local commits", although being able to have local branches without polluting a public namespace is a huge win. It's also about _speed_ when you're doing VCS operations. Linus Torvalds actually made the case really well in his talk:

> There are a lot of companies that actually would prefer if the code never left the premises and have a use-case for finer grained permissions (some folks can only touch the assets, others can only ever see the dev branch, can't see history,...), things that are by definition not possible in a DVCS.

That's a question that's completely orthogonal to whether or not you use a DVCS. How is a "traditional" VCS going to help you when you can check out the code locally and smuggle it out on a flash drive?

In my company, we use git and there are access restrictions as to who can access and commit to our branches.

> Storing large assets in git sort of sucks and requires ugly hacks. I'd love to version the toolchain and the VM images for the local development environment, but that's just not feasible with git.

...and that's not the use case for git. Linus has been very clear about _what_ git is optimized for, performance-wise.

That doesn't mean that DVCSes in general are useless for storing large assets, but that the most popular implementation is. Also, I'm not really sure what traditional VCS you're referring to, that makes it easy to version VM images and remain storage efficient?

Feb 06, 2015 · 2 points, 0 comments · submitted by gshrikant
Aug 23, 2014 · hhsnopek on Project Gitenberg
> Every time a programmer has an issue with Git, whoever helps them has to sit down and explain the underlying system for 20 minutes and draw a bunch of sticks and bubbles.

This isn't true at all for a lot of people. I know a lot of people that just read the docs and are able to solve the issues. Others will Google the problem and find the solution on stack overflow. Everyone learns differently...

> Anytime someone shoe-horns it into a product they talk about how Git is so amazing and solves all these problems, but what they are really talking about is just a version control system, not Git specifically.

Git is amazing and does solve a lot of problems, but there are problems that aren't solved by Git. Even Linus himself says this here.

Using the GitHub API, rather than git, for creating epub books and PDFs is great. Using git to track changes as they do is perfect as well.

> Non-programmers will never put up with this.

Ermm don't assume that everyone gives up right away. With the GUI interfaces we have today, Git is really simple once you learn it.

> Git is really simple once you learn it.

Pretty much everything is simple once you learn it. That's what learning is. But git certainly doesn't go out of its way to make that process easy.

>Pretty much everything is simple once you learn it.

I wouldn't say so. A lot of things are designed-by-committee implemented-by-the-lowest-bidder messes that are painful and complex even once you know how they work.

Git may have some weird design decisions but for the most part it's well-implemented and follows a simple conceptual model.

Aug 03, 2014 · Walkman on Why Git sucks?
No offense, but it seems your problems come from an unwillingness to learn git properly. Most of them would go away if you took the time to learn it (reading the Pro Git book, learning from visual tools, of which there are a ton, etc.). (Again, no offense, just a suggestion.)

> Huge number, about 148, of commands with many sub-commands that often do non-obvious, even unrelated things.

You don't need to know all 148 of them. If you know only 16 (about 1/9 of them): add, branch, checkout, commit, config, fetch, log, pull, push, rebase, reflog, remote, reset, show, stash, status — you can already do pretty complex things. For day-to-day usage, these are more than enough.
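A toy session using a handful of those commands, just to illustrate the day-to-day loop (file and branch names hypothetical):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
export GIT_AUTHOR_NAME=a GIT_AUTHOR_EMAIL=a@example.com
export GIT_COMMITTER_NAME=a GIT_COMMITTER_EMAIL=a@example.com

git init -q project && cd project
echo "hello" > README
git add README                       # stage the change
git commit -q -m "add README"        # record it
git checkout -q -b topic             # start a topic branch
echo "more" >> README
git commit -q -am "extend README"
git checkout -q -                    # hop back to the original branch

on_main=$(git rev-list --count HEAD)     # topic's commit is not here
on_topic=$(git rev-list --count topic)   # both commits live on topic
```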

> git rebase enables users to rewrite history which subverts what should be the main, if not only, function of a version control system.

Git rebase enables very simple workflows, like the GitHub workflow[1], which IMO is quite simple and low-overhead, pretty useful, and scalable. Git rebase is also for the local repo only: every tutorial, and even git itself, recommends against pushing rebased branches to an already-pushed remote repository. If you keep this in mind, it makes sense.

> ... to rebase heavily to create a false history presenting themselves as fantasy 10X programmers who don't make mistakes, ...

I think the reason is not that they want to hide their mistakes, but to clean up the history. Would you rather see typos in the history? IMO it's just cruft; an amend or rebase is pretty much just a cleanup so you can read the history more easily, and you don't have to deal with meaningless 'Fixed a typo' commits...
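The "clean up instead of committing noise" idea can be shown with commit --amend, the simplest form of history rewriting: the typo fix is folded into the original commit, so no 'Fixed a typo' commit ever appears (throwaway repo, hypothetical file names):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
export GIT_AUTHOR_NAME=a GIT_AUTHOR_EMAIL=a@example.com
export GIT_COMMITTER_NAME=a GIT_COMMITTER_EMAIL=a@example.com

git init -q repo && cd repo
echo "helo wrld" > doc.txt            # oops, typos
git add doc.txt
git commit -q -m "add doc"

echo "hello world" > doc.txt          # fix the typos...
git add doc.txt
git commit -q --amend --no-edit       # ...and fold the fix into the commit

count=$(git rev-list --count HEAD)    # history still has exactly one commit
content=$(git show HEAD:doc.txt)
```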

> There are numerous commands to do similar, overlapping but nonetheless different tasks. If for example you want to back up to earlier versions of code you can do git revert, git reset --hard, git reset --soft, git checkout -- <SHA>, git checkout <SHA> -- all of which do different but overlapping things.

git revert is not the same as git reset. git revert creates a new commit on top that undoes the commit you want to revert; git reset moves HEAD. Totally different operations. git reset --hard, --soft, etc. do the same thing to HEAD and differ only in how they handle the index and the working tree, and once you understand that, I think it makes sense.
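A quick demonstration of that difference in a throwaway repository: revert grows the history with an inverse commit, while reset --hard moves HEAD back and discards commits locally:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
export GIT_AUTHOR_NAME=a GIT_AUTHOR_EMAIL=a@example.com
export GIT_COMMITTER_NAME=a GIT_COMMITTER_EMAIL=a@example.com

git init -q repo && cd repo
for i in 1 2 3; do
    echo "$i" > f.txt
    git add f.txt
    git commit -q -m "c$i"
done

# revert ADDS a new commit that undoes the last change: history grows.
git revert --no-edit HEAD >/dev/null
after_revert=$(git rev-list --count HEAD)   # 3 commits + the revert

# reset MOVES HEAD back: later commits vanish from this branch.
git reset -q --hard HEAD~2
after_reset=$(git rev-list --count HEAD)
```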

> A supposed virtue of git is that it makes it extremely easy to create "branches." This often leads to a proliferation of branches, making locating changes and reconstructing the history of changes extremely difficult.

If you stick to a simple workflow and rebase constantly, this is not a problem at all. Not everybody needs git-flow[2]. If you work on the same files with somebody, I think this will always be a problem, no matter what VCS you use, but with git there are a couple of merging strategies which can facilitate the process.

> Cryptic hexadecimal SHA for identifying commits instead of sequential numbers for individual files as well as snapshots of the entire project.

The SHA-1s are for security. A commit's SHA is actually the SHA-1 of that snapshot, so if you change any bit of that snapshot manually, git will complain about it[3]. It basically means that nobody can change code unnoticed!
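This claim is easy to check: re-hashing the raw bytes of a commit object reproduces its id exactly, which is why any modification would change the name everything else points at (throwaway repo, hypothetical file names):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
export GIT_AUTHOR_NAME=a GIT_AUTHOR_EMAIL=a@example.com
export GIT_COMMITTER_NAME=a GIT_COMMITTER_EMAIL=a@example.com

git init -q repo && cd repo
echo "some data" > f.txt
git add f.txt
git commit -q -m "a commit"

# The commit id is literally the hash of the commit object's bytes
# (header + content), so flipping any bit would change the name.
id=$(git rev-parse HEAD)
rehash=$(git cat-file commit HEAD | git hash-object -t commit --stdin)
```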

> Lack of backward compatibility with RCS-style version control systems like Subversion.

I don't really understand what you mean here, but there is the git svn command, which is perfectly compatible with SVN. Recently I converted a whole SVN repository to Git without losing a single commit or message or anything... Also, Git supports many more workflows[4], not just centralized ones.

> ... many obvious problems with Git.

Of course git is not perfect, but instead of ranting about it, you could learn it in a couple of days and work with it, because many of us think it's pretty flexible and as simple as you want it to be. Actually, if you learn only two commands, add and commit, you can use Git; nobody stops you doing that if you find the other parts difficult.





> Would you rather see typos in a history?

Absolutely. A normal git log could show only the branch merges, but if you dug deeper you'd still see them. Or git could even allow marking a commit as "superficial" (Wikipedia has superficial edits). It is very easy to hide commits, but once rebase destroys them it is impossible to see them even when needed.

Consider how scientific research works: you write down everything you do, preserve it, and publish a book/paper/etc. at the end. If it turns out you were wrong, it's possible to peek at the notebooks later; if you were right, no one will ever care.

I never rebase commits that clearly show how I got from point A to point B, even if I was wrong the first time, but keeping typos in the log is, I think, silly.

I actually have been using Git extensively for 1 1/2 years. I have read the Pro Git book and many other sources cover to cover several times. I use magit with Emacs and gitk extensively. I gave up on the Android repo utility after it repeatedly goofed up the equivalents of git pull --rebase.

My complaints about Git are based on direct experience and watching others struggle with it.

Yes, I can and have set up Git in a very simple way, not unlike an RCS project, that avoids most of the issues I discuss. No rebasing. Minimal use of branches, just a simple approximation of a development/production branch system. Nonetheless, several problems with Git still bite me, such as the lack of sequential ids for files and the confusing terminology (staging area, the index, etc.) and documentation.

For a better Git:

1. Clean up the documentation and man pages to remove the confusing terminology. Settle on one name for the "index" and so on.

2. Ditch rebasing.

3. Use sequential ids with a suffix of some sort for local changes. e.g. 1.2 on the master, 1.3.repo.branch or something like that for local changes. Keep the SHA, hide the SHA, but give users an easy intuitive way to access the SHA if needed.



    1.3 \
    1.4   1.4.myrepo.master
    1.5   |
      \   /

(a local branch forks from the mainline and later merges back)



I would guess there is some scheme like this in some other distributed version control system.

4. Simple checkin/checkout commands like RCS
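For what it's worth, git's closest built-in approximation of human-readable sequential ids is git describe, which names a commit relative to the nearest tag (tag names below are hypothetical):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
export GIT_AUTHOR_NAME=a GIT_AUTHOR_EMAIL=a@example.com
export GIT_COMMITTER_NAME=a GIT_COMMITTER_EMAIL=a@example.com

git init -q repo && cd repo
git commit -q --allow-empty -m "first commit"
git tag -a v1.2 -m "release 1.2"
git commit -q --allow-empty -m "one more change"

# "v1.2, plus 1 commit, currently at g<abbrev-sha>": readable,
# ordered, and still anchored to the underlying SHA.
name=$(git describe --tags)
echo "$name"
```

It keeps the SHA available (the g<abbrev> suffix) while giving users an ordered, human-friendly handle, which is roughly the spirit of item 3.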


Bit by Git

> 2. Ditch rebasing.

You really are completely clueless. Rebasing is invaluable for use on private branches that no one will ever see but the developer. It allows you to clean up your history before others see it.

EDIT: oh, and you can make a hook (like I did from day 1) that disallows non-fast-forward pushes to certain public repos.

This article brings back memories. Back in 2007, I had just watched Linus' presentation on Git at Google, where he called all non-distributed version control systems useless. I could not make any sense of DVCS from his talk. I tried to play with Git and it was extremely frustrating due to the poor CLI. I thought maybe Git was just a fad. But then more and more people kept talking about how awesome it is.

This was one of the articles where Git finally clicked for me. The key quote:

There is no need for fancy metadata, rename tracking and so forth. The only thing you need to store is the state of the tree before and after each change. What files were renamed? Which ones were copied? Which ones were deleted? What lines were added? Which ones were removed? Which lines had changes made inside them? Which slabs of text were copied from one file to another? You shouldn't have to care about any of these questions and you certainly shouldn't have to keep special tracking data in order to help you answer them: all the changes to the tree (additions, deletes, renames, edits etc) are implicitly encoded in the delta between the two states of the tree; you just track what is the content.

I've been using Git for 7 years now, and I can't imagine how I ever worked with version control that didn't operate on the entire tree.

This is a good one on version control: (Linus' Git talk at Google)

And as far as the business of software, especially the internet, surveillance, trust, and marketing: (Bruce Schneier talk at Google shortly after Snowden revelations)

I think I saw that one by Linus on Youtube but didn't have time to watch it. I'm going to have this class use Git to submit all their work (project and otherwise). I've used it a lot for my own work, but never on a team project, so I will need something of a primer on how to do branching, merging, and so forth.

I understand there's a typical workflow in which you have 'master' for the releases, 'dev' for the latest working version, and feature branches are branched off 'dev'. If there's a good video on how to do that, it'd be ideal.
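A minimal sketch of that workflow in a throwaway repository, assuming the branch names above ('dev' as the integration branch, a feature branch off it); --no-ff keeps the branch structure visible in history:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
export GIT_AUTHOR_NAME=a GIT_AUTHOR_EMAIL=a@example.com
export GIT_COMMITTER_NAME=a GIT_COMMITTER_EMAIL=a@example.com

git init -q repo && cd repo
git commit -q --allow-empty -m "initial release"
release=$(git symbolic-ref --short HEAD)    # 'master' or 'main'

git checkout -q -b dev                      # integration branch
git checkout -q -b feature/login dev        # feature branch off dev
git commit -q --allow-empty -m "work on login"

git checkout -q dev                         # feature done: fold into dev
git merge -q --no-ff -m "merge feature/login" feature/login
git branch -q -d feature/login

git checkout -q "$release"                  # cut a release: fold dev in
git merge -q --no-ff -m "release: merge dev" dev

merges=$(git rev-list --count --merges HEAD)
total=$(git rev-list --count HEAD)
```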

I don't know of a video. But you should check out this project and its readme:

edit: oh, the readme links to screen-casts! Also, I've worked on teams that require gitflow so I assume it's useful experience for students to have.

That link led me to another, which led to this original article on the git-flow model:

Thanks! That's just what I was looking for.

During his speech at Google: "[after reviewing the alternatives] the end result was I decided I can write something better than anything out there in two weeks. And I was right."

Something's still not clear about his assumptions in that talk. He says "you just continue to work" and "you could do everything you would do" [if you were connected online]. But this is not true when it comes to integration of source control with issue-tracking.

If you integrate source control with issue-tracking, you start to want source code changes to correspond to issue numbers, and changes to be pushed if an issue is in a certain state or owned by a specific developer. But you can't get this information when you are offline without access to the issue-tracker.

Either there's a case for distributed issue-tracking too, which would give you that information, or issue-tracking being centralized pushes back on source control wanting to be distributed.

I wish this was addressed in Git's assumptions.



I've been mulling over issue tracking lately, since I hate Jira and all the other ones I've used. Wouldn't issues being in states and having owners violate the distributed model? Version control used to have file owners and locks, which is the same thing, and git broke from that.
Sounds like you want Distributed version control with integrated wiki and bug tracking, from the creator of SQLite.
Another option is Veracity by Sourcegear.
Their DVCS comparison page is very informative:
According to his recent save format post [1], he spends a lot of time thinking about something before he starts prototyping.

> So I've been thinking about this for basically months, but the way I work, I actually want to have a good mental picture of what I'm doing before I start prototyping. And while I had a high-level notion of what I wanted, I didn't have enough of an idea of the details to really start coding.

So two weeks seems reasonable to me, especially if he built a prototype first. (Did he?)


Isn't he blurring the line between "commit" meaning creating a state you can roll back to (in all version control systems), and "commit" meaning pushing to the "official" code base in a non-distributed system? In other words, some issues with non-distributed systems that he highlights (having to keep an elite group deemed smart enough to integrate changes, for example) still exist when pushing to a consensus "official" instance of a distributed repo.
Probably because those two actions are identical in CVS, SVN, etc., which most people at the talk were familiar with, unlike the separate commit and push steps in a DVCS (hg, git, bzr, etc.).
His language is pretty muddled in this talk. It's clear he didn't really spend a lot of time thinking about precise and unambiguous language for all of the things he was talking about. He also seems to conflate "branch" and "clone".

This muddled language later turned into a muddled UI that was supposed to get cleaned up, but never did.

In what way does git conflate "branch" and "clone"?
I can't help but think that he was conveniently leaving out details that give the entire story - clearly he thinks the audience are unintelligent sheep, so surely they'll take him at his word, right?
Yeah, that was a great talk. I highly recommend people watch it.

He was definitely right about that, what I like about Linus is he really does seem to prefer to do things in as concrete and simple a way as possible instead of getting mired in abstractions.

He mentions that git wasn't half as good if it wasn't for the help he got after his initial creation that made it cleaner and better.
Writing the basics of a DVCS in two weeks isn't a unique feat. Mercurial was created in almost exactly the same way, and was faster than git (and still is, e.g. for blame and clone operations) and actually was conceived with a UI from the beginning.

I'm pretty sure the only reason git took most of the mind share instead of hg was (1) github (2) Linus admiration.

Every occasion is good to start a flame war, isn't it?
(3) C implementation

There's a strong aversion to using tools written in Python outside the Python community. It's irrational, but being an inherently interpreted language puts it in the same perception bucket as Perl, and who'd want to use a version control system written in Perl?

Dude, python IS in the same bucket as perl. The list of pros and cons when compared to C is identical.
There is one difference - you need an external tool to properly obfuscate Python source code. :)
Shhh... don't tell HN.
Well, you're also missing some reasons:

- git has an 'email' centric workflow

- The index with git (git add -i) is awesome for people doing patch maintenance

- the rebase workflow is great when you want to maintain against an upstream that you expect someone other than you to commit your changes to, or you have a "pre-publish" phase.

It's almost like those reasons follow the Linux workflow... almost perfectly.

Sigh, I wish people would actually know about Mercurial. It was created for the exact same reason as git, at the same time, and is also intended for kernel development. In particular,

- Mercurial also has an email-centric workflow, look at its mailing list:

- The staging area can be achieved in hg in several ways, or it can be avoided as desired. One easy way is to just pick apart your commit with hg (c)record --amend and just keep adding to your commit as necessary.

- Rebasing has also been available in hg forever (and wasn't available in git from the beginning). We also have some really interesting changes brewing for rebasing in Mercurial:

These ideas in hg were all for following the Linux workflow too.

Nothing against git, but it's important to not deify Linus as if he were a unique genius. He has his flaws, and so does the software that he's responsible for.

I've used both.

Mercurial beat git past the starting gates, but git has the advantage of being used on a very widely used, widely developed, immensely high profile project. While Hg has its own commendable set of projects, git has simply won the mindshare battle.

Could be worse: there's arch and bzr and probably some others.

Hell, even GNU/EFF are looking to ditch their own VCS in favor of git from what I understand.

Yes, that's true: git's main advantage is how widespread it is.

GNU Emacs decided to switch from bzr to git, but GNU as a whole doesn't have a recommended VCS yet. In GNU Octave we will keep using hg. I don't think the EFF is too public about which VCS they use.

I meant to type "GNU/FSF". I thought I caught myself and corrected the "EFF" bit. I knew I was going to write that. And damnit, I did.

But yes, I meant "FSF", not "EFF".

Mercurial went a long way down the "never modify history!" road (i.e. never rebase). Turns out that attitude, while theoretically pure, isn't very pragmatic. There are lots of valid reasons for rebasing.

Hg eventually came around but, for me, the damage had been done. It's the second biggest reason I went with git.

(The biggest, of course, is you use git if you want to interact with kernel devs. But lots of kernel devs hated git in 2005/2006. Mercurial had a reasonably big window of opportunity... all imo, fwiw)

- git has an 'email' centric workflow

That's a good observation. Git seems to assume its distributed issue-tracker is email.

Those aren't big things outside Linux.
Email centric workflows are used for lots of projects other than the linux kernel.

My workflow for a lot of the projects I contribute to is something like:

edit -> commit (repeat as necessary) -> rebase -i -> format-patch -> send patchset by email
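That flow can be sketched end to end in a throwaway repo. A few caveats: the file contents and the mailing-list address are invented for the demo, `git rebase -i` is interactive so the squash step below uses the non-interactive `git reset --soft` equivalent, and `git send-email` is shown but commented out because it needs a configured SMTP server.

```shell
# Sketch of the edit -> commit -> squash -> format-patch -> send flow.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com && git config user.name "You"
echo v1 > code.c && git add code.c && git commit -qm "initial import"
echo v2 >> code.c && git commit -qam "WIP: first half of the fix"
echo v3 >> code.c && git commit -qam "WIP: rest of the fix"
# `git rebase -i HEAD~2` would let you squash these interactively;
# this is the scriptable equivalent:
git reset -q --soft HEAD~2
git commit -qm "fix: one clean, reviewable change"
# Turn the tidy commit into a mailable patch file:
git format-patch -1 -o outgoing/
# git send-email --to=list@example.org outgoing/*.patch   # (not run here)
```

The point of the squash step is that reviewers on the list see one coherent patch, not your messy intermediate commits.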

> - git has an 'email' centric workflow

Which is surprising, given how rough all the email-involving commands of git are. Badly documented (git-send-email man page doesn't even come close to documenting all options properly), awful error reporting and a badly explained (although, in itself, quite sane) workflow when sending the patches.

When setting up my email setup, I consulted the underlying Perl script more than the docs :(.

Here is a great video from a Google Tech Talk starring Linus Torvalds (about Git): Note - Linus uses some strong language.
> Note - Linus uses some strong language.

That's a bit redundant.

"Linus says things" would have been enough :)
How liberating it must feel to him to not give a damn about what anyone thinks about what he says. Good for him.
In Linus Torvalds' talk on git at Google[0], he mentions that the reason he did not choose SVN (one of the reasons he saw no other way but to build Git) was that SVN's mission statement read "To create a compelling replacement for CVS"... which Linus hated with a passion, and thus he could not see why he would even use "a compelling replacement" for it.

Interestingly enough, in the 'Open Source projects and poisonous people' talk[1] with two SVN developers, they mention how they chose that line specifically to make sure that everybody is on the same page with the scope of the project - improving upon what people knew and were used to instead of wild, new things. Specifically because they would often have people come into the community with some grand plans that would have just exploded the scope.

It is of course arguable to what extent there is a general rule here, but in this case Git is just so fundamentally and obviously the better option than SVN. Had somebody tried to turn SVN into Git, it would have simply killed SVN, so it was the right course of action to make it a new project. SVN was destined to die by its design.

To see Git's adoption rates these days (particularly due to tools like GitHub) is very encouraging to people who like to think that sometimes starting over fresh is the only option. Of course, the thing you want to build really has to be that much better than the old thing you're trying to replace.

So I agree with you - it's alright to say that Git is better than SVN. The crucial point to me, though, is that it is so much better. So much better that it is worth the effort to build and worth the effort of the community to switch.

[0] [1]

In Linus Torvalds' talk on git at Google[0], he mentions that the reason he did not choose SVN (one of the reasons he saw no other way but to build Git) was that SVN's mission statement read "To create a compelling replacement for CVS"... which Linus hated with a passion, and thus he could not see why he would even use "a compelling replacement" for it.

Having experienced the workflow they learned from using another DVCS (Bitkeeper) probably also influenced his decision at the time.

- why is the sky blue?

- why do birds sing in the morning?

here's why from G-O-D itself:

I was curious to see if anyone posted this already :-)

Awesome talk by my favorite programmer :-P

Feb 11, 2014 · 6 points, 0 comments · submitted by atmosx
I still think the absolute best way to learn git is to watch the talk Linus gave at Google some years back - - just understanding the underlying concepts makes the rest of it easy to grasp.
If nothing else, you can at least see what a Google talk with Linus making more light-hearted remarks looks like, compared to the LKML-related posts discussed months ago.
Nov 25, 2013 · 1 points, 0 comments · submitted by mikeruby
Don't worry about needing to catch up. Stuff is moving so fast these days, you're always working with something new. Everyone is in continual-update mode, so it's not like you have 10 years of catching up to do. Tech has turned over 10 times since then. You could say 10 years and 2 years are functionally equivalent from a new-tech point of view.

And don't worry about corps and recruiters. Focus on a problem you want to solve, and update your skills in the context of learning what you need to know to solve that problem. If you can leverage your industry experience in the problem domain, even better.

Data is driving everything so developing a data analysis/machine learning skillset will put you into any industry you want. Professor Yaser Abu-Mostafa's "Learning From Data" is a gem of a course that helps you see the physics underpinning the learning (metaphorically of course -- ML is mostly vectors, matrices, linear algebra and such). The course videos are online for free (, and you can get the corresponding book on Amazon -- it's short (

Python is a good general purpose language for getting back in the groove. It's used for everything, from server-side scripting to Web dev to machine learning, and everywhere in between. "Coding the Matrix" (, is an online course by Prof Philip Klein that teaches you linear algebra in Python so it pairs well with "Learning from Data".

Clojure ( and Go ( are two emerging languages. Both are elegantly designed with good concurrency models (concurrency is becoming increasingly important in the multicore world). Rich Hickey is the author of Clojure -- watch his talks to understand the philosophy behind the design ( "Simple Made Easy" ( is one of those talks everyone should see. It will change the way you think.

Knowing your way around a cloud platform is essential these days. Amazon Web Services (AWS) has ruled the space for some time, but last year Google opened its gates ( Its high-performance cloud platform is based on Google search, and learning how to rev its engines will be a valuable thing. Relatively few have had time to explore its depths, so it's a platform you could jump from.

Hadoop MapReduce (,, has been the dominant data processing framework the last few years, and Hadoop has become almost synonymous with the term "Big Data". Hadoop is like the Big Data operating system, and true to its name, Hadoop is big and bulky and slow. However, there is a new framework on the scene that's true to its name. Spark ( is small and nimble and fast. Spark is part of the Berkeley Data Analytics Stack (BDAS -, and it will likely emerge as Hadoop's successor (see last week's thread --

ElasticSearch ( is good to know. Paired with Kibana ( and LogStash (, it's morphed into a multipurpose analytics platform you can use in 100 different ways.

Databases abound. There's a bazillion new databases and new ones keep popping up for increasingly specialized use cases. Cassandra (, Datomic (, and Titan ( to name a few ( Redis ( is a Swiss Army knife you can apply anywhere, and it's simple to use -- you'll want it on your belt.

If you're doing Web work and front-end stuff, JavaScript is a must. AngularJS ( and ClojureScript ( are two of the most interesting developments.

Oh, and you'll need to know Git (, See Linus' talk at Google to get the gist ( :-).

As you can see, the opportunities for learning emerging tech are overflowing, and what's cool is the ways you can apply it are boundless. Make something. Be creative. Follow your interests wherever they lead because you'll have no trouble catching the next wave from any path you choose.

Thanks for this. Quite incredibly valuable comment. This is why i love HN.
I'm a web developer that considers myself "up-to-date" but there was quite a bit in there that I need to read up on (notably Hadoop and ElasticSearch). Thanks for the links!

I'd also recommend, as some alternatives:

* Ruby as an alternative "general purpose language"

* Mongo as an alternative swiss army database

* Backbone + Marionette as an alternative front-end JS framework

* CoffeeScript as a better Javascript syntax

I only have one Linus story. Way back in 2007, he was supposed to come and give a talk on git. I had to do an on-site interview with a candidate immediately beforehand, but managed to get it done and got there in time to see the talk start. I had never seen him in person and figured it might be interesting, particularly since I had actually used git on some projects prior to that point.

This wound up being recorded, and it's online here:

I think I lasted ten minutes and left after deciding it was too much. There was enough of that kind of energy going around already to willingly sit there and take in more. Besides, I had an interview to write up.

I lasted eight minutes watching the video. He just rambles about how screwed everything is and how right he is.
You would see it differently if you knew CVS :-)
Interesting how different people can see the same thing. To me Linus seems to be mostly pleasant during that talk and to the point. I have heard ranty talks and this is not one of them.

I assume it must be a cultural difference, since I also checked with my colleagues and they do not think Linus was rude during that talk either.

Jul 16, 2013 · hkolek on Verbal abuse on LKML
I think he means the git talk at google [1] where he defines a few things at the start, among them "wrong, stupid: people who disagree with linus" and states that therefore, during the talk, by definition everyone who disagrees with him is stupid and butt-ugly.


Which is the exact opposite of doing it in person, though; it's being all cute and funny about it. Actually doing that in person would look and feel radically different.
i'm 100% certain linus is not being cute, or funny. he means it when he says certain people are stupid.
Compared to the stuff he types into his keyboard it's still mild, and cute and funny in comparison.

Also, stupid is relative anyway. Who isn't stupid? Is Linus actually that stupid that he thinks he isn't stupid? And if he is, what kind of weight does he think this insult carries out of his mouth? Stuff gets tricky real quick like that.

Really? When Linus did his talk about git for Google Tech Talks, he explained that the reason they parted ways was that many of the Linux developers took issue with working in a non-libre environment, I seem to recall...


here's the link:

BitKeeper was non-libre for a long time, but they dropped the free-to-the-community version[1] which forced the move. Linus drew criticism for a long time over the usage of a non-libre version control system for the Linux kernel.


If you've never read the backstory behind this, it's actually a good tale.

There was more to the issue than BK simply deciding to drop the free version. The short story is that one of the Samba authors helped force Free Software along in more ways than one!

I went and tried to find some information. Definitely worth a read! Thanks. Actually, I'm gonna post it as a story.

Summary of the links shared here:;playnext=...

And here are them with titles + thumbnails:

how awesome are you? thanks
Thank you so much for this!
This is cool :) Btw. the first link was somehow (re)moved. The link is now:
Single best is difficult, here are some favourites of mine:

"You and your research" by Richard Hamming:

"How to design a good API and why it matters" by Joshua Bloch:

Google TechTalk on Git by Linus Torvalds:

All talks ever given by Alan Kay, for example:

Double thumbs-up to Google TechTalk on Git by Linus Torvalds.
I think the talk you are referring to is tech talk he gave at google:

Sep 06, 2012 · 3 points, 0 comments · submitted by manuscreationis
He means stupid as in simple: unaware of things it does not need to know. Stupidity and ignorance in software are high praise, and very difficult to achieve.


That style is just linus' way. It comes across as a lot more aggressive on screen than in person. To get a better sense of him, I'd recommend watching this talk he gave evangelising git at google back in 2007. It's pretty entertaining, and very educational on the issues facing VCS designers.

To a certain extent, if you can knock git together in a couple of weeks you get a free pass to talk however you want, as long as you don't hurt anyone.

Ouch! No need to be harsh. I certainly appreciate Linus's work. Kudos to him! I understood his point on the stupidity thing. It even got some laughs out of me, and that's why I posted it, so other people could laugh too.

I don't mean to start a flamewar on scm, really. But I think git is overly complicated, and I believe I'm not alone on that.

Anyway, don't take things too seriously and have a good one!

PS: You gave a great idea. I should email Linus and offer to pay him some beers. That sure will be a great talk. Thanks mate!

Sorry, really didn't mean to be mean whatsoever. It's hard to convey tone in internet comments :)
His comment was more general:

"You can disagree with me as much as you want, but during this talk, by definition, anybody who disagrees is stupid and ugly, so keep that in mind."

Apr 16, 2012 · rjzzleep on Bye Bye SVN, Hello Git
i think this is the place to remind of torvalds git talk
Mar 18, 2012 · benthumb on Writing and Speaking
A great speech @ Google on a subject directly apropos of pg's start-up spiel:

Another awesome speech from Google's tech talk series (it takes a little bit of tyrant to pull this off, tho):

Feb 26, 2012 · 2 points, 0 comments · submitted by huytoan_pc
Coming from seven years of Subversion, I am amazed every time I 'cherry pick' a fix in Git onto another branch. Subversion could never do that; its file-based versioning versus Git's content awareness is a paradigm shift. Subversion now seems to be nothing more than a file backup system.
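For anyone who hasn't seen it, the cherry-pick flow is a one-liner once you know the commit. A minimal sketch in a throwaway repo (all branch and file names here are invented for the demo):

```shell
# Demo of cherry-picking a single fix onto another branch.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com && git config user.name "You"
echo base > file.txt && git add file.txt && git commit -qm "base"
git branch release                 # release branches off at this point
echo bugfix >> file.txt
git commit -qam "fix: important bugfix"
fix=$(git rev-parse HEAD)          # remember the fix commit
git checkout -q release
git cherry-pick "$fix"             # replay just that one change here
grep bugfix file.txt               # the fix now exists on release too
```

Because git tracks content snapshots rather than per-file deltas, the replayed change applies cleanly even though the branches have diverged.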

Linus Torvalds on Git:

Linus repeats this line WRT distributed systems during his famous talk about git:

"I have a theory about backups; I don't do them. I put stuff up on one site and everybody else mirrors it."

Here is a talk by Linus about git, although it isn't a tutorial.

Linux kernel development was managed with tarballs and patches for the first 10 years, until BitKeeper was selected in 2002, then of course Git in 2005.

Source: Also see Linus Torvalds' Google Tech Talk on Git starting around 2:45

Apr 25, 2011 · JonnieCache on The Merge Button
Watch this video and you will become enlightened as to why this is the case:

[Linus Torvalds on Git]

You should check out Linus' tech talk at Google about Git. Not only is Linus a really good speaker, he explains how a DVCS like git changes the workflow when it comes to managing the 'main' repository.

Spoiler: He has a 'circle of trust'. He pulls and merges commits from people he thinks are really smart and those people in turn pull and merge from people they think are smart.

Semi-related: a video of Linus giving a talk on Git at Google

Watch for the bit where he goes:

"What are you guys using?

(pause)

(longer pause)

I'm ... sorry."

Take that with a grain of salt.

Linus is snug as a bug in a rug, having solved his own problem of dealing with e-mailed patches. This also works for many coders with similar work-flows.

But, for work-flows with binaries (documents, images, executables, etc.) Perforce looks better, if you read through the comments in the LWN piece. For instance:

[git, problems with binaries] this has been documented several times, but it seems to be in the category of 'interesting problem, we should do something about that someday, but not a priority' right now

P.S. a couple of months ago, in another HN discussion, stevelosh kindly contributed a link about the bfiles extension for Mercurial:

The article above talks about Google's work on the Linux kernel, something that is managed competently with Git outside of the company.

Granted, Linus's comment is a sweeping statement (as most snarky comments tend to be) but in this context, Perforce seems woefully inadequate for managing Google's kernel work.

Well, Paul has a similar "quirk". He uses two PCs. Everyone's seen this by now: Also Linus disliked using Google's net connection for his Git talk. Here's one relevant excerpt:

When I am here (Google) I cannot read my e-mails because my e-mail goes onto my machine and the only way I can get into that machine is when I am physically on that network. So maybe I am a cuckoo, maybe I am a bit crazy, and I care about security more than most people do.

In another part he was about to demonstrate a kernel diff, but stopped when he realized he was disconnected from Google's net. Here's the talk: One of my favorites; a classic. And I don't even do software.

Apr 16, 2010 · mgunes on Torvalds: git vs CVS
This is effectively a summary of his Google Tech Talk on git.

Feb 17, 2010 · 50 points, 8 comments · submitted by nreece
Man, he really hates CVS.
I'm too young to have used CVS, but I really hate SVN compared to Git :)
Having dealt even superficially with a large CVS repository... don't we all? SVN is at least more stable.
We used CVS a long time ago (2001) as the repository for one of our games (NHL2K2), and one of the programmers used a specific style of commenting in his code. One day, that form of commenting somehow corrupted the diff information in the CVS files. It was easy to fix manually (some kind of + or - got mangled, or something like that).

Once we moved to P4 it was a relief, and still is. And nothing stops you from using P4 with git/mercurial or others.

P4 is great, but sometimes I need to submit more than one changelist (set of files) containing the same file, but with different bugfixes.

This classic essay always makes me chuckle, especially when svn branching or mergeinfo goes awry.

"If Version Control Systems were Airlines"


"Watch out for layovers, though. It can take hours to merge new passengers into the formation properly, and it might take several attempts to take off afterwards."


"At checkin time at the gate, if more than one person arrives with a copy of the same ticket, they are ushered into the “merging room” and each person is given a brick. The door is closed, something magical occurs, and the one person that emerges still able to walk is allowed to board the plane."

This is a video from 2007 that's over an hour long. Executive summary, someone, please?
Summary: Linus hates CVS and, by extension, SVN. He talks about the importance of distributed development and the pains of the centralized model. You can work offline. With a centralized system, trivial commits can't be made freely because they pollute the central repository, which leads to bad practices. Creating branches is not the main issue; merging FAST is. Git performs/scales well: 22K files in the Linux kernel repo, about 4 merges a day over the 2 years it had been tracked with Git. Mercurial is good, but Git is better. Git uses SHA-1 hashes over the full history, which brings security against attacks and RAM/disk corruption.
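The SHA-1 point is easy to see first-hand: git names every object by the SHA-1 of a small header plus its content, so any bit of corruption changes the name and is immediately detectable. A quick sketch, assuming `git` and `sha1sum` are on the PATH:

```shell
# Git object names are SHA-1 over "blob <size>\0<contents>".
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
printf 'hello\n' > greeting.txt
# The hash git itself would store the file under:
git hash-object greeting.txt
# The same hash computed by hand from the header + contents:
{ printf 'blob 6\0'; cat greeting.txt; } | sha1sum | cut -d' ' -f1
```

Both commands print the same 40-character hash, which is why a repository whose tip hash you trust transitively vouches for its entire history.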
FWIW, this is the video that was cited by Chris Wanstrath (of GitHub) as what got him into Git (on, I believe, his episode of the Changelog Show - It was a video that caught my attention at the time too, but I didn't get on the Git train for at least a year or two after... *sheepish look*
For me, it was watching Linus' Google talk on git, and then reading some tutorials, and then just doing it.

    Pro Git:
    A short little bit about git's internal model:
    GitHub's Git Cheat Sheet:
    A successful git branching model. gorgeous workflow and pictures.
Also, if you have any questions, feel free to email me. The only reason someone should be using SVN now is if there are institutional reasons or if you use lots of large binary files. I'd rather you send me an email about something that seems hard than give up and use an inferior tool. I'm not an absolute git master, but I've ended up explaining it to most of my classmates, so I've given that explanation a few times already.
At work we have LOTS of large binary files, yes. Static dependencies in C#. But I'll give it a go for personal projects.


No problem. It's something that's being worked on, just merging them becomes a pain. If your dependencies are truly static, in the sense that you're not modifying them or updating them often, it shouldn't be a big deal. Usually it's large images or other 'asset' type stuff that causes a bloated repo.
> But the truth is, the answer to that is not much.

This claim can sound truthful only to someone who has never actually used git.

EDIT: I think I should expand this. There is what I would miss if git is taken away from me, in no particular order:

Speed, speed, speed. Operations that SVN performs over the network (and not in the most efficient way), git performs locally, and it's blazingly fast.

Ease of branching and merging.

Local branches, squash commits.


Staging area.


Having the entire history at my fingertips, and not having the single point of failure that the SVN central server is. The fact that the entire history in git often takes less space than a single checkout of SVN is a nice bonus.

Not having each and every directory of my project littered with .svn


There is more, of course.

I was kind of a Subversion fan when I first saw Linus' talk on git ( ) and I did not like it at all. Then I tried git and saw the light :) I am still using SVN, but only because I have to. Now it looks dated (mature?), slow, clunky and fragile.
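The staging-area point above is the one SVN users tend to miss most. A small sketch of it in a throwaway repo (file names are invented): you can commit only part of your working-tree changes, something SVN has no native concept for.

```shell
# The staging area lets you commit a subset of your edits.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com && git config user.name "You"
echo one > a.txt && echo two > b.txt
git add a.txt b.txt && git commit -qm "start"
echo edit-a >> a.txt && echo edit-b >> b.txt
git add a.txt                # stage only a.txt; b.txt's edit stays out
git commit -qm "change a.txt only"
git status --short           # b.txt still shows as modified
```

`git add -i` (or `git add -p`) takes this further, letting you stage individual hunks within a single file.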

Absolutely - as soon as you start working with other people you will need this knowledge. I wouldn't bother learning SVN, because you can interact with SVN via Git (git-svn). To get started:

Git on Windows, how to install and use it (or see below for Tortoise git)


If you use the Git GUI from Msysgit to clone a repo please use the following credentials: ...

Tortoise Git is an easy solution that is designed to integrate into e.g. Explorer. Please see here for details: (first install msysgit, then this..)

Linus Torvalds on Git (a more detailed introduction)

Git usage: ----------

Most interesting for git is that "branching" is very cheap. This means that if you want to implement feature X you just "branch" the repository (locally or remotely) and don't affect the main branch until you want to merge your code back into it.
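That branch-then-merge loop can be sketched in a few commands (the feature-branch name below is made up for the demo; a git branch is just a pointer to a commit, which is why creating one is instant):

```shell
# Cheap branching: create a branch, work on it, merge it back.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com && git config user.name "You"
echo main > app.txt && git add app.txt && git commit -qm "main line"
git checkout -qb feature-x           # instant: no files are copied
echo feature >> app.txt && git commit -qam "implement feature X"
git checkout -q -                    # back to the original, untouched branch
git merge -q feature-x               # fold the finished work back in
```

Until the final `git merge`, the main branch never sees any of the feature work.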

Cloning: ---------

1.) Open e.g. Git Bash

2.) Go to the folder where you want to store the repository

3.) git clone ssh://IP:PORT/REPONAME.git TARGETDIR

4.) It will ask you for your ssh key. (please see the above video (.mov) for details).

Later on you can test things by just changing something slightly and recommitting it, e.g.

1.) cd TARGETDIR

2.) vi README

    # ---> .... changed something
3.) git add *

4.) git commit

    # --> vi will pop up and you type e.g. one line:
    # "- Added some more details to the README file on how to do XYZ."
    # This can also be done on the command line by:

    git commit -m "<YOUR MESSAGE HERE>"

5.) git push

    This will push back your changes to the master repository.

Have fun !
Linus was making fun of SVN in his talk ( ). One of his points was that SVN chose the wrong way right from the start, and there is no way to have "CVS done right" because CVS is hopeless by design.
CVS may have had serious flaws, but I find Linus' original revision control practices to be less than optimal:

"The Linux kernel source code used to be maintained without the help of an automated source code management system, mostly because of Linus Torvalds' dislike of centralized SCM systems.

In 2002, Linux kernel development switched to BitKeeper, a SCM system which satisfied Linus Torvalds' technical requirements."

Nearly a decade without a revision control system?

In Linus' git talk, he says that he finds creating different tarballs is much better version control than using CVS.
If you compare granularity of tracking and utility of historical version control across different projects of that era, such as FreeBSD vs. Linux, I think you'll find that Linus' position is clearly false.
Disclaimer: I use Git for my own stuff and as an svn client at work.

But I wonder if Git's author's boorish-yet-sort-of-amusing behavior when invited to speak at Google a couple of years ago may have also quietly weighed on the decision some:

Mar 10, 2009 · zcrar70 on DVCSs and DAGs (part 2)
One of the things I thought was interesting about this post was when he said "And what they find there is someone who doesn't seem to get it" (regarding Linus speaking on Subversion in his talk at google: ).

When I watched the talk initially, I had the impression that what tipped Linus' talk from "I don't think this is the right way of doing things" to "I think SVN is a pile of crap" wasn't so much the history model (DAG vs. line) as SVN's implementation. Specifically, I was thinking about how the base revisions are stored locally (with the .svn folders in each version-controlled folder, which git resolves much more efficiently), the poor rename support, etc. It's been a while since I saw that talk though, so it may well be that he was indeed criticising the line model in general (and I can see how the line model doesn't fit in with his workflow at all.)

You might be interested in this talk Torvalds gave at Google where he talks about how Git came about:

Mar 06, 2009 · maryrosecook on Git
This talk by Linus Torvalds was what got me excited about Git:
Not all of Google (yet), but there has been some significant lobbying for the use of git inside Google in the past by Linus himself. (Google Tech Talk)

If you want to learn more about git and the motivation behind it, then I really recommend watching the linked video. I find it quite entertaining.

Yeah, because Linus is really impartial on the matter ;-)
Git is hot; why would you want to switch back?

Tech Talk: Linus Torvalds on git

you can run git on windows too...

that project is awesome (just started trying it today) but it includes a full unix cli including bash and coreutils.

... it's an April Fools joke.
ah... I hate April Fools. I'm always the fool. Regardless, I posted good info, so why should I get modded down?
Feb 04, 2008 · minus1 on Why CVS sucks (humor)
For a talk (partially) dedicated to the version control system and why it sucks:

I'd recommend darcs or git. Check out Linus' talk on git, he basically spends 70 minutes talking about why a distributed VCS is vastly superior to any centralized system.

Do you use either one?
Hear Linus Torvalds talk about Git:

Do people who use the word "f-bomb" invariably turn out to have no sense of humor? Or is it just me?

Am I the only one who actually laughs when a guy puts up a slide with a dictionary definition of the word "arrogant", admits that the definition fits his personality, and then (a few slides later) responds to other programmers' criticisms of his framework with a two-word slide: "f__k you"? Is the concept of "setup" followed by "punch line" really so obscure? Is the word "f__k" really so disturbing to your delicate insides?

Am I the only one who is amused by these lines from Linus Torvalds' git talk (

"If you actually like using CVS, you shouldn't be here... you should be in some mental institution, somewhere else."

"Git is so much better than everything else... because my brain did not rot from years and years of thinking CVS did something sane. Not everyone agreed with me... they were ugly and stupid."

"I decided I could write something better than any [source code management system] out there in two weeks... and I was right."

... despite the fact that Linus is (technically) insulting me and every programmer I've ever known?

And am I the only one in the world who saw the title of DHH's critique of the mobile web app ("You're not on a f__king plane!") and laughed out loud, because it was an obvious reference to Samuel "L" Jackson's famous line from "Snakes on a Plane"?

(I guess the reference would have been clearer if DHH had written "You're not reading this Motherf___ing Post on a Motherf___ing Plane", but somehow I doubt that such a title would have appeased the "f-bomb" crowd. Quite the contrary.)

I'd love to read more critiques of DHH (and Torvalds, and PG for that matter) that actually engage his ideas, and fewer tone-deaf, catty complaints about his attitude, his vocabulary, and the fact that not all of his apps are legendary success stories. If his work is so "anti-user", would you please give me some examples of similar apps that do a better job, and explain why you think so? If the web app which works offline is such an amazingly good idea, would you please tell me which ones you use, or perhaps build one and then sell it to me?

What is "f-bomb"?
It's an Americanism.
The problem is that David is not Linus nor PG, so that sounds awfully arrogant coming from him.

The other problem is that David's ideas are not really "his ideas". He simply recycled XP and Agile concepts from the 90s. I for one prefer to read the originals (Kent Beck, Martin Fowler, Ron Jeffries, etc...)

Creativity is knowing how to hide your sources.

But anyway, a Railsite told me the other day that what he loves about Rails is that it's not especially new, conceptually. It's a series of methodologies that work great.

Do you think that PG invented Lisp, or the software startup, or angel investing? Do you think that Linus Torvalds invented Unix?

Yes, Rails is the instantiation of a bunch of XP and Agile concepts. But here's the thing: concepts don't serve Web pages. Code serves Web pages. And even Martin Fowler doesn't take the time to write all his own code from scratch.

I don't respect DHH because his ideas are original. I respect him because he can convince thousands of programmers around the world to work together. That is not an easy trick - there's a reason why it has been compared to herding cats.

I think he has a unique combination of mild autism, artistic stubbornness and Scandinavian directness. In the end he makes valid points and he sold me on trying git.

It is hard to tell if he's being serious or if he's just making socially retarded attempts at sarcasm.

Either way he's infinitely more charming than Mr. Zuckerberg...

Sorry this doesn't answer your question but you might find this relevant or at least interesting (and just maybe slightly amusing).
May 22, 2007 · 8 points, 0 comments · submitted by amichail
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.