HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
The Mother of All Demos, presented by Douglas Engelbart (1968)

Marcel · YouTube · 73 HN points · 88 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Marcel's video "The Mother of All Demos, presented by Douglas Engelbart (1968)".
YouTube Summary
"The Mother of All Demos is a name given retrospectively to Douglas Engelbart's December 9, 1968, demonstration of experimental computer technologies that are now commonplace. The live demonstration featured the introduction of the computer mouse, video conferencing, teleconferencing, hypertext, word processing, hypermedia, object addressing and dynamic file linking, bootstrapping, and a collaborative real-time editor."
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Aug 28, 2022 · 3 points, 0 comments · submitted by tzury
Always glad to see this getting attention; I really enjoy computing history. There's a link to a video in the article, but it's part three of ten. Recently I came across this[0], which is pretty much the whole thing in one video. It's long, but it's well worth the watch. I was born in '68, so I can't remember the 60s, but I can remember the 70s, and it's easy for me to understand why people felt Engelbart was "dealing lightning with both hands" at the Mother of All Demos.

[0]https://www.youtube.com/watch?v=yJDv-zdhzMY

martyvis
And I just found this very recent biopic on Doug. I've only watched 10 minutes so far, but it looks to be really well produced, with interview snippets from Alan Kay and Vint Cerf, amongst others.

https://youtu.be/_7ZtISeGyCY

mftb
Cool, bookmarked, ty!
I know it's not spreadsheets, and it's even more distant, but I think it's worth checking out [1] to see the origin of much of HID to this date. Bear in mind that everything you see there was done by 12 people, from scratch (no OS, not even assembled hardware), in roughly two years leading up to 1968.

[1] https://youtu.be/yJDv-zdhzMY

For "how things work", I recommend the book Code by Charles Petzold. After that, Jon Stokes's Inside the Machine will give a lot of details on CPU architectures up to Intel's Core 2 Duo. You can also try following along a computer engineering book if you want to go that low in detail with exercises, Digital Fundamentals by Floyd is a common textbook (I have an old 8th edition).

History-wise, enjoy learning slowly because there's so much that even if you dedicated yourself to it you wouldn't be "done" any time soon! Some suggestions in order though:

Watching The Mother of All Demos: https://www.youtube.com/watch?v=yJDv-zdhzMY

A short clip of Sketchpad presented by Alan Kay: https://www.youtube.com/watch?v=495nCzxM9PI

An article from 1945 that also inspired Engelbart: https://www.theatlantic.com/magazine/archive/1945/07/as-we-m...

The Information by James Gleick

What the Dormouse Said by John Markoff

The Psychology of Computer Programming by Gerald Weinberg

Lastly, to mix up in whatever order you please, some paper collections:

Object-Oriented Programming: The CLOS Perspective edited by Andreas Paepcke

History of Programming Languages papers for the various languages you're interested in; here's the set from the second conference in 1993 (https://dl.acm.org/doi/proceedings/10.1145/154766), but there have been further conferences to check out too.

Also, all of the Turing Award winners' lectures I've read have been good: https://amturing.acm.org/lectures.cfm

All that and some good recommendations others have given should keep you busy for a while!

Apr 20, 2022 · dmitriid on On anti-crypto toxicity
> Visicalc was not free software... but Linux 1.0 is from 1994.

Goalposts: shifted.

The shift itself is also extremely funny because web3's entire premise is: "you pay for everything"

> The point is not just in having an application, the point is in having an application created under the new paradigm.

So, where are they? Where's web3's VisiCalc (1979), emacs (1976), vi (1976)? Douglas Engelbart's Mother of All Demos was in 1968 [1].

Oh, I know the answer. "The Bitcoin paper was 14 years ago, smart contracts were first proposed over 30 years ago and appeared on Ethereum 7 years ago, but it's still early for this new paradigm because VisiCalc, Unix, erm, Linux, mumble mumble"

[1] https://www.youtube.com/watch?v=yJDv-zdhzMY

rglullis
> Goalposts: shifted.

Not at all. It was you and OP who asked for some kind of "killer app" from web3 as a way to justify its existence. My response is to say that this is an absurd ask.

Not only is it absurd, it already exists; you just prefer to ignore it. While I may agree that Bitcoin as a currency is a failed experiment, there is no denying that the possibility of sending value across borders is already the killer app.

"Oh, but BTC is volatile!". Yes, but I could in 2011 send money from the US to Brazil instantly without going through any international bank (buy in US exchange with USD, sell it in Brazilian exchanges in BRL, wire from the exchange to my brazilian bank) with minimal exposure to BTC volatility and at a lower cost than the fees charged by the banks.

But I am sure you want to ignore that, or find a reason to say that "it doesn't count". I cannot fight your own prejudices.

dmitriid
> Not at all. It was you and OP who asked for some kind of "killer app" from web3. My response is to say that this is an absurd ask.

No. Your response is literally "you wouldn't ask Unix people to create VisiCalc instead of compilers", but they did, to which you responded "but it isn't free".

> it already exists; you just prefer to ignore it. While I may agree that Bitcoin as a currency is a failed experiment, there is no denying that the possibility of sending value across borders is already the killer app

You mean, the thing that people have been doing since the invention of borders is the killer app?

> But I am sure you want to ignore that

No, I'm not ignoring that. And the reason it doesn't count is that the original question was about web3, and if all web3 has to offer is sending money across borders, it's not web3, it's Western Union 2.0.

rglullis
> You mean, the thing that people have been doing since the invention of borders is the killer app?

Trustlessly? No one has. Without relying on a central authority, they haven't.

> it's not web3, it's Western Union 2.0.

You are just being an idiot now. Thanks for proving that you are not interested in a productive conversation.

dmitriid
"You expect Visicalc, not Unix compilers"

"Visicalc isn't free, also Linux"

"Killer app is sending money"

"You're an idiot"

See these and other useful quotes in our New York Times bestseller, "A Crypto Shill's Guide to a Productive Conversation".

rglullis
So not only are you misinterpreting everything I said, you are also resorting to calling me a shill?

My work is on making it easier to use crypto for e-commerce, focused on stable currencies. If you go look at all my writings and discussions, you will find me telling people to avoid speculation on token prices and not to look at crypto for a quick buck.

You are completely lost in your hate, dude. Quite fitting for an "anti-crypto toxicity" thread. Go find something better to do with your life.

dmitriid
> So not only are you misinterpreting everything I said

There's no need to misrepresent what you said; you're presenting it loud and clear.

> you will find me telling people to avoid speculation on token prices and not to look at crypto for a quick buck

See: literally nowhere did I ever imply that. But sure, do tell me more from your book of "misrepresented productive conversation".

> You are completely lost in your hate, dude

Ah. Now there's hate. Do not misattribute warranted reactions to your behaviour as hate.

> Go find something better to do with your life.

Note how the only person shifting goalposts, misattributing behaviour, and calling people idiots is not me. Mirror, meet rglullis.

dang
Perpetuating flamewars on HN and breaking the site guidelines yourself is not cool, regardless of how wrong someone is or you feel they are, and regardless of how badly they've been behaving.

Please review https://news.ycombinator.com/newsguidelines.html and stick to the intended use of the site from now on. This doesn't depend on what other people do.

dang
You broke the site guidelines repeatedly and egregiously in this thread. That's seriously not cool, and we ban accounts that do it. I don't want to ban you, so please review https://news.ycombinator.com/newsguidelines.html and stick to the intended use of HN from now on.

I posted this a few years ago, asking why there isn't a decent and flexible collaborative outliner like "Google Trees":

https://news.ycombinator.com/item?id=20425970

The thing that's missing from "Google Docs" is a decent collaborative outliner called "Google Trees", one that does to "NLS" and "Frontier" what "Google Sheets" did to "VisiCalc" and "Excel". And I don't mean "Google Wave"; I mean a truly collaborative, extensible, visually programmable, spreadsheet-like outliner with expressions, constraints, absolute and relative xpath-like addressing, and scripting like Google Sheets, but with a tree instead of a grid. One that eats, drinks, scripts, and shits JSON and XML or any other structured data.

Of course you should be able to link and embed outlines in spreadsheets, and spreadsheets in outlines, but "Google Maps" should also be invited to the party (along with its plus-one, "Google Mind Maps").
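
To make the idea concrete, here's a toy sketch (Python, with names invented for illustration, not any real product's API) of the kind of node a tree-shaped spreadsheet could be built on: every cell holds a value or an expression, and any subtree serializes straight to JSON:

    import json
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class OutlineNode:
        label: str
        value: Optional[float] = None
        expr: Optional[str] = None             # e.g. "sum(children)"
        children: list = field(default_factory=list)

        def evaluate(self):
            # Toy expression language with a single built-in function.
            if self.expr == "sum(children)":
                return sum(c.evaluate() or 0 for c in self.children)
            return self.value

        def to_json(self):
            # Serialize the evaluated subtree as structured data.
            def walk(n):
                return {"label": n.label, "value": n.evaluate(),
                        "children": [walk(c) for c in n.children]}
            return json.dumps(walk(self), indent=2)

    budget = OutlineNode("budget", expr="sum(children)", children=[
        OutlineNode("rent", 1200), OutlineNode("food", 400)])
    print(budget.to_json())   # the root evaluates to 1600

A real "Google Trees" would of course need collaborative editing, constraints, and the xpath-like addressing on top; this only shows the tree-instead-of-grid core.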

It should be like the collaborative outliner Douglas Engelbart envisioned and implemented in his epic demo of NLS:

https://www.youtube.com/watch?v=yJDv-zdhzMY&t=8m49s

Engelbart also showed how to embed lists and outlines in maps:

https://www.youtube.com/watch?v=yJDv-zdhzMY&t=15m39s

Dave Winer, the inventor of RSS and founder of UserLand Software, developed a wonderful outliner on the Mac, originally called "ThinkTank" and then "MORE", which later evolved into the "Frontier" programming language and ultimately the "Radio Free Userland" desktop blogging and RSS syndication tool.

https://en.wikipedia.org/wiki/Dave_Winer

https://en.wikipedia.org/wiki/UserLand_Software

MORE was great because it had a well-designed user interface and feature set, with a fluid "fahrvergnügen" that made it really easy to use with the keyboard as well as the mouse. It could also render your outlines as all kinds of nicely formatted and stylized charts and presentations. And it had a lot of powerful features you usually don't see in today's generic outliners.

https://en.wikipedia.org/wiki/MORE_(application)

>MORE is an outline processor application that was created for the Macintosh in 1986 by software developer Dave Winer and that was not ported to any other platforms. An earlier outliner, ThinkTank, was developed by Winer, his brother Peter, and Doug Baron. The outlines could be formatted with different layouts, colors, and shapes. Outline "nodes" could include pictures and graphics.

>Functions in these outliners included:

>Appending notes, comments, rough drafts of sentences and paragraphs under some topics

>Assembling various low-level topics and creating a new topic to group them under

>Deleting duplicate topics

>Demoting a topic to become a subtopic under some other topic

>Disassembling a grouping that does not work, parceling its subtopics out among various other topics

>Dividing one topic into its component subtopics

>Dragging to rearrange the order of topics

>Making a hierarchical list of topics

>Merging related topics

>Promoting a subtopic to the level of a topic

After the success of MORE, he went on to develop a scripting language whose syntax (for both code and data) was an outline. Kind of like Lisp with open/close triangles instead of parens! It had one of the most comprehensive implementations of Apple Events client and server support of any Mac application, and was really useful for automating other Mac apps, earlier and in many ways better than AppleScript.

https://en.wikipedia.org/wiki/UserLand_Software#Frontier

http://frontier.userland.com/

Then XML came along, and he integrated support for XML into the outliner and programming language, and used Frontier to build "Aretha", "Manila", and "Radio Userland".

http://manila.userland.com/

http://radio.userland.com/

He used Frontier to build a fully programmable blogging and podcasting platform, with a dynamic HTTP server, a static HTML generator, structured XML editing, RSS publication and syndication, XML-RPC client and server, OPML import and export, and much more.
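
As an aside for readers who haven't met XML-RPC: it's a small remote-procedure-call protocol over HTTP, simple enough that a client/server pair fits in a few lines. Here's a minimal sketch using Python's standard library (this only illustrates the protocol itself, not Frontier's own implementation):

    import threading
    from xmlrpc.server import SimpleXMLRPCServer
    from xmlrpc.client import ServerProxy

    # A toy endpoint exposing one remotely callable function.
    server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
    server.register_function(lambda a, b: a + b, "add")
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # The call below travels as an XML <methodCall> over HTTP POST.
    proxy = ServerProxy("http://localhost:8000")
    print(proxy.add(2, 3))   # prints 5

That same request/response shape is what Frontier exposed as both client and server.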

He basically invented and pioneered outliners, RSS, OPML, XML-RPC, blogging and podcasting along the way.

>UserLand's first product release of April 1989 was UserLand IPC, a developer tool for interprocess communication that was intended to evolve into a cross-platform RPC tool. In January 1992 UserLand released version 1.0 of Frontier, a scripting environment for the Macintosh which included an object database and a scripting language named UserTalk. At the time of its original release, Frontier was the only system-level scripting environment for the Macintosh, but Apple was working on its own scripting language, AppleScript, and started bundling it with the MacOS 7 system software. As a consequence, most Macintosh scripting work came to be done in the less powerful, but free, scripting language provided by Apple.

>UserLand responded to Applescript by re-positioning Frontier as a Web development environment, distributing the software free of charge with the "Aretha" release of May 1995. In late 1996, Frontier 4.1 had become "an integrated development environment that lends itself to the creation and maintenance of Web sites and management of Web pages sans much busywork," and by the time Frontier 4.2 was released in January 1997, the software was firmly established in the realms of website management and CGI scripting, allowing users to "taste the power of large-scale database publishing with free software."

https://en.wikipedia.org/wiki/RSS

https://en.wikipedia.org/wiki/OPML

https://en.wikipedia.org/wiki/XML-RPC

Mar 26, 2022 · kkfx on Why we need Lisp machines
>> UNIX isn’t good enough anymore and it’s getting worse

> Why exactly?

Besides the defects well stated in the Unix-Haters Handbook, Unix has been violating its own principles for many years. The original Unix idea was: desktops like the Xerox Smalltalk workstations are too expensive and complex for most needs, so instead of a real revolution with an extraordinary outcome, we limit ourselves to the most common needs in exchange for far lower costs. No GUIs, no touchscreens, no videoconferencing or screen sharing [1], just a good-enough CLI with a "user language" (shell scripts) for small-potatoes automation and a bit of IPC for more... Well... For more there is a "system language" (C) that's easy enough for most really complex tasks.

That was a success because no one really likes revolutions and long-term goals, especially if they demand big money, while many like quick-and-done improvements at a small price.

However, within a few years Unix started to feel the need for something more than a CLI, and GUIs started to appear. Unfortunately, unlike the original Xerox-style desktops, those UIs were not "part of the system, fully integrated into it" but just hackish additions with little interoperability: single apps that at most offered cut & paste.

> Sure, but it also requires rewriting a lot of these things, introducing and fixing new bugs... It feels like the good ol' "let's rewrite this program" that quite frequently doesn't live up to the expectations

We need desktops again, which means not just "endpoints" or "modern dumb terminals for the modern mainframes named the cloud", but desktop computing. Since desktop development has been essentially abandoned for many years, and even back then it was in bad shape, we need to restart from the classic desktops. The LispM was ancient and hackish, but it is still the best desktop we have had in human history, so it is a good starting point. We do have some kind of LispM OS/environment here: Emacs, still alive and kicking, so there is something to work with. Emacs is already a WM (EXWM), has countless features, and is already "plugged" into modern bootloader OSes to get hardware, drivers, and services. It just needs to evolve.

[1] Yes, you are reading that correctly, and no, I'm not wrong: I'm talking about the famous NLS "Mother of All Demos" from 1968 https://youtu.be/yJDv-zdhzMY

Well, Unix is another really bad OS compared to its historical predecessors. At first they settled on a cheap programming language to need less hardware horsepower, and separated that system language from the user language (C for the system, for "complex" things; shell scripts for the end user). For the same reasons they decided there was no need for GUIs, even though far before Unix we had GUIs, touch monitors, even the world's first video conference with screen sharing over a LAN (the so-called Mother of All Demos, in 1968 [1]). Then they realized that was not so good, and graphics systems started to appear on Unix: far more limited and complex, and in complete violation of Unix principles, since for GUIs there were no IPCs. Classic PostScript GUIs do support some user programming, but nothing like the classic systems; CDE supports a certain level of integration, but again nothing like the classic systems.

Since then, all "modern" systems keep rediscovering, in limited and bug-ridden ways, what the historical systems did far better decades before...

I think many should just watch a classic advertisement like https://youtu.be/M0zgj2p7Ww4, then note its date and where we are today...

It's not only security, it's the overall design. In the past hardware resources were limited, so hacks and slowness were common; the hardware itself, being "in a pioneering phase", was full of hacks and ugliness. But evolving those systems would have led us to the moon, while we are still in the middle ages...

[1] https://youtu.be/yJDv-zdhzMY

The fact that collaborative document preparation is as troublesome as this in 2022 is a damning indictment of the computer industry. (It has become very good at advertising, though.)

Douglas Englebart must be rolling in his grave at a comment like this https://www.youtube.com/watch?v=yJDv-zdhzMY

castillar76
It really is puzzling to me that there aren't more and better options in this space. On one side of the fence you have things like SharePoint or some of the Google bits that are fine for synchronous collaboration on a page or document through a browser, but not very tech- or API-friendly. On the other side of the fence you have things like MediaWiki and DokuWiki that are super tech-friendly, but very manager-unfriendly and not particularly good for synchronous collaboration either.

It surprises me that Confluence continues to be the only thing sitting in the middle of those, and while some competitors have started to shyly emerge (BookStack, XWiki, I think there's one in progress at JetBrains), nothing feels like it's really aggressively going after the Confluence market space directly. Given how many people I've seen comment that they're not fond of Confluence but can't seem to replace it (at least, not until Atlassian forced their hand with the server-license fiasco), it seems like a natural space for people to pursue.

I always love watching that incredible old APL demo [1], though. Seems incredibly futuristic for the time!

edit: this is actually a different demo, but there's a similarly impressive one focused on showing APL, from the 70s I think? Will update if I find it!

[1] - https://youtu.be/yJDv-zdhzMY

Engelbart was a genius ahead of his time. At a time when computers were huge boxes used only by corporations and governments, he introduced the concept of personal computing and showed how powerful a GUI could be.

People often credit Xerox PARC and Steve Jobs with popularizing the GUI, but to me Engelbart, as an academic, deserves more credit than both.

The Mother Of All Demos (1968) https://youtu.be/yJDv-zdhzMY

Much props to this guy, The Mother of All Demos, presented by Douglas Engelbart (1968) https://www.youtube.com/watch?v=yJDv-zdhzMY

Jun 10, 2021 · W0lf on Building the First GUIs
Obligatory reference to the Mother of All Demos [1] by Douglas Engelbart

[1] https://www.youtube.com/watch?v=yJDv-zdhzMY&ab_channel=Marce...

Jun 04, 2021 · 5 points, 0 comments · submitted by voxadam
May 30, 2021 · salmo on Yamaha MOTOROiD
The Augmentation Research Center (aka Augment) was Engelbart's lab at SRI.

This was a project that wanted to change computers from number-crunching machines into tools for "augmenting human intellect". Things like the mouse, windowing systems, video conferencing, and hypertext came out of that lab and spilled into Xerox PARC, etc. It was also the SRI end of the original ARPANET connection with UCLA.

Other ideas, like a keyboard that worked via "chords", didn't pan out. It made it possible for one hand to use the mouse while the other used the keyboard, without switching back and forth. I'm sure many folks with RSI wish for that.

I would check out the "Mother of all Demos". It's out on YouTube: https://youtu.be/yJDv-zdhzMY

And of course you can look up Engelbart on Wikipedia or whatever.

And if you are interested in the topic more, I recommend "What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry" by John Markoff. It plays this history out more, both before and after.

Mar 26, 2021 · 1 points, 0 comments · submitted by sayyss
Watch Doug Engelbart sell you on it 53 years ago in 1968. https://www.youtube.com/watch?v=yJDv-zdhzMY

The fact that this isn't a reality in our everyday computer use is sad. Glad to see CoScreen picked up the ball that Microsoft and Apple dropped long ago.

jsilence
I would really prefer this implemented as an open protocol instead of a product.

Nov 02, 2020 · gjvc on Doug’s Demo Sequel: 1969
In the original demo [1] a year prior he calls the pointer a "tracking spot" and of the mouse: "I don't know why we call it a mouse -- sometimes I apologise for it. It started that way and we never did change it."

[1] https://www.youtube.com/watch?v=yJDv-zdhzMY

dTal
I thought it was always very obvious why it's called a mouse - the cable is suggestive of a tail.

Aug 24, 2020 · 1 points, 0 comments · submitted by jorgebucaran
Aug 04, 2020 · 2 points, 0 comments · submitted by Breadmaker
Aug 03, 2020 · kbumsik on Bill English has died
I was so surprised when I watched "The Mother of All Demos" from 50+ years ago (1968) and saw what they predicted and actually implemented.

They demonstrated what's happening in 2020: video conferencing, real-time collaborative word processing, etc. Truly amazing.

[1] https://youtu.be/yJDv-zdhzMY

[2] https://en.m.wikipedia.org/wiki/The_Mother_of_All_Demos

gjvc
and probably of most historic importance, hypertext-navigable content editing. (Note that Ted Nelson coined the term "hypertext" in 1963.) I often wistfully hope that some group of people will retrace the steps made by that generation of researchers, and evolve a better medium than the TBL-derived efforts.

vaxman
Rewatched it on the giant touch panel in my dashboard last night. One thing that is neat is the genlock-like mix of video and computer output; pretty cool for 1968. Going to hook up a mouse today and go trackpad-free for a while on the old 'puter in honor of his passing.

cxr
> what they predicted

From Bret Victor's "A few words on Doug Engelbart"[1]:

> Almost any time you interpret the past as "the present, but cruder", you end up missing the point. [...] If you attempt to make sense of Engelbart's design by drawing correspondences to our present-day systems, you will miss the point, because our present-day systems do not embody Engelbart's intent. [...] If you truly want to understand NLS, you have to forget today. Forget everything you think you know about computers. Forget that you think you know what a computer is. Go back to 1962. And then read his intent.[2]

1. http://worrydream.com/Engelbart/

2. http://www.dougengelbart.org/pubs/augment-3906.html

I just finished watching it (again; I've definitely seen it before).

Featuring: an introduction to programming, GUIs, basic usage and troubleshooting, processor design with Roger Wilson, and manufacturing, including pick & place, flow soldering, and desoldering.

I was lucky to be in a country neighbouring the UK and was exposed to the Electron first in our local library, then went on to a school which had a classroom full of BBC Micros with a central server.

Just imagine if I had been able to watch this video at the time. This is awesome content and, for me, in the same league as The Mother of All Demos [0]. Such a pity the UK didn't keep it up.

[0] https://www.youtube.com/watch?v=yJDv-zdhzMY

I can give you the names of a handful of books that might be useful. Some are more technical, some less so. Some are more about personalities, some about the business aspects of things, some more about the actual technology. I don't really have time to try and categorize them all, so here's a big dump of the ones I have and/or am familiar with that seem at least somewhat related.

The Mythical Man-Month: Essays on Software Engineering - https://www.amazon.com/Mythical-Man-Month-Software-Engineeri...

Hackers: Heroes of the Computer Revolution - https://www.amazon.com/Hackers-Computer-Revolution-Steven-Le...

The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage - https://www.amazon.com/Cuckoos-Egg-Tracking-Computer-Espiona...

Where Wizards Stay Up Late: The Origins of the Internet - https://www.amazon.com/Where-Wizards-Stay-Up-Late/dp/0684832...

Open: How Compaq Ended IBM's PC Domination and Helped Invent Modern Computing - https://www.amazon.com/Open-Compaq-Domination-Helped-Computi...

Decline and Fall of the American Programmer - https://www.amazon.com/Decline-American-Programmer-Yourdon-1...

Rise and Resurrection of the American Programmer - https://www.amazon.com/dp/013121831X/ref=sr_1_1?dchild=1&key...

Accidental Empires: How the Boys of Silicon Valley Make Their Millions, Battle Foreign Competition, and Still Can't Get a Date - https://www.amazon.com/Robert-X-Cringely/dp/0887308554/ref=s...

Softwar: An Intimate Portrait of Larry Ellison and Oracle - https://www.amazon.com/Softwar-Intimate-Portrait-Ellison-Ora...

Winners, Losers & Microsoft - https://www.amazon.com/Winners-Losers-Microsoft-Competition-...

Microsoft Secrets - https://www.amazon.com/Microsoft-Secrets-audiobook/dp/B019G2...

The Friendly Orange Glow: The Untold Story of the PLATO System and the Dawn of Cyberculture - https://www.amazon.com/The-Friendly-Orange-Glow-audiobook/dp...

Troublemakers: Silicon Valley's Coming of Age - https://www.amazon.com/Troublemakers-Silicon-Valleys-Coming-...

Hard Drive: Bill Gates and the Making of the Microsoft Empire - https://www.amazon.com/Hard-Drive-Making-Microsoft-Empire/dp...

Masters of Doom: How Two Guys Created an Empire and Transformed Pop Culture - https://www.amazon.com/Masters-Doom-Created-Transformed-Cult...

The Supermen: The Story of Seymour Cray and The Technical Wizards Behind the Supercomputer - https://www.amazon.com/Supermen-Seymour-Technical-Wizards-Su...

Bitwise: A Life in Code - https://www.amazon.com/Bitwise-Life-Code-David-Auerbach/dp/1...

Gates - https://www.amazon.com/Gates-Microsofts-Reinvented-Industry-...

We Are The Nerds - https://www.amazon.com/We-Are-Nerds-audiobook/dp/B07H5Q5JGS/...

A People's History of Computing In The United States - https://www.amazon.com/Peoples-History-Computing-United-Stat...

Fire In The Valley: The Birth and Death of the Personal Computer - https://www.amazon.com/Fire-in-Valley-audiobook/dp/B071YYZJG...

How The Internet Happened: From Netscape to the iPhone - https://www.amazon.com/How-Internet-Happened-Netscape-iPhone...

Steve Jobs - https://www.amazon.com/Steve-Jobs-Walter-Isaacson/dp/1451648...

The Idea Factory: Bell Labs and the Great Age of American Innovation - https://www.amazon.com/Idea-Factory-Great-American-Innovatio...

Coders - https://www.amazon.com/Coders-Making-Tribe-Remaking-World/dp...

Dreaming in Code: Two Dozen Programmers, Three Years, 4,732 Bugs, and One Quest for Transcendent Software - https://www.amazon.com/Dreaming-in-Code-Scott-Rosenberg-audi...

The Pentagon's Brain: An Uncensored History of DARPA, America's Top-Secret Military Research Agency - https://www.amazon.com/Pentagons-Brain-Uncensored-Americas-T...

The Imagineers of War: The Untold Story of DARPA, the Pentagon Agency That Changed the World - https://www.amazon.com/Imagineers-War-Untold-Pentagon-Change...

The Technical and Social History of Software Engineering - https://www.amazon.com/Technical-Social-History-Software-Eng...

Also...

"The Mother of All Demos" by Doug Englebart - https://youtu.be/yJDv-zdhzMY

"Jobs vs Gates" - https://www.amazon.com/Jobs-Vs-Gates-Hippie-Nerd/dp/B077KB96...

"Welcome to Macintosh" - https://www.amazon.com/Welcome-Macintosh-Guy-Kawasaki/dp/B00...

"Pirates of Silicon Valley" - https://www.amazon.com/Pirates-Silicon-Valley-Noah-Wyle/dp/B...

"Jobs" - https://www.amazon.com/Jobs-Ashton-Kutcher/dp/B00GME2NCG/ref...

And while not a documentary, or meant to be totally historically accurate, the TV show "Halt and Catch Fire" captures a lot of the feel of the early days of the PC era, through to the advent of the Internet era.

https://www.amazon.com/I-O/dp/B00KCXJCEK/ref=sr_1_1?crid=U6Z...

And there's a ton of Macintosh history stuff captured at:

https://www.folklore.org/

That demo from 1968 ruins everything (https://www.youtube.com/watch?v=yJDv-zdhzMY), like the illusion of progress.

Slightly off-topic, but I'm glad that the article mentions the "Mother of All Demos" [1]

I watched this for the first time last year and I was shocked: Douglas Engelbart demonstrated what would happen over the following 50 years. The mouse is the least amazing part, but the demo also featured real-time video conferencing and a Google Docs-like real-time word editor with shared mouse pointers.

It is hard to believe this was 1968 and he actually implemented these things 50 years ahead of their time.

[1] https://youtu.be/yJDv-zdhzMY

Apr 19, 2020 · 2 points, 0 comments · submitted by bottle2
Off the top of my head, here are some books on specific histories I enjoyed. All old, but there are timeless nuggets in them.

The Soul of a New Machine, previously mentioned; where I first learned about mushroom management.

Just for Fun: The Story of an Accidental Revolutionary [0] is a fun bio of Torvalds from 2001.

The Mythical Man-Month [1] offers some insight into the management and thinking that went into OS/360.

Masters of Doom [2] offers an enjoyable history of the shareware years and the rise of id Software.

The Multicians site [3] is a collaborative history of Multics, one of the most influential operating systems.

The Mother of All Demos [4]: even better than Steve Jobs' keynotes.

Steve Jobs' iPhone introduction [5]: I'm not a huge fan of Mr. Jobs, but this is one of the best presentations ever. It's not history, per se, but very interesting through our eyes.

0: https://www.goodreads.com/book/show/160171.Just_for_Fun

1: https://www.goodreads.com/book/show/13629.The_Mythical_Man_M...

2: https://www.goodreads.com/book/show/222146.Masters_of_Doom

3: https://multicians.org/

4: https://youtu.be/yJDv-zdhzMY

5: https://youtu.be/vN4U5FqrOdQ

May I suggest a rephrasing for that?

Execution can bring profit. Ideas might not.

History is full of examples of visionaries coming up with ideas that have inspired and fueled others--in some cases whole industries have been founded on these ideas. For instance, Doug Engelbart gave the "Mother of All Demos" in 1968 [1]. It was a proof of concept, but the ideas he talked about were implemented by other companies--who profited from it.

It is unfortunate that we live in a world where ideas hold such little monetary value, but they're most certainly not worthless.

[1]: https://www.youtube.com/watch?v=yJDv-zdhzMY&t=4s

petra
Of course you need both execution and ideas.

But let's say you have very good ideas but you're mediocre at execution.

1. Could you find a very lucrative job/partnership mostly because of your ideas, and get quality help with the execution? It's possible.

2. Ideas are easy to steal. Do you have a decent strategy to prevent that from happening? Those sometimes exist.

3. Not every business succeeds. Can you try repeatedly? Maybe, depending on context.

blowski
"Ideas are unlikely to have any immediate monetary value until they are executed" is more accurate, but doesn't roll off the tongue so easily.
ct520
I dunno, makes sense to me. Your idea is monetarily worthless to yourself if you do not leverage/execute it.

Feb 19, 2020 · tyfon on Larry Tesler Has Died
Douglas Engelbart demonstrated copy/paste with a mouse in 1968 [1]; however, I'm not sure what he called the process. It was also very much an experimental system and not something for sale.

[1] https://www.youtube.com/watch?v=yJDv-zdhzMY

from the Mother of All Demos (1968): https://www.youtube.com/watch?v=yJDv-zdhzMY&feature=youtu.be...

I suppose you mean this video?

Fice
A better link: https://www.dougengelbart.org/content/view/374/ And also: https://archive.org/details/dougengelbartarchives?sort=title... because youtube videos are ephemeral.

Moru
Thank you, that was nicely indexed even!

Dec 27, 2019 · kps on Windows 95 UI Design
Originally, yes, there was one¹. But three were used quite soon after.²

¹ https://www.computerhistory.org/revolution/input-output/14/3...

² https://youtu.be/yJDv-zdhzMY

Aug 23, 2019 · 6 points, 0 comments · submitted by tosh
Aug 16, 2019 · 16 points, 2 comments · submitted by Malfunction92
welcome_dragon
That this was over 50 years ago will never cease to amaze me
EricE
For more on this and what led up to it, I highly recommend: https://www.goodreads.com/book/show/722412.The_Dream_Machine

Aug 15, 2019 · ian0 on Tell HN: Danny Cohen Has Died
>>I'm always amazed at some of the things people managed to do "way back when"

It's insane. The first time I saw the Mother of All Demos [0] (also from around that time) I couldn't help but think, "How the hell have we only come this far after 50 years?"

[0] https://www.youtube.com/watch?v=yJDv-zdhzMY

jacquesm
In some ways it is a step back. The Mother of All Demos is pure function, zero form. If all the cycles that now go to form went to function, there would be an amazing jump forward in speed. Try a really old version of Windows on modern hardware for an idea of what I'm getting at.

I have been lucky not to be impacted by hand or wrist pain. I did experience chronic back pain for years due to a pinched sciatic nerve; it can be debilitating and depressing. Chronic pain is a major precursor to suicide; if you have it, you need to get it fixed.

My interest in this is from a FutureOfProgramming perspective: how can we interact with computers in different, higher-order ways? What does the Mother of All Demos [0] look like in 2020?

Travis Rudd, "Coding by Voice" (2013) [1] this uses Dragon in a Windows VM to handle the speech to text. This was the original, "termie slash slap", a kind of Pootie Tang crossed structural editing.

Coding by Voice with Open Source Speech Recognition, from HOPE XI (2016) [2]

Another writeup [3] that outlines Dragon and a project I hadn't heard of called Silvius [4]

It looks like most of these systems rely on Dragon, and Dragon on Windows at that, due to the extension APIs not being available on Mac/Linux. Are there any efforts to use the Mac's built-in STT or the cloud APIs [5,6]?

[0] Mother of All Demos https://www.youtube.com/watch?v=yJDv-zdhzMY

[1a] https://www.youtube.com/watch?v=8SkdfdXWYaI

[1b] https://youtu.be/8SkdfdXWYaI?t=510

[2] https://www.youtube.com/watch?v=YRyYIIFKsdU

[3] https://blog.logrocket.com/programming-by-voice-in-2019-3e18...

[4] http://www.voxhub.io/silvius

[5] https://cloud.google.com/speech-to-text/

[6] https://aws.amazon.com/transcribe/

caspar
Talon actually uses Mac's built in STT if you don't have Dragon.

James from http://handsfreecoding.org/ was working on a fork of Dragonfly[0] to add support for Google's speech recognition, but I'm not sure if he still is. There are several barriers to that working well, though: additional latency really hurts, API usage costs money, and (as far as I know) there's no way to specify a command grammar (Dragonfly/Vocola/Talon all let you use EBNF-like notation to define commands, which are preferentially recognized over free-form dictation).

[0]: https://github.com/dictation-toolbox/dragonfly
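
For readers who haven't seen one of these grammars, here's a minimal sketch of the kind of mapping rule Dragonfly supports, with the command phrases invented for illustration (imports follow the dictation-toolbox fork linked above; details can vary by version):

    from dragonfly import Grammar, MappingRule, Key, Text, IntegerRef, Dictation

    class EditingRule(MappingRule):
        # Spoken phrases on the left, emitted actions on the right.
        mapping = {
            "save file": Key("c-s"),                        # press Ctrl+S
            "go to line <n>": Key("c-g") + Text("%(n)d\n"), # invented shortcut
            "say <text>": Text("%(text)s"),                 # free-form dictation
        }
        # The <n> and <text> slots are the EBNF-style elements mentioned above.
        extras = [IntegerRef("n", 1, 1000), Dictation("text")]

    grammar = Grammar("editing")     # a container for one or more rules
    grammar.add_rule(EditingRule())
    grammar.load()                   # register with the speech engine

Because "save file" is declared as a command, the engine prefers it over free-form dictation whenever it hears that exact phrase.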

tmacwilliam
We're working on a new voice coding app called Serenade [1] that sounds like what you're describing. We tried cloud speech APIs, but found the accuracy wasn't good enough for common programming words like "enum" or "attr". We found that using our own speech engine (based on Kaldi [2]), designed specifically for coding, gave us better accuracy.

Serenade also enables you to use natural English voice commands like "add method hello" or "move value to function". Not only is this often faster than typing, but it also means you don’t have to memorize the syntax details of every language or many editor shortcuts. Our app is still early, but we think the future of programming is working with these higher-level inputs rather than typing out all code by hand.

We’re looking for people to give feedback, so if anyone is interested in giving it a try, you can download Serenade at [3] or email me at [email protected].

[1] https://serenade.ai

[2] http://kaldi-asr.org

[3] https://serenade.ai/download

Dec 10, 2018 · 4 points, 0 comments · submitted by dredmorbius
http://www.computerhistory.org/atchm/macpaint-and-quickdraw-...

"The Apple Macintosh combined brilliant design in hardware and in software. The drawing program MacPaint, which was released with the computer in January of 1984, was an example of that brilliance both in what it did, and in how it was implemented."

"The high-level logic is written in Apple Pascal, packaged in a single file with 5,822 lines. There are an additional 3,583 lines of code in assembler language for the underlying Motorola 68000 microprocessor"

But almost two decades before that, in 1968(!), came not a paint program exactly, but all kinds of interactivity and simple drawing, using the first mice ever:

https://m.youtube.com/watch?v=yJDv-zdhzMY

https://en.wikipedia.org/wiki/The_Mother_of_All_Demos

The Mother Of All Demos is an absolute classic: https://youtu.be/yJDv-zdhzMY https://en.m.wikipedia.org/wiki/The_Mother_of_All_Demos

I also very much enjoy Feynman’s talks. Here’s one on imagining physics: https://youtu.be/4zZbX_9ru9U

For computer science in particular, I highly recommend the first lecture in the SICP series, especially on the naming of so-called "computer science": https://www.youtube.com/watch?v=2Op3QLzMgSY

Maybe these are mostly too new, or you have a different (more practical, hands-on?) definition of tech talk, but from a quick look the only speakers I expected, and found, were Sandi Metz and Rob Pike.

If it's practical, I'm surprised not to see the js "wat" lightning talk (which I now can't seem to find...).

If it's more general "best of", I'd expect something like Guy Steele "growing a language" : https://youtu.be/_ahvzDzKdB0

Douglas Engelbart "the mother of all demos": https://youtu.be/yJDv-zdhzMY

Alan Kay "doing with images makes symbols": https://youtu.be/p2LZLYcu_JY Or, if that's too long, the much more condensed ted talk: "a powerful idea about teaching ideas": https://youtu.be/Eg_ToU7m1MI (Maybe that's not a "tech talk"?)

Rich Hickey "simple made easy" : https://youtu.be/34_L7t7fD_U

To name a few off the top of my head.

bcbrown
> js "wat" lightning talk (which I now can't seem to find...)

https://www.destroyallsoftware.com/talks/wat

lgregg
I really enjoyed that. Thanks for finding it.

O_H_E
Just watched Sandi Metz's talk because of your comment, and it sucks to be forced to go study biology/sociology/literature after this ;)

yaj54
You're right, those are all great talks (and they fit my definition of a tech talk). I just checked, and none of them are in my dataset, which I'll admit I'm surprised about. But they (and related ones) will make it into the next round.

Tuna-Fish
The issue seems to be that they are not typically watched on YouTube. For example, the "Simple Made Easy" linked above is a low-quality pirated YouTube copy; the proper place to watch it is here:

https://www.infoq.com/presentations/Simple-Made-Easy

It's correct though.

90% of code is just reinventing the wheel, over and over again. And not just specialised modules like payments. It's insane that even small companies have "data experts" and "UI/UX designers". Best practice is actually quite narrow and less important than we think; better to have 30 great designers working on the best way to let users interact with devices than 30,000 wasting time. Likewise, it's better to have a bunch of smart people working on abstracting away data storage than everyone digging through AWS docs.

The truth is, WE have not been able to agree on standards of software development that would see the complexity abstracted so we can write applications faster. And because of this, cloud platforms and SAAS providers are filling the gap and increasing in power.

Every time I watch the Mother of All Demos [1], I feel depressed. It's as if builders were given the power to create new construction materials from scratch, for free, and then spent the next 50 years arguing over whose material is better rather than actually building cool stuff.

[1] https://www.youtube.com/watch?v=yJDv-zdhzMY

bni
It is re-inventing the wheel, but in another business context. I have yet to witness "re-usable" business logic, anywhere.
solarkraft
Maybe somewhat off-topic: How much cool stuff is there even left to build in the software world?
adrianN
I don't know what's cool to you, but a general artificial intelligence would be cool to me. An operating system + userland without remotely exploitable holes would be pretty great. Working quantum computers would be cool too, but that's more of a hardware problem.
taway_1212
At least in the gamedev area, there's an infinite number of games to build (the same way there's an infinite number of novels to write and songs to compose).
taf2
Infinitely and beyond
seeekr
The more you build, the more you realize how many more things haven't been built yet and you might want to build after the current one(s)... ad infinitum.
narag
90% of code is just reinventing the wheel...

I disagree. More than 99% of code doesn't require (re)inventing anything at all. Actually, if we stopped trying to be too clever, it would be much easier to clean and refactor code.

Edit, to be more specific: mechanical, repetitive, naive code is maybe boring to write, but it is easier to read and understand.

It's not the same demo; the article is referring to "The Mother Of All Demos" [1].

[1] https://www.youtube.com/watch?v=yJDv-zdhzMY

WalterGR
Whoops, you’re right. Thanks for the correction.
No.

The mouse introduced in 1984 was philosophically incompatible with Engelbart's mouse-like device. Watch the Mother of All Demos and see the difference.

Apple: https://youtu.be/S7AL7tkQ7d0?t=13m8s

Mother of all demos: https://www.youtube.com/watch?v=yJDv-zdhzMY

Augmenting Human Intellect. A Conceptual Framework by Doug Engelbart.

http://www.1962paper.org/

In 1962 Doug Engelbart published what may be the most important paper in computer history and in human augmentation.

This is where he laid out his concept of interactive computing, which would lead him and his team to invent the mouse, word processing, email, and most of what we today consider personal computing. We still have far to go to live up to the dreams and ideas presented here:

Read Augmenting Human Intellect

http://www.1962paper.org/web.html

This presentation of the paper is hosted by The Liquid Information Company, makers of richly interactive text, inspired by and in dialog with Doug Engelbart, about whom we were also honoured to make a webomentary: Invisible Revolution, The Doug Engelbart Documentary.

You should also see the 1968 demo this paper led to:

https://www.youtube.com/watch?v=yJDv-zdhzMY

The Mother of All Demos, presented by Douglas Engelbart (1968)

https://www.youtube.com/watch?v=yJDv-zdhzMY

The Mother of All Demos

https://en.wikipedia.org/wiki/The_Mother_of_All_Demos

"The Mother of All Demos" is a name retroactively applied to a landmark computer demonstration, given at the Association for Computing Machinery / Institute of Electrical and Electronics Engineers (ACM/IEEE)—Computer Society's Fall Joint Computer Conference in San Francisco, which was presented by Douglas Engelbart on 9 December, 1968.

The live demonstration featured the introduction of a complete computer hardware and software system called the oN-Line System or, more commonly, NLS. The 90-minute presentation essentially demonstrated almost all the fundamental elements of modern personal computing: windows, hypertext, graphics, efficient navigation and command input, video conferencing, the computer mouse, word processing, dynamic file linking, revision control, and a collaborative real-time editor (collaborative work). Engelbart's presentation was the first to publicly demonstrate all of these elements in a single system. The demonstration was highly influential and spawned similar projects at Xerox PARC in the early 1970s. The underlying technologies influenced both the Apple Macintosh and Microsoft Windows graphical user interface operating systems in the 1980s and 1990s.

May 11, 2018 · 2 points, 0 comments · submitted by jamesknelson
The Mother of All Demos, presented by Douglas Engelbart (1968)

https://www.youtube.com/watch?v=yJDv-zdhzMY

Apr 01, 2018 · 5 points, 0 comments · submitted by relyio
> human-machine cooperation is probably where the big money will be.

As it has been for as long as machines have existed, really. This reminds me of Douglas Engelbart and his vision for computers. I'll cite the section of his wikipedia page that paraphrases an interview with him from 2002[0][1].

> [Douglas Engelbart] reasoned that because the complexity of the world's problems was increasing, and that any effort to improve the world would require the coordination of groups of people, the most effective way to solve problems was to augment human intelligence and develop ways of building collective intelligence. He believed that the computer, which was at the time thought of only as a tool for automation, would be an essential tool for future knowledge workers to solve such problems.

He was right of course, and his work led to "The Mother of All Demos"[2].

Machine learning is the next step in using computers as thought enhancement tools. What we still need to figure out is an appropriate interface that is not as "black-boxy" as "we trained a neural net, and now we can put X in and get Y out".

EDIT: Now that I read that quoted section of wikipedia again, it's funny to note that computers were "only seen as tools of automation", and how modern fears of AI are also about automation. Automation of thinking.

[0] https://en.wikipedia.org/wiki/Douglas_Engelbart

[1] https://www.youtube.com/watch?v=VeSgaJt27PM

[2] https://www.youtube.com/watch?v=yJDv-zdhzMY

seanmcdirmid
Ironically enough, Engelbart was often derided by his colleagues at the time who thought hard AI was just around the corner and so all of this intelligence augmentation stuff would be obsolete soon enough. Today we are closer than ever (always just 20 years away!), but still IA rather than AI is very much the way to go.
ethbro
(in case you check an old comment)

Do you have any sources for this? I find "sentiment at the time" especially hard to find, historically speaking.

And I'd be fascinated to read something about this.

aalleavitch
This is a fantastic point, and I think a lot of AI development that goes in the direction of trying to replace human beings is essentially absurd. We already have humans; why would we want something that can do what humans can already do? Rather, we want something that extends the capabilities of humans into areas where they aren't proficient. For instance, why do we put all this effort into natural language processing when humans are already totally optimized for it? What we need is a solution to scaling, not a solution to NLP itself.

EDIT: To expand; one way to do something like Siri would be to have a system that routed requests to human operators. The human operator would give the correct answer to the request, and then the system would use that as training data. If the system was reasonably confident it already knew the answer to the request from previous training data, it would answer right away, but if it was below a certain confidence it would route to a human. This seems like the smartest way to leverage machine learning in these kinds of scenarios, and I'd be surprised if someone hasn't already tried it or something similar in the past.
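
A minimal sketch of that routing idea in Python, assuming a hypothetical model object with a predict(text) method returning an (answer, confidence) pair; every name here is made up for illustration, not any real assistant's API:

  # Hypothetical human-in-the-loop router: answer requests the model is
  # confident about, escalate the rest to a human operator, and keep the
  # operator's answer as new training data.
  CONFIDENCE_THRESHOLD = 0.9

  class Router:
      def __init__(self, model, ask_human):
          self.model = model            # exposes predict(text) -> (answer, confidence)
          self.ask_human = ask_human    # callback that reaches a human operator
          self.collected = []           # (request, answer) pairs for retraining

      def handle(self, request):
          answer, confidence = self.model.predict(request)
          if confidence >= CONFIDENCE_THRESHOLD:
              return answer  # confident enough: answer immediately
          answer = self.ask_human(request)  # below threshold: route to a human
          self.collected.append((request, answer))  # human answer becomes training data
          return answer

Periodically retraining the model on the collected pairs is what would shrink the share of requests that ever reach a human.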

leggomylibro
It's funny that you bring that up - it does seem like the concept of 'extended cognition' is one of the biggest benefits that we've collectively realized from computers (and other relatively nonvolatile communication mediums like books.)

This is a computer-oriented analogy, but most fields have their own tables and charts and maths that are tedious to keep on the tip of your mind. Still, for example, I don't need to remember the details of every API that I use; I can just remember that there is a 'do X' call available, and refer to the documentation when and if I need to actually use it.

In the same vein, I can quickly get a feel for whether an idea is possible by stringing together a bunch of abstract mental models. "Can I do X?" becomes, "are there good tools available for doing A, B, C, and D?", and that information is only a quick search away. Actually using those tools involves an enormous amount of detail, but it's detail that I can ignore when putting an idea together.

And in most cases, that 'detail' is a library or part that already abstracts a broad range of deeper complexities into something that I don't have to think about.

The question becomes something like: how do we expose people to enough information that they are aware of how much they can learn if they need to, without drowning them in trivia that they will never be interested in?

cinquemb
Your example is also related to my experience briefly working in a P&G chemicals R&D lab; the ChemEs around me routinely used Google to look up reaction kinetics of different compounds (as well as other similar queries) rather than rely on their memory. I was attending a local university at the time (mostly for calculus, mathematical modeling using Mathematica, and French), but I'd say this experience largely started my questioning of the value of attending university in general (I dropped out of an Ivy about two years later, for this reason among others).

I suspect that the concept of 'extended cognition', as realized in how people use computers day to day to get work done, is in conflict with how we are mostly taught: rote memorization, then application of information. It should naturally follow that those heavily invested in or exposed to relatively 'non-extended' cognition services have more to lose, and that any currently realistic answer to this:

>The question becomes something like: how do we expose people to enough information that they are aware of how much they can learn if they need to, without drowning them in trivia that they will never be interested in?

will bring cognitive dissonance to those who need the answer most (those with heavy exposure to relatively 'non-extended' cognition services).

ethbro
When you're looking at effects, I think you need to dig down into what exactly is being extended.

Are more data sources being made available? Is data being preprocessed? Is an initial task being automated?

Because the truth for any worker (outside a ruthlessly specialized huge company) is that they may be an "extended cognition" worker, but still perform many "non-extended cognition" activities as part of their job, because there was previously no alternative and work needs to get done.

Fast forward that, and you're never going to fully automate a goal. But you will automate sections of the process that are amenable to machines.

Advice? Recognize which type of work you spend most of your time in, and don't get caught being the "non-extended cognition" person...

Jan 22, 2018 · 2 points, 1 comments · submitted by tosh
tosh
Incredible to realize that this was almost 50 years ago
Dec 19, 2017 · lawry on Dynamicland
I'm instantly reminded of "The Mother of All Demos, presented by Douglas Engelbart (1968)" https://www.youtube.com/watch?v=yJDv-zdhzMY
Pulcinella
Then you’ll like: https://www.dougengelbart.org/firsts/1968-demo-interactive.h...
I haven't used org-mode, so I may be a bit unclear on exactly what is meant by folding editors. However, if we're talking about collapsing sections of text, I wonder if the oN-Line System (NLS) would qualify, as shown in Douglas Engelbart's 1968 "The Mother of All Demos" [1].

[1] https://www.youtube.com/watch?v=yJDv-zdhzMY&t=5m48s

Nov 20, 2017 · 3 points, 0 comments · submitted by fmax30
This image [0] of the setup used for "The Mother of All Demos" [1], [2 - video] reminds me that the way we currently use computers and desks may not be the only way.

The keyboard and mouse controls are on a panel that is connected to the chair so they stay in the correct position relative to your body even if you turn one way or the other.

Just thinking about this video sometimes gives me shivers.

[0] https://cdn.arstechnica.net/wp-content/uploads/2015/04/Willi...

[1] http://sloan.stanford.edu/mousesite/1968Demo.html

[2] https://www.youtube.com/watch?v=yJDv-zdhzMY

I don't buy it that everyone would be a digital creative if they just had the right tools. To make quality content, whether it be apps/music/movies/etc., requires much more than tools.

I respect Alan Kay, but I don't understand the need to bash on modern day technology.

Have we really come a long way in terms of general computing? Maybe not [Example: 0]. But in terms of the digital world, I can take a video of my parents and send it to my cousin who lives 10,000 miles away and he can respond in a matter of seconds.

I can literally meet people in remote areas of the world. I can interact with people who barely have food but somehow can get cell phone access, and now they are learning and communicating with everyone else. I can't imagine a better way to level the playing field (socioeconomically) globally than how we have it.

Do we have a lot more work to do? Sure. But Rome wasn't built in a day.

The future of the internet & technology, the direction it's headed, is going to come from small contributions from millions across the world.

What people will need will turn into what we have and use. And there won't be some magical device that just pops out of nowhere that will change everything.

[0]: https://www.youtube.com/watch?v=yJDv-zdhzMY

TeMPOraL
I see the problem right here:

"To make quality content ..."

The right tools shouldn't be about creating "content", they should be about letting you solve your own problems, instead of trying to transform them into problems you can buy a solution for. Creative arts are only a subset of this - they happen when someone's problem is "I want to make a piece of art".

> Have we really come a long way in terms of general computing? Maybe not

That's the point AK seems to be making, though. All the great things you mention are huge accomplishments, yet they're also disappointing compared to what is possible, what we should have now. Rome wasn't built in a day, but you have this whole army of Rome builders who decided that it's better to sell bricks instead of building the city...

homarp
>I don't buy it that everyone would be a digital creative if they just had the right tools. To make _quality_ content,...

Who cares about quality? Who is the judge of what "quality" is? As long as someone can create the content _and distribute it_, they are a digital creative.

scroot
> "I don't buy it that everyone would be a digital creative if they just had the right tools."

That is not Kay's argument. Not everyone who scribbles notes or draws something on a piece of paper is a "creative" or artist. Not all who write at all are professional writers. But we do not have the computing equivalent of paper for everyone. Sure, someone can write in a word processor or draw in a drawing program, but that's not all that a computer is for -- that's just imitating paper. The outstanding question is: can we make something that is as extensible as pen and paper and literacy for aiding human thought for the next level, the computing level? All we have now are stiff applications. Saying that such things make people literate in computing is like writing by filling out forms.

> "Do we have a lot more work to do? Sure. But Rome wasn't built in a day."

Kay is your ally here, a constant gadfly putting the lie to all the hype and BS. He's a constant reminder that we shouldn't feel so satisfied with our mud huts. He's reminding us to build Rome at all.

  > you have two hands.
Ever watch ‘the mother of all demos’⁰? Engelbart's system was designed to be used with one hand on the mouse and the other on the chordset. Xerox followed this with the Alto, and on to the Star¹, eventually dropping the chordset but keeping the same model: the mouse and the left-side function key block worked together.²

⁰ https://youtu.be/yJDv-zdhzMY

¹ http://www.digibarn.com/friends/curbow/star/keyboard/

² https://youtu.be/_OwG_rQ_Hqw?t=54m25s

Go watch the MOAD, and realise that most of what's changed is cost, scale, ubiquity, and speed.

That's pretty much the definition of incrementalism.

Yes, there've been a number of watershed transitions, but when you're increasing capability/cost by a factor of ten each decade, that's expected.

https://m.youtube.com/watch?v=yJDv-zdhzMY

Xerox was an immensely profitable business once. Being first to market with photocopiers, and keeping a large market share for a long time.

In the Palo Alto Research Center (PARC), they had the right team, the right ideas and research was absolutely going in the right direction.

For instance, they had former SRI International researchers who participated in Douglas Engelbart's "oN-Line System", presented in 1968 in what is now known as "the mother of all demos" (https://www.youtube.com/watch?v=yJDv-zdhzMY, https://en.wikipedia.org/wiki/The_Mother_of_All_Demos).

Their achievements include the creation of the excellent Xerox Alto computer system featuring a GUI and a mouse as input device, which inspired the Apple Macintosh and MS Windows (a story dramatized on multiple occasions, notably in the classic "Pirates of Silicon Valley").

Xerox leadership failed to visualize how innovations like the Alto could be converted into profitable products... even if it looks self-evident today. That was a once-in-a-lifetime opportunity that they let go, and as a result other companies heavily profited from PARC's findings and continue to do so today.

In addition, photocopiers are no longer at the center of business activities, and paper usage is decreasing. This makes Xerox a company of the past, like Kodak or Blockbuster (not trying to be offensive, but it is fair to say so).

"You know what makes the rockets fly? Funding." - The Right Stuff.

What really made PARC work is that they were funded to develop the future of computing by building machines which were not cost-effective. It was too early in the mid-1970s to develop a commercially viable personal workstation. But it was possible to do it if you didn't have to make it cost-effective.

That's what I was told when I got a tour of PARC in 1975. They had the Alto sort of working, (the custom CRT making wasn't going well; the phosphor coating wasn't uniform yet) the first Ethernet up, a disk server, and I think the first Dover laser printer. All that gear cost maybe 10x what the market would pay for it. But that was OK with Xerox HQ. By the time the hardware cost came down, they'd know what to build.

Previous attempts at GUI development had been successful, but tied up entire mainframe computers. Sutherland's Sketchpad (1963)[1] and Engelbart's famous demo (1968)[2] showed what was possible with a million dollars of hardware per user. The cost had to come down by three orders of magnitude, which took a while.

Another big advantage which no one mentions is that Xerox PARC was also an R&D center for Xerox copiers. That's why they were able to make the first decent xerographic laser printer, which was a mod to a Xerox 2400 copier. They had access to facilities and engineers able to make good electronic and electromechanical equipment, and thoroughly familiar with xerographic machines.

Ah, the glory days of Big Science.

[1] https://www.youtube.com/watch?v=6orsmFndx_o [2] https://www.youtube.com/watch?v=yJDv-zdhzMY

alankay1
Almost correct. The first laser printers were not the Dover but SLOT by Gary Starkweather at PARC, which was made on top of a 3600 (much faster engine) in the very early 70s, and its connection to the Ethernet EARS a little later (by Ron Rider using an Alto as the server computer).

PARC really wasn't an R&D center for Xerox copiers (this stayed in Rochester), and Gary was a great engineer who could do the work of making a laser and its optics and scanner work with a standard copying engine pretty much by himself.

BurningFrog
Money is a necessary ingredient, but I've been in and near enough very costly endeavors that produced very little brilliance to think it's probably not even in the top 3 most important ones.
jackfoxy
Money is necessary, but not sufficient.
agumonkey
Kay said it was the organic collaboration between talents that made things possible. If one guy had an idea that required some piece he didn't know how to get, he could ask some guy who would build it. Surely it took root in the copier R&D resources, but a symbiotic pool isn't a given either.
bootload
"Sutherland's Sketchpad"

Fantastic read, "Sketchpad: A man-machine graphical communication system" ~ https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-574.pdf

moftz
There are still a lot of companies that will let you work on your own internal projects as long as you can get your manager to sign off on it. The idea is that if you can get a working prototype or some proof of concept working, you can try to convince a customer to sign on to fund actual production. Some companies have leadership that isn't focused on short-term profits and thus let the smart people they hire use their brains to come up with new things to build to make money on. Not everything is going to be a winner and maybe someone else will come along and make a better version before you can get yours sold but as long as any R&D project resources can be put towards another, the money and effort was not in vain.
joering2
Great post, +1.

So in that case, imagine there is a second PARC somewhere out there... what kind of things, beyond regular folks' wallets, might they be working on right now?

I want to be surprised today like Jobs was when he visited them first time.

visarga
I think DeepMind is another PARC. They never cease to amaze me with their brilliant papers. Something just as great as the personal computer and GUI will be built there.
Baeocystin
Any standouts you are able to link? I'd love to dive in, but I am not sure where to start.
visarga
Wavenet impressed me much: https://deepmind.com/blog/wavenet-generative-model-raw-audio...

Also Decoupled Neural Interfaces: https://deepmind.com/blog/decoupled-neural-networks-using-sy...

And AlphaGo of course.

Baeocystin
Thanks for the links!
BaronSamedi
Funding is a necessary but not sufficient condition. They were the right people, in the right place, at the right time. Of all of those, the time period, i.e., the beginning of the computer revolution, is the most important factor in my view.

There are certain moments in time where such innovation is possible and this was one of them. In this respect they are like The Beatles--the right talent just at the right time.

There isn't anything like Xerox PARC at the moment and won't be until the conditions are once again ripe for another Cambrian Explosion (as another commenter aptly described it) of innovation.

abalone
> Ah, the glory days of Big Science.

This still happens! Silicon Valley gets a lot of long-term oriented funding from taxpayers via DARPA and other government agencies. Siri (named after SRI International, where Engelbart did his work) and autonomous driving (DARPA Grand Challenge) are just a couple of recent examples.

In fact PARC still does too. Although owned by a private company, PARC does government-funded research.[1]

I would say people in SV are generally not very consciously aware of the major role government funding plays in it to this day. It's probably because of the mythos of private entrepreneurship and the uncomfortable narrative of all this so-called "free market capitalism" being supported by billions in government funds, with only a fraction of the profits returned to the public coffers via taxes.

[1] https://www.parc.com/news-release/97/parc-awarded-up-to-2-mi...

asrp
I think the biggest difference is that the funding went from unrestricted to very specific. A lot of questions are asked before money is given. But the answers for an interesting project may involve concepts and vocabulary that do not yet exist, ideas that are not currently trendy. While I can understand the desire to avoid misallocation, this model assumes the funder has a better idea of what needs resources than the funded. From the second point in the answer:

> 2. Fund people not projects — the scientists find the problems not the funders. So, for many reasons, you have to have the best researchers.

So yes, the total amount may be there but is now diverted.

alankay1
This is one of the most important points
TeMPOraL
> While I can understand the desire to avoid misallocation, this model assumes the funder has a better idea of what needs resources that the funded.

Reminds me of a (probably the) reason we're doing standardized testing in schools. A decent teacher is able to teach and test kids much better than standardized tests, but society needs consistency more than quality, and we don't trust that every teacher will try to be good at their job. That (arguably justified) lack of trust leads us as a society to choose a worse but more consistent and people-independent process.

I'm starting to see this trend everywhere, and I'm not sure I like it. Consistency is an important thing (it lets us abstract things away more easily, helping turn systems into black boxes which can be composed better), but we're losing a lot of efficiency that comes from just trusting the other guy.

I'd argue we don't have many PARCs and MITs around anymore because the R&D process matured. The funding structures are now established, and they prefer consistency (and safety) over effectiveness. But while throwing money at random people will indeed lead to lots of waste, I'd argue that in some areas we need to take that risk and start throwing extra money at some smart people, also isolating them from the demands of policy and markets. It's probably easier for companies to do that (especially before processes mature), because they're smaller - that's why we had PARC, Skunk Works, experimental projects at Google, and an occasional billionaire trying to save the world.

TL;DR: we're being slowed down by processes, exchanging peak efficiency for consistency of output.

acdha
Public education is a bad comparison because it drags in a lot of contentious political debates about how we deal with poverty, racism, difficult home environments, taxation, etc. Based on a fair amount of reading on the subject I think the primary thing driving the standardized testing push is the dream that with enough data we'll find some miracle teaching technique which will avoid the need to tackle the actual hard problems.
DLTarasi
Agreed. Optimization for process over outcomes seems to be a recurring problem in large organizations. Unfortunately it's not hard to understand why decision makers prefer this approach given the pressures of shareholders/voters. No one gets fired for following the process.

A similar issue was just discussed yesterday in a thread on Jeff Bezos' letter to shareholders:

"A common example is process as proxy. Good process serves you so you can serve customers. But if you’re not watchful, the process can become the thing. This can happen very easily in large organizations. The process becomes the proxy for the result you want. You stop looking at outcomes and just make sure you’re doing the process right. Gulp."

https://news.ycombinator.com/item?id=14107766

_9vzr
Standardized testing has also restricted teachers to standardized teaching. They must complete X number of units from the binder every Y number of days. This leaves little time for teachers to come up with their own lesson plans. I think there needs to be a balance between a teacher's autonomy in the classroom and standardization. I've heard many teachers complain, while I was in school, that they felt like they didn't have the time to focus on the things they wanted to do in class. In my elementary school, the class would go to a different teacher for either STEM or liberal arts. I think this is a good start, because teachers can solely focus on material that is related; they can tie in lesson plans and make sure that the standardized subjects are taught efficiently, leaving time for the other topics they want to focus on.

I remember in middle school, I had a couple of teachers who would say "ok, this is the standardized stuff I have to say" and then go into a very boring lecture on what we had just been learning, except that the way we first learned it was interesting and engaging. They were simply dotting their i's to make sure that they did their job in conveying the material the state wanted us to know.

petra
But is it really a trade-off, for every teacher or scientist?

Sure, some teachers could use the freedom to teach better (and create new teaching methods), but statistically, won't most teachers perform better when driven by above-average methods?

And as for research, sure we need those big breakthroughs, but we also need many people to work on the many small evolutionary steps on each technology. Surely we cannot skip that?

TeMPOraL
The issue here, IMO, is that trustless processes deliver consistency and reliability at such huge costs to quality of outcome that the outcome becomes below average. In other words, an average teacher could do a better job working freely than directed by standardized testing, but the results would be so varied in quality and direction[0] as to be useless within the current social framework. And it's not that society couldn't handle it; it's that when building systems, we want to make things more consistent, so they can be more easily built upon.

Basically, think of why you build layers of abstraction in code, even though it almost always costs performance.

--

[0] - e.g. one teacher focusing on quality math skills, another on quality outdoors skills, etc.

petra
Layers of abstraction in technology don't always cost performance, when you look historically and system-wide. For example, the fact that digital technologies were modularized (physics+lithography/transistors/gates/../cpu's/software developed without tight coupling/optimization) allowed faster improvements across the whole stack, according to some researchers, who also think that, in general, modular technologies advance faster. And modularity always requires some standardization.

And as for education, I don't think standardization always leads to performance loss. For example, the area of reading has seen a lot of research, and one of the results is "direct instruction", one of the best methods to teach reading, and a highly standardized one.

But maybe what you're saying is true for standardized tests as currently implemented.

TeMPOraL
You always trade something off when you're abstracting. Some things become easier, but at the expense of other things becoming more difficult.

To use your example - more modularized technologies are more flexible and can be developed faster, but they no longer use resources efficiently. A modular CPU design makes the design process easier, but a particular modular CPU will not outperform a hypothetical "mudball" CPU designed so that each transistor has close-to-optimal utilization (with "optimal" defined by requirements).

Or compare standard coding practices vs. the code demoscene people write for constrained machines, in which a single variable can have 20 meanings, each code line does 10 things in parallel, and sometimes compiled code is itself its own data source.

--

The way I see it, building abstractions on top of something is shifting around difficulty distributions in the space of things you could do with that thing.

petra
Sure, in the small - a few developers working - fully optimizing stuff usually works better.

But on bigger design spaces, when you let hundreds of thousands of people collaborate, create bigger markets faster, and grab more revenue - you enable a much more detailed exploration of the design space.

And often, you discover hidden gold. But also - if you've discovered you've made a terrible mistake in your abstractions - you can often fix that, maybe in the next generation.

humanrebar
> A decent teacher is able to teach and test kids much better than standardized tests, but society needs consistency more than quality, and we don't trust that every teacher will try to be good at their job.

It's more about it being way too hard to fire bad teachers; standardized tests, in theory at least, are an attempt at providing objective proof of bad teaching that won't be subject to union pushback and/or lawsuits.

I think a lot of the desire for standardized testing would dissolve if principals could just make personnel decisions like normal managers. And, of course, it's up to their superiors to hold them accountable for being good at that job.

> ...we're losing a lot of efficiency that comes from just trusting the other guy.

I think people have been trusting public schools a lot. That generally worked out great for people in affluent suburban districts. And it worked out horribly for people in poor, remote, and urban districts.

TeMPOraL
Well, that's my point expressed in different examples. Standardized testing makes the system easier to manage from the top, and makes the result consistent. Firing "bad teachers" is a complex problem - some teachers can get fired because they suck at educating, but others just because the principal doesn't like them, etc. With standardized rules and procedures, you try to sidestep the whole issue, at the cost of the rules becoming what you optimize for, instead of actual education (which is hard to measure).

> I think a lot of the desire for standardized testing would dissolve if principals could just make personnel decisions like normal managers.

That could maybe affect the desire coming from the bottom, but not the one from the top - up there, the school's output is an input to further processes. Funding decisions are made easier thanks to standardized tests. University recruitment is made easier thanks to standardized tests. Etc.

> I think people have been trusting public schools a lot. That generally worked out great for people in affluent suburban districts. And it worked out horribly for people in poor, remote, and urban districts.

That's the thing I'm talking about. I believe it's like this (with Q standing for Quality):

  Q(affluent_free) + Q(poor_free) > Q(affluent_standardized) + Q(poor_standardized)
  Q(affluent_standardized) < Q(affluent_free)
  Q(poor_standardized) > Q(poor_free)
E.g. education without standardized tests may be better for society in total in terms of quality, but then it totally sucks to be poor. Standardized education gives mediocre results for everyone.
nradov
The usual "business school" approach to preventing metrics from causing unintended effects is to assemble a balanced scorecard of a dozen or metrics. If you do it right the negative factors of each cancel each other out. So for example to evaluate public school teachers instead of just looking at student improvements in standardized test scores relative to their peers you could also factor in subjective (but quantified) ratings from principals / students / peers, number of continuing education credit hours, student participation levels in extracurricular programs, counts of "suggestion box" ideas, etc.

http://www.balancedscorecard.org/Resources/About-the-Balance...
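
As a toy sketch of that idea in Python (metric names, weights, and values below are invented for illustration), a scorecard reduces to a weighted average of normalized metrics, so no single measure can dominate the evaluation:

  # Hypothetical balanced scorecard: a weighted average of normalized
  # (0-1) metrics, so no single metric dominates the evaluation.
  weights = {
      "test_score_gain": 0.30,
      "peer_rating": 0.20,
      "student_rating": 0.20,
      "extracurricular_participation": 0.15,
      "continuing_education_hours": 0.15,
  }

  def scorecard(metrics):
      return sum(weights[name] * metrics[name] for name in weights)

  print(scorecard({
      "test_score_gain": 0.6,
      "peer_rating": 0.8,
      "student_rating": 0.7,
      "extracurricular_participation": 0.5,
      "continuing_education_hours": 0.9,
  }))  # prints roughly 0.69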

humanrebar
> It's probably because of the mythos of private entrepreneurship and the uncomfortable narrative of all this so-called "free market capitalism" being supported by billions in govt funds...

I think this is a really bad straw man. There's broad bipartisan support for funding research into good ideas, assuming it's done the right way. The problem, though, is that the billions in government research spending we already allocate are subject to incredible amounts of lobbying and earmarking.

We'll have a NASA allocation, but it has to take place in the state of a particular Senator. Or the research will be funneled through the department of defense and received by defense contractors.

And, besides, for every DARPAnet, we have many "Where does it hurt most to be stung by a bee?" (1).

It's hard to evaluate whether we would still end up in a net positive situation. I don't have hard feelings against people who are both enthusiastic and skeptical about government funding grants.

(1) https://www.flake.senate.gov/public/_cache/files/ef6fcd58-c5...

cma
> And, besides, for every DARPAnet, we have many "Where does it hurt most to be stung by a bee?" (1).

Did they invest the same amount in these two?

humanrebar
They invest more than DARPAnet in programs that get earmarked. Entire jets and battleships are researched, developed, and manufactured for political reasons.

I was just pointing out that programs like DARPAnet might not make up for all the other nonsense.

And we can flip that around as well, maybe PARC was spun off from some successful grants, but we're not taking into account these sorts of failures when attaching the price tag to those initial grants.

nataz
I really dislike research hit jobs like the link you posted. At best they are disingenuous, at worst they are outright lies. The funding attached to the projects is for the total grant amount provided, which may fund multiple projects, staff, or overhead.

From the link:

METHODOLOGY Specific dollar amounts expended to support each study were not available for the projects profiled in this report. Most were conducted as parts of more extensive research funded with government grants or financial support. The costs provided, therefore, represent the total amount of the grant or grants from which the study was supported and not the precise amount spent on the individual studies. This is not intended to imply or suggest other research supported by these grants was wasteful, unnecessary or without merit.

humanrebar
Agreed. I poked around for more primary sources but the office of a senator seemed less controversial than partisan news organizations. Actual links to studies would be less useful since the "interesting bit" would be a note about funding coming from the NSF or something.

It's an incomplete and distorted picture of how bureaucracy funds research. I brought it up to add counterpoints and context to the implication that government is actually good at this sort of thing if we'd just get over free market capitalism (or something).

infinite8s
You think the office of an avowedly partisan senator would be less contentious than a putatively 'neutral' news organization?
humanrebar
Than the sources I found? Yes.
afsina
Siri was an inferior product compared to others not funded by government (even after Apple took over). So that is not a glorious example of government funding. Government is extremely ineffective, and most funds are wasted. But sure, some stuff will appear from all that loot.
Pica_soO
Yes, I mean, it's not like a company locked the telephone away because they feared damage to their gramophone disc sales. That would be shortsighted madness that companies are incapable of, if you read the history books released by their PR departments.
timClicks
Inefficient != ineffective. If you're accessing a resource via the Internet, you're a testament that public funding of R&D can work.
supremesaboteur
That was not what the parent was saying. The US Government spends a large amount of money every year. They are bound to make some things work. The private sector makes some things work also. So the question is: which should be preferred? If you agree that government spending is less efficient at making things work, you should prefer private spending. Unless you believe there are things private spending cannot do, which is a reasonable case for demanding public funding of research. For example, some argue that the quarter-to-quarter nature of how private companies operate precludes them from making investments in long-running research. But companies like Google do run multi-year research projects.
petra
DARPA doesn't spend that much money, a few billion a year, and has a pretty good track record, probably better than most corporations.
munin
I've been funded by DARPA in one way or another and DARPA is not big science. DARPA is an applied R&D agency, meaning they want to see a path from what you are talking about to some broad application, preferably in the military space. If you can't describe that path to them, the odds of getting funded by them are low (this is drawn from experience of winning more than 10mil in funding from DARPA on many different programs).

The DARPA model is also not big science because the researchers on the outside do not pick the problems, the PMs at DARPA do. The model works when a visionary becomes a PM and is given a "big" (20-80 mil) bucket of money to disperse to researchers to enact their big ideas. If the PMs are truly visionary, then this works. If they are not, you wind up with a sort of soup of crap.

Additionally, since it's applied, DARPA will do frequent (once a quarter and once a year) check-ins, measurements, and progress reports. If you don't measure up during those, your funding gets cut. So as a researcher, it is a challenge if what you want to do is a little bit off the path of what the PM wants to do.

I have much more limited exposure to IARPA but it seems to be the same way there.

I have more experience with NSF, which is the total opposite: researchers propose their own projects and there are infrequent touchpoints, and no cut points really. However, NSF will give you one to two orders of magnitude less money than DARPA.

samstave
Gosh, this is an interesting thread...

Request: can you give an AMA on how one actually proposes/gets funding from darpa or at least an ELI5 on the topic...

What sort of idea should one have where you say to yourself "gee, I should hit up darpa with this!"

munin
Honestly there is a truth and a super-truth, to use theater terms.

The truth is that you should look at the Broad Agency Announcements (BAAs) that DARPA, specifically the Information Innovation Office (I2O), makes. There will be big PDFs that have lots of boilerplate and look really boring, but these are basically documents written by a program manager (PM) that describe what their envisioned research program is, how they see it being divided up, and what kind of researchers they see doing work in each area of their research program. A research program is split up into many (between 3 and 8) technical areas (TAs) and there will be language in the BAA describing how many TAs one performer might propose to.

Overall a TA can be described at a high level as one of a few roles: a blue team, a red team, and a white team. There will usually be more than one blue team, at most one red team, and precisely one white team. A blue team does the R&D on the program. Each blue team can have its own unique approach to addressing the research challenges laid out by the PM. Usually the blue teams have competing or complementary approaches; it is seen as a waste of money to pay two blue teams to do exactly the same thing. A red team evaluates the work done by the blue team. Exactly what this means depends on what the blue team builds. The white team "holds the room together" by fitting the PM's vision together with what the blue team does. The white team usually has a much closer relationship with the PM, and usually the white team is pre-determined by the time the BAA is made public.

Blue teams are usually teams of 2-4 companies / universities. These teams usually form before a BAA comes out due to people "in the know" getting a rumor about an upcoming program, or just after it comes out, based on prior relationships. It's kind of rare to have a blue team that is only one entity, but depending on the entity and the size of the program, it can happen.

The path to a successful proposal is demonstrating that your team has ideas that are relevant to the PM and that you can successfully execute them. You show this by having a stack of prior work, good ideas, and a well-formed and coherent proposal. Think of the proposal as an audition - if you as a team can't get it together for a month and write a 30-page document describing what you want to do, you probably can't keep it together for 4 years working on something. Your team reads the BAA, does some analysis, figures out how your ideas and past work can apply to the research program proposed by the BAA, and then you write the story up. If you do a good job, and your story is more compelling than other people's, you get the money.

The super-truth is that it's an old boys club with a lot of luck and nepotism.

The way people get "in" is when their peers go to DARPA to be a PM. Then you, the researcher, can think "Well Dr. Smith knows about X and likes X+Y, I do some Y, I have an idea that could use some funding, what would Dr. Smith like to read about." Then you write it up and e-mail it to them.

Luck is a big factor here because PMs get e-mails like this all the time. It's almost essential to already be in the PM's rolodex/friend group to get some of their cycles. They like to say that they want people outside their circle to approach them with new ideas, but that's kind of a white lie; there are lots of people in the world and a finite amount of the PM's time.

samstave
Fantastic insight. Thank you.
itchyouch
Seems like most of the best opportunities in general are exactly like this. It's not about what you know, but who you know...
chrisweekly
It's not who you know, it's who knows you.
IIIIIIIIIIII
If you see humanity as a brain and a human as a neuron, the network produces outcomes unimaginable from looking at the individual. Which is easy to see if you look at an individual stone-age human - and even they are more advanced, thanks to culture and groups (tribes), than an individual human grown up alone would be (more akin to a "Mowgli", grown up without the learning and support of the group).

So it's no surprise who you know is important - the brain is mostly about connectivity, not about the individual neuron.

Just a model for thought, obviously not a complete description ("every model is wrong"). I think the value of this model, if you can warm up to it, is that you stop worrying that "it's all about the connections" - because it really is and it's useful that way because that's a major point of how a network works. So do work on your connectivity! And also just like in the brain, a few high-quality connections are worth much more than a thousand low-quality ones.

munin
Oh it's worse than that, it's what you know and who you know.
srbl
I work on several DARPA programs and agree with most of this. But some programs are by design far more basic or applied than others. Also worth noting - DARPA generally doesn't pay contractors anything like what the private sector pays employees.

The programs (and their vision, and their success) depend quite a bit on the PM, and on their ideas and engagement. To a lesser extent success depends upon the SETAs as well.

munin
> DARPA generally doesn't pay contractors anything like what the private sector pays employees.

Eh. You can get something like a $200/hr rate. If you're a small company with low overhead, you can get paid quite competitively with the private sector in base comp. It's difficult to match big G stock grants, though; no one is becoming a millionaire off of this work.

Unless of course what you do on the grant can be turned into a billion dollar company, because (generally) contractors walk away from research programs with liberal rights to the IP they create, with the government retaining some rights to use, but very rarely (IME) retaining direct ownership. Of course, it's probably not that likely that what you do can turn into a billion dollar company but hey, one can dream...

omarchowdhury
Thankfully one doesn't (usually) need to create a billion dollar company to become a millionaire.
If you enjoy reading documents, BITSAVERS.ORG is a treasure trove of lost knowledge.

https://archive.org/details/bitsavers

Here you will find old research documents, papers, and manuals from Xerox PARC, DEC, Burroughs, Apple, Sun, Borland, ...

In the context of static compilation, for example in MS-DOS, take the Turbo Pascal 6.0 Programmer's Guide.

https://archive.org/details/bitsavers_borlandturVersion6.0Pr...

For some 8 bit love you can check ZX Spectrum and Atari archives.

http://www.worldofspectrum.org/

http://www.atariarchives.org/

Next step is to get an emulator for any of those old OS/computers and try to follow along one of those books.

Also there are some videos on YouTube showing those systems.

"The Smalltalk-80 Programming System"

https://www.youtube.com/watch?v=JLPiMl8XUKU

"NeXT vs Sun"

https://www.youtube.com/watch?v=UGhfB-NICzg

"Steve Jobs internal demo of NeXTSTEP 3"

https://www.youtube.com/watch?v=gveTy4EmNyk

"Amiga Programming"

https://www.youtube.com/watch?v=WOAyVIWFaXQ&list=PL1E7187BCF...

"Amiga Hardware Programming"

https://www.youtube.com/watch?v=p83QUZ1-P10&list=PLc3ltHgmii...

The Mother of All Demos"

https://www.youtube.com/watch?v=yJDv-zdhzMY

Just a very tiny sample of videos

nurettin
I emailed the bitsavers link to our programming team at work with the subject: "jurassic park"
Douglas Engelbart's video is another classic, from 1968:

https://youtu.be/yJDv-zdhzMY

Light pens and chorded keyboards in the 1960s.

Dec 06, 2016 · 3 points, 0 comments · submitted by mobitar
Features such as chat and video, which were revolutionary in 2004-2005

They were revolutionary in 1968: https://www.youtube.com/watch?v=yJDv-zdhzMY

wolfgke
Chat and video was revolutionary in 2004-2005 on the web.
jacquesm
The hell they were, I pioneered that in 1995.
Oct 21, 2016 · 2 points, 0 comments · submitted by mgalka
Came here to give the same answer. Between "Future of Programming" and the Mother of All Demos (https://www.youtube.com/watch?v=yJDv-zdhzMY) I find it mindbogglingly depressing that it feels so much like the entire programming field has stagnated for the last 40+ years, re-implementing the same [flawed? limiting? incorrect?] ideas over and over.
First, the "Mother of all Demos" by Doug Engelbart: https://youtu.be/yJDv-zdhzMY This was in 1968, at a time when most people thought about computers as being machines for solving computation problems, like processing payrolls or calculating rocket trajectories. Engelbart and his students had the radical idea that computers could be used for human "knowledge worker" productivity. In one 90 minute presentation, he introduces everything from the idea of a GUI, to the mouse, to word processing, hypertext, computer graphics, and (simulated) videoconferencing. You have to be able to put yourself in the shoes of the audience that has never seen this stuff before, and it'll blow you away.

Something more recent: Martin Fowler's great introduction to NoSQL: https://youtu.be/qI_g07C_Q5I Not so technical, this is a great overview of the reasons why (and when) NoSQL is valuable. He crams a lot into a short speech, so it's one of the rare videos I've required students in my database classes to watch.

Now, really getting away from the technical, I have to recommend watching the IDEO shopping cart video: https://youtu.be/taJOV-YCieI This is the classic introduction of Design Thinking to the world, in 1999. If you're using the Lean Startup or an Agile method, but have never heard of IDEO's shopping cart, you may be able to get along fine at work, but you should be kind of embarrassed like a physicist who's never read Newton.

The mother of all demos by Douglas Engelbart https://www.youtube.com/watch?v=yJDv-zdhzMY

How I met your girlfriend: https://www.youtube.com/watch?v=O5xRRF5GfQs&t=66s

rdegges
I had the pleasure of working with Samy back at Fonality (my first job!) He's such a cool, smart, and just all around amazing dude. Anything he does is always fun, interesting, and hacker-ish.
Sep 04, 2016 · 3 points, 0 comments · submitted by dogma1138
I actually always thought the part of "the mother of all demos"[1] where "the mouse" is introduced is a bit awkward, like they know it's a poor hack. Just because we've got a lot of mileage out of it doesn't mean it's a great input solution. One IMNHO good indication of that is that children (not just your daughter :) seem to overwhelmingly prefer touch input.

I'm mostly in agreement with the article, but the author also misses some things, I think:

> Yes, the iPad Pro has a very fancy stylus. That’s great for artists, but the vast majority of people aren’t artists and don’t care.

> Yes, the iPad Pro has a very fancy, true-to-color screen. That’s great for artists, but the vast majority of people aren’t artists and don’t care.

I think good touch and pen input really is still the obviously better way to input. It doesn't work well (enough) without a few decades of UI engineering and real-world testing -- but a "real" digital drawing pad and touch interface is probably a very sensible way to spend the improvements Moore's law has given us: my Amiga 2000 had 4(?) MB of RAM and a 7MHz processor; my current desktop has four cores at 4GHz and 16GB of RAM. Using some of that to go from 4096 colours at 320x256 to 24-bit 4K or 8K, along with real-time input in the form of free-form painting/writing, seems reasonable.

The fact that it's so hard to paint with a mouse that most computer users aren't digital artists, while everyone that's been given pen and paper will have at least doodled, is a great indication that we need better input for our digital devices. As for the better screen - I think getting pixel resolution on par with print resolution can only be a good thing. Personally I find it much more comfortable to read a thousand pages at high dpi than at "standard" +/- 1080p on a 20-24" monitor.

[1] http://www.wired.com/2010/12/1209computer-mouse-mother-of-al...

"The Mother of All Demos, presented by Douglas Engelbart (1968)"

https://www.youtube.com/watch?v=yJDv-zdhzMY

They also had hypertext in 1968:

https://www.youtube.com/watch?v=yJDv-zdhzMY

It's a failure in the sense that it's not generally available or in general use. I've also used telepresence rooms before and they're not that great. Better than Skype or Google Hangouts, but not by much.

This is the Mother of All Demos... Congrats to the team, this is amazingly done!

https://www.youtube.com/watch?v=yJDv-zdhzMY

Aug 20, 2015 · 2 points, 0 comments · submitted by anant90
Do 20 year olds feel lost when their next big thing is a derivative work from the 70s[0]?

[0] https://www.youtube.com/watch?v=yJDv-zdhzMY

rbl
Just picking a nit here... The 'Mother of all Demos' is from the end of 1968, so the '60s. I was about 2 months old when that happened. :)
Aug 05, 2015 · 1 points, 0 comments · submitted by septerr
Aug 03, 2015 · e12e on The Web We Have to Save
Kind of want reply both to you and @pjc50:

I think moving images with sound as part of hypertext/hypermedia is a given -- and has been since (at least) the 60s. That's not just "TV". And "TV" isn't just video.

We've only recently entered the era where users can reasonably make live and recorded video, and use that as a form of expression (from video chat, to youtube/vimeo to snapchat etc). And just like you could take a tv camera and use that for video conferencing in the 60s[1] -- video-the-medium is much more than "TV".

"TV" is indeed very pacifying. For myself, I can on days off, do two very different things that are very immersive: watch tv series, or play Star Wars: Online Galaxies. Now, I mostly play SWTOR solo, so for the purpose of this discussion, lets just use "play a single-player computer game". I enjoy both, but if I read a book or play a computer game -- I'm mentally invigorated. If I watch a TV series, even something interesting like "Humans"[h], "Mr. Robot"[r] -- or more interesting drama like "True Detective"[t] -- I always end up more passive. Even watching documentaries, or news -- TV ends up a very passive medium.

And that's (IMNHO) just the good "TV". I find that a good movie is usually between "book" and "TV" -- something with the format/runtime that drives a different expression both in writing and directing.

> We've been trending away from the 'web as a collection of books' to 'the web as a series of TV channels' for many years now.

I don't think the web has ever been much like a "collection of books". It might have been "a collection of essays" -- and I don't think we're really trending towards "a series of TV channels". I think we'll still see video expression and communication evolve, and I think user generated communiques ("content" is more something that oozes from a (corporate) organization) will continue to dominate.

It is my impression that young people (~3-20) are much more inclined to use video than I am -- I don't much like it. But I expect rich media images/sound/video will continue to increase, as bandwidth/storage gets cheaper.

I think that trend in media is different from the trend in centralization, commercialization though. There's nothing wrong with videologs (other than that I prefer a well-written text...) -- as long as we can host them in a more distributed manner than we do today.

I think we'll see interesting times ahead, as the people that are now 5 years old grow up to master both rich media and various forms of programming/automation as a natural part of their life. We've seen nothing yet. Nothing.

A post-script on quality: I think many of us want quality. And I think we'll continue to make it, and find it. Be that small Internet radio-stations that do curated music (as opposed to things like soundcloud that I also like, but unfortunately is just as hopelessly centralized as youtube) -- or "real" blogs -- or whatever.

It remains to be seen how/if this will all be funded. Maybe we won't have as much paid-for content/art -- maybe we'll all have more free time -- assuming we don't kill each other as jobs are automated away, and those who own the robots keep all the riches.

[1] https://www.youtube.com/watch?v=yJDv-zdhzMY

[h] https://en.wikipedia.org/wiki/Humans_(TV_series)

[r] https://en.wikipedia.org/wiki/Mr._Robot_(TV_series)

[t] https://en.wikipedia.org/wiki/True_Detective_(TV_series)

e12e
Re: Essay vs Blog -- I had the good fortune to come across "American Essays"[1], edited by Charles B. Shaw, in a thrift shop some years ago. Shaw opens the book with, in part:

"The Essay In America (...) There is no literary form that embraces so wide a variety of inclusions. Adjectives commonly used to describe the sorts of essays indicate this diversity: familiar, critical, didactic, informal, nature, expository, historical, reflective, travel, personal, descriptive, and humorous are a dozen of these qualifiers. The noun, too, is varied: article, character sketch, causerie, feuilleton, fantasy, anecdote, paper, satire, miscellany, ephemera, impressions, and reverie are another dozen terms applied to the multiformity of the essay. The composition that can be thus diversely denoted cannot be rigidly defined. The essay may be as personal as a toupee or as austere and sublime as a Himalayan peak; it may be as light as the foam on faery seas or erudite as the disquisitions of him whose

"... words of learned length and thundering sound amaz'd the gazing rustics rang'd around';

it may be fun or the prickliest sort of intellectual stimulation; its ideas may be straws which build bridges or rugged ores polished to the texture of satin. Its range of subject matter is infinite. One might substitute "essay" for "man" in the sentence from Terence's The Self-Tormentor: "I am a man, and nothing that concerns humanity do I deem a matter of indifference to me." In place of a sedate definition perhaps we may compromise on the analogy of the twentieth-century American essayist, Stuart Pratt Sherman: "The ordinary life indeed is itself an essay, starting from nowhere in particular and arriving at no definite destination this side of death, but picking its way, like a little river, now with 'bright speed', and now with reluctance and fond lingerings over all sorts of obstacles and through all sorts of channels, which would be merely humdrum but for the shifting moods and humours that play over a bottom of commonplace with the transient magic of shadow and light."

Words from 1954 that I think are a fine lens with which to view both the "blogosphere" and our evolving social media in general.

[1] http://www.amazon.com/American-Essays-Charles-B-Shaw/dp/B001...

Some of TempleOS actually reminded me of the MOAD[1], and even the Acme[2] editor a bit. The beauty of plain text is that it's trivial to interpret, though (unicode not so much). Anything else and you need to start to worry about backward/forward compatibility, byte orders, etc... I imagine it's probably easier when you have only one platform to think about though.

I liked the index concept. There has been more than one (successful) effort to build a filesystem that fully supports indexing content, but they're generally implicit indexes. Explicit indexing via file content might be interesting.
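As a toy illustration of that last idea (entirely my own sketch, not anything TempleOS actually does): an explicit content index can be as simple as a word-to-files map built by walking a directory tree.

    #include <filesystem>
    #include <fstream>
    #include <iostream>
    #include <map>
    #include <set>
    #include <string>

    namespace fs = std::filesystem;

    int main() {
        // Toy explicit content index: word -> set of files containing it.
        std::map<std::string, std::set<std::string>> index;

        for (const auto& entry : fs::recursive_directory_iterator(".")) {
            if (!entry.is_regular_file()) continue;
            std::ifstream in(entry.path());
            std::string word;
            while (in >> word)                      // naive whitespace tokenization
                index[word].insert(entry.path().string());
        }

        // Query: which files mention "TODO"?
        for (const auto& path : index["TODO"])
            std::cout << path << '\n';
    }

A real filesystem-level index would of course update incrementally on writes rather than rescanning, but the data structure is the same shape.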

I'm not sure what the real justification for running everything in Ring-0 is. I can't think of many cases where anything more than minimal privileges would be a benefit.

[1] https://www.youtube.com/watch?v=yJDv-zdhzMY

[2] http://acme.cat-v.org/

Those who do not learn from history are condemned to repeat it. Watching the Mother of All Demos once a year is a sobering reminder of that.

https://www.youtube.com/watch?v=yJDv-zdhzMY

some better-than-most discussion here:

http://arstechnica.com/the-multiverse/2015/04/from-the-vault...

The most complete example of a structured editing system I know of is Doug Engelbart's NLS system from the Mother of All Demos (from 1968!) [0]. In the system, text, drawings and code are all structured hyperdocument data. It is quite different from most software we use today. As seen from the demo, structured editing works well not just on code but also on text.

I have been experimenting with implementing some of these ideas in the browser[1]. I primarily use the system to take text notes and write down my ideas. You can't program in it yet, but part of the program is driven by data structures that are created, edited and manipulated in the system itself. Instead of looking at and manipulating the raw data, however, you can render the data to look like a pseudo DSL.

[0] https://www.youtube.com/watch?v=yJDv-zdhzMY

[1] https://github.com/smarks159/hyperdocument-system-wiki

Yep, I agree it wasn't simple at all, but what percent of world GDP did it take for me, with a MacBook running Chrome, to connect to Facebook and post a message?

Again, building a one-off is easier than a consumer-ready working system deployed across millions of locations that is owned by no central authority and has to run continually. That's why we had SHRDLU (https://www.youtube.com/watch?v=QAJz4YKUwqw and http://hci.stanford.edu/winograd/shrdlu/) and The Mother of All Demos (https://www.youtube.com/watch?v=yJDv-zdhzMY) years ahead of their time. That was my point.

Windows 3.1 to Windows 8 is incremental. This, however, was not:

https://www.youtube.com/watch?v=yJDv-zdhzMY

It's hard to pin down, but I'd define a fundamental advance as one that either (a) multiplies the number of addressable problems, or (b) constitutes a sudden and sizable nonlinear jump in capability over a very short period of time.

It does often result from the combination of existing things, but it's always a discontinuity. A fundamental innovation is when 1 + 1 + 1 = 1000, not 3. It's a big monumental win that unlocks whole sections of the "technology tree" that were not reachable at all before.

An example of (a) would be the final integration of all the components necessary for a CPU onto a single die. An example of (b) would be what the development of the alternating current transformer did for energy transmission over long distances.

The most dramatic examples are both. Examples of these would be the first radio transmission or the first rotary electrical generator. These were utterly unprecedented innovations that changed human life radically. They weren't linear at all.

You're right. I haven't seen many of these in my lifetime. I can think of some low-magnitude borderline cases but nothing Earth-shattering that has touched me directly. But if I'd lived from, say, 1900 to 1970, I'd have seen many in rapid succession.

You seem to be arguing that such things do not exist-- that the fitness-weighted combinatorial state space for technology is completely smooth and incrementally traversable. I think you should study a bit of combinatorics as it applies to areas like evolutionary information theory. Discontinuities in slope, fitness valleys between significantly different local maxima, and isolated peaks unreachable by incremental hill-climbing are common features of real-world fitness landscapes.

dragonwriter
> Windows 3.1 to Windows 8 is incremental.

Arguably Windows 3.1 to anything in the DOS-based line ending with Windows Me is incremental, and Windows NT 3.1 to anything in the line of which Windows 8.1 is the most recent iteration (until Windows 9 is released) is incremental, but the shift from the DOS-based line to the NT-based line, when XP extended the NT line into the consumer space, was not incremental.

For those of you on HN who haven't seen it yet, I strongly encourage watching Doug Engelbart's "Mother of all Demos" [1]. You'll be amazed by what technology was in development in 1968. Keyset was just one part of the demo.

[1] https://www.youtube.com/watch?v=yJDv-zdhzMY

agumonkey
It was such an anachronistic experience when I watched it the first time. You start questioning our societies' ability to let progress emerge.
I've had an intuitive sense for a long time that the innovation rate has fallen dramatically since the 1950s. What we're doing now really feels like small incremental improvements to technology that was all invented prior to 1970.

Has anything actually been invented since 1970?

Relevant:

https://www.youtube.com/watch?v=yJDv-zdhzMY

sp332
Alan Kay asked Stack Overflow the same question. https://stackoverflow.com/questions/432922/significant-new-i...
api
The responses are not encouraging. They're either incremental improvements on things invented before 1980 (often before 1970) or the poster is misinformed and the idea pre-dates 1980.

The WWW is actually a 1960s idea. Watch Engelbart's demo.

Sometimes I wonder if the problem is the massive growth of free market ideology (market fundamentalism) since 1970: Ayn Rand, the Reagan Revolution, libertarianism, etc. The free market is heavily biased toward extracting pre-existing value and things with short time horizons. It rarely spends money on very speculative R&D, and the culture that it begets is a hustler culture of quick-to-market, quick-to-exit, etc. Things like the Engelbart demo are the product of very smart people with very rough, flexible deadlines sitting around and thinking a lot. There's no way to monetize that. A related problem is that invention itself is hard to monetize except via patents... the innovator often loses in the market to fast followers.

In machine learning terminology we could say that the free market is a greedy hill climbing algorithm prone to converge on local maxima and unlikely to cross fitness valleys.
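To make the metaphor concrete, here's a toy sketch (my own example, assuming a one-dimensional fitness landscape with two peaks; nothing rigorous) of a greedy climber stalling on the lower one:

    #include <cmath>
    #include <cstdio>

    // Toy landscape: a small peak near x = -1, a taller one near x = 2,
    // separated by a fitness valley.
    double fitness(double x) {
        return std::exp(-(x + 1) * (x + 1)) + 2.0 * std::exp(-(x - 2) * (x - 2));
    }

    int main() {
        double x = -1.5;            // start on the slope of the smaller peak
        const double step = 0.01;

        for (;;) {                  // greedy: only ever accept uphill moves
            double here = fitness(x);
            if (fitness(x + step) > here)      x += step;
            else if (fitness(x - step) > here) x -= step;
            else break;             // no uphill neighbour: converged
        }
        // Ends up stuck near x = -1; the higher peak near x = 2 is never
        // reached because every path to it first goes downhill.
        std::printf("stuck at x = %.2f (fitness %.3f)\n", x, fitness(x));
    }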

shakethemonkey
Likewise, the helicopter is a "Renaissance idea" (Leonardo da Vinci). In both cases, the underlying socio-technological substrate was insufficient at the time of the lofty idea.
protonfish
The Internet and everything in it?
ori_b
Was created as ARPANET in the 1960s.
api
All invented and in many cases first prototyped in the 60s.
zenogais
It's important not to confuse the rapid production and reproduction of novelty with innovation.
pantalaimon
Where do you draw the line?
zenogais
I feel like this @hipsterhacker tweet sums it up nicely: https://twitter.com/hipsterhacker/status/466077431061696512
darkmighty
1) The source, "Asimov's Chronology of Science and Discovery" is obviously biased towards older innovations. It's difficult to judge a priori the impact of current innovations. (and it's pretty outdated by now)

2) Breakthroughs are simply very impactful innovations -- of course earlier innovations are going to have an advantage: it's like saying "The total number of descendants per capita has been dwindling since the dawn of ages. We're doomed!". Claiming the rate of breakthroughs should remain constant is ridiculous.

3) Additionally, inventions are information. They're cumulative and knowledge intensive. We can't expect them to occur at a constant rate simply because it will get harder and harder to learn everything required to invent some really new technology/scientific discovery. I like to think of human knowledge as a chunk of information that we compress and pass along. There's only so much we can compress, and our brain hasn't really changed significantly to accommodate and absorb more information faster. The result of course is specialization, and again specialized innovations by definition have limited scope.

4) To claim we had no specialized breakthroughs recently is even more ridiculous. Take the computing aspect of engineering alone (and math) and you probably get more small breakthroughs than there were inventions at all in the 18th century. Telecommunications also developed completely in the last few decades.

The link to the YouTube video of "The Mother of All Demos" is dead. I found another one here: https://www.youtube.com/watch?v=yJDv-zdhzMY
The collaboration video https://www.youtube.com/watch?v=tXWkhLyQm4s especially

reminded me of The Mother of All Demos https://www.youtube.com/watch?v=yJDv-zdhzMY&feature=youtu.be...

We're still trying to get this out of the genetic soup stage.

CoreObject looks like a great stab at it with a great name!

I want to believe, but there's an elephant in the way.

GNU, Linux, the free BSDs, etc. have done a lot, but in the end they have only copied the work that came before them. You can see it from the surface down-- the GUIs are copies of Windows or Mac, and the OS kernel is a copy of AT&T Bell Labs Unix with ideas from other newer OSes.

I've come around to the view that the basement/garage hacker who changes the world is largely a myth. It's a very popular one and a very American individualist one, but I see few if any real world examples.

Most people around here have probably seen this:

https://www.youtube.com/watch?v=yJDv-zdhzMY

This wasn't the product of basement hackers. It was the product of enormous amounts of DARPA money used to pay a good number of very smart people to focus on the problems of human-computer interaction full time for many years.

I want to believe. Please show me the basement/indy hacker equivalent of that demo.

Sure, basement hackers can make innovations... a little here, a little there. They can also do an excellent job making copies of commercial/military stuff, and in some cases the copies are a bit better than the original. I'm not saying this work has no value. But it doesn't innovate much and it certainly doesn't invent. Invention is a hundred times harder than innovation.

IP sucks. It's a bad system. But consider what it is attempting to accomplish. It's trying to create a mechanism whereby really innovative work can really pay. It's attempting to reward investments in really pushing the line forward. In its present form it does a poor job of that, but that's the intent.

Edit: I think the same logic also applies to startups, but to a somewhat lesser extent. Startups do sometimes have money, and can therefore focus on problems and employ people. But they must get revenue fast, which hugely limits their ability to play and experiment. As a result startups can innovate -- improve existing things -- but they generally cannot invent. Most startups that attempt to invent run out of runway and crash in the weeds. Invention seems to require a financier who is willing to throw money away, such as DARPA, NSF, etc... traditionally either nation-states or really freaking huge companies with money to burn. Invention is also hard for startups because there is little to no way to monetize it. That's the problem that IP attempts but largely fails to solve.

PythonicAlpha

   It's attempting to reward investments in really pushing the line forward ...
No, I don't think so. IP was never meant to help innovative people. IP -- at least as it was created in the last decades (before, nobody talked about IP, but about patents and trademarks and ideas -- no IP) -- is a fighting term that was coined to create a new form of property: after things, and then land, now ideas too are made "own-able". Ideas are not property. How can they be? It is just a bad, bad word.

And when you say that the intent would be to help people with ideas, I cannot agree! It is something to make investors and owners happy! To make rich people happy and steal from those with the ideas.

api
It's an attempt to make ideas, inventions, etc. into something like physical goods that can be put on balance sheets like physical goods. Accountants like that, investors like that. But it doesn't seem to work very well.
PythonicAlpha
Exactly. Ideas don't fit into their system of wealth and money -- so they try to force them into the system. But they do not fit, and it only brings much trouble, since ideas cannot be measured. The patent system is awfully broken because of this, and it will go downhill more and more if nobody stops it (and it does not seem anyone will, because neo-capitalism is still going strong and has more might than all governments together).
fred_durst
IP is creating more assets. It's designed for the benefit of the asset class. You'll notice how, with IP becoming less powerful with open source software etc., there came the rise of SaaS, or Software as a Service.

The truth is that strong IP, and strong property rights in general, are part of a system where one profits via the ownership of things as opposed to goods and services provided. That's why the asset class needed to push so hard for it. Otherwise, they all might have to go back to working as their assets will no longer be able to do the work for them, so to speak.

So I wouldn't say that what IP is "attempting to accomplish" is to create invention. It's likely a lot closer to incentivising and further increasing the control the asset class has on our society. We can decide how to support and fund invention in many, many ways. IP is just there because we have decided it's the asset class's decision. And the asset class would never invest in anything that did not return more assets. So if we want to keep the system of asset class control over our society, we needed to come up with something that gave the asset class more assets for technological innovation, otherwise they'd probably just run up the price of real estate and natural resources to an unsustainable point.

euank
I find your assertion that basement hackers never "invent" silly. Heck, Google/PageRank started in a garage, as the saying goes. The demo you link is amazing and I'm not sure anyone has topped it in or out of a company since, but using that as proof that only companies and governments can invent is fallacious.

Many other statements you make are downright false or silly. For example, you say "the GUIs are copies of Windows or Mac" and yet I've never seen a powerful tiling window manager on those two OSs ... certainly not before I used them on Linux. There might be similarity between some WMs and others, but there are dozens that are fairly novel.

The "Basement / Garage hacker that changes the world" you say is a myth; and yet Google and Apple both have undeniably changed the world despite originating with basement hackers.

Furthermore, the industry has moved to open source where it's easier for a basement hacker to do something big. At the time that video was made, you would have to create practically everything from the ground up. There was no free code to build on. Anyone might have had an idea expressed in that video, but he or she would have had great difficulty implementing it in a basement due to lack of resources to make the whole stack.

Nowadays, if I have an idea, there's a good chance many components of it are already done in the open source sphere, and I can "stand on the shoulders of giants" and implement my idea with minimal work. The ability to create slow and steady progress and have such diverse bases on which to build, I think, makes the basement hacker far more possible. The jump from idea to working product is now much shorter in the software world than ever before.

Next, you say "basement hackers can innovate ... but it certainly doesn't invent". I honestly don't see the difference you're drawing between innovation and invention. innovate is the verb that creates inventions, the noun. I'm just not seeing what you're trying to do by creating that false dichotomy.

I also think you have the intent of "IP" a bit wrong. Patents are supposed to grant limited monopolies to people who have innovated. IP also covers trademarks and copyright which each have nothing to do with "innovative work" paying. By using IP there, you just fell into the trap of the article you're commenting on (I recommend reading it).

Furthermore, you say patents are meant to reward innovative work. That's simply false. Innovative work rewards itself. If you invent something truly innovative, you'll make money selling it. If patents were only meant to reward, they would have no requirement of disclosing sufficient details to reproduce the invention. What patents are meant to do is help the economy and industry as much as possible. A patent requires an inventor to disclose his invention and gives him a limited monopoly. The disclosure is meant to allow others to use his idea to create further inventions. The monopoly is to encourage people to disclose ideas at all, not merely hoard them as trade secrets.

"Invention is also hard for startups because there is little to no way to monetize it." ... The whole point of a useful invention is that it solves a problem in a novel way. Surely people with that problem will want to pay for the solution that didn't exist before?

vezzy-fnord
GNU, Linux, the free BSDs, etc. have done a lot, but in the end they have only copied the work that came before them. You can see it from the surface down-- the GUIs are copies of Windows or Mac, and the OS kernel is a copy of AT&T Bell Labs Unix with ideas from other newer OSes.

Really, the first thing you point out is the widget toolkits? Those really aren't the work of any of the projects you listed (with the exception of GNU and GTK+, though it's presently GNOME, which is still a subproject of GNU, but largely dominated by Red Hat).

There are tons of different toolkits available, which is truly the beauty of it. The bazaar choice. I'm not sure what you're expecting. It's not like the GUI model has changed much since its inception.

The fact of the matter is that a lot of proprietary operating systems owe significant debt to "basement hackers," particularly to BSDs due to their permissive licensing model. The TCP/IP stack on OS X, for one thing.

As for kernels, I can't help but be reminded of the Torvalds-Tanenbaum debate. Also, the Hurd is not monolithic and modular like the BSD and Linux kernels; it takes a multiserver microkernel approach.

That's an awesome chord keyboard - http://www.youtube.com/watch?v=yJDv-zdhzMY#t=2039 I've always wondered why more people don't use them. It seems significantly less prone to errors - instead of fat fingers or incorrect placement, you have to coordinate the timing between the fingers.

It's pretty amazing how much that video demonstrates. I wonder what the next version of that video will be. Hopefully it's not computer related.

EdiX
It wouldn't be very useful without NLS. It wasn't intended for text entry but to select commands to execute while the mouse selected the target for the commands, i.e. you would enter DW for "delete word" with the chorded keyboard and use the mouse to select the word to delete.

It disappeared because the people at Xerox Labs decided it was too hard for normal people to learn to interact with a computer in this way, so they replaced the chorded keyboard with on-screen buttons plus a set of keys on the left of the keyboard for the most used operations (Undo, Open, Copy, Paste, etc). Then Apple, to make things even friendlier, removed the extra buttons from the keyboard entirely and relegated the most frequently used operations to key combinations (Cmd-C, Cmd-X, Cmd-V, etc). You will notice that many shortcuts are relegated to the left side of the keyboard.

jerf
I'd like to try one, but there appears to be basically no such thing as a bluetooth-enabled one-hand chord keyboard that you are intended to hold in your hand (as opposed to having on a table), and that's the use case I'm interested in... an input device for my augmented reality glasses while I'm walking around. On that note, if they're ever going to make a... well... can't really make a comeback if you never made an appearance at all... an appearance, that's probably the scenario that will drive them. Voice may cover casual usage, but when you really need to go to town you're going to need something more, and no current input device can meet that need.
DanBC
Sadly I think neat keyboards are patent encumbered.

There have been things like the Frogpad, or the Twiddler. But they're expensive and not available anymore.

https://en.wikipedia.org/wiki/FrogPad

http://www.youtube.com/watch?v=ciQVBNHrKKA

I totally agree, I think their time has come and it'd be great if someone could rescue the tech from all these dead or dying companies and provide a decent chording keyboard for people on the move.

eru
Perhaps Morse code would make for a neat device, too.
kybernetikos
I've been looking at this recently too. I like the basic design of chordite http://chordite.com/

7 keys across four fingers, and then perhaps a trackpoint for the thumb.

In terms of a bluetooth gpio controller, there are a few options including a few that have been kickstarted recently (e.g. bleduino and rfduino).

mattstreet
I've always heard that type of input device called a twiddler, I found one when googling "bluetooth twiddler": http://www.handykey.com/

* I'm an idiot, that was a top link because they're using the name twiddler, but they don't seem to offer bluetooth.

jerf
I poked around Google quite a bit a few months ago. If you drop any one of my adjectives, you can come up with something, but the full gamut does not appear to exist.
Pxtl
Port 8pen to the PS3 navigation controller (one-handed joystick device)?
mcb3k
If you don't mind getting your hands dirty, you could probably make a prototype pretty easily. A battery, some buttons, and a Bluefruit[1] should be enough to make one. You would likely also need to write some kind of input translator (a custom Android keyboard or so?) to wrap/convert the values from straight key presses to your combinations (since you want a combination of key presses to give you just one character). 3D print a case, and you're probably all set to go.

I mean, it's likely easier said than done, and would take some time and effort, but it's probably also pretty do-able.

[1]http://www.adafruit.com/blog/2013/09/27/new-product-bluefrui...

vanderZwan
Since Arduino can be programmed to function as a keyboard[1], wouldn't that be an easier option? Then you can put the chording logic on the Arduino board and have it function like a plain USB keyboard otherwise as far as the computer is concerned[2] (that's kind of how the Makey Makey[3] works - it's also Arduino based).

[1] http://arduino.cc/en/Reference/KeyboardWrite

[2] http://store.arduino.cc/index.php?main_page=product_info&cPa...

[3] http://www.makeymakey.com/
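To sketch what that chording logic might look like (a toy, untested illustration of my own; the pin choices and chord table are made up): on an ATmega32U4 board (Leonardo/Micro), the stock Arduino Keyboard library can emit characters directly, and the chord logic is just "accumulate keys while any are held, emit on release".

    #include <Keyboard.h>

    // Hypothetical wiring: four chord keys on pins 2-5, active low.
    const int PINS[4] = {2, 3, 4, 5};

    // Toy chord table: 4-bit mask of pressed keys -> character.
    // (Engelbart's five-key keyset had 31 usable chords.)
    char chordToChar(int mask) {
        switch (mask) {
            case 0b0001: return 'a';
            case 0b0010: return 'b';
            case 0b0011: return 'c';
            case 0b0100: return 'd';
            default:     return 0;   // unmapped chord
        }
    }

    void setup() {
        for (int p : PINS) pinMode(p, INPUT_PULLUP);
        Keyboard.begin();
    }

    void loop() {
        static int held = 0;
        int now = 0;
        for (int i = 0; i < 4; i++)
            if (digitalRead(PINS[i]) == LOW) now |= (1 << i);

        held |= now;                   // accumulate the chord while keys are down
        if (now == 0 && held != 0) {   // emit once all keys are released
            char c = chordToChar(held);
            if (c) Keyboard.write(c);  // acts as a plain USB keyboard
            held = 0;
        }
        delay(5);                      // crude debounce
    }

Emitting on release (rather than on press) is what makes it a chord rather than a rollover: the mask only settles once you let go.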

mcb3k
Yeah, you should be able to do that as well. There are a lot of different ways you could make a project like this work; I just wanted to throw out an example to show that we're empowered to do these kinds of things now. We don't have to wait for some company to come out with a portable, bluetooth chording keyboard anymore; we can do these kinds of things ourselves. I think that is really exciting!
For me, even though it's PHP, ProcessWire consistently models data in a human way. I've put the out-of-the-box backend CMS in front of people who interact with computers very little and they instantly "get it". All data is simply a node in a hierarchy, and is easily movable and editable. I don't understand why more frameworks don't copy this basic structure.

If you watch "The mother of all demos"[1] the structure is almost visually 95% the same.

I look at Angular and it looks complicated - the article is about how complicated it is. Surely we should be striving for tools the everyday man can pick up and play with?

[1] - https://www.youtube.com/watch?v=yJDv-zdhzMY

jacques_chester
> I don't understand why more frameworks don't copy this basic structure.

Lots of problem domains are difficult to model as trees because often, a node can be logically assigned to multiple parents, but the tree model requires it be physically assigned to only one.

Recently I bought a prepaid SIM card for a trip I am making to the USA.

In my bookkeeping system, I could enter it either as a transaction against the "Telephone & Internet" expense account, or I could enter it against "International Travel". What I can't do, because trees require mutual exclusion, is enter it under both. That breaks the model.

If instead Renaissance accountants had understood sets and relations, I might be able to log it against both and then derive whichever view of the data was necessary.

And that's the problem with hierarchical data. It privileges one and only one view of the problem domain. As soon as you need some other view, you are in trouble. If your project management system organises by project, then getting per-staff reports is now much harder. If your class hierarchy views A->B as the natural order of creation, what happens when you come up with cases where C->A but not C->B? You can't express them: you have to introduce complicated workarounds.

Hierarchies are simple to understand on their face. But they quickly come apart when faced with the real world and ad hoc queries about the state of the real world.
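A minimal sketch of the alternative (a toy example of my own, not any real bookkeeping package): model the books as a relation from transactions to accounts, and each "view" becomes a query rather than a privileged physical layout.

    #include <iostream>
    #include <map>
    #include <set>
    #include <string>

    int main() {
        // Many-to-many relation: transaction -> accounts. A tree would
        // force exactly one parent; a relation allows both at once.
        std::map<std::string, std::set<std::string>> tagged = {
            {"Prepaid SIM card", {"Telephone & Internet", "International Travel"}},
            {"Flight to SFO",    {"International Travel"}},
        };

        // Derive whichever view you need, e.g. everything filed under
        // "International Travel":
        for (const auto& [txn, accounts] : tagged)
            if (accounts.count("International Travel"))
                std::cout << txn << '\n';
    }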

LBarret
Sets and relations are indeed the richest metamodel, but most of the time a DAG (directed acyclic graph) is enough: objects can be connected to many parents, but it keeps most of the hierarchical organisation, which is easier to model.
jacques_chester
What can be modelled as a graph can be modelled as sets and relations and vice versa; they're both equally powerful. But just as any two Turing-complete languages are equally powerful, the discussion doesn't end there.

It's still the same problem: you have mixed the logical model and physical structure of your data together. You are privileging one view of the data over all other views.

I consider switching to a full graph model a complicated workaround for the limitations of trees. You now introduce new and exciting paradoxes and you will need to litter your code with special cases (B means C, but only when A is not an ancestor, otherwise it means D). Ask C++ programmers about the joys of multiple inheritance.

In some cases the logical model is a graph and in those cases you should absolutely model it as a graph. But modelling all problems as a graph is inadvisable.

I'm pretty sure he's referring to 'The Mother of All Demos' by Engelbart, where he demonstrated live collaborative document editing among dozens of other ludicrously ahead-of-its-time software features. [1]

That said, I disagree with the larger point the OP is making. When you work in a technical field you tend to figure out the tool for a task, grin and bear through a finicky setup process, and then dismiss improvements that reduce that friction as trivial/unimportant, even though those improvements mean it's able to solve the problem for an exponentially larger market.

A classic example is rsync and dropbox. It's less pronounced here, but the more advanced web code editors get, the longer someone interested in the topic can mess around and try things out without setting up a local dev environment (multiple hours of totally unfun work if you're a novice), the more likely they are to stay interested and push through and get over that hump when the time comes.

[1] The video (100% worth watching): http://www.youtube.com/watch?v=yJDv-zdhzMY Wikipedia: http://en.m.wikipedia.org/wiki/The_Mother_of_All_Demos

Jul 05, 2013 · 1 points, 0 comments · submitted by AYBABTME
davewicket
PLEASE stop reposting this. We get it. We really do. We read HN.
Here is the full 100 minutes on YouTube: https://www.youtube.com/watch?v=yJDv-zdhzMY

EDIT - Mother Of All Demos - 18:57 Hyperlinks / 31:50 Mouse & VC / 45:59 Video Conference / 1:03:59 Collaborative Working / 1:12:58 Collaborative Working with VC / 1:34:09 Cross Country WAN / Internet Concepts (via youtube)

devindotcom
Yes, please use this link instead of the 11-part version that was uploaded before on YT. This one is better quality and of course all in one piece. Was coming here to link it myself.
I was just listening to one of Douglas Crockford's talks which discussed the contributions Engelbart made to computer science. What really stuck with me was the sheer number of new concepts he showed in his demo which, though made in 1968, showcased features not seen again for decades, some of which are still not effectively in use. If you haven't seen it, it's well worth the hour and forty minutes.

I think this is also a good moment to reflect on how incredible it is to be involved in a science which is still so much in its infancy that seminal figures in its development are still alive and well. Let us not forget their contributions, and most importantly, not overlook the concepts that, if employed today, can advance us far beyond where we are now.

Douglas Engelbart's Mother of All Demos: https://www.youtube.com/watch?v=yJDv-zdhzMY

Jul 03, 2013 · neona on The Demo
http://www.youtube.com/watch?v=yJDv-zdhzMY

^ Youtube link to the demo, as I can't get the video on the site to work.

EDIT: better quality version

Jul 03, 2013 · lominming on Doug Engelbart has died
Mother of All Demos: http://www.youtube.com/watch?v=yJDv-zdhzMY ... It is really amazing how he conceived all of this in 1968 (45 years ago!). If you think about it, what we have today is just a pretty UI on top of all these concepts. We have hardly moved beyond the innovation breakthroughs that he demoed.
Jul 03, 2013 · oasisbob on Doug Engelbart has died
Better quality, full: http://www.youtube.com/watch?v=yJDv-zdhzMY
Jul 03, 2013 · 6 points, 0 comments · submitted by sabalaba
Jul 03, 2013 · glhaynes on Doug Engelbart has died
If you've never watched the retrospectively-named "Mother of All Demos", it's an extraordinary document of current-day technology that was somehow done in 1968. There are parts that just seem "wrong" — as seemingly-anachronistic as if Don Draper used a PowerPoint. Yet, there it is. http://archive.org/details/XD300-23_68HighlightsAResearchCnt...

Edit: link had been to http://www.youtube.com/watch?v=yJDv-zdhzMY but a user below posted this archive.org link which is better. Thanks, mxfh!

oasisbob
In the demo video, there are some noises in the background, I'm assuming from his workstation -- rather pleasant and atmospheric. Anyone know what they are? Is it a transmission artifact? Feedback intended for the user (maybe related to the chorded keyboard)?

I've read several accounts of this presentation (one is in _What the Dormouse Said_), but don't recall an explanation.

tqh
It's not intended. It's clock and/or data signals that leak and have gotten amplified; if you look at it on an oscilloscope you will probably see a square or sawtooth wave at around 1 kHz. You might even be able to read that data...
bentcorner
I don't know, but as a guess it sounds like a transmission artifact from when his terminal redraws. Maybe some cross-talk between the terminal transmission line and the microphone feed?
glurgh
The account in What the Dormouse Said says the sound was intentional audio feedback:

"Engelbart referred to the on-screen cursor as a "bug" or a "tracking spot," and there were occasionally odd buzzing sounds in the background as he executed commands at the keyboard. The group had been experimenting with using the computer to generate different tones depending upon what was being executed, as a way of creating auditory feedback."

Looks like a pretty uniform buzz, too

http://i.imgur.com/BbaGsCu.png

sandyarmstrong
Thank you for posting, this version is much higher quality than the other youtube links.
mxfh
The version hosted on archive.org seems to have even fewer compression artifacts.

http://archive.org/details/XD300-23_68HighlightsAResearchCnt...

tfb
WOW! How have I never seen this before? 1968??? That is amazing. I'll have to come back and watch the entire video later this evening when I have the time.

I will say though that taking a look at it for just a few minutes is incredibly inspiring. It's a reminder that with enough passion and determination to push the boundaries of current technologies, anyone can become an innovator who sets in motion a wave of changes that affects the lives of everyone on this planet.

agumonkey
No offense, but that metaphor is still a euphemism.
mtrimpe
Another one of his hidden gems is the video where he discussed how they created a virtual tele-conferencing room by moulding CRT screens into the shape of each participant's head and projecting their face onto it.

It's the perfect example of a practical solution Doug invented decades ago that still hasn't been matched by today's technology.

P.S. I was never able to find the link after I saw it once, so if someone here knows where to find it; please let me know.

at-fates-hands
>>>It's the perfect example of a practical solution Doug invented decades ago that still hasn't been matched by today's technology.

I just told one of my co-workers about this and he nearly broke down crying. I asked him who he was and he went into the same example you gave. To say he was ahead of his time is an understatement.

Always sad to lose such a visionary. I'm just learning about some of these incredible people and their amazing contributions.

keithpeter
What is needed is a history project.

See Max Jammer's Conceptual Development of Quantum Mechanics as a model. He was interviewing the Old Ones before they passed on.

mindcrime
It would be amazing if someone could do more historical documentary type work on early pioneers in computing. I've learned a ton about Doug Engelbart since seeing this post about his passing, and I now regret not having studied his work more closely earlier. It also reminds me that there are many others who probably had / have insightful things to say that could be lost to the sands of time. Ted Nelson seems like an obvious candidate, but who wouldn't want to learn more about people like:

Herbert Simon[1]

Norbert Wiener[2]

Marvin Minsky[3]

Richard Greenblatt[4]

Bill Gosper[5]

John McCarthy[6]

Barbara Liskov[7]

J.C.R. Licklider[8]

Edit: and, of course, we can't neglect Vannevar Bush[9] or Claude Shannon[10] either.

etc...

[1]: http://en.wikipedia.org/wiki/Herbert_A._Simon

[2]: http://en.wikipedia.org/wiki/Norbert_Wiener

[3]: https://en.wikipedia.org/wiki/Marvin_Minsky

[4]: http://en.wikipedia.org/wiki/Richard_Greenblatt_(programmer)

[5]: http://en.wikipedia.org/wiki/Bill_Gosper

[6]: http://en.wikipedia.org/wiki/John_McCarthy_%28computer_scien...

[7]: http://en.wikipedia.org/wiki/Barbara_Liskov

[8]: http://en.wikipedia.org/wiki/J.C.R._Licklider

[9]: http://en.wikipedia.org/wiki/Vannevar_Bush

[10]: http://en.wikipedia.org/wiki/Claude_Shannon

at-fates-hands
This is such a great list, I just started frantically jotting some of these names down so I can start digging into their work.

Is there any particular order I should tackle these?

mindcrime
I don't know about order... they've all done some amazing stuff. FWIW, there's some interesting historical / biographical information on some of these folks in two books, both by Steven Levy:

Artificial Life

and

Hackers: Heroes Of The Computer Revolution

That said, the stuff Claude Shannon did basically established the field of Information Theory[1], which underpins huge swathes of our modern digital world. You could do worse than reading up on him.

[1]: http://en.wikipedia.org/wiki/Information_theory

Summary of the links shared here:

http://blip.tv/clojure/michael-fogus-the-macronomicon-597023...

http://blog.fogus.me/2011/11/15/the-macronomicon-slides/

http://boingboing.net/2011/12/28/linguistics-turing-complete...

http://businessofsoftware.org/2010/06/don-norman-at-business...

http://channel9.msdn.com/Events/GoingNative/GoingNative-2012...

http://channel9.msdn.com/Shows/Going+Deep/Expert-to-Expert-R...

http://en.wikipedia.org/wiki/Leonard_Susskind

http://en.wikipedia.org/wiki/Sketchpad

http://en.wikipedia.org/wiki/The_Mother_of_All_Demos

http://io9.com/watch-a-series-of-seven-brilliant-lectures-by...

http://libarynth.org/selfgol

http://mollyrocket.com/9438

https://github.com/PharkMillups/killer-talks

http://skillsmatter.com/podcast/java-jee/radical-simplicity/...

http://stufftohelpyouout.blogspot.com/2009/07/great-talk-on-...

https://www.destroyallsoftware.com/talks/wat

https://www.youtube.com/watch?v=0JXhJyTo5V8

https://www.youtube.com/watch?v=0SARbwvhupQ

https://www.youtube.com/watch?v=3kEfedtQVOY

https://www.youtube.com/watch?v=bx3KuE7UjGA

https://www.youtube.com/watch?v=EGeN2IC7N0Q

https://www.youtube.com/watch?v=o9pEzgHorH0

https://www.youtube.com/watch?v=oKg1hTOQXoY

https://www.youtube.com/watch?v=RlkCdM_f3p4

https://www.youtube.com/watch?v=TgmA48fILq8

https://www.youtube.com/watch?v=yL_-1d9OSdk

https://www.youtube.com/watch?v=ZTC_RxWN_xo

http://vimeo.com/10260548

http://vimeo.com/36579366

http://vimeo.com/5047563

http://vimeo.com/7088524

http://vimeo.com/9270320

http://vpri.org/html/writings.php

http://www.confreaks.com/videos/1071-cascadiaruby2012-therap...

http://www.confreaks.com/videos/759-rubymidwest2011-keynote-...

http://www.dailymotion.com/video/xf88b5_jean-pierre-serre-wr...

http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hic...

http://www.infoq.com/presentations/click-crash-course-modern...

http://www.infoq.com/presentations/miniKanren

http://www.infoq.com/presentations/Simple-Made-Easy

http://www.infoq.com/presentations/Thinking-Parallel-Program...

http://www.infoq.com/presentations/Value-Identity-State-Rich...

http://www.infoq.com/presentations/We-Really-Dont-Know-How-T...

http://www.mvcconf.com/videos

http://www.slideshare.net/fogus/the-macronomicon-10171952

http://www.slideshare.net/sriprasanna/introduction-to-cluste...

http://www.tele-task.de/archive/lecture/overview/5819/

http://www.tele-task.de/archive/video/flash/14029/

http://www.w3.org/DesignIssues/Principles.html

http://www.youtube.com/watch?v=4LG-RtcSYUQ

http://www.youtube.com/watch?v=4XpnKHJAok8

http://www.youtube.com/watch?v=5WXYw4J4QOU

http://www.youtube.com/watch?v=a1zDuOPkMSw

http://www.youtube.com/watch?v=aAb7hSCtvGw

http://www.youtube.com/watch?v=agw-wlHGi0E

http://www.youtube.com/watch?v=_ahvzDzKdB0

http://www.youtube.com/watch?v=at7viw2KXak

http://www.youtube.com/watch?v=bx3KuE7UjGA

http://www.youtube.com/watch?v=cidchWg74Y4

http://www.youtube.com/watch?v=EjaGktVQdNg

http://www.youtube.com/watch?v=et8xNAc2ic8

http://www.youtube.com/watch?v=hQVTIJBZook

http://www.youtube.com/watch?v=HxaD_trXwRE

http://www.youtube.com/watch?v=j3mhkYbznBk

http://www.youtube.com/watch?v=KTJs-0EInW8

http://www.youtube.com/watch?v=kXEgk1Hdze0

http://www.youtube.com/watch?v=M7kEpw1tn50

http://www.youtube.com/watch?v=mOZqRJzE8xg

http://www.youtube.com/watch?v=neI_Pj558CY

http://www.youtube.com/watch?v=nG66hIhUdEU

http://www.youtube.com/watch?v=NGFhc8R_uO4

http://www.youtube.com/watch?v=Nii1n8PYLrc

http://www.youtube.com/watch?v=NP9AIUT9nos

http://www.youtube.com/watch?v=OB-bdWKwXsU&playnext=...

http://www.youtube.com/watch?v=oCZMoY3q2uM

http://www.youtube.com/watch?v=oKg1hTOQXoY

http://www.youtube.com/watch?v=Own-89vxYF8

http://www.youtube.com/watch?v=PUv66718DII

http://www.youtube.com/watch?v=qlzM3zcd-lk

http://www.youtube.com/watch?v=tx082gDwGcM

http://www.youtube.com/watch?v=v7nfN4bOOQI

http://www.youtube.com/watch?v=Vt8jyPqsmxE

http://www.youtube.com/watch?v=vUf75_MlOnw

http://www.youtube.com/watch?v=yJDv-zdhzMY

http://www.youtube.com/watch?v=yjPBkvYh-ss

http://www.youtube.com/watch?v=YX3iRjKj7C0

http://www.youtube.com/watch?v=ZAf9HK16F-A

http://www.youtube.com/watch?v=ZDR433b0HJY

http://youtu.be/lQAV3bPOYHo

http://yuiblog.com/crockford/

ricardobeat
And here are them with titles + thumbnails:

http://bl.ocks.org/ricardobeat/raw/5343140/

waqas-
how awesome are you? thanks
Expez
Thank you so much for this!
X4
This is cool :) Btw. the first link was somehow (re)moved. The blip.tv link is now: http://www.youtube.com/watch?v=0JXhJyTo5V8
Doug Engelbart's mother of all demos (1968):

http://www.youtube.com/watch?v=yJDv-zdhzMY

This is the talk that, according to Wikipedia, included the first public demonstration of the following technologies: the computer mouse, video conferencing, teleconferencing, hypertext, word processing, hypermedia, object addressing and dynamic file linking, bootstrapping, and a collaborative real-time editor. Pretty good for one talk!

http://en.wikipedia.org/wiki/The_Mother_of_All_Demos

dgreensp
"And observe that if I forget to tell the computer to save my work, it loses it!"

furious scribbling in the audience as everyone takes notes

I'm so glad we're finally getting away from this paradigm after four decades. For example, the iPhone notepad (and now Mac TextEdit) doesn't wait for a cue from the user to write the few dozen bytes of new input to persistent storage.

PeterisP
After some incidents in my school years, I've noticed that I unconsciously tend to hit ctrl-s every few dozen keystrokes without even noticing it.

Good software doesn't lose anything ever, but even with "average" (i.e., unacceptably bad if you think about it critically) tools I've had sudden power cuts where I lost just a few words because of this habit.

lifeisstillgood
I realised I have passed some kind of milestone - I read that and thought "search"

Thank you :-)

stretchwithme
Totally necessary when the early PCs would just die on you because you pushed a key.
akanster
> After some incidents in my shool years, I've noticed that I unconciously tend to hit ctrl-s every few dozen keystrokes without even noticing it.

That is exactly how I learned too. Years later, ctrl-s is still a reflex action.

wink
Autosave without saving many undo states is equally dangerous.

Not saying the iPhone does anything in a bad way (I have no clue) - but I've hardly been bitten by losing unsaved work because I deliberately use ctrl-s very often.

ygra
That's why many people who advocate for automatic save (I first read about this in Alan Cooper's About Face 2.0) also advocate for unlimited undo (or at least something close to it): closing a document without saving is just a form of undo, while closing a document and saving it should be the default action.

This is also a place where the desktop metaphor (with paper documents) around which most WIMP systems are built exposes computer details and fails to stick to its metaphoric roots. When I scribble something on paper it's there and I don't have to consciously remember to somehow commit a transient state of my scribbling to paper explicitly.
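As a sketch of how cheaply those two ideas combine (my own toy example, not anything from About Face): if every edit is autosaved as a new state in an unbounded history, then "close without saving" really is just an undo.

    #include <iostream>
    #include <string>
    #include <vector>

    // Toy document: every edit is persisted implicitly, and undo walks
    // back through the unbounded history -- no "save?" dialog needed.
    class Document {
        std::vector<std::string> history{""};  // state 0: the empty doc
        size_t current = 0;
    public:
        void edit(const std::string& text) {
            history.resize(current + 1);  // drop any redo branch
            history.push_back(text);      // "autosave" the new state
            ++current;
        }
        void undo() { if (current > 0) --current; }
        void redo() { if (current + 1 < history.size()) ++current; }
        const std::string& view() const { return history[current]; }
    };

    int main() {
        Document d;
        d.edit("Hello");
        d.edit("Hello, world");
        d.undo();                       // discard the last edit...
        std::cout << d.view() << '\n';  // ...prints "Hello"
    }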

AustinLin
If you have not watched at least the first 30 minutes of Engelbart's talk, I would recommend you stop what you're doing and watch it right now.

In an hour and 40 minutes Engelbart demonstrates input via mouse, video conferencing and collaborative editing, among other things. This was before the internet, before UIs, before the idea of personal computing. Incidentally, the most profound thing about this demo is not the technology demonstrated but rather the introduction of the concept of personal computing. At a time when computers were reserved for number crunching, Engelbart envisioned a future where they would be a part of our daily lives, not controlling us but enhancing our abilities.

A quick side note: it is worth noting that while Engelbart was certainly the visionary, his partner Bill English brought Engelbart's visions into the world.

OK, as a non-coder, I'd recommend seeing the video of The Mother of All Demos:

http://youtu.be/yJDv-zdhzMY

I'd also suggest reading Ted Nelson's book, "Literary Machines."

Another book would be Steven Levy's "Hackers." And Paul Freiberger & Michael Swaine's book, "Fire in the Valley."

unimpressive
If you can find it, Computer Lib would probably be good too, though it's sort of just a prequel to Literary Machines.

(I myself failed at this task.)

What this really underscores is that conventional capitalism provides little incentive for fundamental innovation.

All this stuff we're gluing together is the product of state-funded research, mostly from the cold war era. ARPANET, the web (CERN), DARPA research on "augmented human intelligence," etc.

Stuff like: http://www.youtube.com/watch?v=yJDv-zdhzMY

(I bet you thought Xerox PARC invented that stuff, right? Or Apple? Nope. DARPA and SRI invented the entire modern user experience in the 1960s.)

Few companies ever fund that kind of thing. There's two reasons. One is risk vs. reward-- such projects are typically "high risk, high payoff" as DARPA likes to say. Most lead nowhere. The second reason is that there is no good mechanism for monetizing the result. Fundamental innovations are often too fundamental to patent effectively, and are easy to copy once understood. They're also often worthless in themselves. They are enablers of value that is built on top of them.

Fundamental innovation is a lot like infrastructure -- something else free market players seldom invest in.

This is a problem for today's generation of enthusiastic free-market proponents. Who is going to pay for the next generation of innovation?

It's also very unjust. Where were Douglas Engelbart's billions? Tim Berners-Lee, is he a billionaire? No, we have 17 year olds making quick millions gluing together the work of dozens of Ph.D's who will never see that kind of money in their lifetimes.

It's a big thing that turned me off the Ph.D path. I don't feel like taking a vow of poverty and toiling in the dungeons to develop some fundamental, difficult concept that someone else then takes, glues to something else, flips, and gets rich.

rayiner
People don't really realize it, but the mid 1950's through mid 1970's, besides being the golden age of the American economy, was also the golden age of American innovation, and much of it was bankrolled by DOD money (SRI is, of course, a major defense contractor). Not only was the infrastructure around us mostly built during that time (the highways, power plants, etc) but most concepts in computing: programming languages, AI, GUIs, databases, etc.

Over time I become more and more convinced that the market doesn't produce (much) real innovation. If you look at the stuff that really matters, it's come from one of three sources:

1) Government research labs (NASA) or government funded defense contractors (Lockheed Martin, SRI, BBN, etc)

2) Government funded universities (MIT and its $1 billion in defense contracts each year)

3) Private companies that have either monopolies (AT&T Bell Labs) or massive market power with entrenched cash-cow products (IBM, Xerox PARC).

Look at innovation that's happening now. Where is it happening? Self-driving cars got a big boost from DARPA in the early 2000's (DARPA Grand Challenge), and is now being funded by Google (which has a ton of cash from its deeply entrenched position in search).

tomrod
> Over time I become more and more convinced that the market doesn't produce (much) real innovation.

This is something I've researched a bit in the past. It ends up being a case of semantics. The market produces plenty of innovation but not much basic research; whether it encourages invention is ambiguous.

These three terms (innovation, invention, discovery) are well-defined in the academic literature; typically innovation is the only act of progress that can be monetized. Innovation occurs both on a technical level (hey, this transistor doo-hickey can be put in microchips!) and procedural (hey, organizing in an assembly line increases the speed with which we can make model T's!). Invention is creating something new from known principles, innovation is applying an invention, and discovery is finding previously unknown principles.

snowwrestler
I think that undervalues the innovation that goes into taking a new technology and turning it into a widely available, affordable product. That is not easy, and without that step, no invention is going to change the world very much.

I also think it undervalues the role of the market in creating the capital to fund these activities, and the many different companies and universities (many private) who compete to spend that money in innovative ways.

By comparison, the Soviet Union centrally funded their core research too, but were not able to keep up technologically with market economies like the U.S. or western Europe.

rayiner
I'm not really disagreeing with you, at least in the sense that I do agree that market economies create wealth and wealth is a necessary pre-requisite for funding research. What I think I'm getting at is that I disagree with the common trope that market competition creates fundamental innovation. I think market competition makes companies slaves to the bottom line, fighting tooth and nail in the dirt just to survive. To the extent that the market creates innovation, it does so in companies well-insulated from market competition (monopolies, oligopolies, etc). The decline of Hewlett Packard is a great example. When they became just another hardware company, competing on price in printers, PC's, etc, they ceased to be innovative.

Interesting reading: http://www.slate.com/articles/technology/technology/features.... You should really buy a copy of Tim Wu's book to appreciate Vail's line of thought.

rollo_tommasi
If those PhDs are really so smart, they should have had the foresight to be born to Morgan Stanley executives, like the Summly kid.
xtracto
When I was about to start my PhD, a professor and good friend of mine (who already had one) told me: "The people who finish a PhD are not the most intelligent, but the most perseverant".
dmpk2k
Some people don't live their lives to interact with high-power investors, but contribute vastly more value to society than people who do live their lives to do so.
MisterWebz
I don't think I have ever seen so many people on HN be so envious of someone.
rayiner
When people complain about overpaid football players, or overpaid Wall Street executives, etc, is "envy" the first word that really comes to your mind?
snowwrestler
How many financial executives are there in the U.S.? Should we assume that each of their children founded a company and exited profitably by the age of 17?

Let's not take too much away from what this person accomplished.

dsfasfasf
Because I'm feeling a bit (just a tiny bit, I feel I'm almost beyond that) envious I almost want to celebrate your comment. Although there is a lot of truth in this, we have to admit that even kids born into riches are not that much more likely to be a world success. Because unless you have the will/desire to succeed you probably won't get that far. (Going to an ivy league school and working a nice cushy job is not enough in my book)

Granted, some people have been put closer to the finish line than most of us, but I'd rather not take that as an excuse to explain away my lack of success.

p.s. I grew up in NYC in a working class poor immigrant family. The fact that I grew up in NYC is all the head start I needed. So far I've made it to graduate school. There is still a long road ahead. No use complaining about which family I was born into.

rhizome
we have to admit that even kids born into riches are not that much more likely to be a world success.

They are MUCH less likely to wind up bleeding out on a streetcorner in the Tenderloin or creaking along on subsistence wages. It's rarely about "world success."

rayiner
Define "success." Being born into a rich family and going to an Ivy league school won't get you an instant $30m in the bank, but unless the apple really falls far from the tree, you won't have to try very hard to get into the top 5% in a country where even below-median people live pretty well. Not a bad ROI on simply choosing your parents well.
DilipJ
that's not really fair to the kid. All of us are born with advantages that others lack. Every person living in America has access to wealth and connections that the average African can only dream of. That shouldn't take away from anyone's success here or diminish any of our accomplishments
ruswick
This is logically incoherent. The fact that a larger disparity exists between the ultra-wealthy in the West and impoverished people in the third world (though, to be fair, many parts of the West are beginning to resemble the third world) does not invalidate the point that inequality also exists within the first world.

Just because impoverished Africans can't get Ashton Kutcher to invest in their startup doesn't mean that you or I can.

nitrogen
The "appeal to Africa" fallacy: comparing one's problems to problems that are worse does absolutely nothing to address either set of problems.
MartinCron
Most of the people who don't succeed financially made a terrible choice of who their parents were. It's probably the single most important choice one can make.
kaonashi
Why should we be rewarding such poor decision making? Whatever happened to personal responsibility?
thom
CHAIRMAN: Yes, and, and, and the wheel. What about this wheel thingy? Sounds a terribly interesting project to me.

MARKETING GIRL: Er, yeah, well we’re having a little, er, difficulty here…

FORD: Difficulty?! It’s the single simplest machine in the entire universe!

MARKETING GIRL: Well alright mister wise guy, if you’re so clever you tell us what colour it should be!

FORD: Oh Mighty Zarquon! Has no-one done anything?

alanh
For those who don’t recognize this, it is a quote from The Hitch-Hiker’s Guide to the Galaxy [1]. The context is: Ford is an (intelligent) hitch-hiker and CHAIRMAN and MARKETING GIRL are parts of the "useless third" of the population of Golgafrincham (basically, middle managers and the like), who were exiled from the planet entirely. (Perceived sexism attributable to Douglas Adams and/or the fact this was a radio broadcast, so everyone has an assigned gender)

[1]: http://www.clivebanks.co.uk/THHGTTG/THHGTTGradio6.htm

scythe
I think you're being a little unfair. I work in quantum computing. I'll probably work in the public sector for most of my life. I have every incentive to plug government-funded research, and oh by the way, you guys all need to vote for politicians who will increase my puny grad student stipend.

Researchers in the private sector publish a healthy amount of papers in peer-reviewed journals. The elephant in the room is drug design. Many of the drugs on the market today were indeed developed and funded by pharmaceutical companies; yes, pharma companies do occasionally push bullshit products. But imatinib works. It's a trillion-dollar industry for a reason.

As an aside, academia probably shouldn't be lumped under the banner of "state" -- the independence of academia helps it succeed.

>This is a problem for today's generation of enthusiastic free-market proponents. Who is going to pay for the next generation of innovation?

Take a deep breath. The bottom line is this: scientific research is cheap.

The US spends a paltry portion of its budget on research. Hell, I think they even spend more on education. Funny, right? So in the context of the great economic circus, publicly funding scientific research affects the "freeness" of the market about as much as publicly funding, say, the police. Even Ron Paul wasn't looking to make very large cuts to research.

And that's another reason you should vote for whoever is going to increase my salary: it will hardly cost you anything.

michaelochurch
I think you're being a little unfair. I work in quantum computing. I'll probably work in the public sector for most of my life. I have every incentive to plug government-funded research, and oh by the way, you guys all need to vote for politicians who will increase my puny grad student stipend.

Quantum computing is fucking cool. Very intellectually challenging stuff. Props for that.

I agree on public funding for basic research. Politicians who say "these guys aren't earning their keep" are idiots, and we're worse (as a populace) for not firing these assholes. The mechanism is right there, it's unambiguously legal. Let's fucking use it. Anyway, how is a PhD who makes $85,000 per year while advancing the state of science, who is giving all the work away for the public good, not earning her keep? It makes no damn sense.

Researchers in the private sector publish a healthy amount of papers in peer-reviewed journals. The elephant in the room is drug design. Many of the drugs on the market today were indeed developed and funded by pharmaceutical companies; yes, pharma companies do occasionally push bullshit products. But imatinib works. It's a trillion-dollar industry for a reason.

My issue with the drug industry is that the profit motive seems to be generating 20 variations on the same theme (e.g. statins) and underfunding a lot of greater advancements.

> The US spends a paltry portion of its budget on research

Damn right. It's pathetic. We spend more on this "war on terror" in one year than on cancer research in 50. Yet cancer kills orders of magnitude more people than terrorism.

notaddicted
If you aren't satisfied with the way the government is being run, is it a failure of Representative Democracy? You can petition your representative. If you still aren't satisfied you can cast your vote for another representative.

If you aren't satisfied with the way your capital is being allocated, is it a failure of Capitalism? You can petition your board members. If you still aren't satisfied you can vote for new board members in the next election. If you still aren't satisfied you can sell -- in Yahoo's case many shareholders have chosen this option since Microsoft's $31/share offer. If you want to allocate other people's capital, see the previous paragraph.

Leaving aside the notion of justice, since it is a Difficult Philosophical Issue ♪, to me the only thing that this underscores is that Yahoo is a poor allocator of capital.

http://plato.stanford.edu/entries/equality/#PriEquJus

blablabla123
>What this really underscores is that conventional capitalism provides little incentive for fundamental innovation.

No, what this really underscores is a fundamental shift toward the equality our civilization has sought for centuries: taking people seriously regardless of gender, race or, in this case, age. Many people have good reason to be jealous that this was not possible when they were so young.

When I was that age I hated the fact that only old people were taken seriously when it came to business. We can be happy that this is now possible, and take it as a further sign that age discrimination is on its way to being eradicated. This case is about discrimination against young people, but old people are discriminated against too.

mason240
This tired bit again?

Military research may have done early development on networking protocols, but it was capitalism and free-market principles that took it from "something that could have potential" to "humanity's greatest creation."

Saying that government created the internet would be like giving government credit for creating cars because, while making new weapons, it accidentally made the wheel.

robomartin
First: I come in peace. So, please, don't turn this into a mud-slinging competition.

Also, because of the ambiguity that exists in the English language I should clarify that this is addressed to the general HN audience rather than to api. In other words, when I use "you" it is generally meant as the plural "you".

I have a genuine problem understanding where anti-capitalism sentiment, particularly in HN, comes from. I also have a huge problem with pro-government sentiment bordering on socialism and even going as far as having a communist undertone. I keep seeing this come up again and again on HN and I just don't get it. I don't understand it.

So, this is me asking for help. Why do you (plural) think this way?

Again, please, I am looking for a respectful conversation. This is about you, not me. I want to understand you. I have lots of questions. Some might sound dumb and maybe even off-base. Go with me, if you will, humor me, and see if you can help me --and others-- understand where you are coming from.

How far back in human history do we have to go before you would concede that we don't owe what existed at that point in time to state-funded research? Fire? Hunting? The wheel? Agriculture? Making clothes? Making boats? The domestication of animals? Medieval times? The Industrial Revolution?

And, when feudal lords enslaved the population, was that population to be thankful because of the developments that came from such "state-funded" efforts?

Communism funded lots of things. Not sure anything of use to the world at large came out of any communist country. Do you think communism is better than capitalism? Why? How about socialist ideas? Better than capitalism?

The Nazis funded a lot of medical research in the context of killing millions of Jews. Are you suggesting that we all benefit from that today? Are there a few million people somewhere in the world you would be willing to kill today in order to do some state-funded research that would benefit us all?

I know this is a ridiculous and grotesque question to ask, but it is a valid question. If you are deriving any benefit whatsoever from what the Nazis did, then you either have to reject all technology derived from that research --even if it costs your own life or that of your loved ones-- or be willing to do the same in the name of progress. I am not proposing, for even an instant, that what they did was acceptable.

Where do you stop singing the praises of state-funded research? How many people did ARPANET help kill? If we owe the modern Internet to all the military programs between 1960 and 199x or so, and you recognize this as good, what's the difference between that and the various Nazi programs? Killing is killing, isn't it?

Well, virtually all state-funded research is motivated by war. War means killing people. Sometimes by the millions. Certainly by the thousands. And, in that regard --in terms of the mass killings-- it is no different from genocide.

How about nuclear power? A great result of state-funded research? Killed millions.

So, in embracing your belief system, do we ever fault governments, or do we always look at the rosy side of state-funded research and ignore the ugly parts? As I asked before, how many people did ARPANET help kill? Is that ever a consideration? I am not sure CompuServe and other civilian BBSes helped kill nearly as many people --if any-- as the various means of communication resulting from state-funded research.

When I was younger the realities of war never really hit my radar. As I got older it really started to turn my stomach. And that's why I developed this idea that we should limit our government to throwing fancy dinners for visiting dignitaries and pull ALL non-essential funding from their hands. All they do is kill people, one way or the other. The research would be done far better, cheaper, faster, and for less violent purposes in the private sector.

I prefer capitalism and profit-driven entrepreneurship to war-driven, state-sponsored research whose aim is to kill millions. Do you see the difference? How does that align with your pro-state belief system?

That aside, on the assumption that a rare state-funded project produces something of note, for how many decades or centuries should we be thankful?

Do advancements ever become part of human culture and intellectual property upon which we can build without being labeled as having taken advantage of those before us?

Are you of this opinion because you have studied the subject or because you were indoctrinated into this manner of thinking at school or at home?

It's an honest question. Please don't be offended by it.

As an example, most religious folks were indoctrinated into their religion. I don't know anyone who grew up without any kind of religion shoveled at them as a kid, only to choose one independently as an adult.

It is a fair question: Are you pro state/government/communism/socialism because you chose to believe this way or because the thoughts were shoveled into your head? Can you remove yourself from your mind far enough to even make that evaluation?

All forms of governments, from democratic to communist seem to be great at devising ways to kill people. If WW1 and WW2 --with over a hundred million people dead-- did not prove that, I don't know what would.

It is also interesting to note that governments seem to need war as a motivator to invent anything. Why? Is that the only goal they understand?

Can you reconcile your "more government is good" view of the world with the fact that governments --not people-- have started all known wars and that they have caused the death of hundreds of millions of people throughout history? People don't start wars. Governments do.

I sincerely reject the idea that "All this stuff we're gluing together is the product of state-funded research". There are countless innovations that originated far, far away from government. Too many examples to list, but such things as flying machines, the telegraph and the light bulb come to mind.

Now, it is natural for entrepreneurs to look for markets for their inventions. Governments are flush with cash. In the US, if you can mold your idea or invention into something that can be of benefit to our mighty armed forces you can get tons of money for further development and even hugely profitable contracts through programs such as SBIR.

Entrepreneurs look for markets where they can make money. This means that while I am sure someone is going to claim that aviation was funded by government, I will propose the opposite: Aviation entrepreneurs went where the money was: Military contracts.

I could go on, but it is time to listen. There are a lot of questions above. I ask you to think, really think about them, and then give me your perspective. And, again, I mean this with the utmost respect. I truly want to understand. I have seen enough pro-socialist/communist/government/state posts on HN to give me pause. I don't know if these are coming from outside the US (or outside democratic/capitalistic cultures) or from within. Don't know. Please, help me understand you.

manys
Why should we care whether you understand, especially when you attempt to hijack a thread, demanding that people satisfy your curiosity?
Dewie
> As an example, most religions folks were indoctrinated into their religion. I don't know anyone who grew up without any kind of religion shoveled at them as a kid, only to choose one independently as an adult. It is a fair question: Are you pro state/government/communism/socialism because you chose to believe this way or because the thoughts were shoveled into your head? Can you remove yourself from your mind far enough to even make that evaluation?

So what is your background?

robomartin
Attended private and public schools (nearly a 50/50 split). The private school was not quite religious, but a Christian church was part of the campus and some of our teachers were priests. Became an atheist somewhere in college. Highly entrepreneurial family. Have been an entrepreneur myself since probably high school. Started and ran half a dozen businesses with my own money. Also worked in industry (as an employee) at various levels (Junior Engineer to CTO) for about twenty years.

Is that what you needed to know?

Dewie
I was referring more to where you grew up, or other things that might be correlated with laying the groundwork for a person's political outlook/philosophy. Sorry for not being specific.
shantanubala
I think you took his comment a bit too far: the point is that the government is an institution that is meant to mitigate risks that cannot efficiently be dealt with otherwise.

The risk of going to the moon before anyone else is such a risk (political motivations aside).

Stating that the government can be an effective tool for mitigating risks associated with research is not an endorsement of everything the government does. This is not an endorsement of everything governments have done in the past. This isn't even an endorsement of government at all in its current form.

It is a statement of what our government ought to be doing more of. It is a statement that our current system has disproportionately distributed rewards -- aren't we supposed to reward those who create the most value?

robomartin
OK, I grant you that. I did say I wasn't aiming my post at him, though. I've just seen a --in my eyes at least-- seemingly blind pro-government, pro-public-sector bent on HN that always seems to float to the surface, and I simply don't understand it. I'm not fighting it at this point because I clearly don't understand it. I have to understand it first before I can go past that. Not a molecule in my body thinks this way, so I need help.

As for the distribution of rewards: the market decides that. Obviously the market --the average consumer-- does not think government is producing enough value, or they would be flush with cash. If government produced real value we would be throwing money at them, me included.

With regard to the question of risky or very, very long-term research:

Do we really believe government is good at this at all? I don't really see it that way. If you compare the evolution of technologies in private hands vs. that of publicly funded programs, what are the results?

Perhaps the best comparison is to compare efforts in the old Soviet Union with efforts in similar industries in the US. For all their might, the Soviets couldn't make a car worth a damn. During the same period in the US a multitude of private companies produced design after design and evolved solutions that were ages better than anything coming out of the Soviets.

Even Igor Sikorsky ultimately had to emigrate to the US in order to have his helicopter designs grow out of private efforts and evolve as they have. Little known fact: Composer Sergei Rachmaninov funded Sikorsky to the tune of $5,000 to help him launch his company.

Now, of course, as any good entrepreneur would do, you look for where you can sell your products. If government wants to buy you are not going to say "no". And, if government wants to throw more money at you to build other products for them you are going to follow suit. A lot had to happen before government could shovel money at Sikorsky. Other similar stories abound.

Then there's the question of whether or not government is actually equipped to make truly far-reaching decisions. Few decisions are really made with a clear view of what the future outcome might be. Why did we go to the moon? It was part of an arms race with the Soviets. Not much more than that. Again, war. I have a very fundamental problem with this idea of doing all of these things and spending all of this money to be better at war-making. It really stinks.

Was the lunar program truly worth the investment? Could we have produced similar or better results through other programs?

I am watching companies like SpaceX with great interest. The drive, focus and priorities private enterprise has, when combined with people hell-bent to make it happen, cannot be paralleled by any organization assembled by government. The mindset is massively different.

We could point at side benefits of technologies like GPS. Great stuff, right? Again, what was the motivator? Military. War. More efficient killing. Yuk! The civilian use of GPS was never a part of the program or the driving motivators.

And so, nearly all "good" technology that has come out of any government effort is almost always linked to military needs. If there are no military needs government either does not do it or they fail miserably.

This is the aspect of the whole pro-government, pro-public-sector, pro-state-funded mentality I am not getting. You can't point at ARPANET as a state-funded research success without pointing at the thousands or millions (who knows) of people it was surely responsible for helping kill. One goes with the other. There is no dividing line. The state did not initiate these programs to help milk cows or to help us buy books online. They launched and funded these programs to create better killing systems for the wars they need to conduct.

I know I am harping on the military connection. I am eager to have someone provide me with a list of state-funded technologies that DO NOT have their genesis in a fundamental military need. I really can't think of one off the top of my head. Not one.

And so, being pro-government/public-sector/state and singing the praises of all this wonderful technology we should be so thankful for is tantamount to being thankful for, and supporting, all of the military programs and wars that inspire them. If you elevate what we have received from these government programs and ignore the wars and killing that brought the technologies into existence, you are being a hypocrite. I sincerely doubt that most of the folks who express pro-state views on HN are war-mongers.

And that's when my brain short circuits and I just don't get it.

I'd love to understand.

nisa
> I'd love to understand.

In my biased opinion, I'd suggest you study western and northern European democracies. The government provides essential services (decentralized down to the district level) like education, water, streets, police, health care, and social security for its citizens.

I'd rather have clean water, unbiased police and justice, and free education. Every private company needs to maximize its own profit. The government does not need to do this. It can subsidize important but unprofitable projects. This can be and has been made efficient.

A private prison company has no interest in reducing crime. There is also not much reason for a private rail company to invest in infrastructure that is not profitable. But as a citizen I have an interest in using an efficient train.

It's not black and white. The private sector is very good at a lot of things. But there are certain other things that are natural monopolies, or so important for the functioning of society, that in my opinion they are best served by an efficient government.

The private sector and capitalism also do not always work in your interest -- Adam Curtis from the BBC (also government :) made a great documentary about certain effects of free-market radicalism: http://en.wikipedia.org/wiki/The_Mayfair_Set

1123581321
I wouldn't look to a private prison company to reduce crime because that's not the need it is pursuing. The commercial war on crime tends to be waged by associations that roughly map to where and how crimes are committed, and by chambers of commerce. These organizations are the pooled efforts of businesses to fight common problems.

A private rail company might decide to lay down unprofitable track to massively grow demand and habitual preference for rail travel. Or, it might stick to more profitable tracks and in exchange not have to pass on the costs of disparate routes to the customers in the high-density areas.

robomartin
> The government provides essential services (decentralized down to the district level) like education, water, streets, police, health care, and social security for its citizens.

I wonder if herein lies the fundamental difference. I would never put it the way you have.

The people form the government and pay them to administer infrastructure and services for them. That is massively different from the "government does for its people" view. One is almost a royals-and-subjects view, while the other says, well, government of the people, by the people, and for the people.

In my model we hire the government to serve us. They are nothing more than our employees.

The other data point I have is that as a youngster my family spent quite a number of years in Argentina. Monkeys would govern that country better than nearly any administration they have had to endure. I have followed their politics on and off over the years. To this day they continue to be raped and pillaged by their government. The only way you can characterize them is thugs, thieves and gangsters. It is quite possible that seeing some of the things I saw there planted the seeds for not seeing government as part of the solution as an adult. I mean, look at Cyprus.

gizzlon
If you really want to understand, do some reading. Different (conflicting) economic theories, history, philosophy...

It's quite easy to broaden one's perspective, if that's actually the goal.

hazov
I don't believe he went too far; I think he is just trolling. As someone who lost a close friend in a war, and whose grandparents lost relatives to the Soviets and the Nazis, I think he's just talking bullshit, trying to advance the old Randian Gospel and its false dilemma of calling everyone Socialists.

I was going to answer him, but I could not believe he's doing this to listen, as he claims. I also would not have been able to hold back my anger.

robomartin
Well, considering that my family lost quite a few members to genocide, I think I am entitled to detest war. And, no, I am not trolling. You shouldn't be angry at all. This is just a conversation.

Pointing out that the vast majority of the technology stemming from state-funded research came from military programs is nothing less than the truth.

hazov
Ok, I'll give you the benefit of the doubt. I'm not angry anymore, and I realize it was stupid to be in the first place.

I won't enter into a discussion with you here and now. Politically I'm very pragmatic, although I'm pretty Conservative (as a Tory used to be some years ago). This post brought back some memories (of my friend) that are uncomfortable in certain situations, and with due respect to my grandparents, I'll pass on this one.

BCM43
> It's also very unjust. Where were Douglas Engelbart's billions? Tim Berners-Lee, is he a billionaire? No, we have 17 year olds making quick millions gluing together the work of dozens of Ph.D's who will never see that kind of money in their lifetimes.

I'm not sure this is fair. Look at the number of entrepreneurs who actually succeed, then look at the positions that people with a CS Ph.D. actually end up in. On average (using the median), I'm willing to bet those with a CS Ph.D. do a lot better than your average startup founder. We just hear about the success stories.

All of this also ignores the huge benefits that come with becoming a professor, if you are able to. The kind of pay, job security, and benefits that come with the MIT professorship that Berners-Lee has are enormously beneficial to one's peace of mind.

Mar 07, 2013 · 3 points, 1 comments · submitted by nkuttler
nkuttler
In the video Engelbart demos the computer mouse, word processing, hypertext, dynamic linking, collaborative real-time editing and other ideas that are still in use today.