HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Founder School Session: The Future Doesn't Have to Be Incremental

democonf · Youtube · 249 HN points · 16 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention democonf's video "Founder School Session: The Future Doesn't Have to Be Incremental".
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
No, it's not about personal preference. It is about what is (im)possible when the only context to be considered is the status quo. The technologies you list are innovative but in the end just iterations of inventions of the 1970s. The 70s were so special because the stuff that happened then was the result of increased spending on science and research in the 60s after the Sputnik crisis [1]. Today's R&D is not focused on what could be possible in 10, 20, or 30 years but in 1, 2, or 3 quarters, since the companies funding the research need to be focused on their bottom line. The Eve project was special insofar as it received funding not tied to short-term goals – but only to an amount that in the end wasn't enough. Similar things happened elsewhere, and this bars the way to more meaningful inventions [2].

[1] The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal. Mitchell Waldrop.

[2] Founder School Session: The Future Doesn't Have to Be Incremental https://www.youtube.com/watch?v=gTAghAJcO1o

I'd like to argue something contrarian in this thread: you could be on the right path. Some facts to keep in mind:

- There are on the order of 20,000 new startups each year; most of the other people in this thread are from this pool.

- Only 2-5 of those 20,000 startups will end up mattering (i.e., will reach a high enough valuation/impact on the world to move a needle).

- Of these 2-5 startups that matter per year, almost none of them were pivots (Twitter and LoudCloud are the only examples I can think of).

- The conventional wisdom is to take an incremental idea and run with it; "ideas don't matter and execution is everything"; run sprints and use the Lean Startup methodology to pull the market out of an idea (or pivot), and reach a local maximum.

- This will only get you the result that 20,000 startups a year typically get: incrementalism, small victories, and/or failure.

- There's also a lottery-ticket mentality to this conventional style of thinking: you're no different from anyone else, so stop ideating and start doing, and maybe you'll get lucky.

If you've been at it for a year and haven't found anything good enough to give your life to, it could mean you are different than the average HNer. It could mean you have a calling to think bigger than most people do.

Here is some material you may be interested in which bolsters this viewpoint:

- Alan Kay's "The Future Doesn't Have to be Incremental": https://www.youtube.com/watch?v=gTAghAJcO1o&t=1830s

- Peter Thiel's book Zero to One. Among other things, this book discusses the importance of avoiding competition and commoditized markets (unless you have something 10x better; e.g., Google's PageRank).

- Listen to interviews with Elon Musk, who claims he started by thinking about HUGE problems that humanity faces and worked backwards. Instead of thinking incrementally, Musk thinks extremely large and seems to plan for years in advance: https://www.tesla.com/blog/secret-tesla-motors-master-plan-j...

At some point, of course, you have to interact with reality, and that could involve interacting with negative people on HN (and especially real customers, etc). When Reid Hoffman started LinkedIn, he talked with dozens of people in his network and more than 2/3 said negative things about his idea. It's part of the process of getting calibrated with reality. But this shouldn't be your starting point; it should be what you do after you find something that actually moves you.

Finally, I want to say that a friend and I were in a similar boat to you and your friends several months ago: we had been reading about technologies and problems for months and had rejected incremental ideas that didn't move us. It looked like we had just fucked around for several months and had nothing to show for it.

We ended up working on an experimental VR Window Manager as an attempt to bring Linux to a new computing platform (i.e., VR): https://github.com/SimulaVR/Simula

Here was our rationale for this idea when we started: https://gist.github.com/georgewsinger/46ea8d6b16fb8bf3249358...

It's very much in the spirit of the advice I just gave you. Lord knows we could be wrong, but one problem we don't have is in doubting whether our project could matter if we pulled it off.

Good luck.

>He thinks that you can just design something nice from whole cloth and people will use it.

Because that's exactly what they did in Xerox Park, many times over.

>That's why his designs aren't deployed.

No comment.

>But he's basically confusing research and engineering, as if engineering wasn't even a thing, and you can just come up with stuff and have people use it because it's good.

Kay has many talks about the difference between invention and innovation (which are much better terms than the ones you're using). In fact, his analysis of this difference is probably the most insightful and thought-provoking technology talk I have ever seen:

https://www.youtube.com/watch?v=gTAghAJcO1o

Of course, this subject makes a lot of developers highly uncomfortable, hence a lot of shallow, ignorant, knee-jerk dismissals. "Everything is incremental." "Everything is the only way it could be." "This is fine." And so on. Thing is, Kay worked at Xerox and Apple. He read a myriad of books and research papers on computing, which he constantly references in his talks and writings. He worked and continues to work with some of the most forward-thinking people in the field of computing. In the late eighties he foresaw most of the current computing trends - which is verifiable via YouTube. Even without any context his talks display a considerable depth of thought. In short: unlike some people, he actually knows what he is talking about.

>The point is that it couldn't have been any different. It wasn't designed; it was evolved.

And that is why someone who designed it just received a Turing award. Makes perfect sense.

Edit: Regarding your other comment here.

>If the web is a genius for hypertext, but not for app delivery, then he should have just said so. That is not a very hard sentiment to express. "The Web was done by Amateurs" doesn't capture it.

He has several decades worth of talks and writing. If you haven't bothered to familiarize yourself with at least some of them to understand what he means, it's your own fault.

gambler
Edit 2: I meant, of course, Xerox PARC.
Alan Kay spoke about it at length in many of his talks.

For example: https://www.youtube.com/watch?v=gTAghAJcO1o

Excellent video. I highly recommend that everyone working in or with IT see it.

I think he is right.

PCs and smartphones are mostly recreating things from the past, and not the best things either. However, this isn't because we've invented everything there is to invent. It has more to do with how research in computing is currently funded and how companies approach invention and innovation.

deepnotderp
Making things smaller and more power-efficient IS invention; it's what has driven so much tech progress. Even supercomputers are a power problem now...
romaniv
You probably should watch that video. This is one of its core subjects. Kay makes a valuable distinction between invention and innovation, and what you describe falls into the "innovation" bucket. It's incremental. Yes, our cellphones are more powerful than the mainframes of old, but do we utilize them to the same extent?
kenjackson
But aren't lasers incremental too?
deepnotderp
So are FinFETs, quantum wells, EUV lithography, compact linear accelerators, and all the devices necessary for shrinking just "innovation"?

What's "invention then"?

debuggerpk
Radio probably is invention... various multiplexing techniques like TDMA, FDMA, and CDMA are not.
Seems like the resources he had to work with in the VPRI project were pretty limited. It will be interesting to see what his team comes up with now that they are working with SAP and YC.

So far, I know about this: https://harc.ycr.org/project/

Hopefully, they're shooting for something like this: https://www.youtube.com/watch?v=gTAghAJcO1o

Wow that's exactly how I feel. I couldn't even formulate it that well. But I'm very happy to know that other people feel this itch as well.

Your sentiment of "aiming for the future we want" instead of "piling up more stuff on our existing stack" reminded me very much of this talk: https://www.youtube.com/watch?v=gTAghAJcO1o

Dr. Kay, if you're still following... then with singular respect and gratitude for your life-changing work and ideas, I would like to ask you one question.

Is there a good way to use bad systems?

Such as the web, which you describe as a “broken wheel,” lacking even a fraction of (e.g.) Engelbart's vision. Or Linux, which you call “a budget of bad ideas.” (And no small budget, at that.) Or the iPad (and all common tablets, I assume), whose interface you call “brain dead.”[0]

What should we do with these things? Are they dead ends? Are they good for anything? Can they not be salvaged incrementally?

Here in the Hacker News community, where I am happy to see that my enthusiasm about your work and message is strongly shared, there is yet a huge amount of energy being poured into the wrong end of the low-pass filter, or, as you call it, “the world.” I know that we are not averse to learning curves, but maybe there is too much sunk cost to question what's already “working”? What should we do?

One answer is to use bad systems to simulate better ones. But—when this is even feasible—it's always done at the cost of performance, and VPRI's publications make no secret of that. A proof-of-concept does not equal a product. And at any rate, most of us are not researchers.

Because of this apparent dilemma, the exhilaration that I always feel when I hear you speak or read your writing is always tainted with a sense of despair. Is there any enlightened way to use today's systems (for example, as application developers), or should all of our efforts be directed at fixing (or indeed replacing) the systems themselves?

Thank you again, for all that you've done and continue to do.

[0] https://www.youtube.com/watch?v=gTAghAJcO1o&t=28m

https://archive.org/details/130124AlanKayFull

And others. I believe that these quotes are representative and not misused out of context.

alankay
I just found your comment. One answer wrt e.g. Javascript is to use it as a "machine code" and just put a whole better designed thing on top of it. Or try to get the web world to get behind Native Client or something better that allows a protected sandbox to be really developed into new facilities.
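A minimal sketch of that first answer, assuming TypeScript and a made-up toy expression language (nothing here is from Kay or VPRI): the "better designed thing" is authored in its own terms and merely compiled down to JavaScript, so JS serves only as the "machine code" the engine executes.

    // Toy higher-level language: a tiny arithmetic expression tree.
    type Expr =
      | { kind: "num"; value: number }
      | { kind: "add"; left: Expr; right: Expr }
      | { kind: "mul"; left: Expr; right: Expr };

    // "Compile" the tree down to plain JavaScript source text.
    function compile(e: Expr): string {
      switch (e.kind) {
        case "num": return String(e.value);
        case "add": return `(${compile(e.left)} + ${compile(e.right)})`;
        case "mul": return `(${compile(e.left)} * ${compile(e.right)})`;
      }
    }

    // (2 + 3) * 4, written in the higher-level form...
    const program: Expr = {
      kind: "mul",
      left: { kind: "add", left: { kind: "num", value: 2 }, right: { kind: "num", value: 3 } },
      right: { kind: "num", value: 4 },
    };

    // ...then handed to the JS engine as if it were machine code.
    const js = compile(program);
    console.log(js, "=", eval(js)); // ((2 + 3) * 4) = 20

Real systems that take this route (compile-to-JS languages, or sandboxes like the Native Client idea mentioned above) do the same thing at much larger scale: the design lives above, and JavaScript is just the target.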

Another answer is not to go back to Engelbart, but to at least start with large ideas like his, and try to think of the Internet and Personal Computing as something much more than a convenience -- but more as a "lifter" of humanity. What would that mean?

Another ploy would be to simply think about what is needed close to the end-users and to follow that back towards the plug in the wall (one hint is that there is no need to encounter a 60s style "operating system" -- so what would be much better in a real Internet universe?)

The main heuristic is to posit "I don't really know what I'm doing, and I'm not a strong enough thinker, and 'you can't learn to see until you admit you are blind'", etc. This is my starting place for trying to do real thinking, i.e. we aren't as smart as we need to be -- so we have to put real work and method into thinking and design.

Tony Hoare had a good observation. He said that debugging is harder than programming, so don't use all your abilities in programming, or you'll never get the program working. We can extend that into design. Design is difficult, but being able to assess one's designs is even harder -- leave something in reserve to avoid simply making things because we can.

gavinpc
Thanks for your insights, Dr. Kay. I appreciate your taking the time to reply.

For context, I was recently struggling with these questions in trying to rationalize a Shakespeare project (on current-day systems) as being good for humanity. In the section "why new media," I rely on your and Bret Victor's ideas as a standard for making that argument.[0]

Thanks again.

[0] http://gavinpc.com/project_willshake.pdf

The iPad dumbs down personal computing by doing things this way. https://www.youtube.com/watch?v=gTAghAJcO1o (Alan Kay talk, 14-15min)
zepto
I love Alan Kay, but he's just wrong there.

And since the PC and Android are not locked down - why don't we see the magic there?

anonbanker
Windows 10 is a walled garden.

Android is Google's walled garden by default.

There is no magic in walled gardens.

Why did I get down-voted? I answered the question and added a little humor. "Innovation" is not analogous to "invention". (Here's Alan Kay on the subject: https://www.youtube.com/watch?v=gTAghAJcO1o) TLDW: Technological adoption follows patterns of consumption like fashion. When we talk about "innovation" we're really talking about technologies that succeed in the marketplace and were successfully adopted by society.

Over the last decade we've seen major tech growth in two areas: mobile computing (computers as fashion accessories) and social networks. Both happen to be areas where youth culture has historically served a role as early adopters. If you're looking for historical perspective to figure out when 20 somethings were given the keys to the tech kingdom, it happened over the last decade when tech companies started using youth culture to drive consumption and "innovation".

Why do we need companies full of 20 year olds?

1) Because 20 year olds develop products for other 20 year olds. Most of them don't have the experience, wisdom or maturity to do anything else and they are fundamentally still trying to please each other and define themselves as a generation.

2) 20 year olds have a ton of energy and health, and are willing to work ridiculous hours on bad code for lower pay.

3) Making millionaires out of a few 20 somethings every year helps feed the fire.

Alan Kay has a wonderful lecture on this very topic - http://www.youtube.com/watch?v=gTAghAJcO1o
This is the video of the talk where Alan Kay talks about innovation vs. invention: https://www.youtube.com/watch?v=gTAghAJcO1o
This slow reading idea also jibes with Alan Kay's observation of "Slow Deep Thinking", which allowed us to have massive & non-incremental leaps into the future.

http://www.clarkaldrichdesigns.com/2009/05/alan-kay-and-huma...

Video by Kay which goes into more depth: https://www.youtube.com/watch?v=gTAghAJcO1o

My takeaway is the opposite—you buy Glass to buy Glass. A wearable computer with 1 hour battery life isn't practical, and your expectations won't be met, but you can be closer to the future so that you invent appropriately.

Think problem-oriented vs. idea/tool-oriented mindset, illustrated by Alan Kay in his talk The Future Doesn't Have to Be Incremental: http://www.youtube.com/watch?v=gTAghAJcO1o#t=1096. As you surround yourself with tools of the future, you can think of problems and solutions no one else sees (problem-finding as opposed to -solving).

Apr 05, 2014 · 249 points, 83 comments · submitted by corysama
bitwize
When Tetsuya Mizuguchi left Sega to form Q Entertainment, he and his team started work on the famous puzzle game Lumines. Their stated goal was to create a game that was merely half a step forward, as opposed to their previous game, Rez, which was two steps forward -- and didn't do well at market.

Smalltalk was at least two steps forward, probably much more than that. The critical thing that put it well into the future was the fact that it made the boundary between users and programmers even more porous. I'm sure many of you have heard the stories of teenagers sitting down to an Alto and writing their own circuit design software in Smalltalk. That kind of power -- turning ordinary people into masters of these powerful machines easily and efficiently -- is just the sort of revolution originally desired and promised us by the first microcomputer marketers.

But of course it didn't do well at market at first, so we had to settle for the thing that was merely half a step forward -- the Macintosh.

ExpiredLink
You assume that you/we can know the direction of 'forward'. I very much doubt that.

> Smalltalk was at least two steps forward

... then Smalltalk would be a major programming language today.

dropit_sphere
You might enjoy this segment from Game Theory about how innovative games don't do well:

http://m.youtube.com/watch?v=Cxhs-GLE29Q

mattgreenrocks
Great discussion of this concept, thank you!
neel8986
Though a bit obnoxious, I really liked the talk. Alan talked about 2007. If we look back, it was the time when the first iPhone was announced. We all knew that in a timespan of seven years the processor would be much faster (now it is almost 20 times faster), connectivity would be faster, and it would have a better display and better sensors. But still, none of the applications that exist today (except games and animations, maybe) take all this improvement into consideration. We are still stuck in the old ideas of messaging apps, photo sharing apps, maps and news aggregators. I believe all those apps could have been conceived back in 2007. No one thought about any new use cases which could make use of the improved hardware. In fact, some of the novel concepts like Shazam or Word Lens were conceived 4-5 years back. Now we are stuck at a time where the giants of the internet are just struggling to squeeze a few more bytes of information from users for the sake of making more money from ads. It is difficult to believe that seven years after the first smartphone, the most talked-about event this year was a messaging app being acquired for 19 billion!! I think hardware engineers push the limits, going to any extent to keep Moore's law true. But we software guys fail to appreciate what is going to come in the future.
vidarh
Consider that in 2007, the iPhone was already effectively the result of years of waiting. In '99/2000 there were already touch-screen PDAs with apps and various limited networking functionality, and phone functionality at least as early as 2002 (possibly earlier, I don't remember), and a few tablets had started making their appearance (both laptops + touch, as well as "proper" tablets). But they were all massively hamstrung by hardware (the first-generation Palms had less than 1/1000th the memory of many current smartphones, monochrome low-res displays, and resistive touch).

Arguably, even in '99, the idea itself was old - those of us working on stuff like that then, were looking back at Star Trek and other SF, and it was just the feeling that it was an idea whose time had finally come.

Apple's genius with the iPhone and iPad was realising its time had not come, and waiting and refining their design until the basic underlying hardware "caught up" and they could provide a product suitable for "normal users". Everyone else got to make the expensive mistakes; most of the companies involved are no longer around, or pulled out of that market before Apple made its entry.

Sometimes ideas are just not right yet, and spending time trying to force the issue is likely to fail because the end result will be massively compromised.

But sometimes the ideas are just not right yet also because the public has not "caught up". It's not just that software developers must figure this out, but end users must have caught up enough that the new ideas fit into their world view.

eurleif
>Apple's genius with the iPhone and iPad was realising its time had not come, and waiting and refining their design until the basic underlying hardware "caught up" and they could provide a product suitable for "normal users". Everyone else got to make the expensive mistakes; most of the companies involved are no longer around, or pulled out of that market before Apple made its entry.

Aren't you forgetting that Apple made the Newton?

None
None
neel8986
Uber surely is a great app. But if we think in terms of harnessing computing power, I really don't think it falls into the same category as the early GUI or OOP. They definitely have to work really hard at managing the non-computing side. But if you concentrate just on the computing side, it is just a scalable web service allowing users to book cabs. And it was nothing that couldn't have been conceived in 2007. Far more complex airline booking systems existed before that.

The only real example that comes to my mind is personal-assistant applications like Google Now/Siri. But still, they don't work to the level we desire them to.

stephencanon
GarageBand and various other music apps have made quite impressive use of the improvements you reference, to name just one category.
fidotron
This is the underappreciated part.

I'm not sure computing is going to have a major revolution, unless we see a massive AI breakthrough. The reason is that the amount of computing power available for a few watts and tens of dollars today vastly exceeds any idea of what to do with it. With all the data-in-the-cloud stuff, the limitations are storage-related (i.e., just the time to read data off disk), and the use of the CPUs is really fairly low compared to how long they sit around waiting.

Post iPhone is the best example, because all that's really happened is a (much needed) improvement in the underlying networks and a shake up of the business side of the ecosystem, but very little of this stuff that is actually used wasn't available before. There was a gold rush, but I think history is going to judge that era quite harshly in terms of lack of real progress.

exratione
Allow me to put forward a historical analogy: standing in 2014 and arguing a case for gentle future changes in [pick your field here] over the next few decades, based on the past few decades, is something like standing in 1885 or so and arguing that speed and convenience of passenger travel will steadily and gently increase in the decades ahead. The gentleman prognosticator of the mid-1880s could look back at steady progress in the operating speed of railways and similar improvement in steamships throughout the 19th century. He would be aware of the prototyping of various forms of engine that promised to allow carriages to reliably proceed at the pace of trains, and the first frail airships that could manage a fair pace in flight - though by no means the equal of speed by rail.

Like our present era, however, the end of the 19th century was a time of very rapid progress and invention in comparison to the past. In such ages trends are broken and exceeded. Thus within twenty years of the first crudely powered and fragile airships, heavier than air flight launched in earnest: a revolutionary change in travel brought on by the blossoming of a completely new branch of applied technology. By the late 1920s, the aircraft of the first airlines consistently flew four to five times as fast as the operating speed of trains in 1880, and new lines of travel could be set up for a fraction of the cost of a railway. Little in the way of incrementalism there: instead a great and sweeping improvement accomplished across a few decades and through the introduction of a completely new approach to the problem.

corysama
For ideas on how to make non-incremental progress in technology, check out Kay's earlier talk "Programming and Scaling" http://www.tele-task.de/archive/video/flash/14029/
straws
This is a great talk in a completely unwatchable format. Here are links to mp4s of the video:

http://stream.hpi.uni-potsdam.de:8080/download/podcast/HPIK_...

http://stream.hpi.uni-potsdam.de:8080/download/podcast/HPIK_...

http://stream.hpi.uni-potsdam.de:8080/download/podcast/HPIK_...

http://stream.hpi.uni-potsdam.de:8080/download/podcast/HPIK_...

jal278
A practical suggestion Kay makes is that one way to brainstorm start-ups is to think of technological amplifiers for human universals [1]

[1] http://en.wikipedia.org/wiki/Human_Universals

Geee
Those are kind of strange, but they struck a chord because I have been trying myself to discover these fundamentals which make us human and which most technology is 'amplifying' (in Kay's terms). For example, music is not on that list, which I personally think is one of the most important fundamentals and which a lot of technology is built upon. Nor is communication, which is another fundamental and also a driver of a lot of technology. Maybe I'm thinking in a bit different terms, though.
andrewflnr
I think Alan Kay put communication in his list. I don't think music is quite universal, certainly not uniformly and not the way we experience it now. He mentioned harmony theory as a notable non-universal, while almost any music you hear today makes extensive use of harmony.
semiel
One of the problems I've been struggling with lately is how to arrange for this sort of work, while still allowing the researchers to make a living. Governments and large corporations seem to have by and large lost interest in funding it, and a small company doesn't have the resources to make it sustainable. How do we solve this?
calibraxis
Activism is one way. David Graeber discusses this very topic. (Article: http://thebaffler.com/past/of_flying_cars and video: http://www.youtube.com/watch?v=-QgSJkk1tng)

Neoliberalism since the 70's is a well-known culprit, but he also discusses the corporatization of universities — stifling managerialist bureaucracies.

seanmcdirmid
Get lucky and score a position in a forward thinking research lab? Having a PhD helps, but there are plenty of constraints in most research labs also.
justin66
> Governments and large corporations seem to have by and large lost interest in funding it, and a small company doesn't have the resources to make it sustainable. How do we solve this?

Educating people as to where their tax dollars are going is always a good start. The average joe has some very, very odd ideas about the federal budget and how money is allocated.

Personal favorites: the way many people complain about how much we spend on foreign aid. Ask such a person how much we ought to spend as a percentage of the budget and the figure will very often represent a massive increase over what we spend now, since we don't spend much at all.

Or the way many people literally cannot wrap their heads around how much war costs. A couple of years ago an expert came out and pointed out that we spent more than $20 billion on air conditioning for our military every year in Afghanistan when you include road maintenance and fuel trucks and so on. He was a former general who had been involved in logistics but many people needed to just assume he was full of it, since that's more than we spend every year on fucking NASA.

The trouble I see is that if you were a politician and you went around with the charts and visual aids a businessman would use to give a briefing and convey that info... you'd look like Ross Perot. So I guess he just ruined it for everybody.

leoc
The UK is just launching something to address this: https://twitter.com/MattChorley/status/451773213497118720/ I expect it would be more difficult to do something similar for the US given the wedding-cake of federal, state and local taxes.
bjelkeman-again
Got some other reference? That link isn't working and my mobile makes it hard to fix the link.
leoc
The link WFM, but try http://www.dailymail.co.uk/news/article-2596059/Where-taxes-... ?
bergie
Here is the Finnish visualization http://www.veropuu.fi/valtionbudjetti/
cliveowen
Thank you for posting this. The best quote so far has been this: "Prior to the 18th century virtually everyone on the planet died in the same world they were born into". This is a realization I never had; we take progress for granted, but it's actually a precious thing.
DonGateley
Which is why I think the idea of change itself was an invention. Up until roughly that point in history people didn't apply themselves to change because they didn't even have the concept as it came to be understood.

Things progressed so glacially for so long simply because, from experience, nothing other than stasis could be imagined, not because we were any dumber. Change was the key innovation for change. Occasionally I wonder if it wasn't an inherently fatal discovery.

One wonders how many other such "basic" concepts there might be that remain hidden from view.

MrQuincle
Perhaps he's a tad obnoxious, but he says some interesting things.

- think of the future, then reason backwards

- use Moore's law in reverse

- an introvert character can be helpful in coming up with real inventions

- be interested in new ideas for the sake of them being new, not because they are useful now, or accepted, or understandable

- it seems good to sell stuff that can be used instantly; people, however, like many other things. They might, for example, like learning or getting skilled. The bike example is one, but also the piano, or the skateboard.

At least, this is what I tried to grasp from it. :-)

xxcode
Hacker News is the epitome of short term thinking, with projects like 'weekend projects' etc.
leoc
It's amusing that the same optical illusion has been discussed by Michael Abrash https://www.youtube.com/watch?v=G-2dQoeqVVo#t=453 and Alan Kay https://www.youtube.com/watch?v=gTAghAJcO1o#t=1534 in talks on very different topics recently.

> Thomas Paine said in Common Sense, instead of having the king be the law, why, we can have the law be the king. That was one of the biggest shifts, large scale shifts in history because he realised "hey, we can design a better society than tradition has and we can put it into law; so, we're just going to invert thousands of years of beliefs".

Pfft, tell that to the 13th-century Venetians: http://www.hpl.hp.com/techreports/2007/HPL-2007-28R1.pdf . Constitutionalism isn't that new an idea.

forgotprevpass
At 15:00, he mentions research on the efficiency of gestures done in the 60's. Does anyone know what he's referring to?
ozten
Sketchpad by Ivan Sutherland in 1963 would be one.

It allows one to draw CAD drawings, convert drawn letters into labels, etc., in a more natural way.

https://en.wikipedia.org/wiki/Sketchpad

jecel
Besides the Sketchpad, Alan always mentions Grail (GRAphic Input Language), designed by Thomas Ellis and programmed by Gabriel Groner and others at the Rand Corporation. That was from 1964.

As far as I know, Alan also greatly admires the work done at MIT in the 1970s like http://www.paulmckevitt.com/cre333/papers/putthatthere.pdf

andreyf
Stephen Wolfram's demo he referred to doesn't appear to be up yet, but this one from a couple weeks back is pretty sweet: https://www.youtube.com/watch?v=_P9HqHVPeik
purpletoned
It seems to be up now at https://www.youtube.com/watch?v=JzYmO20N6MY.
athst
This is a great excuse for buying the nicest computer possible - I need to compute in the future!
sAuronas
Playing Wayne Gretzky:

In 30 years we will (ought to) have cars that repel over the surface by a bioether [sic], possibly emitted from the street - which will have become (been replaced by) linear parks that vehicles float over and never crash. Because of all the new park area, some kids in the suburbs (because they will be park rich) will invent a new game that stretches over a mile and involves more imagination than football, basketball and soccer - combined.

That was an awesome video. C++ == Guitar Hero

revorad
The talk starts at 2:42 - http://youtu.be/gTAghAJcO1o?t=2m42s
kashkhan
Anyone have a link to the Q&A after the talk?
rafeed
Firstly, I enjoyed his talk. It was pretty insightful about the way so many businesses and corporations think today, and how we've lost track of building the future. However, there's one thing that really bugged me about his talk. It basically boils down to the fact that you have to take Moore's Law into consideration and pay a hefty sum, for technologies that are 10-15 years ahead of their time, to make any invention that is useful for the next 30 years. How does one "invent" in his terms today without the equity that he refers to, which you need?
w1ntermute
Also, Moore's law might be applicable to computing hardware, but it isn't necessarily generalizable to other sorts of inventions.
queensnake
That 'universals' guy seems actually to be Donald Brown, and his book is 'Human Universals'. http://www.amazon.com/Human-Universals-Donald-Brown/dp/00700...

The book is expensive; here's a list:

http://condor.depaul.edu/mfiddler/hyphen/humunivers.htm

norswap
Totally tangential, but that intro music segment with Alan Kay just looking around is total comedy gold. Ah, those cheesy conf organizers...
oskarth
For an alternative and cynical view of Xerox PARC, have a look at Ted Nelson's Computers for Cynics 2 - It All Went Wrong at Xerox PARC (15 minutes video):

http://fixyt.com/watch?v=c6SUOeAqOjU

kev009
I know this is really trivial, but I found the extended music intro and his unamused reaction quite comical. Over-analyzing, it's a juxtaposition to parts of his talk.
LazerBear
This is very relevant to something I'm trying to build right now, thank you for sharing!
Zigurd
A lot of his talk was wasted on irrelevant complaining about lack of capex in R&D. That's only partly correct. Any one of us can afford to rent a crazy amount of computing power and storage on demand. Pfft.

In short, skip the first 20 minutes. He's being a grumpy old man. In the second part, he's a pissed-off genius and revolutionary.

dropit_sphere
Sure, but do they have money to live on while they're experimenting?
Zigurd
His argument that 5-year timespans are minimal for invention is on target, but that's opex, even when you are cash-flow negative. Where he is wrong is that you need large equipment capex for invention. Unless you are building a novel special-purpose computing device, like a giant FPGA cryptocurrency miner, you really don't need more than $5k capex per coder for anything anymore, and renting web-scale power is cheap.
nkuttler
Computing power and storage make the computer of the future?

If you think like that, "new" things can only come from storing more data and processing it in new ways. That seems rather limiting...

Off the top of my head, I'd expect the "computer" of the future to know a lot more about my immediate surroundings. That means sensors, not big data; computing power and storage are useless for that. Can I get sensors that are 20 years ahead of the stuff that's mass-produced today? Maybe, but it's certainly not something I can rent for cheap.

Roritharr
Wow, he really comes off as obnoxious.

Yes, what was done at Xerox PARC was really amazing and cool, but can you please contain your ego at least a little?

This talk sounds basically like him explaining to everybody in detail how awesome his achievements are.

EDIT: The best point is where he explains with charts that 80% of people are basically sheeple...

cliveowen
At first glance it might come off as hubris, but it's actually just a way of separating what we now know as technology (consumer technology), which is mostly just incremental, from the other kind of technology, which is innovative and profoundly transformative.

When he talks about the research that went on at PARC back in the 70s, it's not to show off his accomplishments but to show the difference between commercial products addressed to the masses and touted as technological breakthroughs, and real breakthroughs that happen way before mass production and slowly make their way into society, bringing about enormous change and wealth.

dredmorbius
The core idea of non-incremental progress: Xerox PARC accomplished what it did in large part by forcing technology 15 years into the future. The Alto, of which PARC built around 2,000 units, mostly for its own staff, cost about $85,000 in present dollars. What it provided exceeded the general-market personal computing capabilities of the late 1980s. This enabled the "twelve and a half" inventions from PARC which Kay claims have generated over $30 trillion in wealth, at a cost of around $10-12 million/year.
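A back-of-envelope sketch of that "force the technology 15 years into the future" arithmetic (my own illustration, not a calculation from the talk): assuming price/performance roughly doubles every two years, hardware N years ahead of the commodity curve costs about 2^(N/2) times an ordinary machine. The two-year doubling period and the $2,000 commodity price below are assumptions; only the general idea comes from the summary above.

    // Rough "Moore's law in reverse" estimate: if price/performance doubles
    // every `doublingYears`, then hardware `leadYears` ahead of the commodity
    // curve costs roughly 2^(leadYears / doublingYears) times as much today.
    function futureCostMultiplier(leadYears: number, doublingYears = 2): number {
      return 2 ** (leadYears / doublingYears);
    }

    const commodityPcUsd = 2_000;  // assumed price of an ordinary machine today
    const leadYears = 15;          // how far into the future you want to "live"

    const multiplier = futureCostMultiplier(leadYears);   // ~181x
    const costToday = commodityPcUsd * multiplier;        // ~$362,000

    console.log(`~${multiplier.toFixed(0)}x the commodity price, ` +
                `or roughly $${Math.round(costToday).toLocaleString()} per machine`);

Under these assumptions, "living 15 years in the future" is expensive per seat, but tiny compared to the wealth Kay attributes to the inventions it enabled.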

Kay also distinguishes "invention" (what PARC did) -- as fundamental research, from "innovation" (what Apple did) -- as product development.

Other topics:

• Learning curves (people, especially marketers, hate them)

• "New" vs. "News". News tells familiar stories in an established context. "New" is invisible, learning, and change.

• The majority acts based on group acceptance, not on the merits of an idea. Extroversion vs. introversion.

• There are "human universals" -- themes people accept automatically, without marketing, as opposed to non-universals, which have to be taught.

• Knowledge dominates IQ. Henry Ford accomplished more than Leonardo da Vinci not because he was smarter, but because humanity's cumulative knowledge had given him tools and inventions Leonardo could only dream of.

• Tyranny of the present.

bluishgreen
"Knowledge dominates IQ" - along this lines, recently I have come to accept the idea "Speed dominates IQ":

To explain it briefly, someone who can build and deploy fast can learn more, as they come into contact with unknown unknowns sooner and more often.

Contrast this with someone with a huge IQ who thinks about things a ton and builds really clever first solutions which break on first contact with reality/the market.

tensor
Running forward blindly does not help in research. During the talk, Alan even talks about this in the context of learning about the past to be able to come up with new ideas in the future.
dredmorbius
True: failing fast only works well if you're failing at the right things, and learning from them.
vidarh
On the other hand, there's a tendency among some people who "know too much" to dismiss ideas that have existed for a while, even though they haven't been widely disseminated.

Sometimes running forward blindly, while ignoring all the reasons not to do something, is what finally gets innovations out to people.

bjelkeman-again
I get this at the moment. We have something new that the old hands say can't be done. We've proven it in the lab to our satisfaction and those that "know" just shake their heads. They do want more detail, though.

It is fun to take methods that are 80 years old and update them with new technology and have them outperform the state of the art.

PeterisP
Yup, in computing there is often a situation where in the '60s or so they already figured out the proper way to solve some particular problem, but noted that it was impractical due to the unimaginably huge resources required (such as gigabytes of storage, or a CPU comparable to a modern smartphone); meanwhile, the 'state of the art' for that problem is based on approaches that were known to be strictly limited, but could be practically implemented given the hardware limitations of the time.
arethuza
That reminds me of the idea that "the world is its own best model", credited to Rodney Brooks. Basically, rather than having an intelligent agent rely upon a complex predictive model of its environment to explore the actions it can take, a better/faster/simpler approach might be to actually interact with the environment and see what works - "intelligence without representation".

http://en.wikipedia.org/wiki/Rodney_Brooks

adsr
A crude precursor to the ideas in the Alto can be seen in the Mother of All Demos by Doug Engelbart, imho.
subdane
Doug Engelbart > Alan Kay > Steve Jobs > ?
anewcolor
eh, Engelbart > Kay > Bret Victor
dredmorbius
I meant to include a reference to Ted Nelson's "It All Went Wrong at Xerox PARC", which has been featured here before:

http://fixyt.com/watch?v=c6SUOeAqOjU

calibraxis
Thank you!
tod222
Thank you. Nelson does a great job enumerating the limitations of PARC's vision and the capabilities absent from PARC's work.

After decades of exposure to PARC's self-congratulatory self-promotion, I find Nelson's history a breath of fresh air.

This is not to devalue what PARC accomplished, but to say that its accomplishments must be taken in perspective.

It All Went Wrong at Xerox PARC is also on YouTube:

http://www.youtube.com/watch?v=c6SUOeAqOjU

mattgreenrocks
> The majority acts based on group acceptance, not on the merits of an idea. Extroversion vs. introversion.

This is extremely important to realize. Eventually your [art|research] will take you beyond what people are comfortable with, and it doesn't mean you're wrong. It is hard, though.

dredmorbius
Quite.

I'm exploring some areas at the moment generally outside the "general acceptance field", and it's uncomfortable. I find that when I run across others, especially others I know, who are expressing similar concerns, or better, are reaching conclusions similar to mine, I'm strongly reassured.

eikenberry
I think you missed one of his crucial points. That is that we live in a waking dream. That our day-to-day understanding of what is real and normal is a fiction.
skore
> • Knowledge dominates IQ. Henry Ford accomplished more than Leonardo da Vinci not because he was smarter, but because humanity's cumulative knowledge had given him tools and inventions Leonardo could only dream of.

This really struck a chord for me. What I got from it was that many people try to build some form of success on pure IQ and get frustrated when they are outmuscled by knowledge in the market.

I think that cuts back to Xerox PARC as well - by focussing everything on IQ, they created the knowledge that allowed Apple to be so dominant.

Where the talk falls a little bit on the obnoxious side is when Mr. Kay makes dismissive statements about how they created, 10 or 20 years earlier, what others went on to sell. I think that ignores the enormous amount of work you have to put into connecting the knowledge they worked out to the current state of mind that people are in.

Xerox PARC may have invented the future, but the failure of their parent company to bring that future to market shows that even with that knowledge at hand, you have quite a bit of way ahead of you.

pjmlp
> Xerox PARC may have invented the future, but the failure of their parent company to bring that future to market shows that even with that knowledge at hand, you have quite a bit of way ahead of you.

There are quite a few talks scattered around the web from former employees about how everything went downhill due to mismanagement.

Back in the late 90's, when I started using Oberon and digging into what was Wirth's inspiration at Xerox PARC, I discovered a new world much more interesting than what Bell Labs could offer.

However, Bell Labs had better luck bringing their research to the market at large, given how it all played out with the universities.

trhway
>Xerox PARC may have invented the future, but the failure of their parent company to bring that future to market shows that even with that knowledge at hand, you have quite a bit of way ahead of you.

does it really matter that Xerox PARC "invented the future"? If not them, then somebody else a couple of years later. 10-20 years down the road it just wouldn't have changed a bit.

garysweaver
I know that it is helpful at times to realize that invention and innovation could happen in a myriad of ways, but I don't think that is a fair way of looking at things in this case, or in many cases.

Without the invention of the mouse, we might all still be using trackballs. A mouse to me is not an "A ha!" invention. I remember when I first used one that it seemed silly to move a trackball around upside-down like that.

And without laser printing, we might all still be using ribbon cartridges at work, because of time/cost concerns with inkjet.

Everything that happens can have profound effects on the future. Think of the Challenger and Columbia disasters. While it's true that the American public had somewhat lost interest in the shuttle program prior to the Challenger disaster, they didn't think of it as risky or dangerous. Think of the lives that would have been saved and how much farther along we might be in space exploration had greater precaution been taken.

trhway
>Without the invention of the mouse, we might all still be using trackballs. A mouse to me is not an "A ha!" invention.

The "A ha!" for me was when I first used a trackball instead of a mouse. It was so great, such fast and precise movements (don't remember the game though :) And if I remember correctly, people with carpal tunnel get a trackball instead of a mouse.

>Everything that happens can have profound effects on the future. Think of the Challenger and Columbia disasters. While it's true that the American public had somewhat lost interest in the shuttle program prior to the Challenger disaster, they didn't think of it as risky or dangerous. Think of the lives that would have been saved and how much farther along we might be in space exploration had greater precaution been taken.

I come from a different school of thought. Whatever number of butterflies somebody squashes today, it may (and will) affect only small details of the future, while the whole system's trajectory will still be in the same volume of phase space, as determined by the macro conditions (like energy constraints, etc.).

>farther along we might be in space exploration had greater precaution been taken

Technological civilizations follow a typical path. A bit faster, a bit slower doesn't matter. What matters is whether a given technological civilization hits a bifurcation point like all-out nuclear war, uncontrollable climate change, uncontrollable runaway genetic development, or implosion into an ant-colony state as a result of members being "always connected"...

tensor
You could apply this to everything in life and conclude that nothing matters. What is important is that Xerox PARC did invent many of the technologies we take for granted today. This should be celebrated along with all the other contributions that led us to the present.
trhway
>You could apply this to everything in life and conclude that nothing matters.

it matters "locally" - to individuals who invent and innovate and to ones who gets direct financial profit from it. The rest of civilization will get it anyway, at the time when civilization is ready to consume it, so details don't matter at that scale. Ancient Greeks could in theory build a steam engine [granted very low efficient] - they wouldn't have been able to put it use anyway, it would be just a curiosity like Aeolipile was.

svantana
It's probably hard to tell in most specific cases, but it can be argued that some inventions were just ready to happen (e.g. the telephone and its race to the patent office) while others may have taken decades longer if the right genius hadn't been in the right place at the right time (Einstein's relativity, perhaps?)
nnq
It matters more than locally: those inventions seeded the minds of the next generations and showed them what could be possible. It's not as if Gates and Jobs didn't hear about the work done at PARC. They most likely did, and it "seeded" their minds or at least primed them for their ideas. Even if it didn't do even that, it at least confirmed that their intuitions were on the right path, and without those confirmations they maybe wouldn't even have had the courage to start their businesses.

And if the ancient Greeks had actually built that steam engine into a useful device (even something like a powered fishing-line drawer or anything of even the slightest usefulness), I'm sure this would've completely changed the course of human history, even if they may not have found much practical use for it. You see, once you physically build something that works and is useful (or actually implement something that can be used, in the case of software), no matter how inefficient and barely-useful it is, you actually seed an idea into the minds of all the people who see that device working or hear about it working. And that idea takes root, and very soon they will think about improving it and finding uses for it. If you just write about it in a book, even if you add building instructions and careful calculations showing that it's possible, you get at most 1% of that mind-seeding power. Most people's minds get primed to work on improving something only after seeing something that (at least kind of) works. It's the "if we'd only tweak that and that, it might just actually really work" thinking that most people can only have after seeing something physical that at least "barely kind of works". Very few people have the intelligence and imagination to take a theoretical idea and turn it into something that works, but there are always some Gateses or Jobses or Wozniaks who, once you give them a piece of practical vision, can carry it on to something that really works.

I think that if the ancient Greeks had built a steam engine with even the most remote practical use and had displayed that "craftsmanship" to the people instead of keeping it locked in a library, the whole Middle Ages with all their horrors might not even have existed. Humanity might have even jumped from antiquity to the industrial age directly. Maybe with slow languishing at first, before something equivalent to the printing press had been invented to help the dissemination of knowledge, but then faster and faster.

epayne
Alan Kay posits that yes it does matter that PARC "invented the future" because there were and are very few researchers working with the opportunity mix necessary. They had five years of freedom from business concerns, tons of ambition, the right context, intelligence and inspiration. Kay claims that the extremely unique situation and persons at PARC and previously at ARPA are what gave rise to the inventions. I think he would agree with you that other people would have made similar discoveries and inventions, if only they too had the opportunity and necessary materials. From what I have heard Alan say in his speeches his perspective is that the opportunity simply does not exist today, at least not in the necessary configuration to do what PARC did.

Check out this video for a more detailed history recounted by Kay about PARC and what led up to it: http://vimeo.com/84523828

trhway
>there were and are very few researchers working with the opportunity mix necessary. They had five years of freedom from business concerns, tons of ambition, the right context, intelligence and inspiration.

A lot of universities and research centers have the same. As most of them are, arguably, not a PARC, maybe these conditions aren't important? On the other hand, we do have a lot of R&D coming out of universities and research centers, so maybe PARC isn't important?

I may sound like I'm trolling, but I did listen to Alan Kay live once and honestly got more confused and had more questions as a result :)

dropit_sphere
University research is not free from business concerns either---the phrase "publish or perish" exists for a reason.
thisrod
Right at the start, Kay pointed out that the innovation to get from Alto to Macintosh took more work than inventing the Alto. I don't think he was ignoring that. His point might be that someone was bound to do the innovation, but it took very deliberate choices for PARC to situate themselves where the invention was possible.
skore
Good point. I would still say that the route "from Alto to Macintosh" is thinking more in IQ terms, and that the full path, Alto to Macintosh to market, is frequently underestimated. I had a feeling that while Kay may have appreciated the first half, the second half was rather absent from his talk, and he kept presenting it as though the software they built was sold ten years later by somebody else. For example, saying that Microsoft took Bravo and sold it as Word.
irishjohnnie
What Microsoft did was hire Charles Simonyi in order to productize the research into Word.
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.