HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Joe Armstrong & Alan Kay - Joe Armstrong interviews Alan Kay

Erlang Solutions · Youtube · 301 HN points · 20 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Erlang Solutions's video "Joe Armstrong & Alan Kay - Joe Armstrong interviews Alan Kay".
Youtube Summary
The next Code Mesh Conference will be on 8 - 9 November 2017 (with Workshops on 7 November) - subscribe to receive exclusive content, updates and benefits.
HN Theater Rankings

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Dec 13, 2021 · 2 points, 0 comments · submitted by tosh
One sort-of exception is Erlang (and anything else that runs on the BEAM vm, like Elixir). It doesn't match Alan Kay's vision exactly, because the message passing is done a bit differently (for example it's asynchronous), but creator Joe Armstrong called it something like "the best of functional programming and OOP".

If you come from C++, Java, or another "OOP" language and look at Erlang, it's completely different from what you expect. In Erlang, the "objects" are less like mutable structs and more like functional mini-programs that can create new programs, pass messages, and fail as you would expect.
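That "functional mini-programs passing messages" idea can be sketched even in plain Python with a thread and a mailbox queue. This is only a rough illustration of the actor style, not how Erlang/BEAM actually works, and the names (`counter_actor`, `inbox`) are made up for the example:

```python
import queue
import threading

def counter_actor(inbox, replies):
    # Private state lives only in this loop's locals; the outside
    # world interacts with the actor purely via messages.
    count = 0
    while True:
        msg = inbox.get()
        if msg == "incr":
            count += 1
        elif msg == "get":
            replies.put(count)
        elif msg == "stop":
            break

inbox, replies = queue.Queue(), queue.Queue()
t = threading.Thread(target=counter_actor, args=(inbox, replies))
t.start()

inbox.put("incr")   # sends are asynchronous: we don't wait for a reply
inbox.put("incr")
inbox.put("get")    # ask for the current count
count = replies.get()
inbox.put("stop")
t.join()
print(count)        # -> 2
```

The point is that `count` is unreachable from outside the actor; the only protocol is the messages it understands.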

For some perspective on where each of them was coming from, here's "Joe Armstrong interviews Alan Kay":

https://www.youtube.com/watch?v=fhOHn9TClXY

dllthomas
> Joe Armstrong called it something like "the best of functional programming and OOP".

Though IIUC Armstrong only started doing that after others pointed out that it was OOP.

Lammy
> for example it’s asynchronous

Ruby also lets you build programs like this as of 3.0, using Ractor#send / Ractor::receive https://docs.ruby-lang.org/en/master/doc/ractor_md.html#labe...

It’s still pretty new so a lot of the Gem ecosystem hasn’t caught up yet (e.g. C extensions need to be explicitly opted-in as Ractor-safe), but I built a new “MIME::Types” library replacement with it recently and have enjoyed very few teething issues: https://github.com/okeeblow/DistorteD/blob/NEW%E2%80%85SENSA...

Mar 26, 2021 · drkrab on Joe Is Wrong (2009)
Further context: Joe Armstrong and Alan Kay interview. https://youtu.be/fhOHn9TClXY
The irony of course being that Erlang ranks as one of the most OO languages, if one accepts the Alan Kay concept of OOP, which is rather more behavioural than structural, and thereby entirely compatible with the functional paradigm and algebraic forms generally.

See also https://www.youtube.com/watch?v=fhOHn9TClXY

Nov 07, 2020 · 16 points, 1 comment · submitted by baryphonic
jdkee
See https://www.wired.com/1996/12/ffglass/
Checking out what Alan Kay has to say about it might help: https://www.youtube.com/watch?v=fhOHn9TClXY

The way I see it, OOP is about encapsulating/bundling data and the code that processes that data into one "object" that is isolated from other objects. Objects can only "communicate" with other objects by sending "messages" (data/events/something).
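A minimal Python sketch of that definition (the `Account`/`send` names are hypothetical, not from any real library): all state is internal, and the only way in or out is a message:

```python
class Account:
    """All state is internal; the only way to interact is a message."""
    def __init__(self):
        self._balance = 0          # never touched directly from outside

    def send(self, message, *args):
        # Dispatch on the message name, like a tiny method lookup.
        if message == "deposit":
            self._balance += args[0]
        elif message == "balance":
            return self._balance
        else:
            raise ValueError(f"unknown message: {message}")

acct = Account()
acct.send("deposit", 100)
acct.send("deposit", 50)
print(acct.send("balance"))   # -> 150
```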

Personally I don't like OOP, and I think that focusing on data and how it's processed is a better way forward (CSP and such). It's... a complicated matter.

Concurrency, the way I understand it, is about making parts of the program run independently of each other, so that they can run at the same time.

But you might want to check out someone who knows better, like Rob Pike: https://www.youtube.com/watch?v=cN_DpYBzKso

Yes, Erlang makes concurrent programs easier to write.

Again, it's better to listen to someone who knows more, like Erlang's creator Joe Armstrong: https://www.youtube.com/watch?v=bo5WL5IQAd0 Any talk by him is great.

Erlang, as I see it, makes it easier to write concurrent programs because:

1. Parts of a program communicate with other parts by passing messages

2. Every variable is single-assignment: if you declare that A is 12, you cannot later change it to 13
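Single assignment can be imitated in Python with a small write-once namespace; this is purely illustrative (Erlang enforces it in the language itself, and the `SingleAssignment` name is made up):

```python
class SingleAssignment:
    """A write-once namespace: binding a name twice is an error,
    mimicking Erlang's single-assignment variables."""
    def __init__(self):
        self._bindings = {}

    def bind(self, name, value):
        if name in self._bindings:
            raise ValueError(f"{name} is already bound")
        self._bindings[name] = value

    def get(self, name):
        return self._bindings[name]

env = SingleAssignment()
env.bind("A", 12)
print(env.get("A"))    # -> 12
try:
    env.bind("A", 13)  # rebinding fails, as it would in Erlang
except ValueError as e:
    print(e)           # -> A is already bound
```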

Joe Armstrong had a way of explaining these things so that even I could understand them, so I recommend watching every talk of his you can find. I also recommend the interview he did of Alan Kay: https://www.youtube.com/watch?v=fhOHn9TClXY .

But if you want to really get into concurrency, I recommend the paper "Communicating Sequential Processes" by C. A. R. Hoare. See http://www.usingcsp.com/

It's a "formal language" for dealing with concurrency. Go, and some other languages, implement a "CSP style" way of doing concurrency.

Erlang doesn't do SIMD or GPUs. However, it does do world-scale, low-latency systems.

Of course, it's still up to the programmer to do everything correctly and Erlang is just a tool.

I don't quite like *JS, so I won't say anything about that.

Well, HN seems to have collectively shit on this project in this comment thread. I agree with some of the negatives, but I don't understand why those would be the facets that stick out in this conversation. Yes, his site text explaining his motivations is pretty bad. App control is nothing new. And many gestures are similar to other systems'. But criticisms based on security, the originality of his ideas, or his general use of swipe gestures are just bullshit.

It is backwards and naive to think that desktops are just fine; computer programmers are historically the last to understand and embrace change. (Now here is where we could put some Alan Kay quotes; I'll start with his exasperation at our lack of a real CAD system for programming.)

> "we would have something like what other engineering disciplines have in serious cad system, a serious simulation of the cad designs and a serious fab facility, to deal with the real problems of doing programming. Ivan [Sutherland] just jumped there [with sketchpad]." [1]

So let's talk about the interesting features he presented, and do ourselves the service of learning from this work.

Panels: Top-to-bottom flow for content is DIFFERENT FROM ALL MAINSTREAM OSes. With this feature he is lamenting permanent status bars, the Windows ribbon, the Chrome tab bar, and more, and he goes on to explain the alternate features that replace the functionality he moved in pursuit of more vertical space. He also displayed a number of panel-navigation situations that seem well designed: pinning an active window, the ability to scroll, and minimizing windows to reinforce spatial memory and leave breadcrumbs, among others. He also accounts for resizing windows.

See the new c2 federated wiki for interesting uses of vertical space and breadcrumbs [2].

Tags: As another user mentioned in the comments, a lot of work has gone into the study of PIM, and tagging is quite effective. Of the three (search, hierarchy, tag) none is found to be best, but the availability of all three is important. This project does us the service of reminding us that we are generally missing that third option. This system offers all 3 options. (I'm sorry I can't find my source right now).

Search: Across all elements of personal computing (email to tabs to applications to files) is an interesting idea. Yes omniboxes have been around forever and will be, but this project pushes the idea that there are even more hooks to toss into that system.

Gaze: This is fantastic, and there are limitless opportunities. Of course it's not a silver bullet; you won't be taking my tiling-WM keyboard controls away from me (see that Onion video on the keyboardless Apple), and obviously I don't think that way. But there are cool interactions that very few people, if any, have had the opportunity to come up with on gaze-augmented PC systems.

Touch: Everyone is saying that tiling-WM controls are way better. Of course they are. What percentage of PC users have tiling WMs? Let's just round down to 0%. This brings that kind of efficiency to users who would otherwise never have it.

I appreciate the commenters who have looked at this project and reflected on it. I learn a lot from and really enjoy reading HN, especially the comment threads; I hope I can pay some of that back with this.

[1]: https://youtu.be/fhOHn9TClXY?t=1962

[2]: http://fed.wiki.org/view/welcome-visitors

E: This thread got a LOT better since I posted and refreshed the page. Thank you all.

anthk
>(now here is where we could put some Alan Kay quotes, I'll start with his exasperation at our lack of a real CAD system for programming)

CAD is maybe the worst example. It requires precision so a bridge doesn't come crashing down, and typing commands with numbers gives you that much better than a pure GUI does.

angleofrepose
Good point, but I think he uses the CAD example more in the context of simulation than as defending the GUI.

I spent a few years in architectural CAD software and afterwards went into computer science, and I was surprised in retrospect how often I was already doing command-line-like things before I knew the first thing about a shell or REPL. It seems you have had that experience too; I have not heard or read many people making that observation before.

Not at all hard to believe that he was sincere. He seemed to be pretty starstruck when he was interviewing Alan Kay on stage just three years ago: https://www.youtube.com/watch?v=fhOHn9TClXY.
vanderZwan
I was about to share this youtube channel, it has lots of wonderful videos with him - this particular one is a great example.
codyb
And his twitter @joeerl [0] which I feel unfortunate to have just discovered today is filled with him discovering emojis and dictation, talking up the joys of letters, and discussing FPGAs.

It’s sad to hear of his passing, and rewarding to see he was just as kind and curious as you could ever hope to be even just this month.

[0] - https://mobile.twitter.com/joeerl

Apr 20, 2019 · dpeck on Joe Armstrong has died
That is so sad. He was in his late 60s and it seemed like he had a lot of life left in him, talking about how software can get better and keeping a great (and sometimes snarky) outlook on the profession.

Highly recommend his thesis (2003) and a few of his great interviews/presentations for anyone who isn't familiar with Joe; they capture a lot of what he thought about and pushed for in his professional life.

http://erlang.org/download/armstrong_thesis_2003.pdf https://youtu.be/fhOHn9TClXY https://youtu.be/lKXe3HUG2l4 https://youtu.be/rmueBVrLKcY

I hope his family and friends can find some comfort in how much he was appreciated and admired in the development community.

nextos
I came here to say the same thing. His thesis is extremely readable and illuminating on the topic of reliable distributed systems.
jacquesm
It goes much further than that, it shows how to tackle reliability even in systems that are not distributed. The primary insight is that all software will be buggy so you need to bake reliability in from day one by assuming your work product will contain faults.
nextos
Yes, I know. Erlang was not distributed until 1991, roughly 5 years after it was born.

It's also really illuminating how they implemented the first versions of Erlang as a reified Prolog [1]. But that is not explained in the thesis, only in his 1992 paper, which he briefly cites.

[1] https://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=F4...

pklee
I came in thinking.. god it better not be him.. it better not be him.. so sad to hear this.. I loved his book on erlang. Just an amazing mind
masklinn
> That is so sad, he seemed like he had a lot of life left in him talking about how software can get better and having a great (and sometimes snarky) outlook on the profession.

I'm in actual shock, he was tweeting about pretty much that (also brexit and playing with his phone's voice recognition) just 2 weeks ago… He wasn't even 70…

pera
Yeah me too, he was also very active in the Elixir forum. RIP
mercer
He was even made admin just over a week ago :-/.

https://elixirforum.com/t/introducing-our-new-moderators-and...

I like Joe Armstrong's definition of OOP and what he did to make Erlang an OOP language:

http://erlang.org/pipermail/erlang-questions/2009-November/0...

Alan Kay has gone on record to state that current OOP misses his point of OOP, where message passing is the main idea.

Joe Armstrong's definition of OOP in the mailing list gives a little more context on the other modern features of OOP that aren't truly needed.

In the Alan Kay and Joe Armstrong talk, Alan Kay acknowledges that computer science is a pop culture, where the past isn't learned from and past mistakes are therefore repeated.

https://www.youtube.com/watch?v=fhOHn9TClXY

hota_mazi
Pretty rich for Armstrong to lament the past not being learned from while he himself used to think that OO sucks:

http://harmful.cat-v.org/software/OO_programming/why_oo_suck...

anthony_doan
That's an interesting article, but I don't believe it goes against his other article. Your link shows that he hated certain aspects of OOP, and the other article shows he decided to strip those aspects away and keep only the ones he believed are OOP.

At least that's my interpretation from reading the link you've provided and comparing it to the link I've posted.

mikekchar
I didn't watch the YouTube video. I'll try to do it later, but the email is spot on IMHO. The point about polymorphism and inheritance is great (although, having tried to implement OO systems from scratch before, I will argue that it's very hard to implement non-inheritance-based polymorphism in a dynamic language because of the difficulty of doing multiple dispatch).

The one quibble I'll make is about the "everything is an object" issue. I think he's begging the question there. One of the things I realised having done it myself is that in OO systems, message passing is essentially calling "bind" on a monad: the method is the function, and the program state encapsulated in the object is the contents of the monad. This breaks very, very hard if the return value of your method is not an object: you can no longer chain your operations. I honestly believe this is the real (probably unconscious) reason why "everything is an object" OO languages are attractive.
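For readers unfamiliar with the analogy, here is a toy Maybe monad in Python: `bind` is what keeps chaining possible, because it always hands back another wrapped value. A sketch only, not any particular library's API:

```python
class Maybe:
    """A minimal Maybe monad: `bind` chains operations and
    short-circuits as soon as a step produces nothing."""
    def __init__(self, value):
        self.value = value

    def bind(self, fn):
        if self.value is None:
            return self          # chain broken: propagate the 'nothing'
        return Maybe(fn(self.value))

result = (Maybe(4)
          .bind(lambda x: x * 10)
          .bind(lambda x: x + 2))
print(result.value)   # -> 42

# If any step yields None, the rest of the chain is skipped:
broken = Maybe(4).bind(lambda x: None).bind(lambda x: x + 1)
print(broken.value)   # -> None
```

The chain only works because `bind` returns a `Maybe` every time, which mirrors the "everything must be an object or you can no longer chain" point above.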

Mar 17, 2019 · 7 points, 0 comments · submitted by tosh
I don't think I've watched it previously, and I don't have the time to now, but I have to imagine Joe Armstrong interviewing Alan Kay would at least touch on that (although from one comment it sounds like they didn't do more than that).

https://youtu.be/fhOHn9TClXY

jimbokun
Watch it!

I just watched the whole thing, based on just seeing your link.

It does talk about the historical connections between Smalltalk and Erlang, but goes far beyond that to remind us to think more about the bigger problems we need to solve, and get less caught up in doing things just because we have a tool that can do them or other people are doing things that way.

I found it truly inspiring.

One of the most balanced, insightful and respectful critiques I've read on the topic. Brightened up my day reading it. Thanks.

I've stated previously I'm an Erlang fan, for several of the reasons you've highlighted. I similarly don't believe it's a "global maximum".

Perhaps the most saddening observation is the number of languages that have come after Erlang - intended for server-type loads - that haven't learned from and built on its strengths in concurrency and fault tolerance.

I remember a separate discussion between Joe Armstrong and Alan Kay[0] where Kay posed the wonderful question (paraphrasing): "what comes next that builds on Erlang?"

That's a tempting prospect. My personal wish list would include first-class dataflow semantics and a contemporary static type system and compiler that's as practically useful as Elm's is today.

The key point is to build on what's been proven to work well, not throw it away and have to re-learn all the mistakes from scratch again.

[0] https://www.youtube.com/watch?v=fhOHn9TClXY

I am quite interested to hear if anybody has any thoughts on the work of Valentin Turchin, the author of Refal.

In an interview [1] conducted by Joe Armstrong, Alan Kay traced some of the key ideas of Erlang and Smalltalk to early papers by John McCarthy (Advice Taker), Carl Hewitt (Planner), Simula, and other work (including META 2, which apparently inspired OMeta). The theme here seems to be a favorite of Kay's: "late binding of meaning" used to build systems that scale just as well as biology.

In his interview, Kay mentions Minsky's Computation: Finite and Infinite Machines, which discusses various models of computation.

Now, Alan Kay has long looked to biology for hints on how to scale. (For example, in the interview I mentioned, he says that a computer is a "universal atom of meaning".)

I mention all this because what I am reading of Valentin Turchin [2] is making me very tempted to add (English) expositions of his work alongside my reading of Minsky's Computation. Specifically, Turchin's work on Supercompilation [3,4], which seems to draw upon a tradition of cybernetics [5], seems to me to be somewhat in the same spirit as Kay's, at least when you consider that the influence of cybernetics largely means that we take cues from biology and attach meaning to existing patterns in nature and technology alike in our endeavors to replicate their success (and of course couple our constructions with them in feedback loops, as in the electrical engineering that largely inspired cybernetics). Turchin appears to be a key figure [6] in the Principia Cybernetica project. (Whereas Kay appears to be more influenced by the North American McLuhan and Pragmatist traditions in philosophy and psychology, Turchin seems to come from a more European tradition of cybernetics. Both traditions seem to be scratching the same itch, though, especially when it comes to biologically inspired notions of computation and meaning.)

Seeing that so much of Turchin's work took place in Russia, and was only understood in the West many years later, I would be interested to hear what people make of it. Simon Peyton Jones is quoted in [4] as surmising supercompilation to be a "ball of mud", in which understanding anything at all required understanding the entire thing.

(Ben Goertzel, listed as a co-author of the 2002 paper on supercompilation [3], has made further remarks about Principia Cybernetica web on his website [7].)

[1] https://www.youtube.com/watch?v=fhOHn9TClXY

[2] http://pespmc1.vub.ac.be/TURCHIN.html

[3] http://goertzel.org/papers/SupercompilingJavaMay2002.htm

[4] https://themonadreader.files.wordpress.com/2014/04/super-fin...

[5] http://pespmc1.vub.ac.be/DEFAULT.html

[6] http://pespmc1.vub.ac.be/BOARD.html

[7] http://www.goertzel.org/benzine/PrincipiaCybernetica.htm

Alan Kay once said (at an Erlang conference):

"By the time the software engineering of a language gets in good shape, the language has become obsolete in terms of 'needed expressiveness'. [Therefore] in a 'Real' computer science, the best languages should serve as the 'assembly code' for the next generation of expression!" [1]

He then proposed the hypothetical question: "What should Erlang be an assembly code for?" [2]

I'm not sure if he knew about Elixir..

[1] https://youtu.be/fhOHn9TClXY?t=30m39s [2] https://youtu.be/fhOHn9TClXY?t=36m49s

>OO has been designed to model the world in the hope that this would make expressing business logic easier.

OO was designed for multiprocessing by sending messages between multiple processes [0], like nature does. However, I do not find it suitable for most parallel computing work because of race conditions.

[0] https://www.youtube.com/watch?v=fhOHn9TClXY

>AFAIUI from a recent talk by Alan Kay his original intention was that communication would be async ..

This talk maybe ?

https://www.youtube.com/watch?v=fhOHn9TClXY

> Erlang != OOP as imagined by Kay... as I understand Kay's intent.

You should check out the video[0] of Joe Armstrong interviewing Alan Kay - it seemed to me that Erlang was in some respects closer to what Alan Kay intended for OOP than even smalltalk was.

[0]https://www.youtube.com/watch?v=fhOHn9TClXY

This is one of the papers referenced by Alan Kay in the talk posted to HN one day ago:

Joe Armstrong Interviews Alan Kay [video] (https://www.youtube.com/watch?v=fhOHn9TClXY)

Discussion: https://news.ycombinator.com/item?id=13033299

Ahh..META II [1]. This is awesome. Was just mentioned again in the "Joe Armstrong interviewing Alan Kay" talk [2][3]. One of Alan's favourite papers. Joe: "Dan said if you look at this, you will lose a month of your life. And I did" Or words to that effect.

Also the basis for OMeta[4]. I have to admit I've been avoiding it so far in order not to get diverted from architectural insights[5] but this may just be accessible enough to dive on in.

[1] http://www.ibm-1401.info/Meta-II-schorre.pdf

[2] https://www.youtube.com/watch?v=fhOHn9TClXY

[3] https://news.ycombinator.com/item?id=13033299

[4] http://www.tinlizzie.org/ometa/

[5] http://objective.st

poppingtonic
Yes, I googled it during the talk.
Nov 24, 2016 · 269 points, 50 comments · submitted by tosh
Kexoth
"Hello Joe" - Awesome interview opening joke [0] :)

[0] - https://youtu.be/xrIjfIjssLE

ust
I've just seen the movie after your comments; it was interesting to see that they use Emacs (without syntax highlighting, it appears). OT, but how's Vim for coding in Erlang, anyone?
draven
I had to check and according to https://en.wikipedia.org/wiki/GNU_Emacs , font-lock was added in version 19.28 released November 1, 1994. The movie certainly looks like it was recorded in the 80s.

Some people also still use Acme which AFAIK does not have syntax highlighting.

di4na
Every editor has all that is needed. The community uses everything and doesn't care that much about the editor of choice.
josteink
> it was interesting to see that they use emacs

How so? It's a solid piece of software which has been around since forever (since before Ctrl-C and Ctrl-V conventions ever existed). We're all programmers, and Emacs is all about being a programmer's editor.

If you want to see an Emacs-user out-hipster every single web-developer who thought he was cool when he used HTML for his presentations instead of Powerpoint/Keynote... Then watch this: https://www.youtube.com/watch?v=TMoPuv-xXMM

It's incredibly contrived, and incredibly nerdy, yet this, to me, encapsulates so much about what Emacs is. And I love it :)

copperx
Wasting time in such contrived approaches that ultimately gain you nothing is part of what Alan Kay talks about.

Programmers have the devious hobby of tackling complexity of their own making. It isn't praiseworthy, or something to be proud of.

georgewsinger
Can someone explain to me Alan Kay's problem with monads in functional programming? He attacks the monad concept several times in this talk rather cryptically.
di4na
Monads are a way to hide the fact that state changes. Basically they allow you to abstract away the concept of time.

Alan Kay seems to push more in the direction of data as CRDTs or fully vector-clocked, so that time is just an additional way to reference some data.

It is close to the idea of separating time from space, or of combining the two.

In CS, we tend to consider a static vision of the world, and a function applied to it, when the truth is that the world exists at different times and is wildly different at each of them. So a piece of data would be defined by its place in memory and the clock time (vector clock) at which it was in that memory. Closer to what CRDTs do, in a way.

swah
What did he mean when mentioning McCarthy having solved that "change of state" problem?
jstimpfle
Don't know his or your context, but McCarthy is the creator of Lisp, so in a way he's one of the pioneers of functional programming (there were of course other mathematicians inventing the foundations), which is a way to combine programs from small building blocks that are just functions knowing no global state: they always return the same answer given the same arguments.

Contrast this to procedural or object oriented programming where usually the parts know a lot more about the context (state) than they really need to know. This can lead to implicit (and hard to discover) assumptions about the state of the world that are broken later on when some parts are changed, leading to bugs.

The way this works is by combining functions with (higher-order) functions. For example, chain a function that takes A and returns B with a function that takes B and returns C. This is done without ever speaking of concrete inputs; reducing unnecessary knowledge should lead to fewer bugs. (Whether that's a more practical approach is another question.)
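That chaining can be sketched in a few lines of Python (illustrative names only):

```python
def compose(f, g):
    """Chain a function A -> B with a function B -> C into a
    function A -> C, without ever naming a concrete input."""
    return lambda a: g(f(a))

parse = int                    # str -> int
square = lambda n: n * n       # int -> int

# The composition is built purely from the two pieces:
parse_and_square = compose(parse, square)
print(parse_and_square("7"))   # -> 49
```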

pron
That's the wrong context :) He was talking about McCarthy's later discussion of what he called "situations", and is similar to temporal logic (which came later). Functional programming doesn't satisfactorily address the state change problem.
kazinator
Lisp is built on OOP: all sorts of objects that have state and functions operating on it. For instance, an I/O stream, needed to do the "read" and "print" parts of REPL, is a stateful object.

Lisp is not only where functional programming originated, but also object-based programming leading to OOP. Lisp was the first language which had first-class objects: encapsulated identities operated upon by functions, and having a type.

For instance, an object of CONS type could only be accessed by the "getter" functions CAR and CDR, which take a reference to the object, and by the "setters" RPLACA and RPLACD.
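The same access pattern can be sketched in Python (a toy translation, not actual Lisp semantics; the names mirror the Lisp operators):

```python
class Cons:
    """A cons cell as an encapsulated object, accessed only through
    car/cdr 'getters' and rplaca/rplacd 'setters'."""
    def __init__(self, car, cdr):
        self._car, self._cdr = car, cdr   # underscore: private by convention

def car(c): return c._car
def cdr(c): return c._cdr
def rplaca(c, v): c._car = v   # destructive 'setter', like RPLACA
def rplacd(c, v): c._cdr = v

cell = Cons(1, Cons(2, None))
print(car(cell))        # -> 1
rplaca(cell, 99)
print(car(cell))        # -> 99
```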

Meanwhile, the other higher level programming language in existence, Fortran, had just procedures acting on global variables.

dang
I was interested to notice that Steele and Sussman describe Lisp as "an object-oriented language" on page 2 of this 1979 paper: https://dspace.mit.edu/bitstream/handle/1721.1/5731/AIM-514.....

There are other places in the Lambda papers and whatnot where they talk about objects, but I don't recall seeing that exact phrase. Of course, just what they meant by it is another question.

di4na
https://news.ycombinator.com/item?id=13036663

I answered here. I linked the paper by McCarthy at the end.

georgewsinger
So data structures shouldn't be viewed as static, but as indexed by time. Fair enough. I still don't understand how this couldn't be 100% achieved via state monads, or at the very least purely functionally.

That said, I appreciate all of the answers here.

jstimpfle
Another way to put it: with monads (or more generally FP) there is a separation of functions and data, whereas with objects the data is hidden behind procedural interfaces.

I'm not familiar with CRDTs. Can you compress the idea of how to "remember" a whole history of states with them, and why vector clocks are important? The introduction on Wikipedia reads like they are useful for (geographically) distributed processing. I can't really relate or contrast this to monads.

di4na
The problem with distribution is that things can happen simultaneously (or at least before the latency enables you to decide who is first), so your goal is to be able to recreate a history despite that asynchronicity.

That is what the Consistency in the CAP theorem hints at.

The whole idea is that if you define things not only by their state but also by the order in which that state has evolved, it enables you both to know which version is most recent and to go back in time. It is especially useful if someone comes along late, after you updated the state, and you discover that their update should have happened before the most recent change.

Forget that idea of data being hidden behind a procedural interface.

Another way to put it: you have a Dog that is dirty at 1600. So you decide to clean him and put him in a bath. He is now clean at 1610. Now another person (thread? computer? no idea) comes along at 1620 with an order, issued at 1550, to clean the Dog, because someone saw he was dirty then. That person does not ask whether the dog is dirty or not; he got an order and carries it out. You now have a dog that is being cleaned again.

From Alan Kay's point of view, there would not be a single dog with a single name, but a dog identified by his name plus the moment you named it. So when the person who saw the dog was dirty at 1550, and decided at 1620 to clean him when he was already clean, came to clean the dog, he would take dog1550 and not dog1620. He would clean a dirty dog and not a clean one.

He would be in another leg of the Trousers of Time.
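A toy Python sketch of that idea: keep every timestamped version of the state instead of overwriting it, so a late order can address the dog as of the time the order was issued. Made-up names and structure; real CRDTs and vector clocks are far more involved:

```python
# Every update creates a new (name, timestamp) -> state version
# instead of overwriting; readers address the dog "as of" any time.
history = {}

def record(name, time, state):
    history[(name, time)] = state

def as_of(name, time):
    # latest recorded version at or before `time`
    times = [t for (n, t) in history if n == name and t <= time]
    return history[(name, max(times))]

record("dog", 1550, "dirty")
record("dog", 1610, "clean")

# The late cleaner's order was issued against the 1550 dog:
print(as_of("dog", 1550))   # -> dirty
print(as_of("dog", 1620))   # -> clean
```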

A couple of papers: the seminal one by Leslie Lamport on logical clocks: http://research.microsoft.com/users/lamport/pubs/time-clocks...

And here is the McCarthy paper that Kay talks about: http://www.dtic.mil/dtic/tr/fulltext/u2/785031.pdf

swah
Thank you, I visit HN for this ;)
di4na
No problem. I think more people should look at the problems with time. We tend not to deal with them because we only do single-threaded synchronous stuff, but that is not how the future is going to be, nor the world, it seems. So better to learn now :)
brudgers
I am not Alan Kay {IANAK}.

My take is that from Kay's perspective computing is a mixture of mathematical and biological abstractions [he has degrees in both]. Monads only touch on the mathematical abstractions and ignore biological abstractions. My take is that Kay may be of a mind that absent biological abstractions computing is diminished in its ability to solve important problems.

lomnakkus
The thing about biology is that it's had a ~3 bn. year head start. It's also massively contingent; see evolution by natural selection. Which means that it may not actually represent any kind of global optimum[1].

[1] EDIT: Or even a local one if you're not optimizing the same thing that biology is.

mpweiher
I obviously can't tell you what he's thinking, but I can make an educated guess:

In the video (and almost anywhere else you look) you'll note he talks about recursion on the concept of "computer" as a way to scale, because that way the parts are as powerful as the whole.

If you split that concept into the sub-notions of "procedures" and "data", you no longer have that recursive principle, and "functions" in this context are sufficiently equivalent to procedures.

Going the other way, functions don't really scale up, many if not most things in computing are not functions to be computed. One could argue that computers and "computer science" are misnamed, as computation is more the exception than the rule. "Data-around-shufflers" would be more appropriate.

I didn't see the exact monad references, but they look like ways of working around the fact that the "function" primitive is inappropriate. It is really nice, mind you, and has wonderful properties. Kind of like circles as the basis for astronomical orbits: they are "perfect", but the world isn't perfect in the same way, so trying to model the imperfect world with these perfect primitives leads to having to introduce epicycles/monads.

My 2 €¢

Edit: “Functions are nice, but you need to advance the state of the system over time” — https://www.youtube.com/watch?v=fhOHn9TClXY&feature=youtu.be...

sebastianconcpt
"Data-around-shufflers" <- that's so it!
liotier
> "Data-around-shufflers" would be more appropriate

In the French language we have "calculateur" which is the direct translation of "computer" but only commonly used in the context of scientific computing, and "ordinateur" which is the common name for computers... The meaning of "ordinateur" pretty close to "data-around-shufflers", from latin ordino ‎(“to order, to organize”) - https://en.wiktionary.org/wiki/ordinateur: "in its application to computing, [ordinateur] was coined by the professor of philology Jacques Perret in a letter dated 16 April 1955, in response to a request from IBM France, who believed the word calculateur was too restrictive in light of the possibilities of these machines (this is a very rare example of the creation of a neologism authenticated by dated letter)"

sebastianconcpt
Great point! Due to the Latin origin you mentioned, Spanish has this too: "ordenador". It would be more or less "sorter" (implying that it does that to data), very likely influenced by Perret's coinage translated into Spanish.
qwertyuiop924
I sense a great disturbance in the network... as if a thousand programmers squeed out in joy and then fell silent.
mr_luc
MIT's PDF of the Minsky book was given the HN hug of death for a while.
signa11
> MIT's PDF of the Minsky book was given the HN hug of death for a while.

could you please let me know which book you are referring to here? thanks!

mr_luc
Computation: Finite and Infinite Machines
steveeq1
I don't understand. What does "hug of death" mean?
grzm
A rush of traffic that overwhelms a site due to a submission. Similar to being slashdotted or fireballed.
qwertyuiop924
...Or being wanged.
willvarfar
general term: "thundering herd" :)
mitchtbaum
so I searched, "stampede"

it seems too chaotic though

synonyms: crush, jam, trampling

hmm... "traffic jam" :)

signa11
> general term: "thundering herd" :)

more like "thundering nerds" ;)

khrm
I can't find that book anywhere in India. Is it available in the public domain?
sea-shore
What happens at 38:25? Joe: "This algebra you've made: it's a Prolog program, you just move a few brackets." And I went "oh my god". From that moment on I was not interested in Smalltalk anymore. Alan: "Yeah, neither was I! But..."
vanderZwan
It's so fun and humanising to see Joe Armstrong geek out in the interview
imglorp
I had the privilege to hang out with him at Abstractions this fall, where he gave the keynote. He was just holding court in the commons, shooting the breeze about halfwit management, project estimation, Jira tickets, and everything that hasn't changed since Brooks.

Then he asked which session we were going to next, so we all hit one about concurrency issues. It was jammed and we came in late, standing in the back. The presenter bemoaned locking and inconsistency and then got to solutions. He started with a slide on immutability and lightweight functional actors in Erlang, which had solved it 30 years ago, and pointed out Joe, who got a roomful of cheers and laughs and validation for his work, in person.

It was all very human.

History Bonus - Armstrong chatting with Stallman: http://imgur.com/a/ETkhZ

andrewl
I've never seen an interview with Alan Kay that I did not find fascinating.

I like Joe Armstrong, and he's brilliant of course, but he interrupted too much in places. It seemed to be out of enthusiasm, though, so I can't really hold it against him.

zzzcpan
Their message is essentially: you are all in cults; learn the old stuff and start using actors and message passing. And not CSP; as Alan Kay puts it, it's too rigid and gear-like.
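The asynchronous-send distinction can be sketched in a few lines. This is a hypothetical illustration in Python, not Erlang: each actor owns a mailbox (a queue) drained by its own thread, so `send` is fire-and-forget, unlike a CSP rendezvous where sender and receiver must synchronize on every message.

```python
# A minimal actor sketch: asynchronous, mailbox-based message passing.
import queue
import threading

class Actor:
    def __init__(self, behavior):
        self.mailbox = queue.Queue()    # unbounded: sends never block
        self.behavior = behavior
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def send(self, msg):
        self.mailbox.put(msg)           # fire-and-forget, returns at once

    def stop(self):
        self.mailbox.put(None)          # shutdown sentinel
        self._thread.join()             # wait for the mailbox to drain

    def _loop(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:
                break
            self.behavior(msg)

results = []
logger = Actor(results.append)
for i in range(3):
    logger.send(i)                      # sender never waits on the receiver
logger.stop()
print(results)  # → [0, 1, 2]
```

In a CSP-style channel the three `send` calls would each block until the receiver was ready to take the message; here the sender races ahead and the mailbox absorbs the difference, which is the "less gear-like" coupling being described.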
lliamander
What I got from it:
- model time explicitly
- build computer systems to work at a galactic scale
- know that reality is distinct from your mental models ("the map is not the territory") and you will see the present more clearly

The essence of comedy, science, and art involves changing the models through which we view the world.
dkarapetyan
Great stuff as usual: inverse vandalism, confusing means to an end with an end, system programmers are high priests of a low cult, if you're not failing 90% of the time then you're not trying hard enough, science is a set of heuristics to get around buggy brains, etc.
pierre_d528
How ironic.

All these are "slogans" aimed at lighting up broken minds.

And "we" resonate with them, i.e. we really do have broken minds.

Taking this talk seriously means getting together and working hard at making computer science a... science. Finding problems worth solving, not solving problems we can solve (or worse, problems we do not/should not have).

How hard is it to do that with broken, insulting, narrow-minded (autistic to some degree) and violent minds (think of high priests of low cults)?

The fact that these ideas did not take off (or took off wrongly: Java, a "modern" GUI that initially targeted 8-year-old children, an iPad that is a lying Dynabook) makes you wonder: despite the obvious and useful function that computation could satisfy, i.e. infusing the most powerful ideas into young children and advancing those ideas, what in the world is holding us back? Can we break loose?

dkarapetyan
Well, that is the default state of human affairs in general. Even Kay mentions that he was in a funk, and that hearing these great ideas from others and putting them into slogans is what helped him repair part of his brain damage. Inventing the future is hard work; not everyone is cut out for it.

I don't think having a negative spin on having a buggy brain is the constructive approach. Avoiding the pop science is really the key point Alan Kay keeps going back to. Learn from the elders; there is much wisdom in the history of computing.

pierre_d528
> "I don't think having a negative spin on having a buggy brain is the constructive approach."

I disagree. Having a "negative spin" is precisely what we need. We mostly do not even understand that we are stuck in a "Pink plane" (see Alan Kay videos for Pink|Blue plane definitions). Result: stagnation with more or less identical "paradigms" (see [1]) for more than 50 years (FORTRAN?).

Computing can do much better than Facebook or Google... where is my Dynabook?

[1]: https://mitpress.mit.edu/books/concepts-techniques-and-model...

sgt
Agreed. Just finished watching this. It's definitely worth spending an hour of your morning on listening to what these two giants (mostly Kay) have to say.
pjmlp
I love his work, and the ones that worked with him.

After learning Smalltalk, Oberon and Modula-3, I started a journey into the world of Xerox PARC and its influence on the ETHZ and DEC research labs, and became amazed at how computing could already have looked by the mid-90s.

To the point that using UNIX stopped being fun; I wanted one of those environments, not a PDP-11 replica.

elviejo
Have you used Pharo Smalltalk?
pjmlp
Not really, just installed it a few times to play around with it.

I actually used Smalltalk before Java was announced to the world.

We were using Smalltalk/V and VisualWorks for some university classes.

Luckily our university library had all the Smalltalk canonical books available and I spent quite a few long nights reading them.

Nov 23, 2016 · 2 points, 0 comments · submitted by safij
Nov 22, 2016 · 5 points, 0 comments · submitted by mpweiher
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.