HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Alan Kay's tribute to Ted Nelson at "Intertwingled" Festival

TheTedNelson · Youtube · 103 HN points · 19 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention TheTedNelson's video "Alan Kay's tribute to Ted Nelson at "Intertwingled" Festival".
Youtube Summary
It was late arriving on disk,
but better late than never :)
Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Well, something like the Smalltalk-78 that Alan Kay demos here (https://youtu.be/AnrlSqtpOkw?t=242) would be an ideal starting interface for an e-ink based tablet.
Jul 12, 2021 · 1 point, 0 comments · submitted by katzeilla
Jun 21, 2021 · 2 points, 0 comments · submitted by sebastianconcpt
Jun 11, 2021 · pjmlp on Try APL
> Thanks for sharing, I love these types of stories. Really makes me pine for the "old" days, and wonder if there's a parallel universe where technology took a very different route such that languages like APL, Lisp, and Smalltalk are used instead of JavaScript, Java, and C#, and what that world looks like.

Easy, here is some time travel,

"Alan Kay's tribute to Ted Nelson at "Intertwingled" Festival"

https://www.youtube.com/watch?v=AnrlSqtpOkw

"Eric Bier Demonstrates Cedar"

https://www.youtube.com/watch?v=z_dt7NG38V4

"Yesterday's Computer of Tomorrow: The Xerox Alto │Smalltalk-76 Demo"

https://www.youtube.com/watch?v=NqKyHEJe9_w

However, to be fair, Java, C#, and Swift alongside their IDEs are the closest among mainstream languages to that experience, unless you get to use Allegro, LispWorks, or Pharo.

Qem
Nice links. For a taste of what programming in Pharo is like, see https://avdi.codes/in-which-i-make-you-hate-ruby-in-7-minute...
Jun 09, 2021 · ryukafalz on Building the First GUIs
Personally I think GUIs took a wrong step somewhere between the Alto and the first Macs. Specifically because we've lost this ability: https://youtu.be/AnrlSqtpOkw?t=605

People say that CLIs are composable and GUIs aren't, and so CLIs are better for power users. They're right, for today's GUIs, but yesterday's GUIs were composable. We've lost that.

amatecha
Uhhhh... that demo is blowing my mind. Holy crap.
ryukafalz
It's one of my absolute favorites. And don't forget, this predates the Macintosh.

Imagine where we could be today if this was our starting point in the 70s.

pjmlp
We also lost this, https://www.youtube.com/watch?v=z_dt7NG38V4
"In Imagine a world without apps Shira Ovide asks “a wild question: What if we played games, shopped, watched Netflix and read news on our smartphones — without using apps? Our smartphones, like our computers, would instead mostly be gateways to go online through a web browser.” This question can be extrapolated into a larger question: “What do we want from our technology?""

To quote Bret Victor, "Toolkits, not Apps" : https://twitter.com/worrydream/status/881021457593057280

https://www.youtube.com/watch?v=AnrlSqtpOkw&t=4m19s

Jumping straight into talk about providers, personal data, security, etc. assumes cultural scaffolding of 'personal' around the device. That's fine, but the assumption needs to be made explicit.

Ted Nelson: Computer Lib / Dream Machines (1975) [pdf] (worrydream.com)

https://news.ycombinator.com/item?id=19249556

http://worrydream.com/refs/Nelson-ComputerLibDreamMachines19...

For what it's worth, YC is helping Ted Nelson sell his "Computer Lib / Dream Machines" book:

https://twitter.com/nolimits/status/1087770718878687232

This book is truly unique and worth owning in hardcopy.

https://news.ycombinator.com/item?id=19058137

Ted versus The Media Lab [video] (youtube.com)

https://news.ycombinator.com/item?id=22169775

https://www.youtube.com/watch?v=qH4Kr3Gsadc

Interview with Ted Nelson (notion.so)

https://news.ycombinator.com/item?id=19057331

https://www.notion.so/tools-and-craft/03-ted-nelson

Ted Nelson on What Modern Programmers Can Learn from the Past [video] (ieee.org)

https://news.ycombinator.com/item?id=16222520

https://spectrum.ieee.org/video/geek-life/profiles/ted-nelso...

Ted Nelson struggles with uncomprehending radio interviewer (1979) [audio] (youtube.com)

https://news.ycombinator.com/item?id=17376753

https://www.youtube.com/watch?v=RVU62CQTXFI

Ted Nelson’s published papers on computers and interaction, 1965 to 1977 (archive.org)

https://news.ycombinator.com/item?id=16245697

https://archive.org/details/SelectedPapers1977

Ask HN: What is the best resource for understanding Ted Nelson's ZigZag?

https://news.ycombinator.com/item?id=22518401

http://www.xanadu.com.au/ted/XUsurvey/xuDation.html

http://mimix.io/getting-to-xanadu

Alan Kay's tribute to Ted Nelson at "Intertwingled" Fest (how the script of Tron was the first movie script to ever be edited by a word processing program, on the Alto computer)

https://www.youtube.com/watch?v=AnrlSqtpOkw

"Silicon Valley Story" — a Very Short Romantic Comedy by Ted Nelson

A playful story about the microcircuitry of love, with Ted Nelson as an absentminded genius, featuring Doug Engelbart as Ted's father and Stewart Brand as the villainous CEO.

Closing song: "Information Flow", sung by Donna Spitzer and the auteur.

With Timothy Leary as the Good Venture Capitalist!

https://www.youtube.com/watch?v=AXlyMrv8_dQ

Ted Nelson's Channel

https://www.youtube.com/channel/UCr_DXJ7ZUAJO_d8CnHYTDMQ

lioeters
Oh, joy - the play "Silicon Valley Story" by Ted Nelson is so funny, weird, and self-consciously awkward in the best theatrical sense. Brilliant. I had never seen that before.

Saved the whole list in study/ted-nelson.txt. Thank you for gathering the links and sharing. (I'm a long-time fan of your work!)

A great visual example of messaging / sharing between two apps (tools, rather) is in "Alan Kay's tribute to Ted Nelson at "Intertwingled" fest": https://youtu.be/AnrlSqtpOkw?t=607
Yes, this siloing likewise stops me from using the annotation features "native" to each app and format.

Instead, I prefer to exfiltrate information from the silos (apps, formats, etc.) and put it into my note-taking system. Then I can do highlights, annotations, etc. on my own terms and also get the benefits of centralization such as searching and linking (the OP has another post describing their own system, which is pretty cool [1]). Currently I'm using Notion, which is also a silo of its own, but it's one that gives me a lot of control over how I lay my information out (and one I have an escape plan out of).

There are a lot of perspectives on this issue with data silos and walled gardens. But I'm of the opinion that it's a fairly bad state for all of us "end users". Computers to me are about infinite flexibility and malleability, but ironically the tools we have for annotation and remixing are in practice worse than what we have in the physical world. Reading a book in the physical world, I can converse with the author simply by jotting marginalia with my pencil. It's fluid, intuitive, and the medium of paper encourages it (in fact it can't help but be mutated by my use: pages get bent, stained, torn, etc.). If I want to go further I can add post-it notes to mark interesting passages, I can xerox some pages and create subsections, and if it's a magazine I can just tear the pages out! That kind of flexibility just isn't available on a computer.

I think it's worth thinking really hard about why we're in this state, especially since computing pioneers were actually very optimistic that data and computing would be way more personally malleable than they are now (I've been working on a small comic on this theme myself [2]). For example, check out this short demo [3] of Smalltalk where Alan Kay hooks up a single frame from an animation of a bouncing ball to a painting program, to modify that one frame while also monitoring the loop. Smarter than paper, and way more flexible.

My own thinking lately has been that developers need to think more about how their apps, like your PDF viewer, could cooperate with other apps to achieve our goals. All sorts of deep questions spring forth from here: "what is the best inter-communication system for them to cooperate with?", "how do you design them to be intuitive?", "how can you make the UX as good as 'packaged apps'?". And looking at the history of personal computing, these are fairly old questions. The Unix Philosophy provides us with some clues, and its success, even in the smaller world of developer-oriented computing, gives us some hope.

Personally, I'm excited to one day live in a world where my desktop and smartphone and other devices -my computing spaces- feel less like a collection of walled gardens that refuse to intermingle, and more like one big beautiful garden, an ecosystem: lots of small, useful programs, chatting and cooperating, data freely flowing between them, each new program multiplying their collective potential and creating a new ecology, that I can adapt to my psyche and my needs, helping me be a better human.

[1] https://beepb00p.xyz/pkm-search.html

[2] https://twitter.com/yoshikischmitz/status/118845556004515840...

[3] https://youtu.be/AnrlSqtpOkw?t=607

jiriro
Can I do similar magic as shown in the demo [0] today?

I mean .. is there a system/tool/platform which allows that kind of interaction?

I know about Smalltalk instances like Squeak[1], Etoys[2], Pharo[3] .. Are these capable of what is shown in the demo?

[0] https://youtu.be/AnrlSqtpOkw?t=607

[1] https://squeak.org

[2] http://www.squeakland.org

[3] https://pharo.org

grblovrflowerrr
I'm not familiar enough with any of those systems to definitively say whether that demo could be reproduced in them (though I'm very interested in getting acquainted with them). My impression is that all of those environments are quite powerful in different ways, though.

Some other systems worth learning more about:

- https://github.com/kenperlin/chalktalk
- https://dynamicland.org/

karlicoss
Hey, author here! That's exciting, you basically mirror my thoughts here :)

I agree with what you're saying about silos; it's especially sad considering that having all this stuff unified and interacting is not some sort of mad science fiction, it's totally possible with the technology that we have. It's just tedious for various reasons (one of which is that demand from users isn't high in the first place).

I'm working on a browser extension that unifies annotations and highlights from different sources like Pocket, Instapaper, Hypothesis, or even plaintext notes: https://github.com/karlicoss/promnesia . I've been using it for more than a year and hope to release it soon (a few things are specific to my setup, so I need to make them simpler/clearer for other people to use).
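For a sense of the general idea (a hedged illustration only, not promnesia's actual code or API; all names here are invented), unifying highlights from several silos mostly amounts to normalizing them and keying them by URL:

```python
# Illustrative only: a generic merge of highlights from different silos,
# keyed by URL. Highlight and merge_by_url are made-up names.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Highlight:
    url: str
    text: str
    source: str  # e.g. "pocket", "instapaper", "plaintext notes"

def merge_by_url(*feeds):
    """Collect highlights from every source under the page they annotate."""
    merged = defaultdict(list)
    for feed in feeds:
        for h in feed:
            merged[h.url].append(h)
    return merged

pocket = [Highlight("https://example.com/a", "great point", "pocket")]
notes = [Highlight("https://example.com/a", "see also chapter 3", "plaintext notes")]

for url, items in merge_by_url(pocket, notes).items():
    print(url, [(h.source, h.text) for h in items])
```

The hard part in practice is the per-source extraction and keeping it in sync, which is where most of the tedium mentioned above lives.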

grblovrflowerrr
Hey that tool looks awesome, I'm excited to see where it goes!
It's outside the context of this thread, but we were quite sure that "simulation-style" systems design would be a much more powerful and comprehensive way to create most things on a computer, and most especially for personal computers.

At Parc, I think we were able to make our point. Around 2014 or so we brought back to life the NoteTaker Smalltalk from 1978, and I used it to make my visual material for a tribute to Ted Nelson. See what you think. https://www.youtube.com/watch?v=AnrlSqtpOkw&t=135s

This system -- including everything: "OS", SDK, Media, GUI, Tools, and the content -- is about 10,000 lines of Smalltalk-78 code sitting on top of about 6K bytes of machine code (the latter was emulated to get the whole system going).

I think what happened is that the early styles of programming, especially "data structures, procedures, imperative munging, etc." were clung to, in part because this was what was taught, and the more design-intensive but also more compact styles developed at Parc seemed very foreign. So when C++, Java, etc. came along the old styles were retained, and classes were relegated to creating abstract data types with getters and setters that could be munged from the outside.

Note that this is also "simulation style programming" but simulating data structures is a very weak approach to design for power and scaling.

I think the idea that all entities could be protected processes (and protected in both directions) that could be used as communicating modules for building systems got both missed and rejected.

Of course, much more can and should be done today more than 40 years after Parc. Massive scaling of every kind of resource requires even stronger systems designs, especially with regard to how resources can be found and offered.

Apr 21, 2019 · dang on Project Xanadu
Alan Kay on Ted Nelson: https://www.youtube.com/watch?v=AnrlSqtpOkw

Steve Wozniak on Ted Nelson: https://www.youtube.com/watch?v=gl0Wfs70rV4

Having everything in the same space, and being able to inspect and modify anything in real time by the seat of your pants, was exactly how Dan Ingalls impressed Steve Jobs on his fateful visit to Xerox PARC! It's unfortunate the iPad never did (and never will) achieve that level of power and flexibility.

https://www.quora.com/What-was-it-like-to-be-at-Xerox-PARC-w...

>Q: What was it like to be at Xerox PARC when Steve Jobs visited?

>A: Alan Kay, Agitator at Viewpoints Research Institute

>[...] The demo itself was fun to watch — basically a tag team of Dan Ingalls and Larry Tesler showing many kinds of things to Steve and the several Apple people he brought with him. One of Steve’s ways to feel in control was to object to things that were actually OK, and he did this a few times — but in each case Dan and Larry were able to make the changes to meet the objections on the fly because Smalltalk was not only the most advanced programming language of its time, it was also live at every level, and no change required more than 1/4 second to take effect.

>One objection was that the text scrolling was line by line and Steve said “Can’t this be smooth?”. In a few seconds Dan made the change. Another more interesting objection was to the complementation of the text that was used (as today) to indicate a selection. Steve said “Can’t that be an outline?”. Standing in the back of the room, I held my breath a bit (this seemed hard to fix on the fly). But again, Dan Ingalls instantly saw a very clever way to do this (by selecting the text as usual, then doing this again with the selection displaced by a few pixels — this left a dark outline around the selection and made the interior clear). Again this was done in a few seconds, and voila!

>The Smalltalk used in this demo was my personal favorite (-78) that was done for the first portable computer (The Parc Notetaker), but also ran on the more powerful Dorado computer. For a fun “Christmas project” in 2014, several of us (with Dan Ingalls and Bert Freudenberg doing the heavy lifting) got a version of this going (it had been saved from a disk pack that Xerox had thrown away).

>I was able to use this rescued version to make all the visuals for a tribute to Ted Nelson without any new capabilities required. The main difference in the tribute is that the revived version had much more RAM to work with, and this allowed more bit-map images to be used. This is on YouTube, and it might be interesting for readers to see what this system could do in 1978–79.

Alan Kay's tribute to Ted Nelson at "Intertwingled" Fest

https://youtu.be/AnrlSqtpOkw?t=142

Oct 19, 2018 · seltzered_ on Systems, not Programs
“The real goal is to design the form and nature of the software ‘entities’ — the ‘programmable substrate’ that we manipulate and compose when we program and use the system”

Reminded me of Bret Victor's “toolkits, not apps” comment ( https://mobile.twitter.com/worrydream/status/881021457593057... ) and an excerpt from the Smalltalk era: https://www.youtube.com/watch?v=AnrlSqtpOkw&t=4m19s

shalabhc
That ST video is great.

Yes, apps are silos - why? Making non-siloed apps is harder, and integration across them is harder still. Is it possible to design the underlying system/substrate such that siloed apps are not the structures that grow most easily, and integration is not something that has to be 'added on', but something that emerges automatically?

Jtsummers
Have you examined BeOS, really the BeFS portion, in the past?

It offered some of these kinds of things. A contact card could be a file system object with metadata, and you could query the file system not just for them but for all contact cards that had a phone number or AIM handle in the metadata. So if applications like your mail client and chat clients are aware of this they can use one common data store and just pull info from it in a similar manner. This breaks down the silo between applications that might use the same data and/or files.
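To make that concrete, here is a minimal Python sketch of the idea (invented names like MetadataStore; this is not the real BeFS API): any application queries one shared store by attributes instead of keeping its own address book.

```python
# Hypothetical sketch of BeFS-style queryable file metadata.
# MetadataStore, add, and query are illustrative names, not a real API.

class MetadataStore:
    """A flat store of 'files', each carrying arbitrary key/value attributes."""

    def __init__(self):
        self.entries = []

    def add(self, path, **attrs):
        self.entries.append({"path": path, **attrs})

    def query(self, **conditions):
        # Return every entry whose attributes match all given conditions,
        # the way a BeFS query matches attributes across the volume.
        return [e for e in self.entries
                if all(e.get(k) == v for k, v in conditions.items())]


store = MetadataStore()
store.add("/people/alice", kind="contact", phone="555-0100", aim="alice99")
store.add("/people/bob", kind="contact", phone=None, aim="bob_im")

# A mail client and a chat client could both issue queries like this
# against the same store instead of keeping private address books.
contacts_with_phone = [e for e in store.query(kind="contact") if e["phone"]]
print([e["path"] for e in contacts_with_phone])  # ['/people/alice']
```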

shalabhc
I'm aware of extended attributes and it definitely seems like a step above the completely opaque byte arrays that are the status quo. I suppose we could call it db as a file system.

Another interesting idea is the data types in Amiga OS (http://www.mfischer.com/legacy/amosaic-paper/datatypes.html). I believe it lets applications be generic enough so that as new media formats are added, the applications automatically work with them, without rebuilding or restarting.
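A rough sketch of that datatypes idea (hypothetical names, not the actual AmigaOS datatypes API): applications call one generic loader, and supporting a new format is just a matter of registering one new decoder system-wide.

```python
# Illustrative datatype registry, loosely inspired by AmigaOS datatypes.
# register_decoder and load_image are made-up names.

decoders = {}

def register_decoder(extension, fn):
    """Install a system-wide decoder for one file extension."""
    decoders[extension] = fn

def load_image(path):
    """Any application calls this; it never knows about concrete formats."""
    ext = path.rsplit(".", 1)[-1].lower()
    if ext not in decoders:
        raise ValueError(f"no datatype installed for .{ext}")
    return decoders[ext](path)

# Shipping a new format later is just another register_decoder call;
# existing viewers pick it up without being rebuilt or restarted.
register_decoder("pbm", lambda p: f"decoded {p} as portable bitmap")
print(load_image("photo.pbm"))
```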

Lisp machines were mentioned in another thread here. They also had a better story around sharing data - you shared rich structures (rather than blobs of bytes that are encoded and decoded at every boundary) - and they would update live, etc.

In the end it seems today we're entrenched in something that's reasonable, but many good ideas could be explored more.

To get another taste of the kinds of systems possible in the 70s, in spite of the constraints, see Kay's demo of a revived Smalltalk-78 system: https://youtu.be/AnrlSqtpOkw?t=3m3s (I believe this system is about 10-20K LOC software in total)

Notes about the revival are here: https://freudenbergs.de/bert/publications/Ingalls-2014-Small...

Here's alankay1 ( https://news.ycombinator.com/user?id=alankay1 ) presenting the Alto OS as part of his tribute to Ted Nelson:

https://www.youtube.com/watch?v=AnrlSqtpOkw

pjmlp
Great video thanks for sharing it.
#Softwarish - I'm biased a bit more towards interface development:

greg wilson - What We Actually Know About Software Development, and Why We Believe It’s True - https://vimeo.com/9270320#t=3450s

steve wittens - making webgl dance - the title is deceptive, it's in some ways a visual crash course in linear algebra - https://www.youtube.com/watch?v=GNO_CYUjMK8&t=84s

glenn vanderburg - software engineering doesn't work - https://www.youtube.com/watch?v=NCns726nBhQ

chris granger - in search of tomorrow - https://www.youtube.com/watch?v=VZQoAKJPbh8

alan kay - tribute to ted nelson at intertwingled fest - https://www.youtube.com/watch?v=AnrlSqtpOkw

bret victor - this is already mentioned but if I had to pick one it'd be 'the humane representation of thought' - https://vimeo.com/115154289

#Hardwarish:

saul griffith - soft, not solid: beyond traditional hardware engineering - https://www.youtube.com/watch?v=gyMowPAJwqo

deb chachra - Architectural Biology and Biological Architectures - https://vimeo.com/232544872

#Getting more meta in technology and history:

James Burke - Connections

Oct 29, 2017 · 2 points, 0 comments · submitted by rbanffy
Oct 28, 2017 · 1 point, 0 comments · submitted by jot
An easy way to try ST-72 is in your browser here: https://lively-web.org/users/Dan/ALTO-Smalltalk-72.html and ST-78 here: https://lively-web.org/users/bert/Smalltalk-78.html

The above links run emulators in Javascript and have been used for live demos as well (see https://youtu.be/AnrlSqtpOkw?t=2m29s for a fun one)

Related, for a Javascript based live system check out https://www.lively-kernel.org/ (also created by Dan Ingalls).

robertkrahn01
Thanks for posting the links.

The Lively project evolved over time:

- https://www.lively-kernel.org is from the Sun Labs / HPI days (check out the ancient http://sunlabs-kernel.lively-web.org, fully SVG based rendering :D)

- Lively Web: https://lively-web.org A live, programmable wiki (2012-2015)

- Since 2016 we have been working on lively.next: https://lively-next.org. lively.next will focus more on the "personal environment" aspect.

shalabhc
Ah, thanks for explaining that. I was always a little confused about the relationship between these projects.
detaro
There also seems to be https://github.com/LivelyKernel/lively4-core
Oct 20, 2017 · 2 points, 0 comments · submitted by gjvc
OP here. Definitely some improvements... but (at least in my eyes) incremental.

Notebooks are pretty sweet, but in the hands of very few. They're definitely growing in popularity, which is great.

But even the "notebook" concept is a very limited "low pass filter" version of the visions of Engelbart, AKay, TN, etc.

The video that AKay has been sharing around in the post that @dang linked to [0] is a great example of what's not-even-close to possible on today's systems.

[0]https://www.youtube.com/watch?v=AnrlSqtpOkw&feature=youtu.be...

devereaux
(sorry, in my first sentence I wanted to write "very few" - I gave the example of the Wacom to show that touchscreens were nothing new, and in fact quite incremental changes)

I agree the notebook concept is still quite limited. I just see it as a step in the right direction.

And great video BTW, thanks a lot for the link!

Sep 17, 2017 · 3 points, 0 comments · submitted by tosh
People often ask me "Is this a Dynabook, is that a Dynabook?". Only about 5% of the idea was in packaging (and there were 2 other different packages contemplated in 1968 besides the tablet idea -- the HMD idea from Ivan, and the ubiquitous idea from Nicholas).

Almost all the thought I did was based on what had already been done in the ARPA community -- rendered as "services" -- and resculpted for the world of the child. It was all quite straightforwardly about what Steve later called "Wheels for the Mind".

If people are interested to see part of what we had in mind, a few of us including Dan Ingalls a few years ago revived a version of the Xerox Parc software from 1978 that Xerox had thrown away (it was rescued from a trash heap). This system was the vintage that Steve Jobs saw the next year when he visited in 1979. I used this system to make a presentation for a Ted Nelson tribute. It should start at 2:15. See what you think about what happens around 9:05. https://youtu.be/AnrlSqtpOkw?t=135

Next year will be the 50th anniversary of this idea, and many things have happened since then, so it would be crazy to hark back to a set of ideas that were conceived in the context of being buildable over 10 years, and that it would have been ridiculous not to have within 30 (that would be 1998, almost 20 years ago).

The large idea of ARPA/PARC was that desirable futures for humanity will require many difficult things to be learned beyond reading and writing and a few sums. If "equal rights" is to mean something over the entire planet, this will be very difficult. If we are to be able to deal with the whole planet as a complex system of which we complex systems are parts, then we'll have to learn a lot of things that our genetics are not particularly well set up for.

We can't say "well, most people aren't interested in stuff like this" because we want them to be voting citizens as part of their rights, and this means that a large part of their education needs to be learning how to argue in ways that make progress rather than just trying to win. This will require considerable knowledge and context.

The people who do say "well, most people aren't interested in stuff like this" are missing the world that we are in, and putting convenience and money making ahead of progress and even survival. That was crazy 50 years ago, and should be even more apparently crazy now.

We are set up genetically to learn the environment/culture around us. If we have media that seems to our nervous systems as an environment, we will try to learn those ways of thinking and doing, and even our conception of reality.

We can't let the commercial lure of "legal drugs" in the form of media and other forms put us into a narcotic sleep when we need to be tending and building our gardens.

The good news about "media as environment" was what attracted a lot of us 50 years ago -- that is, that making great environments/cultures will also be readily learned by our nervous systems. That was one of Montessori's great ideas, and one of McLuhan's great ideas, and it's a great idea we need to understand.

There aren't any parents around to take care of childish adults. We are it. So we need to grow up and take responsibility for our world, and the first actions should be to try to understand our actual situations.

mmiller
"There aren't any parents around to take care of childish adults. We are it. So we need to grow up and take responsibility for our world, and the first actions should be to try to understand our actual situations."

I am seeing the expectation that there will be parents around to take care of childish adults, though this has really come into prominence in the last 10 years, and in the last 3 years in particular. For me, it's evoked notions of H. G. Wells's "Eloi." If that sentiment moves forward unchanged, we won't get "parents" in reality, of course, but some perverted in loco parentis in society. I've heard hope expressed in some quarters that reality will provide some needed blows from some 2x4's across the head, once the young venture out into the world, but I wonder whether sheer numbers will decide this; whether the young will choose to reorient our society, in an attempt to please themselves, rather than being influenced by its experience.

Re. "most people are not interested in this"

From what I've seen, this excuse came out of a combination of the technical side of the "two cultures," and the distraction of a lot of people becoming excited by some perceived new possibilities. More recently, my perspective has shifted to it coming out of a perverted notion of "self-esteem," that challenge is "harmful," because being contradicted creates a sense of limits, isolation or shame, or more materialistically, the fear of economic isolation, thereby reducing career prospects for something original.

What's emerged is a desire to affirm one's self-image as "good," regardless of notions of good works. This is where the "legal drugs" come in, reinforcing this. Neil Postman was right to fear this dynamic.

Diverting off of what I'm saying here (though staying on your topic), have you looked at William Easterly's critique of how foreign aid is conducted? I think it dovetails nicely with what you're talking about, here. The short of it is that most aid efforts to the undeveloped world offer some form of short-term relief, but they don't address at all the political and economic issues that cause the problems the aid is trying to address in the first place. Secondly, when he's tried to confront the aid organizations about this, there is no interest in pursuing these matters, a version of "most people aren't interested in stuff like this." Whether there's a sincere desire to solve problems, or just go through the motions to "help," to make it appear like something is happening (ie. putting on a kind of show of compassion for public relations, and satisfying certain political goals), I don't know. It seems like the latter.

Do you have any insight on what's causing the reticence to get into these matters? Easterly didn't seem to have answers for that, as I recall.

alankay1
Closer to home -- for example in the US public educational system -- we have prime examples of your (and Easterly's) point about "band-aids" vs. "health". After acknowledging how politics works, I think we can see other factors at work in those more genuinely interested in dealing with problems. Some of these are almost certainly (a) the idea that "doing something" is better than doing nothing (b) that "large things are harder than small things" (c) the lack of "systems consciousness" amongst most adults (d) pick a few more.

The "it's a start" reply, which is often heard when criticizing actions in education which will get nowhere (or worse, dig the hole deeper) is part of several fallacies about "making progress": the idea that "if we just iterate enough" we will get to the levels of improvement needed. Any biologist will point out that "Darwinian processes" don't optimize, they just find fits to the environment. So if the environment is weak you will get good fits that are weak.

A "being more tough" way to think about this is what I've called in talks "the MacCready Sweet Spot" -- it's the threshold above the "merely better" where something important is different. For example, consider reading scores. They can go up or down, but unless a kid gets over the threshold of "reading for meaning" rather than deciphering codes, none of the ups and downs below count. For a whole population, the US is generally under the needed threshold for reading, and that is the systemic problem that needs to be worked on (not raising the scores a few points).

To stay on this example, we find studies that show it is very hard to learn to read fluently after we've learned oral language fluently. Montessori homed in on this earlier than most, and it has since been confirmed more rigorously. And this is the case for many new things that we need to get fluent at and above threshold.

So at the systems level of thinking we should be putting enormous resources into reforming the elementary grades rather than trying to "fix" high schools.

And so forth. This is the logic behind building dams and levees and installing pumps and runoff paths before flooding. One recent study indicated that the costs of prevention are 20% of the costs of disaster.

We could add to (d) above the real difficulties humans have of imagining certain kinds of things: we have no trouble with imagining gods, demons, witches, etc. but can't get ourselves beforehand into the "go all out" state of mind we have during an actual disaster (where heroes show up from everywhere). The very same people mostly can't take action when there isn't a disaster right in front of them.

This is very human. But, as I've pointed out elsewhere, part of "civilization" is to learn how to "do better than human" for hard to learn things.

mmiller
Hmm. So, it sounds like the same "keyhole" problem I've seen you talk about before (you used an AIDS epidemic as an example with this). What's seen is taken as "good enough," because the small perspective seems large enough. If there are any frustrations or tragedies, they're taken as, "It goes with the territory. Just keep plugging away."

There's a parable I used to hear that I think plays right into this:

Two people are walking along a beach, and they see an enormous field of starfish stranded ashore, and one of them starts throwing them, one by one, back into the sea. The other is watching, and says, "What's the point? You're not going to be able to save all of them." The person doing the throwing holds up a starfish, and says, "I can save this one."

It's a nice thought, talking about good will and perseverance, and certainly the message shouldn't be, "Give up," but I think it nicely illustrates the "keyhole" problem, because ideas like this lead people to believe that because they can see people who need help (even if the number is more than they can handle), and they're trying to help those people in the moment, they're improving their lives in the long term. That may not be true.

I've seen you talk about the "MacCready Sweet Spot" in relation to the Apollo program. BTW, I first heard you talk about that in a web video from some congressional testimony you gave back in 1982, when Al Gore was Chairman of the Science and Technology Committee. When you said that the Apollo rockets were below threshold, not nearly good enough to advance space travel, and that the rockets were a kind of kludge, the camera was panning around the room, showing large posters of different NASA missions that had been hung up around the chamber. Gore said in jest, "The walls in this room are shaking!" I can imagine! When I first heard you say that, it struck me as so contrary to the emotional impact I had from understanding what was accomplished (I do think that landing on the Moon and returning safely to Earth was no mean feat, particularly when the U.S. couldn't get a rocket into space to save our lives 12 years earlier (I don't mean that literally)), but as I listened to you explain how the rocket was designed (450 ft., mostly high explosives, with room for only 3 astronauts, not to mention that the missions were for something like 9 days at a time. Three days to get there. Two, sometimes more days, on the surface, and then three days to get home), it occurred to me for the first time, "My gosh! He's right!" It really helped explain my disappointment at seeing us not get beyond low-Earth orbit for decades. For years, I thought it was just a lack of will.

I've explained to people that when I was growing up in the '80s, I had this expectation drummed into me (willingly), as many people in my generation did, that we would see interplanetary travel, probably within our lifetimes, and in several generations, interstellar travel. It was very disappointing to see the Space Shuttle program cancelled with seemingly nothing beyond it on the horizon, and I think more importantly, no goals for anything beyond it that have been compelling. I heard you explain in a more recent presentation that this was a natural outcome of Apollo, that it set in motion something that had its own inertia to play out, but the end result is no one has any enthusiasm for space travel anymore, because the expectations have been set so low. The message being, "Beware of large efforts below threshold." Indeed!

alankay
We are "story creatures" and it takes a lot of training and willpower to depart from "fond stories and beliefs" to "actually think things through".

That the moon shot was just a political gesture -- and also relevant to ICBMs etc -- was known to every scientist and most engineers who were willing to think about the problem for more than a few seconds.

We hoped that the -romance- of the shot would lead to the very different kinds of technologies needed for real space travel (basically it's about MV = mv, and if you don't want to have to carry (and move) a lot of M, you have to have very high V (beyond what chemical reactions can produce). If you have to have a large M you use most of it to move just it! This has been known for more than 100 years.
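Spelling out the momentum algebra behind "MV = mv" (standard rocket mechanics, not taken from the comment itself): conserving momentum for each bit of expelled propellant and integrating over the burn gives the Tsiolkovsky relation.

```latex
% Momentum balance per bit of expelled propellant, integrated over the burn:
m \,\mathrm{d}v = -v_e \,\mathrm{d}m
\quad\Longrightarrow\quad
\Delta v = v_e \ln\frac{m_0}{m_f}
```

With chemical exhaust velocities of only a few km/s, reaching a sizable Δv forces the mass ratio m_0/m_f up exponentially, which is exactly the "most of M is used just to move M" problem.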

But the real romance and its implications didn't happen in the general public and politicians.

mmiller
"We are 'story creatures' and it takes a lot of training and willpower to depart from 'fond stories and beliefs' to 'actually think things through'."

What your analysis did for me was help put two and two together, but yes, it "collided" with my notions of what an accomplishment it was, and what I had been led to believe that would lead to. What you exposed was that the reality of "what that would lead to" was quite different, and it explained the reality that was unfolding.

I knew that Apollo was a big rocket (ironically, that was one thing that impressed me about it, but I thought how amazing it was that such a thing could be constructed in the first place, and work. Though, I thought many years later about just what you said, that the more fuel you add, the more the fuel is just expending energy moving itself!), and that there were only three people on it, though the "efficiency" perspective, relating that to how it did not contribute to further knowledge for space travel, didn't occur to me until you laid it out. I also knew from listening to Reagan's science advisor that NASA was heavily influenced by the goals of military contractors that had done R&D on various technologies in the '60s, and which exerted political pressure to put them to use, to get return on investment. He said something to the effect of, "People worry about the Military Industrial Complex. Well, NASA IS the Military Industrial Complex! People don't think of it that way, but it is."

Not too long after I heard you talk about this, I happened to hear about a simulator called the Kerbal Space Program (commonly referred to as KSP), and someone posted a video of a "ludicrous single-launch vehicle to Mars (and back)" in it. Even though I think I've heard that KSP does not completely use realistic physics, it drove your point home fairly well. Though, people would point out that none of the proposed missions to Mars have talked about a single-launch vehicle from surface to surface. All of the proposals I've heard about have talked about constructing the vehicle in orbit. KSP, though, assumes chemical propellant.

https://youtu.be/mrjpELy1xzc

"the real romance and its implications didn't happen in the general public and politicians."

In hindsight, I've been struck by that. When I took the time to learn about the history of the Apollo program, I learned that Apollo 11 made a big impression on people all over the world, but that was really it. I think as far as the U.S. was concerned, people were probably more impressed that it met a political goal, JFK's bold proclamation that we would get men to the Moon and back, and that it was a historic first, but there was no sense of, "Great! Now what?" It was just, "Yay, we did it! Now onto other things." There's even been some speculation I've heard from politicos, who were in politics at the time, that we wouldn't have done any of the moon shots if Kennedy hadn't been assassinated, that it was sympathy for his legacy that drove the political will to follow through with it (if true, that's where the romance lay). Hardly anybody paid attention to it after 11, with the exception of Apollo 13, since there was the drama of a possible tragedy. Apollo 18 never got off the ground. The rocket was all set to go, but the program was scrubbed. People can look at the rocket, laid out in its segments sideways, at the Johnson Space Center in Houston.

alankay1
James Fletcher -- twice the head of NASA -- gave a very good speech arguing that "the moon shot, and etc." were really about learning to coordinate 300,000 people and billions of dollars to accomplish something big in a relatively short time. (And that the US should use these kinds of experiences (wars included, because the moon shot was part of the cold war) to pick "goals for good" and do these.)

Most of the old hands and historians of the moon shot point to the public in the 70s no longer being afraid of the Russians in the way they had been in the 50s, and the successful moon landings helped assuage their fears. The public in general was not interested in space travel, science, etc. and did not understand it or choose to understand it. I think this is still the case today.

mmiller
"Most of the old hands and historians of the moon shot point to the public in the 70s no longer being afraid of the Russians in the way they had been in the 50s, and the successful moon landings helped assuage their fears."

That's what I realized about 10 years ago. The primary political motivation for the space missions was to establish higher ground in a military strategic sense, and once that was accomplished, most people didn't care about it anymore. There was also an element of prestige to it, at least from Americans' perspective, that because we had reached a "higher" point in space than the Russians did, that gave us a sense of dominance over their extension of power.

You know this already, but people should keep in mind that what got the ball rolling was the launch of Sputnik in 1957. The message that most people got from that was that the Russians controlled higher ground, militarily, and that we needed to capture that pronto, or else we were going to be at a disadvantage in the nuclear arms race.

It also created a major push, as I understand, by the federal government to put more of an emphasis on math and science education, to seed the population of people who would be needed to pull that off. I've heard thinking that this created a generation of scientists and engineers who eventually came into industry, which created the technological products we eventually came to use. There's been a positive sense of that legacy from people who have reviewed it, but I've since heard from people who went through the "new math" that was taught through that push. They hated it with a passion, and said it turned them off to math for many years to come.

The more positive aspect I like to reflect on is that Sputnik inspired young people to become interested in math, science, and engineering on their own, and they really experienced those disciplines. A nice portrayal of one such person is in the movie "October Sky," based on Homer Hickam's autobiographical book, "Rocket Boys."

scroot
And the story we tell ourselves today is, frankly, a dismal one. It's that all of computing should be invented and put into the service of "the economy" rather than people. Instead of a culture of "computational literacy" in which human thought is extended to another level to the same effect as written literacy hundreds of years ago, we have an environment of complex technologies that cater to our most base evolutionary addictions and surveil us for profit.

Our universities are no longer institutions where people learn how to think, but rather where they learn how to "do" -- usually "doing" involves vocational practices that already exist, especially those that some manager (ie provost or dean) deems economically important. This is why you have generations of programmers bitching about type systems instead of the very politics, history, and social consequences of their own wares.

We don't have funding like ARPA/IPTO anymore and the devices and software of our world show it. Everything is some iteration on ideas that came from that period, good or bad – iterations whose goal is always "efficiency" in some form. Our current political culture prevents big initiatives like this, because how on Earth would they benefit the economy in the short or medium term, the limits of our new horizon?

Because these technologies have been created in service of an economic system that has proliferated social problems, they can never be a meaningful solution to those problems. Sure, we might invent some new systems for dealing with environmental catastrophe, but they are always predicated on the assumption that people should consume more and more. We are at the behest of billionaires – smart ones, mostly – who understand complex systems but also have an interest in ensuring that they remain complex.

It is unlikely that we will achieve a new kind of transcendent way of computing until we change the way we think about politics and economics. That is our environment. That is the "fit" that our technical systems have, as you say.

alankay1
Clear thoughts and summary!
mmiller
Great description of the problem (and great description of what we could have instead)! What came to mind as I read what you said here was a bit that I caught Neil deGrasse Tyson talking about from 8 years ago. As I heard him say this, I thought he was right on point, but I also felt sad that it's pretty obvious we're not thinking like this in computing. It turns out this is not just a problem in computing, but in science funding generally. That's what he was talking about, though he was quite polite about it:

https://youtu.be/UlHOAUIIuq0?t=22m30s

It strikes me that a very corrosive thought process in our society has been to politicize the notion of "how competitive we are" economically. Sure, that matters, but I see it more as a symptom than a cause of social problems. I hate seeing it brought up in discussions about education, because sure, competition is going to be a part of societal living, and in many educational environments, there's some aspect of competition to it (a story I heard from my grandfather from when he entered medical school was, "Look to your left. Look to your right. Only one of you will be graduating with a medical degree," because that was the intended ratio along the bell curve), but bringing economic competitiveness into education misses the point badly. I understand where the impulse to focus on that comes from, because globalization tends to produce a much more competitive economic landscape, where people feel much more uncertain about basic questions they have to answer. Part of which is creating the life they want, but often people end up missing a significant part of actually creating it (if it's even feasible. What I see more often is a compromise, because there are only so many hours in a day, and only so much effort can be put into it) in the process of trying to create it. They get caught up in "doing," as you said.

As I've thought back on the '60s, it seems like while there was still competition going on, the emphasis was on a political competition, internationally, not economic. There was a significant technological component to that, because of the Cold War/nuclear weapons. The creation of ARPA and NASA was an effect of that. My understanding is we underwent a reorientation in the 1980s, because it was realized that there was too little attention paid to the benefits that a relatively autonomous economy can produce, killing off bad ideas, where what's being offered doesn't match with what people need or want, and allowing better ones to replace them. That's definitely needed, but I'm in agreement with Kay that what education should be about is helping people understand what they need. Perhaps we could start by telling today's students that if and when they have children, what their children need is to understand the basic thought-inventions of our society in an environment where they're more likely to get that. Instead, what we've been doing is treating schools like glorified daycare centers. Undergraduate education has been turned into much the same thing.

scroot
> My understanding is we underwent a reorientation in the 1980s, because it was realized that there was too little attention paid to the benefits that a relatively autonomous economy can produce, killing off bad ideas, where what's being offered doesn't match with what people need or want, and allowing better ones to replace them.

There was a rightward swing in the late 1970s that took root in our political system, then commentariat, and then culture. It has never reverted. The term "neoliberalism" gets thrown around (usually by dweebs like me) but it's the precise term to use. Wendy Brown's recent book is probably the best overview of the topic in recent years.

The cultural shift that was unleashed in that period is so insidious that you don't even notice it half the time. Think about dating apps/sites where users talk about their romantic lives using terms like "R.O.I." Or people discussing ways to "optimize" their lives by making them more efficient. It's nuts.

Steve Jobs' old "bicycle of the mind" chestnut is, in a way, emblematic of this way of thinking. He was talking about how the most "efficient" animal was a human with a bicycle. He wanted human thinking to be "more efficient." If you listen to Kay, on the other hand, he's talking about something entirely different. The transcendent effect of literacy on mankind created the very possibility of civilization, for good or ill. Computing as an aid to thinking in the way the written word was previously could take this to the next, higher stage – one we cannot really describe or talk about because we don't even have the language to do so.

But short term thinking, shareholder value, and the need for economic growth – these are and have been the pillars of our politics and culture for several decades now. No one says who that growth benefits, of course, which is why it's no coincidence that the maw of inequality has opened ever wider during the same period. If you're wondering where all the "good ideas" are, well, we don't have time for good ideas. We only have time for profitable ones, or at least ones that can be sold after a high valuation.

The culture also trickled into the university, and then to funding (not just science funding, but funding for most fields. We need more than science to do new science). I have been on the bitch end of writing NSF grants for pretty ambitious projects, and the requirements are straight out of Kafka. They want you to demonstrate that you'll be able to do the things you're saying you'll hope to be able to do. That's not how it used to work. But the angle is always the same: they want something "innovative" that can be useful as immediately as possible. Useful for the economy, that is. They don't understand this undeniable fact: if you want amazing developments, you have to let passionate and smart people screw around and you have to pay them for it. The university used to be the place to screw around with ideas and methods. Now it's career prep.

> I understand where the impulse to focus on that comes from, because globalization tends to produce a much more competitive economic landscape, where people feel much more uncertain about basic questions they have to answer

This kind of globalization is a choice, one made by powerful people with explicit interests. It was not inevitable. Right now I live in the wealthiest country that has ever existed on the planet. And right now many of its citizens are calling their elected leaders to beg them not to take away the sliver of health care that they have left. We serve the economy and not each other. When there's a big decision to make, our leaders wonder "how the market will react," rather than how people will be affected.

Last point: the idea of this thing called "the economy" as an object of policy is relatively new. Timothy Mitchell has an amazing chapter on it in his book Carbon Democracy. The 20th century was one where we allowed the field of economics to cannibalize all others. The 21st has not taken the chance to escape this.

mmiller
What came to mind when I read your comments were some complaints I've had that relate to the "looking for the keys under the streetlight" fallacy. There are intuitions and anecdotes we can have about the unknowns, which is the best we can do about many things in the present, until they can be measured and tested. A problem I see often is there are people who believe that if it can't be measured, it's not part of reality. I find that the unknowns can be a very important part of working with reality successfully, and that what can be measured in the present can end up being not that significant. It depends on what you're looking for.

As Kay and I have discussed here, efficiency is not irrelevant, but we agree it's not the only significant factor in a system that we all hope will produce the wealth needed for societal progress. What seems to be needed is some knowledge and ethics re. the wealth of society, ideally enacted voluntarily, as in the philanthropic efforts of Carnegie, and similar efforts.

I happened to watch a bit of Ken Burns's doc. on the Vietnam War, and I was reminded that McNamara was a man of metrics. He wanted data on anything and everything that was happening to our forces, and that of the Viet Cong. He got reams of it, but there were people who asked, "Are we winning the hearts and minds?" There were no metrics on that. We didn't have a way to measure it, so the question was considered irrelevant. The best that could've been done was to get honest opinions from commanders in the field, who understood the war they were fighting, and were interacting with the civilian population, if people were willing to listen to that. In a guerrilla war, which is what that was, "hearts and minds" was one of the most important factors. Most of the rest could've been noise.

I dovetail with your complaint about focus on the economy in policy, but for me, it's philosophical: It's not the government's job to be worrying about that so much. If you look at the Constitution, it doesn't say a thing about "shall maintain a prosperous economy," or, "shall ensure an equitable economy," or any of that. Sure, people want enough wealth to go around, but it's up to us to negotiate how that happens, not the government. I think unfortunately, politicians and voters, no matter their political stripe, have lost track of what the government's job is. I think, broadly, we treat it like an insurer, or banker of last resort. If things don't seem to work out the way we'd like, we appeal to government to magically make us whole (including economically). That's really missing the point of it.

I could go into a whole thing about the medical system (I won't), but I'll say from the research I've done on it (which probably is not the best, but I made an effort of it), it is one of the most tragic things I've seen, because it is grossly distorted from what it could be, but this is because we're not respecting its function. As you've surmised with globalization, it's been set up this way by some interested people. It's a choice. I see a big knowledge problem with what's been done to it for decades. Doesn't it figure that people interested in healing people should be figuring out how to do it, to serve the most people who need their help, rather than people who have no idea how to do that thinking they should tell them how to do it? This relates back to your proposition about scientific research. Shouldn't research be left to people who know how to do that, rather than people who don't trying to micromanage how you do it? I think we'd be better off if people had a sense of understanding the limits of their own knowledge. I don't know what it is that has people thinking otherwise. The best term I can come up for it is "hubris." Perhaps the more accurate diagnosis, as Kay was saying, is fear. It makes sense that that can cause people to put their nose in deeper than where it should be, but it's like a horde panicking around someone who's collapsed from cardiac arrest, which doesn't have the good sense to give someone who knows CPR some room, and then to allow medical personnel in, once they show up.

It's looked to me like a feedback loop, and I shudder to think about where it will end up, but I feel pretty powerless to stop the process at the moment. I made some efforts in that direction, only to discover I have no idea what I'm really dealing with. So, with some regret, I've followed Sagan's advice ("Don't waste neurons on what doesn't work."), left it alone, and directed my energy into areas I love, where I hope to make a meaningful contribution someday. The experience of the former has given me an interest in listening to scientists who have studied people, what they're really like. It seems like something I need to get past is what Jon Haidt has called the "rationalist delusion," particularly the idea that rational thought alone can change minds. Not so.

alankay1
We should get "Fast Company" to interview you -- you'd do a better job! (Actually I think I did do a better job than their editing wound up with.)

Your comments and criticism of the NSF are dead-on (and are the reason I gave up on NSF a few years ago -- and I was on several of the Advisory Committees and could not convince the Foundation to be tougher about its funding autonomy -- very tricky for them, admittedly, because of the way it is organized and threaded through both Congress and OMB).

One way to look at it is there is a sense of desperation that has grown larger and larger, and which manifests both in the powerless and the powered.

pepijndevos
It seems to me the Raspberry Pi people have put out a lot of good work making hard things possible and transforming education.

The video you posted reminded me about some of the work of Bret Victor, especially his interactive environments in his video on "inventing on principle". Although what's missing there is the ability to connect and modify the environment itself.

I still have to think a bit more about your link to Montessori, who has been a great inspiration for the school ( http://aventurijn.org ) that my parents started. Also in relation to what you said about teaching real math. Montessori has this system with beads and other countable cubes and pies to teach things like multiplying fractions, that is not used in most schools that call themselves "Montessori schools".

alankay1
Bret is a great thinker and designer
shalabhc
> See what you think about what happens around 9:05

I saw two interesting things around 9:05. A 13-year-old made an 'active essay' on the computer which contains not just text but also a dynamic interactive environment, so the reader can follow along and even try out new ideas. This type of media is not prevalent today - essays written by 13-year-olds today would be in Google/Word docs and contain only static text and static pictures (i.e. digitized paper), with no interactivity. There are ways to do interactivity today, of course, but they are not easy and not the default. Is this what you are getting at?

The other interesting thing is how two tools - the drawing tool and animation tool - are made to work together, even when they were not created with each other in mind. IIUC the image is not a file format here but an object, but don't both tools then need to work with the same image protocol? I suppose you can always have adapters to connect different image protocols, but it doesn't seem like the best option. Still thinking about how much (or how little) shared knowledge is needed to make this scale to all types of objects.

alankay1
My reason for drawing your attention to this section of the talk is to show that some of the ideas (now 40-50 years old) were about "dynamic media" - of course live computing should be part of the combined media experience on a personal computer -- and of course you should be able to do what are now called "mash ups", but to be able to combine useful things easily and at will (it's crazy that this e.g. isn't even provided for maps in a general way on smartphones, tablets and PCs).

But the larger point here is that if one is dealing with dynamic objects as originally intended, the objects can help greatly and safely in coordinating them. This shouldn't be more difficult than what we do in combining ordinary materials in our physical world (it should be even simpler!).

In the system used for the demo -- Smalltalk-78 -- everything in the system is a dynamic object -- there are no "data structures". This means in part that each object, besides doing its main purpose, can also provide useful help in using it, can include general protocols for "mashing up", etc.

We can do better today, but my whole point in the interview and in these comments is that once e.g. Engelbart showed us great ideas for personal computing, we should not adopt worse ideas (why would any reasonable people do this?); once dynamic media has been demonstrated in a comprehensive way, we should not go back to imitating static media in ways that preclude dynamic media (if you have dynamic media you can do static media but not vice versa!).

Going back and doing Engelbart or Parc also makes no sense, because we have vastly more computing resources today than 50 years ago. We need to go forward -- and -to think things through- ! -- about what computers are, what we are, and how to use the best of both in powerful combinations. This was Licklider's dream from 1960, and some of it was built. The dream is still central to our thinking today because it was so large and good to be always beckoning us ahead.

shalabhc
Thank you for taking the time to respond, and I'd really appreciate if you can clarify my follow up below too!

> everything in the system is a dynamic object -- there are no "data structures"

I'm still programming in data structures :/. I've seen many of your talks over the years and it took me quite a while to realize that what you mean by objects (I think) is not just the textual specification (i.e. 'source code' in today's world), but rather a live, runnable thing that can be probed, inspected and made to do its thing, all by sending messages to it. In the Unix world this would be more akin to a long running server process, but with a much better unified, discoverable IPC mechanism (i.e. 'messaging'). The only thing that needs to be standardized here is the messaging mechanism itself. Larger processes would be constructed by just hooking up existing objects. Automatic persistence would mean these objects don't need to extract and store 'just data' outside themselves, etc.

This model blurs the distinction between what today we call 'programming' (writing large gobs of text), what we call 'operations' (configuring and deploying programs), and what we call 'using' (e.g. reviewing, organizing my photos). Instead, for every case, I would be doing the same kind of operation - i.e. inspecting and hooking up objects - but the objects I'm working with would be different, and the UI could be different. This makes programming more interactive ('let me see if this object can talk to that object by actually connecting them' vs. 'let me see if I can write a large blob of text that satisfies the compiler, by simulating the computer in my head').

The other thing I notice is you don't slice the computation the same way that is so common today. E.g. today I write source code (form #1 of computation), which runs through a compiler to produce an executable file (form #2 of the same computation), which is then executed and loaded in memory (form #3, because now it merges with the data from outside itself). Form #1 is checked in to source control, form #2 is bundled for distribution and form #3 is rather transient.

Instead, you're slicing computation on a different axis and all forms of the same computation are kept together - i.e. the specification, executable and runtime forms are one and the same 'object'. The decomposition happens by breaking down along functional boundaries. This means modification of the specification can happen anywhere I encounter one of these live objects, right then and there. I don't have to trace the computation to its 'source'.

So my main question is - am I on the right track here?

> Going back and doing Engelbart or Parc also makes no sense

I agree, but given the sad state of composition, even if we had some of those ideas today, it would seem like a step forward :) IMO, today we want to think of farms of computers as one large computer, and instead of programming in the small, we want to program all of them together.

mmiller
I think your analogy is going in a better direction. I had the same idea after taking a look at Squeak for a while. What would need to be added to your analogy is a notion of design. You see, in Smalltalk, for example, your programming takes place in the messages. So, even the daemons (which, for the sake of argument, we could think of as analogies to objects), which would be the senders and receivers of messages, would be made out of the same stuff, not C/C++. So, this is a pretty dramatic departure from the way Unix operates. What this should suggest is that the semantic connection between objects is late-bound. Think of them as servers.

Secondly, in the typical Smalltalk implementation, there is still compilation, but it's incremental. You compile expressions and methods, not the whole body of source code for the whole system. What's really different about it is since the semantic actions are late-bound, you can even compile something while a thread is executing through the code you're compiling. So, you get nearly instant feedback on changes for free. Bret Victor's notion of programming environments blurs the axes you're talking about even more, so that you don't have to do two steps to see your change, while a thread is running (edit, then compile). You can see the effect of the change the moment you change an element, such as the upper bound of an iterative loop. To make it even more dynamic, he tied GUI elements (sliders) to such things as the loop parameters, so that you don't have to laboriously type the values to try them out. You can just change the slider, and see the effects of the change in a loop's range very quickly, such that it almost feels like you're using a tool-based design space, rather than programming.
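
To make the "incremental" part concrete, here is a minimal sketch of what that looks like in a present-day Squeak/Pharo image (adding a method to Integer is just for demonstration; the selector #double is our own invention, while compile: is the stock way a class accepts one method's source):

    "Hand one method's source to a running class; it is compiled and live
     immediately -- no separate build step, no restart."
    Integer compile: 'double
        "Answer twice the receiver."
        ^ self * 2'.

    21 double.   "=> 42, usable the moment the compile returns"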

I don't know how this was done in ST-78, but in ST-80, at least in accounts I've heard from people who've used versions of it, and in the version of Squeak I've used from time to time, the source code is not technically stored with the class object, though the system keeps references to the appropriate pieces of source code, and their revisions, mapping them to the classes in the system, so that when you tell the system you want to look at the source code for a class, it pulls up the appropriate version of the code in an editor.

Source code is stored in a separate file, and Smalltalk has a version control system that allows reviewing of source code edits, and reversion of changes (undo). The class object typically exists in the Smalltalk image as compiled code.

There are many things that are different about this vs. what you typically practice in CS, but addressing your point about data, in OOP, objects are supposed to take the place of data. In OOP, data contains its own semantics. It inverts the typical notion of procedures acting on data. Instead, data contains procedures. It's an active "live" part of the programming that you do. So, yes, data is persisted, along with its procedures.

A simple example in Smalltalk is: 2 + 2. If we analyze what's going on, the "2"'s are the pieces of data, the objects/servers, and "+" is used to reference a method in one of the data instances, but it doesn't stop there. The "2" objects communicate with each other to do the addition, getting the result: 4. As you can tell implicitly, "2 + 2" is also source code, to generate the semantic actions that generate the result.
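
A small sketch of that last point, in Squeak/Pharo syntax (results shown as comments), making the implicit message send explicit:

    2 + 2.                   "the usual form: send the message #+ with argument 2 to the object 2"
    2 perform: #+ with: 2.   "the same send spelled out: the selector #+ is itself an object (a Symbol)"
    (2 respondsTo: #+).      "true -- we can ask the object about its own protocol"
    2 class.                 "SmallInteger -- '2' is an instance with behavior, not a dead symbol"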

shalabhc
I've seen some of Bret Victor's talks but your description made it click!

> the source code is not technically stored with the class object

This is more of an optimization choice, perhaps? Given the link is maintained, it might be OK to say the source code and the bytecode form are two forms representing the same object?

mmiller
"This is more of an optimization choice, perhaps? Given the link is maintained, it might be OK to say the source code and the bytecode form are two forms representing the same object?"

Sure. :)

alankay1
Nothing is technically stored with the class object (everything is made from objects related by object-references). Semantically everything is "together". Pragmatically, things are where they need to be for particular implementations. In the early versions of Smalltalk on small machines it was convenient to cache the code in a separate file (but also every object -- e.g. in Smalltalk-76 -- was automatically swappable to the disk) -- just another part of the pragmatics of making a very comprehensive system run on a tiny piece of hardware.
alankay1
"More or less" ... I can see that it's hard after decades of "data-centric" perspectives to think in terms of "computers" rather than "data", and about semantics rather than pragmatics. It's not "data contains procedures" but that objects are (a) semantically computers, and are impervious to attack from the outside (a computer has to let an attack happen via its own internal programming), and (b) what's inside can be anything that is able to deal with messages in a reasonable way. In Smalltalk, these are more objects (there are only objects in Smalltalk). The way the internals of typical Smalltalk objects are organized could be done better (we used several schemes, but all of them had variables and methods (which were also objects).

So "2" is not "data" in Smalltalk (unfortunately, it is in Java, etc.)

We had planned that the interior of objects should be an "address space of objects" because this is a better and more recursive way to do modularization, and to allow a different inter-viewing scheme (we eventually did some of this for the Etoys system for children about 20 years ago). But the physical computers at Parc were tiny, and the code we could run was tiny (the whole system in the Ted Nelson demo video was a little over 10,000 lines of code for everything). So we stayed with our top priority: to make a real and highly interactive system that was very comprehensive about what it and the user could do together.

mmiller
I was using "data" in the spirit of a saying I heard many years ago in CS, that, "Data is code, and code is data." It seems that people in CS are still familiar with this phrase. I was focusing on the latter part of that phrase. I was trying to answer the question that I think is often implied once you start talking to people about real OOP, "What about data?" I almost don't like the term "data" when talking about this, because as you say, it gets one away from the focus on semantics, but whenever you're talking to people in the computing field, such as it is, I think this question is unavoidable, because people are used to thinking of code and information as separate, hence the notion of data structures. People need a way to translate in their minds between what they've done with information before, and what it can be. So, I used the term "data" to talk about "literal objects" (like "2", or other kinds of input), but I was using the description of "processors" (ie. computers), "containing procedures," which can also be thought of as "operators."

I think the idea of an "inversion" is quite apt, because as you've said before, the idea of data structures is that you have procedures acting on data. With real objects, you still have the same essential elements in programming, the same stuff to deal with, but the kinds of things programmers typically think about as "data" are objects/computers in OOP, with intrinsic semantics. So, you're still dealing with things like "2", just as procedures acting on data do, but instead of it being just a "dead" symbol, that can't do anything, "2" has semantics associated with an interface. It's a computer.

shalabhc
> We had planned that the interior of objects should be an "address space of objects" because this is a better and more recursive way to do modularization

Something that nags me in the back of my mind is that messages are not just any object, they always have the selector attached. Why not let objects handle any other object as a message? Is this what you mean by the above?

Thinking about the biological analogy (maybe taking it too far...): the system of cells is distinct from the system of proteins inside the cells and going up the layers we have the systems of creatures. So the way proteins interact is different from how cells interact, etc. but each system derives its distinct behaviors from the lower ones. Also, the messages are typically not the entities themselves but other lower level stuff (cells communicate using signals that are not cells). So in a large scale OO system we might see layers of objects emerge. Or maybe we need a new model here, not sure.

alankay1
Take a look at the first implemented Smalltalk (-72). It implemented objects internally as a "receive the message" mechanism -- a kind of quick parser -- and didn't have dedicated selectors. (You can find "The Early History of Smalltalk" via Google to see more.)

This made the first Smalltalk "automatically extensible" in the dimensions of form, meaning, and pragmatics.

When Xerox didn't come through with a replacement for the Alto we (and others at Parc) had to optimize for the next phases, and this led to the compromise of Smalltalk-76 (and the succeeding Smalltalks). Dan Ingalls chose the most common patterns that had proved useful and made a fixed syntax that still allowed some extension via keywords. This also eliminated an ambiguity problem, and the whole thing on the same machine was about 180 times faster.

I like your biological thinking. As a former molecular biologist I was aware of the vast many orders of magnitude differences in scale between biology and computing. (A typical mammalian cell will have billions of molecules, etc. A typical human will have 10 trillion cells with their own DNA and many more in terms of microbes, etc.) What I chose was the "Cambrian Revolution Recursively": that cells could work together in larger architectures, as in biology, and that in computing you can make the interiors of things with the same organization as the wholes, because of references -- you don't have to copy. So just "everything made from cells, including cells", and messages made from cells, etc.

Some ideas you might find interesting are in an article I wrote in 1984 -- called "Computer Software" -- for a special issue of Scientific American on "Software". This talks about the subject in general, and looks to the possibility of "tissue programming" etc.

alankay1
I should have mentioned a few other things for the later Smalltalks. First, selectors are just objects. Second, you could use the automatic "message not understood" mechanism to field an unrecognized object. I think I'd do this by adding a method called "any" and letting it take care of arbitrary unknown objects ...
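
For concreteness, a sketch of that mechanism in later-Smalltalk terms (Squeak/Pharo style). The doesNotUnderstand: hook is the real, inherited mechanism; the class name and the #any: selector are our own, in the spirit of the "any" method suggested above:

    "Methods on some hypothetical class Catchall:"

    doesNotUnderstand: aMessage
        "Invoked automatically for any send we have no method for.
         aMessage is an ordinary object carrying the selector and arguments."
        ^ self any: aMessage

    any: aMessage
        "Catch-all handler for arbitrary, unanticipated messages."
        Transcript show: 'unhandled: ', aMessage selector; cr.
        ^ nil

So a send like Catchall new fooBar: 42 ends up in #any: with the whole message available for inspection.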
shalabhc
> adding a method called "any"

Right, I understand there are ways to do this with methods but my question was more about the purity aspect, which you already addressed above.

alankay1
A selector is an object -- so that is pure -- and its use is a convention of the messaging, and the message itself is one object that is an instance of class Message.

What's fun is that every Smalltalk contained the tools to make its successors while still running itself. In other words, we can modify pretty much anything in Smalltalk on the fly if we choose to dip into the "meta" parts of it, which are also running. In Smalltalk-72, a message send was just a "notify" to the receiver that there was a message, plus a reference to the whole message. The receiver did the actual work of looking at it, interpreting it, etc.

This is quite possible to make happen in the more modern Smalltalks, and would even be an interesting exercise for deep Smalltalkers.

shalabhc
> A selector is an object -- so that is pure -- and its use is a convention of the messaging

The selector 'convention' is hard coded in the syntax - this appears to elevate selector based messaging over other kinds. But now I'm rethinking this differently - i.e. selectors aren't part of the essence, but a specific choice that could be replaced (if we find something better).

alankay1
It's an extensible language with a meta system so you can make each and every level of it do what you want. And, as I mentioned, the first version of Smalltalk (-72) did not have a convention to use a selector. The later Smalltalks wound up with the convention because "keywords" that made the messages more readable for humans were used a lot in Smalltalk-72.
mmiller
I can't remember if I've brought this up already in this thread, but if you want to "kick the tires" on ST-72, Dan Ingalls has an implementation of it up on the web. It's running off of a real ST-72 image. I wrote about it at https://tekkie.wordpress.com/2014/02/19/encountering-smallta...

I include a link to it, and described how you can use it (to the best of my knowledge), though my description was only current to the time that I wrote it. Looking at it again, Ingalls has obviously updated the emulation.

The nice thing about this version is it includes the original tutorial documentation, written by Kay and Adele Goldberg, so you can download that, and learn how to use it. I found that I couldn't do everything described in the documentation. Some parts of the implementation seemed broken, particularly the class editor, which was unfortunate, and some attempts to use code that detected events from the mouse didn't work. However, you can write classes from the command line (ST-72 was largely a command-line environment, on a graphical display, so it was possible to draw graphics).

If you take a look at it, you will see a strong resemblance to Lisp, if you're familiar with that, in terms of the concepts and conventions they were using. As Kay said in "The Early History of Smalltalk," he was trying to improve on Lisp's use of special forms. I found through using it that his notion of classes, from a Lisp perspective, existed in a nether world between functions and macros. A class could look just like a Lisp function, but if you add parsing behavior, it starts behaving more like a macro, parsing through its arguments, and generating new code to be executed by other classes.

The idea of selectors is still kind of there, informally. It's just that it takes a form that's more like a COND construct in Lisp. So, rather than each selector having its own scope, as in later versions, all of them exist in an environment that exists in the scope of the class/instance.

After using it for a while, I could see why they went to a selector model of message receipt, because the iconic language used in ST-72 allowed you to express a lot in a very small space, but I found that you could make the logic so complex it was hard to keep track of what was going on, especially when it got recursive.

shalabhc
> I wrote about it at https://tekkie.wordpress.com/2014/02/19/encountering-smallta....

Sweet, thanks! There's also the ST-78 system at https://lively-web.org/users/bert/Smalltalk-78.html

> existed in a nether world between functions and macros

Macros are just functions that operate on functions at 'read-time', from my POV. So if you eliminate the distinction between read-time and run-time, they're the same.

> It's just that it takes a form that's more like a COND construct in Lisp.

And even COND isn't special, it's just represented as messaging in Smalltalk, right?

> you could make the logic so complex it was hard to keep track of what was going on

Interesting, I see.

mmiller
"And even COND isn't special, it's just represented as messaging in Smalltalk, right?"

Right. What I meant was that the parsing would begin with "eyeball" (ST-72 was an iconic language, so you would get a character that looked like an eye viewed sideways), and then everything after that in the line was a message to "eyeball," talking about how you wanted to parse the stream--what patterns you were looking for--and if the patterns matched, what messages you wanted to pass to other objects. That was your "selector" and method. What felt weird about it, after working in Squeak for a while, is these two concepts were combined together into "blobs" of symbolic code. You would have a series of these "messages to eyeball" inside a class. Those were your methods.

The reason I said it was similar to COND was it had a similar format: A series of expressions saying, "Conditions I'm looking for," and "actions to take if conditions are met." It was also similar in the sense that often that's all that would be in a class, in the same way that in Lisp, a function is often just made up of a COND (unless you end up using a PROG instead, which I consider rather like an abomination in the language).

In ST-72, there's one form of conditional that uses a symbol like "implies" in math (can't represent it here, I don't think), and another where you can be verbose, saying in code, "if a = b then do some stuff." But what actually happens is "if" is a class, and everything else ("a = b then do some stuff") is a message to it. Of course, you could create a conditional in any form you want.

In ST-80, they got rid of the "if" keyword altogether (at least in a "standard" system), and just started with a boolean expression, sending it a message.

a = b ifTrue: [<do-one-thing>] ifFalse: [<do-something-else>].

They introduced lambdas (the parts in []'s) as objects, which brought some of the semantics "outside of the class" (when viewed from an ST-72 perspective). It seems to me that presents some problems to its OOP concept, because the receiver is not able to have complete control over the meaning of the message. Some of that meaning is determined by partitioned "blocks" (lambdas) that the receiver can't parse (at least I don't think so). My understanding is all it can do with them is either pass parameter(s) to the blocks, executing them, or ignore them.

One of the big a-ha moments I had in Smalltalk was that you can create whatever control structures you want. The same goes for Lisp. This is something you don't get in most other languages. So, a temptation for me, working in Lisp, has been to spend time using that to work at trying to make code more expressive, rather than verbose. A positive aspect of that has been that it's gotten me to think about "meanings of meaning" in small doses. It creates the appearance to outsiders, though, that I seem to be progressing on a problem very slowly. Rather than just accepting what's there and using it to solve some end goal, which I could easily do, I try to build up from the base that's there to what I want, in terms of expression. What I have just barely scratched the surface of is I also need to do that in terms of structure--what you have been talking about here.
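
As a sketch of "control structures are just messages plus blocks" (Squeak/Pharo syntax; #untilTrue: is our own invented selector, defined as an ordinary method on BlockClosure):

    untilTrue: conditionBlock
        "Run the receiver block at least once, then keep running it
         until conditionBlock answers true -- a home-made do/while."
        [ self value. conditionBlock value ] whileFalse

    "Usage:"
    | n |
    n := 0.
    [ n := n + 1. Transcript show: n printString; cr ] untilTrue: [ n >= 3 ]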

mmiller
Taking your feedback into consideration, I had the thought that it would be more accurate to talk about things like "2" in an OOP context as symbols with semantics (which provide meaning to them), not data, since "data" connotes more a collection of inputs/quantities, where we may be able to attach a meaning to it, or not, and that wasn't what I was going after. I was going after a relationship between information and semantics that can be associated with it, but trying to provide a transition point from the idea of data structures to the idea of objects, for someone just learning about OOP. Doing a sleight of hand may not do the trick.

My starting point was to use a very interesting concept, when I encountered it, in SICP, where it discussed using procedures to emulate data, and everything that can be done with it. It seemed to help explain for the first time what "code is data" meant. It illustrated the inversion I was talking about:

https://tekkie.wordpress.com/2010/07/05/sicp-what-is-meant-b...

"In 2.1.3 it questions our notion of “data”, though it starts from a higher level than most people would. It asserts that data can be thought of as this constructor and set of selectors, so long as a governing principle is applied which says a relationship must exist between the selectors such that the abstraction operates the same way as the actual concept would." It went on to illustrate how in Lisp/Scheme you could use functions to emulate operations like "cons", "car", and "cdr", completely in procedural space, without using data structures at all.

This is what I illustrate with "2 + 2", and such, that code is doing everything in this operation, in OOP. It's not a procedure applied to two operands, even though that's how it looks on the surface.
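
The same inversion can be written directly in Smalltalk with blocks, which makes the "no data structure anywhere" point vivid (a sketch; all the names here are ours):

    "A 'pair' that is nothing but behavior: a closure over x and y."
    | cons car cdr p |
    cons := [ :x :y | [ :msg | msg = #car ifTrue: [ x ] ifFalse: [ y ] ] ].
    car  := [ :pair | pair value: #car ].
    cdr  := [ :pair | pair value: #cdr ].
    p := cons value: 1 value: 2.
    car value: p.   "=> 1"
    cdr value: p.   "=> 2"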

alankay1
Yes, SICP follows "simulate data" ideas that go much further back in the past, including the B5000 computer, and especially the OOP systems I did. But the big realization is that there are very few things that are helpful when they are passive, and the non-passive parts are the unique gift of computing. The question is not whether ideas from the past can be simulated (easy to see they can be if the building blocks are whole computers) but what do we "mean by 'meaning' "?

Good answers to this are out of the scope of HN, but we should be able to imagine -processes- moving forward in both our time and in simulated time that can answer our questions in a consistent way, and can be set to accomplish goals in a reasonable way.

alankay1
Yes, you have the gist of our approach in the 60s and especially at Parc in the 70s. And the Doug McIlroy parts of Unix also got this (the "pipes" programming and other ideas).

What I called "objects" ca 1966 was a takeoff from Simula and Sketchpad that was highly influenced by both biology, by the "processes" (a kind of virtual computer) that were starting to be manifested by time-sharing systems, and by my research community's discussions and goals for doing an "ARPAnet" of distributed computers. If you took the basic elements to be "computers in communication" you could easily get the semantics of everything else (even to simulate data structures if you still thought you needed them).

So, yes, everything could be thought of as "servers". Smalltalk at Parc was entirely structured this way (and the demo I made from one of the Parc Smalltalks for the Ted Nelson tribute shows examples of this).

It's worth noting that you then have made a "software Internet", and this can be mapped in many ways over a physical Internet.

And so forth. This got quite missed. In a later interview Steve said that he missed a bunch of things from his visit to Parc in 1979. What was ironic was that the context of the interview was partly that the SW of the NeXT computer now did have these (in fact, not really).

To be a bit more fair, big culprits in the miss in the 80s were Motorola and Intel for not making IC CPUs with Chuck Thacker's emulation architectures that we used at Parc to be able to run ultra high level languages efficiently enough. The other big culprit was that you could do -something- and sell it for a few thousand dollars, whereas what was needed was something whose price tag in the early 80s would have been more like $10K.

Note that a final culprit here is that the personal computer could not be seen for all it really was, and especially in the upcoming lives of most people. The average price of a car when the Mac came out was about $9K (that according to the web is about $20K today -- the average price of a car today is about $28K). To me a really good personal computer is worth every penny of $28K -- I'd love to be able to buy $28K of personal computer! One way to evaluate "the computer revolution" is to note not just what most people do with their computers in all forms, but what they are willing to pay. I think it will be a while before most people can see enough to put at least the value of a car on their "information and thinking amplifier vehicle".

shalabhc
> It's worth noting that you then have made a "software Internet", and this can be mapped in many ways over a physical Internet.

The more interesting/optimized ways to map this would be where a single object in the software internet somehow maps to multiple computers, either doing parallel computation or partitioned computation on each. I feel the semantics of mapping the object onto a physical computer would have to be encoded in the object itself.

Perhaps some other kinds of higher level semantic model (i.e. not a 'software internet') might also be easy to map onto a physical internet. This is something I am interested in actively exploring. That is, how to build semantic models that are optimized for human comprehension of a problem, but can be directly run on farms of physical computers? Today a lot of the translation is done in our heads - all the way down to a fairly low level.

> big culprits in the miss in the 80s were Motorola and Intel for not making IC CPUs with Chuck Thacker's emulation architectures

Maybe there is a feedback loop where the growth of Unix leads to hardware vendors thinking 'let's optimize for C', which then feeds the growth further? OTOH, even emulated machines are faster than hardware machines used to be.

> I'd love to be able to buy $28K of personal computer!

Well, you can already buy $28K or more of computing resources and connect it to your personal device. It's not easy to get much value from this today, though.

alankay1
> The more interesting/optimized ways to map this would be where a single object in the software internet somehow maps to multiple computers ...

Yes, this is the essence of Dave Reed's 1978 MIT thesis on the design of a distributed OS for the Internet of "consistent" objects mapped to multiple computers. In the early 2000s we had the opportunity to test this design by implementing it. This led to a series of systems called "Croquet" and an open source system and foundation called "Open Cobalt".

> how to build semantic models that are optimized for human comprehension of a problem, but can be directly run on farms of physical computers?

Keep on with this ...

shalabhc
> Keep on with this ...

This is still kind of a mishmash of early thoughts and I have a couple of different lines of thought, which I hope will come together. I'll start with a couple of observations:

1. Most programming languages and DSLs are uni-directional - the computer doesn't talk back to the human in the same language.

2. The mental models (not the language) humans use to communicate with each other, even when using a lot of rigor and few ambiguities, are often different from the languages and models used for computation.

The first idea is: there are some repeating structures in mental models. We think new concepts in terms of old by first thinking the structures (which are few and axiomatic, like the structure/function words in English) and then materializing the content, as well as refining the structures. E.g. I can say to a non-programmer that 'classes contain methods' and they kind of get the structure without knowing the content. In my mind this is represented as a graph, where the 'contains' relationship is an edge that connects two 'content nodes'.

   [something called class] --(contains)-> [something called method]
If I follow up with 'methods contain code', they can reason that classes indirectly contain code, without even knowing what these things actually mean! So 'contains' is kind of a universal concept - it applies to abstract content and physical content in a similar way. Another universal connection is 'abstraction of'; this implies that one node (the abstract thing) is related to other nodes (the concrete things) in a specific way.

Maybe structures can be made composable, and we can operate on graphs structurally, without knowing what the content means? While another operation might eventually figure out what the content means. The main assumption here is my thoughts are organized as graphs, where connections are both universal and domain specific, but of few kinds. Can I talk to the computer in terms of such graphs?
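
A toy sketch of "operating on the structure without the content" (Squeak/Pharo syntax; every name here is ours): the nodes are opaque labels, the only thing the code knows is the 'contains' edges, and indirect containment is derived from them.

    | edges reaches |
    edges := OrderedCollection new.
    edges add: #class -> #method.    "'classes contain methods'"
    edges add: #method -> #code.     "'methods contain code'"
    reaches := nil.
    reaches := [ :a :b |
        "Does a contain b, directly or through intermediate nodes?"
        (edges includes: a -> b) or: [
            (edges
                detect: [ :e | e key = a and: [ reaches value: e value value: b ] ]
                ifNone: [ nil ]) notNil ] ].
    reaches value: #class value: #code.   "=> true, without knowing what 'class' or 'code' mean"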

The second idea is: I want to combine high level concepts and strategies from somewhat different domains. E.g. if I know different strategies for 'distributing things into bins' (consistent hashing, sharding, etc.), I invoke this 'idea' manually whenever I see a situation which looks like 'distribute things into bins' and make a choice - irrespective of scale. Can I get the computer to do this for me instead?

So the final thing here is to get to something like this: I take an idea (i.e. a node in a graph) from the distributed computing domain, merge it with a definition (another node) of a computation I created (e.g. persistence strategies), and have the computer offer options on how to distribute that computation (i.e. 'distributed persistence strategies'). Then I can make choices and combine it with a 'convert idea to machine code' strategy and generate a program. This is all a bit abstract at this point, but I'm also trying to find where this overlaps with prior art.

scroot
> The main assumption here is my thoughts are organized as graphs

Herbert Simon talks a lot about this in Sciences of the Artificial. It turns out most of human thinking is just lists. I'm not sure if that still stands in the field of psychology (my version of the book is pretty old, from the 80s).

There's a good book (a little dense though) that might help with the more abstract thinking in the direction you're going. It's called "Human Machine Reconfigurations" and it's one of the more clever books I've come across on human machine interaction, written by an anthropologist/sociologist who also worked at PARC. So often the human part is what gets lost here.

shalabhc
Thanks for the references! Sciences of the Artificial is already on my list.

> The main assumption here is my thoughts are organized as graphs

I realize this would be better phrased as "the information I'm trying to communicate is organized as graphs".

alankay1
A clarifying comment here. When one thinks in terms of what I called objects ca 1966, one is talking about entities that from the outside are identical to what we think of as computers (and this means not just sending messages and getting outputs, but that we don't get to look inside with our messages, and our messages don't get to command, unless whatever is going on in the interior of the computer has decided to allow it).

So from the outside, there are no imperatives, only requests and questions. Another way to look at this is that an object/computer is a kind of "server" (I worry about using this term because it also has "pop" meanings, but it's a good term).

This is sometimes called "total encapsulation".

From this standpoint, we don't know what's inside. Could be just hardware. Could be variables and methods. Could be some form of ontology. Or mix and match.

This is the meaning of computers on a network, especially large worldwide ones.

The basic idea of "objects" is that what is absolutely needed for doing things at large scale can be supplied in non-complex terms for also doing the small scale things that used to be done with data structures and procedures. Secondarily, some of the problems of data structures and procedures at any scale can be done away with by going to the "universal servers in systems" ideas.

Similarly, the things we have to do for critical "data structures" -- such as large scalings, "atomic transactions", versions, redundancy, distribution, backup, and "procedural fields" (such as the attribute "age") -- are all more easily and cleanly dealt with using the idea of "objects".

One of the ways of looking at what happened in programming is that many if not most of the naive ways to deal with things when computers were really small did not scale up, but most programmers wanted to stay with the original methods, and they taught next gen programmers the original methods, and created large fragile bodies of legacy code that requires experts in the old methods to maintain, fix, extend ...

shalabhc
> Could be just hardware. Could be variables and methods. Could be some form of ontology. ...

> more easily and cleanly dealt with using the idea of "objects".

OK after this sitting in my mind for a bit longer something 'clicked'. What I'm thinking now is that there are many types of 'computer algebra' that can be designed. Data structures and procedures are only one such algebra - but they have taken over almost all of our mainstream thinking. So instead of designing systems with better suited algebra, we tend to map problems back to the DS+procedures algebra quickly. Smalltalk is well suited to represent any computer algebra (given the DS/procedure algebra is implemented in some 'objects', not the core language).

> created large fragile bodies of legacy code that requires experts in the old methods to maintain, fix, extend

If I understand correctly you are saying that better methods would involve objects and 'algebra' that perhaps don't involve data structures and procedures at all, even all the way down for some systems.

alankay1
Mathematics is a plural for a reason. The idea is to invent ways to represent and infer that are not just effective but help thinking.

I don't think Smalltalk is well suited to represent any algebra (the earliest version (-72) was closer, and the next phase of this would have been much closer as a "deep" extensible language).

A data structure is something that allows fields to be "set" from the outside. This is not a good idea. My original approach was to try to tame this, but I then realized that you could replace "commands" with "requests" and imperatives with setting goals.

shalabhc
> and the next phase of this would have been much closer as a "deep" extensible language.

Are these ideas (and the 'address space of objects') elaborated on somewhere?

> A data structure is something that allows fields to be "set" from the outside. This is not a good idea. My original approach was to try to tame this, but I then realized that you could replace "commands" with "requests" and imperatives with setting goals.

I agree in principle - but I'm having trouble imagining computing completely without data structures, though (and am reading 'Early History of Smalltalk' to see if it clicks).

alankay1
You need to have "things that can answer questions". I'd like to get the "right answer" when I ask a machine for someone's date of birth, and similarly I'd like to get the right answer when I ask for their age. It's quite reasonable that the syntax in English is the same.

? Alan's DOB

? Alan's age

Here "?" is a whole computer. We don't know what it will do to answer these questions. One thing is for sure: we are talking to a -process- not a data structure! And we can also be sure that to answer the second it will have to do the first, it will have to ask another process for the current date and time, and it will have to do a computation to provide the correct answer.

The form of the result could be something static, but possibly something more useful would be to have the result also be a process that will always tell me "Alan's age" (in other words more like a spreadsheet cell (which is also not "data" but a process)).
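
A sketch of the difference in Smalltalk-ish terms (the Person class, its dateOfBirth variable, and the crude year arithmetic are all ours): nothing called "age" is stored anywhere; the answer is produced by a process each time the question is asked.

    "Methods on a hypothetical class Person with an instance variable dateOfBirth:"

    dateOfBirth
        "Answer the birth date this object has chosen to remember."
        ^ dateOfBirth

    age
        "Computed on demand: ask another process (the clock) for 'today',
         then derive the answer. Ask again next year, get a different answer."
        ^ (Date today subtractDate: self dateOfBirth) // 365   "approximate, for the sketch"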

If you work through a variety of examples, you will (a) discover that questions are quite independent of the idea of data, and (b) that processes are the big idea -- it's just that some of them change faster or slower than others.

Add in a tidy mind, and you start wanting languages and computing to deal with processes, consistency, inter-relations, and a whole host of things that are far beyond data (yet can trivially simulate the idea in the few cases its useful).

On the flip side, you don't want to let just anybody change my date of birth willy nilly with the equivalent of a stroke of a pen. And that goes for most answers to most questions. Changes need to be surrounded by processes that protect them, allow them to be rolled back, prevent them from being ambiguous, etc.

This is quite easy stuff, but you have to start with the larger ideas, not with weak religious holdovers from the 50s (or even from the extensional way math thinks via set theory).

shalabhc
(also, for the benefit of anyone else reading this thread, the following section written in 1993 talks more about these ideas: http://worrydream.com/EarlyHistoryOfSmalltalk#oostyle)
shalabhc
Thank you for the elaboration!

(And for anyone else reading this thread I found an old message along the same topic: http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-...)

I'm thinking along these lines now: decompose systems along lines of 'meaning', not data structures (data structures add zero meaning and are a kind of 'premature materialization'), design messages first, late bind everything so you have the most options available for implementation details, etc.

The other thing I'm thinking is why have only one way to implement the internal process of an object/process? There are often multiple ways to accomplish goals, so allow multiple alternative strategies for responding to the same message and let objects choose one eventually.

Edit: wanted to add that IIUC, 'messages' doesn't imply an implementation strategy - i.e. they are messages in the world of 'meaning'. In the world of implementation, they may just disappear (inlined/fused) or not (physical messages across a network), depending on how the objects have materialized at a specific point in time.
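
A minimal sketch of "several strategies behind one message" (Squeak/Pharo syntax, all names ours): callers only ever say 'persist'; which strategy answers is chosen at the moment of the request and could be swapped later without touching any caller.

    | strategies persist |
    strategies := OrderedCollection new.
    strategies add: [ :obj | obj printString size < 10 ] -> [ :obj | 'inline it' ].
    strategies add: [ :obj | true ]                      -> [ :obj | 'shard it across machines' ].
    persist := [ :obj |
        "Use the first strategy whose guard accepts this particular request."
        (strategies detect: [ :each | each key value: obj ]) value value: obj ].
    persist value: 7.              "=> 'inline it'"
    persist value: 123456789012.   "=> 'shard it across machines'"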

mmiller
You've got the right idea. If you think about how services on the internet operate, they don't have one way of implementing their internal processes, either.

I finally got the idea, after listening to Kay for a while, that even what we call operating systems should be objects (though, as Xerox PARC demonstrated many years ago, there is a good argument to be made that what we call operating systems need a lot of rethinking in this same regard, i.e. "What we call X should be objects."). It occurred to me recently that we already have been doing that via VMs (through packages such as VMWare and VirtualBox), though in pretty limited ways.

Incidentally, just yesterday, I answered a question from a teacher on Quora who wanted advice on how to teach classes and methods to a student in Python, basically saying that, "The student is having trouble with the concepts of classes and methods. How can I teach those ideas without the other OOP concepts?" (https://www.quora.com/Can-I-explain-classes-and-objects-with...) It prompted me to turn the question around, and really try to communicate, "You can teach OOP by talking about relationships between systems, and semantic messaging." If they want to get into classes and methods later, as a way of representing those concepts, they can. What came to mind as a way around the class/method construct was a visual environment in which the student could experience the idea of different systems communicating through common interfaces, and so I proposed EToys as an alternative to teaching these ideas in Python. I also put up one of Kay's presentations on Sketchpad, demonstrating the idea of masters and instances (which you could analogize to classes and object instances).

I felt an urge, though, to say something much more, to say, "You know what? Don't worry about classes and methods. That's such a tiny concept. Get the student studying different kinds of systems, and the ways they interact, and make larger things happen," but I could tell by the question that, dealing with the situation at hand, the class was nothing close to being about that. It was a programming class, and the task was to teach the student OOP as it's been conceived in Python (or perhaps so-called "OOP". I don't know how it conceives of the concept. I haven't worked in it), and to do it quickly (the teacher said they were running out of time).

The question has gotten me thinking for the first time that introducing people to abstract concepts first is not the right way to go, because by going that route, one's conceptions of what's possible with the idea become so small that it's no wonder it becomes a religion, and it's no wonder our systems don't scale. As Kay keeps saying, you can't scale with a small conception of things, because you end up assuming (locking down) so much, it's impossible for its morphology to expand as you realize new system needs and ways of interacting.

The reason for this is that programming is really about, in its strongest conception, modeling what we know and understand about systems. If we know very little about systems that already exist, their strengths and weaknesses, our conception of how semantic connections are made between things is going to be very limited as well, because we don't know what we don't know about systems, if we haven't examined them (and most people haven't).

The process of programming, and mastering it, makes it easy enough to tempt us to think we know it, because look, once we get good enough to make some interesting things happen (to us), we realize it offers us facilities for making semantic connections between things all day long. And look, we can impress people with that ability, and be rewarded for it, because look, I used it to solve a problem that someone had today. That's all one needs, right?...

Kay has said this a couple different ways. One was in "The Early History of Smalltalk". He asked the question, "Should we even teach programming?" Another is an argument he's made in a few of his presentations: Mathematics without science is dangerous.

shalabhc
> So from the outside, there are no imperatives, only requests and questions.

This threw me off a bit as Smalltalk collections have imperative style messages for instance.

> Could be some form of ontology.

This remark helped me find some clarity.

I want the computer to help me do cross ontological reasoning and mapping. For instance, if I want to compute geometry, how do I map the ontology of 'geometry' onto the ontology of 'smalltalk'? I 'think up' the mapping, but it would be great if the computer helps me here too. Mapping 'smalltalk' onto 'physical machines' is another ontological mapping. The 'mapping of ontologies' is itself an ontology.

In large systems there are a lot of ontological 'views' and 'mappings' at play. I want to inspect and tweak each independently using the language of the ontology, and have the computers automatically map my requests to the physical layer in an efficient way. This is not possible in systems today because there is an incredible amount of pre-translation that happens, so a high level question cannot be directly answered by the system - I have to track it down manually at a different level.

Maybe the answer is to define the ontologies as object collections and have them talk to each other and figure it out. I want to tweak things after the system is up, of course, so I could send an appropriate message (e.g. 'change the bit representation of integers' or 'change the strategy used in mapping virtual objects to physical') and everything affected would be updated automatically (is this 'extreme late binding'?).

alankay1
Yes, "collections" and other such things in Smalltalk are "the Christian Scientists with appendicitis". Our implementations were definitely compromises between seeing how to be non-imperative vs already having the "devil's knowledge" of imperative programming. One of the notions we had about objects is that if we had to do something ugly because we didn't have a better idea, then we could at least hide it behind the encapsulation and the fact that message sending in the Smalltalks really is a request.

Another way of looking at this is if an "object" has a "setter" that directly affects a variable inside then you don't have a real object! You've got a data structure however much in disguise.
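
A sketch of that distinction (hypothetical Person methods; all names are ours). The first version is a data structure in disguise; in the second, a change arrives as a request that the object itself gets to judge.

    "Data structure in disguise: anyone outside can overwrite the variable."
    dateOfBirth: aDate
        dateOfBirth := aDate

    "Closer to a real object: the change is a request, not a command."
    requestDateOfBirthChangeTo: aDate evidence: someEvidence
        "Grant the change only if the evidence satisfies this object's own rules
         (trusts: stands in for whatever policy the object runs internally)."
        ^ (self trusts: someEvidence)
            ifTrue: [ dateOfBirth := aDate. true ]
            ifFalse: [ false ]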

Another place where the "sweet theory" was not carried into reality was in dependencies of various kinds. Only some important dependencies were mitigated by the actual Smalltalks.

Two things that helped us were that we did many on the fly changes to the system over 8-10 years -- about 80 system releases -- including a new language every two years. This allowed us to avoid getting completely gobbed up.

The best and largest practical attempt at an ontology is in Doug Lenat's CYC. The history of this is interesting and required a number of quite different designs and implementations to gather understanding.

shalabhc
> Yes, "collections" and other such things in Smalltalk are "the Christian Scientists with appendicitis".

Interesting to hear this perspective - drives home the point that we shouldn't just stop at generic late bound data structures.

> Only some important dependencies were mitigated by the actual Smalltalks.

Dependency management in today's systems is just mind numbing. If only we had a better way to name and locate these.

mmiller
One of the things I've realized is that using names for locating what's needed (I assume we're talking about the same idea) is part of the problem. At small scales it's fine. As systems get bigger, it becomes a problem. The internet went through this. When it started as the Arpanet, there was (if I remember correctly) one guy who kept the directory of names for each system on the network. The network started small, so this could work. As it grew into the thousands of nodes, this became less manageable, partly because there started to be duplicate requests for the same name for different nodes--naming conflicts, which is why DNS was created, and why ICANN was ultimately created, to settle who got to use which names.

I doubt something like that, though, would scale properly for code, though many organizations have tried that, by having software architects in charge of assigning names to entities within programs. The problem then comes when companies/organizations try to link their systems together to work more or less cooperatively. I heard despondent software engineers talk about this 15 years ago, saying, "This is our generation's Vietnam." (They didn't lack for the ability to exaggerate, but the point was they could not "win" with this strategy.) They were hoping to build this idea of the semantic web, but different orgs. couldn't agree on what terms meant. They'd use the same terms, but they would mean different things, and they couldn't make naming things work across domains ("domains" in more than one dimension).

So, we need something different for locating things. Names are fine for humans. We could even have names in code, but they wouldn't be used for computers to find things, just for us. If we need to disambiguate, we can find other features to help, but computing needs something, I think, that identifies things by semantic signifiers, so that even though we use the same names to talk about them, computers can disambiguate by what they actually need by function. It wouldn't get rid of all redundancy, because humans being humans, economics and competition are going to promote some of that, but it would help create a lot more cooperation between systems.
shalabhc
> One of the things I've realized is that using names for locating what's needed (I assume we're talking about the same idea) is part of the problem.

I don't think naming itself is a problem if you have a fully decentralized system. E.g., each agent (org or person) can manage their namespace any way they choose in a single global virtual namespace. I'm imagining something like ipfs/keybasefs/upspin, but for objects, not files, and with some immutability and availability guarantees.

But yes, there should probably be other ways to find these things, using some kind of semantic lookup/negotiation.

alankay1
What he means is that names are a local convention, and scaling soon obliterates the conventions. Then you need to go to descriptions that use a much smaller set of agreed on things (and you can use the "ambassador" idea from the 70s as well).
shalabhc
Ah I see, we're talking about interoperability, not just naming.

This is related to my original interest in language structure words and ontologies. The idea there is that the set of 'relationships' between things is small and universal (X 'contains' Y, A 'is an abstraction of' B) and perhaps can be used to discover and 'hook up' two object worlds that are from different domains.

scroot
A few years ago when I was doing some historical research about DNS, I came across quite a few interesting papers that all discussed "agents" in a way that seemed based on some shared knowledge/assumptions people had at the time. In particular, these would be agents for locating things in the "future internetwork". There's a paper by Postel and Mockapetris that comes to mind. Is this an example of "ambassadors"?
alankay1
Yes, this got very clear in a hurry even in the ARPAnet days, and later at Parc. (This is part of the Licklider "communicating with aliens" problem.)

Note that you could do a little of this in Linda, and quite a bit more in a "2nd order Linda".

I've also explained the idea of "processes as 'ambassadors'" in various talks (including a recent one to the "Starship Congress").

scroot
The thing about DNS and naming is that there were a lot of ideas flying around, some of them in the big standards committees. X.400 and X.500 were the OSI standards for messaging and directory services that were going to handle finding entities using specific attributes rather than with direct names or even straight hierarchical naming (like DNS finally used). It's interesting to read all the old stuff -- I had to sift through much of it a few years back when I wrote my dissertation on the early history of DNS (a cure for insomnia).

I wonder with the Internet now if anything effectively different is even possible, considering that it's no longer a small network but everywhere like the air we breathe.

shalabhc
> I wonder with the Internet now if anything effectively different is even possible, considering that it's no longer a small network but everywhere like the air we breathe.

You could slowly bootstrap a new system on the existing one, but you'd need a fleshed out design first :) Everything is replaceable, IMO, even well established conventions and standards, if something compelling comes along.

The CCN ideas are related to naming as well. Maybe the ideas could be extended to handle 'objects' rather than just 'content'.

scroot
Hardware architecture is the horizon of my knowledge. But one thing I've always wondered is this: why not just have memory addresses inside a computer map to local IPv6 addresses, then have some other "chip" that can distinguish between non local IP addresses that would, in a perfect object world, point to places in memory on another remote machine?

Obviously there would need to be some kind of virtualization of the memory but hopefully you get the idea. Not exactly related to naming but whatever.

shalabhc
Interesting - is the broader idea here that there is a virtual machine that spans multiple physical machines? Instead of virtual 'memory access', why not model this as a virtual 'software internet'?
scroot
I don't even think you need a VM, really. Just have this particular computer equipped with some soft-core that handles IP from the outside. The memory mapping, since it's just IPv6, can determine whether you are dealing with information from the outside world (non-localhost ips) or your own system (local ips). Because logically they are already different blocks in memory, they're already isolated.

With something like that you might be able to have "pure objects" floating around the internet. Of course your computer's interpretation of a network object is something it has to realize inside of itself (kind of like the way you interpret the words coming from someone else's mouth in your own head, realizing them internally), but you will always be able to tell that "this object inside my system came from elsewhere" for its whole lifecycle.

Maybe you could even have another soft-core (FPGA like) that deals with brokering these remote objects, so you can communicate changes to an incoming object that you want to send a message to. This is much more like communication between people, I think.
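
Just to make the local-vs-remote part concrete, here is a toy classifier using Python's ipaddress module (the idea of backing memory regions with IPv6 names is the speculation above; the code only shows the distinction):

    # Toy classifier: given an IPv6 address naming an object or memory region,
    # decide whether it refers to this machine or to the outside world.
    # Backing memory with IPv6-named regions is the speculation above;
    # this only shows the local/remote distinction.

    import ipaddress

    def is_local(addr_text):
        addr = ipaddress.ip_address(addr_text)
        return addr.is_loopback or addr.is_link_local

    print(is_local("::1"))          # True  -- loopback, this machine
    print(is_local("fe80::1"))      # True  -- link-local
    print(is_local("2001:db8::7"))  # False -- treat as a remote object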

shalabhc
> I don't even think you need a VM, really.

I mean a VM as in the idea that you are programming an abstract thing, not a physical thing. Not a VM as in a running program. You could emulate the memory mapper in software first - hardware would be an optimization.

The important point is 'memory mapper' sounds like the semantics would be `write(object_ip, at_this_offset, these_bytes)`, but what you really want IMO is `send(object_ip, this_message)`. That is, the memory is private and the pure message is constructed outside the object.

You still need the mapping system to map the object's unique virtual id to a physical machine, physical object. So having one IP for each of these objects could be one way.
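
A toy sketch of that distinction (every name here is invented, and a real system would need discovery, security and failure handling): the sender never touches the object's memory; it hands a self-contained message to whatever host the mapping currently lists for that object id.

    # Toy sketch: sending a message to an object named by a virtual id,
    # instead of writing bytes into its memory. All names are invented.

    class ObjectHost:
        """Stands in for one physical machine holding some objects."""
        def __init__(self):
            self.objects = {}                 # object_id -> object (private state)

        def deliver(self, object_id, message, *args):
            obj = self.objects[object_id]
            return getattr(obj, message)(*args)   # the object interprets the message

    class Counter:
        def __init__(self):
            self.n = 0
        def increment(self):
            self.n += 1
            return self.n

    # The 'mapping system': virtual object id -> host that currently holds it.
    registry = {}

    def send(object_id, message, *args):
        host = registry[object_id]            # could be a remote machine; local here
        return host.deliver(object_id, message, *args)

    host_a = ObjectHost()
    host_a.objects["2001:db8::7"] = Counter() # the id just happens to look like an IPv6 address
    registry["2001:db8::7"] = host_a

    print(send("2001:db8::7", "increment"))   # 1 -- pure message, private memory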

Alan Kay mentioned David Reed's 1978 thesis (http://publications.csail.mit.edu/lcs/pubs/pdf/MIT-LCS-TR-20...) which develops these ideas (still reading). In fact, a lot of 'recently' popular ideas seem to be related to the stuff in that thesis (e.g. 'pseudo-time').

mmiller
"The more interesting/optimized ways to map this would be where a single object in the software internet somehow maps to multiple computers, either doing parallel computation or partitioned computation on each. I feel the semantics of mapping the object onto a physical computer would have to be encoded in the object itself."

You might be interested in Alan Kay's '97 OOPSLA presentation. He talked in a similar vein to what you're talking about: https://youtu.be/oKg1hTOQXoY?t=26m45s

Inspired by what he said there, I tried a little experiment in Squeak, which worked, as far as it went (scroll down the answer a bit, to see what I'm talking about, here): https://www.quora.com/What-features-of-Smalltalk-are-importa...

I only got that far with it, because I realized once I did it that I had more work to do in understanding what to do with what I got back (mainly translating it into something that would keep with the beauty of what I had started)...

"Maybe there is a feedback loop where the growth of Unix leads to hardware vendors thinking 'lets optimize for C', which then feeds the growth further? OTOH, even emulated machines are faster than hardware machines used to be."

There is a feedback loop to it, though as development platforms change, that feedback gets somewhat attenuated.

As I recall, what you describe with C happened, but it began in the late '90s and continued into the 2000s. That's when I started hearing about CPUs being optimized to run C faster.

I once got into an argument with someone on Quora about this, re: "If Lisp is so great, why aren't more people using it?" I used Kay's point about how bad processor designs were partly to blame for that, because a large part of why programmers make their choices has to do with tradition (which gets translated to "familiarity"). Lisp and Smalltalk did not run well on the early commercial microprocessors of the 1970s. As a consequence, programmers did not see them as viable for anything other than CS research and higher-end computing (minicomputers).

A counter to this was the invention of Lisp machines, with processors designed to run Lisp more optimally. A couple of companies got started in the '70s to produce them, and they lasted into the early '90s. One of these companies, Symbolics, found a niche in producing high-end computer graphics systems. The catch, as far as developer adoption went, was that these systems were more expensive than your typical microcomputer, and their system stuff (the design of their processors, and their system software) was not "free as in beer."

Unix, by contrast, was distributed for free by AT&T for about 12 years. Once AT&T's long-distance monopoly was broken up, they started charging a licensing fee for it. Unix eventually ran reasonably well on the more popular microprocessors, but I think it's safe to say this was because the processors got faster at what they already did, not because they were optimized for C.

This effect eventually occurred for Lisp as well by the early '90s, which is one reason the Lisp machines died out. A second cause for their demise was the "AI winter" that started in the late '80s. However, by then, the "tradition" of using C, and later C++, for programming most commercial systems had been set in the marketplace.

The pattern that seems to repeat is that languages become popular because of the platforms they "rode in on," or at least that's the perception. C came on the coattails of Unix. C++ seems to have done this as well. This is the reason Java looks the way it does. It came out of this mindset: it was marketed as "the language for the internet," and it piggybacked on C++ for its syntax and language features.

At the time the internet started becoming popular, Unix was seen as the OS platform on which it ran (which had a lot of truth to it). However, a factor that had to be considered when writing software for Unix was portability, since even though there were Unix standards, every Unix system had some differences. C was reasonably portable between them if you were careful in your implementation, basically sticking to POSIX-compliant libraries. C++ was not so much, because different systems had C++ compilers that implemented only different subsets of the language specification well, and didn't implement some features at all. C++ was used for a time in building early internet services (combined with Perl, which also "rode in" on Unix).

Java was seen as a pragmatic improvement on C++ among software engineers, because "it has one implementation, but it runs on multiple OSes; it has all of the familiarity, better portability, better security features, with none of the hassles." However, it completely gave up on the purpose of C++ (at the time), which was to be a macro language on top of C, in a similar way to how Simula was a macro language on top of Algol. Despite this, it kept C++'s overall architectural scheme, because that's what programmers thought you used for "serious work."

From a "power" perspective, one has to wonder why programmers, when looking at the prospect of putting services online, didn't look at the programming architecture, since they could see some problems with it pretty early, and say to themselves, "We need something a lot better." Well, this is because most programmers don't think about what they're really dealing with, or about modeling it in the most comprehensive way they can, because that's not a concept in their heads. Going back to my first point about hardware, for many years the hardware they chose didn't give them the power that would have made thinking about that possible. As a result, programmers mostly think about traits, and the community that binds them together. That gives them a sense of feeling supported in their endeavors, scaling out the pragmatic implementation details, because they at least know they can't deal with that on their own.

Most didn't think to ask (including myself at the time), "Well, gee. We have these systems on the internet. They all have different implementation details, yet it all works the same between systems, even as the systems change... Why don't we model that, if for no other reason than we're targeting the internet, anyway? Why not try to make our software work like that?"

On one level, the way developers behave is tribal. Looked at another way, it's mercantilistic. If there's a feedback loop, that's it.

"OTOH, even emulated machines are faster than hardware machines used to be."

What Kay is talking about is that the Alto didn't implement a hard-coded processor. It was soft-microcoded. You could load instructions for the processor itself to run on, and then load your system software on top of that. This enabled them to make decisions like, "My process runs less efficiently when the processor runs my code this way. I can change it to this, and make it run faster."

This will explain Kay's use of the term "emulated." I didn't know this until a couple years ago, but at first, they programmed Smalltalk on a Data General Nova minicomputer. When they brought Smalltalk to the Alto, they microcoded the Alto so that it could run Nova machine code. So, it sounds like they could just transfer the Smalltalk VM binary to the Alto, and run it. Presumably, they could even transfer the BCPL compiler they were using to the Alto, and compile versions of Smalltalk with that. The point being, though, that they could optimize performance of their software by tuning the Alto's processor to what they needed. That's what he said was missing from the early microprocessors. You couldn't add or change operators, and you couldn't change how they were implemented.

shalabhc
Thanks for the long write up. I found it very interesting.

> You might be interested in Alan Kay's '97 OOPSLA presentation

Oh yeah I have actually seen that - probably time to watch it again.

> Well, this is because most programmers don't think about what they're really dealing with

Agree with that. Most people are working on the 'problem at hand' using the current frame of context and ideas, and focus on cleverness, optimization or throughput within this framework, when changing the frame of context may in fact be much better.

> What Kay is talking about is that the Alto didn't implement a hard-coded processor. It was soft-microcoded.

Interesting. I wonder if FPGAs could be used for something similar - i.e. program the FPGAs to run your bytecode directly. But I'm speculating because I don't know too much about FPGAs.

alankay1
Yes re: FPGAs -- they are definitely the modern placeholder of microcode (and better because you can organize how the computation and state are hooked together). The old culprit -- Intel -- is now offering hybrid chips with both an ARM and a good size patch of FPGA -- combine this with a decent memory architecture (in many ways the hidden barrier these days) and this is a pretty good basis for comprehensive new designs.
alankay1
Actually ... only the first version of Smalltalk was done in terms of the NOVA (and not using BCPL). The subsequent versions (Smalltalk-76 on) were done by making a custom virtual machine in the Alto's microcode that could run Smalltalk's byte codes efficiently.

The basic idea is that you can win if the microcode cycles are enough faster than the main memory cycles so that the emulations are always waiting on main memory. This was generally the case on the Alto and Dorado. Intel could have made the "Harvard" 1st level caches large enough to accommodate an emulator -- that would have made a big difference. (This was a moot point in the 80s)
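
To make the "byte-code emulator" idea concrete for readers: the inner loop is just fetch, dispatch, execute, and the win comes when that loop runs much faster than main memory, so the emulation overhead hides behind memory traffic. A toy dispatch loop (made-up opcodes, nothing like the real Smalltalk-76 instruction set):

    # Toy bytecode emulator: a fetch/dispatch/execute loop with made-up opcodes.
    # The real Smalltalk-76 emulator lived in Alto microcode; microcoding it paid
    # off because this loop ran much faster than main memory, so the emulation
    # overhead could hide behind memory access time.

    PUSH, ADD, PRINT, HALT = range(4)

    def run(bytecode):
        stack, pc = [], 0
        while True:
            op = bytecode[pc]; pc += 1            # fetch
            if op == PUSH:                        # dispatch + execute
                stack.append(bytecode[pc]); pc += 1
            elif op == ADD:
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == PRINT:
                print(stack[-1])
            elif op == HALT:
                return

    run([PUSH, 3, PUSH, 4, ADD, PRINT, HALT])     # prints 7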

mmiller
I know this is getting nit-picky, but I think people might be interested in getting some of the details in the history of how Smalltalk developed. Dan Ingalls said in "Smalltalk-80: Bits of History":

"The very first Smalltalk evaluator was a thousand-line BASIC program which first evaluated 3 + 4 in October 1972. It was followed in two months by a Nova assembly code implementation which became known as the Smalltalk-72 system."

The first Altos were produced, if I have this right, in 1973.

I was surprised when I first encountered Ingalls's implementation of an Alto on the web, running Smalltalk-72, because the first thing I was presented with was, "Lively Web Nova emulator", and I had to hit a button labeled "Show Smalltalk" to see the environment. He said what I saw was Nova machine code from a genuine ST-72 image, from an original disk platter.

I take it from your comment that you're saying by the time ST-76 was developed, the Alto hardware had become fast enough that you were able to significantly reduce your use of machine code, and run bytecode directly at the hardware level.

I could've sworn Ingalls said something about using BCPL for earlier versions of Smalltalk, but quoting out of "Bits of History" again, Ingalls, when writing about the Dorado and Smalltalk-80, said of BCPL that the compiler you were using compiled to Alto code, but ...

"As it turned out, we only used Bcpl for initialization, since it could not generate our extended Alto instructions and since its subroutine calling sequence is less efficient than a hand-coded one by a factor of about 3."

alankay1
The Alto didn't get any faster, and there was not a lot of superfast microcode RAM (if we'd had more it would have made a huge difference). In the beginning we just got Smalltalk-72 going in the NOVA emulator. Then we used the microcode for a variety of real-time graphics and music (2.5 D halftone animation, and several kinds of polytimbral timbre synthesis including 2 keyboards and pedal organ). These were separate demos because both wouldn't fit in the microcode. Then Dan did "Bitblt" in microcode which was a universal screen painting primitive (the ancestor of all others). Then we finally did the byte-code emulator for Smalltalk-76. The last two fit in microcode, but the music and the 2.5 D real-time graphics didn't.

The Notetaker Smalltalk (-78) was a kind of sweet spot in that it was completely in Smalltalk except for 6K bytes of machine code. This was the one we brought to life for the Ted Nelson tribute.

gone35
9:05-12:30 I'm sold. A mindblowing vision for what the universality of computation can truly do... Thank you.
alankay1
Actually, just what could be done 40 years ago. Much more can be (and should be) done today. That's the biggest point.
pls2halp
I'm interested in whether you've seen the concepts of atemporality[1] and network culture[2] floating around. Basically, the core thesis associated with these is that we have adopted the internet as our primary mode of processing information, and have, in the process, lost the sense of a cohesive narrative that is inherent in reading a book or essay, or listening to a whole talk.

You become fully immersed in Plato's worldview when reading The Republic, but if you were to see someone explaining the allegory of the cave in the absence of a wider context, you would only take the elements which fit your own worldview, and not his wider conception of knowledge.

I think this ties into what happened to the concept of a centralised computer network working for the good of humanity, which turned into today's fractured silos working to mine individuals for profit.

[1]http://index.varnelis.net/network_culture/1_time_history_und... [2]http://index.varnelis.net/network_culture

alankay
Thanks for the references. I haven't seen these, but the ideas have been around since the mid-60s by virtue of a number of the researchers back then having delved into McLuhan, Mumford, Innis, etc. and applied the ideas to the contemplated revolutions in personal computing and networking media we were trying to invent.

I think a big point here is that going to a network culture doesn't mandate losing narrative, it just makes the probability much higher for people who are not self-conscious about their surrounding contexts. If we take a look at Socrates (portrayed as an oral culture genius) complaining about writing -- e.g. it removes the need to remember and so we will lose this, etc. -- we also have to realize that we are reading this from a very good early writer in Plato. Both of them loved irony, and if we look at this from that point of view, Plato is actually saying "Guess what? If you -decide- to remember while reading then you get the best of both worlds -- you'll get the stuff between your ears where it will do you the most good -and- you will have gotten it many times faster than listening, usually in a better form, and from many more resources than oral culture provides".

This was the idea in the 60s: provide something much better -- and by the way it includes narrative and new possibilities for narrative -- but then like reading and writing, teach what it is in the schools so that a pop culture version which misses most of the new powers is avoided.

When I write code it is usually either "kiddicode" for future "kiddilanguages" or "metacode" (for future languages).

I did have a lot of fun last year writing code in a resurrected version of the Notetaker Smalltalk-78 (done mostly by Dan Ingalls and Bert Freudenberg from a rescued disk pack that Xerox had thrown away) to create a visual presentation for a tribute to Ted Nelson on his 70th birthday: https://youtu.be/AnrlSqtpOkw?t=135

This particular system was a wonderful sweet spot for those days -- it was hugely expressive and quite small and understandable (my size). (This was the Smalltalk system that Steve Jobs saw the next year in 1979 -- though with fewer pictures because of memory limitations back then).

Jan 07, 2015 · 77 points, 4 comments · submitted by mpweiher
agumonkey
Just when I thought I'd seen everything Kay was involved in...

ps: at 9:47 there is an interactive document with an embedded live graphical interpreter. Stunning.

slashink
This vision of what computing could be is amazing.
fmoralesc
Ted Nelson is one of my heroes, if only for his ability to connect so many dots in such an original way. The more his ideas spread, the better the world.
shaunxcode
For anyone who wants to play with the actual revived smalltalk system: http://lively-web.org/users/bert/Smalltalk-78.html
Nov 02, 2014 · 15 points, 1 comments · submitted by da02
ChaoticGood
Alan Kay has a novel sense of depth perception.
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.