HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Symbolics Lisp Machine demo Jan 2013

Kalman Reti · Youtube · 150 HN points · 47 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Kalman Reti's video "Symbolics Lisp Machine demo Jan 2013".
Youtube Summary
This is a rough introductory demo of the Symbolics Lisp Machine (in Brad Parker's emulator).

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Typically an advanced UI where one types into a REPL is called a 'Listener' in Lisp. Examples for Listeners: the MCL Listener, Genera's Listener, LispWorks Listener, the SLIME Listener and others.

For an impression of a Genera Listener I would recommend watching Kalman Reti's YouTube video: https://www.youtube.com/watch?v=o4-YnLpLgtk There he shows the Lisp Machine Listener debugging and interacting with mixed Lisp and C code.

The MCL and LispWorks IDE Listeners run in an integrated editor. The running program runs inside the development environment.

SLIME's Listener runs inside an external editor (GNU Emacs).

Genera has an integrated application as a Listener and that one is not based on an editor.

That's also a significant difference if the Listener is an internal tool, compared to an externally attached tool. External: from a user point of view, I use an IDE and connect to a running Lisp. Internal: I use the IDE and spawn a new Listener window (which could be on another X11 screen in case of an X11-based GUI). Usually the integration with internal Listeners is higher, but they may be more fragile, since they share the process & UI with the running program.

Using an editor as a base substrate has some advantages: one has usually better editing support in the Listener. But as Genera shows, a Listener does not need to run on top of an editor to be powerful. The Genera listener has for example full output recording, each listener is also a drawing plane and remembers all output and associates it with the displayed Lisp objects. That makes the interaction with code and data extremely convenient, a feature which is not provided by evaluating code from an editor buffer. SLIME provides a similar feature, but in a very limited way. The richer the Listener UI, the more of the interaction of the user will be in the Listener. Thus often an exclusive use of the editor to evaluate code is either a sign of a powerful editor integration or a weak Listener implementation. In Genera the Lisp listener is also not only a powerful data explorer, but also a shell with a lot of commands for exploring the Lisp system. A portable and in some ways slightly less polished / extensive version is the McCLIM listener. Example: https://mcclim.common-lisp.dev/static/media/screenshots/bund...
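
For a concrete flavor of that output recording, here is a minimal sketch using CLIM, the toolkit the McCLIM Listener is built on. present and accept are real CLIM operators; the function name and the assumption of a CLIM-capable stream (such as the Listener's interactor pane) are just for illustration:

    ;; Assumes McCLIM is loaded, e.g. (ql:quickload "mcclim").
    ;; PRESENT draws an object and records which Lisp object produced
    ;; the output, so the displayed text stays associated with (and
    ;; clickable as) that object. ACCEPT prompts for an object of a
    ;; given presentation type; any previously presented object of a
    ;; matching type can be clicked on as the answer.
    (defun demo-presentations (stream)
      (clim:present #p"notes.txt" 'pathname :stream stream)
      (terpri stream)
      (clim:accept 'pathname :stream stream))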

Also a Lisp might provide Listeners as panes of application frames. Thus an application window (either a tool of the IDE or any application GUI window) includes a corresponding Listener as a pane. As a simple example, I can open a LispWorks Inspector and add a Listener pane. Any result from evaluation in the Listener will be displayed in an Inspector, with history.

I've run chsh on top of a (virtual) Guix(SD) installation. It was rather nice but is still _not_ a lisp machine: https://www.youtube.com/watch?v=o4-YnLpLgtk
Jupyter notebook is a rediscovery of how the Lisp Machine REPL used to work. :)

"Symbolics Lisp Machine demo Jan 2013"

https://www.youtube.com/watch?v=o4-YnLpLgtk

lysecret
That is fascinating.
Mar 26, 2022 · bitwize on Why we need Lisp machines
You are aware that Lisp machines understood several different flavors of Lisp? The Symbolics ones understood Zetalisp and Common Lisp at least. Were they on the market today, they could be convinced to run Clojure and Scheme as well. There are a few old-timers still using Symbolics hardware to develop Common Lisp applications that run on modern hardware.

In fact, Symbolics shipped compilers for other non-Lisp programming languages, including C and Ada. These interoperated smoothly with Lisp code, much more so than they do under Unix. In this demo, Kalman Reti compiles a JPEG decoder written in C, and replaces the Lisp JPEG decoder that came with the system with it, yielding a performance boost:

https://www.youtube.com/watch?v=o4-YnLpLgtk

chubot
OK interesting, I will check out the link.

I still think various Lisps don't interoperate enough today, but I'm not very familiar with the Lisp machines of the past. If it can interoperate with C and Ada that's interesting. But I also wonder about interop with JavaScript :) i.e. not just existing languages but FUTURE languages.

These are the M x N problems and extensibility problems I'm talking about on the blog.

bitwize
I'm not sure what you mean by "The lowest common denominator between a Common Lisp, Clojure, and Racket program is a Bourne shell script (and eventually an Oil script)." You can get two Lisp programs (or two Python programs, or two mixed-language programs, etc.) to intercommunicate without involving the shell at all.

It'd be more accurate to say "The lowest common denominator between a Common Lisp, Clojure, and Racket program is sexpr notation." By using sexprs over a pipe, named pipe, or network socket, you can very easily get any of those Lisps to intercommunicate deeply structured data with any other. This is how SLIME and its SWANK protocol work. I don't even think the shell is involved; Emacs handles spawning the inferior Lisp and setting up the communication channel itself.
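
As a rough illustration (this is a sketch of the idea, not the actual SWANK wire protocol, and the function names are invented), two Lisp processes can exchange structured data over any stream - a pipe, a named pipe, or a socket:

    ;; Writer side: PRINT emits a READable text representation.
    (defun send-message (stream message)
      (print message stream)
      (force-output stream))

    ;; Reader side: READ parses the next s-expression back into live
    ;; Lisp data. *READ-EVAL* is bound to NIL so untrusted input
    ;; cannot execute code via the #. reader macro.
    (defun receive-message (stream)
      (let ((*read-eval* nil))
        (read stream)))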

The thing the Lisp machines had was a very robust ABI. Lisp has a specific notion of what a function is in memory. This is largely because Lisp is always "on" on a Lisp machine: there is no point at which you cannot take advantage of the entire Lisp runtime, including the compiler, in your own programs. Accordingly, the Lisp machine C compiler output Lisp functions containing code compiled from C, which could be called by Lisp functions directly (and vice versa). Presumably a JavaScript runtime for Lisp machines would be able to do the same thing.

By contrast, C has no notion of what a function is; the C ABI used by many operating systems presents a function as simply a pointer in memory that gets called after some parameters are pushed to the stack or left in registers. (How many parameters there are, and their types and sizes, are unspecified and simply agreed upon by caller and callee.) Nothing about memory management is provided by the runtime either; all of that has to be provided by user programs. All this adds friction to interoperation by function call, and makes IPC the most straightforward way to interoperate.
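
To make that friction concrete, here is a hedged sketch using the CFFI library for Common Lisp (the Lisp-side name c-strlen is just a label chosen for this example): because the C ABI carries no type or arity information, the Lisp side has to restate the whole signature by hand, and nothing checks it against the actual C definition.

    ;; Assumes CFFI is loaded, e.g. (ql:quickload "cffi").
    ;; The types below are merely an agreement between caller and
    ;; callee; a wrong declaration here would silently corrupt the
    ;; call rather than signal a type error.
    (cffi:defcfun ("strlen" c-strlen) :size
      (str :string))

    ;; (c-strlen "hello") => 5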

But oh well. Our computers are all slowly turning into JavaScript machines anyway, so maybe those Lisp/Smalltalk happy days can return again soon.

chubot
What I mean is "coarse-grained composition with text/bytes" (described in the blog post in my first reply)

If you're thinking that's tautological (how else would Common Lisp and Clojure communicate?), then this subthread might help with the context:

https://news.ycombinator.com/item?id=30814716

I don't think Common Lisp, Clojure, and Racket are compatible with Emacs' s-expression format. Lots of people here are saying they use JSON or the like with Lisp, not s-expressions.

Emacs can use its own format to communicate with itself, because it controls what process is on the other side of the wire.

(In any case, any variety of s-expressions IS TEXT; there are non-trivial issues to get it back in memory -- like what to do about graphs and sharing, node cycles, integer sizes, string encodings, etc.)
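
The sharing and cycle problem is easy to demonstrate in standard Common Lisp: plain printed s-expressions lose shared structure (or loop forever on cycles) unless both ends agree to use the #n=/#n# circular notation, something formats like JSON don't have at all:

    ;; Without *PRINT-CIRCLE*, printing this list would never
    ;; terminate; with it, the sharing survives the round trip
    ;; through text via the #1=...#1# notation.
    (let ((x (list 1 2 3))
          (*print-circle* t))
      (setf (cdddr x) x)            ; splice the tail back to the head
      (prin1-to-string x))          ; => "#1=(1 2 3 . #1#)"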

But the point of the slogan is that when you have even a small bit of heterogeneity (among Lisps) then what you're left with is "Unix". A Lisp machine isn't a big win for this reason.

It is cool that Lisp machines had a robust ABI. That could solve the IPC problem. But then you have the problem of a Lisp machine communicating with a Unix machine, which I'm sure was a very real thing back in the day. So now you're left with the lowest common denominator of Unix. Again that is coarse-grained composition over wires, which is analogous to shell. The shell doesn't have to be involved strictly speaking, but the syscalls that are made and the parsing that is done are pretty much the same.

fiddlerwoaroof
If you’re fine with ES3, there’s https://marijnhaverbeke.nl/cl-javascript/ (I’ve been intending to use Babel to compile modern JS to ES3 and then load it into CL, but no time)

Or parenscript: https://parenscript.common-lisp.dev/

Or CLOG: https://github.com/rabbibotton/clog

And, there’s always the option of writing an HTTP or websocket server and serving JSON.

I personally use the SLIME repl in emacs for a ton of things I used to use bash one-liners for and you can push it pretty far (this is an experiment, I’ve wrapped it up more nicely into my .sbclrc since):

https://twitter.com/fwoaroof/status/1502881957012668417?s=21...
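
For readers wondering what replacing bash one-liners with a Lisp repl looks like, here is a hedged sketch of that style (an invented example, not the snippet from the tweet), using UIOP's run-program, which ships with ASDF:

    ;; Grab a directory listing as a list of strings and keep the
    ;; five largest entries -- the kind of task a bash one-liner
    ;; would otherwise do, but the result stays live Lisp data.
    (let ((lines (uiop:run-program '("ls" "-lS") :output :lines)))
      (subseq lines 0 (min 5 (length lines))))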

The key thing about those type of systems was the ability to reach down into the system and edit the code of the system currently in operation.

Here's a demonstration of Symbolics Open Genera (TM) 2.0, shown running in a virtual machine. The author of the video notes in the first minute or so that even in emulation, it is much faster than the original machines. - https://www.youtube.com/watch?v=o4-YnLpLgtk

Oberon also had a similar attribute, in that it kept the names of functions to operate on objects visible.

The same was true of the Canon Cat, Hypercard, Ted Nelson's Xanadu project, Smalltalk, and a number of other early computing systems.

The main feature common to all of these systems is that they all preserve context. In Genera, Oberon, Canon Cat, Hypercard, and Smalltalk the source was always available (as far as I know). In Xanadu, the main functionality of the web was present, but it wouldn't allow the broken links (and lost context) that now plague the web.

I think a future platform could take code in a number of languages, compile it to an abstract syntax tree, but preserve the context required to recreate the source. In fact, it's reasonable that you could import a routine in a language you aren't familiar with, and reverse the compilation to get it expressed in an equivalent (but less elegant) form in your language of choice, along with the original comments.

There's nothing stopping an open source project from taking elements of these existing systems and moving forward from that basis. It might be profitable to include ideas such as coloring the source text to label intent, as in ColorForth.

Also, consider "Literate Programming" - Literate programs are written as an uninterrupted exposition of logic in an ordinary human language, much like the text of an essay, in which macros are included to hide abstractions and traditional source code.

You could also add the ability to store graphics and other data along with the source code.

Of course, if you are required to run code you didn't write and don't trust, your operating system must provide means to run it only against the files or folders you wish to let it operate on. The principle of least privilege needs to be supported at a fundamental level. This is one of the big shortcomings of the Unix model.

Sorry it was a bit of a ramble, but this seemed to be a call for ideas, so I ran with it.

PS: In the past, getting your vision of Computing required building a machine, and then getting it manufactured. Now it just requires that you make it work in a VM, Raspberry Pi, or web browser window.

It is MUCH easier to try out and/or create alternative systems now than it has ever been in the past.

jodrellblank
Why would you reach down into the system and edit the code of the system currently in operation? Because it doesn't already do what you want. Isn't it a bit pathological to start by building a system which doesn't do what you want, then building in the tools to let you fix it, so you can use those to make it do what you want? Why not skip all the middlemen and make what you want in the first place?

If you arrive at "we can't do that because everyone wants different things", you're in this weird place where you can't build one system to suit everybody so as a fix for that you will ... make one dev system to suit everybody? Why is that going to work out any better? If people are going to have to build the system they want anyway, why not skip all the middlemen and leave people to build the system they want from scratch their own way?

"Well we can't do that, having to build everything from scratch is hard, people don't want that. And we don't know what people want (and can't be bothered to ask) and won't try to build it for them, but we can give everyone Our Blessed System and Our Blessed Tooling which we presume they will want to use, while we abandon them to have to build what they actually want using it". It's patriarchal and preachy and still unhelpful. The kind of person willing to put up with your system and language and its edge-cases and limitations instead of making their own, is quite likely the same person who would have put up with your spreadsheet and its edge-cases and limitations if only you'd built it.

It's the "all progress depends on the unreasonable person" meme; you need the person who can't tolerate a system which isn't perfect and demands to be able to tune it to their perfection, but is simultaneously fine with a customizable-system built by someone else for someone else's ideas. Then you say "ah but they can rebuild as much of the system as they want!" they're now having to build it themselves from scratch but based on what you built, which is even more work for them, not less.

And the whole thing says little about working with other people/organizations; everyone building their own thing isn't great for that. One could argue that Microsoft did a lot of good for the world by strongarming companies into .doc and .xls formats for interchange, in a similar way that HTTP/HTML did (but more tyrannically). Microsoft-Word-documents-over-email should probably go down in history as one of the big enablers of distributed human organization, like the telephone. Moreso than SMTP alone or TCP/IP alone which enabled computers to connect, not people. More than HTTP/HTML which ordinary people can't use without setting up a webserver and publishing first.

> "I think a future platform could take code in a number of languages, compile it to an abstract syntax tree, but preserve the context required to recreate the source. In fact, it's reasonable that you could import a routine in a language you aren't familiar with, and reverse the compilation to get it expressed in an equivalent (but less elegant) form in your language of choice, along with the original comments."

Would that even work? What about "code is primarily for people to read"; loss of indentation, of alignment, of variable names if you're changing language with different variable naming rules, loss of separate files/folder/module-context of how it was built in the first place, and loss of idiomatic patterns from changing one language to another, or to languages with different scoping behaviour. Look at the image in The Article for the square root code[1], can you decompile the syntax tree for a language which doesn't have a complex number type? What about one which doesn't have a short-float for the result, or has different floating point precision assumptions? What does "F and I2 need to be regular-heap-consed to avoid the extra-pdl lossage" mean for JavaScript or Python or Clojure? What's the magic 0.4826004 for, without any comments next to it?

And that square root function alone is a screenful of code, the very idea of being able to rebuild the system to your desires falls out the window if it's going to take you a lifetime to do that, and if not then we're back to my original paragraph where you're building a massive programmable system so someone can tweak one or two tiny things occasionally, which seems hella overkill.

> "Of course, if you are required to run code you didn't write and don't trust, your operating system must provide means to run it only against the files or folders you wish to let it operate on. The principle of least privilege needs to be supported at a fundamental level. This is one of the big shortcomings of the Unix model."

As the XKCD comment goes: malicious code can ransomware all my documents and upload them to the cloud, but at least it can't add a new printer. One of the big shortcomings of the principle of least privilege is the effort of carefully and precisely describing the privileges needed - see the complexity and quantity of SELinux rules for a whole system. Even then you get immediately to the point where "my editor can edit all my text files", and that's now a large blast radius.

[1] https://cdn.substack.com/image/fetch/w_1456,c_limit,f_auto,q...

mikewarot
> One of the big shortcomings of the principle of least privilege is the effort of carefully and precisely describing the privileges needed - see the complexity and quantity of SELinux rules for a whole system.

SELinux was a terrible thing to inflict upon the world. Capabilities are more like cash in your wallet.... you pick what you want to use in a transaction, and hand it over... never worrying that the rest of your money would somehow get siphoned off later.

Static rules aren't what capabilities are about. Run this program, oh.. it wants a file? Here... it can have this. The default access would be nothing (or, in a more modern context, its config file). The program wouldn't directly access files, but would use dialog boxes to do I/O, or you could drop them in via batch, etc.

skissane
> Why would you reach down into the system and edit the code of the system currently in operation? Because it doesn't already do what you want. Isn't it a bit pathalogical to start by building a system which doesn't do what you want, then building in the tools to let you fix it, so you can use those make it do what you want? Why not skip all the middlemen and make what you want in the first place?

Because what you want is not set in stone for all time. Needs and desires change, and often what you think you want is different from what you actually want, but discovering that can take time.

I spend so much time trying to work out things like "when I click this button in the UI, what does that actually end up doing in the backend?" I wish I could right-click on a UI element and it would take me directly to the backend code which actually implements it. And a system like that should be much quicker to change – it should be much quicker for a new developer to get up to speed with it and start implementing things, rather than spending days (even weeks) trying to understand how it all fits together.

pjmlp
You can get a bit of that Oberon experience on Windows via PowerShell, .NET and COM.

Now if the Windows team would get over their COM worship, it would be even better.

The animations in this post remind me of nothing more than a Symbolics LISP Machine: https://www.youtube.com/watch?v=o4-YnLpLgtk

which also had semantically driven clickable data structures one could use to build up a command line invocation. In about 1982.

Apr 27, 2021 · 139 points, 54 comments · submitted by lelf
segmondy
For those who don't get the lisp machine: imagine how you can inspect your browser, see the html/js, go to the console, run commands, modify programs, etc. Imagine doing that on your OS; that's the experience of a Lisp machine.
agumonkey
I feel that way every time I use <browser> devtools.. and that's why I think the web is gonna be the only reincarnation of these lisp machines / smalltalk images
pjmlp
That was my expectation for ChromeOS, but then Google borked the idea.
capableweb
Alternatively: your expectations were wrong and Google never aimed for anything like that
pjmlp
I haven't said otherwise, and your comment hardly contributed to the discussion.

Naturally life doesn't always make our expectations reality.

agumonkey
But we should all expect reality not to do so :)

Maybe the Pharo Guys will have a lucky encounter with a risc-v dude and they'll make the hardware+live software a new thing in 2025

Blikkentrekker
Certainly that is the experience of a specific operating system, not a specific processor architecture.

Such an operating system could conceivably run on any machine; it simply ran more efficiently on a Lisp machine.

aidenn0
Indeed, Open Genera ran on DEC Alpha workstations
eschaton
Mezzano even demonstrates this by being a working operating system written in Common Lisp for x86-64 and arm64.
abhinav22
Man that looks so cool. What a beautiful, clean user interface and I assume an amazing experience, with the full power of Lisp.

Is there any chance to get a Lisp Machine as an image that we can load in a VM?

All I need is a connection to the internet and I would be happy to spend all my spare hobby programming time dialed away in my own small corner of a Lisp Machine. No distractions, only Lisp.

bluefox
Not exactly a Lisp machine, but check out Medley: https://github.com/Interlisp/medley/

Lots of fun...

abhinav22
Thanks! Will check it out.
capableweb
You can give OpenGenera a try.

https://github.com/ynniv/vagrant-opengenera has some instructions to get started.

http://3e8.org/blog/2016/04/04/symbolics-concordia-in-a-virt... can take you a bit further.

You should give smalltalk a try too, same idea but OOP focus ("defining" maybe rather) https://squeak.org/

abhinav22
Thanks! Will check it out!

Re Smalltalk, I’m 100% a Lisp man, so perhaps later, but not at the moment :-)

sprynr
The Symbolics Lisp keyboards are also such amazing input devices! (Like the Space Cadet Keyboard https://en.wikipedia.org/wiki/Space-cadet_keyboard) They had a button or meta key combination for what seems like thousands of characters or functions. I actually netted an old Symbolics 364000 keyboard from my university and I love the design of it; makes me wish we had kept some of that design language in our modern-day keyboards.
sedachv
They are just not. The Honeywell switches feel like crap IMO. There are no cursor keys. A 104 key Unicomp has better switches and better ergonomics, and just as many function keys - the only thing missing is a 5th (Symbol) modifier key. All you need is the proper key mapping. A 109 key Japanese keyboard is going to have more modifier keys available, and you can get them in your choice of switches.

Please don't part out keyboards from old computer systems so you can use them with your Macbook. Give them to people who are actually working on collecting and preserving the old systems.

sprynr
They aren't tactile, but they weren't meant to be; they were utilitarian. Under normal conditions hall-effect switches could last up to 30 billion operations (from Honeywell's own documentation https://sensing.honeywell.com/hallbook.pdf). They are designed to last and be used for the entire life of a machine, through all its upgrades. The lack of cursor keys was normal for the time; just look at the IBM Model F. As for Unicomp, they are based on the IBM Model M (which I have a few of and love the buckling spring mechanism), but those were designed to have feedback (auditory and tactile) built in. For sure a good design, but that doesn't make it the only good design.

> Please don't part out keyboards from old computer systems so you can use them with your Macbook. Give them to people who are actually working on collecting and preserving the old systems.

I never said I did or would? Hall effect sensors are still available; if someone wanted some, they have much cheaper and easier options than tracking down an old keyboard. Also, whoever implied that I'm not collecting and preserving old systems?

nexuist
> Please don't part out keyboards from old computer systems so you can use them with your Macbook. Give them to people who are actually working on collecting and preserving the old systems.

...Or do with them as you please, so that they may continue pushing the world forward instead of uselessly sitting around in a museum corner collecting dust.

sprynr
100% agree, we have full documentation of the board schematics and have photos from full teardowns, along with examples already stored in museums. Storing becomes hoarding after a point.
sedachv
I am not talking about hypotheticals or museums, I am talking about actual running systems. I know the most active restorer of Symbolics systems in the US. His biggest problem is finding enough consoles and keyboards.

Why don't you try to actually build a reproduction Symbolics keyboard (I did), and see how much time and money it is going to cost you.

nexuist
Perhaps you could connect OP to your friend and see if they would be willing to make a trade!
sedachv
Sure. My email address is in my profile: [email protected]
dmd
Kalman was a coworker of mine at my previous job, and is just an amazing, stunning hacker.
posobin
There are 30(!) comments on HN linking to this video, including discussions of Bret Victor's "Learnable programming", using Lisp in production from last year, comparisons with Smalltalk, a thread with examples of beautiful software, and more: https://ampie.app/url-context?url=https://www.youtube.com/wa...
peterohler
The good old days. I remember getting one of the first color machines and writing a dynamic orbital evaluator. What a great development environment.
tech_tuna
I worked with a bunch of OG Symbolics guys back in the mid 2000s. They are the smartest engineers I've ever known. Good people too.
natas
I wish such a system were still being commercialized.
natas
is this legal? (without a genera license?)
segmondy
maybe, maybe not.

https://archives.loomcom.com/genera/genera-install.html

natas
oh that's great.
smabie
I believe those instructions do not work anymore. There's a project on github that will build an appropriately old ubuntu vm that runs open genera pretty well provided you have the source. You can also find the source tarball without too much hassle on some torrent sites.
tablespoon
Do you have any links for that stuff?
smabie
https://github.com/ynniv/vagrant-opengenera
mepian
The author of the video works for Symbolics (or rather, for what's left of Symbolics today).
natas
oh sweet, I wonder who are their customers today.
capableweb
Well, their arguments[1] are quite compelling in general; the rest of the software[2][3] seems to be highly specialized and specific. I'm guessing there is a lot of maintenance that still has to be done.

- [1] http://www.symbolics-dks.com/Genera-why-1.htm

- [2] http://www.symbolics-dks.com/Macsyma-1.htm (general purpose symbolic-numerical-graphical mathematics software product)

- [3] http://www.symbolics-dks.com/PDease-1.htm (general purpose software that uses Finite Element Analysis to obtain numerical solutions to a large class of partial differential equations)

fmakunbound
I wish I had tools like that today.
brundolf
I used to work at Cycorp, a holdover from the AI world of the 80s and to this day a classical (CL-like) Lisp shop. They have an old Symbolics box in their entryway, positioned by a couch as a small coffee table :)

I find the whole idea of hardware that's specifically optimized for a totally different programming paradigm than what we're used to just fascinating. It's not hard to imagine why we haven't seen more of it: it's really expensive to iterate on a branch in the computing hardware tree, and you're probably better off just writing a runtime for mainline systems. But still, it's fun to think about.

I am a little surprised though that nobody's written an operating system like this for standard hardware. Still a big task, but an order of magnitude less ambitious than custom hardware and with many of the benefits people would've gotten from a Lisp machine.

setpatchaddress
I've never used it, but Oberon running on bare hardware might qualify if one is OK with Modula instead of Lisp.

https://en.wikipedia.org/wiki/Oberon_%28operating_system%29#...

pjmlp
Oberon uses Oberon; for Modula-2 that would be the Lilith.
musicale
> I am a little surprised though that nobody's written an operating system like this for standard hardware

You can run a bare metal Smalltalk-80 or Squeak on the Raspberry Pi.

https://github.com/michaelengel/crosstalk https://github.com/pablomarx/RaspberrySqueak

I think Pharo uses the Squeak VM so it could conceivably run on the bare metal Pi as well.

pjmlp
Pharo started as a Squeak fork, but I think they have diverged quite a bit by now.
mark_l_watson
Thanks for mentioning Pharo Smalltalk, I wanted to widen the conversation also.

I had a Xerox 1108 Lisp Machine from 1982 to about 1986 (did a successful commercial expert system product and used it for technical demos for marketing).

I think that Pharo is the closest thing today to that good old Lisp Machine feeling/experience. An amazing development environment.

Today, I pay for LispWorks Professional, which, while a great developer experience, is not like a Lisp Machine. I find it easy to get people to pay me to do Lisp development, but not Smalltalk.

huachimingo
Some phones also were made with Java in mind, I think?

Not sure if android or old nokias, or both.

zokier
That would be Jazelle; I think a handful of Sony-Ericsson phones supported it. Nokia was not a big proponent of Java (they had Symbian to sell), and that was all before Android's time.
randomifcpfan
I was told that Jazelle was not popular because it was complicated/expensive to license and slower than JITing. (Basically Jazelle was a hardware interpreter for a subset of Java byte code.)
monocasa
It was slower than JITing, but the kinds of J2ME feature phones that ran it weren't using JITing runtimes anyway. And it was for sure faster than the interpreter loop they otherwise would have used.
pjmlp
As an ex-employee I can tell you that Nokia was surely a big proponent of Java. Before LWT appeared, the Nokia UI framework was the best extension to J2ME, hardware accelerated on their devices.

They also added J2ME extensions for Symbian capabilities, and the N95 was the first handset to have real hardware acceleration for J2ME 3D features.

The announcement of the move beyond Java and C++ to C# and the adoption of WP7 was a big shock to everyone.

killingtime74
Maybe you can be that someone
fidesomnes
I used to work with you there and you were, interestingly, the only person building useful software in the whole org. Soon you will learn to hide the fact you worked there.
mepian
There is Mezzano: https://github.com/froggey/Mezzano
brundolf
Nice, that does look like a similar idea
eschaton
It’s the same idea—it’s just as much “Lisp all the way down” as Genera but targeting commodity hardware.
Apr 07, 2021 · rml on What have we lost? [video]
If you are into Lisp Machines, this talk on Symbolics by Kalman Reti (who worked there) is worth watching (1h):

https://www.youtube.com/watch?v=OBfB2MJw3qg

If you only have 15 minutes, he gives another demo here:

https://www.youtube.com/watch?v=o4-YnLpLgtk

He has another video where he does some actual hacking (smooth scrolling of sheet music for display while playing an instrument):

https://www.youtube.com/watch?v=sfgjL7EUHZ8

On Oberon, Mesa/Cedar and Lisp Machines, the experience extends to the whole OS and any application, or REPL, not just a plain text editor.

Although I haven't mentioned it, Smalltalk also provides a similar experience.

Here are some videos that try to convey it, from my playlists,

"Eric Bier Demonstrates Cedar"

https://www.youtube.com/watch?v=z_dt7NG38V4

"Emulating a Xerox Star (8010) Information System Running the Xerox Development Environment (XDE) 5.0"

https://www.youtube.com/watch?v=HP4hRUEIuxo

"SymbolicsTalk28June2012"

"SYMBOLICS S-PACKAGES 3D GRAPHICS AND ANIMATION DEMO"

https://www.youtube.com/watch?v=gV5obrYaogU

"Symbolics Lisp Machine demo Jan 2013"

https://www.youtube.com/watch?v=o4-YnLpLgtk

The Native Oberon demos available at

https://www.youtube.com/channel/UCw6NbzmjW-wLvqVbxOXj7Ug/vid...

That's a good suggestion. Walking through a set of interactions is a solid idea.

For what it's worth, there are some videos around of people actually doing it with Lisp and Smalltalk systems, and pjmlp already posted a pile of them elsewhere in this thread.

I can add a few more:

Kalman Reti walking through some interactions with a Symbolics LispM repl:

https://www.youtube.com/watch?v=o4-YnLpLgtk

Brian Mastenbrook demonstrating Interlisp's SEDIT structure editor in the Xerox Lisp environment:

https://www.youtube.com/watch?v=2qsmF8HHskg

Rainer Joswig (lispm here on HN) showing us a little bit of repl and Zmacs interaction on a Symbolics Lisp Machine:

https://www.youtube.com/watch?v=LIGt5OwkoMA&list=PLN1hNlVqKB...

Rainer again, showing some simple interactions with Macintosh Common Lisp, which was my daily driver for years:

https://www.youtube.com/watch?v=GKG8cJl70mo

Ruby programmer Avdi Grimm shows some things that he found cool about Pharo Smalltalk:

https://www.youtube.com/watch?v=HOuZyOKa91o

Dan Ingalls (one of the original authors of Smalltalk) in a 2017 demo of Smalltalk 76:

https://www.youtube.com/watch?v=NqKyHEJe9_w

There are some other things I'd like to find for lists like this, but haven't been able to. In particular, a good demo of Apple's SK8 would be great.

If you can imagine a full-color Hypercard that could crack open and reprogram absolutely everything on the screen, including the machine code that drew the window system's widgets, all in a repl while the code was live; in which you could grab an arbitrary widget and drop it on the repl window to get a live variable reference to the widget, and then inspect it, operate on it, and reprogram it, again, while everything continued to run; in which you could build new window-system widgets by snapping together shapes and telling them to become widgets; in which you were not limited to HyperTalk for coding and text strings for data, but had a full Common Lisp at your disposal plus a Minsky-style frame system for representing data and knowledge, then you have some idea of what SK8 was like.

Doesn’t have the beautiful Display PostScript/TeX-esque typography that I want, and the less said about Lots of Irritating Silly Parentheses the better, but even 1980s tech was doing it better:

https://www.youtube.com/watch?v=o4-YnLpLgtk

I think there are lessons to be had from the likes of Smalltalk and Logo too. Highly expressive without insane levels of cryptic inconsistent punctuation rules and UX man-traps.

The terminal is exactly an example of an app-centric workflow and why it is easier to build and/or easier to use than an API focused one. If you want to see a no-apps workflow, something like the Genera LISP OS was probably much closer[0].

People choose to use apps like ls, cat, less, echo, touch, find instead of using the FILE* object directly with readdir(), stat(), read(), write(), open(), creat() etc. All of the apps are designed to have human-readable output first, lots of options for controlling that output to make it as readable as possible etc. However, none of them exposes a rich model for its output that would make it possible to easily integrate it into more complex workflows. Instead, we rely on yet other apps, like sed, grep, xargs and essentially copy-pasting text between these apps (this is all that pipes really are, essentially).

This becomes even more obvious when you use stuff like gcc or gdb, which have extremely rich and potentially useful layers of representation that they refuse to expose at all, even as APIs - only text is allowed in the interaction.

Hell, the MS Office suite is a better example of a no-apps workflow, since each of the Office UIs has a deep understanding of the data produced by the others, and you can combine these in meaningful ways - much more so than terminal apps (for example, Word can show a portion of a spreadsheet without you having to guess at what contents it might have and how to parse it, like you would if you want to expose a portion of ls's output to a file).

Interfacing code is hard, it requires well-thought-out APIs and much more work even with the best APIs. Interfacing apps with extremely minimalistic APIs (copy/paste, share) is much easier for everyone.

[0] https://www.youtube.com/watch?v=o4-YnLpLgtk&t=6m0s

TeMPOraL
In the context of terminals, if you want to see a modern API-based instead of app-based CLI experience, check out PowerShell. The underlying principle is that all commands like "ls" or "ps"[0] return their results as .NET objects, not unstructured text. If you just call "ps" in the shell, you'll get the default visual representation you'd expect - but you can also choose a different one (e.g. a list view via Format-List, or a filterable GUI table view via Out-GridView), and you can then filter the objects by properties and call methods on them.

For example, to find and kill all instances of notepad.exe, you'd write:

   Get-Process | ForEach-Object { if($_.ProcessName -eq "notepad") { $_.Kill(); } }
A bit verbose (and that's generally a problem with day-to-day PowerShell usage), but relatively trivial to turn into a cmdlet and alias it to "killall".

Of course, the above example was trivial, but the same principle works for more complex ones - instead of streams of text, you have streams of objects, which you can filter and run methods on, without doing any parsing.

--

[0] - Them being aliases to PowerShell's Get-ChildItem and Get-Process, respectively.

tsimionescu
Right, should have mentioned PS as well.

However, it's still important to note that it's easier in some sense, especially for simple tasks, to use Bash than PS, and I think that this is deeply tied to the reason why we fall back to apps rather than rich objects as the fundamental interactions.

I believe that human capacity to massage data together is still very hard to replicate in the kind of formal manner required by programming tools (for now, at least). That is why it is significantly easier for someone to copy data from one web UI to another than it is to write the rules for copying backend-to-backend automatically (up to some amount of data). This is a problem much more fundamental than the economic incentives for app creation. It's similar to the observation that, for small amounts of data, it is easier to run a select * from table and visually search, rather than go through the trouble of specifying which columns and rows you want to see.

Jul 27, 2020 · hhas01 on Colorize Your CLI
The technical problem is you can’t effect worthwhile improvements in the terminal without also fixing the fundamental flaws below it: a huge mess of bloated, inconsistent, poorly-introspectable command executables, with only untagged, untyped “dumb” pipes to connect them together in the most laborious, unsafe way possible.

The logistical problem is, well… see all the other comments from those who don’t want it to change, whether through laziness, apathy, turf protection, or whatever. Next to the People Problem, the technical problem is the easy part to solve.

Incidentally, here’s what a good 1980s CLI looks like:

https://www.youtube.com/watch?v=o4-YnLpLgtk

Whereas the *nix craphouse we’re now stuck with isn’t even a good 1970s CLI.

Honestly, at this point I wouldn’t even bother trying incremental improvement. The *nix CLI does what it was built to do, and that’s all. When substantive change does come to text UIs it will be as revolution, not evolution.

Draw a line under it. Learn its lessons, both the DOs and DON’Ts. Then whiteboard from the ground-up what needs to come next. Work out where UI/UX needs to be in 2040 to effectively serve the billions of users of the generations to come, and make a start on building that. Dash ahead of history for a change, not be shackled and dragged back by it.

As old, traditional divisions between text, voice, and touch interaction become increasingly frustrating impediments in modern mass-market devices, there’s a killing to be made in smashing down all those old artificial barriers and replacing them all with a single unified interaction model that can transition seamlessly between all three modes as its users’ needs and wishes direct.

There's this recent video showing a little of what the Pharo Smalltalk environment is like:

https://www.youtube.com/watch?v=HOuZyOKa91o

The video was made by a Ruby programmer, rather than an experienced Smalltalker, so it's more a newbie showing you cool things he's discovered than an expert showing the power of the system.

On the other hand, there's this one:

https://www.youtube.com/watch?v=o4-YnLpLgtk

That one is an expert, Kalman Reti, showing some of what it's like to work with a Symbolics Lisp Machine. (It's not a real Lisp machine; they're all at least thirty years old now; this is an emulator running the old Symbolics OS on Linux).

Go to YouTube and search for Rainer Joswig and you'll find a few video demos of Lisp Machines and MCL and so forth.

Finally, here are a few images from about eighteen years ago, also from Rainer. They show various views of the Macintosh Common Lisp and SK8 environments. It's not an experienced user showing you around, but at least you can get some idea what the environments looked like:

http://www.lemonodor.com/archives/000028.html

Symbolics offered various programming languages for the Lisp Machines: C, Pascal, Fortran, and Prolog. They also resold a certified Ada system.

The C, Pascal, and Fortran implementations were based on a common substrate, and they were all written in Lisp. Editing was done in Zmacs with syntax editor support. One could incrementally develop code in these languages and call them from Lisp.

For example the Pascal version of TeX was compiled and distributed with Symbolics Pascal and the C-written X11 window server was compiled and distributed with Symbolics C.

The Symbolics Genera demo by Kalman Reti showed a bit of the integration of C and Lisp code: https://www.youtube.com/watch?v=o4-YnLpLgtk He used a C-based JPEG library in the video showing the Symbolics Lisp Machine system.

There were also various programming languages and development systems available. On top of Lisp there were various extensive development environments available: KEE, ART, Knowledge Craft - combining programming with objects, rules, logic, ... Even the Nexpert Object system was initially developed on a Lisp Machine.

One commercial system initially developed for software maintenance, software porting and refactoring is REFINE. It is still available on current machines. Targets were various assemblers, COBOL, Mainframe languages, ...

Jan 16, 2020 · agentultra on On Composition (2019)
They did make a great foundation. Symbolics Genera was super nice apparently: https://www.youtube.com/watch?v=o4-YnLpLgtk
shalabhc
Seems so. Also this twitter thread: https://twitter.com/RainerJoswig/status/1213484071952752640

- all commands have uniform introspection
- console contains 'live views', not dead text
- embedded REPL inside apps, with commands that mirror the menus and buttons
- seamless jump to source, live edit

It's JPEG, not PNG, but here you go, a demo by Kalman Reti (one of Symbolics' developers):

https://www.youtube.com/watch?v=o4-YnLpLgtk

It shows a debug session where C code running on the lisp machine (VLM in this case, unknown variant of the VLM though) is debugged to fix a JPEG decompression error.

fit2rule
Thanks, that's really intriguing. Ah, to think of all the things that could have been ..
Here's another Lisp Machine demo showing some of these ideas in action: https://www.youtube.com/watch?v=o4-YnLpLgtk
> No, other languages (most of them) don't have real REPLs. What they have, at best can be categorized as "interactive shells".

> Try Racket, Clojure, Clojurescript, Fennel, Chez. Give Lisp an honest, heartfelt attempt to learn it.

Compared to actual Lisps, some of these have mostly interactive shells.

https://www.youtube.com/watch?v=o4-YnLpLgtk

iLemming
We can debate for a long time whether other Lisps deserve the status of "true" Lisp or whether that title forever belongs to Common Lisp only. There are many prominent Lispers who criticized Common Lisp for being overly bloated and for butchering the soul of Lisp. To be honest, I do feel like agreeing with them and am kinda glad that Common Lisp is dying. That is why I did not include CL in that list.
lispm
Unfortunately you are missing out on a lot of things Lisp has to offer by working with non-core Lisp variants. For example, it would be clearer to you what interactive work with a Read-Eval-Print Loop in Lisp can actually do - beyond the simpler improvements in interactivity. There is a whole world of core Lisp which you are ignoring. You are also slightly misleading people by not mentioning core Lisp languages and the contributors to those languages.

> or that title forever belongs to Common Lisp only.

Emacs Lisp, Visual Lisp, ISLisp (an ISO standard), various Common Lisps and their variants, Interlisp, Portable Standard Lisp, ... and a bunch of others.

You are setting your focus on languages which are more or less derived from those (for a reason, often with specific improvements or alternative features), but missing out on the core language implementation features. All the above have LISP in their name, which Racket, Clojure, Clojurescript, Fennel, Chez don't. There must be a reason for that. ;-)

Take Clojure and ClojureScript. Without Common Lisp those would not exist. I can remember when Rich Hickey was an active Common Lisp user (he used LispWorks and we were on the same mailing lists). He developed stuff in Common Lisp, worked on Common Lisp to Java integration, wrote a first sketch of Clojure in Common Lisp and so on.

Luckily he was open minded, learned a lot and created a language (with lots of lessons from Common Lisp), which a lot of people like...

The language you wish were dying is literally the one Rich Hickey learned from... as many people did before him, and as others, open minded, will also do in the future.

iLemming
I don't want to criticize CL, but I feel today it's neither an introductory Lisp nor an academic or pragmatic one. I think advocating for Lisp by pushing newbies to CL, telling them "that's the true Lisp, and others not so much", is damaging. Common Lisp today has become like the Latin of Lisps - knowing it is awesome, but the practicality of that knowledge is quickly becoming irrelevant. Those who have zero exposure to Lisps can be easily intimidated by it.

My own opinion about CL (why I said "I'm kinda glad it's dying") was shaped through the influence of other respectable Lispers, quoted below:

Guy L Steele notably criticized its standard for being over 1000 pages.

Daniel Weinreb criticized it for being Lisp2:

> ... It makes the language bigger, and that's bad in and of itself.

Richard Gabriel:

> “Common Lisp is a significantly ugly language. If Guy and I had been locked in a room, you can bet it wouldn't have turned out like that”

Paul Graham:

> A hacker's language needs powerful libraries and something to hack. Common Lisp has neither. A hacker's language is terse and hackable. Common Lisp is not. The good news is, it's not Lisp that sucks, but Common Lisp.

etc.

lispm
> but practicality of that knowledge is quickly becoming irrelevant

Just the opposite. When there is lots of interest in Lisp-derived languages, the core language stays important, since it's the fountain for many ideas.

> through influence of other respectable Lispers quotes

But you ignore the context. The only one really critical of Common Lisp is Paul Graham, and he got rich and famous through a Common Lisp application: he wrote an online-store system in CLISP and wrote two books on CL, one of which is a real classic: On Lisp. But later he was critical and designed Arc as a very different take on Lisp: small language, small programs, small identifiers, for web programming. It's the one which was used to develop this website here on Hacker News.

Guy L Steele, Gabriel, and Weinreb actually came out of a tradition where they used Lisp systems which were MUCH larger than Common Lisp. Steele literally worked two decades with and for Maclisp and Common Lisp. Steele wrote the first two defining books on CL. Steele also has a much larger scope: he worked on things like Scheme, High-Performance Fortran, Parallel Computing with the Connection Machine (which started out with a parallel Common Lisp), C, Java, Fortress, ...

Weinreb was working at Symbolics on and with Lisp Machine Lisp. He co-wrote one of the first object-oriented databases, in Common Lisp. Later he took those ideas to a C++ based OO-database. Years later Dan was back working with Common Lisp at ITA where he spent lots of time... I met him once in Hamburg at a Lisp meeting and he was happy then, working with SBCL and Clozure CL. Unfortunately he died much too soon.

Gabriel worked on the Common Lisp design. He also wrote a book on benchmarking Lisp. Then published a critical paper on Common Lisp and THEN founded a company which developed a wonderful implementation: Lucid CL. He also was then working on the CLOS standard proposal.

You are totally missing the background and what these people actually were working on. Since the broader Lisp community is very diverse and always was, there were and are always critics in all directions. Some people think modern Lisp is too static, others think it is not static enough; some think it's too interactive, others that it's not interactive enough... but you need to understand the context.

Steele, Weinreb and Gabriel were actually the first designers of Common Lisp - the so-called 'gang of five' included also Fahlman and Moon.

It's like claiming that Tolkien was critical of The Lord of the Rings, because it has too many pages, took too long to write it and there are lots of new fantasy books. Tolkien wrote that defining book - like Steele, Weinreb and Gabriel developed Common Lisp and Graham wrote a classic book on (Common) Lisp programming.

There are lots of other things in the Lisp history and I'm happy that some people work to preserve those ideas - not wishing to ignore it and that they are dying.

http://www.softwarepreservation.org/projects/LISP

iLemming
Okay, that is fair. I honestly appreciate you spending your time convincing me. To be honest I did not need much convincing. I am sold on Lisp already and CL is just a matter of time for me, personally. But I still think it's not a good introductory Lisp for those who still need convincing. Maybe after using it a bit my opinion will change, but I know several former CL devs, and they share a similar opinion.
lispm
Maybe somebody does it like Rich Hickey, coming from C++, checking out CL for some time and then moving on. But it transformed him. The next generation may look at Clojure, check it out some time and then move on to design the next thing... it might then be helpful to understand where SOME (by far not all) of the ideas originally were coming from...
iLemming
As I said: "CL is Latin of Lisps". If you're a language designer you definitely should know it.

But using a very dense language with too many features for app development, in practice, often causes more issues than it solves.

Clojure is opinionated and that's a good thing.

lispm
Clojure has 'half' of its features on the Java side. The whole package is a huge Java infrastructure with zillions of features + Clojure.

Common Lisp was designed to be that on its own, and it's actually not that large anymore - compared to similar options.

Everyone who wants to learn a Lisp which stands on its own feet - for example, SBCL has only a relatively small C core and all the rest is Lisp itself - should have a look. This has deep effects generally for Lisp: better error handling, better interactivity, images, simpler debugging, fewer language compromises on the low level, ...
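
One concrete piece of that "better error handling" is the standard condition and restart system every Common Lisp provides; a tiny sketch (parse-number is a name invented for this example):

    ;; RESTART-CASE attaches recovery choices to an error instead of
    ;; just unwinding. In the interactive debugger you can pick the
    ;; USE-VALUE restart, type a replacement, and the computation
    ;; simply continues from where it stopped.
    (defun parse-number (string)
      (restart-case (parse-integer string)
        (use-value (v)
          :report "Supply a value to use instead."
          v)))

    ;; Programmatic recovery works too:
    ;; (handler-bind ((error (lambda (c)
    ;;                         (declare (ignore c))
    ;;                         (invoke-restart 'use-value 0))))
    ;;   (parse-number "oops"))  => 0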

Lisp is also not opinionated. Working in a non-opinionated language is different. In Lisp the developer may need to develop his/her own opinions.

Right now you are looking at a shallow image of Lisp.

Jul 13, 2019 · 4 points, 0 comments · submitted by Impl0x
Symbolics Lisp Machine demo https://m.youtube.com/watch?v=o4-YnLpLgtk

And I also stumbled on this, which I find fascinating: https://m.youtube.com/watch?v=gV5obrYaogU

Since he did AI work in the 50's, probably a lot of his work was written in LISP or IPL, which had many of the concepts that LISP later used. Most of his high-profile inventions (computer mice, hypertext, interactive computers) were first commercialized on LISP machines in the 80s. LISP machines also pioneered some other concepts such as windowing systems, graphics rendering, modern networking, garbage collection, etc.

There is a fun little demo on YouTube here: https://www.youtube.com/watch?v=o4-YnLpLgtk . It shows off a LISP machine roughly as it existed in the 80s/90s.

gwern
Not sure why you'd guess that since Lisp machines weren't a thing in the 1950s. His NLS https://en.wikipedia.org/wiki/NLS_(computer_system) was actually written in something rather different, TREE-META https://en.wikipedia.org/wiki/TREE-META I'm not entirely sure what you would call their programming languages, but they definitely weren't Lisps.
By all means,

"Graphical Programming with Xerox Lisp"

https://www.youtube.com/watch?v=J4F6ioMKiqw&list=PL80581C8F8...

"Symbolics Lisp Machine"

https://www.youtube.com/watch?v=o4-YnLpLgtk&t=28s

"The Interlisp Programming Environment (1981)"

https://news.ycombinator.com/item?id=5966328

"Interlisp-D Reference Manual Volume II: Environment"

http://www.bitsavers.org/pdf/xerox/interlisp-d/198510_Koto/3...

agumonkey
have you used Pharo? I quite ~loved the experience, and I wonder if lisp machines were similar in the feeling that you really can mold your system in real time. Or if it was different.. (more possibilities, different idioms that changed the perspective)
pjmlp
I used Smalltalk/V at the university, a couple of years before Java sprung into existence.

Pharo just to occasionally check how much it has changed since those days.

Yes, Smalltalk does provide a similar experience to Lisp Machines.

Also, if you read Xerox papers about Mesa XDE and Mesa/Cedar, one of the goals was to provide a developer experience similar to the Lisp and Smalltalk workstations in the context of a strongly typed systems programming language.

agumonkey
Thanks, I never bothered with MESA/Cedar.. because mesa reminds me of opengl linux libs .. enjoy the wat.
kamaal
Basically the Lisp Machine is like booting a laptop with Emacs as an OS?

I mean not exactly, but mostly like that?

lispm
Given that GNU Emacs does not implement a file system, has no process scheduler, has no driver for Ethernet cards, doesn't have a driver for a framebuffer, ... the screen would be dark and there would be no I/O... whereas the Lisp Machine is a computer with a stack-oriented CPU running a relatively capable OS written in Lisp only (incl. disk driver, file system, NFS, DNS, SMTP, FTP, TELNET, graphics, processes, PostScript printer support, users/groups/networks/sites, editor, graphics editor, file browser, process browser, printer dialog, screen shots, mouse handling, tape backups, CD-ROM reader, ...).
pjmlp
Not at all, that is an oversimplification.

Lisp Machines allowed for multiple applications, the Emacs was only related to the REPL and text editing, aka Zmacs.

https://en.wikipedia.org/wiki/Zmacs

I don't use Emacs regularly any longer, but it used to lack the ability to embed widgets or graphics in its REPL, one of the reasons why I used to be an XEmacs user instead.

Back to the Lisp Machine: you have to imagine that the whole OS is written in a mix of Assembly and Lisp, nothing else.

Then the whole OS was exposed to the developer, as they were single user workstations.

Imagine the ability to access any running application from the REPL and interact with it in some way; for example, reformatting the selected image in the word processing application. That's a very basic example.

Something that on modern systems is only kind of replicated with COM/.NET alongside PowerShell, and it fails short of what was possible.

When an application trapped at any level, the debugger pane got invoked and you were able to fix the error, then redo the action that triggered it.

Also the OO system was much more powerful than what most OO programming languages are capable of. You had multiple inheritance, traits (aka protocols), aspects, contracts, multiple dispatch available.

https://en.wikipedia.org/wiki/Genera_(operating_system)
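
To make the last point concrete, here is a minimal sketch in portable Common Lisp (CLOS, the standardized descendant of the Lisp Machine object systems; the class and method names are invented for illustration):

    ;; Multiple inheritance: a class may combine several superclasses.
    (defclass document () ((title :initarg :title)))
    (defclass image-mixin () ((bitmap :initarg :bitmap)))
    (defclass illustrated-document (document image-mixin) ())

    ;; Multiple dispatch: the method is selected on BOTH arguments,
    ;; not just on the receiver as in single-dispatch OO languages.
    (defgeneric render (thing device))
    (defmethod render ((doc document) (device (eql :screen)))
      (format t "Displaying ~a on screen~%" (slot-value doc 'title)))
    (defmethod render ((doc document) (device (eql :printer)))
      (format t "Printing ~a~%" (slot-value doc 'title)))

    ;; (render (make-instance 'illustrated-document :title "Demo") :screen)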

jodrellblank
> Something that on modern systems is only kind of replicated with COM/.NET alongside PowerShell, and it falls short of what was possible.

Something I have always felt was a killer feature of the Windows and Office platforms, and something the industry is moving away from. I never saw what you describe, but I'm sad to increasingly lose what interop we've had since the 1990s and 2000s in exchange for cross-platform support, security of one kind or another, and silo'd cloud services.

But then again, what you describe sounds worse, not better. Like half "wow, malware playground", and half "I just wanted a nutcracker but all I have available is this hammer factory and the machine shop which built it". Would I see the past differently if I had started there, then moved to this present? Would I see COM interop differently if I had started in a world of Apps with constrained interop based on "share this content with a link" popups?

Along with "if everything's top priority, nothing is", and "one size fits nobody very well", if a system is configurable for any task, does that mean it fits no task very well? A set of magnetic letters on a fridge door lets you rearrange the words to say whatever you want, but that doesn't make it "better than all novels".

agumonkey
All these concerns are understandable, but after reading a lot of Lisp and non-mainstream literature (JITted OSes, the OCaml compiler, Lisp machines), I'd give a lot of trust points to the thing described above.

But I'd expect a lot of work on security and encryption to be needed, since those times had almost no troubles in that regard.

Oct 09, 2018 · agumonkey on 12 Factor CLI Apps
I never really found the verb list to be entirely satisfactory. It's great that they set boundaries, but I find it too verbose. There are other kinds of syntactic ergonomics with concise vocabulary (Lisp has the trailing ! and p conventions, for instance; they're a bit harder to swallow, but I find the code a bit more poetic and easier to remember as a pattern).

About objects vs. strings: Kalman Reti (of Symbolics, IIRC) talked [1] about how an OS passing pointers could have a much easier time than one serializing everything to strings and then deserializing, especially when done ad hoc through sed/grep/perl/whatever. It pains me to see how basic Linux utils are 30% --usage handling and 30% output formatting (and they all duplicate this).

MS did a great thing with PS.

[1] https://www.youtube.com/watch?v=o4-YnLpLgtk

guhidalg
The verb list is, in practice, 90% Get-* and Set-* , so for most users that will cover the case where they need to read and modify some system state in a script.

It may not cover your particular use case (I want to foo this bar), but as a restriction on the language it helps your users avoid having to discover esoteric commands.

felixfbecker
I like it. Nowadays CLIs have gotten out of hand with verbs; every CLI assigns a very specific meaning to its verbs and it's not clear at all. Just look at kubectl: https://kubernetes.io/docs/reference/generated/kubectl/kubec... What is the difference between "get" and "describe"? It's totally not obvious what "drain", "cordon", "taint", "scale" or "expose" do. Something like Set-KubeReplicaCount describes way more accurately what scale actually does. Or New-KubeService for "expose".

Remember that you can always use and create aliases, so you could still use kube-scale or kube-expose in the terminal if you want. But for scripts readability is the most important thing. Any newcomer can look at any PowerShell script and know what it does. And any newcomer could type in New-Kube, hit tab, and see what new things you can create in Kubernetes, instead of having to google what the command for creating a service is.

agumonkey
You're right, these are completely obscure, part of the friendly-neologism fad we're seeing. Please note that the examples I gave from Lisp aren't short-sighted concepts but very generic ones (predicates, side effects, global variables), hence the limited scope for confusion there.
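
For reference, a small illustration of those generic conventions, in standard Lisp style (Common Lisp marks destructive operations with an N- prefix where Scheme uses a trailing !):

    (evenp 4)                 ; => T; a trailing P marks a predicate
    (defvar *print-stats* t)  ; earmuffs *...* mark a global special variable
    (nreverse (list 1 2 3))   ; destructive: N- prefix in CL, foo! in Scheme
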
The Lisp Machine REPL was graphical, not text based.

So the REPL could display any kind of data structure, and later calls could influence already displayed data.

https://www.youtube.com/watch?v=o4-YnLpLgtk&t=687s

https://www.youtube.com/watch?v=6uPwQuxjgQo

https://www.youtube.com/watch?v=IMN5RSIm6xw

There were GUI-based debuggers.

Then there was access to the complete OS stack and other running applications, and everything was compiled to native code; there wasn't any VM running.

Try doing a (disassemble 'func) in Emacs and getting actual 80x86/ARM/... machine code back.
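
For comparison, in a natively compiling Common Lisp such as SBCL this is a one-liner (a minimal sketch; the exact output depends on implementation and platform):

    ;; Define a function, then look at the machine code the compiler produced.
    (defun add2 (x) (+ x 2))
    (disassemble #'add2)
    ;; Prints real x86-64 (or ARM) assembly, because the function was
    ;; compiled to native code rather than bytecode for a VM.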

drb91
You can get some of this with McCLIM today, though not much is written for it.
jacobush
Oh man... impressive.
My ideal modern terminal would be pretty much a reimplementation of Symbolics' Listener: https://youtu.be/o4-YnLpLgtk?t=1m46s

- A presentation-based UI, where every on-screen element is always linked to the data it represents: https://dspace.mit.edu/handle/1721.1/6946

- A powerful autocomplete that knows about possible keywords and arguments, and can list them in a graphical fashion (e.g. a drop-down menu).

- An ability to use any previously displayed data as input to a new command just by clicking on it, with the UI highlighting only the things that can be used as valid input to the current command.

- Embedding of images and arbitrary UI widgets.

- Sensible names for commands and their arguments, i.e. "Delete File" instead of "rm", and ":Output Destination File /home/erikj/log" instead of "> /home/erikj/log". This makes a lot more sense with the enhanced autocompletion facility than the current 70s style cryptic Unix two-letter commands that were employed because of the hardware limitations. It would be easier to learn and less prone to errors.
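
None of this requires magic. A toy sketch in Common Lisp of the underlying idea (not the actual Genera command processor; all names here are invented for illustration): commands declare typed arguments up front, so the shell can complete, prompt, and validate before anything runs.

    ;; Registry mapping long command names to typed argument lists.
    (defparameter *commands* (make-hash-table :test #'equal))

    (defmacro define-shell-command (name (&rest arguments) &body body)
      ;; ARGUMENTS is a list of (parameter type) pairs the UI can inspect.
      `(setf (gethash ,name *commands*)
             (list :arguments ',arguments
                   :function (lambda ,(mapcar #'first arguments) ,@body))))

    (define-shell-command "Delete File" ((file pathname))
      (delete-file file))

    (defun command-argument-types (name)
      ;; What an autocompleting listener would consult while you type.
      (getf (gethash name *commands*) :arguments))

    ;; (command-argument-types "Delete File") => ((FILE PATHNAME))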

baq
looks like powershell stole at least some of those ideas, which makes sense given what it is.
jstimpfle
You are not going to get more than 10% of shell users to type ":Output Destination File /home/erikj/log" instead of just "> /home/erikj/log". Learn it once, save typing a lot of characters, many times a day.

The idea that you should type so much more just to be "not cryptic" is as old as COBOL. Which, as most people would agree, was not a good idea.

erikj
Why do you think I mentioned the autocompletion facility? Shell users won't have to type any more characters than they do now. Please give Genera a try in the emulator to see how it would work with your own eyes; it's nothing like COBOL.
Video of a Symbolics Lisp Machine (in Brad Parker's emulator).[1]

Not surprisingly, it looks a lot like Emacs (or vice-versa).

[1] - https://www.youtube.com/watch?v=o4-YnLpLgtk

lispm
Because it uses rectangles and text?

The Symbolics UI looks&feels nothing like Emacs.

Zmacs is just ONE application on it. Applications are not Zmacs buffer modes.

The shown applications, the Listener and Peek, are not Zmacs buffers or windows. Neither one uses Zmacs buffers/windows.

If you look at the Listener application window:

  * it has no terminal mode
  * it has no minibuffer
  * it has no meta-x commands
  * it is fully graphical (PostScript-compatible drawing model)
  * you can't split it
  * it has no menubar
  * it has no iconbar
  * it has no modes, it is just a Lisp listener and a command interpreter
  * it is not an editor window and not an editor buffer,
    you can't edit the text printed to it
  * for each output it remembers the Lisp objects
  * for each input it can reuse the Lisp objects
  * since it is a window, you can resize/move it
  * you can only type commands and s-expressions to it,
    and they will be parsed online
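
A crude approximation of that output recording, in plain Common Lisp (nothing like the real Dynamic Windows implementation, just the concept): every result is kept as a live object rather than as text, so later input can refer to the object itself.

    ;; Each evaluation records the actual result object alongside the output.
    (defvar *output-history* (make-array 0 :adjustable t :fill-pointer t))

    (defun listener-eval (form)
      (let ((result (eval form)))
        (vector-push-extend result *output-history*)
        (format t "[~d] ~s~%" (1- (fill-pointer *output-history*)) result)
        result))

    (defun recall (n)
      ;; Reuse the recorded OBJECT behind output N, not its printed text.
      (aref *output-history* n))
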
Aug 28, 2017 · pjmlp on What makes a good REPL?
Emacs suffers from not being like the Lisp Machine REPL, meaning it lacks graphical display in buffers.

If I remember correctly, XEmacs used to support it (which was my favorite fork), but it seems to have faded away.

For example, try to achieve this demo on Emacs.

https://www.youtube.com/watch?v=o4-YnLpLgtk

abhirag
I have never used a Lisp Machine, so forgive my naivety, but I saw the demo and you seem to be referring to the ability to display images inline in a buffer. Emacs can definitely do that; I have used Emacs IPython Notebook[1], which is a REPL supporting this. I could fire up a Jupyter notebook and use Pillow[2] to recreate the image manipulation part of the demo.

[1] -- (https://github.com/millejoh/emacs-ipython-notebook/blob/mast...) [2] -- (https://python-pillow.org/)

pjmlp
That is partially what I was referring to.

The other part, which might not be visible in that video, is the integration of the debugger into the REPL, and the ability to redo a piece of code after breaking into the debugger and fixing it.

So you can do something like, REPL => error (ask to start debugger) => track down and fix error => restart error expression => finish the execution of the original REPL expression with the respective result.

bitwize
SLIME doesn't give you this?
pjmlp
I don't know, back when I cared about Lisp development on Emacs, SLIME wasn't a thing, as you might understand from my XEmacs reference.
juki
SLIME does let you redefine functions and restart a stack frame (assuming the Lisp implementation supports that). You can even do it without using Emacs/SLIME, although that's not very convenient of course. A silly example with SBCL,

    ~ $ rlwrap sbcl --noinform
    CL-USER> (defun foobar (x y)
               (if (evenp x)
                   (/ x y)
                   (* x y)))
    
    FOOBAR
    CL-USER> (foobar 10 0)
    
    debugger invoked on a DIVISION-BY-ZERO in thread
    #<THREAD "main thread" RUNNING {1001BB64C3}>:
      arithmetic error DIVISION-BY-ZERO signalled
    Operation was /, operands (10 0).
    
    Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.
    
    restarts (invokable by number or by possibly-abbreviated name):
      0: [ABORT] Exit debugger, returning to top level.
    
    (SB-KERNEL::INTEGER-/-INTEGER 10 0)
    0] backtrace 3
    
    Backtrace for: #<SB-THREAD:THREAD "main thread" RUNNING {1001BB64C3}>
    0: (SB-KERNEL::INTEGER-/-INTEGER 10 0)
    1: (FOOBAR 10 0)
    2: (SB-INT:SIMPLE-EVAL-IN-LEXENV (FOOBAR 10 0) #<NULL-LEXENV>)
    
    0] down
    (FOOBAR 10 0)
    
    1] source 1
    
    (IF (EVENP X)
        (#:***HERE*** (/ X Y))
        (* X Y)) 
    1] (defun foobar (x y)
         (if (and (evenp x) (not (zerop y)))
             (/ x y)
             (* x y)))
    
    WARNING: redefining COMMON-LISP-USER::FOOBAR in DEFUN
    FOOBAR
    1] restart-frame
    
    0
pjmlp
Thanks for the example.
abhirag
The only debugging experience I find enjoyable in Emacs is debugging Clojure using cider[1] and I think it comes close to what you are describing but I think you might already be aware of that :)

[1] -- (https://github.com/clojure-emacs/cider/blob/master/doc/debug...)

Mar 03, 2017 · 2 points, 0 comments · submitted by gkya
Oct 13, 2016 · lispm on Atom 1.11
> If that's really your major issue with it...

You seem to be misinterpreting me quite often. It's ONE issue. A development environment which is not multithreaded, is not very advanced.

> it's quite rare for me to actually have trouble with it.

No surprise: Blub paradox at work. Your tools limit your thought.

If my Lisp Machine would be single threaded, it would suck.

> Yes, the UI is awkward, but I've never really had any issues with it. It's functional.

Most Lisp-based development environments have much better UIs. For example, in LispWorks or on a Lisp Machine the keychords are shorter. The Dynamic Windows UI of the Lisp Machine is still light-years ahead of anything in GNU Emacs.

Here I made a demo of how the documentation system works on the Symbolics. It uses Zmacs (the Emacs editor on the Lisp Machine, which Stallman used before he developed GNU Emacs) as a component to write documentation records. This stuff had been developed by the mid-to-late 80s...

https://vimeo.com/83886950

Here Kalman Reti gives a demo of a Lisp Listener on the Symbolics and debugging mixed Lisp/C code:

https://www.youtube.com/watch?v=o4-YnLpLgtk

qwertyuiop924
>You seem to be misinterpreting me quite often. It's ONE issue. A development environment which is not multithreaded, is not very advanced.

Seriously, give the comma a break! It's starting to actually confuse me.

Anyways, on the subject at hand... A lot of the work Emacs does is either 1) manipulating text onscreen, where multithreading doesn't matter, or 2) communicating with subprocesses, which is usually pretty close to multithreading in any case. MT would be nice, but it's not as important as you think it is.

>No surprise: Blub paradox at work. Your tools limit your thought.

Oh, it totally sucks that there's no MP; it's just that there's usually a workaround. This is Unix, not DOS: we can spawn processes if we have to.

>Most Lisp-based development environments have much better UIs. For example, in LispWorks or on a Lisp Machine the keychords are shorter.

If I want shorter keychords, then I'll bind them myself. I'm not sure if you've noticed, but the rest of us don't have Knight keyboards at our desks: We have to make do with what we've got.

>The Dynamic Windows UI of the Lisp Machine is still light-years ahead of anything in GNU Emacs.

You keep saying that, and have yet to show an adequate example. This seems to show that Emacs's UI is adequate. And for editing text, the thing I use my editing environment most for, it is.

>Here I made a demo of how the documentation system works on the Symbolics. It uses Zmacs (the Emacs editor on the Lisp Machine, which Stallman used before he developed GNU Emacs) as a component to write documentation records. This stuff had been developed by the mid-to-late 80s...

It's a bit nicer than Emacs's, I'm willing to admit, but it's quite close, actually.

>Here Kalman Reti gives a demo of a Lisp Listener on the Symbolics and debugging mixed Lisp/C code:

That is actually genuinely cool, but it's not something we can have anymore: Most of us are on UNIX platforms, which don't really allow for this kind of debugging quite as well as the old Smalltalk/Lisp systems. But Emacs does have GDB integration, which is the next best thing.

Symbolics Lisp Machine demo Jan 2013, Kalman Reti

https://www.youtube.com/watch?v=o4-YnLpLgtk

Unix shells are notoriously not user friendly. The commands and their interfaces are full of inconsistencies and often really strange. Reading man pages is horror.

The shell languages are not much better.

There are some useful concepts, but in general Unix shells are used despite their user unfriendliness.

The shells of the various Lisp Machines were quite different. The Symbolics shell, later called the Dynamic Lisp Listener, was quite nice on the GUI side and in its management of commands, completions, defaults, interactive help, etc.

See for example: https://www.youtube.com/watch?v=o4-YnLpLgtk

The interactive help of that Lisp Machine OS is quite a step up from what any typical shell offers. Though it was mainly developed for single user machines with powerful GUIs.

The problems of that approach: it wasn't very sophisticated on the text terminal - actually it was quite bad. The whole UI mostly assumed a GUI. For development one needed an extended Common Lisp (or Zetalisp), which is a bit too complex for many users.

See also a video I made long ago about the user interface of Symbolics Genera, the operating system of the Symbolics Lisp Machine line of personal workstations.

https://vimeo.com/159946178

umanwizard
You are confusing terminology here. Typical UNIX commands like ls, grep, rm, and so on are not part of the shell.
laumars
> Unix shells are notoriously not user friendly. The commands and their interfaces are full of insonsistencies and often really strange. Reading man pages is horror.

They're not that bad (GNU coreutils is generally pretty good) and you usually remember the edge cases fairly quickly (eg `dd`). Sadly you get inconsistencies in all coding frameworks, whether it's semantics, function names in the core libraries, or whatever.

UNIX shells do have a lot of hidden gotchas which can make life "interesting" (read: "oh fuck oh fuck oh fuck" lol). But the power of being able to use reusable blocks of any language through pipelining (and to a lesser extent, exit codes) is genius. It means one can mix and match LISP, Java, Perl, Python, C++, Go all inside one shell script.

I do understand the hate towards UNIX shells. There are a lot of faults, and a lot of times I'd be halfway through writing a Bash script and then be wondering if I should have just written it in Perl or Go instead. But no tool is perfect and pragmatically I've found shells to be far more productive than anything I've ever attempted to replace it with. Which is the real crux of why we use these tools. But like anything in IT, this is just my personal preference. Your mileage may vary.

lispm
> They're not that bad (GNU coreutils is generally pretty good) and you usually remember the edge cases fairly quickly (eg `dd`). Sadly you get inconsistencies in all coding frameworks, whether it's semantics, function names in the core libraries, or whatever.

Generally this is all awful and low-level. Just see the UI difference between 'dd' and 'Copy File' on a Lisp Machine. The UI is worlds away.

> UNIX shells do have a lot of hidden gotchas which can make life "interesting" (read: "oh fuck oh fuck oh fuck" lol). But the power of being able to use reusable blocks of any language through pipelining (and to a lesser extent, exit codes) is genius. It means one can mix and match LISP, Java, Perl, Python, C++, Go all inside one shell script.

You can do that on a Lisp Machine, too. With the difference that no pipelining of text is necessary. Just reuse the objects. The data is all fully object-oriented and self identifying.
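
A minimal sketch of what "reuse the objects" means, in portable Common Lisp (a made-up example, not Genera code): each step hands the next one pathname objects, so nothing is flattened to text and re-parsed in between.

    (defun largest-files (directory n)
      ;; DIRECTORY returns pathname objects; sort and slice them directly.
      (let ((files (sort (directory (merge-pathnames "*.*" directory))
                         #'>
                         :key (lambda (path)
                                (or (ignore-errors
                                      (with-open-file (stream path)
                                        (file-length stream)))
                                    0)))))
        (subseq files 0 (min n (length files)))))

    ;; (largest-files #p"/tmp/" 5) => a list of pathnames, still live objects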

Note that I'm using Unix and other OS variants since the 80s and I'm fully aware of command line UIs from VMS, SUN OS, Cisco, IBM AIX, OSX, GNU, Plan9, Oberon OS, and various others.

> I do understand the hate towards UNIX shells.

It's not hate. Most Unix shells are just fully dumb. Many people like to use primitive text-based UIs with lots of corner cases, which makes them look intelligent for remembering obscure commands and obscure options without real UI help.

Take cp.

The shell does not know the various options the command takes. The shell does not know what the types and syntax of the options are. The shell does not know which options can be combined and which cannot. It can't prompt for options. It can't check the command syntax before calling the command. It can't provide any help when the syntax is wrong. It can't deal with errors during command execution. There is no in-context help. It can't reuse prior commands other than by editing them textually. The output of the command is just text, not structured data. There are really, really zillions of problems.

There have been attempts to address this by putting different user interfaces on top. For example, IBM provided an extensive menu-based administration tool for AIX.

> But no tool is perfect and pragmatically I've found shells to be far more productive than anything I've ever attempted to replace it with. Which is the real crux of why we use these tools. But like anything in IT, this is just my personal preference. Your mileage may vary.

Many people have found shells productive. That's why there is a zillion different shells. You can even use Lisp-based shells like scsh and esh (http://www.serpentine.com/blog/2007/02/16/esh-the-fabulous-f...) or GNU Emacs.

But for the most part all these attempts stay in that little box and don't escape the general problems.

laumars
I suspect your argument is now more about personal preference than anything. So I'll just address a few specific points you've raised:

> Generally this is all awful and low-level. Just see the UI difference between 'dd' and 'Copy File' on a Lisp Machine. The UI is worlds away.

Well yeah, I did already cite `dd` as an example of inconsistency. :)

> You can do that on a Lisp Machine, too. With the difference that no pipelining of text is necessary. Just reuse the objects. The data is all fully object-oriented and self identifying.

Indeed you can, as you also can with DOS, PowerShell and so forth. I wasn't suggesting that UNIX shells were unique (though I can see how it might read that way), but that it was UNIX shells which pioneered that concept. For all their faults and the technology that might have superseded them: the idea of pipelining reusable blocks of code was genius for its era.

PowerShell also supports passing objects, like Lisp does. Personally I prefer the dumb approach; however, in all other aspects of programming I do prefer strongly typed languages. This is just personal preference.

> The shell does not know the various options the command takes. The shell does not know what the types and syntax of the options are. The shell does not know which options can be combined and which cannot. It can't prompt for options. It can't check the command syntax before calling the command. It can't provide any help when the syntax is wrong. It can't deal with errors during command execution. There is no in-context help. It can't reuse prior commands other than by editing them textually. The output of the command is just text, not structured data. There are really, really zillions of problems.

This part isn't accurate. The original Bourne shell cannot, but bash, zsh and fish can all do all of the above. Albeit sometimes (particularly with Bash) you need to install additional helper routines that aren't always shipped / configured with the default package. I believe csh also supports most if not all of the above too.

I'm sure Lisp does it better, but I was never arguing that UNIX shells are better than Lisp to begin with. Just that I believe Bash et al to have a lower barrier of entry than Lisp. I think on that specific point we might have to agree to disagree - but I don't see many non-programmers using Lisp, and this is why I think Bash has a lower barrier of entry (to go back to my original point).

I can completely understand and relate to why you enjoy working inside a Lisp REPL shell though.

lispm
> I'm sure Lisp does it better

Lisp does nothing. It is a programming language.

> Just that I believe Bash et al to have a lower barrier of entry than Lisp

I doubt that, given how horrible Bash is as a language and as a shell. There is no reason why there can't be saner command systems and better shell languages.

See the zsh documentation on completion:

http://zsh.sourceforge.net/Guide/zshguide06.html

This is all totally over the head of the average user.

     _perforce_revisions() {
        local rline match mbegin mend pfx
        local -a rl

        pfx=${${(Q)PREFIX}%%\#*}
        compset -P '*\#'

        # Numerical revision numbers, possibly with text.
        if [[ -z $PREFIX || $PREFIX = <-> ]]; then
            # always allowed (same as none)
            rl=($rl 0)
            _call_program filelog p4 filelog \$pfx 2>/dev/null |
               while read rline; do
                if [[ $rline = (#b)'... #'(<->)*\'(*)\' ]]; then
                    rl=($rl "${match[1]}:${match[2]}")
                fi
            done
        fi
        # Non-numerical (special) revision names.
        if [[ -z $PREFIX || $PREFIX != <-> ]]; then
            rl=($rl 'head:head revision' 'none:empty revision'
                    'have:current synced revision')
        fi
        _describe -t revisions 'revision' rl
      }
Tell me that this piece of code has a 'low barrier of entry' or even a 'lower barrier of entry'. It's just that a generation of experts has self-selected to find that usable.

> but I was never arguing that UNIX shells are better than Lisp to begin with

I was not talking about Lisp. I was talking about software. One could write much better command shells in C than what Unix shells offer. I understand that certain people find Unix-like shells attractive. From a general user-interface perspective they are horrible.

Oberon, TempleOS or OpenGenera: these projects are vertical, consistent and use only one programming language.

In OpenGenera, everything displayed on the screen is typed and can be retrieved as an s-expression (like in a web browser):

https://www.youtube.com/watch?v=o4-YnLpLgtk

https://github.com/ynniv/opengenera#additional-reading

Xerox controlled the whole stack; they invented it for their systems.

So these workstations were a whole-stack OS.

You can read about their implementations on the links I provided before.

Additionally, you can get the original Smalltalk-80 from Stephane's web site.

http://stephane.ducasse.free.fr/FreeBooks.html

This gives a very rough idea of how it felt to use a Smalltalk workstation:

https://www.youtube.com/watch?v=8yxCJfayW-8

https://www.youtube.com/watch?v=JLPiMl8XUKU

The REPL was what is known in Smalltalk as the Transcript.

Similarly, the Interlisp-D workstation also had its own mixture of REPL and editing:

https://www.youtube.com/watch?v=wlN0hHLZL8c

Some of the ideas lived on in the Lisp Machines afterwards:

https://www.youtube.com/watch?v=NOysrxexTXg

https://www.youtube.com/watch?v=o4-YnLpLgtk

Mesa was a memory-safe systems programming language created at Xerox as an evolution of Extended Algol, intended to replace BCPL. Niklaus Wirth based his Modula-2 design on it.

Shortly thereafter they updated Mesa into Cedar, which added support for reference counting with a local GC for cycle collection. The system then got called Mesa/Cedar.

It allowed the same interactive experience as the other workstations, but using a GC-enabled systems programming language.

The REPL provided nice features, like auto-suggestions when a typo would cause a compilation failure. It was also probably the first graphical debugger for a strongly typed language.

Any OS API could be used in the REPL by typing modulename.procedure, which could use other OS APIs to get its input from different sources.

An idea Wirth adopted into Oberon.

You can see how all three environments looked like here:

http://www.chilton-computing.org.uk/inf/literature/books/wm/...

And yes, Powershell alongside .NET is probably the closest we have in modern computers to those systems.

Followed by Swift Playgrounds and AppleScript on Apple systems.

Although Apple also provided similar experiences with Common Lisp, Hypercard and their NewtonScript and Dylan.

lispm
Personal historical favorite:

MIT Lisp Machine screenshots from 1980:

http://bitsavers.informatik.uni-stuttgart.de/pdf/symbolics/L...

pjmlp
Thanks, very nice screenshots.

Yet another proof that the success of AT&T meant a left turn.

Here's a talk from Kalman Reti using a Lisp Machine.

http://www.youtube.com/watch?v=o4-YnLpLgtk

pros: everything is lisp structure, no text serialization/deserialization to communicate between parts of the system.

cons: see above.

Actually there are a few videos. :)

But I guess they all fall short of conveying how these systems really were.

http://www.loper-os.org/?cat=10

http://www.loper-os.org/?p=932

https://www.youtube.com/watch?v=o4-YnLpLgtk

seanmcdirmid
I still am interested in piecing together the experience for history. I hear stories but never any real details!
jules
I think most of the amazement came from the time they were in. In that time this was truly revolutionary. Having a whole OS based on it is still arguably revolutionary, but from a programming/IDE perspective I don't think they are anything special any more. As far as I know a modern Lisp + Emacs gives you roughly the same experience.
pjmlp
Emacs falls short because it doesn't allow for live editing of inline data structures, unless that has changed in the last few years.

DrRacket's REPL is probably closer to the experience.

Also, very few IDEs enjoy the same edit-and-continue experience; maybe commercial Common Lisp environments do.

TeMPOraL
Not sure what you mean by "live editing of inline data structures" here (care to give an example?), but the interactive debugger in Emacs/SLIME can do quite a lot of nice things, including modifying arbitrary data on the fly, live.

For instance, if I have a hashtable returned from a function I called in REPL, I can inspect it and modify its values and properties. Also, within the REPL itself, text is "smart" and copy-paste tracks references, so I can paste the "unreadable objects" (i.e. #<Foo 0xCAFECAFE>) directly into REPL calls and have it work, because SLIME will track the reference linked to a particular piece of text output.
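
The hooks SLIME builds on are mostly standard Common Lisp; a quick sketch of poking at a live hashtable (the :answer key is just an example):

    (defparameter *table* (make-hash-table))
    (setf (gethash :answer *table*) 41)
    (describe *table*)                ; textual description of the object
    (inspect *table*)                 ; interactive inspector (UI varies)
    (incf (gethash :answer *table*))  ; modify it live, mid-session => 42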

pjmlp
Check the example of image editing

https://www.youtube.com/watch?v=o4-YnLpLgtk

lispm
The presentation based REPL in Emacs + SLIME was inspired by the Symbolics Lisp Machine presentation feature.

But I can assure you, there is a difference between a REPL feature in an editor and a GUI using it system-wide, as on the Lisp Machine: in the depth of the features, in the integration, and in the feel of the user interface.

TeMPOraL
Haven't had a chance to experience it myself yet. I'm watching the videos now though and I think I begin to see the difference.
lispm
Check out this video (which I made some time ago), which shows the presentation UI from an application perspective (here a document authoring system); as a bonus, the application integrates Zmacs (the Emacs of the Lisp Machine)...

https://vimeo.com/83886950

Think of the Documentation Examiner as a version of Emacs Info. Think of Concordia as a version of an Emacs buffer editing documentation records. The Listener is a version of the SLIME listener. You can also catch a short glimpse of the graphics editor, IIRC.

TeMPOraL
Thanks a lot! I'll watch it after work and get back with impressions :).
When I see this unifying of language logic and shell logic, as a Lisp head, I can't help but think of Lisp machines:

https://www.youtube.com/watch?v=o4-YnLpLgtk

I am really curious how much of *nix source code is taken up by parsing and formatting (IIRC 30% of ls), and whether it would be a good idea to decouple them the way you mention. My JSON-infused distro is still on my mind.

ps: this Lisp machine talk mentions how 'programs' (I guess functions) exchanged plain old data rather than serialized ASCII streams, making things very fast. There are caveats around address-space sharing, though, but it's another nice hint that we could investigate other ways.

http://www.youtube.com/watch?v=o4-YnLpLgtk

sudioStudio64
That's an interesting point. Abstracting parsing/serialization out of the data piped between commands would lead to more consistent argument handling for all commands.
agumonkey
Somebody wrote docopt (which started as a Python lib) as a generic parser for POSIX usage strings (part of the standard). Maybe it could lead to simpler argument parsing and 'user interface' generation, whether static documentation or shell completion scripts.

Also, structured output may lead to more relational tools, less regexful ad-hoc parsing, and maybe some kind of typecheck so callers can be notified when they need to be rewritten.

Regarding 'IPC', that talk's point about the Lisp Machines' single namespace was interesting: passing pointers instead of serialized data. www.youtube.com/watch?v=o4-YnLpLgtk
Oct 10, 2014 · Scuds on Rich Command Shells
This demo appears to be running on an emulator instead of an actual antique piece of hardware, but you can get the idea.

https://www.youtube.com/watch?v=o4-YnLpLgtk

The AI winter is something you don't hear too much about: http://en.wikipedia.org/wiki/AI_winter

IIRC that site is hosted on a Raspberry Pi running SBCL and CL-HTTP; if it could handle HN traffic it would be just magical.

Since the website is not up I decided to fish for a video showing Open Genera stuff elsewhere to share with you fine folk: https://www.youtube.com/watch?v=o4-YnLpLgtk

Unfortunately there is no easy way for us to play with Open Genera, since it's proprietary and expensive. The world would be a better place if this type of system popped up more often.

For those interested in Lisp I recommend the book "Land of Lisp" http://landoflisp.com/ or if you're more like a Racketeer then "Realm of Racket" http://realmofracket.com/

Another video showing Symbolics: http://www.youtube.com/watch?v=o4-YnLpLgtk
bokchoi
That was a great intro video. Pretty amazing stuff.
You mean like the command line on Lisp machines, such as this example from Symbolics Genera? https://www.youtube.com/watch?v=o4-YnLpLgtk

The command line is contextually intelligent and presents interactive graphics and text. Not a simulation of a teletype on a simulation of a character terminal.

mietek
Thank you for sharing this video.

There's an Alpha workstation in my closet, waiting to have OpenGenera installed...

Seems like this could be handled more like the Symbolics Genera command line, which offered extensible intelligent commands with context-sensitive completions. Here's a demo by Kalman Reti: http://youtu.be/o4-YnLpLgtk
> And the old sexy. Back in the day, UNIX was a piece of garbage and the future was LISP machines, if you asked a lot of people.

I don't know... it still looks like the future today. Genera [1] was able to do things I can't do even with a lot of effort nowadays with my language/IDE/tooling.

[1] http://www.youtube.com/watch?v=o4-YnLpLgtk

Dec 02, 2013 · 1 points, 0 comments · submitted by dharmatech
Oct 08, 2013 · 2 points, 0 comments · submitted by agumonkey
Oct 08, 2013 · 2 points, 0 comments · submitted by duggieawesome
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.