HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
#rC3 - What have we lost?

media.ccc.de · Youtube · 310 HN points · 3 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention media.ccc.de's video "#rC3 - What have we lost?".
Youtube Summary
https://media.ccc.de/v/rc3-channels-2020-20-what-have-we-lost-



We have ended up in a world where UNIX and Windows have taken over, and most people have never experienced anything else. Over the years, though, many other system designs have come and gone, and some of those systems have had neat ideas that were nevertheless not enough to achieve commercial success. We will take you on a tour of a variety of those systems, talking about what makes them special.

In particular, we'll discuss:

  • IBM i, with emphasis on the Single Level Store, TIMI, and block terminals
  • Interlisp, the Lisp Machine with the interface of Smalltalk
  • OpenGenera, with a unique approach to UI design
  • TRON, Japan's ambitious OS standard

More may be added as time permits.

Calvin Buckley Techfury90 TQ Hirsch

https://pretalx.rc3.studio/rc3-channels-2020/talk/KMVBDB/
HN Theater Rankings
  • Ranked #29 this year (2024)

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Aug 25, 2022 · 309 points, 167 comments · submitted by hcarvalhoalves
donatj
I don’t think people these days realize how interesting pre-X pre-UNIXification MacOS Classic was.

Single files were often made of multiple data streams called “resource forks”, and this was actually utilized heavily (NTFS has a similar concept, alternate data streams, but it’s almost never used). Files had “creator codes” as metadata rather than file extensions: they were directly and firmly associated with the application that created them, rather than carrying a description of their content.

Files in and of themselves were a very different concept in MacOS Classic.

In my eyes a big part of what killed this was the Internet. To the Internet, a file is clearly only a single stream of data. Files are typed by content via MIME and file extension, not by what created them. Anything with resource forks needs to be bundled in a .sit file. This is clearly inconvenient, and it made MacOS files second-class citizens.

Mind you, resource forks still exist in macOS today, they’re just used for custom icons, tags and file comments. Metadata. Nothing to the extent that MacOS Classic used them for, and all things users don’t mind losing on uploading to the web.

- https://www.macintoshrepository.org/articles/152-what-is-a-s...
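
You can still poke at this on a current Mac: the resource fork is exposed through a special path suffix. A minimal sketch in C (macOS only; it assumes the ..namedfork path, which macOS still accepts):

    /* Open a file's resource fork via the special ..namedfork path
       and report its size. Sketch only; macOS-specific behavior. */
    #include <stdio.h>

    int main(int argc, char **argv) {
        if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
        char path[4096];
        snprintf(path, sizeof path, "%s/..namedfork/rsrc", argv[1]);
        FILE *f = fopen(path, "rb");
        if (!f) { perror("no resource fork"); return 1; }
        fseek(f, 0, SEEK_END);                 /* seek to end to measure it */
        printf("resource fork: %ld bytes\n", ftell(f));
        fclose(f);
        return 0;
    }

Try it on a file that has a custom icon set in Finder; most plain files will report no resource fork at all.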

asveikau
The coolest thing about resource forks is how it encouraged separation of data and code. You could often alter a program's behavior and UI in ResEdit without modifying any code.
WillAdams
Give me a bundle of files w/ an extension instead.

.rtfd is quite nice and it's easy to re-use all the elements of a document.

jstimpfle
Sounds like a layering issue to me. We have enough problems with semantic differences between filesystems on different platforms - permission models, allowed characters in names etc. What is a strong reason why the multiple streams shouldn't just be implemented on top of a single-stream filesystem file abstraction? I know a number of formats that do such a thing, like ELF, PDF, Sqlite3, various media containers, various archive formats.. Probably some people here can come up with dozens without looking anything up.

If you do these resource streams, how do you copy them to various other mediums, like physical drives, tapes, pipes, sockets... All consist only of a single stream at a low level, because why make it harder than that? Now that means that instead of "cat", your standard way to read such a multi-stream file would be a command that serializes it to the "sit" format as you mentioned, making that format almost the canonical representation. So what was the point of implementing these resource forks in the filesystem again?
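
To make that concrete: a multi-stream container on top of one ordinary file really is trivial. A minimal sketch in C (the on-disk format and all names here are invented for illustration):

    /* Serialize two named streams into one ordinary file:
       stream count, then per stream: name length, name, data length, data. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void put_stream(FILE *f, const char *name,
                           const void *data, uint32_t len) {
        uint8_t nlen = (uint8_t)strlen(name);
        fwrite(&nlen, 1, 1, f);
        fwrite(name, 1, nlen, f);
        fwrite(&len, sizeof len, 1, f);
        fwrite(data, 1, len, f);
    }

    int main(void) {
        FILE *f = fopen("multi.bin", "wb");
        if (!f) return 1;
        uint32_t count = 2;
        fwrite(&count, sizeof count, 1, f);        /* stream count */
        put_stream(f, "data", "hello", 5);         /* the "data fork" */
        put_stream(f, "rsrc", "\x01\x02\x03", 3);  /* the "resource fork" */
        fclose(f);
        return 0;
    }

The resulting single file copies over any pipe, tape, or socket unchanged, which is exactly the layering argument.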

Criticizing Unix often comes back to the "reinventing poorly" phrase; the simplicity of not implementing some features at every layer is a virtue.

lproven
I strongly disagree.

The problem with the Unix lowest-common-denominator model is that it pushes complexity out of the stack and into view: complexity that other designs _thought_ about and worked to integrate.

It is very important never to forget the technological context of UNIX: a text-only OS for a tiny, desperately resource-constrained, standalone minicomputer. It was written for a machine that was already obsolete, and it shows.

No graphics. No networking. No sound. Dumb text terminals, hence the obsession with text files being piped to other text files and filtered through things that only handle text files.

At the same time as UNIX evolved, other, bigger OSes for bigger minicomputers were being designed and built to directly integrate things like networking, clustering, notations for accessing other machines over the network, accessing filesystems mounted remotely over the network, file versioning and so on.

I described how VMS pathnames worked in this comment recently: https://news.ycombinator.com/item?id=32083900

People brought up on Unix look at that and see needless complexity, but it isn't.

VMS' complex pathnames are the visible sign of an OS which natively understands that it's one node on a network, that currently-mounted disks can be mounted on more than one network node even if those nodes are running different OS versions on different CPU architectures. It's an OS that understands that a node name is a flexible concept that can apply to one machine, or to a cluster of them, and every command from (the equivalent of) `ping` to (the equivalent of) `ssh` can be addressed to a cluster and the nearest available machine will respond and the other end need never know it's not talking to one particular box.
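
For those who have never seen one: a full VMS file specification packs node (or cluster alias), device, directory path, file name, type, and version into one string, for example (all names invented here):

    CLUSTR::DKA100:[PROJECT.SRC]MAIN.C;42

The leading CLUSTR:: can name one machine or a whole cluster alias, which is exactly the point being made above.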

50 years later and Unix still can't do stuff like that. It needs tons of extra work with load-balancers and multi-homed network adaptors and SANs to simulate what VMS did out of the box in the 1970s in 1 megabyte of RAM.

Unix only looks simple because the implementors didn't do the hard stuff. They ripped it out in order to fit the OS into 32 kB of RAM or something.

The whole point of Unix was to be minimal, small, and simple.

Only it isn't any more, because now we need clustering and network filesystems and virtual machines and all this baroque stuff piled on top.

The result is that an OS which was hand-coded in assembler and was tiny and fast and efficient on non-networked text-only minicomputers now contains tens of millions of lines of unsafe code in unsafe languages and no human actually comprehends how the whole thing works.

Which is why we've built a multi-billion-dollar industry constantly trying to patch all the holes and stop the magic haunted sand leaking out and the whole sandcastle collapsing.

It's not a wonderful inspiring achievement. It's a vast, epic, global-scale waste of human intelligence and effort.

Because we build a planetary network out of the software equivalent of wet sand.

When I look at 2022 Linux, I see an adobe and mud-brick construction: https://en.wikipedia.org/wiki/Great_Mosque_of_Djenn%C3%A9#/m...

When we used to have skyscrapers.

You know how big the first skyscraper was? 10 floors. That's all. This is it: https://en.wikipedia.org/wiki/Home_Insurance_Building#/media...

The point is that it was 1885 and the design was able to support buildings 10× as big without fundamental change.

The Chicago Home Insurance building wasn't very impressive, but its design was. Its design scaled.

When I look at classic OSes of the past, like in this post, I see miracles of design which did big, complex, hard tasks, were built by tiny teams of a few people, and still work today.

When I look at massive FOSS OSes, mostly, I see ant-hills. It's impressive but it's so much work to build anything big with sand that the impressive part is that it works at all... and that to build something so big, you need millions of workers, and constant maintenance.

If we stopped using sand, and abandoned our current plans, and started over afresh, we could build software skyscrapers instead of ant hills.

But everyone is so focussed on keeping our sand software working on our sand hill OSes that they're too busy to learn something else and start over.

jstimpfle
I can relate to your point. I know both Windows and Linux quite well and both have their strengths and weaknesses. Just to put it in perspective: you didn't say why we need containers of multiple streams implemented at the filesystem level. Also, these alternative designs that you describe and that work so wonderfully are often seen through rose-tinted glasses. You've probably seen a few videos and become impressed by what was possible such a long time ago. But you probably haven't actually used these things, so you haven't experienced the limitations.

I hate many things about Linux, but there is a lot of development work where Linux is much stronger. A lot about the "minimal design" approach is still valid today, and I don't mean Dbus or Docker or Kubernetes or whatever, which I likely would hate if I actually knew them.

In my view, the main problem about (Desktop) Linux is fragmentation and lack of standardization. A strong suit of Windows development is the APIs, at least the older ones that don't get deprecated after a year. There are useful APIs for everything related to a Desktop experience, and you can count on their existence as a developer.

The lack of standardization is what makes it feel like sand. Apart from the simpler stuff (POSIX), there isn't a trustworthy authority that maintains stable APIs for a solid user experience, at least not APIs that I feel like using personally.

> VMS' complex pathnames are the visible sign of an OS which natively understands that it's one node on a network, that currently-mounted disks can be mounted on more than one network node even if those nodes are running different OS versions on different CPU architectures. It's an OS that understands that a node name is a flexible concept that can apply to one machine, or to a cluster of them, and every command from (the equivalent of) `ping` to (the equivalent of) `ssh` can be addressed to a cluster and the nearest available machine will respond and the other end need never know it's not talking to one particular box.

Are you sure you understand how the Unix filesystem (VFS) works? On Unix, a filepath is exactly what you say: a name that can identify a resource. There are distributed filesystem protocols that are of course portable, not dependent on CPU architecture or anything.

I don't get what your point is about these drive-letter paths, they often create annoying complexity. I believe even NTFS has developed extensions (volume mount points) to get rid of them. So, not that I think filepaths are beautifully easy to use on Unix, but they're much better than on Windows in my experience.

anthk
On Unix, sorry, but now Plan9/9front (the true successor to Unix) has made VMS obsolete.
lproven
Did it, though?

I mean, yes, I agree with you, Plan 9 is the true successor to Unix.

But Inferno is the true successor to Plan 9.

And yet, both are obscure and relatively rarely used anywhere, whereas VMS Software Inc. just shipped OpenVMS version 9.2, the first production-ready release of native x86-64 OpenVMS.

Here's a news story I wrote on it: https://www.theregister.com/2022/05/10/openvms_92/

VMS has now run on 4 different CPU architectures, migrated 3 times, and it is still out there, still in production, still being used by enough organisations to pay for another port and a new native version in 2022.

1. VAX → Alpha

2. Alpha → IA64 (Itanium)

3. IA64 → X86-64

It's doing well for something "obsolete".

Plan 9, sadly, has not even managed to replace Unix enough to hinder the vast uptake of an original 1970s-style monolithic FOSS version: Linux. I'm typing on it right now.

anthk
9front took plan9 further. A truckload of new drivers and software. They even have a video and audio player, some game ports, system emulators, (hardware virtualization!), and so on. Get 9front and try it.

OFC not even close to Linux or BSD support, but everything works like magic. No ssh, no enforced VTs, no POSIX (coding in C is pure love here), no crapware.

On Linux, meh. I prefer OpenBSD, my main OS. Meanwhile I am using Alpine for, well, that ecosystem bound to the penguin with Linux-only software. But not for long...

lproven
I have read up on 9front a little. It's good to see that people are still working on it.

Personally, as someone who's used xNix OSes for >30 years out of pragmatism not any love, I find Plan 9 completely inscrutable and unusable.

Years ago there was an Inferno ISO available -- gone now, AFAICT -- and I managed to run it. I mentioned Inferno here:

"Fed up with Windows? Linux too easy? Get weird, go ALTERNATIVE" https://www.theregister.com/Print/2013/11/01/25_alternative_...

I found the Inferno desktop a lot more navigable and comprehensible than 8½, Rio, Acme etc.

I still wonder if it might be possible to merge Plan 9 (and derivatives) and Inferno. Give the choice of C compiled to a native binary, or Limbo compiled to Dis.

anthk
You can set a proper Icewm/Win95 type panel+wm in a week with very little coding.

There's a classical file manager:

https://pspodcasting.net/dan/blog/2019/images/filemng.png

With "bar" for 9front, and "winwatch", you just have to write some title bar in order to manage the windows with ease.

Jaruzel
Working on large VAXclusters early in my career, totally spoiled me as to what was possible. Even now, I look at what is laughingly called 'clustering' and I sigh.
boondaburrah
Resource Forks were necessary on early Macintosh so that the OS itself could partially load programs into RAM when you only had 512K or so, loading resources as needed.

You could argue that your resources instead should be multiple files in a folder so we don't have to treat a fork specially, and you'd be right, and you'd also have invented the NeXT/OSX .app bundle.

jstimpfle
I don't see the reason why the streams couldn't be implemented _on top_ of single-stream filesystem implementations. The use case you mention is probably solved with virtual memory and ELF today?
boondaburrah
Yeah this design is pre-mmu and virtual memory, and NeXT/Apple solved it with bundles (store the streams as separate files in a specially named (ends in .app, .bundle, .framework etc) directory that the OS presents as a single file).

I... kinda want to see how this could be done with ELF now. It seems totally possible.
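
As a rough sketch of the ELF version of the idea (GCC/Clang section attribute; the section name .rsrc is invented for illustration), resource data can live in its own named section and travel inside the one binary:

    #include <stdio.h>

    /* Place the "resource" in its own named ELF section; it can be
       listed and extracted afterwards with tools like readelf -S
       or objcopy. */
    __attribute__((section(".rsrc")))
    static const unsigned char icon[4] = { 0x00, 0x01, 0x02, 0x03 };

    int main(void) {
        /* The section is mapped along with the binary itself;
           no separate resource file is needed at runtime. */
        printf("first resource byte: %d\n", icon[0]);
        return 0;
    }

The binary stays a single stream for every copy/transfer tool, while the "fork" remains individually addressable by name.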

heja2009
Actually Apple had to find solutions to some of the problems you mention when they transitioned to OS X:

* multiple resource forks can be presented as files in a directory, the directory having the name of the multi-stream file and the files in it named after the stream, e.g. "resources"

* for transferring data to other mediums/computers etc. you can use container formats such as zip or tar, but with the unpacker having the ability to unpack them properly to a multi-stream file

* since on modern systems such as Windows you can seamlessly file-browse into container formats such as zip, this is even less of a problem

* actually OS X to this day uses something like this for "Apps" as they are a special container/directory that you can browse into when you know the magic incantation

jagged-chisel
Additional info for macOS:

Not only apps, but many other types as well. And the organization technique is nothing magical, they’re just directories with an extension in the name, and usually some standardized structure. They’re officially referred to as “bundles.”

Apps are bundles, various plugins are bundles (audio units, pref panes for System Preferences), and if Finder didn’t treat them specially, you’d never think they were. In a shell, they’re just like any other directory.

thequux
I'm glad that this talk continues to inspire people!

FWIW, there's previous discussion over here: https://news.ycombinator.com/item?id=26723886

dang
Thanks! Macroexpanded:

What have we lost? [video] - https://news.ycombinator.com/item?id=26723886 - April 2021 (83 comments)

mintplant
Did you end up giving the follow-on talk mentioned here?

https://news.ycombinator.com/item?id=26724786

thequux
I did not; unfortunately, my mental health took a dive in 2021 and I couldn't gather the energy to write another talk. I still intend to continue the series at some point, though.
emikulic
Sorry to hear it, man. :( I hope you get better.
anyfoo
IBM i opened my eyes a bit, and made me a bit sad, when I decided to check it out after decades of working on what I now realize are entirely UNIX-y OSes. By that I mean that besides the many actual UNIX-derived systems like Linux, Solaris, HP/UX, macOS I worked on, IBM i made me realize that DOS, Windows, OS/2, and whatever else most people are probably aware of nowadays, are also UNIX clones to a much higher degree than I thought.

IBM i is completely different in many ways. It has a unified 128-bit address space. It does not have the same concept of a hierarchical filesystem (by default; you can bolt one on, but it clearly does not "fit"), and it does not even strongly have the concept of keeping everything in "streaming" files (or their equivalent) to begin with. It also has a completely different "command line" concept, for example, and countless other aspects that are hard to explain succinctly.

It is a bit like learning Haskell: you used to feel you could learn any new language in an afternoon (after C, C++, Java, JS, Pascal, perl, python, awk, shells, BASIC, and countless others), but then you discover you have to relearn the very basics, and that what you thought of as universal actually isn't.

A lot of these concepts work really well. They are at a level of abstraction that I would not have thought possible in practice. They allow the system to be incredibly stable and low maintenance, and elegant. The underlying architecture was changed at least once (maybe twice, not sure), and it was entirely seamless for customers.

It made me sad because I discovered a computing world that could be widespread reality, but in all likelihood won't be. That's thanks to UNIX being so pervasive that it's now basically woven into the very fabric of computing, which is of course in no small part thanks to IBM's extreme closedness. I once thought UNIX was the way to go, but I'm not so sure anymore. And now that it's everywhere, too many of its concepts are considered a "ground truth". UNIX won because it was hard to avoid getting exposed to it, while for IBM i you had, and still have, to fight for even just trying it out.

Interestingly, the IBM mainframe world, i.e. z/OS and its predecessors, do feel the same in terms of "you have to relearn everything", but with the opposite outcome. Where IBM i is presenting you with unique abstractions from the very base of the OS, it's amazing how little abstraction there is in the mainframe world. You clearly get a sense that mainframes come from a time where a lot of common concepts simply had not been invented yet, while on the other hand IBM i (or rather its predecessors) reimagined OSes at a much later time.

EdwardCoffin
I never used the i series (formerly AS/400, and System/38 before that), but reading Inside the AS/400 by Frank G. Soltis made a huge impression on me. Highly recommended for anyone interested in the details.
whartung
I kind of wish I had gotten a position at some company to work on an AS/400. Back in the day, my company was looking for a new "solution" and considered most everything, including an IBM. But eventually we went UNIX.

What I'm curious about, though, is: in the world of a random back-office developer, how much of the, well, "inner beauty" of the machine would I have encountered?

Most of the cool Unix-y stuff happened through ad hoc integrations with random Stuff as circumstances presented themselves, and I don't know how much of the AS/400 a random (likely) COBOL programmer would have delved into.

EdwardCoffin
My impression from reading the book was that it had a beautiful internal architecture that I imagined would largely be hidden from the user. I think as a, say, COBOL programmer on an AS/400, one would pretty much see it as not much different from, say, a mainframe. This opinion is entirely formed without experience though.
anyfoo
It's still very different from a developer/"system user" perspective (I don't say enduser because then you are kept away from most anyway). A lot of concepts one takes for granted are different.
anyfoo
If you'd like to try, there is a way to get a free user account at pub400.com. That got me interested enough that I set out to get my own AS/400.

It took me literally years until I stumbled upon an affordable machine with licenses. The machine I got is decades old and was decommissioned in 2008, after a long life.

IBM created something revolutionary and did everything to keep the public away from it.

chiph
The team behind it also tried to keep the rest of IBM away from it, for fear that The Suits From Armonk would come in and ruin their product and their culture.
pjmlp
Exactly my experience when delving into the computer library at the university.

I was heading down some UNIX zealotry path, and then started diving into everything that happened before UNIX, what was going on at Xerox, DEC, Olivetti, ETHZ, and so forth; suddenly UNIX wasn't as interesting as I once thought.

jmclnx
Never worked on IBM i, but Wang VS was a very unique system. It also expected the terminals connected to it to have a small CPU.

Too bad it never made it to the wild. In its last days it was ported to an IBM AIX System (RS6000?), but the company went bankrupt before that port made it out.

ksec
Never heard of IBM i [1], turns out it is a new name for OS/400.

[1]https://en.wikipedia.org/wiki/IBM_i

tuatoru
> ... on the other hand IBM i (or rather its predecessors) reimagined OSes at a much later time.

Yes. The System/38 → AS/400 → iSeries → IBM i lineage resulted from the "Future Systems" project, which started in the late 1960s and tried to imagine computers as appliances, while recognising that hardware was changing fast.

Hardware architecture independence, encapsulation of software objects, a highly regular, helpful user interface, and minimal administration labor were all design goals of that project.

It succeeded too well. User-written programs were stored with their "intermediate representation" (think assembler). They could be, and were, retranslated automatically when moved to a new architecture.

Upgrades from a 36-bit processor with 20-bit addressing to a 48-bit processor with 32-bit addressing to POWER (64-bit / 64-bit) were essentially just a backup and restore[1] for customers.

As probably mentioned in the video, the system could be configured with a modem and would phone home to IBM if it detected a fault.

It was common that after a few years with turnover of accounting personnel, offices would not even know that they had an iSeries - this is probably still the case.

This lack of mindshare is probably what killed the i. That and IBM not wanting to sell it, to protect their mainframe business.

---

1. On backups: The OS stored a backup history (dates of the last 20 or so backups, from memory) with each object. It also stored each object's date of creation and the name and serial number of the system it was created on, as well as the dates of metadata modification (changes to access rights, for instance).

Not surprisingly with all the bookkeeping it was much harder on disk drives than IBM's comparable systems. Disk drives that lasted for many years when used with a 4300 series (cut-down mainframe) tended to die in 18 months used with the System/38. RAID-1 and RAID-3 (2 stripes plus a dedicated parity drive) were implemented in the early 80s, from memory. RAID-5 came a bit later IIRC.

Programs and files were objects. So were user profiles and group profiles. Access control lists (objects that contained lists of users and groups, with a permission list for each entry) were used to control access rights to other objects. They themselves were objects at the same level as these: created, manipulated, and backed up in just the same way.

It tried out a lot of things. It had a unified concept of "message queues". There were permanent ones like the QSYSOPR (system operator) queue, the equivalent of syslog. Processes each got a message queue. Programs within a process each got a message queue, so a program could tell its grandparent something and continue. As a programmer you could create your own message queues, the analog of named pipes in Unix. Message templates were predefined and stored in "message files", which allowed you to write "second level text" (essentially detailed help) for each message. The shell's built-in command prompting and menu system was built around message files and queues as well.
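
For Unix readers, the closest everyday analog is probably POSIX message queues; a minimal sketch (queue name invented; on Linux, link with -lrt):

    /* Create a named queue, send one message, receive it back. */
    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
        mqd_t q = mq_open("/demo_q", O_CREAT | O_RDWR, 0600, &attr);
        if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

        const char *msg = "job finished";
        mq_send(q, msg, strlen(msg) + 1, 0);   /* priority 0 */

        char buf[128];                         /* must be >= mq_msgsize */
        unsigned prio;
        mq_receive(q, buf, sizeof buf, &prio);
        printf("got: %s\n", buf);

        mq_close(q);
        mq_unlink("/demo_q");
        return 0;
    }

The OS/400 version, as described above, went much further: queues were pervasive, hierarchical per program invocation, and tied into help text and the command prompter.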

anyfoo
I don't know why your comment isn't rated higher. That's a lot of very interesting information that is directly relevant to the topic.
atombender
Where can I learn more about this?
jacquesm
I've worked on old IBM systems (old by today's standards; back then they were top of the line), the 4381 to be specific. It ran very fast compared to all of the UNIX machines I had played with (the IO capabilities of those systems were quite impressive for the time, and even today such a system would, besides its size, be quite OK), but it wasn't elegant in the way that UNIX was: just tons and tons of little details to remember, whereas with the UNIX survival guide (about 15 commands) you could normally get through the day until you started to do crazy stuff. The IBM gear came with absolutely amazing documentation though.

"everything is a file" is brilliant, but UNIX didn't take that as far as it should have, Plan 9 is much further along that road and I would consider it to be even more elegant than UNIX.

The way things work is just how you would expect them to work, and being able to compose stuff (for instance, in Plan 9, running a new version of the window manager in a window of the old one) is what I really like about that particular system.

Between Plan 9 and Erlang we missed a bus somewhere.

anyfoo
Be careful though: the IBM 4381 is an example of the mainframe world, so very much not similar to IBM i. It's S/370-compatible and ran OS/VS1 and VM/370.

In terms of abstraction, almost the opposite in some sense, as I noted in my last paragraph. In terms of usage as well: IBM i's command line model, which I also mentioned, helps a lot in using the system even without external documentation (more than the UNIX shell does), and that seems to be the opposite in the mainframe world.

jacquesm
Yes, that's true, it is a completely different beast from IBM i. By the way, I'm not sure if you have used these remotely, virtualized, or in person, but they are quite impressive from a hardware perspective and incredibly solidly built compared to almost everything else that I've worked with, including VAXen from that era. We had two of them (cold spare...), maxed-out Group 2 models.

That's still 'only' 32 MB of RAM, which may seem tiny by today's standards, but that machine happily served a few hundred branch offices of a fairly major bank all by its lonesome, so that's thousands of concurrent users (https://en.wikipedia.org/wiki/CICS).

contingencies
Between Plan 9 and Erlang we missed a bus somewhere.

Love it. Added to https://github.com/globalcitizen/taoup

That's your second pithy wisdom tidbit. (The first was The idea that data is a corporate asset needs to die. Data is a corporate liability.)

jacquesm
I'm honored. :)
topspin
> "everything is a file" is brilliant, but UNIX didn't take that as far as it should have

Agreed. The interface between applications and the operating system is not sufficiently abstracted. If it were, software would be vastly better: faster, more secure, easier to manage, scale, migrate, troubleshoot, etc.

There is a lot of attention paid to programming language design and too little paid to the environment in which software has to operate. I think the low hanging fruit is improving operating systems and their abstractions. Solving this at the programming language level is not feasible; all that produces is a virtual machine that adds overhead, complexity and valueless diversity.

jtvjan
You can also watch this on their own website if you don't want to deal with YouTube: https://media.ccc.de/v/rc3-525180-what_have_we_lost
ggm
I used a "pick" system briefly in 1982/3 timeframe, in Leeds. The guy who was running it was a DB obsessive and was convinced world hunger was going to be solved if the entire department dropped VMS and Unix for Pick. He moved on. Pick didn't solve world hunger.

I was also using Norse Data systems with some OS specific to them, which was basically Job Control Language uplifted from cards to a terminal. It sucked. If memory serves me right it had a problem similar to early Tops-10: you could walk down directory trees but there was no "up" function, just 'go back to root' or 'home'. The Unix creation of . and .. as links in the current directory was just phenomenal to me, in terms of simplicity and outcome. From memory, Norse Data moved to a Unix variant.

When CP/M ruled the 8086 world, MS-DOS was to some extent "exotic". That didn't last, nor did CP/M in the end.

Burroughs mainframes had the kernel integrated into a CI/CD system: if you entered editable kernel code in an editor with write permission, save-and-exit went to compile-and-deploy in a very few steps.

KA9Q was pretty much a multi-tasking OS, running "inside" DOS. If you had it on a floppy, you could "run" it on almost any PC, dial up, connect, have TCP/IP, and then have your mostly-asynchronous SMTP process be connected to and receive your email. I had a disgustingly heavy 486 laptop I lugged around the world on a work trip doing this. I wrote nothing to local disk in DOS; I lived in KA9Q. Phil Karn had written the OS we really wanted, inside DOS. Amazing. My main problem was buying the correct Telco-approved connector for the modem and wiring it up to plug into my device each time.

I don't think RSTS or RTE or any of the different OSes which ran on the PDP-11 were "exotic" at the point they were being put into deployment running radar and missile systems, but they were pretty different from how we view an OS these days. A lot of things in RSTS or George (the OS for UK mainframes in banking) were uplifted into the future as encapsulated/emulated systems.

lproven
BTW, that is Norsk Data:

https://en.wikipedia.org/wiki/Norsk_Data

Nursie
> I was also using Norse Data systems with some OS specific to them ...

In the mid 00s I had a large set of C source dumped at my desk, absent build-system, and was asked to get it going across a few different OS. I found #ifdefs for a bunch of architectures I'd never even heard of, and #ifdef ND5000 was only resolved when I had a chat with one of the original authors and asked WTF??

A quick look at ndwiki.org (because of course that exists) tells me they had NDIX, which was a custom UNIX, and SINTRAN which might be what you're describing.

donkeybeer
What is meant by dumped on your desk here? Was it a big set of floppies or a stack of printouts?
Nursie
Hah, perhaps not that literal :)

I was given a very precious CD-R(!) with about a million lines of C on it, that we had bought from the vendor of some crucial middleware we were using. Included with it were the build files for a proprietary build system that we hadn't bought alongside the source license. My task was to understand it and produce a new build system for it that would work on Solaris/Sparc, HP-UX/PA-RISC and HP-UX/Itanium, AIX/Power, Linux/x64 and z/Linux, and Windows Server 2003.

It's funny how only about half of those are a going concern now!

ggm
I'd be lying if I said "yes", because it's in the tape drive, the one I can't read any more because the rubber band inside my head broke. That's the problem with old media (memory): it gets corrupted by not being refreshed, used, and by too much heat.
bombcar
Linux has done more to kill exotic OSes than anything Microsoft or Apple has ever done.

Because Linux (and you can throw in the BSDs here if you want) is so capable as is, the chance someone will write an OS from scratch is pretty low. They'll much more likely base it off of Linux and go from there, which means that it'll just be another Unix clone.

Even things like Fuchsia are heavily influenced by it, and end up feeling "similar". As someone else mentioned, Unix won so hard most everything else is dead; even Windows is very "unix-like" in ways people don't even realize.

georgewsinger
This reminds me that Alan Kay really hates Linux: https://youtu.be/rmsIZUuBoQs

This stings since I really like Alan Kay and have been influenced by his ideas, but am also working on a Linux-based VR headset as a thinking tool[1]. I think Alan would approve of the hardware but not the software, perhaps ultimately saying something like "you can spend your whole life dicking around in Linux, but still not understand anything about computing".

[1] https://simulavr.com

indymike
> Linux has done more to kill exotic OSes than anything Microsoft or Apple has ever done.

I'm pretty sure commodity hardware, standards like POSIX, and the fact that Unix-like OSes are taught in operating systems classes did more than just Linux alone.

> Unix won so hard most everything else is dead; even Windows is very "unix-like" in ways people don't even realize.

I'm not so sure. I learned to code on VMS and then AT&T Unix, and I see a lot more VMS in Windows than anything. VMS was actually pretty interesting itself.

anyfoo
But then either VMS is much closer to UNIX than to things like IBM i (or IBM mainframes, which are however themselves very, very different from IBM i), or Windows is closer to UNIX than VMS. Because Windows is very close to UNIX compared to either of those.
mek6800d2
Microsoft hired the main architect of VMS, Dave Cutler, away from DEC to design Windows NT. (VMS++ = WNT)
Maursault
You've got it backwards. Digital cancelled Cutler's pet project and he shopped himself, and Digital's IP, to Microsoft. VMS and NT have a lot of similarities, but NT is not VMS, though they have a common ancestor: MICA = NT

[0] https://en.wikipedia.org/wiki/DEC_MICA

chasil
Windows has very stringent file locking; Unix doesn't.

I believe this major design difference came from VMS.
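
For the curious, the Unix side is advisory locking: a minimal sketch with fcntl (POSIX; the filename is invented) shows the cooperative model, where only processes that also ask for the lock are affected:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("shared.dat", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* Advisory write lock over the whole file (l_len = 0). */
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
                            .l_start = 0, .l_len = 0 };
        if (fcntl(fd, F_SETLK, &fl) == -1) {
            perror("locked by a cooperating process");
            return 1;
        }
        /* ... write here; a process that never calls fcntl() can
           still write to the file regardless, which is the contrast
           with Windows' mandatory-style sharing modes ... */
        fl.l_type = F_UNLCK;
        fcntl(fd, F_SETLK, &fl);
        close(fd);
        return 0;
    }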

indymike
> But then either VMS is much closer to UNIX than to things like IBM i

I'm not really sure that is the case at all. VMS had a versioning file system with support for multiple file types, including stream, sequential, indexed and relative. It had a very different security model, four levels of processor access (on Unix there's just kernel and userland), and it had very different networking capabilities, including clustering baked in (in 1983).

> Because Windows is very close to UNIX compared to either of those.

There's deep VMS heritage in Windows NT: both VMS and NT were written by David Cutler.

anyfoo
I know! But then VMS is maybe closer than it seems? Have a look at IBM i.
lboc
Agreed. I've only limited experience with i, but more with z. After using them, the other OSes you might use start to look and feel pretty similar to each other.
mek6800d2
I worked for 5 years on VMS for a real-time satellite image processing HW/SW system for NASA. VMS was wonderful. At another company, I then worked on a Unix system for a few years, then 2 years on a port of our software to Unix. (The port was smooth and quick, thanks to VAX C and its relatively complete library; most of the 2 years was spent adding new capabilities for a particular customer.) The rest of my career has mostly been under Unix.

Microsoft hired the main architect of VMS, Dave Cutler, away from DEC to design Windows NT. (VMS++ = WNT!) I haven't worked on any in-depth Windows programming projects. I did read an article some years ago about Cutler, VMS, and WNT. The author pointed out an insightful distinction between VMS and Unix with regard to I/O operations: VMS tells you when an operation completes, whereas Unix tells you when you can begin an operation. As a result, I think, asynchronous I/O under VMS was there and available from the get-go, but always seemed like some odd thing grafted onto Unix, making it more painful than it should have been. I have never used it, but I believe WNT's overlapped I/O was patterned after the VMS model?
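
The two models are easy to see side by side with real POSIX calls; a sketch (file path arbitrary; on older glibc, link with -lrt). poll() says "you may now begin a read", while aio_read() reports when the read has completed, which is closer to the VMS (and overlapped-I/O) model:

    #include <aio.h>
    #include <fcntl.h>
    #include <poll.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Readiness (classic Unix): wait until a read would not block. */
        struct pollfd pfd = { .fd = 0, .events = POLLIN };   /* stdin */
        poll(&pfd, 1, 0);   /* 0 ms timeout: a single check here */

        /* Completion (VMS-style): start the read, wait until it is done. */
        int fd = open("/etc/hosts", O_RDONLY);
        if (fd < 0) return 1;
        static char buf[256];
        struct aiocb cb;
        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof buf;
        aio_read(&cb);                         /* returns immediately */
        const struct aiocb *list[] = { &cb };
        aio_suspend(list, 1, NULL);            /* block until completion */
        printf("read %zd bytes\n", aio_return(&cb));
        return 0;
    }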

chasil
There is a book that addresses the reasons for Cutler's departure from DEC, but it does not go into the details that you have mentioned.

https://www.goodreads.com/en/book/show/1416925.Show_Stopper_

Maursault
> Microsoft hired the main architect of VMS, Dave Cutler, away from DEC to design Windows NT.

Not sure why you mince words. Cutler developed NT at Digital. That's better. In 1988, Cutler took his work (the literal data: the MICA OS code from DEC's cancelled PRISM RISC project) and his engineering team at DEC with him to Microsoft. I believe it was much less like poaching and much more like defecting. DEC ultimately forced Microsoft into an alliance under threat of lawsuit for the IP theft, to migrate the VMS user base to NT, with Microsoft paying quite a lot, $180M, to avoid a lawsuit, for, get this... training DEC engineers to use NT. In 1995, someone at MIT found large chunks of DEC MICA code unaltered, including comments, in Windows NT.

spideymans
> even Windows is very "unix-like" in ways people don't even realize.

WSL is a tacit acceptance of UNIX dominance.

bombcar
Even before that, much of the UI of DOS itself is Unix inspired (DOS even had “files” for certain devices such as PRN and COM)

There are VMS-inspired parts also, but even that is somewhat in the Unix sphere of things.

q-big
> Even before that, much of the UI of DOS itself is Unix inspired

The inspiration rather came from CP/M.

p_l
Not exactly: several "killer features" of DOS 2.0 were inspired by Xenix and not present in CP/M, including a /DEV/ directory, the mere concept of a hierarchical filesystem, and optional use of forward slashes in directory paths. Backslashes were introduced because of CP/M legacy in commands.
lproven
No they weren't.

This is an urban myth of computing, and it needs to die.

CP/M commands did not accept command-line switches in any standard way. You are trying to "correct" people who are telling the real story by repeating a myth.

http://www.os2museum.com/wp/why-does-windows-really-use-back...

p_l
Ah, I seem to have been waylaid by some of the older mentions there - thanks for the link with mention of M-DOS. I was certain of DEC connections, but thought it came through CP/M and forgot to take into account PDP-10 history of Microsoft.
chasil
...which in turn was strongly influenced by a previous product from DEC.

"Various aspects of CP/M were influenced by the TOPS-10 operating system of the DECsystem-10 mainframe computer, which Kildall had used as a development environment."

https://en.m.wikipedia.org/wiki/CP/M

babypuncher
With the upgrades to WSL in Windows 11 you can run fsv[1] with minimal hassle and finally get the true UNIX experience[2].

1: http://fsv.sourceforge.net/ 2: https://www.youtube.com/watch?v=dxIPcbmo1_U

lproven
WSL is Microsoft doing its classic "embrace and extend" strategy, yet again.

https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguis...

Apocryphon
Here's hoping Haiku gets more momentum. BeOS was a little less UNIX-like, I think.
BirAdam
However, Haiku is gaining a ton of GNU/Linux software. It becomes less interesting to me when people are just porting the same junk to it.

Serenity is far more interesting.

anthk
Most of the Linux crap is Qt-based, which looks good enough on any OS.

For the rest there are some interesting native tools such as video editors.

But Haiku needs 3 things to get more real life use cases:

- GL/Vulkan for older Intel (both old and new)

- Async USB

- UVC webcams

d_tr
Serenity is still a very Unix-like OS though.
BirAdam
It is, but it doesn’t use any of the existing open source software, which makes it fun, different, and interesting. Heck, just having a non-blink/non-webkit/non-goanna/non-gecko browser is cool.
d_tr
Agreed.
actuallyalys
Their ports system (which they didn't invent, of course) offers a pretty nice compromise of continuing their "everything from scratch" ethos while letting people have fun porting. And who knows, maybe the ports will allow it to be a practical OS someday.
nomdep
I think Haiku's biggest mistake was to try to be a regular desktop OS instead of focusing on specific hardware like the Raspberry Pi.
jackvalentine
In their defence they’ve been at it since 2002 before any of those fun little pieces of hardware existed.
rcarmo
Well, they were also quite hostile to the idea of targeting the Pi. I remember being involved in a forum discussion where it was dissed as a toy, another around ARM never being a relevant platform, and another about the Broadcom bits being proprietary.

To this day, Haiku has been dragging its feet on the Pi (and yes, I know it’s a community effort, that it requires sponsorship, suitable volunteers, etc. - I’m just pointing out that they dropped the ball in even acknowledging the need for a port).

jackvalentine
‘Need’ is a big word to use for something voluntary.
wsc981
I kinda feel a modern System 7 [0] like OS could be neat. But with a terminal included.

It was kinda easy to get your head around the OS just navigating through the system folder. The UI was also more consistent than anything we have these days.

———

[0]: https://en.m.wikipedia.org/wiki/System_7

jsrcout
System 7 was so awesome. I was still developing Macintosh GUI software when it came out. It was great to use, and visually pleasing. And as you say, the user interface was consistent, also discoverable, and things did what you expected. Attributes I miss as GUI software inexorably moves into the browser.
ralphc
I have some PPC macs that have OS 9 on them, I love the UI. I look for excuses to use them whenever I can.
guenthert
Not sure if that's old people painting the past rosy, but System 7 was not awesome. Cooperative scheduling and lack of memory protection were anachronisms already in 1991.

I think we're talking about different things here. The UI might have been superior in MacOS, but the technical foundations of Unix clearly were. Something OSX tried to (belatedly) reconcile. Nothing stops you from adding an awesome UI on top of Linux (although I'd say UI will always be a matter of personal taste and prior experience).

open-source-ux
This is an amazing demo of an in-progress operating system (written by a single person) which has been discussed previously on HN. The developer says near the end of the video:

"I'm not a big fan of unix myself and so 'essence' is designed without any sort of unix influence"

Essence OS demo (2021): https://www.youtube.com/watch?v=aGxt-tQ5Bt

spankalee
In what ways do you see Fuchsia heavily influenced by Linux?
linguae
Rob Pike, one of the creators of Plan 9 from Bell Labs, spoke about this in his 2000 talk "Systems Software Research is Irrelevant" (http://doc.cat-v.org/bell_labs/utah2000/utah2000.pdf).

Writing an OS from scratch is hard work, and without complying with pre-existing standards, the new OS won't be able to take advantage of existing software or communicate with computers running different operating systems, making it a non-starter for most users. Even supporting a wide range of hardware is very challenging, especially given the fact that many hardware devices lack documentation sufficient to write drivers, which results in challenging reverse-engineering projects or relying on the hardware manufacturer to create drivers for the new OS; this is a challenge even for Linux, which has the best driver support outside of Windows. Most people who want to experiment with new ideas for computer systems end up building on top of Linux since they can leverage Linux's existing hardware support and its implementation of a wide range of standards allowing interoperability.

I'm working on an experimental desktop computing project that ironically uses Plan 9 as its base platform since Plan 9's everything-is-a-file interface and its overall simplicity allows me to build my project with less effort than dealing with Linux/X11/Wayland/dbus/etc., but a big part of me is tempted to eventually write some 9P services to emulate enough of the underlying Plan 9 architecture to run my project on Linux and the BSDs in order to take advantage of these operating systems' better driver support.

anthk
On plan9/9front, I'd love to port Nethack/Slashem with NPE:

https://git.sr.ht/~ft/npe/tree/master/item/include/npe

But not with a TTY UI. With libdraw:

http://wiki.9front.org/programming-gui

d_tr
A possible path would be something initially running on an SBC, like a RISC-V board or even an FPGA. People buy these boards to experiment and use for simple tasks. Something really interesting could gain a small initial following and slowly grow from there.

It does not have to be completely isolated. You can still have support for some common communication protocols and maybe some client software running on a mainstream system to do various things.

hammyhavoc
IMO, the interesting future is RISC-V and FPGA. Potential is huge. Just needs better availability of devkits. Long lead times on much of it.
galangalalgol
I'd like to see a minimalist OS that just runs multiple instances of a Wasm micro-runtime or maybe Wasmer. It would only have to support WASI: a scheduler, time support, a sockets implementation and a file system. Targeting an SBC makes a lot of sense for that.
awannaphasch201
Urbit.
jcadam
The only older OS I actually miss is Amiga Workbench.

I had an older coworker at one of my first jobs who would go on and on about VMS all day and how UNIX sucks. I never used VMS so I wouldn't know :)

EvanAnderson
I still miss the Workbench CLI feature that allowed you to change directory by just entering the directory name.
michaelcampbell
That's an option in `zsh`.

    setopt auto_cd
akx
You might want to try an alternative shell for your CLI. E.g. `fish` (https://fishshell.com/) does that.
galangalalgol
I started learning C on Workbench. It was very good. When we "upgraded" to a 386 I was pretty disappointed.

Edit: vms was used for my college's firewall/gateway. It was fine. I've used it professionally as well. I don't recall anything that stood out about it.

mmh0000
For those that like playing with arcane commands and systems, check out "Plan 9 from User Space"[0], a port of Plan 9's default applications that runs on Linux or macOS.

ACME[1], a text editor, is a great starting point.

[0] https://9fans.github.io/plan9port/

[1] http://acme.cat-v.org/

lproven
I would prefer to see the other way round:

A Unix-WINE for Plan 9, so we could run legacy Linux apps on a more modern OS.

And never forget: Plan 9 was just a stage and not the end of the developmental line. That was Inferno.

Plan 9 abstracts the network away into the filesystem.

Inferno does all that too and abstracts away CPU architectures as well, so the same compiled binary runs on x86 and ARM and RISC-V and whatever you happen to have.

anthk
Check vmx under 9front for amd64. It's like KVM for plan9.

https://9lab.org/plan9/virtualisation/

On Linuxemu, you need 9front for i386.

lproven
Thanks for the info. I was aware of them, but a VM isn't what I meant.

WINE isn't a VM or an emulator: it "just" translates Win32 API calls to Linux ones. Getting that working has required implementing a load of DLLs and things, but it works surprisingly well now.

An environment to let Linux binaries launch on Plan 9 (or insert preferred derivative here: 9front, Harvey, Jehanne OS, whatever), akin to the Linuxulator in FreeBSD or the Linux Zone on Solaris, would make the OS much more usable.

anthk
I know what Wine does, for sure. Linuxemu and BSDemu did that, the same as Linux compat on Free/NetBSD or Wine, but with vmx the former emulation is obsolete.

https://fqa.9front.org/fqa8.html#8.7.1

Old and outdated. You might be able to run some statically linked browser from elsewhere.

Linuxemu works with i386 ELF binaries, so maybe an Alpine i386 chroot could work.

chizhik-pyzhik
I particularly enjoyed the demo of BTRON starting around 27:10... interesting seeing a system update, for example, provided as an object that you drag from the installation document into the system settings window.
smm11
Holy cow! BTRON is awesome. It's like the gaslight home-lighting era, in the modern day. A web browser?
agumonkey
Yeah, and it has a BeOS feel to it.
selimnairb
This makes me mourn Apple’s failure to develop a next-generation OS in-house (as much as I like many things about the NeXT-based modern macOS).
jcynix
Oh, yes, Symbolics Genera, loved it, miss it. And miss the keyboard with its control, meta, super and hyper keys.

I actually do have one keyboard in my archive of things (aka "stuff" ;-), but I have no idea how to interface it to modern hardware, sigh.

floren
Odds are it would be extremely easy with, say, an Arduino Pro Micro. I've interfaced a variety of old hardware (Sun Type 5 keyboard, Depraz mouse, original Macintosh mouse) via USB using one.

You're probably more or less on your own in terms of figuring it out, though, because not many people have those keyboards!

jcynix
> You're probably more or less on your own in terms of figuring it out [...]

Sure, but as I'm not a good hardware tinkerer, ... but maybe I should visit some local self-repair community group and learn.

Symbolics produced a NuBus(?) coprocessor with their Ivory chip in their final days, which used a box to interface the keyboard to Apple's ADB, but I never got hold of either the coprocessor or the box.

floren
You might start with https://trmm.net/Symbolics/ which seems to be pretty much ready-to-go with an Arduino.

If you're in the Bay Area, I would build the adapter for you just to have the opportunity to check out a Symbolics keyboard first-hand :)

larve
Have a look at https://github.com/hanshuebner/symbolics-keyboard !
linguae
My dream is for Symbolics Genera to become open source, though I'd be satisfied if hobbyist licenses for the DEC Alpha port were available at a reasonable price. Another dream that I have is to actually use a Symbolics Lisp machine; I've only seen screenshots of Genera and demos on Youtube. I was born near the end of the 1980s AI boom, and thus the only time I've seen a Lisp machine in person was at a trip to the Computer History Museum in Silicon Valley nearly six years ago. I'd love to buy one, except Symbolics Lisp machines are very rare, and when they occasionally show up on eBay, they are well beyond what I can afford. Even if I could afford one, I don't have enough room in my apartment for one, though a backup option that is workable (though still very expensive) is to purchase a MacIvory card and a compatible 68k Macintosh.
LargoLasskhyfv
Maybe https://interlisp.org/hugo/ could be something for you?
linguae
Thank you for the link. I had an opportunity to try out the online VM for Interlisp-D a few months ago; it works very well. I'm glad that Xerox Interlisp-D has been made open source and that there is a team of people who are contributing to it.

It seems that Xerox PARC's work on Lisp is less known than its work on Smalltalk, despite the fact that a Lisp heavyweight, Gregor Kiczales of "The Art of the Metaobject Protocol" and aspect-oriented programming fame, worked there. Xerox PARC in its heyday was quite a fount of innovation, and the work done from the 1970s through roughly the 1990s (I don't know much about Xerox PARC beyond the 90s) remain a treasure trove of ideas that should be reexamined in today's world.

LargoLasskhyfv
Indeed.
lproven
I was going to mention InterLisp but @LargoLasskhyfv beat me to it.

I proposed an idea about it a year ago that got some traction here on HN:

https://news.ycombinator.com/item?id=28366292

Interlisp/Medley is the only rich graphical LispM type environment that's open source. OpenGenera isn't and probably won't be, which is tragic, but there are many tragic things in this world.

What many of the commenters to my blog post in that link don't get is that it is not a good thing that there are commercial graphical Lisp IDEs.

For comparison: when there were multiple commercial Unix implementations, the result was increased fragmentation and slower development.

Linux, as a FOSS, PC-native Unix, has propelled Unix forwards more in the last 25Y or so than the previous 25Y of work on commercial Unix ever did.

Old Lisp hands tell people to learn Emacs and install SLIME or something. Well, Emacs is about as appealing as other kinds of slime, like slug mucus, are: to most younger types, it's repellent and disgusting.

Emacs is a horrible crusty old 1970s editor.

To make Lisp look appealing and interesting, it needs a rich modern GUI, a rich set of libraries to call upon, and ways to access others. It needs a fancy graphical editor to show off its power.

The world has a FOSS Common Lisp: it's SBCL.

Find some way to run Medley under SBCL, however ugly the hack. Linux was an ugly hack once. UNIX itself was an ugly hack once. There's nothing wrong with ugly hacks. They are to be encouraged. They are the keystone of FOSS.

Get Medley running under SBCL somehow so there's a 1980s graphical FOSS Lisp environment, not a 1970s text-based one.

jf
I’ve looked into getting Genera open sourced. Long story short, it’s very unlikely to happen anytime soon.
trasz
This book might be of use: http://www.snee.com/bob/opsys/fullbook.pdf
jamesfmilne
I caught a glimpse of operating systems that had not been and would never be
christkv
VMS on the VAX was also a very interesting OS. I got to play with one of the last VAX models for a summer 25 years ago.
agumonkey
next: xerox, pharo, vpri ometa based os, oberon, and emacs
jamesfisher
Skip the boring intro: https://youtu.be/7RNbIEJvjUA?t=395
wudangmonk
I guess the only hope for seeing new non-toy OSes would be when hardware manufacturers move to a SoC and create a spec for it.
anyfoo
And that will still very much resemble most of today's OSes not only because that's what people know, but also because the development kit will need to run on common OSes, so the paradigms still have to be somewhat compatible.
snvzz
There's a lot going on, actually.

I wouldn't dare call seL4, Haiku, Genode or managarm toys.

gman83
There's SerenityOS -- https://serenityos.org/ One of the main authors has been documenting the development on YouTube, it's pretty fascinating.
dmd
SerenityOS, by design, is close to indistinguishable from any other unix.
boondaburrah
I think the only SoC that's close is the one that comes on the Raspberry Pi 1-3, since they even got broadcom to release the docs for the VideoCore IV GPU.

It's still not completely documented, but it's better than most "Here's the docs for the basic I/O peripherals but we only provide closed drivers for Linux" SoCs.

Otherwise there's still just the regular old x86 PC everyone has lurking in their modern PC.

jbverschoor
Well, proper "objects". So many great things in the 90s on Windows, btw...
WillAdams
The high-water mark of my graphical computer experience was using NeXTstep on a Cube paired w/ an NCR-3125 running Go Corp.'s PenPoint when away from my desk.
cf100clunk
Alternative link to video: https://vid.puffyan.us/watch?v=7RNbIEJvjUA
hulitu
I miss Apollo's Domain OS. I was shocked at what could be done with a 20 MHz processor and 4 megs of RAM.
29athrowaway
Xerox executives were true losers.

They had the keys to the future in their hands and wasted their opportunity.

29athrowaway
TempleOS has various impressive features.

http://www.codersnotes.com/notes/a-constructive-look-at-temp...

lproven
TempleOS is a hugely impressive one-man project, but it's a standalone OS with no networking.

There are other tiny OSes which are FOSS and are much more capable.

Oberon is my personal favourite, in terms of how much it does with how little. http://ignorethecode.net/blog/2009/04/22/oberon/

29athrowaway
There's networking now.

https://github.com/minexew/Shrine

lproven
Wow! That I did not know.

Terry Davis left it out intentionally, AFAICR, due to the complete lack of any security in TempleOS. Putting it back in seems a bit irresponsible, but nonetheless, it's impressive.

29athrowaway
Yes, true. But if you are only running TempleOS on a VM, or an old computer, or a Raspberry Pi or something (not sure if it's supported), it should be fine.
bombcar
RIP Terry.

It's things like this that we need more of - and eschewing networking is a great way to work out what you can actually do with a computer. I feel everything just kind of develops until it has a TCP stack and then becomes another "basically just a blob on the internet".

That's not even to get into the various things we just assume about networking that are actually just accidents of TCP, IP, or more and more HTTP.

Another thing we haven't really dug deep into is the "everything is a file" paradigm, which has basically been interpreted as "everything is a text file". The hatred for binary data runs deep, but binary data is most likely the best for computers. XML, HTML, etc. are perhaps NOT the best representation for various complex forms of data that we want to use.

protomyth
Well, given that I am typing this about 15' from an IBM POWER S914 running the i operating system, I'm not sure it's lost. Our accountants hate GUI stuff and love the green screen. It's amazing to have what is essentially a low-maintenance machine that calls IBM when something isn't correct. We have the last i Series (a pre-POWER model) that lasted for over a decade, and I do expect this one to make it the same amount of time. It is a bit obtuse, but I dearly wish some other OSes would examine themselves for self-administration to the level of the IBM i Series.
LeftHandPath
That's funny - I am currently working on a web app to GUI-ify the green screen for my company's IBM i OS on a similar Power8 system. They love the green screen but this was the easiest way to reduce the amount of manual entry we're doing for a specific 3rd party application we run on it.

A different tool I made ran in the PASE [1] environment on the same system -- compiling & running C++ in IBM's AIX runtime environment. Really interesting experience.

[1]: https://www.ibm.com/docs/en/i/7.3?topic=programming-pase-i

Globz
Same here: many years ago I made a web app for our sales team and third parties so they could sell/buy our products; the whole business is still powered by IBM iSeries. Basically, the AS/400 is the source of truth, and the web app just pulls the data a couple of times a day and provides a nice GUI for the users. The sales orders are automatically sent to the iSeries in batch, so no more manual entries, and their statuses are reflected back in the web app so you can have an overview of each order. Sadly, we are looking at moving away from the iSeries, and we will be at the mercy of whatever cloud solution meets our needs.
wslh
Sidenote: I just googled that equipment [1] and found it weird that the copywriting says: "IBM® Power System S914 easily integrates into your organization's cloud & cognitive strategy and delivers superior price performance for your mission critical workloads..." The "cognitive strategy" wording seems forced by a new marketing team. "Cloud" also seems weird in this context.

[1] https://www.ibm.com/products/power-system-s914

wmf
IBM is desperate to report cloud and AI revenue so they're cloudwashing and AIwashing all their products.
avhception
As late as 2011, we had a custom payroll system on MS-DOS. I was always blown away by the speed its users achieved. They really flew through the menus and knew all the hotkeys and commands of the TUI by heart. When they got "upgraded" to a modern GUI-based system, it really slowed them down. Of course, learning a new thing always takes time, but that's not the whole story. Especially with ever-changing websites that usually don't care about hotkeys, the mental load of visually scanning for elements and clicking them with the mouse can really slow things down. And it always fascinates me how it was perfectly normal and expected for mere users to use a TUI, while today I've heard grunts even from junior devs and self-proclaimed power users when I told them to use the CLI for this or that.

How times have changed.

IBM i is fascinating, and in a world of ever-changing tech stacks I sometimes yearn for a stable environment where you don't have to fight with 10,000 node.js dependencies every other month just to keep that payroll website going. I've never come into contact with IBM i or POWER tech in a professional capacity, but I have purchased an RS/6000 and, more recently, a Talos II system to play around with ppc64le :)

LeftHandPath
I've noticed a lot of the more experienced people in our office stop trusting things as soon as they see drop shadows and branded color schemes. And they are all very quick with the old terminals.
at_compile_time
My favorite is the gratuitous animations that the program is too bloated to render smoothly.
ordiel
Developers of newer versions of a system, even if it is a migration from a TUI to a web GUI, should really strive to maintain the shortcuts and hotkeys; an upgrade is supposed to be just that, extra functionality, not a replacement of the existing one. Sadly, there is this assumption that since it has a GUI there is no need for shortcuts or hotkeys, forcing users to use the mouse.

Gmail does provide some hotkeys, and I think even those "few" really help, as do Atlassian applications; yet most web pages I have interacted with have none.

bombcar
Excel had Lotus 1-2-3 shortcut compatibility for decades after Lotus 1-2-3 was no longer a going concern.

It's still somewhat in there with / triggering the menu on Windows Excel.

fiddlerwoaroof
Part of the problem here is that desktop development frameworks included keyboard shortcuts essentially for free: the Mac associates them with menu items; emacs/vim's key -> command -> action design gives an obvious place to implement shortcuts; Delphi and similar programs had similar places to automatically slot in keyboard shortcuts. Web frameworks focus on visuals and mouse interactivity, and keyboard control is almost an afterthought (aside from some minimal attention to tab order because of accessibility).
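
A minimal sketch of that key -> command -> action indirection, in Python (all names here are invented for illustration): because keys map to named commands rather than directly to code, menus, macros, and shortcut tables can all share one command registry.

    # Key -> command -> action indirection (illustrative names only).
    commands = {}

    def command(name):
        """Register a function under a command name."""
        def register(fn):
            commands[name] = fn
            return fn
        return register

    @command("save-buffer")
    def save_buffer():
        print("saving...")

    # A keymap is just key -> command name; a menu could map its labels
    # to the same names, so shortcuts come "for free" once a command exists.
    keymap = {"C-x C-s": "save-buffer"}

    def dispatch(keystroke):
        name = keymap.get(keystroke)
        if name:
            commands[name]()

    dispatch("C-x C-s")  # prints "saving..."
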
mook
Another part of the problem is that the contents of the browser are mostly untrusted, and the browser itself already has a giant pile of shortcut keys that would conflict.
holri
As a pianist, I am not surprised. The keystrokes a TUI requires are like a piece of piano music. You can learn the required keystrokes pretty fast and play them very fast, reliably, and unconsciously.

Imagine a piano played by touchscreen or mouse.

bombcar
The biggest key with the "green screen" terminals is they would NEVER EVER lose a keypress and they would buffer them, too.

So even if the computer was actually quite slow, if you knew what you were doing you could "type ahead" a few screens into the system, and then wander off and do something else.

That just doesn't work on GUIs (a rare few are well designed so that it can) and certainly has zero chance of working on webpages.
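
A toy model of that guarantee in Python (entirely hypothetical, just to make the behavior concrete): keystrokes are queued the instant they arrive, and the slow host drains the queue in order, so typing ahead during a repaint loses nothing.

    # Toy model of green-screen type-ahead (hypothetical; not any real
    # terminal protocol): keys are buffered immediately and processed in
    # order, however slow the host side is.
    import queue
    import threading
    import time

    keys = queue.Queue()                   # the type-ahead buffer

    def keyboard(typed):
        for k in typed:                    # buffering never waits on the host
            keys.put(k)

    def slow_host():
        while True:
            k = keys.get()
            time.sleep(0.1)                # pretend each screen paints slowly
            print("processed", repr(k))
            keys.task_done()

    threading.Thread(target=slow_host, daemon=True).start()
    keyboard("ORDER\nNEXT\n")              # type two screens ahead, walk away
    keys.join()                            # every keypress is handled eventually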

protomyth
I honestly wish someone had come up with some text markup language and "browser" so enterprise developers could deploy text UI apps. The web is just awful for the back office people. Frankly, I wonder how much money is being spent on the web when a TUI would have been more productive.
naikrovek
I personally don't care about how it's implemented, markup or otherwise; I'd just be happy to see TUIs return in numbers for modern terminal emulators. SSH apps are slowly becoming a thing, thankfully, but not quickly enough for my liking.
pxc
Does Wish fit the bill?

https://github.com/charmbracelet/wish

lifeisstillgood
I like the idea of a Text UI markup language. I'd envisage some kind of nested table describing the layout? More declarative?
worthless-trash
I -think- it might be possible to do something like ncurses forms.
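
A rough sketch of that declarative idea using Python's stdlib curses (the markup shape and field names are invented): the layout lives in plain nested data, and a tiny renderer walks it.

    # Layout as data, rendered with stdlib curses (shape is invented).
    import curses
    from curses.textpad import Textbox

    FORM = {
        "title": "CUSTOMER ENTRY",
        "fields": [("Name", 30), ("City", 20)],  # (label, field width)
    }

    def render(stdscr):
        stdscr.addstr(0, 2, FORM["title"], curses.A_BOLD)
        answers = {}
        for row, (label, width) in enumerate(FORM["fields"], start=2):
            stdscr.addstr(row, 2, label + ":")
            stdscr.refresh()
            field = curses.newwin(1, width, row, 12)
            answers[label] = Textbox(field).edit().strip()  # Ctrl-G ends a field
        return answers

    print(curses.wrapper(render))
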
_glass
Honestly, I work with SAP, and for power users we still implement text UIs for certain special tasks. SAP's language, ABAP, is actually optimized for this, and it works amazingly well. You can implement a working UI that does a lot in 20 minutes.
jon_adler
I’m an ex-PeopleSoft dev, and the productivity was pretty mind-blowing in that environment. More recently I was involved with an SAP project, and the development estimates for what seemed like trivial things were days and weeks. Your experience and my experience differ. Maybe the developers were just excessively padding?
_glass
Yeah, SAP folk are always very conservative. A lot of stuff is actually quite fast, but nowadays you need to know an incredible tech stack, going literally from the '80s to new technology. But if you're in the know, it's fast.
Gibbon1
Someone could write a curses package in WebAssembly.
p_l
That's literally how IBM block terminals operated, and it was AFAIK most visible with CICS, which was designed around a model similar to Web 1.0 apps: blocks of code ("transactions") in CICS would send a (possibly multi-page) form to the terminal; when the terminal sent back the response (usually just the filled-in fields of the form), another transaction would fire and process it, and so on.

Of course, the screens had rudimentary markup to support this, including client-side validation of sorts.
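
A loose Python model of that exchange (transaction and field names are invented), just to show how close it is to a Web 1.0 form POST: the host pushes a whole screen, waits, and a named transaction handles whatever fields come back.

    # Loose model of a CICS-style block-mode exchange (all names invented).
    def order_entry(fields):
        # One "transaction": runs only when a filled-in form comes back.
        print("booked %s x %s" % (fields["qty"], fields["item"]))
        return "ORDER-ENTRY"               # id of the next screen to send

    transactions = {"ORD1": order_entry}

    # The terminal displays the form, the user fills it in (possibly much
    # later), and only the changed fields travel back -- like a form POST.
    returned_fields = {"item": "WIDGET", "qty": "3"}
    next_screen = transactions["ORD1"](returned_fields)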

IIsi50MHz
> So even if the computer was actually quite slow, if you knew what you were doing you could "type ahead" a few screens into the system, and then wander off and do something else.
>
> That just doesn't work on GUIs (a rare few are well designed so that it can)

I used to do this in Macintosh System 7, since Macintosh System Software used a similar keyboard buffer. Of course, the UI heavily encouraged the mouse, and often required it. But I miss being able to tell apps "Do these 17 things, because I know what you're going to ask me and I know you're gonna be busy, and I can't be bothered to script it right now.".

bombcar
The older OSs seemed to work correctly even if you switched windows.

I.e., select a window, type ahead a few screens/commands, switch to another window and continue work - the keyboard buffer would stick with the previous window.

That sometimes works now with Terminal, etc., but sometimes it doesn't. I think it's a question of whether the OS is buffering or the application, and/or whether the application tracks which window was in focus when it buffered.

salmo
One of my first gigs was replacing a VAX/terminal based software package with a Windows NT/PC thick client one.

17-year-old me scoffed at the dumb terminals. When we converted users and they hated it, I learned SO MUCH. They could fly through the TUI. They’d have a form filled out before the screen could paint.

The thick client didn’t even have keyboard shortcuts for most things. But the company had bought a ton of PCs and was moving everyone to the first iteration of Exchange. So I got the VAX up on the LAN and set them up with telnet.

The users liked that. White text on a blue background became the most popular color combo. The internet was becoming a more popular thing, so they could begin the proud tradition of screwing off at work staring at a browser.

Eventually the original package was retired and keyboard navigation was added to the thick client. But you couldn’t go “ahead” of it. And flipping between the keyboard and mouse is just slow for “real” work. They adjusted, but I learned a lot about how a computer can best augment someone’s abilities. It’s not always the coolest or most intuitive way.

That exercise forced me to learn VMS; then I touched Solaris and fell in love. I couldn’t afford a pizza box as a college kid, so I installed Red Hat 4.1 from a CD in a book.

Now I get mad when I have to edit text without vi.

Same as it ever was.

I wholeheartedly agree. It seems that there is no middle ground these days between Web- and mobile-inspired GUIs that have taken over the desktop (even in the macOS world) and doing everything via the command line. I feel the same way about GNOME 3's shift to mobile-influenced UI/UX paradigms; sadly this shift also occurred in Windows and macOS.

What I believe is needed are UIs for power users and developers. Nobody stays a novice forever; we need UIs that facilitate the tasks of technically inclined users, something more ergonomic than CLIs but not oversimplified like modern UIs. Some examples of UI/UX that address the needs of power users: support for scriptability (such as AppleScript and Visual Basic for Applications); composability (such as OpenDoc [https://www.youtube.com/watch?v=oFJdjk2rq4E]); WordPerfect's Reveal Codes, which allow writers more fine-grained control over formatting; and a demo I saw of Symbolics Genera where the CLI shell assists the user in completing the command (see https://youtu.be/7RNbIEJvjUA?t=380 for a demo of how that interface worked; while it's a CLI shell, it's much more ergonomic than any Unix shell I've seen). I would like to see more UIs that fit the needs of power users.

On Youtube: https://www.youtube.com/watch?v=7RNbIEJvjUA

(The media.ccc.de server was a bit slow for me.)

cuillevel3
Most mirrors are in Germany: https://cdn.media.ccc.de/events/rc3/h264-hd/rc3-r3s-20-eng-W...
albertzeyer
Yes, but I'm actually located in Germany myself. I assumed that there was just too much traffic caused by HN.
Feb 14, 2021 · 1 points, 0 comments · submitted by xkriva11