HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
"What UNIX Cost Us" - Benno Rice (LCA 2020)

linux.conf.au · Youtube · 152 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention linux.conf.au's video ""What UNIX Cost Us" - Benno Rice (LCA 2020)".
Youtube Summary
Benno Rice

https://lca2020.linux.org.au/schedule/presentation/28/

UNIX is a hell of a thing. From starting as a skunkworks project in Bell Labs to accidentally dominating the computer industry it's a huge part of the landscape that we work within. The thing is, was it the best thing we could have had? What could have been done better?

Join me for a bit of meditation on what else existed then, what was gained, what was lost, and what could (and should) be re-learned.

linux.conf.au is a conference about the Linux operating system, and all aspects of the thriving ecosystem of Free and Open Source Software that has grown up around it. Run since 1999, in a different Australian or New Zealand city each year, by a team of local volunteers, LCA invites more than 500 people to learn from the people who shape the future of Open Source. For more information on the conference see https://linux.conf.au/

Produced by NDV: https://youtube.com/channel/UCQ7dFBzZGlBvtU2hCecsBBg?sub_confirmation=1

#linux.conf.au #linux #foss #opensource

Wed Jan 15 13:30:00 2020 at Room 5

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Jan 17, 2020 · 152 points, 90 comments · submitted by stargrave
linguae
I thought this talk was an interesting overview of some of the limitations of Unix (particularly Linux) and C, although I wish the speaker had discussed more details as to why C is not a match for modern processor architectures and how modern languages such as Rust address these issues.

Another thing that I'm curious about is how much familiarity the speaker has with Plan 9? Plan 9 from Bell Labs is the spiritual successor to Unix, created by many of the Bell Labs researchers who originally worked on Unix. The creators of Plan 9 pushed "everything is a file" to its limits. Sadly, although I have read some Plan 9 papers, I haven't actually used the operating system. I'm really curious about what the USB example in this talk would look like in Plan 9?

I do wish there was more development in alternative operating systems. I like Unix, but there were many ideas from VMS, Smalltalk, the Genera operating system for Symbolics LISP machines, the classic Mac OS, BeOS, Plan 9, and other operating systems that we can learn from and that would be very useful to have today. Also, today's hardware is quite different from the hardware of 50 years ago. We have multi-core CPUs, GPUs with hundreds of cores, very fast NVMe storage devices, and other amazing technologies. There has also been much advancements in programming languages, with more people willing to explore alternatives to C and C++ such as Python, Rust, Go, Clojure, Haskell, Swift, and many more, each providing different abstractions for dealing with diverse programming tasks. I wonder what an operating system for the 2020s would look like given the advances in technology and the lessons learned in the past decade.

aruggirello
> there were many ideas from VMS, Smalltalk, the Genera operating system for Symbolics LISP machines, the classic Mac OS, BeOS, Plan 9, and other operating systems that we can learn from and that would be very useful to have today.

Since we're at it, I'd add: Amiga. And maybe ARexx deserves a mention too.

kragen
What were the interesting ideas in AmigaOS? I don't have any experience with it, except that I played Harmony on it one day around 1990.
blihp
I don't recall any when compared to other OSes of the day. The special sauce of the Amiga was its hardware and the games/applications it enabled.

Tools like ARexx and Amiga Basic were better (IMO) than what you got on other platforms, but the OS itself was rather meh. I guess you could say it was interesting in the sense that it had preemptive multitasking (years before PCs and Macs would) without memory protection... but that's hardly something you'd want to carry forward to the present day.

kragen
What was better about AREXX?
blihp
ARexx was basically AppleScript before AppleScript existed. It was widely supported by most Amiga productivity software, since ARexx support was a selling point in the same way AppleScript support is/was for Mac apps: it was the de facto scripting solution on the platform for automating tasks. So if REXX was the dream, ARexx was the realization of that dream on a personal computer.
flohofwoe
I think the most interesting aspect of AmigaOS was that it brought features previously known only from much more expensive workstations to a cheap home computer.

To my knowledge there weren't any outstanding innovations on the software side that weren't also available in other operating systems, but AmigaOS delivered those features in a computer that was 10..100x cheaper than a UNIX workstation with a comparable graphical user interface.

AmigaOS was more like a stripped down UNIX (preemptive scheduling, but no memory protection/process isolation), MacOS looked like a toy in comparison to be honest, at least from the technical side.

In the later years, AmigaOS (or rather its applications) offered a glimpse of a future in which applications and the underlying operating system work together to create a professional working environment. Applications didn't feel isolated: there were standards for how UI applications could be controlled from the command line, how they could be automated through scripting, how applications could talk to each other, and how they could be extended via operating system plugins (for instance, if one application added support for a new image format to the operating system, all other applications dealing with images automatically gained support for that format too).

Basically: how can the straightforward automation of the command line be brought into the domain of UI applications.

Alas, it was a glimpse into a future that would never happen.

pjmlp
I rather think of it as a stripped down version of a Xerox PARC workstation.

With exception of SGI and NeXT, UNIX never had a multimedia culture.

kragen
That sounds right. Also PARC had Cedar, which did multitasking without memory protection, which Unix didn't usually at the time.

BTW, I'm curious what you think about the question I asked in https://news.ycombinator.com/item?id=22083468

the8472
> The creators of Plan 9 pushed "everything is a file" to its limits.

Isn't linux advancing in similar direction, except that everything is a file descriptor (even processes are now!) instead of a file bound to the directory tree? And technically, everything that is a file descriptor is also a file, thanks to procfs.

duck2
procfs is a feature borrowed from Plan 9 :)
sparkie
I think Linux is going in the direction of "everything is a dbus API" instead. Linux needs its own "9P," or something equally practical, if it wants to reverse this trend. DBus, while perhaps not the best IPC mechanism, currently fills the gap. Several of the systemd binaries exist purely to shift from reading and writing files to pushing updates over dbus. See for example systemd-hostnamed.
scroot
> I do wish there was more development in alternative operating systems.

This should also go hand in hand with development in alternative computing systems, environments, and architectures as well. Just as the dominance of Unix occurred in part for social/economic/cultural reasons, so too did the current dominant computing architectures. We have examples from the past of interesting attempts (i432, Rekursiv, maybe Oberon etc).

Because we have universal data interchange standards today (think XML structures, JSON, hell even TCP/IP itself) we don't have to worry as much about the "compatibility" issues that were a real end user problem in the 80s and 90s. We can and should be experimenting with alternative systems. Maybe open hardware will also help with that.

stiray
I get little pink dots around the face when I see sentences like this:

> alternatives to C and C++ such as Python, Rust, Go, Clojure, Haskell, Swift

Please advise me. I am just writing a kernel module for hooking kernel calls and I really """hate C""". I would love to do it in Python (even better in JS or VBScript; Logo also comes to mind - I just love driving a turtle). The input is Python, the output a .ko.

And as normal for a kernel module, it needs to be rock stable and as fast as possible, as the hooked functions are called a lot on the system and can't lose much time there.

Can you share a sample please? I am thinking of doing it in a bash script by emitting opcodes directly, but it seems like it would just take too long to dig out all the opcodes. Yeah, I know what you think, but it is not assembly, it is a bash script.

/sarcasm off

pjmlp
You could start with Circuit Python, Nim, and go from there.

I for one get dots when I keep seeing people with a lack of vision.

stiray
Sorry to hear that. I need efficiency and speed with the lowest possible memory footprint. You know why? Because I don't want you to complain that my software is heavy on your system requirements and that using it costs you twice the price in your favorite cloud because it wastes resources it doesn't need to waste. Not to mention coredumping a kernel from time to time due to a bug in the Python interpreter.

I will just stick to my hated "C". Anyway, you could try to persuade Torvalds to rewrite the Linux kernel in Python. Or rather join the FreeBSD forums and share the Python kernel idea with the people there; I am sure they will appreciate it. Please DO share the responses, as they will surely love your visionary skills, not to mention the cloud providers that will earn some extra money...

/sarcasm off (again)

Anyway, not all software is suitable for every task. And sometimes you do need knowledge of what you are doing, including memory (which seems to be such an impassable barrier for those born after 2000, as it is so hard to calculate how much memory you need and when to deallocate it, at the cost of throwing money in the form of cloud resources out the window).

/sarcasm really_off

pjmlp
I already joined the Apple and Microsoft cathedrals, which have better alternatives in place for C.
stiray
Guess what, I am also proficient in writing NT kernel drivers - I love their kernel btw. Maybe you can start by sharing your visionary skills with Microsoft. Or maybe Apple (but do try the FreeBSD forums first; Apple and Microsoft might ignore you, but I know there will be responses there - maybe, just maybe, head to the BSD kernel development mailing list directly).

/sarcasm ooooooooofffffff

ChrisSD
Microsoft writes much of the kernel in (msvc) C++ but they're keenly investigating other languages. They're also researching designs for new ones.

C isn't the only game in town.

pjmlp
In case you missed the message, C is deprecated on Windows, any updates beyond C89 are done as a side effect of ISO C++ compliance, and UWP, including universal drivers, is not going away no matter how much a few might hate it.

BSD is becoming irrelevant in the world at large, so that would just be a waste of time.

In any case, the bazaar has proven that it is only technically able to produce UNIX clones.

dang
Please don't post in the flamewar style to HN. It's destructive of curious conversation, which is what we're going for here.

https://news.ycombinator.com/newsguidelines.html

Phrodo_00
The other bit is that when they say some language is not a good fit for current processors, that also applies to x86_64 assembly, which provides no way to control certain aspects of how the processor works, like pipelining, that would be pretty interesting for a compiler to have access to.
jammygit
He gave a good talk about SystemD once too: https://www.youtube.com/watch?v=o_AIw9bGogo

edit: found a version with better camera angles

sprash
I didn't like that talk at all. I disagree with almost everything he says[1].

This talk is much better, but some of his criticisms are rather pointless because what he wants is a completely new OS design, vastly different from UNIX. If he likes Windows/macOS so much he should just use that.

1.: https://news.ycombinator.com/item?id=19966321

Koshkin
I agree that criticizing UNIX for being UNIX is pointless.
nwmcsween
> C is not a match for modern processor architectures and how modern languages such as Rust address these issues.

The basic answer I get when asking this is "because hardware has changed", which is obvious but avoids the question. Sometimes someone will bring up that vector instructions are supported in Rust, but for a large portion of programming you don't want to vectorize (seriously, measure what you think you need to vectorize). Ideally a compiler could compile $lang to word-at-a-time code (aka SWAR) if needed.

pjmlp
> I wonder what an operating system for the 2020s would look like given the advances in technology and the lessons learned in the past decade.

While not perfect, and with their own sets of issues, macOS/iOS, Windows (with .NET/UWP) and Android are a couple of steps in that direction.

Interesting that you mention Plan 9 but fail to mention Inferno, which was the final vision from the same researchers.

Regarding UNIX, one thing that keeps being missed about NeXTSTEP and its derived OSes - and to go along with your remark - is that for NeXT and Apple, UNIX was never really relevant.

UNIX compatibility was just a means to bootstrap the OS and attack the UNIX workstation market; the Objective-C technology stack (now Swift as well) was/is where the real value of the OS lay.

Anyone using macOS, or nowadays WSL, as a UNIX replacement is replicating the farmer's story from the talk.

cpach
I quite frequently see people mention Plan 9, but seldom Inferno. Does anyone know why Plan 9 became more popular?
pjmlp
Lack of teaching would be one.

I bet not many professors bother to dive into it.

The documentation is still available, http://doc.cat-v.org/inferno/4th_edition/, and you can get it as open source and commercial variants, http://www.vitanuova.com/inferno/

So it is mostly a matter of actually caring to dive into it.

Not only will one discover how many of the Plan 9 design issues got worked on; some of the design ideas that went into Go will also become clearer.

It is also revealing to dive into the Oberon OS lineage and then see how some of its design went into Acme on Plan 9, dynamic loading on Inferno, and eventually Go's method call and unsafe package design.

Instead most get off at the Plan 9 train station.

rumanator
> Interesting that you refer Plan 9, but fail to mention Inferno, which was the final vision from the same researchers.

For the lazy like me:

https://en.wikipedia.org/wiki/Inferno_(operating_system)

incompatible
I suppose on Linux, libusb itself is supposed to be the general programming interface, not poking around with /dev or ioctl. The Windows and Mac interfaces he was using are quite possibly implemented as libraries over a more primitive layer.

Speculation, knowing nothing about USB specifically.

pjmlp
On Windows and Mac, the more primitive layer consists of private kernel APIs, which you aren't supposed to call directly.
magicalhippo
Linux is the kernel, so really the equivalent on Windows is the Windows kernel API, not the Win32 (user) API.

However, the kernel API seems to be somewhat similar to the Win32 version presented in the talk and not nearly as gnarly as the Linux equivalent:

https://github.com/microsoft/Windows-driver-samples/blob/mas...

https://docs.microsoft.com/en-us/windows-hardware/drivers/dd...

That said I'm just an application programmer so I've only ever used the Win32 API.

soraminazuki
Not everything may be a file indeed, but I didn't get what he was trying to point out with the specific example he gave. To make his point, he compared the USB APIs provided by Windows, macOS, and Linux. Then he went ranting about how Linux looks terrible because you had to manipulate files and use functions such as snprintf, ioctl, and fcntl. However, this is not at all a fair comparison, because he was comparing the Linux kernel interface against much higher-level userspace library APIs on Windows and macOS. So what problems did he encounter with this specific example that had actually harmful consequences? I honestly can't tell.
thedracle
Not to mention, I'm not sure I prefer the Windows version of the USB API anyhow.

Sure, you have to snprintf these details of the USB device to construct a path. But... you also have a general-purpose facility for exploring USB devices, one that only requires the general ability to navigate the file system.

I still remember writing systems for data acquisition via RS232 serial devices and GPIB on Windows 98, using specialized drivers that gave some magical API, and then moving to Linux...

When I realized I could just cat from /dev/ttyS0 and use ioctl to set the baud rate and other parameters, it was incredible. And I could share a lot of the code for interfacing with GPIB, which was also just a character device.

pjmlp
It was completely fair, because other OSes don't expect users to access kernel interfaces directly; most of them even forbid it, except for the purpose of writing device drivers.
rumanator
> It was completely fair, because other OSes don't expect users to directly access kernel interfaces

Neither does Linux or any other Unix or unix-like OS.

nottorp
Google libusb, which can be used to access most usb devices from user space.

Guess the video author hasn't heard of it either?

fetbaffe
He mentions libusb two minutes into the talk.
clktmr
Linux doesn't expect you to either. You are free to choose.
soraminazuki
How can it be "completely fair" when you're not comparing the same thing? Linux applications can and should use higher-level libraries too.
pjmlp
That is the low level version available to applications on Mac and Windows.

The high level ones are Mac Frameworks in Objective-C/Swift and .NET/COM libraries.

Direct kernel access is out of bounds.

emn13
Without getting into the specifics here, it's fair to compare Linux kernel calls - sometimes - to Windows API calls, because that's simply a technical detail, and in fact that's just how those OSes expect you to communicate with the respective "OS" (whatever that even means; the Linux kernel is hardly the whole OS). Calling the Windows APIs isn't calling some convenience wrappers; it is the API - the kernel API isn't (generally) public, and they try to avoid stability promises too, though, this being Windows, I wouldn't be surprised if the kernel-"internal" syscalls were nevertheless usually stable for legacy-support reasons.
soraminazuki
The original speaker is comparing two totally different layers of abstraction. That's not a technical detail when considering if the comparison is appropriate. It's simply misleading when higher level libraries are available and widely used in Linux.

On the other hand, whether the kernel interface is documented or not is a technical detail which simply doesn't matter in this context.

pjmlp
The same high level libraries are relatively bad when compared against the respective Objective-C and .NET/UWP high level libraries.

But then I guess it would be considered fair.

soraminazuki
If that is indeed the case, then I would agree. But then, the "everything is a file" mantra, which the whole USB example was supposed to be about, doesn't become relevant anymore. It seems like an entirely different problem.
emn13
It's just not that simple. A kernel syscall just isn't an abstraction on Windows, so if you want to compare the two, you need to include other OS APIs. Whether something is a syscall or not is a technical detail in the sense that it simply doesn't matter for its actual function, but it does matter for lots of non-functional concerns, possibly including API usability, performance, etc.

I don't know anything about the USB issue here, so I can't comment on that. But, typically, if you're looking for the closest equivalent to linux syscalls on windows, you're unavoidably going to be looking at non-kernelmode dll calls (which internally may or may not involve a kernel-mode syscall - that's a private implementation detail).

It's not a question of choice of abstraction, simply of how the OS exposes low-level functionality (at least, it's the OS's choice, not yours).

The point being that if you reject any comparison of linux syscalls vs. window api calls then you effectively reject all comparisons between linux and windows apis. Fair enough; if you want - but it's not very helpful.

soraminazuki
If you want to compare the capabilities of Linux and Windows systems, comparing Linux kernel APIs and Windows library APIs might make sense. However, complaining that dealing with the Linux kernel interface directly requires more work than calling Windows API functions doesn't make any sense, because of course higher-level interfaces are more pleasant to work with regardless of the target platform. If you want to make that comparison in a meaningful way, you should be comparing Linux libraries against equivalent Windows APIs.
jsjohnst
To make it a bit more clear, libusb would be a closer comparison to the Windows/MacOS APIs OP is talking about being better.
emn13
Sure; nothing wrong with that comparison either - you're just not going to get a straightforward apples-to-apples comparison. Unlike libusb, the Windows API really is part of the OS, whereas libusb is a convenience library that - on Windows - must use the lower-level API; it's not a peer. Then again, there's no reason to care much about great convenience in a lowest-level API, so I'm not really sure what all the fuss is about: as long as the API isn't so wrong-headed as to make good wrappers difficult or error-prone, inconvenience seems like a fairly small price.
blablabla123
On the other hand, it's possible to use syscalls on macOS or Linux to try out new low-level functions, since these tend to be documented quite well, unlike on Windows. For instance copy-on-write, or ptrace to alter/inspect low-level calls.
clktmr
"Everything is a file" is one of the most misunderstood concepts I've encountered, to the point where people think that disk I/O is involved. A file doesn't even need a filename (e.g. sockets).

The main point is to have a common interface for system resources, which happens to be the file api in Linux. Think of it as the base class of everything. There is a bunch of tools/code that operates on file descriptors, which you can reuse now.

Build better interfaces on top of that if you want to.

wbl
Sockets depart badly from that model with special purpose setup syscalls.
clktmr
Yeah but that's rather because the socket api is older than Linux. Underneath there is the sockfs implementation.
discreteevent
Everything is a bytestream.
gdm85
Some parts of the talk did not really stitch together; if we consider the main arguments against Unix (and the funny jab at Plan 9), I felt they were a bit lacking in the sense that they do not propose alternative philosophies, but only a liberating "mixing things together is cool".

I can totally understand that as a form of catharsis, and a mindset totally apt for hobby projects, but if we are playing in the realm of OS design... I would think we can do better in terms of architecture and philosophy? "Anything else is better" does not really convince me; some structure is usually better than none at all.

thunderbong
Wonderful talk. Lots of interesting points. Much of the technical content went over my head, though. But from what I understood, the main point he's trying to make is that we are stuck with a way of thinking in technology which was right when it was thought up, but has outlived its usefulness.

I found his point, with respect to *nix, very insightful:

> Unix suited its time. I worry it has ended up straitjacketing the way that we think, because that was quite a while ago. It still works, which is amazing. But that doesn't mean that its tenets and its way of work should be sacrosanct. We should feel free to examine every idea and throw them out if we feel they no longer have value for what we're doing.

The analogies from history and communities were also very interesting (e.g. meritocracy).

Other points that I liked -

- Complex problems have simple, easy to understand, wrong answers.

- Understand the past, but don't let it bind the future

kccqzy
I watched the video. A few good points: should everything really be a file? He used the example of simply talking to a USB device on Linux without libusb, involving a bunch of snprintf to construct a file path and then a big bunch of ioctl. Setting up a device apparently requires creating a bunch of magic directories, magic files, and magic symlinks, and mounting magic filesystems... I feel that he has a good point about this. Not sure how Plan 9 does this better, though.

Besides, he also talked about how I/O was blocking, and how even when non-blocking I/O became available it was still synchronous, until the recent introduction of io_uring. Windows appears to do this "right" with its completion ports. This part of the video was much less convincing, IMO.

qznc
My usual counterargument to "everything is a file" is real-time constraints. Playing audio or video (which includes all games) is a soft real-time task. The Plan 9 file API does not support that.
hinkley
There’s a phrase I haven’t heard in about forever unless I’m the one saying it: “passing a giggle test”. If you can’t describe the thing you did with a straight face, you should take it back and keep working on it until you can.

It’s so easy for things to get away from us by increments, and the longer we sit with something the harder a time we have predicting the reaction of people who have just seen it for the first time.

h1x
Also thought that some of the technical points were good, but...

Mixing together Linux and Unix? Saying that killing a process from the command line in Linux is bad because macOS has a GUI for it? Not explaining in more detail the claim that C is responsible for Spectre? Topping a technical presentation with political statements?

That's too much for 30 minutes I think.

kragen
> Not explaining in more detail a claim that C is responsible for Spectre?

He said trying to paper over the concurrency of the hardware to provide the simple in-order model C expects is responsible for Spectre (and he did explain how Spectre works). This is true, but I don't think C is really the reason for this; designing a language that productively exposes parallelism but makes it easy to program the kinds of things that run fast on modern amd64 CPUs - I think that might be an unsolved problem.

IshKebab
> Saying that killing a process from command line in Linux is bad because MacOS has GUI for that?

He wasn't saying that `ps | grep | kill` is bad because MacOS has a GUI. He was saying that it is bad (it is, that's pretty undeniable), and that dogmatically following the "Unix philosophy" would lead you to conclude that it is great, even though there's an obviously better way to do it (through a nice GUI).

To put it another way, "do one thing and do it well" is fine, if all of your users are happy to write all the glue to stick tiny programs together, and the only glue you have is completely unstructured text streams. But most people are not willing to do that.

To take it to an obvious extreme: what is the Unix version of Excel? `set_cell A4 'hello' && recalculate_formula B5 && cat A1:D2 | pi_chart`??

h1x
> To take it to an obvious extreme: what is the Unix version of Excel? `set_cell A4 'hello' && recalculate_formula B5 && cat A1:D2 | pi_chart`??

That's a nice question. It's not that obvious extreme for me though.

You could use sc and GNU plot (I know... GNU's not Unix). Simple example:

- first you configure your plot

$ printf "set terminal png\nset output 'plot.png'\nplot '/dev/stdin' with lines\n" > plotcfg.plg

- then you create a spreadsheet and pass it to gnuplot. It's easier to edit the spreadsheet with sc's TUI

$ printf 'let A0 = 1\nlet B0 = 8\nlet A1 = 2\nlet B1 = 16\nlet A2 = 3\nlet B2 = 32\n' | sc -W '%' - | gnuplot plotcfg.plg

IshKebab
So easy!
celticmusic
yeah, the whole conduct thing was weird and felt like part of a different talk. I kept expecting him to try and bring it back to the rest of his talk, and he never really did, outside of it being somewhat related to the idea of "let's do things differently in the future".
fetbaffe
Here is an article explaining Spectre & Meltdown with regards to C.

https://queue.acm.org/detail.cfm?id=3212479

It has been discussed previously multiple times here at HN, the latest from three weeks ago.

https://news.ycombinator.com/item?id=21888096

H_Pylori
> Mixing together Linux and Unix?

Oh, the horror. For all intents and purposes they are exactly the same and they share the same philosophy.
catalogia
The Activity Monitor is hardly even the fastest way to kill a process in macOS. Usually when something I'm working on goes haywire and needs killing, I already know the process name, and a simple `killall example` can be typed out faster than he can ⌘-space his way into the Activity Monitor and navigate that GUI. I don't think that proves anything profound though, just that his example was weak.

I went into this video agreeing with the stated premise (and still do), but this presentation was a bit underwhelming.

celticmusic
linux has pkill as well.
swiley
Unix at least picked one abstraction and stuck with it; contrast Windows, which is a dizzying mess of incompatible ideas. It’s so bad that more paranoid people may think Microsoft disorients people on purpose.

There are other operating systems that also chose singular abstractions, like the Canon Cat and Smalltalk.

teddyh
Reference: https://en.wikipedia.org/wiki/Canon_Cat
rs23296008n1
I think I prefer the plan9 approach of "everything is a file/filesystem" rather than the unix "everything is a file" as typically seen. Actual usage of files, filesystems and namespaces under plan9 was more consistent than the file approach under linux. It was a lot of fun playing in plan9.

Regarding the video: of the three examples, the Windows approach was shown as more succinct. I thought the Linux version was messy. The Mac version seemed overly complex. Whether these were actually representative is another matter. I can't say exactly why, but I'm not convinced - there seemed to be some artistic license / suspension of disbelief required.

The remainder seemed to be meandering but I sense you had to be there.

layoutIfNeeded
I’m 29 now. I wonder if I’ll live long enough to be able to use cancellable non-blocking mkdir.
pjmlp
Very entertaining talk, especially regarding the cargo cult that gets carried around.
robert_tweed
I'd like to know, from anyone with Plan 9 experience, does its approach to "everything is a file" solve these problems?
pjmlp
No, everything is a file doesn't work at all if you care about high performance graphics and real time audio.
kragen
qznc made the same claim. Why would "everything is a file" be incompatible with high-performance graphics and real-time audio? I'm particularly interested in this question because my knowledge of those areas is not very deep, so I'm wondering what unknown unknowns I'm missing.

Here's my naïve thinking:

· Real-time audio requires low-latency scheduling, which is unrelated to the API for audio itself, and FIFO depth monitoring so that you don't get either buffer overruns or buffers that are so full that they induce latency in excess of what your scheduling and processing latency unavoidably adds. For output, you could provide this with a /dev/audio for writing data and a /dev/audiooutputbuffersize for reading the FIFO buffer depth. For input, you could provide this by making reads from /dev/audio nonblocking, by providing a /dev/audioinputbuffersize you would read before risking a blocking read on /dev/audio, or by reading in a separate thread.

· High-performance graphics covers a lot of ground, but one of the big pain points is unnecessary memcpying on the path from your CPU-side graphics program into the GPU, whether that's textures, vertex buffers, or prerendered pixel data (though maybe we could argue that that last case is never going to be high performance.) One way that unnecessary memcpying arises is from buffers for write() that are not correctly aligned with respect to the start of a page, which unavoidably means that they will have to get copied to a place with the correct alignment. This is a big problem if you're streaming out gigabytes per second of graphics data. In particular you don't want the data that goes to the GPU concatenated with data that tells the kernel graphics driver what to do with it.

Solution: open a new file in a /dev/gfx directory for the data to be sent to the GPU, then write the data to it from a properly aligned buffer. Fork a separate thread to write this data, which is done by marking the pages COW rather than copying them. (A more radical and less sneaky solution I'm exploring for Wercam and BubbleOS is to transfer ownership of pixel data buffers from the application to the window system, causing them to disappear from the application's memory map, avoiding the need to either copy or COW them. But this definitely departs from the Unix read()/write() model.)

· A different aspect of high-performance graphics, other than raw throughput, is that you might need to synchronize your drawing with monitor refreshes to avoid an extra half-frame of latency, particularly at low refresh rates like 60 Hz. This seems straightforward to solve in a filesystem interface: reading from /dev/vbi produces a byte just before each VBI, giving you time to send any necessary new data to the GPU.

Certainly it's true that Plan 9 does not provide high-performance graphics or low-latency audio. But I don't think that proves that filesystem-based interfaces can't provide them, just that the Plan 9 designers had spent the 1980s researching DSLs, fault-tolerance, concurrency, and typesetting, not rasterization, real-time control, and animation.

In general, system-call or IPC interfaces of the form "invoke function foobar with a baz handle and a quux struct or memory buffer" can be cleanly replaced with "open /bazzes/$i/foobar and write a quux struct or memory buffer to it", can't they? I mean, that's three system calls instead of one, so about 1 μs of system-call overhead instead of 300 ns, but in the cases where that's the performance bottleneck you can usually solve it by writing N quuxes instead of one. Can't you? A potential problem arises when you need to combine multiple handles to extra-process resources in a single system call, like SCM_RIGHTS or rename(), but I don't know where those would come up in the particular application domains you identify as problematic.

But maybe my proposed solutions above won't solve the problems I think they will, or maybe I'm missing the biggest issues entirely. What am I missing?

(I'm not claiming that this approach would be easier, safer, or more discoverable than the approach of adding a bunch of system calls; Benno's talk discusses the many ingenious ways a filesystem-based API can be hard to use, and no doubt some of the same criticisms can be leveled at the above strawman proposals. I'm just saying I don't see where it inherently fails to meet performance requirements.)

tedunangst
Perhaps getting into no true ship of theseus territory, but you can redefine read/write to do anything but it's arguable if that's still the same philosophy. Like, why even have open()? You could start each process with "/" already open, and then open other files by writing "open /etc/passwd" and reading back a new fd.
kragen
Yeah, so, is there a there there? I think there is. The open/close/read/write/getdents interface at least offers the possibility of certain kinds of REST-like uniformity, providing some benefits:

- A thin hourglass-waist interface makes it relatively easy to add new components to the system, whether clients, servers, or proxies. Plan 9 implemented both network fileservice and GUI windowing as filesystem-interface proxies, and they composed properly, allowing you to remote your windowing system over the network and to test new versions of the windowing system in a window. Less remarked on is the fact that if you need 500 system calls to invoke the full functionality of your operating system, most of those calls are going to remain inaccessible to newly ported scripting languages until you write a C extension module for them; and if implementing a fileserver involves handling hundreds of protocol messages, you aren't going to have very many kinds of fileservers. (CIFS, as far as I know, has only two: Windows and Samba.) ioctl and its demonspawn brethren are the reason we didn't have rr in the 1990s, when Michael Elizabeth Chastain wrote mec-replay.

- Putting all the system resources into a single namespace means that you can handle them uniformly in some other ways, like setting permissions and interactively exploring the hierarchy.

- A lot of system state can be meaningfully treated as data that can be either read or written, with certain useful properties: if you write and then read, then what you read is what you wrote (or is, unless there was an error or a subsequent state change), and if you write back something you read at some previous time, you restore it to the state it had at that time. The framebuffer is one example from Plan 9. The contents of NVRAM are a thing it might be more important to be able to back up and restore in this way.

- Also, byte-oriented things don't care what size your reads and writes are; you read the same sequence of bytes whether you read them one at a time or a million at a time. Usually.

- Naming things with strings means you can add more things later without breaking backward compatibility.

- A slightly richer interface that provides cache invalidation notifications (like inotify) can enable caching proxies and polling proxies, which are sort of dual to each other. Application containers (like Docker, but also like Vesta's build environment, and like rr) can provide isolation, reproducibility, observability, and auditing, but the difficulty of building them is multiplied by the number of different namespaces and system calls they need to interpose on.

- It's useful to unify the interprocess communication interface for sending a series of employee records from one process to another with the interface for writing them to a file, a tape, or a terminal, and the interface for receiving them from a file, a tape, or a terminal. script(1) and ttyrec and their kin take advantage of this polymorphic interface to make it possible to replay a terminal session later. Unfortunately as far as I know nobody has implemented a system that lets you record and play back GUI screencasts or mouse-click test scripts in such a simple way.

Now, open/close/read/write is really oriented toward the last point more than anything else, treating files as nothing more than recorded output streams that can be replayed. And you lose it if you start reading and writing multiple files! There are other uniform interfaces that have similar benefits for composability; SNMP provides one, REST provides another, and Named Data Networking proposes a third, one which unifies asynchronous notifications with satisfaction of read requests, which is sort of what Unix does too, except that as Benno complains, in Unix you have to bend over backwards to wait for any of multiple events, the polar opposite of the Win16 message loop. (Hilariously, as Benno points out, the Win32 APIs for nonblocking file and socket access are a total mess.)

The particular set of restrictions imposed by your chosen architectural style and protocols will determine what kinds of things you can do to your system once you have it running. Surely we can do better than the Unix filesystem interface in 2020.

kragen
I'm skeptical of Benno's claim that we should consider meritocracy "a dirty word" because "it's a lie". Certainly it is true that no community of practice achieves meritocracy, just as no polity achieves democracy — there are always some citizens with more influence than others, so it is always possible for the government to act against the interests and values of the majority of its population. Should we therefore consider democracy "a dirty word" because democracy "is a lie"? Perhaps it is better to consider it an ideal to which we aspire, without feigning to have achieved it.

Let's consider what alternative ideals are available in place of meritocracy for governing a community of intellectual practice.

We could strive for a democracy, in which the decisions are made by the majority — but the majority of whom? For this to meaningfully distinguish a community of intellectual practice from the surrounding community from which it arose, as a lotus blossom arises spotless from the swamp, a distinction must be made between voting members and outsiders. (Can you imagine a Linux User's Group where all the presentations are about Microsoft Windows, or a Python conference where all the talks are about Java?) But that is just a way of postponing the question of who the voters are.

We could strive for consensus, like the Quakers, in which any collective decision is postponed until every member agrees; but, like democracy, that demands gatekeeping that draws an ingroup/outgroup distinction, so it is not really an answer to the question of who governs, just how they govern.

We could strive for anarchy, in which all decisions are made individually, and there are thus no collective decisions to be made, whether by the meritorious or by anyone else. A variant of anarchy is "do-ocracy", where decisions are made by whoever shows up and makes the effort required to implement them.

We could strive for a gerontocracy, in which the oldest members — perhaps by length of membership rather than by physical age — make the collective decisions.

We could strive for a high-school clique, where the decisions are made by whoever is most popular.

We could strive for a plutocracy, where the decisions are made by whoever is wealthiest, or who donates the most.

Given these alternatives, it seems to me that when anarchy and consensus demand unacceptable tradeoffs, the least undesirable alternative is meritocracy. In meritocracy, the decisions are made by the best members of the group, according to some measure of merit that seems worthwhile to the group; in a community of intellectual practice, this usually amounts to some kind of knowledge and skill, seasoned with judgment and perhaps a guess about aptitude. If they are the wisest members, then they will make the best decisions. The greatest foolishness is to subject the wise to the government of the foolish. If some of the foolish and ignorant currently are so simply because they have not had the opportunity to learn, we can best remedy that by guiding them to learn from the wise, not by putting the foolish and ignorant in charge.

Of course, meritocracy as an ideal cannot be reached, only striven for; but it is a better ideal to strive for than a high-school clique, a gerontocracy, or a plutocracy.

Which of these are the campaigners against meritocracy hoping for?

Aqueous
am i the only one who thinks it’s a good thing that his Linux example was shorter than the other two and required less calling into hyper-specific API functionality?
dscpls
Awesome talk. Especially connecting how leaving an entrenched mindset behind applies both to architecture and to diversity in our communities.

And that our meritocracies are not as pure as we'd like to think.

dscpls
Lol - pointing out the elephant in the room really bugged someone
rumanator
Your post has nothing to do with pointing out elephants. You just threw out a bunch of baseless assertions mixed with gratuitous blanket personal attacks. Ignorance mixed with resentment is never helpful.
dscpls
Please point out my

- baseless assertions

- gratuitous blanket personal attacks

Right now it looks like that's what you're doing, because I don't see that in my comment.

Did you watch the talk to the end?

Edit: line breaks

rumanator
> Please point out my

If you really need someone to point this stuff out then you can start by looking at your puerile jab regarding merit, but I'm not sure you're inclined to contribute constructively to conversations.

wizzwizz4
I cannot refute your comment, because it is not actually clear what you are referring to. Your comment is too vague to be useful; it barely conveys information above a sentiment.

An unfalsifiable theory is useless. An unfalsifiable comment equally so.

dscpls
I'm literally repeating words and phrases from the talk.
lightedman
You just clearly demonstrated that you did NOT watch the entire talk through to the end. What OP stated is in fact present in the talk itself.
Koshkin
There is a term for this (invented by Pauli): “not even wrong”.
BlackLotus89
This talk took some weird directions. It's called "What Unix Cost Us", but to be honest that's not really what it's about.

Starts with USB-driver coding in Win, Mac, Linux.

Goes over to colonialism.

Made some good points about computer architecture and how it shapes programming languages, and then some bad/wrong points.

Ends on community cultures and some controversial thoughts about gender equality and then he doesn't take any questions.

Kicks off a shitstorm and then isn't ready to face it. I don't know; I've seen better videos on the same topic.

magicalhippo
It had some interesting points, but also some weird detours.

One of his core points revolves around hanging on to the way things were, and how that often ends up poorly.

Though I got the feeling that, really, it's often just very difficult to come up with good abstractions. Abstractions are, after all, what allow us to be so productive with our hardware.

For example, Electron and similar frameworks abstract away the operating system, which makes sense because for a lot of programs the specifics of the operating system don't really matter. However, JavaScript and HTML can hardly be described as the best way of obtaining that abstraction.

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.