HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Intel and Rust: the Future of Systems Programming: Josh Triplett

Intel Open Source · YouTube · 206 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention Intel Open Source's video "Intel and Rust: the Future of Systems Programming: Josh Triplett".
YouTube Summary
Hear about how Intel is working to bring Rust to full parity with C, building the future of systems programming.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Aug 25, 2019 · 198 points, 20 comments · submitted by mivvy
fouc
Interesting -- the video mentions that BIOS, firmware, and bootloaders used to be implemented heavily in assembly, whereas nowadays they are mostly C with a sprinkling of assembly.

What's the chance this is also due to fewer limitations in the hardware, rather than C compilers achieving "parity"?

dgaudet
Since C pretty much requires a stack, and a stack requires "RAM" of some form, it used to be necessary to train the memory system before executing any C code. However, once caches grew large enough and appropriate "cache as RAM" hooks were designed, it became possible to stand up a stack before the memory system was even alive. That definitely reduces the footprint which has to be assembly.
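A minimal sketch of that hand-off -- the names and the cache-as-RAM window address are illustrative, assuming a 32-bit x86 romstage and a GCC-style toolchain:

  /* Everything before this stub (enabling cache-as-RAM via MTRRs,
     etc.) has to be assembly, because there is no stack yet. */
  void romstage_main(void);          /* the first C code allowed to run */

  __asm__(
      ".globl car_entry         \n"
      "car_entry:               \n"
      "  movl $0xFEF10000, %esp \n"  /* top of a hypothetical CAR window */
      "  call romstage_main     \n"  /* ordinary C from here on          */
      "1: hlt                   \n"
      "   jmp 1b                \n"); /* romstage_main must not return */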
dragontamer
8051 C programmers remember their 8-bit stack pointer (256 words of stack space).

Enough for C to work out in most programs. Modern x86 systems have 64 kB of L1 cache and 2 MB of L3 cache to work with -- more cache than early systems had in total RAM...

vardump
8051 programmers would love to have 256 bytes of stack (an 8051 word is a byte). But because IRAM is also needed for register banks (8-32 bytes, depending on how many you use), C library temporaries, etc., you're really left with just 40-100 bytes of stack.
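Back-of-the-envelope, using the usual 8051 memory map (the overlay and library figures are illustrative, not from the thread):

    256 bytes  IRAM total
  -   8..32    register banks (1..4 banks x 8 registers, R0..R7)
  -   0..16    bit-addressable area (0x20..0x2F), if used
  -  the rest of DATA: globals, compiler overlay segments,
               C library temporaries (easily 100+ bytes)
  = roughly 40..100 bytes actually left for the stack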
userbinator
CAR definitely works now that a typical CPU has more cache than an entire PC from the early 90s had RAM, but it's something BIOSes (even pre-UEFI ones) have been doing for a long time -- probably ever since CPUs had built-in caches that were large enough.
wyldfire
Memory-mapped registers also help avoid the need to drop into inline assembly. I didn't get to the end of the video, but it didn't seem like that was mentioned.
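(For illustration -- the device and addresses are made up: with memory-mapped registers, a device poke is just a volatile load or store in plain C, where port I/O would need an in/out instruction.)

  #include <stdint.h>

  /* hypothetical UART mapped at 0x10000000 */
  #define UART_TX   (*(volatile uint32_t *)0x10000000u)
  #define UART_STAT (*(volatile uint32_t *)0x10000004u)
  #define TX_READY  (1u << 0)

  static void uart_putc(char c)
  {
      while (!(UART_STAT & TX_READY))  /* volatile: re-read every pass */
          ;
      UART_TX = (uint32_t)c;           /* volatile: store not optimised away */
  }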
alxlaz
> What's the chance this is also due to fewer limitations in the hardware, rather than C compilers achieving "parity"?

I'd say it's pretty small :). There's this quasi-mythical image of assembly language being used by grey-bearded programmers for clever optimizations and resource-constrained code, but it's not entirely accurate. It's true that, since compilers have to generate code that is correct in the general case, a good assembly programmer can beat them on size (more on that in the edit at the end).

The video glosses over the details of the timeline a bit, but most bootloaders I've seen after 1998 or so are definitely written in C, save for the early stages, which are ASM largely because they're very CPU-specific and there's no reasonable way around that (i.e. they largely consist of stuff like "set bit X of register Y" and "load the address of Z into register W" -- all of which signals the CPU to switch into some particular mode, or tells it where a region of memory with some specific meaning is).
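For a concrete taste of that "set bit X of register Y" business -- a sketch, assuming a GCC-style toolchain: on x86, entering protected mode means setting bit 0 (PE) of CR0, which no portable C construct can express:

  static inline void enable_protected_mode(void)
  {
      unsigned long cr0;
      /* read-modify-write of a control register: inherently asm territory */
      __asm__ volatile("mov %%cr0, %0" : "=r"(cr0));
      cr0 |= 1ul;                                  /* CR0.PE, bit 0 */
      __asm__ volatile("mov %0, %%cr0" : : "r"(cr0));
      /* a far jump to reload CS must follow immediately; omitted here */
  }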

Prior to that, it was more or less a combination of mistrust of C compilers, assembly language being pretty common, and hardware and prevailing standards limitations, more or less in that order (the first two might be reversed depending on who you ask). For example, when it comes to bootloaders, the MBR (it used to be called just "the boot sector") gave you only 512 bytes for boot code -- less, actually, once the partition table and signature were carved out of it. 512 bytes used to be enough if what you wanted to boot was PC DOS 3.0, but definitely not for much else.

That was solved pretty quickly, though -- albeit through a pretty convoluted scheme where the boot sector would contain only a small piece of code (the stage 1 loader) which prepped the system for, and loaded, subsequent stages, all the way up to a full-blown boot prompt with menus and graphics and whatnot. It involved all sorts of trickery, too: you had to switch the CPU to the right mode (not a problem on most CPU architectures, but a bit of a problem on x86), handle all sorts of crazy complications (remember logical partitions?), and so on.

A 386 would have been absolutely sufficient for all this, so the tech to do it existed all the way back in 1985. But for a long time, a lot of this floated around in assembly form in technical manuals and on various bulletin boards, so that's how people learned about it. Assembly language was common, and there were good, free assemblers around (good, modestly-priced or easy-to-pirate C compilers were available for DOS starting in the late 1980s, but the world didn't really get a good C compiler for free until gcc became popular), and a lot of this was intimately tied to the CPU anyway, so assembly was pretty much the natural choice. There wasn't much need for more complex bootloader code, either: with most hard drives in x86 computers in the tens-of-megabytes range, there was rarely a reason to do fancy things in the bootloader.

With BIOS and firmware it's a slightly different story. This was code that not only had to fit in a small amount of ROM (less of a problem than you'd think with C), but was also very CPU-specific, ran in a restricted environment (no RAM in some places, for example), and had a very long history behind it. I kinda lost touch with this field in the last couple of years, but back in the 00s, the codebase in a lot of BIOS ROMs dated back to the 1980s -- updated, of course, but the "initial commit" was in the 1980s. It had been through audits and certifications and long bug-fixing sessions (some less efficient than others, one might argue). For a long time, big providers didn't see any reason to rewrite any of this in C. Sometimes they'd license it to other companies (some of them pretty big) that wished it were easier to customize (and often screwed up their customizations), but life was what it was.

They did eventually move to "C with a sprinkling of assembly", but in principle they could have done that as early as, what, 1992? I've definitely seen firmware code from that era written in C and ASM. But let's call it 1996 to be a bit conservative. Definitely 1998. A lot of companies didn't move to that until way later, though, because there's a sort of caution (or inertia, depending on how you look at it) that being responsible for a high-volume product selling millions upon millions of copies tends to impart. Or at least it did back then, when firmware updates were difficult to deliver.

There are some hardware limitations that do play a role in this. For example, the earliest stages of firmware code run without RAM, so they run without a stack, which makes C a no-go. (I dimly recall a C compiler that offered very limited support for that kind of environment -- but only with a lot of macro hand-holding, and it was bad to the point of uselessness.)

But by and large, that's not what held people back. If you work with a good compiler and understand what happens behind the scenes (not a big problem if you stick to readable C without clever tricks), having just 8K of ROM and 512 bytes of RAM isn't necessarily so constricting that you just can't live without assembly language. I've seen (and written) non-trivial firmware code in C that did a lot of stuff in that kind of space.

Edit: if you're good with assembly, and you exploit the problem-space knowledge that you (unlike a compiler) have, you can write less general code than the compiler generates and generally beat it in terms of size. So you can squeeze more stuff into X amount of code than a compiler could. Whether or not you want to do that is another story. With firmware code, yeah, maybe -- at least back when 256K of non-volatile memory was a significant chunk of the motherboard's cost at high production volumes. Nowadays I'm pretty sure the answer is no (but I haven't written firmware code for that kind of hardware recently, so take it with a grain of salt). With bootloaders, I'm pretty sure the answer has been no for more than 20 years now :).

techntoke
My guess is that the assembly portions are used to get better optimization than is available from C, which is also pretty common in Linux kernel development.
userbinator
Indeed, compilers are still very easy to beat on size optimisation, and the whole (U)EFI monstrosity was largely enabled by higher-capacity BIOS EEPROMs, which were in the 128/256 KB range for a long time before multi-MB parts became available. Now that motherboards with 16 and even 32 MB(!) EEPROMs are available -- more than the entire RAM of regular PCs throughout the 80s and early 90s -- firmware has bloated considerably. In contrast, the original IBM PC had an 8 KB BIOS.

If you looked at a pre-EFI BIOS, it was exclusively written in very size-optimised and elegantly clever Asm; now it's just the usual bloaty compiler output you can find everywhere else, which makes me a bit sad.

minipci1321
To be fair to modern BIOSes, there is a whole lot more hardware to initialize than in the 80s and 90s; the added ROM space isn't consumed only by bloated compiler output. Some motherboards also store a second, pristine copy of the BIOS image for fail-safe upgrades.

> and elegantly clever Asm

I personally miss returning a boolean result in the carry (C) flag.
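(For those who never wrote that style: classic BIOS services report failure by setting the carry flag. A sketch of the calling side, assuming a real-mode C toolchain in the gcc-ia16 mold:)

  #include <stdbool.h>
  #include <stdint.h>

  /* INT 13h / AH=00h resets the disk system for drive DL; the BIOS
     reports failure by setting the carry flag -- the "boolean in
     the C flag" convention. */
  static bool disk_reset(uint8_t drive)
  {
      uint16_t ax = 0x0000;            /* AH=00h: reset disk system */
      uint8_t failed;
      __asm__ volatile(
          "int  $0x13 \n\t"
          "setc %0"                    /* materialise CF as 0/1 for C */
          : "=q"(failed), "+a"(ax)
          : "d"((uint16_t)drive)
          : "cc", "memory");
      return !failed;
  }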

nordsieck
> If you looked at a pre-EFI BIOS, it was exclusively written in very size-optimised and elegantly clever Asm; now it's just the usual bloaty compiler output you can find everywhere else, which makes me a bit sad.

As a person who cares way more about software reliability than cleverness, I am unreservedly happy about this change.

kjeetgill
I don't think most people find UEFI more reliable than BIOS. Sometimes clever code is also just attention to detail.
JohnStrangeII
However, as a rule of thumb, larger and more complex systems tend to have more bugs than smaller and simpler ones.
marcosdumay
They are harder to read, too.

The trick is to create larger but simpler systems, instead of larger and more complex ones. That's hard to do for user-facing software, but there is no reason firmware had to become the monster we have now.

wtallis
The extra firmware capacity, and the programming productivity it enabled, wasn't spent on improving reliability. It was spent on adding a silly GUI to the firmware. My experience is that annoying motherboard firmware bugs are just as easy to find today as they were in the early 2000s, when ACPI was being implemented in assembly.
tormeh
I think this can be generalized. Computer programs always seek a minimum floor of cognitive demand on their programmers. Whenever the cognitive load is lower than the floor, features will be added until it reaches the floor. Let's call this floor "The Floor of Pain". At this point, the program is sufficiently challenging to maintain and extend that the developers are barely able to add any new features. Any tools, languages, etc. that help developers handle more complex programs therefore lead to more features being added until the application reaches The Floor of Pain again -- simply because it is a manager's job to fill the feature pipeline with work, and maintenance does not yet fully occupy the programmers.

It follows that when Rust replaces C, the kind of programs that are now written in C will have their complexity grow dramatically.

nevi-me
Josh talks about a lot of exciting things that Intel is driving in Rust. Does anyone know if AMD is doing anything similar, especially around SIMD?
BlackMonday
This would interest me as well, especially since someone here on HN wrote that Intel has more software developers than AMD has employees (10,000) -- which wouldn't surprise me, considering that Intel has ten times as many employees in total. So AMD has to be more selective about what they explore/support/etc.
pas
Intel also makes SSDs, network cards, servers (or at least it used to), wifi/modem chips; it has fabs, and so on.
cryptonector
Definitely more work on linker functionality.

Advanced linker technology is a big part of what makes C great for systems programming in user-land:

  - weak symbols
  - explicit interposition (the INTERPOSE flag
    on Solaris/Illumos, LD_PRELOAD generally)
  - collision avoidance (recording dependencies where
    they are needed, rather than globally)
  - direct binding (Solaris/Illumos) /
    versioned symbols (Linux)
  - libdl / dlopen() and friends, including dladdr()
    (LoadLibraryEx() and friends on Windows)
  - filters and auxiliary filters (Solaris/Illumos)
  - audit objects
Static linking could have a lot of the above -- and really ought to -- but static linking technology for C is stuck in 1980. This need not be so for Rust! Please, please, if you build static linking for any language, make sure to build ELF-style semantics, not C static linking semantics!
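To make two of those concrete -- a minimal C sketch of a weak symbol (overridable by any strong definition at link time) and of runtime lookup via libdl; this is standard ELF/POSIX behaviour, nothing exotic:

  #include <dlfcn.h>   /* dlopen/dlsym; link with -ldl on older glibc */
  #include <stdio.h>

  /* weak definition: any object that supplies a strong log_hook wins */
  __attribute__((weak)) void log_hook(const char *msg)
  {
      fprintf(stderr, "default hook: %s\n", msg);
  }

  int main(void)
  {
      log_hook("starting");

      /* the dynamic half of the list: look up a symbol at runtime */
      void *h = dlopen("libm.so.6", RTLD_NOW);
      if (h) {
          double (*cosine)(double) =
              (double (*)(double))dlsym(h, "cos");
          if (cosine)
              printf("cos(0) = %f\n", cosine(0.0));
          dlclose(h);
      }
      return 0;
  }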
Aug 24, 2019 · 8 points, 0 comments · submitted by pjmlp
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.