
Hacker News Comments on
P99 CONF: Rust, Wright's Law, and the Future of Low-Latency Systems

ScyllaDB · Youtube · 120 HN points · 2 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention ScyllaDB's video "P99 CONF: Rust, Wright's Law, and the Future of Low-Latency Systems".
Youtube Summary
🎥 Watch all the P99 Conf 2021 talks here: https://www.p99conf.io/

The coming decade will see two important changes with profound ramifications for low-latency systems: the rise of Rust-based systems, and the ceding of Moore's Law to Wright's Law. In this talk, we will discuss these two trends, and (especially) their confluence -- and explain why we believe that the future of low-latency systems will include Rust programs in some surprising places.

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
This talk proved prophetic for me; since giving it three years ago, most of my technical work has been in the development of a de novo operating system in Rust -- not for its own sake, but rather because it is the right approach for the problem at hand. I talked about this a bit in a recent talk[0], but expect much more here soon, as we will be open sourcing this system in a few weeks.

Beyond the new system (called, appropriately enough, Hubris), we have been using Rust at more or less every layer of the stack: microcontroller firmware, early boot software, operating system kernel, hypervisor, and (of course) user-level. Again, this is not by fiat or to do it for its own sake; we are using Rust emphatically because it is the right tool for the job.

More generally: Rust has proved to be even more important than I thought it was when I gave this talk. My blog post from a year ago goes into some of this updated thinking[1], but honestly, my resolve on this continues to strengthen based on my own experience: Rust is -- truly -- a revolution for systems software.

[0] https://www.youtube.com/watch?v=cuvp-e4ztC0

[1] http://dtrace.org/blogs/bmc/2020/10/11/rust-after-the-honeym...

infogulch
I liked your talk, and the talk you referenced by Timothy Roscoe [0]. My understanding of your talks is that the issue we seem to be running into with system architecture design is that OS and userspace developers are clinging desperately to a dead model of the system as a homogeneous array of cores attached to a large bank of unified memory. This falsehood is so deep that systems basically lie to us about 95% of their internal structure just so that we can continue playing out our little fantasy in obliviousness.

The biggest component of that lie is the unified memory assumption. To be fair to OS & app developers, writing for NUMA is hard; there are enough invariants that must be continuously upheld that it's impossible to just expect authors to keep everything in their head at all times. And using a language like C, described as "portable assembler", does not help at all.

Enter Rust, where the novel ownership system and strong type system allow encapsulating special knowledge of the system and packaging it up into a contained bundle that can't be misused. Now you can compose multiple of these un-misuse-able (it's a word now) lego bricks reliably because the compiler enforces these invariants, freeing the author from the burden of reflecting on their design from N! perspectives every time they add a line. (Well, they still reflect on it, but they are privileged to reflect on it right after they make a mistake, when the compiler complains, instead of in an hours-long debugging session using a debugger or, worse, a specialized hardware debugging device.)
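
A minimal sketch of that kind of "lego brick" (the 64-byte alignment rule here is a made-up stand-in for whatever hardware invariant applies): the invariant is checked once at construction, and the type system guarantees downstream users only ever see values that satisfy it.

    pub struct AlignedAddr(usize);

    impl AlignedAddr {
        /// The only way to obtain an `AlignedAddr`; the invariant lives here.
        pub fn new(addr: usize) -> Option<AlignedAddr> {
            if addr % 64 == 0 {
                Some(AlignedAddr(addr))
            } else {
                None
            }
        }

        /// Callers can read the address, but can never construct an unaligned one.
        pub fn get(&self) -> usize {
            self.0
        }
    }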

---

Your talk focuses on no_std, which is really a foundational necessity of such a new OS (the term "Operating System" feels too small now, maybe "Operating Environment"/OE? idk). I think the next important component is a path out of UMA-land, which I don't think is fully solved in Rust at the moment (that's not a knock, it's not solved anywhere else either). There's an ongoing Internals thread that started as a discussion about usize vs size_t across different architectures and has now dug down to questions such as "what even is a pointer?", "how are pointers represented as data?", and "how should you convert a bucket of bytes into a usable pointer?" [1] -- these are exactly the type of questions that Timothy's talk reveals as important (and that have hitherto remained unanswered) and that you hinted at.

During the discussion, Ralf Jung presented an interface that would enable constructing a pointer from an integer by also separately identifying its provenance; I feel like this is a good direction.

    /// Returns a pointer pointing to `addr`, with the provenance
    /// taken from `provenance`.
    fn ptr_from_int<T>(addr: usize, provenance: *const T) -> *const T
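
A rough sketch of how such an interface might be used -- the MMIO names and the 0x04 offset below are hypothetical, and the body of `ptr_from_int` is written in terms of the pointer `with_addr` method (which has the same "this address, that pointer's provenance" shape) only so the example is self-contained:

    fn ptr_from_int<T>(addr: usize, provenance: *const T) -> *const T {
        // Illustrative body: the returned pointer carries `addr` as its address
        // but inherits which memory it may legally access from `provenance`.
        provenance.with_addr(addr)
    }

    fn status_reg(mmio_base: *const u8) -> *const u8 {
        // The address is computed as a plain integer (base + datasheet offset),
        // but the provenance comes from a pointer we already legitimately hold.
        let addr = mmio_base as usize + 0x04;
        ptr_from_int(addr, mmio_base)
    }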
---

What do you think of my summary? What do you think of the ongoing discussion on this Internals thread about the right way to construe Rust's memory model? What do you think of this idea presented by Ralf?

[0]: https://www.youtube.com/watch?v=36myc8wQhLo

[1]: https://internals.rust-lang.org/t/pre-rfc-usize-is-not-size-...

infogulch
I should start a blog instead of using HN comments as an outlet.
Oct 08, 2021 · 120 points, 87 comments · submitted by zdw
chubot
Reminds me of "Your computer is already a distributed system. Why isn’t your OS?" from HotOS 2009:

https://www.usenix.org/legacy/events/hotos09/tech/full_paper...

> We argue that a new OS for a multicore machine should be designed ground-up as a distributed system, using concepts from that field. Modern hardware resembles a networked system even more than past large multiprocessors: in addition to familiar latency effects, it exhibits node heterogeneity and dynamic membership changes.

And now that I look at the author list, I see Roscoe, who gives the 2021 keynote that Cantrill recommends:

https://www.youtube.com/watch?v=36myc8wQhLo

thelazydogsback
QNX? - been around for a while - the first OS to run in protected 386 mode FWIR. (And I remember it booting from a single floppy :))
brundolf
So the idea is that the low-level abstraction becomes more like a distributed system where different pieces of actual hardware coordinate independently, instead of where they all go through this single, central CPU that has the final word on everything? Almost like "edge computing" at the scale of the motherboard
bitwize
I hate to be the smug Amiga weenie but the Amiga was doing this in 1985... the idea that everything goes through the CPU is largely an accident of PC dominance and the dearth of beauty and elegance in system design resulting therefrom.
brundolf
Interesting! I didn't know that
zozbot234
The early home computers were invariably designed around the video signal clock, so they were quite deterministic and predictable - very little was truly 'independent'. The PC with its custom extension cards was more of a pure distributed platform.
devmunchies
He says that Rust can fit into these hidden cores (special compute elements) but that there won't be dedicated CPUs or memory.

I'm not a hardware guy, so I can't quite imagine what you could do on these cores without CPUs or meaningful memory.

ncmncm
They have CPUs, and memory. The memory is generally managed in a more restricted and idiosyncratic way than POSIX programs like to use. The CPUs are often not what you are used to compiling code for.

The tricky part is providing a programming model that is usable, understandable, and useful, given the wide variability of the conditions the code would need to operate in. The eBPF project has made remarkable progress in that direction. eBPF code can be generated from C, C++, Rust, and a bunch of other languages. The "no_std" feature in Rust has no substantial role in getting your code compiled down to eBPF, in preparation to be further translated to the object code that actually runs in the peripheral gadget.

f-jin
> The "no_std" features in Rust has _no_ substantial role in getting your code compiled down to eBPF

Is that a typo or intended?

ozten
These hidden cores are fascinating.

In the past, I was interviewing at a hardware company, and one of the things they had just discovered was an "unused" and undocumented processing unit that they could run a process on to squeeze more compute out of the cameras they used for calibration, within the hardware bill of materials they had already settled on.

bcantrill
I know that I recommended it in the talk, but I highly recommend Timothy Roscoe's OSDI keynote[0] -- and it may also be worth catching the Twitter Space we did a few weeks ago discussing it.[1]

[0] https://www.youtube.com/watch?v=36myc8wQhLo

[1] https://github.com/oxidecomputer/twitter-spaces/blob/master/...

zja
I discovered Oxide's youtube channel a couple weeks ago, and I've been really enjoying listening to your twitter space discussions. Thanks for all the interesting content!
sam_bristow
Oooh, thanks for pointing that out. I really enjoyed their "On the Metal" podcast while they were running it.
aidenn0
Given that you continuously recommend Roscoe's keynote, and you are working on a message-passing microkernel: will that microkernel support message passing between heterogeneous cores with heterogeneous physical address spaces?
bcantrill
I'm not sure what you mean by "continuously", but the answer to your question is "no, it doesn't."
CalChris
I liked your talk but I didn't get the connection with Wright's Law. You did a good presentation of WL, but I just didn't get its connection with low latency and future hardware/software co-design.
zozbot234
I guess the connection is just that Rust is an up-and-coming language, so developing with it is going to get gradually easier and cheaper as the surrounding ecosystem grows to a larger scale.
aidenn0
- Low latency is desirable

- Transistors are getting cheaper

- Therefore it is cheap to put computing closer to I/O (which reduces latency)

- That computing will not have the sorts of things a general-purpose CPU has (maybe no MMU, limited RAM, perhaps it's a Harvard architecture)

hinkley
We're swinging from "The Network is the Computer" to "The Computer is a Network".
convolvatron
A channel controller! Last time I used one of those it was a 68k hanging off a Convex.
dralley
Hey Bryan, just curious, what are your thoughts on the tradeoffs that Zig is making compared to Rust?
RobLach
That keynote is great. It feels like looking up and seeing the sun from the floor of a deep hole you've been too focused on digging.
sremani
Roasting a conference in the keynote is a whole other level of charisma. This talk is impressive even for non-OS people like me.
ncmncm
The first 15 minutes are just historical rehash, and can be skipped over without missing anything at all.

The rest just says that compute elements are showing up in our peripherals, and that we will need, and want, to program them. (People have said this for decades, but there has been little movement toward enabling it, because it will always be hard, for reasons.) Then it makes the absurd claim that Rust is uniquely capable of programming those elements, because it has a feature called "no_std", where your build fails if your code tries to use any standard library features.

Of course, all those peripheral processors are already programmed, today, almost all in C and C++. It is arbitrarily hard to run your own code on most, today, because where it is possible at all, you need to solder in a JTAG connector and reverse-engineer the code in there to figure out how your code can operate in the environment.

Lots of peripherals, though, get their object code loaded into them by the kernel driver at startup before they will do anything useful.

Maybe someday it will get easier to download your own code into them from a running system, to extend what they are already doing, and maybe someday somebody will document what your code would need to do to contribute to such operation. But if that ever happens -- and, to be clear, it is happening in certain, select places, like some NICs -- Rust will have no advantage over other languages. Using "no_std" will not materially help.

For example, a company called Netronome today has a NIC that lets you run your own code on it, e.g. to filter or alter packets before they get DMA'd out to the ring buffer where the kernel sees them. It is programmed using eBPF, which is a virtual object code format that a kernel driver will translate to native machine code. eBPF code can be generated by running LLVM over a file generated by a compiler for C, C++, Rust, or really almost any language that can be compiled to LLVM intermediate code, and offers some way to call out to a C library API.

The Netronome kernel driver takes your eBPF object code, compiled from any language, translates it to machine code for one of the cores in the NIC, copies that onto the NIC, and patches it into what is already running there.

Of course none of the other languages have "no_std", and don't need it. Your Rust code doesn't need it either. Don't want to use library features? Just don't use them. The things an eBPF program is allowed to do are quite limited, but surprisingly powerful, including calling out to a special ("standard") eBPF library. That Rust's sum types are core language features gives it no advantage over the equivalent C++ Standard Library features that (also) do not depend on linking to a runtime support library.

brundolf
> Of course none of the other languages have "no_std", and don't need it. Your Rust code doesn't need it either. Don't want to use library features? Just don't use them.

1) no_std allows composability. One of Rust's biggest strengths is its package ecosystem, and I imagine this will only become more true in the increasingly "weird" and sprawling hardware ecosystem described by the OP, where the ability to re-use code that other people might have written for your hardware could save you from having to "solder in a JTAG connector and reverse-engineer the code in there to figure out how your code can operate in the environment". "Just don't use [the standard library]" doesn't work when you want to use third-party libraries.

2) no_std's static checks are nice even in your own code. The value of static analysis has been argued about so many times in threads like these that I don't feel like re-treading that ground. But suffice to say: it's clear some people think they can write flawless code and don't see the benefits of assistance, but plenty of others know their limitations and benefit from static analysis.

3) Rust seems to have straddled an interesting and useful line in terms of which features are available in these limited contexts and which ones aren't, giving you as much to work with as possible. And thanks to #1 and #2 you can use those features fearlessly; you never have to guess about whether or not they're in the "safe" category. Hygienic macros in particular can act as a compile-time force-multiplier when you want better abstractions but don't want to (or can't afford to) do extra things at runtime.
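
As a tiny illustration of that last point (register names and offsets made up), a declarative macro can stamp out zero-cost accessors at compile time:

    macro_rules! mmio_reg {
        ($name:ident, $offset:expr) => {
            /// Returns a raw pointer to the named register for a given peripheral base.
            pub fn $name(base: usize) -> *mut u32 {
                (base + $offset) as *mut u32
            }
        };
    }

    mmio_reg!(status, 0x04);
    mmio_reg!(control, 0x08);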

As always comes up in these discussions: yes, you can technically cover this usecase with C and C++. You can technically cover it with raw assembly code, or binary strings, or a pair of wires. Don't discount ease-of-use as a factor that can take things from "technically possible but practically infeasible" to "doable and worth doing".

jmull
I'm not sure I'm understanding this correctly... Rust libraries are generally composable, so that's not what makes no_std interesting.

I guess the advantage is that it sets a bar that means "can run in a wide range of environments" and it's a standard bar so that everything that's "no_std" can (more or less) run in the same wide range of environments.

But I don't know. A standard bar is nice, but I'm not sure it's really that much of a help.

I just dabble with embedded programming as a hobby, but it doesn't seem to me like non-Rust approaches (I'm thinking of C and Python) are suffering from a lack of composability.

ncmncm
The speaker does what many promoters of niche languages do: picks out a named feature and insists that it makes possible what other languages don't. It is immediately walked back to "makes feasible", and shortly "makes more convenient", before dissolving in meaninglessness.

I don't think Rust needs or benefits from that kind of promotion. It's a pretty good language that will get usable in more places as it matures. Overpromoting it into places it is not mature enough for yet causes problems. Is Rust mature enough for embedded use? In some places, certainly. In all places? Certainly not.

brundolf
Making enough easy things "more convenient" can make harder things "feasible", which can lead to virtually-impossible things becoming realistically-possible.

You've intentionally taken mine and others' words in the most uncharitable ways possible and you've displayed a general willingness to make this an unproductive discussion, so I'm going to stop engaging with you now.

ncmncm
I would be happy not to contradict you if you were to post comments that do not express absurdities.

Rust the language and Rust the community do not benefit from people posting transparent falsehoods. There is plenty of substance to Rust. It doesn't need puffing.

bcantrill
Wow, what a caustic collection of strawmen. So, I am not a "promoter of a niche language"; I am explaining why a language that many already find compelling is, in my experience, very compelling for our use case. I am not "immediately walking back anything"; I very much stand by everything I said. And I am certainly not saying that Rust is a fit for all use cases. What I am saying is that it's a fit for ours, and that I believe that our use case will be seen by an increasing number of engineers as we see more and more constrained compute elements in more and more places. You clearly disagree, which is fine -- but that doesn't invalidate our experience.
ncmncm
Honestly, failing to walk it back is strictly worse.

First discovering the world of embedded programming is tremendously exciting. We've all been there. Discovering all these embedded processors in equipment you already own compounds the excitement.

But programming "channel processors" goes back to the early '60s, six decades ago. We already know how. We haven't needed Rust. Embedded subsets are also re-invented over and over. That idea hasn't aged so well.

In practice, there is no upper limit to the complexity of code you might want to run in a channel processor, so you run into the boundary of your subset early. You often discover you want resource management support, so you link in an embedded RTOS. (OS as library did not originate with Unikernels.) There are many, many free RTOSes, many of them very good. The more POSIXy they are, the less fun and educational they might be.

bcantrill
I'm not sure what argument you think I'm making, and I'm even less sure what counterargument you're making (that... no one should implement embedded systems in Rust I guess?). Regardless, it's clear that Rust hits an emotional nerve for you; I'm sorry that you found this talk so agitating!
ncmncm
I have already explained at length. You are not obliged to read any of it, and evidently haven't. Repeating it here would be pointless.

But, to be clear: 1. There is nothing wrong with Rust, or with coding embedded Rust. 2. Rust brings nothing new to the embedded table, beyond what it does with ordinary programming. 3. no_std has been found by repeated experience over many decades to be a failed idea.

I don't like people promoting falsehoods on the internet. Instead of apologizing, just don't!

brundolf
Consider this scenario: you're writing some code that needs to run in an environment where some aspect of the standard library isn't an option (allocation, or whatever else)

Now you want to pull in a library

Does that library work in that constrained environment, or will it break in obvious or subtle ways?

In Rust, no_std is a first-class crate attribute. It's enforced inside the crate, and when that crate is pulled in as a dependency, it tells your crate as much. It's impossible (as far as I know) to accidentally use a std crate from your no_std crate, or for a no_std crate to accidentally use something from std internally (or in its dependencies). You can search for a crate on crates.io, and it can have dependencies of its own and dependencies for those dependencies, and you can integrate it all into your project without having to dig into the source code or whatever else to try and find out whether it will fit this set of constraints. That's powerful.
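
A minimal sketch of what that looks like in a crate (the function is made up): with the attribute in place, core is available, std is not, and any accidental use of std -- here or via a dependency built without its "std" feature -- is a hard compile error rather than a surprise on the target.

    #![no_std]

    use core::num::Wrapping;

    /// Sum the bytes with wrapping arithmetic; no allocation, no OS, no `std`.
    pub fn checksum(data: &[u8]) -> u8 {
        data.iter().fold(Wrapping(0u8), |acc, &b| acc + Wrapping(b)).0
    }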

In C you would have to either write everything yourself, read through all of your dependencies, or just cross your fingers.

ncmncm
Or, you link and see what symbols need definitions. As, in fact, everyone already does, and has done for decades.

Since no_std would forbid an enormous amount of what you probably also want to use, you probably don't use it. Instead, you see what symbols the linker says need to be defined in your runtime support library, and add those.

jmull
Thanks... that is what I thought. Honestly, like I mentioned, I don't see that as a particularly useful advantage.

The thing is, there is no single line that makes a library suitable for my project, whether it's for a constrained environment or not. I carefully consider many aspects of a library, all from the perspective of what, specifically, my project needs. no_std might be one thing to look for, but I don't know that it's really answering that many of my questions. Also, in my embedded dabbling, there seem to be such an incredibly rich set of libraries available for constrained environments it is astounding. So the lack of no_std doesn't seem to be holding non-Rust back.

> In C you would have to either write everything yourself, read through all of your dependencies, or just cross your fingers.

I really don't think no_std changes that at all.

brundolf
Fair points. I wonder what it would look like to have multiple kinds of enforceable crate-level constraints so that everyone doesn't have to take or leave the same "single line"?

> in my embedded dabbling, there seem to be such an incredibly rich set of libraries available for constrained environments it is astounding

What about more general libraries that aren't specifically designed for embedded scenarios? It seems like being able to know up-front whether or not those, for example, allocate, would be helpful

ncmncm
> "you can technically cover this usecase"

Not just technically: essentially all code covering "this usecase" -- operating peripheral cores -- is today C++ or C code.

It would be more meaningful to say you can technically do it in Rust, because it is possible in principle to do it, even though practically nobody ever has, and vanishingly few ever will.

Everyone who makes up an "embedded subset" congratulates themselves for identifying just the right subset. Then they immediately start moving other stuff over the line. And never stop. This is all familiar ground. All that is new is new people noticing it.

An embedded program in any language that relies on a library function not implemented in its runtime environment will, in any case, fail to link. That is static checking. Rust is not doing more static checking here. It is just doing it sooner, and forbidding a truly enormous amount of what would also be useful to embedded programs. Most of us would rather have access to all of that, and let the linker identify what support is needed for what is used.

brundolf
> essentially all code covering "this usecase" -- operating peripheral cores -- is today C++ or C code.

When has "essentially all...today" ever placed a limitation on future progress?

ncmncm
Not the point. Saying something is only "technically" possible when in fact it is already done in billions of devices is to suggest an absurdity.
tialaramex
> Not just technically: essentially all code covering "this usecase" -- operating peripheral cores -- is today C++ or C code.

How about you break "essentially all" down between these two different languages?

You must have good statistics on this, to have such confidence to say "essentially all" here, and yet for some reason you lump together C and C++ as if they're the same when you know they aren't. Might it in fact also have been true to say "essentially all" are C?

> Most of us would rather have access to all of that,

Which notable embedded systems did you program ncmncm? Or is "us" here based on a survey you can point us to about other embedded systems programmers who actually have practical experience?

ncmncm
If you knew of a language other than C or C++ used for any substantial fraction (say, more than 0.1%) of peripheral device designs, I am confident you would have cited it.
tialaramex
If we're just guessing here then I'd guess there'd be some considerable fraction of Assembler. Especially when you're very constrained. If your total code footprint is 16 kilobytes, suddenly it doesn't seem daunting to think about the actual hardware instructions - there's only a few thousand of them after all.

But people also seem to be merrily using special-purpose high level languages for their domain e.g. P4. What I couldn't find much of was C++. If you had numbers we'd be looking at them, right?

brundolf
I thought the "historical rehash" was interesting and well-presented
gnurizen
Anybody else chuckling at the irony here that eBPF is inspired by dtrace which was invented by Cantrill?
ncmncm
No.

If you want to do stuff like this, anything you use to do it will have to look a lot like eBPF. eBPF doesn't make it easy, it only makes it possible. But dtrace was not eBPF.

bcantrill
Well, it would have been hard to be eBPF because it pre-dated it. But perhaps you meant to say that eBPF is not DTrace? On that point, certainly agreed.
ncmncm
Dtrace provided huge value for quite little implementation effort. It has taken a positively enormous amount of far more grueling, detailed work to make eBPF much more capable. It might not have been done without dtrace demonstrating the value available, but I credit eBPF to the people who did that hard work.
secondcoming
> The first 15 minutes are just historical rehash, and can be skipped over without missing anything at all.

I think the software world has got to the point where the obligatory Moore's Law graphs, etc are no longer required when talking about CPU performance.

zozbot234
You can't just take Rust code and transpile it into eBPF, it's not a Turing-complete language. Now WASM with the addition of some tailored API's could do what you're talking about.
ncmncm
Yet, Turing-complete or no, Rust (like C, like C++) can in fact be compiled to LLVM intermediate code, and that intermediate code, provided it conforms to eBPF requirements, can in fact be compiled down to eBPF. And, that is how what I'm "talking about" is in fact done today.

It is not being done with WASM, and probably will not be.

Animats
> The first 15 minutes are just historical rehash, and can be skipped over without missing anything at all.

Right. Short version: Moore's law over, fabs too expensive.

> Then it goes on to make the absurd claim that Rust is uniquely capable of programming those elements. This is asserted to be because of a feature called "no_std", where your build fails if your code tries to use any standard library features.

He's a bit vague there. What he's getting at, though, is that a common heap is a problem when you have enough CPUs. Shared-memory multiprocessor architecture is hitting a wall. Caches, and cache sharing, and cache intercommunication have made it possible to get a large number of CPUs to pretend they share memory. But they really don't share, and trying to maintain that illusion adds considerable overhead.

This is an old observation. It's led to lots of distributed systems - the Transputer, the Cell, and a whole bunch of experimental one-offs. All failed.

Now, there are successes of loosely coupled parallelism. GPUs. Neural net simulation chips. Bitcoin miners. Supercomputers doing finite element analysis. Those are useful for very specific problems where you need a large number of semi-special-purpose compute elements. What this guy may be thinking of is some general product to do all that.

But he doesn't say much about how to do that.

ncmncm
He makes the same mistake that people always have in promoting what the ISO C and C++ Standards call a "free-standing implementation". That is supposed to be a build mode intended for use in embedded systems, where the program is itself the whole system, so it cannot rely on OS services.

This gets conflated, absurdly, with an inability to allocate, manage, and use heap memory. Billions of embedded devices reserve and manage heap memory without difficulty. Indeed, every OS kernel is running in just such an embedded environment. Managing memory for its own use and on behalf of user processes is among its chief activities.

The problem with defining a "free-standing" version of a Standard is that, in practice, real systems invariably need to use some of what is not specified to be part of the negotiated "free-standing" subset. There are at all times active proposals to add this or that extra library feature to the "free-standing" subset. Meanwhile, the language implementers have very little incentive to package any "free-standing" subset at all, because no single such subset exists that any substantial number of embedded users could actually use.

In practice, embedded-system developers use the regular toolchain, and just link a runtime support library that implements the things they need. So, for example, C++'s std::vector will, by default, call operator new(). But any particular use of it, in an embedded program, may specify a custom allocator, and the object code for the program then ends up with no calls to operator new(), and so links happily to a runtime library without one.

tialaramex
> There are at all times active proposals to add this or that extra library feature to the "free-standing" subset.

This will obviously be true in C++ but it's far from obvious that it's a sound prediction of the future in Rust.

Let's take a fairly simple thing we might expect to be able to do. We have two fixed size static arrays of data of the same fundamental type. We're writing a function that processes such data (maybe it's writing control bytes to an MMIO register) and sometimes we will want to process both of them, we can't splash out on more RAM to just concatenate both arrays into a temporary vector, that would blow our budget. Cutting the function up into parts is messy, surely we can do better?

In both C++ and Rust, this feels not so hard to achieve. Our function can just take an iterator, it iterates over the provided data and processes all of it, we use iterator adaptors to feed both arrays in when that's what we need. The optimiser should make this just as nice as if we'd meticulously hand-rolled it, but with fewer opportunities for mistakes.

And in Rust it really is that easy: the natural array type works as expected, it is IntoIterator, and the core::iter namespace provides a suitable adaptor (Chain) to connect two iterators together naturally.
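
A minimal no_std-friendly sketch of that pattern (`write_control_byte` is a made-up stand-in for the MMIO write):

    fn write_control_byte(_b: u8) {
        // imagine a volatile write to a control register here
    }

    fn process(bytes: impl Iterator<Item = u8>) {
        for b in bytes {
            write_control_byte(b);
        }
    }

    static FIRST: [u8; 4] = [0x10, 0x20, 0x30, 0x40];
    static SECOND: [u8; 2] = [0x50, 0x60];

    fn demo() {
        // One array on its own...
        process(FIRST.iter().copied());
        // ...or both, chained, with no temporary buffer and no extra RAM.
        process(FIRST.iter().copied().chain(SECOND.iter().copied()));
    }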

But in C++ we immediately run into problems. The built-in array type is ghastly and we can't very well use it. There's a usable array type but it's not free standing for some reason. Even once we have an array type, we need iterators which also aren't free standing. And then we quickly discover C++ iterators aren't powerful enough here and we need to resort to third party libraries anyway.

ncmncm
> "But in C++ we immediately run into problems."

Except, we don't. The built-in array type works fine. Anyway std::array doesn't need runtime-library support, iterators likewise. Nobody implements "freestanding", or would use it, so that's a non-issue. Libraries are fine. But std::ranges::join_view [1] seems to be what you are talking about?

(NGL, you read a bit desperate. Things OK?)

It is a mark of weakness, in a language, to need to implement in its core what, with better core features, could as well have been in the library instead. Finding stuff that lives in the C++ Standard Library in core Rust is not really a thing to brag on.

[1] https://en.cppreference.com/w/cpp/ranges/join_view

tialaramex
> The built-in array type works fine.

It does not seem to be a Container and doesn't remember its own size. It's a sad relic of C and for whatever reason C++ can't or won't fix it.

> Nobody implements "freestanding", or would use it

So, you've gone from everybody wants to add things to free-standing, to now, in C++ nobody implements it and it's unused? No wonder offering something better would be popular.

It is true that I'd forgotten if you have C++ 20 you can now make a view out of two iterators by adding yet another non-freestanding dependency for ranges instead of needing Boost.

> (NGL, you read a bit desperate. Things OK?)

I'm not "NGL" whoever you think that is. My legal name is a matter of public record and so it's always striking to me that people so often think guessing who I am is somehow a revelation that will make up for the weakness of their argument.

> It is a mark of weakness, in a language, to need to implement in its core what, with better core features, could as well have been in the library instead.

Rust's core library is, as its name would suggest, a library, just one which unlike std is free from tricky environmental dependencies.

But sure, it is a weakness that in C++ "volatile" gets to be a language level type qualifier whose semantics are poorly defined, whereas in Rust the things you actually wanted (a way to read or write MMIO that doesn't tear or get messed about by the compiler) are provided by an intrinsic in the core library. This weakness is apparently something people are trying to repair in C++ 23, decades after the mistake was made. Better late than never?
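
Concretely, those are core::ptr::read_volatile and core::ptr::write_volatile; a minimal sketch, with a made-up register address and bit layout:

    use core::ptr;

    unsafe fn poll_and_ack() {
        let status_reg = 0x4000_0000usize as *mut u32; // hypothetical MMIO address

        // Volatile accesses are never elided, merged, or torn by the compiler,
        // which is exactly what MMIO needs.
        let status = ptr::read_volatile(status_reg);
        if status & 0x1 != 0 {
            ptr::write_volatile(status_reg, status | 0x2);
        }
    }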

A bunch of things in core couldn't have been handrolled because they are "magic". For example the Drop and Copy traits do define functions so they aren't pure markers, but the compiler cares that it can see how to copy something which claims to implement Copy, and won't allow Drop if the type is also Copy since that's nonsense -- so you couldn't be allowed to define these traits yourself in the natural way. You also aren't (yet) allowed to go around implementing the Fn* traits yourself either because there is considerable wizardry involved.

However, core also contains numerous things that you could roll yourself, but shouldn't, up to and including iterators. For example, core::mem::drop() isn't magic, it really is just an empty function. You could write that yourself. It would do the same thing, but, core::mem::drop() exists so that everybody understands what you're doing when you call it. Likewise core re-exports all the primitive types so that you can definitely say core::primitive::bool to refer to the primitive type bool, even in code where you've gone completely crazy and re-defined the type name bool.
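
For reference, that entire definition in core is (attributes aside) literally:

    pub fn drop<T>(_x: T) {}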

ncmncm
(Not Gonna Lie, now you read as desperate and confused. I hope things will get better for you soon.)

"In the core language" means not in a library. Thus, "core library" is an oxymoron. So, having things like sum types and iterators implemented in the core language, not in the standard library, betrays language weakness. Is this concept difficult for you? Take your time.

I know you were already aware that volatile and native array types in C++ came to it from C. Nobody ever suggested C was a powerfully expressive language. So, only confusion could provoke criticism of volatile in C++. (You earlier expressed confusion about the need for C++ to maintain backward compatibility with C. Please do think it through.)

It might seem strange that nobody implements or uses the freestanding C++ subset, yet people nonetheless continually propose additions to it. Standardization is strange, and people have reasons they do not always tell.

tialaramex
It's certainly something that you feel the need to tell us when you aren't lying and perhaps "confused" is the right word, but at least you're not confusing me with somebody else.

Most oxymorons (including "oxymoron" itself in Latin) have this property in which the superficial appearance of contradiction is not the reality. A core library is a quite reasonable thing to offer, even if you don't seem quite clear whether C++ needs such a thing (and people are clamouring to add more to it) or doesn't (and so nobody implements or uses it)...

By their nature Rust's sum types are a language construct but specific sum types have to be implemented somewhere and Rust puts some very useful ones (such as Option and Result) that don't require environmental support in core.

Iterators are a tiny bit special because Rust's for loop is just syntactic sugar for iteration, the compiler knows core is providing the relevant stuff so it de-sugars a for loop by relying on IntoIterator, Iterator and Option implementations in core, if they somehow don't exist your for loops won't compile.
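
Roughly, the desugaring looks like this (simplified -- the real expansion also handles loop labels and drop order -- and note that IntoIterator, Iterator, and Option all live in core):

    fn sum(xs: [u32; 3]) -> u32 {
        let mut total = 0;
        // `for x in xs { total += x; }` expands to approximately:
        let mut iter = IntoIterator::into_iter(xs);
        loop {
            match Iterator::next(&mut iter) {
                Some(x) => {
                    total += x; // the loop body
                }
                None => break,
            }
        }
        total
    }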

Compatibility claims are a weak excuse. Once upon a time I would have expected better, but it's certainly possible the modern C++ committee will agree with you that fixing volatile isn't worth doing and vote down P1382.

[P1382 is the same as the Rust volatile intrinsics except as a C++ template, not so much because it's inspired by Rust, but because those are in fact the volatile semantics you need -- same as C++ and Rust having the same maximum value for a 32-bit unsigned integer]

ncmncm
Your offer to pay for identifying and patching the billions of lines of code you propose to break is certainly generous. But I look forward to seeing this list you have of Rust 1.0 features you also propose to break.

Oh, you didn't? And you haven't? My mistake.

tialaramex
The committee didn't take Vittorio's Epoch proposal for C++ 20, and instead almost certainly doomed it permanently. Perhaps the committee will pay?

Without Epochs, C++ doesn't have a way to do what Rust does all the time with "editions". I expect you'll see a HN topic about it later this month, but Rust 2021 will follow Rust 2018 in "breaking" several Rust 1.0 (aka Rust 2015) features, knowing this is fine because those features still exist (and work just fine) in previous editions.

When Rust 1.0 shipped, myArray.into_iter() was clearly referring to the reference, i.e. (&myArray).into_iter(), since (&myArray) implements IntoIterator whereas back then myArray did not. There will be code out there that does this, and of course we can't very well just break it because it's ugly.

So, even though today's arrays implement IntoIterator, myArray.into_iter() is still treated as (&myArray).into_iter() to keep that old code working as expected. Well, you get a warning these past few months, but that's still what the code does.

Until Rust 2021. In Rust 2021, arrays have "always" implemented IntoIterator, so myArray.into_iter() just does what you'd naturally expect it to.
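
Concretely (the array is made up; the behavior difference is the point):

    fn example() {
        let my_array = [1u8, 2, 3];
        // Editions 2015/2018: this resolves as (&my_array).into_iter(), so `item`
        // is a &u8 (with a lint warning on recent compilers).
        // Edition 2021: arrays are IntoIterator by value, so `item` is a u8.
        for item in my_array.into_iter() {
            let _ = item;
        }
    }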

This is possible with editions, because as I said, all the old code is still working under the edition it was written for even on shiny new compilers. If somebody wants to update that code (perhaps they want a feature from a newer edition), there is tooling to help them, but if not it'll compile as it is under the old edition indefinitely.

steveklabnik
Rust allows you to layer on “just the heap please” and yes, many projects do exactly that. The point is that it is clearly delineated: no_std has no system dependencies, and then you can layer things like alloc on top of that, up to and maybe finally including full standard library support. Rust allows projects to explicitly signal what stuff they need, which helps make the ecosystem interoperable, which is the point made in the talk.
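
A minimal sketch of that layering (the function is made up, and it assumes the final program provides a global allocator somewhere): a no_std crate that opts back in to heap allocation via the alloc crate, without pulling in the rest of std.

    #![no_std]

    extern crate alloc;

    use alloc::vec::Vec;

    /// Heap allocation is available via `alloc`; everything else in `std`
    /// (files, threads, networking) is not.
    pub fn nonzero_bytes(raw: &[u8]) -> Vec<u8> {
        raw.iter().copied().filter(|&b| b != 0).collect()
    }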

Incidentally, “every” OS does not have a heap. Many do, possibly even “most,” but that’s not required. It is extremely nice to be able to know what needs a heap and what does not.

ncmncm
I made no claim that every OS has a heap. Every OS does manage memory, and use memory.

It is not just nice, but essential, to know what needs a heap. And, every embedded coder does know. Or quickly learns.

steveklabnik
> Billions of embedded devices reserve and manage heap memory without difficulty. Indeed, every OS kernel is running in just such an embedded environment.

I guess I don't understand what you meant here, then.

tialaramex
> I made no claim that every OS has a heap.

> Billions of embedded devices reserve and manage heap memory without difficulty. Indeed, every OS kernel is running in just such an embedded environment.

You claim every OS kernel is "running in just such an embedded environment" and you describe that environment by saying it does "reserve and manage heap memory without difficulty". (My emphasis). Would you like to explain how this is different from the claim you now say you didn't make?

ncmncm
I invite you to read more carefully, and think for a moment. I am confident you will puzzle it out. Maybe ask Steve, he figured it out.
steveklabnik
I didn’t figure it out, it was just clear this conversation wasn’t going anywhere so I decided to drop it.
ncmncm
Smart. But I'm confident you would have.
h0h0z
Neither Go nor Java or dynamically typed languages. I also think it is hilarious he shits all over every language yet he himself spent the better part of a decade pushing javascript onto the server.

I wouldn't trust a thing that comes out of this SJW factory. Who put money into this company?

RobLach
What is a SJW factory?
gostsamo
Not OP, but most likely "social justice warrior".
RobLach
I guessed that but this is a talk about the history and democratization of hardware and how it relates to software...
gostsamo
Consider me surprised as well.
cle
I'm guessing in reference to this: https://www.scylladb.com/2021/01/17/an-open-letter-to-the-sc...
secondcoming
Interesting, this is the part of the ScyllaDB licence that Parler is accused of having violated:

> or (vii) use the Software or any part thereof in any unlawful, harmful or illegal manner.

'harmful'... is there a legal definition of that? I know adtech companies that use Scylla. Are they, or could they be, 'considered harmful'?

thekozmo
It's not the open source license which we can't forbid using, it's the enterprise version.
gostsamo
Didn't know about this case, thanks.

Interesting though why my comment is downvoted. For maybe accusing someone of using alt-right language, or for knowing about the existence of such language and what its abbreviations mean? People, I'm not even American, I don't fight in this war!

dundarious
I think the vast majority of people know what SJW stands for. I can't see the flagged comment, but asking what's an SJW factory is most likely a rhetorical joke, making fun of the flagged comment containing the phrase -- the factory aspect is unintentionally funny to me.
smoldesu
Simply Just Widgets
bcantrill
I have the same question! I guess this makes my kids SJWs?
scrubs
Ummm ... you cannot reason from H/W (the video spends 80% of its time recounting H/W manufacturing history, which is in the domain of natural sciences with formal models, e.g. physics, chemistry, statistical physics) to software. The first 80% of the video is not relevant.
replygirl
sorry but the kantian project is dead bro.
scrubs
OK, downvoted. The point of my comment was not that software cannot be aligned or co-located with hardware. As others pointed out, C/C++ is just as capable. My point was that spending the first 75% of the video recounting H/W advances, which are based in formal science (chem-eng and the rest), is orthogonal to software. Moore's law does not carry over into software. The video could have just started by pointing out that with H/W specialization there are more places to put Rust code. This way 99% of the video could have been about the OP's liking of Rust. Right? The top comment shares much the same sentiment: "Skip towards the end and Cantrill..."
wyldfire
Skip towards the end and Cantrill talks about:

* no_std

* composability of no_std crates

IMO the development-time packaging (cargo) means that as a developer I can have a deep and broad list of dependencies without having to orchestrate my development environment. This is Rust's killer feature, I agree. The fact that you can do it w/no_std is also very awesome.

Does anyone have more info about "Hubris" - the OS he refers to as under development?

steveklabnik
We will reveal more about Hubris at the conference he referenced at the end, which is happening at the end of November/start of December.

I am extremely excited.

filereaper
Really looking forward to Hubris and especially the design decisions made in building it. Like Bryan says, the why behind it all.

Cheers.

PeterCorless
We also had a great chat about Hubris in the Speaker's Lounge at the event. Bryan was definitely on fire about the topic!

The one thing I wanted to point out is that there's already a "namespace collision" when you search on "Rust" and "Hubris" — a language called Hubris written in Rust:

https://github.com/hubris-lang/hubris

zozbot234
IMHO we need to allow for local allocators as well as a bunch of other stuff (in-place constructors for pinned objects, ala C++) before we can claim to have true composability in a no_std environment. You see this stuff crop up all the time in "Rust kernel modules" discussions too, and there's a reason for that.
pas
For anyone who wants to know a bit more about this, this a recent talk about how to implement move constructors in Rust: https://www.youtube.com/watch?v=UrDhMWISR3w
mooman219
I agree that no_std is incredible. I really want to see more crates embrace it and keep their no_std logic separate from their std-only logic. I very often see crates that are like 95% of the way to no_std but then choose to bundle some std-only features without flagging them.

I wrote fontdue [0] (which is very incomplete spec-wise) because there just wasn't another font library that was no_std at that time. It felt like the existing libraries were in an arms race for gpu caches and bundling file loading. Like, if I wanted to commit to running on a platform, I'd do the sane thing and use harfbuzz or the platform APIs.

It's very naive because I don't understand all of the complexity, but I'd really like to see the standard library be easier to implement piecewise from crates. Like a standard trait library and a standard implementation library for those traits.

[0] https://github.com/mooman219/fontdue

pas
> Like a standard trait library and a standard implementation library for those traits.

This sounds like a very good idea. (The thinness of the std Future API paved the way for multiple async runtimes. And sowed confusion! *evil laugh* But seriously, having a high-throughput-optimized runtime like Tokio and a size-optimized one like smol is a good thing.) But it's a very much non-trivial amount of work. Has this been discussed on internals?

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.