HN Theater @HNTheaterMonth

The best talks and videos of Hacker News.

Hacker News Comments on
Rust and the Future of Systems Programming

hacks.mozilla.org · 475 HN points · 0 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention hacks.mozilla.org's video "Rust and the Future of Systems Programming".
Watch on hacks.mozilla.org
hacks.mozilla.org Summary
If you’re a regular reader of Hacks, you probably know about Rust, the ground-breaking, community-driven systems programming language sponsored by Mozilla. I covered Rust on Hacks back in July, to ...
Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Nov 16, 2016 · 475 points, 486 comments · submitted by philbo
wyldfire
> GC pause ... sufficiently low power hw .. cheap phone

Yeah, but even high-powered hardware can take a "major" hit from a GC pause when your application is extremely latency sensitive.

IMO it would be great to get folks who write the enormous base of existing realtime apps driving critical devices everywhere to sit up and take notice of Rust.

EDIT: I mean to say that many of my colleagues who write realtime software dismiss new languages as including GC baggage by default (because so many do!). So, hey, good that the video calls this out.

vvanders
It's starting to go that way, but that's a very hard space to push new things into. I talk with all my ex-gamedev contacts and they're hesitant to even use lambdas or other C++11 features that have been around for a while now.

I think Mozilla's plan of driving things forward with Servo and using that as a large-scale example of the gains that can be made is a good approach.

naasking
> IMO it would be great to get folks who write the enormous base of existing realtime apps driving critical devices everywhere to sit up and take notice of Rust.

Rust cannot make any latency guarantees either. Reference counting and its lifetimes also have pathological cases, ie. worst-case, an object can reference the entire heap which will take time proportional to the number of dead objects to free.

Copying collection in this case takes literally zero time, but its pathological case is when all referenced data survives the current GC cycle, i.e. proportional to live objects.

There's no free lunch!

akiselev
You will always have these pathological cases when you choose to use higher level memory management like simple reference counting or garbage collection no matter what language you use, whether it's Rust or assembler. The point of Rust is that you have complete control over what you use and pay for. If your concern is the overhead of lifetimes then you need to evaluate if you can afford heap allocation in the first place. Otherwise you can make de/allocation explicit in Rust just like in C, without losing the benefits of ownership checking.

Embedded hardware and software can only provide realtime guarantees because they are simpler, without complex pipelines, caches, branch predictors, or thread schedulers. If you want low latency embedded software you have to document the pathological cases, test whether they happen in real world use, and profile the code with each microarchitecture you're targeting anyway, let alone every product family. What language you use doesn't change that.

naasking
> You will always have these pathological cases when you choose to use higher level memory management like simple reference counting or garbage collection no matter what language you use

Not true, soft and hard realtime garbage collectors exist. Your runtime simply needs to bound the amount of reclamation work done at any given time.

For instance, the cascading free behaviour Rust is currently susceptible to can be broken up into a bounded series of free operations interleaved with ordinary program execution. Rust would then be realtime without truly changing its observable behaviour, except its timing in some programs.

akiselev
>> Not true, soft and hard realtime garbage collectors exist. Your runtime simply needs to bound the amount of reclamation work done at any given time.

That doesn't change anything! You're just choosing a garbage collector with a default deterministic pathological case, which is a guarantee you can make about almost any GC by carefully tailoring your memory usage to your scenario and choice of algorithm. That's all realtiem embedded software development is all about: writing code that has predictable timing given your expected inputs and environment. If all you need to do is flip a bit once every 10 minutes with a precision of 1 second while reading 1 bps from a sensor even a full blown Linux distribution on a modern Intel i7 running a Python or Ruby daemon can be considered "realtime". The language doesn't matter as long as you can predict how long everything is going to take in the worst case and your micro[controller/processor] is fast enough to react.

>> For instance, the cascading free behaviour Rust is currently susceptible to can be broken up into a bounded series of free operations interleaved with ordinary program execution. Rust would then be realtime without truly changing its observable behaviour, except its timing in some programs.

You know that's what the Drop trait is for, right? All you have to do is add whatever memory management code you'd have (in your C program) into the trait implementation and your memory deallocation will behave exactly as it would in any other low level language. These low level facilities have been part of the Rust design from the start, they just don't require you to manually call free() by default. That doesn't mean anything in Rust is stopping you from doing so and if you want to, you can opt out of that behavior entirely by providing a blank Drop implementation. After that, literally anything you can do in C you can also do in a Rust unsafe block.
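As a minimal sketch of the Drop mechanics discussed here (illustration only, not code from the thread): the `Drop` trait gives C-style deterministic destruction, while `std::mem::forget` is the standard way to skip a destructor entirely. Note that a blank `Drop` impl alone would still drop the struct's fields; `mem::forget` (or `ManuallyDrop`) is what actually opts out.

```rust
use std::mem;
use std::sync::atomic::{AtomicU32, Ordering};

// Counts how many Handle destructors have actually run.
static DROPPED: AtomicU32 = AtomicU32::new(0);

struct Handle {
    _id: u32,
}

impl Drop for Handle {
    fn drop(&mut self) {
        // Runs deterministically at scope exit, like an explicit free()/close() in C.
        DROPPED.fetch_add(1, Ordering::SeqCst);
    }
}

fn demo() -> u32 {
    let a = Handle { _id: 1 };
    let b = Handle { _id: 2 };
    mem::forget(b); // opt out: b's destructor never runs
    drop(a);        // explicit, early destruction of a
    DROPPED.load(Ordering::SeqCst)
}
```

Calling `demo()` reports exactly one destructor run: `a`'s, at the explicit `drop`, and never `b`'s.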

naasking
> That doesn't change anything! You're just choosing a garbage collector with a default deterministic pathological case, which is a guarantee you can make about almost any GC by carefully tailoring your memory usage to your scenario and choice of algorithm.

The fact that you don't have to tailor anything is precisely the point. Latency is a property of a runtime, not a language. This has been my point all along. C/C++ or Rust don't guarantee low-latency realtime properties, and introducing tracing GC doesn't guarantee high-latency non-realtime properties.

> You know that's what the Drop trait is for, right? All you have to do is add whatever memory management code you'd have (in your C program) into the trait implementation and your memory deallocation will behave exactly as it would in any other low level language.

Great, but it doesn't guarantee any properties of code you haven't written, so it still can't achieve the global properties I've been talking about.

akiselev
> C/C++ or Rust don't guarantee low-latency realtime properties, and introducing tracing GC doesn't guarantee high-latency non-realtime properties.

We completely agree.

> Great, but it doesn't guarantee any properties of code you haven't written, so it still can't achieve the global properties I've been talking about.

How is this any different from C/C++? They do not give you any guarantees that Rust takes away in this regard. Any library that uses Box::new or vec! is exactly the same as a C library that calls malloc/free internally and you can implement the same heap allocation free algorithms in Rust as you can in C/C++.

I don't understand what global properties you expect a low level systems language to guarantee. They definitely can't guarantee that code you haven't written doesn't heap allocate, you have to check that they don't call malloc/free yourself.

Matthias247
> Your runtime simply needs to bound the amount of reclamation work done at any given time.

Wouldn't this transform the problem into a "no more predictable maximum memory usage" problem? As you can't really know if and when your GC will keep up with the amount of work to do.

naasking
Possibly, but maximum memory usage is rarely predictable anyway. I expect it might be even less predictable than maximum latency.

However, it may still be possible to conservatively bound your maximum memory usage too: as long as your reclamation-work phase keeps up with your program's allocation rate, you achieve a steady state.

Suppose some amount of reclamation is done on malloc(), a tunable parameter could measure the ratio of allocation speed of the running program and amount of unreclaimed garbage. This ratio would control how much reclamation work to do before returning from malloc() so you can fall into steady-state.
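The bounded-reclamation idea above can be sketched as follows (names and structure invented for illustration): instead of freeing immediately, retired values are parked in a queue and at most a fixed budget of them is dropped per step, capping the worst-case pause of any single call.

```rust
use std::collections::VecDeque;

/// Hypothetical deferred-free queue: `retire` defers the drop,
/// `tick` performs at most `budget` actual deallocations.
struct DeferredFree<T> {
    pending: VecDeque<T>,
}

impl<T> DeferredFree<T> {
    fn new() -> Self {
        DeferredFree { pending: VecDeque::new() }
    }

    /// Defer the value's destruction instead of dropping it now.
    fn retire(&mut self, value: T) {
        self.pending.push_back(value);
    }

    /// Drop at most `budget` retired values; returns how many were freed.
    /// A tunable budget is what bounds the per-call reclamation latency.
    fn tick(&mut self, budget: usize) -> usize {
        let n = budget.min(self.pending.len());
        for _ in 0..n {
            drop(self.pending.pop_front()); // the actual deallocation happens here
        }
        n
    }
}
```

A runtime pursuing the steady-state scheme described above would choose `budget` from the measured allocation rate; this sketch leaves that policy to the caller.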

lmm
> Possibly, but maximum memory usage is rarely predictable anyway. I expect it might be even less predictable than maximum latency.

Well, if you don't need a bound on memory usage you can just never deallocate.

> Suppose some amount of reclamation is done on malloc(), a tunable parameter could measure the ratio of allocation speed of the running program and amount of unreclaimed garbage. This ratio would control how much reclamation work to do before returning from malloc() so you can fall into steady-state.

Sure, but that doesn't guarantee anything about what your maximum spikes are going to be. You can have a firm bound on memory consumption or a firm bound on latency, but you can't get both without doing some serious application-specific work.

Manishearth
> For instance, the cascading free behaviour Rust is currently susceptible to can be broken up into a bounded series of free operations interleaved with ordinary program execution.

You can probably make this work by plugging in a different allocator, if jemalloc doesn't do this already. The ability to batch up frees and mallocs isn't tied to GCs.

This won't reduce the perf impact of running a large tree of `Drop` impls, but it will reduce the free calls.

naasking
> You can probably make this work by plugging in a different allocator, if jemalloc doesn't do this already. The ability to batch up frees and mallocs isn't tied to GCs.

That gets tricky, because Rust people no doubt expect deterministic destruction on scope exit. But yes, my ultimate point is that low latency is a property of a runtime, not a language. C/C++ or Rust aren't going to automatically give you bounded latency, and adding tracing GC doesn't automatically take it away.

Manishearth
> Rust people no doubt expect deterministic destruction on scope exit.

Deterministic destruction, but not deterministic deallocation :)

naasking
But this expectation is transitive. If you have an array of file handles, if you defer deallocating some of them but destruct them all upfront, you still have the latency issue we've been discussing. And if you defer destructing too, then you still have non-deterministic destruction and deallocation. I'm not sure there's a way around this tradeoff.

Manishearth
Right, I already mentioned this a couple of comments ago.

You can use various arena-like structures where you explicitly forgo the guarantee of deterministic drop for this.

pcwalton
I keep seeing this latency claim about GC, but it would be trivial to solve with free if it were actually a problem: just add freed objects to a list and incrementally free over time to achieve whatever latency guarantees you wish.

The reason why no malloc/free implementations that I'm aware of actually do this is that the latency of freeing isn't a problem in practice.

naasking
> The reason why no malloc/free implementations that I'm aware of actually do this is that the latency of freeing isn't a problem in practice.

Partly, and the other part is that it degrades allocation performance for the majority of non-problematic programs, which is what most people actually focus on.

But if we're being fair, latency of tracing GC isn't a problem for most programs either. So latency is largely a red herring, except when it's not, and you had better know when it's not, regardless of whether you're using C/C++/Rust or a runtime with tracing GC.

dikaiosune
It's worth mentioning that there are several strategies for avoiding cascading deallocations like arenas or arena-backed graph abstractions. For example:

https://crates.io/crates/typed-arena https://crates.io/crates/petgraph

Rust's generics make this fairly pleasant to work with, and lifetimes/borrowck ensure safety when managing your own object allocations.
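A minimal sketch of the arena idea (index-based, self-contained; not the typed-arena or petgraph API): nodes live in one `Vec` and refer to each other by index, so dropping the whole graph is a handful of deallocations rather than a cascade of per-node frees.

```rust
/// Toy index-based arena: all nodes are owned by one Vec, and "pointers"
/// between nodes are plain indices, so there is no ownership cascade.
struct Arena<T> {
    nodes: Vec<Node<T>>,
}

struct Node<T> {
    value: T,
    children: Vec<usize>, // indices into the arena, not owned pointers
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { nodes: Vec::new() }
    }

    /// Allocate a node and return its index (the arena keeps ownership).
    fn alloc(&mut self, value: T) -> usize {
        self.nodes.push(Node { value, children: Vec::new() });
        self.nodes.len() - 1
    }

    fn add_child(&mut self, parent: usize, child: usize) {
        self.nodes[parent].children.push(child);
    }

    fn value(&self, id: usize) -> &T {
        &self.nodes[id].value
    }

    fn children(&self, id: usize) -> &[usize] {
        &self.nodes[id].children
    }
}
```

Dropping the `Arena` frees everything at once, which is exactly the "objects outlive their conceptual lifetime" trade-off discussed in the replies below.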

naasking
Indeed, if you can live with the wasted memory use of objects outliving their conceptual lifetime, regions/arenas are a good solution.

Note however that you'd probably still have to run destructors when destroying an arena (to free file handles for instance), so you can still see high latency. With an arena you can perhaps schedule this better though.

dikaiosune
> if you can live with the wasted memory use of objects outliving their conceptual lifetime, regions/arenas are a good solution.

If you can live with the wasted memory use of objects outliving their conceptual lifetime, garbage collectors can be a good solution too.

Not that that's a bad thing for many use cases, but your above comment implies a comparison between Rust and GC. I think the quoted critique falls down a bit when Rust lets you opt-in to generational-esque GC-ish behavior with a very similar downside to what you'd get from a GC.

Rusky
Lifetimes are compile-time only and do not do any reference counting. So Rust has the same latency guarantees as C, for example.
None
None
naasking
> Lifetimes are compile-time only and do not do any reference counting.

I never said they did, I said lifetimes and reference counting both have this pathological case.

C also doesn't provide latency guarantees, as the same pathological programs can exist in C as well. It's a total myth that you need C in realtime domains due to "latency".

Maximum pause times are a property of a particular runtime, not a language.

steveklabnik
What is the pathological case with lifetimes? You're saying it takes "time proportional to the number of dead objects to free", but as the parent said, lifetimes are a compile-time construct, so they have no runtime properties.

(I'm not saying that for sure there are none, I'm saying that it seems like you're talking about refcounting only, the lifetime bit is unclear to me.)

Rusky
I suspect they're referring to graphs of Drop implementors, based on the sibling thread. If you for some reason have a linked-sea-of-nodes data structure that has to traverse itself on drop, that can behave similarly to dropping an Rc graph, though it still doesn't use lifetimes.

steveklabnik
I guess that would make sense, but I'm not sure it's lifetime-specific. C++ doesn't have lifetimes but would still have this problem.

naasking
Yes, C/C++ would also have this problem. The point I was trying to make is that incrementality/latency is a property of a runtime. If your program has deep ownership graphs, any kind of naive reclamation procedure is going to have high latency, even if it's written in C/C++.

steveklabnik
Quite fair! Thanks for elaborating.

webkike
Regardless, there is no overhead for lifetimes. It is a data flow analysis problem used solely to verify correctness of the code.

wtetzner
I think they're referring to the calls to "free" that are automatically injected.

worik
"C also doesn't provide latency guarantees, as the same pathological programs can exist in C as well."

But you have to code them. They are predictable, or you let them in by allowing data structures to grow indefinitely

With C you can make latency guarantees. You can write code that does not have such guarantees but it is your choice

With GC you are not in control so there are fewer choices. You will not be able to make guarantees.

naasking
> With GC you are not in control so there are fewer choices. You will not be able to make guarantees.

Not true. Hard and soft realtime GCs with sub-microsecond latencies exist. Latency is a property of a runtime, not of manual vs. automatic storage reclamation.

worik
That is a misunderstanding of latency.

"Very fast" is still latency.

naasking
"No latency" is a fiction. The only question of any relevance is how much latency is tolerable for a given domain. And describing latency in worst-case timings is standard, so I understand latency just fine thanks.
qwertyuiop924
Rust itself can't, but you as the programmer can.

It's not quite C, but you can do a lot of reasoning about what the compiler will do with your code, and avoid pathological cases.

naasking
Absolutely, you have to be aware of the ownership graph depth in both C and Rust if you want to bound latency.

You don't have to do this with tracing GC though, you just need a runtime that implements latency bounds.

foota
I believe their point is that an object can own potentially many objects, and when it is dropped it could cause a cascade of dropping, which may not be expected.
kazagistar
Well, the good news for you then is that there is progress underway to implement Gc-as-a-library in Rust, to give you this option if you need it as well.
qwertyuiop924
Wait, why would you have to do that? Most ownership is deterministically resolved at compile time, so you can know exactly when a resource will be freed. What you do have to know about is the rare refcounted variable, and what edge cases require ownership checking at runtime.
naasking
The latency problem isn't caused by determining what to free, the latency problem is caused by actually freeing. Imagine an array with 2^31 pointers, and now fill it with 2^31 distinct pointers to the remainder of the 32-bit address space. When that array goes out of scope, you can now enjoy 2^31 individual free operations, because reclamation for Rust lifetimes and reference counting are both proportional to the number of dead objects (copying collection takes zero time in this case).

If bounded latency is a goal, you have to bound the depth of your ownership graph if you're working on a platform that doesn't impose global latency properties. C/C++ and Rust do not do this.
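The cascading-free hazard can be made concrete with a small sketch (illustration only): dropping the head of a `Box`-linked list naively drops every node recursively, with recursion depth proportional to list length, which can overflow the stack for large lists. A manual iterative `Drop` turns the cascade into a loop, and gives a natural place to cap work per step if bounded latency is wanted.

```rust
/// A singly linked list of owned boxes.
struct List {
    head: Option<Box<Node>>,
}

struct Node {
    _payload: u64,
    next: Option<Box<Node>>,
}

impl Drop for List {
    fn drop(&mut self) {
        // Detach nodes one at a time so no recursive drop chain builds up;
        // each loop iteration frees exactly one node.
        let mut cur = self.head.take();
        while let Some(mut node) = cur {
            cur = node.next.take();
        }
    }
}

/// Build a list of `n` nodes.
fn build(n: u64) -> List {
    let mut head = None;
    for i in 0..n {
        head = Some(Box::new(Node { _payload: i, next: head }));
    }
    List { head }
}

/// Count the nodes without consuming the list.
fn len(list: &List) -> u64 {
    let mut n = 0;
    let mut cur = &list.head;
    while let Some(node) = cur {
        n += 1;
        cur = &node.next;
    }
    n
}
```

The total work to free the list is still proportional to its length, as the comment above argues; the iterative `Drop` only changes where and how that work is paid.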

qwertyuiop924
Ah.

But C has this problem as well. If you've malloc(3)ed an array of 2^31 pointers, each pointing to an object, enjoy your 2^31 free(3)s, or prepare to start leaking RAM.

So what's your point?

naasking
Yes, C/C++ and Rust have the same problems, as I said elsewhere. My ultimate point is that low latency is a property of a runtime, not a language. Using C/C++ or Rust aren't going to automatically give you bounded latency, and adding tracing GC doesn't automatically take it away.

qwertyuiop924
Oh.

Well then actually, we're in complete agreement. Sorry.

solidsnack9000
The language and runtime typically cooperate to provide the operational semantics that developers are looking for. The case of Objective-C is especially interesting in this regard: developers evolved a number of conventions around reference counting, because reference counting allowed for controllable, minimal latency and contributed to a snappy UI. The language gradually absorbed these conventions into the compiler, such that certain patterns of use are part of the language specification (certain method names, basically) and the ARC code is generated for developers.
pcwalton
This is only true for pure two-space copying collectors, which are rarely used in practice because of the absurd memory overhead. Once you introduce mark/sweep for some portion of the heap (like production GCs do), you reintroduce overhead proportional to the number of dead objects during the sweep phase.
solidsnack9000
The latency of your Rust or C program is "static" in that you can infer it from the program text. This is not actually true of most garbage collected languages. (Erlang, with per thread heaps, is a notable exception.)
Manishearth
> Rust cannot make any latency guarantees either. Reference counting and its lifetimes also have pathological cases, ie. worst-case, an object can reference the entire heap which will take time proportional to the number of dead objects to free.

Rust doesn't use reference counting by default. Refcounting is very rare in Rust, much more rare than it is in C++. Most large C++ codebases I've worked with have thrown in the towel and started refcounting all the things. In Servo, for example, most of the refcounting is across threads (where you basically have no other option), and a few interesting cases in the DOM, each with very good reasons for using refcounting.

Lifetimes are a concept at compile time and don't exist at runtime.

Edit: Oh, I see what you're talking about. A sufficiently large owned tree/graph in Rust will introduce latency. It's predictable latency though. I can make the same argument about for loops.

Unpredictably sized large trees in Rust are again pretty rare in general.

Animats
Trees don't have to be refcounted in Rust. Single-ownership trees are possible. As long as they don't have backpointers. Backpointers are a problem under single ownership.
Manishearth
Right, I never said that trees have to be refcounted. A sufficiently large ownership tree will get deallocated all at once, which is the kind of latency the GP is talking about.
Animats
Linked data structures in Rust get complicated, though. See the "Too many lists" book.[1] Doubly linked lists, or trees with backlinks, are especially difficult. Either you have to use refcounts, or the forward pointer and backward pointer need to be updated as an unsafe unit operation. There might be an elegant way to do this with swapping, but I'm not sure yet.

[1] http://cglab.ca/~abeinges/blah/too-many-lists/book/
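The refcounted route the comment mentions can be sketched safely (a minimal illustration, not the book's code): strong `Rc` forward pointers and `Weak` back pointers, so the back-reference doesn't create a leaking cycle.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

/// A doubly linked node: `next` owns (strong Rc), `prev` observes (Weak),
/// so the forward/backward pair never forms a strong reference cycle.
struct Node {
    value: i32,
    next: RefCell<Option<Rc<Node>>>,
    prev: RefCell<Weak<Node>>,
}

fn new_node(value: i32) -> Rc<Node> {
    Rc::new(Node {
        value,
        next: RefCell::new(None),
        prev: RefCell::new(Weak::new()),
    })
}

/// Link `a -> b` forward and `b -> a` backward.
fn link(a: &Rc<Node>, b: &Rc<Node>) {
    *a.next.borrow_mut() = Some(Rc::clone(b));
    *b.prev.borrow_mut() = Rc::downgrade(a);
}
```

The `unsafe` alternative (raw prev/next pointers behind a safe API, as in the stdlib's `LinkedList`) avoids the refcount overhead, which is the trade-off being debated here.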

Manishearth
Right, so you implement them with unsafe. While you can implement doubly linked lists safely with refcounting, you're perfectly free to implement them with unsafe code. This is what unsafe code is for, designing low level abstractions with clean API boundaries.

(Also I don't see how this is relevant at all)

Animats
> Right, so you implement them with unsafe.

If you need unsafe code for basic operations within the language, something is wrong with the language. This isn't about talking to hardware, or an external library. It's pure Rust code.

(Some pointer manipulations can be built from swap as a basic operation. That may work for doubly-linked lists. The other big problem is partially valid arrays, such as vectors with extra space reserved. There's no way to talk about that concept within the language. There could be, but this isn't the place to discuss it.)

Manishearth
I have been writing Rust code for almost three years now.

I have helped design a low level data structure exactly twice. In both cases, this was a highly custom concurrent data structure, which would have been even harder to get right in C++ or some other language.

If you need a regular run-of-the-mill datastructure it will exist in the stdlib or crates ecosystem. This is not a "basic task". Just because schools teach it early does not make it a "basic task". It's a task that needs to be done at some point, but doing it once and making it part of the stdlib or a crate is all that is necessary. It has become a "basic task" in C++ because it's easy enough to do that you don't need to reach for the stdlib, but that doesn't mean that it's necessary to have a bespoke implementation of a DLL that often in C++; usually the stdlib one will do.

The same "too many lists" book you linked to explains why DLLs are niche datastructures on the first page (singly linked lists can be implemented safely in Rust, though they can be somewhat niche too).

kazagistar
> need unsafe code for basic operations within the language

Building custom back-referencing data structures is not a "basic operation" anywhere outside programming classes. Adding significant complexity to Rust to make a 2% case marginally safer would make the language worse. As long as the vast majority of code is not unsafe, then it achieves its goal.

pron
> even high-powered hardware can take a "major" hit from a GC pause when your application is extremely latency sensitive.

That's true, but high-powered, abundant-RAM realtime applications can use approaches that are cheaper than Rust's. See, e.g., the interesting work currently being done updating realtime Java[1]. The idea is that memory is composed of a few kinds of lifetimes: eternal, scoped and GCed-heap. Scoped memory is basically nested arenas, and GCed heap contains objects that are used by non-realtime portions of the app (which, even in realtime systems, may comprise the majority of code, especially when the system runs on large servers).

An approach like Rust's, however, is crucial when the application is RAM and/or energy constrained.

[1]: https://www.aicas.com/cms/en/rtsj

hinkley
If I had infinite free time, I'd love to explore the problem space of implementing interpreters for GC-based languages on top of Rust. It's quite hard to get the concurrency right, and indeed we see a number of major languages that gave up on even trying.

pron
What for? If you have the extra RAM and power for a GC, you don't need Rust for safety. HotSpot's next-gen (JIT) compiler is written in Java and is absolutely amazing.

hinkley
What other programming languages do you know of that have an 'absolutely amazing' GC implementation? Wouldn't you like that answer to be 'lots'?

The Java team has worked a lot longer and a lot harder on this problem than pretty much everyone else, and even they hit a wall at 1GB. One that took a dreadfully long time to overcome (so long, in fact, that it contributed to me being an ex-Java developer).

pron
Java has quite a few very good GCs, some in OpenJDK, some by Oracle, and one by Azul. Quite a few of them don't have a 1GB wall. They will become even better when Java finally gets value types and the GC won't have to work hard to do stuff it doesn't have to (this is why Go has decent GC performance even though its GC isn't very sophisticated). In any event, I don't see how Rust can make the work any easier. Coming up with that algorithm is 98% of the job.

Lots of languages do have good GC because they run on the JVM. OTOH, we don't know how hard it is to write similar kinds of applications in Rust. Good things take a while to get right -- Java has taken a while, and Rust has, too. It will be a while yet until Java has a GC that everyone likes, and it will be a while yet until Rust is fully fleshed out and its strengths and weaknesses understood. My personal opinion is that the two approaches are complementary, each being superior in a different domain.

pron
s/Oracle/IBM
takeda
Lack of extra RAM and power to run GC is not the problem. The problem is that GC makes code behavior not predictable.

nickpsecurity
Real-time GC's exist. Look up Aonix's Java stuff for what embedded or predictable apps do. Or JamaicaVM below. For enterprise, Azul has some amazing GC tech plus Java CPU's (Vega's).

http://www.ptc.com/developer-tools/perc

https://www.aicas.com/cms/en/JamaicaVM

Matthias247
However, that's not something that is automatically solved by manual memory management. Using malloc/free on a desktop OS also does not provide predictable runtime behavior, although unexpected pauses might be smaller than with most GCs.

The safest bet for predictable memory management and latency is the approach that is used by lots of embedded and realtime software: Don't allocate at all. Or at least don't do it in critical phases.

dbaupp
I think this is the point that is obscured when discussing "manual memory management" vs "GC" languages and just focusing on the behaviour/life-cycle of an individual allocation: the former generally provide tools and features that make it easier/more natural to avoid allocations, whereas the latter make the assumption that allocation is usually OK (which is, of course, a perfectly acceptable trade-off for the domains those languages target).
naasking
That's not true: some GC algorithms make code behaviour unpredictable, but real-time tracing GCs exist. Reference counting is GC too, and it too is unpredictable.

pron
That's not true in general. I've used realtime Java in a safety critical hard realtime application (running on a large server), with strong deadline guarantees (we're talking microsecond range). If you have the power and RAM to spare, the predictability issue is more cheaply solved with the approach I mentioned above (by cheaply I mean in terms of development costs; it is more costly than "plain" GC in terms of effort, but still cheaper than the Rust approach).

The only real cost of GC these days is RAM (and power).

nickpsecurity
Which VM did you use for that out of curiosity?

pron
Sun's Java Real-Time System. It is no longer supported, AFAIK, and I don't know which RT JVM the project has switched to because I'm no longer there (my guess would be IBM's).
geodel
Will that next-gen JIT be enabled by default for Java 9 or 10?

hyperpape
Not by default in 9, but you will be able to run it with a stock version of Hotspot.

deathanatos
> If you have the extra RAM and power for a GC, you don't need Rust for safety.

It isn't just about memory; Rust's safety guarantees in combination with RAII also mean that other resources such as mutex locks, open files, etc. also get closed in a deterministic fashion. (I'd argue that this is quite important for locks, but I've run into hard-to-debug bugs b/c files weren't being closed out until a GC got to them.)

The way I've always viewed it is that RAII is general to all resources; GC only solves memory.

(I'm assuming the comment you're responding to is discussing getting a concurrent GC to work quite right, which isn't fully relevant w/ my reply; but I do think there is more to Rust's safety than just the memory management, which is what I got from your reply. I'd also argue that memory, in particular, is not abundant, both on mobile devices, but also out in the cloud, where it translates directly into cost both from more expensive VM instances, and from me needing to continually tweak the GC's params.)
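The "RAII is general to all resources" point can be shown with a small sketch (illustration only): `Mutex::lock` returns a guard, and the lock is released exactly when the guard goes out of scope, not at some future GC or finalizer pass.

```rust
use std::sync::Mutex;

/// Increment a shared counter under a lock and return the new value.
fn increment(counter: &Mutex<i32>) -> i32 {
    let mut guard = counter.lock().unwrap(); // lock acquired here
    *guard += 1;
    *guard
} // guard dropped here: lock released deterministically at scope exit
```

The same pattern covers files (`File` closes on drop), sockets, and any other resource wrapped in a type with a `Drop` impl, which is what memory-only GC does not give you.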

hinkley
I was in fact thinking of getting concurrent GC to work right. [edit] but also concurrency in general. Global Interpreter Locks when even my laptop has 8 cores?

I also agree that the free memory lunch is going to be over for a while. Java in particular is going to lose out in the container space. I don't think it's an accident that they've suddenly begun taking memory footprint very seriously. They have to.

pron
I was referring to hinkley's idea for using Rust to write GCed VMs. As to other safety features, those are easily added to cheaper GCed languages. Memory is the hard bit, and if you can afford a GC, it is usually cheaper to just use one. As to RAM being costly, I think RAM is one of the few things that is getting very cheap relative to other resources, and GCs require less and less tuning; working hard to avoid a GC when you can afford one seems to me like the mother of all premature optimizations. But I see no point in debating the issue too much. Every company would make its own consideration about which approach is cheaper.

In any event, there are certainly very important use cases that simply cannot afford the power and RAM overhead required by a GC (again -- latency is not an issue; if you have the resources, there are cheap ways of getting extremely low latencies without doing away with a GC) and those use cases would benefit tremendously from a safe language.

shandor
> IMO it would be great to get folks who write the enormous base of existing realtime apps driving critical devices everywhere to sit up and take notice of Rust.

It would, and definitely should, move in the direction of safer languages than C.

The biggest problem I see is tooling and legacy. Tooling, because there's a ginormous amount of testing and design software that "works with C" (whatever that means in the context of the tool). Legacy, because everyone already has their 20 year old codebases and it's just not convenient to start focusing on two languages and switching the old code to Rust is just plain impossible economically.

A third problem is the compiler. LLVM (rustc is a frontend to it, no?) is a really good choice, but gcc has an enormous advantage in supporting so many small platforms, which is very significant in this area of SW dev.

On the plus side, I've really gotten the impression that the Rust folks are truly trying to make adoption as smooth as possible. If that works, and Rust proves to be much better than C for these kinds of systems, I'd guess adopting Rust rather than not would start looking economically viable to companies. I mean, in the end that's what matters to them the most, and it's not easy to replace all your C ninjas with competent Rust writers.

duneroadrunner
If you're considering switching to Rust for code/memory safety reasons, SaferCPlusPlus[1] may be an easier/cheaper/low risk option. It allows you to add memory safety to your existing code base in a completely incremental way, with no dependency risk. (At the moment, standard library support is required though.)

[1] https://github.com/duneroadrunner/SaferCPlusPlus

pcwalton
"Safer C++", like all C++ template libraries, is not memory safe.
duneroadrunner
Are you confusing SaferCPlusPlus with a different library? SaferCPlusPlus is a new library that makes it practical to stick to a memory safe subset of C++ (i.e. no native pointers, no native arrays, no std::array<>, no std::vector<>, etc.).

Using the SaferCPlusPlus library to replace all uses of C++'s unsafe elements does result in code that is as memory safe as Rust, or any other modern language. The main shortcoming at the moment is that it doesn't yet provide memory safe replacements for all of the standard library's unsafe elements, just the most commonly used ones.

leshow
Correct me if I'm wrong, but it looks like this just provides some 'safe' alternatives to unsafe C++ things. It's still up to the diligence of the programmer to not use those things and nothing is getting statically verified.

By contrast, when I write Rust, memory safety (and type safety) are verified by the compiler.
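A minimal Rust sketch of that point (nothing project-specific assumed): the borrow checker statically rejects the mutate-while-borrowed pattern, while the corrected ordering compiles and runs.

```rust
fn main() {
    let mut v = vec![0, 1, 2, 3];
    let x = &v[0]; // shared borrow of `v`
    // Uncommenting the next line is a compile error (E0502): `v` cannot be
    // borrowed mutably while the shared borrow `x` is still used below.
    // v.clear();
    assert_eq!(*x, 0);
    v.clear(); // fine here: the borrow `x` ended at its last use above
    assert!(v.is_empty());
}
```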

braveo
Sometimes I think Rust people lose the forest for the trees. The end goal isn't for the compiler to verify the safety, the end goal is for the software itself to be safe in a way that's cheaper.

It doesn't really matter if they both end up at the same place, which is safe software.

MaulingMonkey
> It doesn't really matter if they both end up at the same place, which is safe software.

I think the contention is that, unless you're applying NASA style rigor, you don't end up in the same place without verifying the safety automatically, because in practice it's too expensive to verify the safety manually (without getting squeezed out of the space by your competitors.)

SaferCPlusPlus's goals are noble, but approaching the problem with a library-only solution is problematic. None of the huge swaths of legacy and third party code I'd like to sanitize uses it - and a large scale rewrite to 'fix' that may very well introduce more bugs than it fixes. A library cannot 'fix' fundamental language constructs either, short of telling you to please remember to perfectly avoid those language constructs even if you're very very used to them. Frankly, I'm skeptical of how useful I'd find SaferCPlusPlus even for new projects - especially when modern SC++L implementations already have a lot of error checking code built into them as well, at least for debug builds.

Meanwhile, I already credit these to saving me at least a month of debugging time: http://clang.llvm.org/docs/ThreadSafetyAnalysis.html

I'm interested in Rust because it takes the same approach to securing code from bugs as seems to help a lot when I apply it to C++: Static analysis and annotations, designs to make edge cases impossible to ignore, and where static analysis cannot perfectly find all problems, let it error out reliably at runtime instead of randomly corrupting memory unless I really really really mean it.
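Rust's slice indexing is a concrete instance of that last point: where the analysis can't rule a problem out statically, the program fails reliably at runtime instead of corrupting memory.

```rust
fn main() {
    let v = vec![1, 2, 3];
    // Where the compiler cannot prove the index is in bounds, indexing
    // compiles to a runtime check that panics rather than reading
    // out-of-bounds memory.
    assert_eq!(v[2], 3);
    // `get` makes the check explicit, returning None instead of panicking.
    assert_eq!(v.get(3), None);
}
```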

duneroadrunner
> None of the huge swaths of legacy and third party code I'd like to sanitize uses it - and a large scale rewrite to 'fix' that may very well introduce more bugs than it fixes.

SaferCPlusPlus is designed for compatible interaction with unsafe legacy code and library interfaces. Some may see this as flaw. But it allows you to incrementally "improve" C++ code without requiring a total rewrite. It also means that members of a team can adopt it unilaterally. It's regular C++ code that won't interfere or impose on your co-programmers, even when you're working on the same code.

> A library cannot 'fix' fundamental language constructs either, short of telling you to please remember to perfectly avoid those language constructs even if you're very very used to them.

Right, but the "safe replacement" elements in the library are designed to behave just like their unsafe counterparts, perhaps making the transition easier. In terms of enforcement, I think it may be a "use it and they will build it" scenario. Once there is significant adoption of the SaferCPlusPlus library, it should take a relatively modest effort to implement a static enforcer. I mean, you just want to flag any uses of unsafe elements, not even do any analysis on them.

> Frankly, I'm skeptical of how useful I'd find SaferCPlusPlus even for new projects - especially when modern SC++L implementations already have a lot of error checking code built into them as well, at least for debug builds.

That's the beauty of SaferCPlusPlus. Let's say you're using std::vector<> somewhere in your program. You can just replace "std::vector<>" with "mse::mstd::vector<>" and now your vector is (optionally) safe. With a compiler directive you can choose to "disable" the safety features in any build (i.e. mse::mstd::vector<> will be automatically aliased back to std::vector<>). Compilers generally just do bounds checking (the "sanitizers" notwithstanding). SaferCPlusPlus checks for things like "use-after-free" as well.

And you don't need to link to any library. You just need to add a couple of header files to your project.

> Meanwhile, I already credit these to saving me at least a month of debugging time: http://clang.llvm.org/docs/ThreadSafetyAnalysis.html

The sanitizers are fantastic. But they're not quite a substitute for SaferCPlusPlus [1]. SaferCPlusPlus addresses the issue of safely accessing objects from asynchronous threads.

> Static analysis and annotations, designs to make edge cases impossible to ignore, and where static analysis cannot perfectly find all problems, let it error out reliably at runtime instead of randomly corrupting memory unless I really really really mean it.

SaferCPlusPlus is not a competitor to, or an excuse to neglect static analysis. SaferCPlusPlus exists because static analysis does not fully solve the problem.

[1] http://duneroadrunner.github.io/SaferCPlusPlus/#safercpluspl...

duneroadrunner
Sorry, I misread "ThreadSafetyAnalysis" as ThreadSanitizer [1]. Like I said, static analyzers are great. Some may feel that they sufficiently address the code safety issue in practice, some may not.

[1] http://clang.llvm.org/docs/ThreadSanitizer.html

braveo
> I think the contention is that, unless you're applying NASA style rigor, you don't end up in the same place without verifying the safety automatically, because in practice it's too expensive to verify the safety manually (without getting squeezed out of the space by your competitors.)

That's a claim that has yet to be shown to be true. Maybe it is true, and maybe it isn't, but C++ compilers tend to give pretty good warnings that you can treat as errors, and coupled with good external tools it isn't clear that rust is significantly safer than C++.

The scary part of it all is how many rust users seem to think that it is a given when even the rust standard vec container has unsafe code in it.

I personally think that if rust is shown to statistically decrease the security/error rate on large projects, it's going to be with the use of 3rd party tools, not the specific semantics of the language. I'm of the opinion that the beauty of the unsafe block isn't in any inherent "safety", as much as it is giving more semantics for 3rd party tools to analyze.

tatterdemalion
With no `unsafe` blocks, show me how you could use the `Vec` type to break Rust's memory safety guarantees.
steveklabnik
and not just you, but also show http://rust-lang.org/security.html
braveo
That's the point, there are plenty of things that flat-out cannot be done without using inherently unsafe operations, even in rust.

This is why it has still yet to be shown that rust is actually safer than C++.

tatterdemalion
That's not the point. Array types in Ruby and Python are implemented in C. No one goes around saying those languages are actually no more memory safe than C++ (or maybe you do?).
None
None
braveo
> No one goes around saying those languages are actually no more memory safe than C++ (or maybe you do?).

It's unfortunate that you've chosen to try and make the scope smaller by referring specifically to "memory safety".

As a result, this will be my last response to you, I just don't have the energy to go back and forth with someone who isn't willing to be honest in this discussion.

But to answer your question, those languages are no safer than C++. I can write a C plugin in both that contains memory leaks and various safety issues. And in fact, both projects have had their own security problems.

MaulingMonkey
> It's unfortunate that you've chosen to try and make the scope smaller by referring specifically to "memory safety".

Okay, back to the broader scope - what's an area that you think Rust might do worse than C++ at? I'd be very interested in fixing any blind spots I might have.

tatterdemalion
unsafe blocks play no role in any kind of security aside from memory safety. You are communicating with extreme disingenuity.
braveo
Hey, you're right, getting memory usage correct doesn't affect safety at all.

good day.

dbaupp
Reading an array out of bounds is definitely unlikely to be correct, and is very likely to be a security vulnerability. Memory safety is absolutely a prerequisite for any other sort of safety one might want.
braveo
We agree on that, my point is that C++ does it via libraries, Rust does it by hiding unsafe blocks behind interfaces (aka libraries).

Time will tell which approach is ultimately superior (if either one of them is actually better), but until then it isn't clear that the Rust approach is statistically better than the C++ approach.

Ultimately the advantage Rust has is the ability to possibly provide better 3rd party tooling that will enable developers to make the right decisions more often than C++ does. Consider a tool that runs on code checkin that spits out a report of all sites where code that manipulates state that could affect an unsafe block was changed/written so that developers could then have a very focused peer review of the code to ensure the safe code doesn't put the state in such a spot that it causes problems.

I think in this way Rust may eventually be shown to be better than C++, but then again, maybe not.

burntsushi
> We agree on that, my point is that C++ does it via libraries, Rust does it by hiding unsafe blocks behind interfaces (aka libraries).

That's a false equivalency and completely ignores the fact that C++ is one giant `unsafe` block.

> but until the it isn't clear that the Rust approach is statistically better than the C++ approach

Could you please explain what kind of evidence would convince you?

braveo
> That's a false equivalency and completely ignores the fact that C++ is one giant `unsafe` block.

This is exactly what I meant when I said rust people miss the forest for the trees.

> Could you please explain what kind of evidence would convince you?

you quoted me explaining what I would need.

dbaupp
> But to answer your question, those languages are no safer than C++. I can write a C plugin in both that contains memory leaks and various safety issues. And in fact, both projects have had their own security problems.

This definition makes any comparison of the safety of different languages totally useless: according to it, all languages are equally unsafe. You're free to use that definition, but it's a tautology and thus doesn't actually allow distinguishing between anything, nor does it serve any purpose.

It's true that all languages offer escape hatches, but it's also true that there's a major qualitative (at least) difference between the constrained rarely used escape hatches of Python, Java and Rust, and the "the whole language is an escape hatch" approach of C++ and C.

In mathematics and the verification of programs, proofs will build from small proofs: first show that a function `foo` has a certain behaviour and then use this to show that `bar` (which calls `foo`) has another behaviour, etc etc, until the whole program is proved correct. Languages like Python, Java and Rust are designed with this in mind: prove the unsafe code correct and the language guarantees the rest of the code is memory safe. C and C++ have no such property: a proof of memory safety requires touching every single line of code, not just the small number that actually need to escape down a level.
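A small Rust sketch of that compositional structure (the function is hypothetical; the `unsafe` semantics are real): the unsafe block is confined to one function whose safety argument is local, and every caller is checked against the safe signature alone.

```rust
// Hypothetical helper: the only unsafe code is inside, with a local proof.
fn first_byte(s: &str) -> Option<u8> {
    if s.is_empty() {
        None
    } else {
        // SAFETY: `s` is non-empty, so byte index 0 is in bounds.
        Some(unsafe { *s.as_bytes().get_unchecked(0) })
    }
}

fn main() {
    // Callers never need re-auditing: the safe signature is the contract.
    assert_eq!(first_byte("abc"), Some(b'a'));
    assert_eq!(first_byte(""), None);
}
```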

braveo
> It's true that all languages offer escape hatches

And that all languages experience safety issues as a result of these escape hatches, and that all languages suffer security issues despite sequestering these escape hatches.

Which goes back to what I said before.

"That's a claim that has yet to be shown to be true. Maybe it is true, and maybe it isn't ..."

[snip]

"I personally think that if rust is shown to statistically decrease the security/error rate on large projects, it's going to be with the use of 3rd party tools, not the specific semantics of the language. I'm of the opinion that the beauty of the unsafe block isn't in any inherent "safety", as much as it is giving more semantics for 3rd party tools to analyze."

> In mathematics and the verification of programs, proofs will build from small proofs: first show that a function [snip]

This is a non-sequitur. You're trying to compare a deductive proof in a formal logic system, whose only requirement is to be internally consistent, with messy reality. Look at the difference in approach. I said we won't know until we have enough experience and data to analyze to see if there's a significant statistical difference between the error rates of software written in C++ vs Rust. You basically said we already know because we can write small programs that are safe, therefore we can write large programs that are safe. It's a non-sequitur.

> a proof of memory safety requires touching every single line of code, not just the small number that actually need to escape down a level.

And the same can be said of Rust, the unsafe blocks give a false sense of security. No one really cares if it crashed in an unsafe block if the root cause is from state manipulated in safe code somewhere away from the unsafe block. It takes a lot of discipline and scrutiny to make sure you don't accidentally put the state into a spot where the unsafe block can do bad things. This is the same sort of discipline required in C++.

That's the point you're not getting, and it's why I think 3rd party tools that can tell us more about the code affected by an unsafe block are going to be more useful in the long run. Imagine a tool, run on checkin or at specific intervals, that can immediately identify code changes that manipulate state an unsafe code block depends on. Developers can then examine those changes to make sure nothing bad happens.

Or you're in an IDE that changes the variable color to indicate that what you're working with affects an unsafe block, so you can be sure that you need to pay extra careful attention and definitely get a code review.

These same techniques work successfully in C++. People deal with it in the exact same manner: they put it behind an interface and use code reviews and external tools to identify potentially dangerous things that human beings then step in and examine much more closely.

The point is, there is nothing inherent in rust that definitely makes it safer than C++. There are potentially aspects of it that enable better tooling that could eventually make it safer than C++, but it will take time and careful analysis before it's obvious that it's safer.

Modern C++ tends to sequester these things off the way Rust would.

dbaupp
> C++ compilers tend to give pretty good warnings that you can treat as errors

They miss far too many simple cases for this to possibly be a sensible claim, e.g. neither gcc -Wall nor clang -Weverything warn about the two massive problems in the following code:

  #include<vector>

  int &foo() {
    std::vector<int> v{ 0, 1, 2, 3 };

    int &x = v[0];
    v.clear();

    int y = x; // dereferencing dangling pointer!
    (void)y;

    return x; // escaping a dangling pointer!
  }
Rust is clearly a step up since it does actually catch these. The Rust compiler is the "third party" tool that helps get better code; unlike with C and C++, the static analysis is built in.
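For comparison, a rough Rust translation (hypothetical `foo`, mirroring the C++ above): rustc rejects the direct port outright, flagging both the mutation during a live borrow and the escaping reference.

```rust
// The direct translation does not compile; rustc reports both problems:
//
// fn foo() -> &i32 {
//     let mut v = vec![0, 1, 2, 3];
//     let x = &v[0];     // borrow of `v`
//     v.clear();         // error: cannot mutate `v` while it is borrowed
//     x                  // error: returns a reference to a local variable
// }

// The version the borrow checker accepts returns an owned value instead:
fn foo() -> i32 {
    let v = vec![0, 1, 2, 3];
    v[0]
}

fn main() {
    assert_eq!(foo(), 0);
}
```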
braveo
I'll have to check when I get home, but I'm fairly certain you're purposefully suppressing the compiler's warnings here. That's not a good way to make your argument about the compiler not being able to warn you about problems.
dbaupp
Yes, I'm purposely suppressing the unused variable warning with the (void)y;, because presumably real code will actually do something with the value: I could've printed y or left out that line, whatever, the compilers still don't warn about the actual major problems.
braveo
Your argument as to why the compiler won't warn you about problems is to show an example where you purposefully suppress the warnings the compiler gives you.

I honestly don't think we should continue this conversation.

dbaupp
Suppressing the unused variable warning is:

- unrelated to the dangling pointers

- not suppressing a warning about a memory safety problem

- not affecting the lack of warnings for the memory safety problems: remove the `(void)y;` line and there are still no warnings about the dangling pointers.

Seriously, you are focusing on something irrelevant. Either pretend I didn't write that line, or pretend it was std::cout << y << std::endl;. The fundamental fact remains that the compilers do not warn about the major problem of handling dangling pointers, despite both of these being fairly trivial cases, just a tiny step up from pure stack allocation.

Yes, C++ compilers do have some warnings for some things, but the interesting warnings for this topic are insidious memory safety bugs like dangling references, not the basic unused variable ones. Rust warns about both, C++ compilers catch only the second one: the code I wrote is wrong for two reasons, and neither of those reasons is the unused variable.

If you're going to tout the quality of C++ compiler's warnings, they better flag as many cases of problems like use after free (and use after move), dangling references and iterator invalidation as they can, but I've never had a C++ compiler warn about any of these (other than the most basic case of returning a reference to a local variable).

braveo
> The fundamental fact remains that the compilers do not warn about the major problem of handling dangling pointers

I'm going to quote myself, emphasis mine.

"The end goal isn't for the compiler to verify the safety, the end goal is for the software itself to be safe in a way that's cheaper."

[snip]

"C++ compilers tend to give pretty good warnings that you can treat as errors, and __coupled with good external tools__ it isn't clear that rust is significantly safer than C++."

leshow
quoting pcwalton up above:

"I don't care if the software is verified via libraries or compilers. The problem is that C++ verifiers don't work."

braveo
right, I too can make assertions with no evidence to back them up.

If that's really your bar, then we don't have much more to discuss.

MaulingMonkey
MSVC is continuing to improve their detection of invalidated pointers: https://youtu.be/hEx5DNLWGgA?t=3231

This is running the static analysis pass instead of the normal compile pass, but stuff is improving. Of course you're preaching to the choir as far as I'm concerned - this stuff is way late to the party, and speaking generally, has issues with false positives and failing to detect things.

dbaupp
You're 100% correct that the end goal is safe software in a way that's practical to achieve. However, having the computer check one's code is generally regarded as a great way (even the best way) to do this: NASA's JPL doesn't accidentally recommend[0] turning on all compiler warnings and using static analysis tools, and it seems a little unlikely that most major tech companies would be spending millions on static analysers and statically-typed languages if they didn't think it helped them write correct code.

[0]: https://en.wikipedia.org/wiki/The_Power_of_10:_Rules_for_Dev...

braveo
Sure, but C++ also has tools to do this checking in an automated way.
staticassertion
Well... how do you know the code is safe otherwise? Exhaustive testing is unreasonable. How do you ensure that you have achieved the end goal of safe software?
pcwalton
> Sometimes I think Rust people lose the forest for the trees. The end goal isn't for the compiler to verify the safety, the end goal is for the software itself to be safe in a way that's cheaper.

I don't care if the software is verified via libraries or compilers. The problem is that C++ verifiers don't work.

duneroadrunner
That's right. SaferCPlusPlus is not complete and does not yet include a static verifier/checker.

Without a static verifier, memory safety is not guaranteed, just dramatically improved. And for many cases where there is a large investment in an existing code base, this might still be a more expedient solution. Even if only an interim one.

For example, I would estimate that, with concerted effort, it would take a matter of weeks to "port" the existing Firefox C++ code base to SaferCPlusPlus. Presumably this would dramatically reduce "remote execution" and other memory bugs while we wait for the Rust implementation.

In cases where guaranteed memory safety is desired, you might think of it this way: In Rust, the static checker is built into the compiler. In C++, static checkers/analyzers are separate tools. You could choose to require that your C++ code must be verified to be safe by a static analyzer of your choosing. In C++, it can be difficult/inconvenient to write non-trivial code that fully appeases the static analyzer, just like in Rust. You can use SaferCPlusPlus to make it easier to fully appease the static analyzer (like the Rust language does).

I should also mention "Ironclad C++". It's similar in function to SaferCPlusPlus, but it uses garbage collection (where SaferCPlusPlus does not). It does include a static verifier/enforcer.

As a fan of "memory safety without using GC", I'm rooting for Rust. But I think the idea of achieving memory safety in C++ can be too quickly dismissed.

pcwalton
> In C++, static checkers/analyzers are separate tools. You could choose to require that your C++ code must be verified to be safe by a static analyzer of your choosing.

The problem is that, in C++, there is no such static checker in existence (except ones with GC).

duneroadrunner
Well, like I said in the other comment, you guys could fix that by unbundling the static checker in the Rust compiler and making it applicable to (a subset of) C++ code as well :)

So then would you agree with the notion that (a practical subset of) C++ combined with a static analyzer could be just as safe and fast as Rust if, hypothetically, there existed an enthusiastic community comparable to Rust's? Or are there intrinsic technical issues? Or syntax issues?

Also, let me throw this notion at you: Rather than disallow code that can't be verified to be (memory) safe, the compiler could instead inject runtime checks that would be optimized out using the same analysis that the static checker uses.

That is, instead of requiring that the code be fast and safe or it won't compile, it becomes: If your code is not clearly, intrinsically safe then it will have runtime checks that will slow it down. And the compiler could list any runtime checks that it wasn't able to optimize out.

The reason I suggest this is that memory safety is just the enforcement of certain invariants. There's no reason why we couldn't let the programmer define additional, application specific invariants and have the build process treat them the same way it treats memory access invariants.

So for example, when a user defines a class, it could have a standard member function called "assert_object_invariants()" or something, that the programmer can define. Then anytime a (non-const?) member function is called, the compiler can insert runtime asserts at the beginning and end of the member function call. And again the compiler can tell you when those runtime asserts aren't optimized out. Wouldn't that make sense? I haven't really thought it through.
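A rough Rust rendering of that invariant idea (the type, its fields, and the `check_invariants` name are all hypothetical; `debug_assert!` stands in for checks a compiler might optimize out): the invariant is asserted at the boundaries of every mutating method.

```rust
// Hypothetical type with an application-specific invariant:
// 0 <= reserved <= balance must hold between method calls.
struct Account {
    balance: i64,
    reserved: i64,
}

impl Account {
    fn check_invariants(&self) {
        debug_assert!(self.reserved >= 0 && self.reserved <= self.balance);
    }

    // Mutating method: the invariant is checked on entry and exit.
    fn reserve(&mut self, amount: i64) -> bool {
        self.check_invariants();
        let ok = amount >= 0 && self.balance - self.reserved >= amount;
        if ok {
            self.reserved += amount;
        }
        self.check_invariants();
        ok
    }
}

fn main() {
    let mut a = Account { balance: 100, reserved: 0 };
    assert!(a.reserve(40));  // within the invariant
    assert!(!a.reserve(70)); // rejected: would exceed the balance
    assert_eq!(a.reserved, 40);
}
```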

pcwalton
> Well, like I said in the other comment, you guys could fix that by unbundling the static checker in the Rust compiler and making it applicable to (a subset of) C++ code as well :)

No, we can't do that. It is incompatible with C++.

> Rather than disallow code that can't be verified to be (memory) safe, the compiler could instead inject runtime checks that would be optimized out using the same analysis that the static checker uses.

That is not possible. It would require massive bookkeeping, much like your library does. That would eliminate most of the benefits of Rust.

Manishearth
> Well, like I said in the other comment, you guys could fix that by unbundling the static checker in the Rust compiler and making it applicable to (a subset of) C++ code as well :)

The problem is that you still need extra annotations. Namely lifetime annotations (or something similar relating between borrows -- either that, or use a lot of elision which can be crippling). On top of that, the programming style Rust encourages is not the same as the ones you tend to see in C++ codebase, and programming in the C++ style will lead to code that doesn't compile.

> Rather than disallow code that can't be verified to be (memory) safe, the compiler could instead inject runtime checks that would be optimized out using the same analysis that the static checker uses.

This might be more tractable (and is an interesting idea). But that optimizer would be hard to write.

> So then would you agree with the notion that (a practical subset of) C++ combined with a static analyzer could be just as safe and fast as Rust

I think this is what the new ISOCPP core guidelines are trying to do? Though they don't go far enough in preventing memory unsafety IIRC (this may have changed).

duneroadrunner
> The problem is that you still need extra annotations. Namely lifetime annotations

Well, the idea is not to have the static analyzer verify typical C++ code. Just some practical subset. So for example I think it's quite practical to write C++ code that uses only "scope" pointers (basically pointers to objects on the stack) and (not-null) refcounting pointers, that intrinsically don't outlive their targets. Lifetimes would be implied by the types. So wait, what more does Rust's static analyzer give us again? Does it somehow remove the need for refcounting heap objects?

> the programming style Rust encourages is not the same as the ones you tend to see in C++ codebase, and programming in the C++ style will lead to code that doesn't compile.

I have no problem with that. I have no attachment to the "traditional" C++ programming style.

> This might be more tractable (and is an interesting idea). But that optimizer would be hard to write.

Why? The static analyzer has an opinion on whether or not a program is safe. The optimizer just wants to know if it still thinks it's safe when you remove a runtime check.

> I think this is what the new ISOCPP core guidelines are trying to do? Though they don't go far enough in preventing memory unsafety IIRC (this may have changed).

The ISOCPP core guidelines approach is to recommend the use of C++'s intrinsically dangerous elements in a way that is "usually safe", but not always, and rely on their static analyzer to catch bugs. So the question becomes, what do you do in the many cases where the static analyzer doesn't know if it's safe or not. You can try to redesign your code so the static analyzer can understand that it's safe. But that's often very inconvenient or has a performance cost. Often the most practical (safe) solution is to resort to something like SaferCPlusPlus.

Manishearth
> So wait, what more does Rust's static analyzer give us again? Does it somehow remove the need for refcounting heap objects?

Refcounting is rarely needed because most sharing is done via "borrows", which usually work via scope-tied "references" which may point to either the stack or the heap.

Implementing and enforcing local scope pointers in C++ via static analysis is not hard. Making it possible to thread borrows through APIs and annotate things with the borrowing semantics (which is what makes Rust avoid refcounting or even allocation costs) requires a bit more work.
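A minimal sketch of the borrowing point (nothing beyond std assumed): one borrowed parameter works uniformly over stack and heap data with no refcount traffic; the compiler only checks that the reference doesn't outlive its target.

```rust
// One borrowed slice parameter serves both stack arrays and heap vectors,
// with zero reference-counting or allocation cost at the call site.
fn sum(xs: &[i32]) -> i32 {
    xs.iter().sum()
}

fn main() {
    let on_stack = [1, 2, 3];
    let on_heap = vec![4, 5, 6];
    assert_eq!(sum(&on_stack), 6);
    assert_eq!(sum(&on_heap), 15);
}
```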

> I have no attachment to the "traditional" C++ programming style.

Right, but at this point you have a very weird looking subset of C++ that can't seamlessly integrate with other libraries, and can't be translated to from regular C++ without significant human intervention -- why not just use Rust?

> Why? The static analyzer has an opinion on whether or not a program is safe. The optimizer just wants to know if it still thinks it's safe when you remove a runtime check.

I guess I misunderstood your proposal. This sounds doable. But, again, you'd be using a weird subset of C++ that doesn't seamlessly integrate, and you're just better off using Rust at this point.

Instead of trying to port Rust's guarantees to C++ it makes more sense to use the same principles to organically build on top of C++, in a different way. IMO this is sort of what ISOCPP is trying to do, but they're not quite there yet, and trying to find a compromise between making the language too different and making it safe is hard.

> So the question becomes, what do you do in the many cases where the static analyzer doesn't know if it's safe or not. You can try to redesign your code so the static analyzer can understand that it's safe.

This is always going to be a problem regardless of the static analyzer. You have to design it to reject these cases. Rust does this too; there are some edge cases where you need to design around the borrow checker (though usually this doesn't incur additional cost, and the most common of these are going to be addressed). If designing low level abstractions like vectors and stuff (or doing FFI), Rust gives you an escape hatch ("unsafe"), which has a couple of checks disabled and can be used to write the code you need (verifying safety of a program then just requires verifying that these blocks of code are sound and do not rely on any invariants that can be broken by code outside of them).

duneroadrunner
> Right, but at this point you have a very weird looking subset of C++

It's a little weird looking at first glance, but ultimately it's not really that weird. The main unfamiliar thing is that objects that are going to be the target of a (safe) pointer need to be declared as such. So

    {
        std::string s1;
        auto s1_ptr = &s1;
    }
becomes

    {
        mse::TXScopeObj<std::string> s2;
        auto s2_ptr = &s2;
    }
s2 acts just like a regular string. It's just wrapped in a (transparent) type that overloads the & (address of) operator so that s2_ptr is a safe pointer. (For example, in this case s2_ptr cannot be retargeted or set to null).

> that can't seamlessly integrate with other libraries,

Sure it can, that's the point. For example:

    {
        std::string s1 = "abc";
        mse::TXScopeObj<std::string> s2 = "def";
        auto s2_ptr = &s2;
        std::string s3 = s1 + s2; // s2 totally works where an std::string is expected
        s3 += *s2_ptr;
        *s2_ptr = s1; // and vice versa
    }
> and can't be translated to from regular C++ without significant human intervention --

Umm, it could be automated, but you would need a tool that can recognize object declarations. But modern C++ code is mostly safe already. I mean you're supposed to try to avoid pointers in favor of standard containers and iterators. So just replace your "std::vector"s with "mse::mstd::vector"s and your "std::array"s with "mse::mstd::array"s and you're mostly there.

> why not just use Rust?

My impression is that Rust has been evolving a lot. Is the language stable now? Is it time to jump in? Has it vanquished D as the successor to C++? Are we happy with Rust's solution for exceptions?

Even if Rust is the future, and the future is here, I'm still stuck with existing C++ projects. And I'd feel better if they were (at least mostly) memory safe. There must be others in the same boat.

Manishearth
> The main unfamiliar thing is that objects that are going to be the target of a (safe) pointer need to be declared as such.

Your proposal was to take Rust's static analysis and make it work with C++. It's clear you don't know Rust. Why are you so confident about what kind of effect that would have on the language? Rust is not "like C++ but with more static analysis"; it's a very different language. A lot of the safety that modern C++ gets you is something that Rust gets you, using different mechanisms.

> Sure it can, that's the point. For example:

This example seems to be a SaferCPlusPlus example? I'm talking specifically about your proposal to take Rust's static analysis and use it on C++. That isn't what SaferCPlusPlus seems to be doing. It seems like you might be talking about something else? The general applicability of safety based static analysis? I'm not arguing with that.

> My impression is that Rust has been evolving a lot. Is the language stable now?

Still evolving, just like C++ is, but is stable now. Has been for more than a year.

> Are we happy with Rust's solution for exceptions?

I am. Most folks in the Rust community are. There are no missing pieces now, though.

> Has it vanquished D as the successor to C++?

No, and that's subjective, and your C++-with-Rust's-static-analysis will not be in a different boat.

> I'm still stuck with existing C++ projects. And I'd feel better if they were (at least mostly) memory safe.

That's my point. The amount of work to convert existing C++ code to something that satisfies a static analyzer using Rust's exact set of invariants is just as much as the work required to convert to Rust. You won't be able to just throw a new static analyser at C++ code and stuff will magically work. It will require significant refactoring and effort. Nor will your code be able to easily talk with other C++ libraries.

> Umm, it could be automated

No, "human intervention" I said. It can't be automated easily, because the style it enforces is significantly different. I've done quite a bit of jumping back and forth between C++ and Rust these days (in the same codebase, with FFI), and the fact that the structure and style of programs is different is very apparent.

There is work on translating C to Rust (and it might grow to C++ some day?), but IIRC you still need significant human intervention. For C at least there is no existing safety system to replace, so it's still easier, but translating from C++'s (largely incompatible) existing safety system will be tough.

Translating code will need the translator to figure out what the code is trying to do, basically. This isn't like Python2->Python3. Like I said, the style enforced is different. I don't mean syntax style, I mean how code is structured at a higher level.

> I mean you're supposed to try to avoid pointers in favor of standard containers and iterators

If you want to be 100% safe you need to solve iterator invalidation, and Rust's solution is something that is very hard to make work with C++'s usual style of coding. If you want to avoid all unnecessary allocations and refcounting you need a lifetime system. To use Rust's model, the mechanism of moving would have to be tweaked considerably.

Again, these problems can probably be solved organically from C++ itself (which I guess is what SaferCPlusPlus is doing?), building a static analyser that tries to solve them building on the existing mechanisms in C++. But importing Rust's analysis will just get you a completely new language which has almost no use.

duneroadrunner
> It's clear you don't know Rust.

Oh yeah, didn't mean to give the impression otherwise. But I think I've gained some understanding since yesterday. I'm just learning, but tell me if I'm getting this at all:

- Rust only considers scope lifetimes (and "static" lifetime which is basically like the uber scope)?

- References can only target objects with a superset (scope) lifetime.

- You can only use one non-const reference to an object per scope. This solves the aliasing issue?

> This example seems to be a SaferCPlusPlus example? I'm talking specifically about your proposal to take Rust's static analysis and use it on C++.

Sorry, I misunderstood. I thought you'd switched context. Let me try again:

There are a couple of reasons for pursuing "Rustesque" programming in C++ as opposed to in Rust itself. First let me point out that there would have to be a mechanism for distinguishing between "statically enforced" safe blocks of C++ code and the rest of the code (just like Rust's "unsafe" blocks I guess).

So then the obvious advantage is a better interface to C++ code and libraries. Rust only supports plain C (FFI) interfaces? Is that right?

But another argument is that there are multiple strategies to achieve memory safety (and code safety in general). The two popular ones are the Rust strategy and the GC strategy. One is not uniformly superior to the other. Superior maybe, but not uniformly so. Presumably the Rust strategy will be more memory efficient, and maybe theoretically faster, whereas the GC strategy might facilitate higher productivity.

If you choose Rust, you're committed to one strategy. Now, I don't know if it'll turn out to be realistic, but I'm wondering if it's possible that C++ can support both strategies. (And maybe some other ones too.) Not just different strategies in different applications, but even in the same application. The Rust static analyzer would of course only work on indicated blocks of code.

Of course writing code in one strategy or another would be more clunky in C++ than a language specifically designed for it, but everything's a trade-off. The question is, is it worth it?

It's easy to say the clunkiness isn't worth it, but Rust probably has the weakest argument in that respect. Right? (I mean doesn't Rust have a reputation of being clunky anyway?)

Again, I barely know any Rust, but it seems to me that the main safety functionality that Rust provides over, say, SaferCPlusPlus, is the static enforcement of "one non-const reference to an object per scope" as an efficient, but restrictive, solution to the aliasing issue.

Hmm, obviously I have to find some time to learn Rust better, but intuitively, it seems like the simple Rust examples I've seen so far would have a corresponding C++ implementation, and it's not immediately obvious to me why a static analyzer couldn't work on the corresponding C++ code. Is there a simple example that demonstrates the problem? Am I just underestimating the difficulty of static analysis?

Manishearth
> You can only use one non-const reference to an object per scope. This solves the aliasing issue?

More accurately, if you have a mutable reference you cannot have any other references.
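A small illustrative sketch of that rule (my example, not from the comment): shared borrows may coexist, but a mutable borrow must be exclusive.

```rust
// Shared (&T) borrows may alias; a mutable (&mut T) borrow is exclusive.
// The commented-out line shows what the borrow checker would reject.
fn main() {
    let mut x = 10;

    let a = &x;
    let b = &x;          // fine: any number of shared borrows
    assert_eq!(*a + *b, 20);
    // let m = &mut x;   // ERROR if `a` or `b` were used afterwards:
    //                   // cannot borrow `x` as mutable while shared
    //                   // borrows are still alive

    let m = &mut x;      // fine here: `a` and `b` are no longer used
    *m += 1;
    assert_eq!(x, 11);
    println!("ok");
}
```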

> Rust only supports plain C (FFI) interfaces? Is that right?

Yes, but with bindgen you have a decent C++ interface.

My contention is that the "better interface" is only slightly better, and probably not enough to justify basically creating a whole new language. Note that for your safe RustyCPP code, the regular-C++ code will be completely unsafe to use and you'll have to write some safety wrappers that encode the guarantees you need. I've been doing this in the Rust integration in Firefox, and I'm sure that a dialect of C++ that uses Rust's rules will need to do something similar. That's where the bulk of the integration cost comes from.

> If you choose Rust, you're committed to one strategy

I mean, you can just blindly use Rc<T> or Gc<T> in Rust (Gc<T> only exists as a POC right now but we plan to get a good one up some day).

But yeah, magical pervasive GC would be hard to do in Rust.
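For reference, opting into reference counting looks like this (my sketch; `Rc<T>` is single-threaded shared ownership):

```rust
use std::rc::Rc;

// Rc<T> gives shared ownership with runtime reference counting,
// as an opt-in alternative to the borrow checker's static lifetimes.
fn main() {
    let a = Rc::new(vec![1, 2, 3]);
    let b = Rc::clone(&a);            // bumps the count, no deep copy
    assert_eq!(Rc::strong_count(&a), 2);
    assert_eq!(b[0], 1);
    drop(b);                          // count drops back to 1
    assert_eq!(Rc::strong_count(&a), 1);
    println!("ok");
}
```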

> The question is, is it worth it?

You're arguing between choosing Rust vs CPP-with-static-analysis. I'm arguing between choosing Rust vs CPP-with-Rust-esque-static-analysis. I think the latter strongly points towards Rust, but the former has interesting tradeoffs.

> I mean doesn't Rust have a reputation of being clunky anyway?

Not ... really? It has a reputation for having a steep initial learning curve.

> it seems like the simple Rust examples I've seen so far would have a corresponding C++ implementation

Oh, this would work. But the reverse -- taking C++ code and making it work under the Rust rules -- is very hard. Not because of the aliasing rules, but because of how copy/move constructors are used in C++ (Rust's model strongly depends on initialization being necessary), the whole duck-typed-templates thing in C++, and similar things with respect to coding patterns that don't translate well.

Again, you could build a safety system on C++ that respects these patterns, but it would not be the same as taking Rust's rules and enforcing them on C++.

lmm
> It's a little weird looking at first glance, but ultimately it's not really that weird.

Readability is important for maintainable code. And safe coding patterns tend to involve a lot of sum types (which you can model in C++ with the visitor pattern, but it's significant overhead in code length and possibly even at runtime), and a fair amount of generics (which are cumbersome in C++, and the error reporting is awful). If you're not going to get the existing tool/library infrastructure either way, so you're just evaluating on their merits as languages, I don't think you'd ever want to pick C++ over Rust.

> modern C++ code is mostly safe already.

I've been hearing that for about a decade now (and I suspect the only reason it isn't longer is that I wasn't programming before then). And yet we still see bugs, all the time. Not subtle bugs, but stupid, obvious bugs.

> Is the language stable now?

Yes, as of 1.0.

> Is it time to jump in? Has it vanquished D as the successor to C++? Are we happy with Rust's solution for exceptions?

Yes.

> Even if Rust is the future, and the future is here, I'm still stuck with existing C++ projects. And I'd feel better if they were (at least mostly) memory safe.

My belief is that no amount of whack-a-mole is going to make those projects memory-safe, and none of the linters/checkers/dialects is ever going to reach a point where it offers actual guarantees. If it were possible it would have happened by now. The only way you're going to get to memory safety is by rewriting those projects, bottom to top (which is probably what you'd have to do to use one of these C++ dialects anyway). If you want to do the migration gradually (and you should!) rust has pretty good interop.

duneroadrunner
> > Why? The static analyzer has an opinion on whether or not a program is safe. The optimizer just wants to know if it still thinks it's safe when you remove a runtime check.

> I guess I misunderstood your proposal. This sounds doable. But, again, you'd be using a weird subset of C++ that doesn't seamlessly integrate, and you're just better off using Rust at this point.

My proposal is sort of language independent. I'm just suggesting a better way to address the code safety/correctness issue might be with runtime asserts, because it's more general. Some of the runtime asserts (like the ones regarding memory safety) will be automatically generated by the compiler, and others would be user defined (but compiler placed). And the static analyzer (I guess "the borrow checker" in Rust) would be repurposed to strip out the unnecessary runtime checks. And the compiler/optimizer would tell you which runtime asserts it was unable to optimize out. (Presumably good Rust code would result in all the memory runtime asserts being optimized out.)

This allows for programs that are not just memory safe, but "application invariant" safe as well. Right? I mean it's not really a totally new concept, I guess it's kind of "design by contract" or whatever, but with a slight performance bent because the optimizer tells you what runtime checks it's having trouble getting rid of. And maybe there would be a way to indicate that you expect the optimizer to be able to get rid of certain runtime checks, and instruct it to generate a warning (or error) if it doesn't. I'm just sayin'...
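For what it's worth, Rust's bounds checks already behave a bit like this (a sketch of mine, not part of the proposal): indexing carries a compiler-inserted runtime check that panics on violation, and writing the loop in iterator style lets the optimizer prove the check unnecessary and drop it.

```rust
// Indexed access carries a runtime bounds check (a compiler-inserted
// assert); iterating lets the optimizer prove the check away entirely.
fn sum_indexed(v: &[i32]) -> i32 {
    let mut s = 0;
    for i in 0..v.len() {
        s += v[i]; // bounds-checked at runtime (usually optimized out here)
    }
    s
}

fn sum_iter(v: &[i32]) -> i32 {
    v.iter().sum() // no bounds checks needed at all
}

fn main() {
    let v = [1, 2, 3, 4];
    assert_eq!(sum_indexed(&v), 10);
    assert_eq!(sum_iter(&v), 10);
    println!("ok");
}
```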

pcwalton
I don't think it works. All of the "runtime asserts" require bookkeeping. That bookkeeping ends up being worse in terms of performance than what you have with a GC.

It's hard to beat a modern, tuned GC.

pcwalton
Create a vector. Push an element onto it. Take a reference to that element with operator[]. Clear the vector. Call a method on that dangling reference.

Create an object on the stack. Return a reference to that object. Call a method on that reference.

Create a vector. Push an element onto it. Call a method on that element that clears the vector and then calls another virtual method on itself, via the this pointer.

Accidentally share a vector between threads. Race push_back() and remove().

Etc. etc. We didn't implement lifetimes for no reason.

Additionally, the pointer registration mechanism that that library uses has a runtime performance cost worse than a GC write barrier (because it incurs writes on reads).

duneroadrunner
> Create a vector. Push an element onto it. Take a reference to that element with operator[]. Clear the vector. Call a method on that dangling reference.

> Create an object on the stack. Return a reference to that object. Call a method on that reference.

References are one of the unsafe C++ elements that SaferCPlusPlus is intended to be used to replace [1].

> Create a vector. Push an element onto it. Call a method on that element that clears the vector and then calls another virtual method on itself, via the this pointer.

Yes, that series of operations is safe. A related example from the "msetl_example.cpp" file:

        typedef mse::mstd::vector<int> vint_type;
        mse::mstd::vector<vint_type> vvi;
        {
            vint_type vi;
            vi.push_back(5);
            vvi.push_back(vi);
        }
        auto vi_it = vvi[0].begin();
        vvi.clear();
        try {
            /* At this point, the vint_type object is cleared from vvi, but it has not been deallocated/destructed yet because it
            "knows" that there is an iterator, namely vi_it, that is still referencing it. At the moment, std::shared_ptrs are being
            used to achieve this. */
            auto value = (*vi_it); /* So this is actually ok. vi_it still points to a valid item. */
            assert(5 == value);
            vint_type vi2;
            vi_it = vi2.begin();
            /* The vint_type object that vi_it was originally pointing to is now deallocated/destructed, because vi_it no longer
            references it. */
        }
        catch (...) {
            /* At present, no exception will be thrown. We're still debating whether it'd be better to throw an exception though. */
        }
I agree with the gist though. This kind of thing should be prevented at compile time. Rust has an excellent static analyzer/enforcer built into its compiler. Arguably, it would be a service to the community to unbundle it from the Rust compiler and make it available for application to C++ code as well. Arguably.

> Accidentally share a vector between threads. Race push_back() and remove().

SaferCPlusPlus addresses the sharing of objects between asynchronous threads [2]. A particular shortcoming of C++ with respect to object sharing is that it doesn't have a notion of "deep const/immutability".

> Additionally, the pointer registration mechanism that that library uses has a runtime performance cost worse than a GC write barrier (because it incurs writes on reads).

Um, yeah, modern code should try to avoid the use of general pointers (and generally does). Most modern languages don't provide general pointers. SaferCPlusPlus makes them safe and slow (and available for easy porting of legacy code). When writing new code you would instead, when required, use one of the faster pointer types available in the library.

Don't interpret SaferCPlusPlus as an assertion that C++ is a uniformly better language than Rust or other modern languages. It's more of a suggestion that C++ and existing C++ code bases can be salvaged to a greater degree than one might think.

[1] http://www.codeproject.com/Articles/1093894/How-To-Safely-Pa...

[2] http://www.codeproject.com/Articles/1106491/Sharing-Objects-...

pcwalton
> References are one of the unsafe C++ elements that SaferCPlusPlus is intended to be used to replace [1].

OK, so you can't use references. Then, as I said before, your pointer replacements have a runtime performance cost worse than GC write barriers.

> Yes, that series of operations is safe. A related example from the "msetl_example.cpp" file:

I don't think you understood me. I mean the this pointer. "this" is hardwired into C++ to be an unsafe pointer.

> I agree with the gist though. This kind of thing should be prevented at compile time. Rust has an excellent static analyzer/enforcer built into its compiler. Arguably, it would be a service to the community to unbundle it from the Rust compiler and make it available for application to C++ code as well. Arguably.

Not possible. It's totally incompatible with existing C++ designs.

> Um, yeah, modern code should try to avoid the use of general pointers (and generally does). Most modern languages don't provide general pointers.

I think you're getting lost in the weeds of what a "general pointer" is and is not. It doesn't matter.

The point is that if your references track their owners at runtime, then you are just creating a GC. If the overhead of doing that is worse than a traditional GC (which, if you are doing that much bookkeeping, it will be), then there's little purpose to it.

duneroadrunner
> OK, so you can't use references. Then, as I said before, your pointer replacements have a runtime performance cost worse than GC write barriers.

The library provides three types of pointers - "registered", "scope" and "refcounting". I believe you are referring to the registered pointers, that indeed have significant cost on construction, destruction and assignment. But registered pointers are really mostly intended to ease the task of initially porting legacy code. New or updated code would instead use either "scope" pointers, which point to objects that have (execution) scope lifetime, or "refcounting" pointers. Scope pointers have zero extra runtime overhead, but are (at the moment) lacking the needed "static enforcer" to ensure that scope objects are indeed allocated on the stack. (Their type definition does prevent a lot of potential inadvertent misuse, but not all. And Ironclad C++ does have such a static enforcer.)

> I don't think you understood me. I mean the this pointer. "this" is hardwired into C++ to be an unsafe pointer.

You're right, that's a good point. But really it's a practical issue rather than a technical one. I mean technically, use of the "this" pointer should be replaced with a safer pointer, just like any other native pointer.

For example this is technically one of the safe ways to implement it in SaferCPlusPlus:

    class CA { public:
        template<class safe_this_pointer_type, class safe_vector_pointer_type>
        void foo1(safe_this_pointer_type safe_this, safe_vector_pointer_type vec_ptr) {
            vec_ptr->clear();
            
            /* The next line will throw an exception (or whatever user specified behavior). */
            safe_this->m_i += 1;
        }

        int m_i = 0;
    };

    int main() {
        mse::TXScopeObj<mse::mstd::vector<CA>> vec1;
        vec1.resize(1);
        auto iter = vec1.begin();
        iter->foo1(iter, &vec1);
    }
That is, technically, if you're going to use the "this" pointer, explicitly or implicitly, you should pass a safe version of it (in this case "iter"). But yeah, in practice I don't expect people to be so diligent. I wonder how often this type of scenario arises in practice?

So do I understand correctly that the Rust language allows for the same type of code, but the compiler won't build it unless it can statically deduce that it is safe?

> Not possible. It's totally incompatible with existing C++ designs.

Even if you prohibit the unsafe elements? Including (implicit and explicit) "this" pointers?

kimundi
Rust's references behave like plain raw C/C++ pointers at runtime, without any bookkeeping code running at all.

The magic all lies in the compiletime borrow checker, which roughly works like this:

    - All data is accessed either through something on the stack or in static memory.
    - Accessing data, say by creating a reference to it, 
      causes the compiler to "borrow" the value for the scope in which the reference
      is alive.
    - References can be alive for any scope equal to or smaller than the one for
      which access to the data itself is valid.
    - References carry the original scope for which they are alive around as a 
      template-parameter-like thing called a "lifetime parameter".
      Note that Rust's use of the word "lifetime" is thus a bit narrower than the
      one used in C++, since it just talks about stack scopes, and not the lifetime 
      of the actual value as would be tracked by a GC or ref counting.
      Example:

      let x = true;
      let r = &x;

      Here, r would infer to a type like `Reference<ScopeOfXVariable, bool>`.
      (The actual type in rust would be a `&'a T` with 
      'a = scope of x, and T = bool).
    - Because the scope is tracked as part of the reference type,
      it is possible to copy/move/transform/wrap references safely, since
      the compiler will always "know" about the original scope and thus can
      check that you never end up in a situation where you accidentally outlive the 
      thing you borrowed, say if you try to return a type that contains a reference 
      somewhere deep down.
    - The borrow itself acts as a compiletime read/write lock on the thing you referenced,
      so for the scope that the reference is alive for the compiler prevents
      you from changing or destroying the referenced thing. Example:

      // This errors:
      let mut a = 5;
      let b = &a;
      a = 10; // ERROR: a is borrowed
      println!("{}", *b);

      // This is fine:
      let mut c = 100;
      { 
          let d = &c;
          println!("{}", *d);
      }
      c = 50;

    - The above examples just use `&` for references, but Rust has two references types:
      - &'a T, called "shared reference", which cause "shared borrows".
      - &'a mut T, called "mutable references", which cause "mutable borrows".
    - Both behave the same in principle, but have different restrictions and guarantees:
      - A mutable borrow is exclusive, meaning no other borrow of the same data 
        is allowed while the &mut T is alive, but allows you to freely change the T through 
        the reference.
      - A shared borrow may alias, so you can have multiple &T pointing
        to the same data at the same time, but you are not allowed to freely change T through 
        the reference.
      - (If those two cases are too rigid, there is also an escape hatch that
        a specific type may opt into, allowing mutation of itself through a shared 
        reference, with exclusivity checked through some other mechanism like 
        runtime borrow counting.)
    - Through these two reference types, Rust libraries can expose arbitrary APIs
      without losing the borrow checker's guarantees. E.g., the "reference to
      vector element" example boils down to this:

      let mut v = Vec::new();
      v.push(1);
      let r = &v[0]; // the reference in r now has a shared borrow on v.
      v.push(2);     // push tries to create a mutable borrow of v, which conflicts
                     // with the borrow kept alive by r, so you get a borrow error
                     // at compiletime.
      println!("{}", *r);
The important part is that all this is there, by default, for all Rust code in existence, so you cannot accidentally ignore it the way you might a library solution you don't know about, or language features that don't know about the library solutions.
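The runtime-checked escape hatch mentioned in the last bullet is, in today's Rust, `RefCell<T>`; a minimal sketch (my example, not kimundi's):

```rust
use std::cell::RefCell;

// RefCell<T> moves the exclusive-XOR-shared check from compile time
// to runtime: overlapping borrows fail (or panic) instead of failing
// to compile.
fn main() {
    let cell = RefCell::new(5);

    *cell.borrow_mut() += 1;      // exclusive borrow, released immediately
    assert_eq!(*cell.borrow(), 6);

    // Overlapping borrows are caught at runtime rather than compile time:
    let shared = cell.borrow();
    assert!(cell.try_borrow_mut().is_err()); // borrow_mut() here would panic
    drop(shared);
    println!("ok");
}
```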
duneroadrunner
Great explanation. Thanks.
duneroadrunner
Hmm, a more practical approach might be to mirror the GC languages and only permit (non-null) refcounting pointers as elements of dynamic containers such as vectors, ensuring that references don't outlive their targets and thereby eliminating the implicit "this" pointer issue. I think. Is that how Rust does it?
whyever
> Is that how Rust does it?

No, safe Rust only has safe references, and that includes "this" ("self" in Rust). Because the lifetimes are part of the type, it does not require the runtime overhead of reference counting.
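A small sketch of what "lifetimes are part of the type" means (my example, not whyever's): a struct that borrows from a `Vec` carries the borrow's scope in its type, so nothing is counted at runtime.

```rust
// The lifetime 'a is part of First's type: the compiler statically
// guarantees the wrapped reference never outlives the Vec it points
// into -- no reference counting happens at runtime.
struct First<'a> {
    item: &'a i32,
}

impl<'a> First<'a> {
    // `&self` here is itself just a borrow, checked the same way.
    fn get(&self) -> i32 {
        *self.item
    }
}

fn main() {
    let v = vec![10, 20, 30];
    let f = First { item: &v[0] }; // f's type records that it borrows from v
    assert_eq!(f.get(), 10);
    // v.push(40); // ERROR: cannot mutate v while f still borrows from it
    println!("ok");
}
```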

erikpukinskis
> because everyone already has their 20 year old [C] codebases

It's comments like these that remind me how exclusionary the software world is. Your definition of "everybody" is such a tiny number of people. But that's who you have in mind when you are constructing the world around you each day.

shandor
Well, one might notice that "everybody" has to be a rhetorical exaggeration, since there's absolutely no way even remotely close to "everybody" would have any codebase at all whatsoever, not to mention 20-year-old legacy. Right?

Second, I was responding to a comment talking about "existing, critical, real time applications". Of which a huge number of cases do have existing, very old legacy codebases.

Third, I fail to see what you tried to bring to the conversation. If your only problem was with my rhetoric, see above.

qwertyuiop924
I wouldn't hold your breath for C to die though. C sucks at many things, but it's pretty good in embedded, if you're not writing ASM.
gcp
Actually, the only real advantage that C has over Rust there is the availability of trimmed libcs.
eggy
C seems to be going through yet another renaissance.

C is a smaller language to learn.

There is tons of legacy code even for embedded systems.

gcp
> C is a smaller language to learn.

There are engineers with 20 years of C programming experience that will still make security errors while handling basic strings. "Small" does not mean "good" and "learning" a language doesn't mean you'll write good code with it.

qwertyuiop924
No, small is good. But C isn't small. It's actually massive, and not terribly orthogonal. It's peppered with special cases, and things people think but aren't actually true (how would you check for an integer overflow in C?).

It's like comparing x86 to, say, m68k (or most things, really). One was designed. The other is an ungainly mess of hacks on top of hacks, which has a good, elegant design in there somewhere, desperately trying to get out. Guess which one is x86.

Now guess which one is C.

Worse really is better. Or at least, good enough.

C isn't a complete mess, and you can write good code in it if you're very careful, but it's not great.
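As an aside on the integer-overflow question above: Rust, for contrast, puts checked arithmetic directly on its integer types (my sketch, not from the thread):

```rust
// In C, detecting signed overflow portably requires pre-checks against
// INT_MAX (overflow itself is undefined behavior); Rust builds the
// check into the integer types themselves.
fn main() {
    // checked_add returns None on overflow instead of invoking UB:
    assert_eq!(i32::MAX.checked_add(1), None);
    assert_eq!(100i32.checked_add(1), Some(101));

    // or get the wrapped value plus an overflow flag:
    assert_eq!(i32::MAX.overflowing_add(1), (i32::MIN, true));
    println!("ok");
}
```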

qwertyuiop924
Why would you use libc embedded? Most of libc kind of expects an OS to be there.
gcp
Uh? There are definitely trimmed libc's (not GNU libc!) that run on bare metal. Such as the one used by avr-gcc for example...
qwertyuiop924
Yeah, but why? I've never used a libc on embedded, and I'm kind of confused as to why you'd want to. Am I just Doing It Wrong™?
saaadhu
At least for avr-gcc, (part of) the startup code and vector table layout come from avr-libc.

I'm kinda surprised you got by without libc - no strxxx, memxxx, xxxprintf, or math.h functions ever?

qwertyuiop924
No. But my embedded projects weren't particularly impressive. I can at least see the need now.
collyw
A load more developers as well?
dbaupp
Rust has libcore, which is, in many ways, more featureful than libc despite also having zero dependencies. From my perspective, the main advantage of C is the way in which chip manufacturers only provide (poorly supported/bug-ridden) C compilers, but this is likely to become less important as ARM takes over more and more of the world: it is only getting easier and cheaper to throw a full ARM chip into a device, due to economies of scale.
kibwen
Rather than a ground-up rewrite, I expect people will begin using Rust in the same way that Firefox has: identify individual components that would most benefit from Rust, segment those components off behind well-defined C interfaces, then write a compatible Rust lib using Rust's ability to expose C interfaces.
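A minimal sketch of that pattern (my example; the function name is made up): a Rust function exposed behind a C-compatible interface.

```rust
// A Rust function exposed behind a C-compatible interface: `extern "C"`
// fixes the calling convention (in a real build you'd also add
// `#[no_mangle]` so the C side can link against a predictable symbol).
// `component_checksum` is a made-up name for illustration.
pub extern "C" fn component_checksum(data: *const u8, len: usize) -> u32 {
    // SAFETY: the C caller promises `data` points to `len` valid bytes.
    let bytes = unsafe { std::slice::from_raw_parts(data, len) };
    bytes.iter().map(|&b| b as u32).sum()
}

fn main() {
    // Callable from Rust as well, which makes it easy to test:
    let buf = [1u8, 2, 3];
    assert_eq!(component_checksum(buf.as_ptr(), buf.len()), 6);
    println!("ok");
}
```

On the C side, the same function would just be declared as `uint32_t component_checksum(const uint8_t *data, size_t len);` and linked against the Rust library.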
djhworld
I keep trying to learn rust but fail miserably.

They do say on their website that there's a hump that you have to climb over before everything fits into place, which is probably applicable to everything you'll learn, but sometimes I think that hump is too much of a hurdle

swuecho
Same feeling here. There are docs here and there, but they're not good.

I am waiting for a good book on Rust. Similar to Haskell: I really did not get much until the book http://learnyouahaskell.com/

currently, "The book" is too dry.

eggy
I am having the same inertia.

I have resorted to using Nim for now, and it is going real well. I would like to rewrite the Nim stuff in Rust to compare for my own sake when I have done something substantial in it.

dj-wonk
Programming Rust by O'Reilly is in early release and I recommend it.
dagmx
Have you tried: http://rustbyexample.com/
staticassertion
Feel free to stop by the IRC and ask questions/ seek help - lots of people there who are willing to answer your questions.

Or the rust forums: users.rust-lang.org

Or the rust subreddit: reddit.com/r/rust

Skalman
For me the hump was from the get-go.

Following the book, I installed Rust directly, but then I realized that I should've installed it via Rustup. Next, I wanted a good editing environment, so I installed VS Code and Racer, but then I found out that I can't use Clippy unless I use Nightly... and I'm not interested in using Nightly, so I'll wait.

sidlls
I just installed the nightly tarball and used my usual editor (emacs). I'm not sure what the editing environment has to do with a learning hump.

Now, coming from loose dynamic typed languages or really terrible C or C++ code bases is another matter.

steveklabnik
Rustup has been in beta for a while, which is why we don't recommend it in the book. Soon though!
jokoon
A good language should be like a good game: easy to learn, hard to master.

We don't care about expert cases, we only care about getting productive ASAP, which means having students hopping into a language and learning it quickly. Solving the details is easy: just teach coding discipline, enforce good practice, do code reviews, and discourage "throwaway" code. Security is not only the job of a programmer; it has many, many facets.

In short: English is better than Latin or Esperanto. The more time and space you need to describe or parse your language, the more characters or syntax it requires, and the longer it takes an individual to read a program and guess what it does, the worse it is.

I'm getting a little tired of the "novelty" languages these last few years. Maybe I'm more conservative and don't like the hype. To me only D is relevant and it has been there for a long time now.

rweichler
LuaJIT + C have suited me just fine for systems programming. I don't care about security, so I find it hard to care about Rust. All I care about is lessening the burden on me as a programmer.

I think Rust may be useful in cases like ripgrep, where you basically rewrite an existing, established tool or service used by many to be as performant and secure as possible. But other than that niche use-case, I don't think Rust will catch on in the long-term.

Retra
I learned Rust in a couple of weeks, and I'm writing a game in it right now. It's a pleasurable experience, and far outside that niche. "I can't learn it so nobody will use it" is such a silly thing to think.
burntsushi
As the author of ripgrep, I can assure you, the burden on me as the programmer was lifted quite a bit! I probably wouldn't have been able to build it otherwise. (Not because it's physically impossible, but because it would have taken too much time.)

It's not like I just rewrote grep. ripgrep is built on a large number of libraries that are reusable in other applications. You can see my progress on that goal here: https://github.com/BurntSushi/ripgrep/issues/162 (And those are only the ones I wrote, nevermind all of the crates I use that have been written by others!)

rweichler
Oh wow, The creator in the "flesh"! Thanks for replying to me.

Hypothetical scenario: Let's say you're writing an experimental tool that doesn't exist anywhere else. You don't care about security, you don't care about speed. You just want it to exist so you can see what it does and possibly iterate on the idea if it ends up working out. Would Rust still be feasible?

From my impression of it, you would need to take care of a lot of corner cases and such (which don't exist in other languages) that may slow you down in the short run. I'd imagine those corner cases would be extremely helpful in the long run if you want to squeeze some extra performance out of it (or avoid technical debt). But from the perspective of "figuring out what's possible" I feel like Rust would get in the way a lot.

burntsushi
Good question! The inherent problem with asking me that is that I've been writing Rust continuously for over 2.5 years by now. I live and breathe it. It comes as naturally to me as Go or Python does at this point (which I've been writing continuously for even longer).

I will say that a comparison between Rust and C is much easier, because in the past, I've spent so much time debugging runtime errors in C with valgrind. Rust is an easy win there for me personally. I've never done much C++ so I can't provide a comparison point there.

After a bit of Rust coding you get quite familiar with the workings of the borrow checker, and it becomes pretty natural to work with it. There are plenty of things you can do in C that Rust's borrow checker will forbid because it isn't smart enough to prove them safe, but there are usually straightforward workarounds. Sometimes the borrow checker might even help make the code a bit clearer. :-) Some of this is institutional knowledge though, so there's still a lot more documentation work left to be done!

To bring this back to earth: Rust won't replace those ~100 line Python scripts that I sometimes write for quick data munging.

The other important bit of context is that before I started with Rust, I had already had quite a bit of experience with C, Standard ML and Haskell. This meant that the only truly new thing I had to cope with in Rust was the borrow checker, so it might have been easier for me to digest everything than it might have been for most.

rweichler
Thanks for the insight! I may give Rust another shot at some point. Interoperating it with Lua may be fun.
pcwalton
Do you have a proposal to achieve Rust's goals in a way that's easier to learn?
jokoon
syntax
pcwalton
Be specific.
fjrieiekd
As someone getting into and loving Rust, I have a few:

1) Drop the semicolons and implicit returns in multiline functions (I.e. like Swift). Eliminates hard to understand errors around missing or present semicolons.

2) Allow silent lossless integer upcasts. Sprinkling as usize and friends everywhere is unergonomic.

3) ? For return is fine but the line should still have one "try" at the beginning for legibility.

4) Allow full inference of generic type parameters. Would make it much easier to split code into helper functions.

5) Macros are nicer than in C but still hostile to comprehension.

There are others, but these are ones I've personally run into. Love the language and especially love Cargo but it has some newbie-hostile rough edges like this and others.
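
For concreteness, a small sketch of what points 2 and 3 look like today (the helper names here are made up for illustration):

```rust
use std::num::ParseIntError;

// Point 2: today a lossless integer upcast must be written out,
// with `as` or the infallible `From` conversions.
fn widen(x: u8) -> u32 {
    u32::from(x) // equivalent to `x as u32`, but only compiles if lossless
}

// Point 3: `?` propagates errors with no `try` keyword on the line.
fn parse_sum(a: &str, b: &str) -> Result<i64, ParseIntError> {
    let x: i64 = a.parse()?;
    let y: i64 = b.parse()?;
    Ok(x + y)
}

fn main() {
    assert_eq!(widen(255), 255u32);
    assert_eq!(parse_sum("2", "40"), Ok(42));
}
```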

pcwalton
Many of these are not possible due to backwards compatibility. Even if we could:

1. Most Rust users, including me, like the implicit returns. We would face a ton of pushback if we tried to drop them. Rust doesn't actually need that many semicolons: you can frequently leave them off.

2. This is tricky, because it can have surprising semantics if not done right (e.g. right shifts). There are proposals to do something like this, though.

3. I disagree. That would eliminate much of the benefit of ? to begin with.

4. Not possible. This would complicate the already very complex typechecker too much.

5. Macros 2.0 is coming. We can't just remove macros: they're fundamental to basic things like #[derive] and string formatting.
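
To illustrate point 1: with implicit returns, many semicolons can simply be left off (a small sketch):

```rust
// The last expression of a block is its value: no `return`, no semicolon.
fn square(x: i32) -> i32 {
    x * x
}

// `if` is an expression too, so no semicolons are needed inside it.
fn parity(n: i32) -> &'static str {
    if n % 2 == 0 { "even" } else { "odd" }
}

fn main() {
    assert_eq!(square(4), 16);
    assert_eq!(parity(3), "odd");
}
```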

fjrieiekd
Hi pcwalton,

1) If you dropped semicolons like Swift, you could keep implicit returns.

I just don't see the advantage of having them in multiline functions at the cost of having to retain semicolons. IMO, a trailing value after a multiline function also just looks plain bizarre for newbies.

I agree it's probably too late for this, but you asked how Rust could be easier to learn and this is one of the ways.

2) It would certainly be much nicer to have this.

3) See Swift for an example of where this was done more nicely IMO than in Rust.

4) Unfortunate. Splitting out a helper function means we lose Rust's inference, which already exists when used in a let statement, and it requires adding dependent crates from libraries to the binary's Cargo.toml, importing them into the module, and hard-coding the types. The client shouldn't have to care about this.

5) Great!

I still enjoy the language just the same, but you asked how it can be easier to learn and this is how I see it.
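
A sketch of the point-4 friction (the names here are hypothetical): inside a function, `let` infers everything, but moving the same code into a helper forces the types into its signature:

```rust
// The factored-out helper must spell its generic types and lifetimes out.
fn collect_names<'a>(pairs: &[(i32, &'a str)]) -> Vec<&'a str> {
    pairs.iter().map(|&(_, name)| name).collect()
}

fn main() {
    let pairs = vec![(1, "one"), (2, "two")];

    // Fully inferred when written inline with `let`:
    let names: Vec<_> = pairs.iter().map(|&(_, name)| name).collect();
    assert_eq!(names, vec!["one", "two"]);

    // The same logic as a helper needs the explicit signature above.
    assert_eq!(collect_names(&pairs), vec!["one", "two"]);
}
```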

fjrieiekd
Hi pcwalton, I've seen a couple of your videos and want to know if you've posted any tutorials on integrating Rust with Xcode and using the native tools, or if others have? Some of us are coming from IDE Javaland, and while Rust is quite usable from a text editor, print debugging only goes so far, and gdb is incomprehensible gobbledygook for us.
fizzbatter
It's worth it, imo. I was in your boat a few months back and hated every minute of it. With that said, I'm still not 100%. I semi-regularly see syntax that makes me go "Wat is what!?", but then I sit for a moment and understand it. Rust introduces a lot of visual baggage and that seems to cause me syntax blindness... not enjoyable.

Unfortunately though, I'm back on Go. I want to be on Rust, but I had to pick a language for work and I can't ask my team to go through what I did. Rust, despite the safety, is too unnatural for our larger codebase.

Luckily I think Rust has seated itself as the language we will use if the need is truly there. Unfortunately though, not for everything, just for the specific things that need it.

steveklabnik
How are you trying? What are you getting stuck on? I'd love to improve things.
cbHXBY1D
Not specifically about your book, but I would love if there was a quicker way to find methods in the docs.

Right now, if I want to find the methods used by BTreeMap you have to wade through a good amount of information until you can find how to just get the keys. I'm currently on mobile where the issue is more prominent.

nnethercote
If you click the "[-]" symbol at the top right of a docs page it collapses all the text and just shows the method signatures. Then you can click the "[+]" symbol next to any single method to get full details.
wyldfire
If I don't find what I'm looking for from the docs, I often use ripgrep on a copy of the rust repo locally to find answers.
travv0
> I often use ripgrep on a copy of the rust repo locally to find answers

I mean, that works, but it's a workaround and not a solution.

steveklabnik
Are you missing the search bar at the top? Typing "keys" in shows BTreeMap's keys right away, even https://doc.rust-lang.org/stable/std/?search=keys

That said, yeah, I hear you. It can still be tough sometimes. At some point, I'd love to work with some sort of information design / UX person to totally re-do rustdoc's output. There's a surprising number of thorny problems there. But there's always so much to do...

papaf
What are you getting stuck on? I'd love to improve things.

I feel I am getting over the hump of learning rust now and coding in rust is becoming less frustrating for me.

However, one thing that slows me down is the lack of indices in the documentation. For instance, if I want to know the return type of a vector len() I go here:

https://doc.rust-lang.org/std/vec/struct.Vec.html

.. and then I have to search the web page for all instances of "len". It would be good if there was an index similar to Javadoc, Godoc or Doxygen.

There might be a good reason for not having the index, but as a beginner it is lost on me.
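
For the specific example asked about, `len` (which Vec gets from slices) returns `usize`:

```rust
fn main() {
    let v = vec![10, 20, 30];
    // Vec::len comes via slices (slice::len), which is also why it can be
    // tricky to find on the Vec docs page. It returns usize.
    let n: usize = v.len();
    assert_eq!(n, 3);
}
```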

steveklabnik
Two things:

If you click the little [-] button, you'll get an index for that page.

If you use the search bar at the top, https://doc.rust-lang.org/std/vec/struct.Vec.html?search=vec... will let you go right to the method. (In this case, you have to know that it's slice::len though)

Does that help?

EDIT: UX is hard! Glad people are discovering this. It's the same symbol HN uses, incidentally...

staticassertion
Been using rust for months now and I never knew the [-] did that. I never even thought to click it.

Seems like there's some UX improvements that could happen there.

papaf
Thanks, that's very helpful. I did not know about the minus sign.

A polite suggestion - maybe the link marked [-], that takes you to the index, could be labelled "index".

steveklabnik
Well, it's not so much that it's an index, it's that when you have a page with only signatures, it feels like an index. There's no redirect, just some JavaScript :)
phaylon
Maybe 'collapse' and 'expand' would be good substitutes for the '-' and '+' at the top?
jean-
Whoa, I have been working with Rust for a little over a year, and had no idea about the [-] button.

I would echo the suggestions to make that button much more visible. Or perhaps even have the top-level description expanded by default but method/trait descriptions hidden. I can't think of a case where you'd simultaneously want to see every method description.

lacampbell
Not OP, but as someone who theoretically would like Rust - I'll bite. Maybe my usecase is a common one.

I understand memory management in C. I understand it in modern C++ (destruction when going out of scope, smart pointers, etc.).

Basically, a description and discussion of borrow-checking for people who have already used system programming languages would be really helpful. I feel like the book is targeting people who have only used garbage collected languages.

Or is the memory management of rust so novel it can't be described in those terms? I find the concepts aren't very concrete to me.

fuine
Have you tried looking at https://github.com/nrc/r4cppp ?
lacampbell
That looks like exactly what I'm after actually. I'll read this next time I try rust.
steveklabnik
It's trying to be accessible to those people, but not strictly for them. I don't think the issue here is that it's for GC'd users, but that it's trying to explain things from whole-cloth, where you're looking for a direct comparison, "c does this, rust does that."

I try generally to keep other languages out of Rust's docs for various reasons, but agree that these kinds of resources are useful; I wrote "Rust for Rubyists" after all!

I'm hoping that others will step in and fill this gap; the repo in my sibling is a great start.

shriek
I'm also in a similar boat, but my primary thing is that I learn by doing, and since it's branded as "system programming" I immediately think of big projects like kernels and drivers. I wish there were some small projects that I could do, apart from just doing "project euler", that would be helpful to me. I even bought a Raspberry Pi to learn Rust but don't quite know what to do with it and Rust.
bquinlan
This might be too much like Project Euler but I started by solving some common interview problems using Rust.

Try solving the problems without looking at the solutions first: https://github.com/brianquinlan/learn-rust

Here is a rough rank of difficultly: https://github.com/brianquinlan/learn-rust#understandability

steveklabnik
Makes sense! Have you seen http://www.chriskrycho.com/2016/using-rust-for-scripting.htm... ? Maybe something like that can be of inspiration.
nostrademons
There are a large number of projects on Cargo that maybe-perhaps do something useful, but don't get much love in the testing/documentation/polish department since the authors tend to move on to other projects. My personal wish list:

Varints: https://crates.io/crates/varint

Bloom filters: https://crates.io/crates/bloom

Iron: https://github.com/iron/iron

All the accessories for Iron: authentication middleware, integration with OAuth2, cookie-signing, integration with templating systems, etc. Also, an omakase framework (like Django or Rails) that pulls together a bunch of useful libraries into one crate that you can just use (with good docs) and not have to wire everything up.

Websockets: https://github.com/cyderize/rust-websocket or https://github.com/housleyjk/ws-rs

Server-sent events. I don't see a good alternative for these yet.

ElasticSearch: https://github.com/benashford/rs-es or https://github.com/erickt/rust-elasticsearch or https://github.com/KodrAus/elasticsearch-rs

An easy documentation/website generator for small libraries, that pulls examples, tests, and README files out of GitHub, runs RustDoc, and generates a professional-looking website that provides all the info that you need to get started with a library, with a minimum of extra effort for the library author. Basically, automate the job of going through libraries built to scratch someone's personal itch and "productizing" them.

Any of these could be a good project for a beginner, since there's already a lot of existing code to learn from, a small well-defined task, and an existing maintainer who has an incentive to help. Basically, just take a library, try to use it in a small test program (Linus's Law: "Never try to make a big important program. Always start with a small trivial program, and work on improving it"), and if anything is difficult or doesn't work right, figure out how to make it less difficult for the next person who runs across it and submit a pull request with that. As an added bonus, you can learn a lot of domain knowledge or basic CS data structures through digging through this, and that transfers to programming outside of Rust.

q3r3qr3q
I'd get excited and go through some tutorial on their website, probably the main(?) tutorial, but then when I got to the lifetimes section it seemed to get really complicated instantly. I tried twice and think I hit the same problem. Maybe some more good examples would help?
rjammala
Did you go through these screencasts?

http://intorust.com

steveklabnik
Makes sense! I'm working on a second draft of the book right now, and it's making that stuff more clear. http://rust-lang.github.io/book/ is the draft; the lifetimes bit hasn't landed yet though.
harveywi
The state of Rust editors continues to evolve [1], but I would be curious to learn more about the editors/IDEs that people are using for Rust development. Any stories or thoughts?

[1] https://areweideyet.com/

hguant
Literally just started looking into Rust for use in production yesterday. I've been playing with it in two different editors.

SublimeText + Rust Enhanced has been really, really awesome. In-line error messages, fairly decent autocompletion (from what I've seen so far). I don't really use full IDEs that often (vim is my go-to), so this is what I imagine that feels like.

vim + vim-racer has been good. I use the `gd` command (go to definition) every now and then. I'm probably not getting the full use out of it.

I think my problem is that at the end of the day, it's a text editor and I don't have very high expectations. I want some syntax highlighting, I want curly brace and parenthesis matching, and I'd like to be able to change the font. Everything on top of that is gravy.

qwertyuiop924
Well, I'm an Emacs user. I don't necessarily care about the tools that more IDE focused people might (OTOH, RMS should loosen up on the AST: you can't afford to lose much more than you already have), but rust-mode works fine for me.
brandur
I've had good luck so far with Vim + You Complete Me [1] (which uses a Racer background).

Check the GIF in its repo, but it gets you code completion boxes in Vim that pop up automatically in which you can tab through for the next results. It's very ergonomic compared to Vim's awkward built-in completion shortcuts, and gives you completion intelligence that's trending towards something like Visual Studio + C#.

[1] https://github.com/Valloric/YouCompleteMe

hawkice
I use Emacs, mostly because my workspace has gotten too big for Atom to be consistently responsive (I've got dozens of projects with tens of thousands of lines of code and who knows how many files deep in the dependency managers' internals).
vvanders
VSCode + Racer have been working really well for me. Pretty decent auto-complete and integration.
baq
vim + racer do the job, but you have to annotate types quite a bit for autocomplete to work. I'd love it if there was a way to auto-annotate variables like the haskell plugin does it.
fabianhjr
Spacemacs' Rust Layer (Emacs) https://github.com/syl20bnr/spacemacs/tree/develop/layers/+l...
dj-wonk
I use Atom with the language-rust package. I'm very pleased. Before Atom I was using Emacs. I don't expect to go back to Emacs unless I need terminal-based editing.
jdub
For the last few weeks, I've been doing all my Rust hacking in Visual Studio Code + the RustyCode extension on both Windows and Mac. It's a good experience, but the (currently early in development) Rust Language Server will make it superb.
zanny
KDE recently broke off the syntax highlighting from Kate, so I'm super hopeful for eventual Rust support using racer in KDevelop soon, since its semantic highlighting is second to none.
endisukaj
rust-mode[0] has worked well for me so far. But I haven't written anything major with rust yet.

[0][https://github.com/rust-lang/rust-mode]

allengeorge
I'm using IntelliJ-Rust [1]. Doesn't go to the definition of macros, and sometimes auto-complete doesn't work, but - it's comfortable enough to use.

[1] https://intellij-rust.github.io/

leshow
Rust is great. I've been following it since pre-1.0 and writing code with it for about as long also. It's really come a long way, my favorite language for pretty much anything except web development.
Narann
Wow! You've got my attention!

I mean, I feel much Rust stuff (crates/dev/tools) seems to focus on high-performance web, so as a low-level guy enjoying bitwise ops and dynarec stuff I was wondering if Rust was a good thing for me or if it would be better to stick with C. Can you tell me which kind of projects you do in Rust, what your original language was, and why Rust shines compared to it?

dj-wonk
(different person here, but I wanted to reply) I'm not sure that I agree as strongly as the first post on this branch, but I really do enjoy Rust. I'm using it for a particular tree-search with pruning computation that caches to RAM and saves to disk. The speed, memory compactness, and correctness are very beneficial. I will admit that getting comfortable with the borrow checker has taken some time, and I still have more to learn!
leshow
I started on Java in school for software eng. After that I did a lot of front and backend development in dynamic languages and forgot all about types.

Until I discovered Haskell and saw what a really powerful type system can do. Rust's type system is very, very similar to Haskell's if you substitute the word 'trait' for 'typeclass'. However, the code is imperative, which is nice for some applications, and it's faster/lower level.
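
A tiny sketch of the trait/typeclass correspondence (the trait here is made up for illustration):

```rust
// A Rust trait plays the role of a Haskell typeclass: shared behavior
// that many types implement, usable as a bound on generic functions.
trait Describe {
    fn describe(&self) -> String;
}

impl Describe for i32 {
    fn describe(&self) -> String {
        format!("the integer {}", self)
    }
}

impl Describe for bool {
    fn describe(&self) -> String {
        format!("the boolean {}", self)
    }
}

// Roughly `announce :: Describe a => a -> String` in Haskell.
fn announce<T: Describe>(x: &T) -> String {
    x.describe()
}

fn main() {
    assert_eq!(announce(&7), "the integer 7");
    assert_eq!(announce(&true), "the boolean true");
}
```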

I've used the crate nom (https://github.com/Geal/nom) in the past, it's a parser-combinator library for consuming bits/bytes. I used it to write a function that consumes a vector of unsigned bytes, 11 bits at a time. If you're interested in that sort of stuff, take a look for yourself and see if you like it.

As with most things in Rust, nom generates parsers with little to no runtime overhead, so you pay nothing for the increased ergonomics.
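
The 11-bits-at-a-time idea can be sketched in plain Rust without nom (an illustrative sketch, not leshow's actual code):

```rust
// Consume a byte slice 11 bits at a time, most-significant bits first.
fn chunks_of_11(bytes: &[u8]) -> Vec<u16> {
    let mut out = Vec::new();
    let mut acc: u32 = 0; // bit accumulator
    let mut nbits = 0;    // number of valid bits in `acc`
    for &b in bytes {
        acc = (acc << 8) | u32::from(b);
        nbits += 8;
        while nbits >= 11 {
            nbits -= 11;
            out.push(((acc >> nbits) & 0x7FF) as u16);
            acc &= (1 << nbits) - 1; // drop the bits we just consumed
        }
    }
    out // leftover bits (fewer than 11) are discarded
}

fn main() {
    // 24 input bits of all ones -> two full 11-bit chunks, 2 bits left over.
    assert_eq!(chunks_of_11(&[0xFF, 0xFF, 0xFF]), vec![0x7FF, 0x7FF]);
}
```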

htaunay
Congrats to all the Rust team!

Love the zero-cost abstraction approach, love the good-practice compiler warnings, love the Cargo ecosystem, love the markdown code docs that test the example code.

First time I actually enjoyed reading a programming language's documentation.

CodeMage
I'd love to see someone write a game engine in Rust to compete with the "big boys" like Cry or Unreal. C++ game code can be such a nightmare.
mordocai
Game code in general is usually a nightmare. I do think Rust would be better than C++ though, in a lot of ways.
htaunay
Cry/Unreal? Unlikely. But something along the lines of LOVE (https://love2d.org/) would be awesome!
kazagistar
I don't see why not, other than time and a healthy skepticism of the untested by the AAA game devs who might build one.
jcoffland
It would be in Rust too.
CodeMage
True enough. However, I'm willing to bet that a non-trivial amount of nightmarish code in C++ comes from the language itself. Also, I'm willing to bet that a Rust build would be an improvement over a C++ build. As an example, I'm really sick of header files.
madmax96
Maybe, maybe not. I think that you can write horrible code in any language (albeit horrible Rust is obviously safer than horrible C++). Especially considering the case when you're working on extremely compressed deadlines like many game shops, you're always in for a nightmare.
nercht12
Yes and no. Rust adds its own set of hassles. You think building becomes a synch? With Rust, you're fighting the compiler probably more than with C++. I'm sure some game developers would rather have an occasional crash they can fix down the road after their game is published than be forced to make a perfect system the first time. Remember, with game production, it's about time-to-market, not about perfect code.
solidsnack9000
> I'm sure some game developers would rather have an occasional crash they can fix down the road after their game is published than be forced to make a perfect system the first time.

> Remember, with game production, it's about time-to-market, not about perfect code.

In a way you are over-selling Rust, because it doesn't offer perfect code! I'm not sure why Mozilla would pay to build it if that's what it was about.

What it offers is a lower defect rate, which is something you can definitely leverage to improve productivity. A lower defect rate at any cost is clearly too expensive; but developers seem to have been able to absorb the complexity of C++ alright, and Rust can't be called more complex than C++.

nercht12
To be clear, I didn't say Rust offered perfect code. But supposedly better code comes from Rust, as you're arguing.

While Rust as a language isn't more complex, the paradigms are different. Games often have trees and lists, which are a real pain in Rust. To do things right in Rust requires learning new ways of doing things - not something game devs want to spend time on. They've been working with the same horse for years, so they keep riding it.
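
To make the "trees are a pain" point concrete: a shared, mutable tree in safe Rust typically means `Rc<RefCell<...>>` ceremony instead of plain pointers (a minimal sketch):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Shared mutable nodes need reference counting plus runtime borrow
// checking, where a C++ game engine would use raw or smart pointers.
type Node = Rc<RefCell<TreeNode>>;

struct TreeNode {
    value: i32,
    children: Vec<Node>,
}

fn main() {
    let root = Rc::new(RefCell::new(TreeNode { value: 1, children: vec![] }));
    let child = Rc::new(RefCell::new(TreeNode { value: 2, children: vec![] }));
    root.borrow_mut().children.push(Rc::clone(&child));

    // Mutation through the shared handle is checked at runtime.
    child.borrow_mut().value = 20;
    assert_eq!(root.borrow().children[0].borrow().value, 20);
}
```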

coolsunglasses
Cinch, like cinching up a belt.

Synch is short for synchronization.

nercht12
Oops. Thanks.
niketn
This might be a large investment for the engine itself, but minimal investment for the game. In my experience, the largest, buggiest and most complicated parts of a game are all contained within the engine.
alkonaut
Being able to cut corners with code hygiene cuts both ways for productivity, though: you don't want to realize a week before a deadline that you have a hard-to-find memory leak or crash. A lot of the time it feels like Rust development is slow because you are fighting the compiler. On the other hand, once you run the program you often get that Haskell-y "it worked because it compiled" feeling.

With a normal OO language I often build/run just to spot the next place I have made some bad assumption that the compiler didn't catch.

jstimpfle
I haven't experienced that feeling for anything but toy programs.

But concerning productivity: fighting the compiler sometimes means abandoning perfectly reasonable (and efficient!) designs just because the compiler doesn't like them.

I'm not aware of any type corsets that I think force good designs.

In a really clean design mistakes are not terribly hard to fix, even in a language like C. Granted in C they are in some cases harder to find in the first place, but there might not be any commercial interest in going beyond "it seems to work".

tome
> fighting the compiler sometimes means abandoning perfectly reasonable (and efficient!) designs just because the compiler doesn't like them.

Citation needed!

jstimpfle
No.
oconnor663
One thing that's hard to do in safe code is getting two &mut references out of a HashMap at the same time. (If you know the keys are disjoint.) That might matter to some design somewhere?
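A sketch of that restriction and one safe workaround (illustrative only):

```rust
use std::collections::HashMap;

fn main() {
    let mut map = HashMap::new();
    map.insert("a", 1);
    map.insert("b", 2);

    // Rejected by the borrow checker: two live &mut borrows of `map`,
    // even though the keys are disjoint.
    // let x = map.get_mut("a").unwrap();
    // let y = map.get_mut("b").unwrap();
    // *x += *y;

    // One safe workaround: move one value out, mutate, done.
    let b = map.remove("b").unwrap();
    *map.get_mut("a").unwrap() += b;
    assert_eq!(map["a"], 3);
    assert_eq!(map.get("b"), None);
}
```
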
lmm
The type corsets of ML-family languages have been the best design experience I've ever had. My programming intuition has got a lot better by using them; it's much easier to spot that there's a subtle issue with the design if it shows up as friction in the types. Even in something like Python the lessons apply, though I have to devote a lot more attention to it since I can't rely on the compiler to show me. Just my subjective experience of course.
jstimpfle
Maybe I am not smart enough, but it's never clear to me which things should be encoded in types. You can type one invariant but not the other, and the other way around. The choice often seems arbitrary. But it has a huge influence on the overall design.

Sum types are a major headache. Is there a good rule for when to use a sum type vs distinct types? The expression problem is very practically relevant to me. Also the typical flat vs hierarchical data storage wisdom applies: trees and hierarchies are very much encouraged by HM type systems, but the choice of what gets to be the parent and what the child is arbitrary and often turns out super limiting further down the line. Similarly, there's the choice of what to include in a hierarchy and what to put in a separate one.

Tables on the other hand, supported by light usage of manually coded lookup tables, have been the real game changer for me. When I'm back in a normal imperative language I can be so naturally productive and write efficient programs without relying on black compiler magic. I don't see how most of the invariants in my programs could ever be codified in a practical type system. They are so relational - they involve variables with very diverse lifetimes and expressions depending on dynamic values.

In the end, I feel writing assertions is just much better for me, because I can be somewhat sure in a few tries that my invariants hold, in the same language that I use for coding, and having the same values available. Meanwhile I would waste hours trying to codify a small fraction of them in an HM type system.

lmm
> Maybe I am not smart enough, but it's never clear to me what things should be cast in type. Type the one invariant, can't type the other - and the other way around. The choice seems often arbitrary. But it has a huge influence on the overall design.

I find it best to let confidence guide me. If I'm not confident something's right, that's usually a sign I didn't type it enough. If I think I know it already, then it doesn't need more types. It affects the design but it should affect the design, I would say; decisions about which invariants are important are design decisions.

> Is there a good rule for when to use a sum type vs distinct types?

If at some point you have a value that you know (and care) is one particular, well, type, make it a real type. If you only ever have values that could be one or the other and it doesn't matter which they are (or the section where it matters can be reasonably confined to a match block) then a sum type.
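
In Rust terms, that rule points at enums: reach for a sum type when the cases flow through the same code and the case analysis can be confined to a match block (a made-up example):

```rust
// A sum type: one value that is either a Circle or a Rect,
// with the case analysis confined to `match`.
enum Shape {
    Circle { r: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { r } => std::f64::consts::PI * r * r,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    let shapes = vec![Shape::Rect { w: 2.0, h: 3.0 }, Shape::Circle { r: 1.0 }];
    let total: f64 = shapes.iter().map(area).sum();
    assert!((total - (6.0 + std::f64::consts::PI)).abs() < 1e-9);
}
```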

> but the choice of what gets to be parent and what gets the child is arbitrary and often turns out super limiting further down the line.

I find this is much less true in an immutable-by-default language. E.g. in the standard circle/ellipse example it becomes completely obvious which is the parent and which is the child.

> Tables on the other hand, supported by light usage of manually coded lookup tables, have been the real game changer for me. When I'm back in a normal imperative language I can be so naturally productive and write efficient programs without relying on black compiler magic. I don't see how most of the invariants in my programs could ever be codified in a practical type system. They are so relational - they involve variables with very diverse lifetimes and expressions depending on dynamic values.

Heh, this is the opposite of my experience. I find tables always confuse me, and are usually a sign that my model needs to have an intermediate entity - like the experience described in http://wiki.c2.com/?WhatIsAnAdvancer

alkonaut
Having to fight the compiler over reasonable code that the compiler doesn't like will happen with any type system; you don't need one with lifetimes for that. The question is of course where to draw the line (where the drawbacks outweigh the benefits). I really like how you can opt out of the guarantees, e.g. to initialize a doubly linked pointer you might just drop to unsafe, because you can see the memory safety implications of those two lines, but not of the whole program.

How effective (and thus popular) Rust will be for creating large systems on tight deadlines remains to be seen I suppose - if it isn't competitive with C++ in that respect, then I'd consider that a failure. And a surprise.

Thaxll
Not going to happen in the next 10 years, everything is built around C++. And tbh I don't see what games would benefit from using Rust instead of C++.
zanny
Games are large codebases that would benefit insanely from the safety and default sanity Rust gives developers, and they usually aren't replacing the OS's standard calls (which is where replacing C with Rust becomes problematic, as you start breaking the standard library).

Any time you work in a large team in a corporate environment you want a language that lets you stab yourself in the foot as little as possible.

cderwin
I've never worked on anything gaming related, but the bigger cross-platform games certainly have crashes. I can't even tell you how many times Fallout 4 has crashed for me (likely hundreds). I obviously don't know if rust could help, but it's not unreasonable to think it might.
wyldfire
Consider Piston. Not able to compete with Unreal and Cry yet but it's a WIP.

[1] http://www.piston.rs/

[2] https://github.com/PistonDevelopers/piston

Retra
Piston is a mess, documentation wise. It's got like 50 different things called `Texture` and they're spread across 20 modules and every function you'll want to call is hidden under 5 layers of trait indirection.

I'm exaggerating, of course, but my experience trying to use Piston was absolutely miserable. Next to zero documentation, with endless layers of confusing abstraction. It's designed to have swappable back-ends, and that's a big hassle when you don't care about that.

shmerl
Does Emscripten support Rust already?
steveklabnik
https://users.rust-lang.org/t/compiling-to-the-web-with-rust...
shmerl
Thanks, I should experiment with that.
gcp
AFAIK yes but not in a "production ready" way.
None
None
wyldfire
Yes. See [1] for details.

[1] https://users.rust-lang.org/t/compiling-to-the-web-with-rust...

etherealG
Thanks so much, I was digging into this a few weeks ago eagerly and didn't find anything. Glad it's here now!
anfroid555
Anyone know of a tutorial or course for someone only knowing high-level languages? Not C or Ruby; more JavaScript, PHP, Node.js, Python.
oldsj
Rust for rubyists?
steveklabnik
It basically doesn't exist anymore; I stopped maintaining it pre-Rust 1.0.
salicideblock
I did one of the rust-koans out there and it went very well. Can't really remember which one, though...
mavsman
Coincidence that many of the Rust promoters have red hair?
user5994461
I'd like to see an analysis: "The amount of real world jobs available for Rust, in comparison to Ada."

The numbers may be self explanatory.

gcp
What would they explain though?
wyldfire
They would explain Rust's relative youth and corresponding growth opportunity. ;)
user5994461
Ada was designed for system programming and, at a time, was also a young language with a huge opportunity for growth ;)
staticassertion
To compare Ada's history to Rust's is silly. Even if the languages have a common goal of safety, the projects themselves are fundamentally different in many many ways. One of which is the fact that Ada was designed by the DoD for the DoD, and required a verified compiler, etc etc etc.

They're totally different projects, even if the languages have a small number of shared goals.

VoidAgent
IMO C++ is unstoppable now, c++ 11, 14 and 17 additions with GSL and all things in the pipeline ...
qwertyuiop924
I disagree. There are a lot of us who want a higher level systems language, but don't think that C++'s kitchen sink approach is a good design. Those people are flocking to Rust already, and more people are coming because it's interesting or cool.

Rust doesn't have to beat C++. If Rust steals 1/10th of C++'s marketshare, it will have a viable ecosystem.

gcp
A project like Servo means that the proof will be in the pudding.

Gecko and WebKit/Blink are C++ (though stuck at C++11 due to compiler and platform compatibility). So we'll see how they end up comparing.

thewhitetulip
I really wish that they replace Firefox with a browser written using Servo. That'll be fun.
AsyncAwait
They're putting components from Servo inside Firefox. Servo is not yet at the point where it can wholesale replace Gecko, but it's getting there.
thewhitetulip
Yes, I know. I am waiting for the day when Servo reaches this stage. I hear it has better performance than even Chrome's engine.
gcp
https://medium.com/mozilla-tech/a-quantum-leap-for-the-web-a...

The intention is to replace the parts where Servo > Gecko.

colemickens
Except that even if you write perfect code and have a compiler that supports all of the features... your code still won't be as safe as it is if rustc signs off on it.

This conversation happens in literally every single thread about Rust. Every single time someone implies that new features in 14/17 will somehow make Rust irrelevant, but it's simply not true. There are classes of problems which Rust can catch which a C++ compiler cannot. And I don't really care about "if all devs wrote perfect code" because we get a new CVE every 3 weeks from some C/C++ codebase that has brilliant people working on them.

scythe
Rust will have a niche regardless of the momentum of C++ as long as it's easier to learn to use than C++ for someone who's new to solving these kinds of problems. The question is whether that goal is in sight.
wyldfire
I love C++, for decades it was one of the only games in town. But IMO there's a huge pipeline stall in C++17.

Features postponed beyond C++17 because of C's legacy are already in Rust (e.g. modules).

baq
rust does things C++ simply cannot and will never do. that doesn't mean rust is strictly better. actually i believe C++ and rust should be best friends; there's only this small ABI issue - i hear smart people are working on it, though.
kibwen
Rust doesn't need to stop C++ to be successful. The market for programmers is already large enough to successfully support dozens of programming languages, and that market will only continue to grow. PHP didn't "stop" Perl, Python didn't "stop" PHP, Ruby didn't "stop" Python, and yet all of these languages continue to be enormously successful. Why should we expect a monoculture on the systems side?
unethical_ban
Python in my mind has supplanted perl, and ruby seems only to live on for the sake of Rails.
empath75
Chef, too.
None
None
user5994461
What's the place for Rust in the market?

The distributed systems have already moved to Go (when it's not Java/C# :D).

The system programming is done in either C or C++. (Depending on history and availability).

The oldest stuff and/or most constrained is stuck with C. They're struggling to move anything over to C++. Rust is entirely out of the question.

bsder
> What's the place for Rust in the market?

A language that is better than C that isn't C++.

Calling C++ a single language is like calling Chinese a single language. The various dialects share characters and that's about it. :)

C++ is at least 3 fundamentally different languages over the course of 20+ years. The C++ you would write to the current standard looks vastly different from the C++ you wrote even 10 years ago which itself looks different from the C++ of 10 years before that.

The problem is that even if you only write the modern stuff, you must understand the older stuff as your libraries are often written in it.

takeda
Rust overlaps with C/C++ and even with Go. Go was initially trying to be a replacement for C/C++, but hasn't succeeded, because it is too high level. Rust on the other hand looks very promising in that area, which requires low level access but also desires safety.
Matthias247
I wouldn't say Go hasn't succeeded. Maybe not as a general replacement for C/C++. But still lots of new applications that would most likely have been written in C/C++ a few years ago are now popping up in Go (unix daemons, commandline tools, ...).
user5994461
Go has succeeded in its niche (distributed systems). There are real jobs opportunities in Go, in the major tech hubs.

Funny thing is, everything that is done in Go could be done in Java. Go is succeeding because lots of devs despise Java (related to the enterprisey feel and the usual culture of Java companies).

donatj
As well as the fact that Go runs markedly faster and lighter (memory) than Java, and without needing to worry about JVM versions. We've been slowly migrating services and the statically compiled binaries are truly amazing for ensuring it will run. Not once have I encountered an 'it runs on my machine' situation.
baq
and yet the ecosystem grows every day.
None
None
gcp
> The system programming is done in either C or C++

Rust addresses some of the shortcomings of those. This implies there would be a competitive advantage to doing systems programming in Rust, if it succeeds in addressing them.

So far, it does look like it's succeeding there.

qznc
I'd say Python did stop Ruby roughly 2006. ;)

https://www.google.com/trends/explore?date=all&q=python%20pr...

ajmurmann
It's interesting to look at the maps in your link. It seems like Python is popular in more countries, but Ruby is more popular where both are present.
kibwen
I'll be sure to tell that to all of my non-techy friends who are attending boot camps for, and subsequently finding jobs in, Ruby rather than Python. :P And I say this as someone who personally prefers Python!
hubert123
Still compiles too slowly. A language in 2016 just cannot take multiple seconds for me to use it. Add 5 crates and watch your compile/run cycle climb to a cool 10+ second average.
computerphage
Incremental compilation is stabilizing on Nightly. It won't help the initial compile, but waiting even 5-10 minutes when I start working on a new codebase doesn't really matter to me. It's only the incremental speed that I care about.
steveklabnik
We had significant compile-time improvements in the release yesterday, with more to come in the future. Also, you might want to give incremental recompilation a try, it's nightly-only for now, but being actively worked on.

> add 5 crates and watch ur compile/run cycle climb to a cool 10+ second average.

It should not recompile those crates each time, if it does, that's a bug. Please report them!

nnethercote
> It should not recompile those crates each time, if it does, that's a bug. Please report them!

Depends on the inter-crate dependencies, alas. Touching one crate can easily cause multiple other crates to rebuild.

steveklabnik
Only if you need to modify both crates; not if you're just adding them as dependencies, because then you're not modifying them. I was assuming that's what the parent was doing, due to their wording, but that might have been a bad assumption.
coolsunglasses
Just to litmus test this, I rebuilt a small Rust project of mine that depends on 41 crates.

Debug build took 15 seconds, release build took 37 seconds. Rebuilds are sub-second.

This is 0.15 nightly.

TBQH, Rust's release builds are already faster than GHC's -O0 builds for me. I'm still bereft of a REPL (I live in a REPL), but it's not a deal-breaker.

pimeys
Now I have four services running in production, all written in Rust. If it compiles, it usually works. Of course you have these late-night sessions where you write that one unwrap() because, hey, this will never return an error, right? And bam...

I'm seriously waiting for that tokio train to be stable, and for a unified way of writing async services without needing to use some tricks with channels or writing lots of ugly callback code. Also, native TLS support is coming, and the dependency hell with OpenSSL would be gone forever.

If you need an HTTP server/client, I'd wait a moment for Hyper to get their tokio branch stable, and maybe gain support for HTTP/2 by migrating the Solicit library.

deavmi
I agree. We need safer languages. Not that I know much about safety. I still know we need it.
KeshandKooley
Go watch my YouTube video everyone! And remember to Subscribe! https://m.youtube.com/watch?v=QRQPSp-vTd4
gutfuck
I've literally seen you tweet at people to kill themselves and then delete it. Idk about any of this other stuff but you certainly can't claim that your hands are clean in all this.
steveklabnik
I do not tell people to kill themselves, as that's not cool.

That said, I'm not interested in replying to brand new accounts made to troll me (and conveniently without evidence), so I won't be replying further.

gutfuck
I'm sure you won't, don't want somebody to see this and post some screenshots. Wish I had taken some myself!
seeekr
He's obviously a troll or something. Keep up the good work, Steve! Rust's community is one of the most amazing and friendliest I've ever seen anywhere on the internet, and it seems to me that your intent, effort and presence is a significant part of that :) (Not to downplay any other people's contributions!)
None
None
sctb
We've banned this troll account. You can't create new accounts just to violate the guidelines.

We detached this subthread from https://news.ycombinator.com/item?id=12970047 and marked it off-topic.

cryptrash
I'll never use rust for anything important. Too dangerous, unstable, badly organized, toxic development community, the list goes on.
chamakits
A lot of your attacks seemed to be aimed at personal beliefs of some of the rust developers. I follow some on twitter and they may have some views people disagree with, but nothing exceedingly controversial.

Furthermore, all other criticisms you've mentioned specifically towards rust and the community of rust are literal opposites of everything I've experienced.

The community is friendly and helpful. Rust is extremely stable, both as code and as far as the developers making sure not to break backwards compatibility, going as far as testing EVERY PROJECT available on crates.io for regressions. They are also extremely organized, having all discussions in well-thought-out and documented RFCs, and allowing everyone to give input on it all.

I don't know what problems you have with Rust. I'm sure there are valid things to criticize. But everything you mentioned is categorically false. If you disagree, I'd be open to seeing proof of it all.

shmerl
Huh? Did you mix it up with something else? Rust community is one of the best. And what's "dangerous and unstable" there?
cryptrash
Of course I didnt. Klabnik is practically a communist, lots of anti-American sentiments, etc. Pcwalton has been spamming HN for ages about rust. They're all terrible. I've heard rumors of memory vulns in rust as well.
yedpodtrzitko
> I've heard rumors of memory vulns in rust as well.

I guess you did read it on the internet, so it has to be true. Are you referring to that clickbait "Heartbleed in Rust"?

steveklabnik
I make it a strong point to keep my personal politics outside of Rust; there's actually many people who work within Rust and its community that I would fight bitterly with about politics, but we instead focus on the technology and work together in a productive manner.

If you see me acting inappropriately within Rust project spaces, please report me to the moderation team: the core team is subject to moderation just like anyone else, and no core team members are allowed to be part of moderation for exactly that reason.

  > I've heard rumors of memory vulns in rust as well.
Please report these to https://www.rust-lang.org/security.html, as we care deeply about fixing them, if any. Otherwise, that's just FUD.
None
None
sctb
We've asked you specifically not to do this, so we've banned this account.
cpeterso
Does HN no longer hellban accounts?
sctb
We'll quietly ban serial trolls or outright spammers, but otherwise we provide an explanation so that the community can benefit from observing the application of the guidelines. We have other tools for dealing with accounts created as a result of the apparent ban.
noir_lord
Check his comment history and draw your own conclusions ;).
gbersac
First time I saw someone with negative karma.
colemickens
I've had nothing but amazing experiences in the Rust community, and was actually proud/embarrassed the one time that I was chastised for being rude in /r/rust.

The Rust community reminds me of how welcoming the Go community used to be... (I love Go and write it every day, this isn't some sort of commentary about Go).

TazeTSchnitzel
Toxic?
allengeorge
Of all the invectives to level at Rust, "toxic development community" would be the least applicable. Frankly, I've found the Rust community super welcoming and willing to answer questions from a newbie.

As for the rest of your points - I don't think they're applicable as well. I've found Rust to be super solid and the ecosystem a pleasure to work with. Could you give examples to support your claim?

z3t4
When you can run JavaScript on a Pebble watch, it blurs the boundaries... Want to concat a number and a string? JavaScript won't complain.
wyager
> Want to concat a number and a string? JavaScript won't complain.

Is that supposed to be an endorsement?

z3t4

  foo = (string, number) => string + number
Now go write that in your preferred language!
wyager
Why would I want to? Better to explicitly convert the number to a string.
z3t4
This is something you do often in JavaScript, for example:

  "You have " + messages + " new messages"
kaoD
"You have undefined new messages"

Glorious!

z3t4
It's very easy to spot the error here, compared to for example a segfault.
lmm
That's damning with faint praise if I ever heard it. (And it's not even true; undefined tends to propagate further, the segfault usually happens closer to the actual error. Not that I'm defending languages that segfault by any means)
kaoD
Not to mention a SEGFAULT in Rust is a bug in Rust the language/std, not in your program (unless you're writing unsafe code, which you pretty much never are).
nixos
And in Java, and in any language with operator overloading
z3t4
Operator overloading and metaprogramming are very nice, but can add a lot of complexity.
nixos
Yes, but you can have static typing and ease of use
wyager
"You have " ++ show messages ++ " new messages"

This is type safe.

colemickens
So, since we're talking about Rust...

    println!("You have {} new messages", messages);
Was that supposed to make me want to abandon all type safety and embrace a GC'd language that runs in a VM?
z3t4
There are advantages to GC, and even Rust has runtime GC. But where is the memory freed? You don't have to think about that at all in JavaScript. Also, when something goes wrong with types in JavaScript, the worst-case scenario is "2"+2 turning into "22", which is easy to avoid, compared to a silent overwrite/overflow. Even if the types in JavaScript are very loose, they are much safer.
steveklabnik

  > even Rust has runtime GC
Only in an extremely narrow sense; there's no tracing GC, only reference counted types. And they're not used very often.
lmm
> There are advantages to GC, even Rust has runtime GC.

If you want GC there are any number of good languages to pick with it (e.g. OCaml).

> Also when something goes wrong with types in JavaScript the worst-case scenario is "2"+2 turning into "22", which is easy to avoid, compared to a silent overwrite/overflow.

It's memory-safe but it's not safe. What if an error like that happens in your permission-checking code? I agree that silent overwrites and silent overflows should not happen and that languages that have those things are bad, but that doesn't make silent type errors any better.

z3t4
The problem here is that + is used for both addition and concatenation. Most of the time you know what you are doing, though (smile), and string is the default. When I started out with JavaScript I used -- (minus minus) for addition just to be safe. If you were to compare "22" == 22, it would be true, so the type doesn't really matter. If you want to do arithmetic on a string, no problem! Even null behaves like a teddy bear compared to other languages where it can bite you hard. Also note that this child's toy of a language can run on embedded systems, and doing memory-safe IO concurrency is so easy it's hard not to, like oops, I did ten thousand concurrent requests, but they finished in less than a second, because this is 2016, not 1986, and computing performance and memory have grown exponentially since. Don't get me wrong, though: systems languages have their place at the lower level, where the bits, bytes and performance matter. Someone once told me that JavaScript is a lot like assembly because you are so free; there is no one telling you "No, you shouldn't do it that way" like in Rust.
lmm
> Someone once told me that JavaScript is a lot like assembly because you are so free, there is no one telling you "No, you shouldn't do it that way" like in Rust.

Um, yeah. I think that's a fair comparison. And anyone who's had to debug or maintain a large system written in assembly/javascript knows why that's a bad idea.

bsder
> when you can run javascript on a pebble watch it blurs the boundaries

And yet Javascript can't access a network socket or my flash storage in any useful portable way.

Javascript is hampered by everybody trying to bolt on "security" after the fact and basically hobbling usage of the language.

z3t4
You also need a hardware abstraction layer with Rust, so why not use a higher-level language like JavaScript or Haskell, besides optimizations and personal preference!?
0xAA55
Want to make a fast application on a pebble and with actual type checking? C and Ada won't complain or blur your lines... http://blog.adacore.com/make-with-ada-formal-proof-on-my-wri...
z3t4
And real programmers write in machine code. Want to get shit done? Most systems nowadays have more than 20 KB of RAM! Where do you draw the line between systems programming and non-systems programming? And why not write some quick and dirty code in, say, JavaScript, and then do what needs optimization in C/assembly? Assuming you are not restricted to a CPU that costs less than a dollar. And where does Rust come in?
takeda
With smartwatches we still have problems, mainly holding enough power. Yes, a Pebble can last a week, but that's because it is running less resource-intensive hardware, and even then 1 week is still laughable compared to old watches that required a battery change once every 2 years.

By being more conservative, you can achieve more with less resources.

0xAA55
Every single application or website is a system (see the push in recent years for WebAssembly).

The difference is that a "systems programming language" has the capability of being used low level if needed while a "shit scripting language" such as javascript does not. Basically what I am saying here is that Javascript is an underwear taint-stain for modern computer science. With any luck it will be rid of this world in 20 years and you javascript programmers can stop torturing yourselves with new "hot libraries" and "ECMA transpilers" every 2-4 years.

wyager
There isn't actually a trade-off between efficiency and ease of use across all languages. For example, JavaScript is both slow and memory hungry while also being inconvenient to use.
Retra
What kind of engineer goes around saying "It could be faster, leaner, and more efficient? No! Build the biggest, flakiest, ugliest thing that gets the job done today!"
steveklabnik
You can do it in Rust too; though most of the demos I see on GitHub are just demos.
quotemstr
I lament what Rust could have been had its designers not jumped on the anti-exception bandwagon. Rust's error handling is bad and makes me prefer C++
IshKebab
Seriously? Rust's error handling is great, and frankly I'm glad that exceptions have gone out of favour. I tried them but they never really delivered on their promise. At their best, all they do is give you nice stack traces. At their worst they make error handling stupidly verbose, they erase the context you need to properly handle errors, and they make it much more difficult to even know which errors can occur!

Rust's solution is the best I've seen so far. Go's multiple return values are pretty good too, but I think Rust's is better.

quotemstr
I very strongly believe exceptions are a lot closer to optimal than Result is. Exceptions remove the need for inline error checking code and make it possible to implement types with value semantics. You can't have reasonable value types that own resources if you need explicit error checking.

The need for explicit error checking makes OOM handling in Rust awkward at best. I've ranted about this side effect before.

Also, in Rust, since we can panic, we need to worry about exceptions anyway! Rust has the worst of both kinds of error handling.

Exceptions are anti-verbosity: you're supposed to let them propagate, not catch and rethrow them. They don't erase context: they preserve it, since an exception object is constructed as close as possible to the error site that caused it.

I realize that I have a minority view, but I'm utterly convinced that I'm right and that I'm living in a world gone mad. I've written a ton of code over the years. At least hear me out and try to understand my perspective.

zubat
I think I know what this trend is, more generally. It has to do with how explicit our code is. We've been through an era where you have some very powerful and compact indirection constructs (event callbacks, polymorphic objects, dynamic types, exceptions) in common parlance and the trend has turned against these lately. Their utility in many instances is mostly to enable technical debt, by worrying about the edge case later, and this has scared a generation of coders who have seen it go astray too often and create code that is hard to usefully refactor. And while exceptions aren't at the center of that trend, they're often implicated as a contributor.

With the new trend, you pay up front and go verbose, put more logic at the call site instead of indirecting it away. Go is at the leading edge of that: lots of LOC for boilerplate is OK in Go-land.

I'm more pragmatic with my error handling methods: I mostly care about whether I can eliminate a class of errors altogether, and secondarily what debugging implications are presented. I don't have a strong opinion on verbosity, although I have followed the trend in that respect towards more call-site logic.

leshow
Go is more verbose because it lacks a type system that allows generic programming; that seems like a very different kind of verbosity than explicit error handling, IMO.

Correct me if I'm wrong, but I don't believe Go's type system enforces that you check and handle errors either, so it's not really enforcing verbosity where it counts.

burntsushi
> Correct me if I'm wrong, but I don't believe Go's type system enforces you check and handle errors either

Go's type system doesn't, but the compiler will in a good number of cases. For example, this will produce a compile time error:

    value, err := Foo()
    fmt.Println(value)
It produces an error because `err` was declared and unused.

The following defeats this check though:

    value, _ := Foo()
    // Bar returns one value: an error
    _ = Bar()
    Bar()
The above list probably isn't exhaustive.
quotemstr
This trend represents the unlearning of very hard earned lessons. Personally, I can't wait until people rediscover that programming can be more fun and productive without boilerplate. A language being explicit, by itself, is not a feature. Being explicit instead of implicit is only worthwhile if you get clarity in exchange, and boilerplate is clarity-reducing because it's so regular and so obscures program logic.
burntsushi
In Rust, in the common case, the boilerplate you're talking about here is literally a single sigil. (Previously, it was `try!(...)`.)

This is of course to say nothing about the advantages of having something that signals "this operation can return an error," (at the call site) but you seem to dismiss that out-of-hand.

quotemstr
I do dismiss this advantage out of hand: almost everything can fail, because most things allocate memory. It's only because Rust treats memory specially (incorrectly --- memory is just another resource) that it doesn't appear that more functions can fail. It's much more valuable to flip the sense of the annotation and mark the few functions that cannot fail in any way.

Allowing most code to throw just reflects reality and allows you to stop obscuring your program logic with mechanical error handling plumbing.

All you do with "try!" is annoy readers by constantly reminding them that things can go wrong when the default assumption should be that things can go wrong.

burntsushi
> It's only because Rust treats memory specially (incorrectly --- memory is just another resource)

Which is a perfectly reasonable trade off to make. Of course, it's not always the right trade off, so we're looking to improve our story there.

If you can't acknowledge that there are real trade offs at play here, then I don't really see how it's possible to meaningfully understand your position (or have a non-frustrating conversation).

saurik
(I just want to tell you somewhere that you are not alone, and that I appreciate the time you are taking to have this conversation. The arguments you are making are the exact same things I am often found saying, and in my experience it takes hours of time alone in front of a blackboard with someone to really get them to understand concepts like "everything can fail" and "memory is no different than disk space". The one thing I haven't seen you argue yet is that functions which are adamant they can't fail today often find themselves in a position where they can fail tomorrow, such as by adding authentication or storage; this makes "explicit" error handling a limiting contract that requires you to either proactively return error codes from everything even when they only currently return success, never extend the functionality of existing systems to avoid accumulating an error condition, be prepared to break that contract in the future, or fall back really, really hard on panic.)
computerphage
I think people would be more inclined to try to understand your perspective if you hadn't called Rust's design "jumping on the bandwagon", which, to me, implies a thoughtless act of conformity rather than a deliberate trade-off towards explicitness. To me, verbosity is bad, but implicitness is worse. I like the trade-off that ? (the question mark operator) strikes for Rust.

I agree that OOM handling in Rust should be improved.

I don't agree with the way you talk about panics. They aren't just exceptions by another name because they're not intended to be used for handling expected errors (like exceptions are). Instead, they terminate the program. That's like attacking Java's System.exit() for not being just another exception. Many other environments share this distinction between fatal and recoverable errors. It can sometimes be a difficult choice to choose what to use, but having panics doesn't mean that all code must somehow try to handle them.

quotemstr
Panics are recoverable: initially at task boundaries, and these days at catch points. They are literally exceptions and unwind the same way. The designers intended for programs as a whole to keep running after a task panics. Code running in such a context needs to avoid leaking and corrupting resources on unwind --- i.e., be exception safe. On the rust development list, people call this property literally "exception safety".

I really do think that Rust's error design avoided exceptions without properly considering the advantages of the exception model and the inevitability of turning panics into a full exception mechanism.

computerphage
I don't agree that panics are exactly equivalent to exceptions in languages like Java or Python because the language doesn't support using them as a general purpose error handling technique by providing `try/catch/finally` or by documenting what exceptions may be thrown with `throws`. I agree that the implementation of unwinding is similar in both cases, but I'd argue that that's not sufficient in practice to be able to apply knowledge about the usage of exceptions in, say, Java to the usage of panics in Rust.

I'm aware of catch-panic and the unwinding related to panics. Nevertheless, the vast majority of panic usage I've seen is related to the semantic that a panic signals an error that is fatal to the context of the panic. (This is a more nuanced view of panics that didn't exist in my previous post.)

Yes, you can use catch-panic on a server that is meant to stay up even if one of its threads panics. Or you can replace unwinding with aborting to save on code size. But idiomatic Rust code doesn't use panics simply to signal to a caller that some sort of typical error occurred like a file not existing.

Edit: I realize that my post may come across as arguing over semantics. I really don't care whether they're called exceptions or panics or even if they're mechanically the same thing. What I'm trying to talk about is whether they cause code to become harder to understand because of implicit error cases that are invisible in the source.

2nd edit: I appreciate your several good points in your replies to my posts.

burntsushi
> Panics are recoverable

You can't rely on this. Panics could just as easily abort your program. (By changing a flag on the compiler.)

quotemstr
If you write a general purpose library, you have to make conservative assumptions and work either way.

By the way: I am depressed as hell that the Rust people, knowing full well what went wrong in the C++ ecosystem, repeated the mistake of having an -fno-exceptions and thereby fragmenting the language.

burntsushi
In C++, exceptions are a first class error handling mechanism. This is not the case in Rust, so it can't be the same mistake.

Rust (and C++) are systems programming languages. Users must retain the ability to opt out of the cost of exceptions. The problem with C++ is that exceptions are a first class error handling mechanism.

> If you write a general purpose library, you have to make conservative assumptions and work either way.

If panics abort and they were our only error handling mechanism, then there is no other way to do error handling.

> All of this because some people don't like exceptions

On the one hand, you want people to understand your position. But on the other hand, you come off as implying the Rust designers (which includes the entire community) are a bunch of incompetent boobs that irrationally dismissed exceptions just because they "didn't like them." Do you see how these things conflict with each other? I'd suggest reconsidering your approach when engaging in discussion on this topic with others. As it stands now, you're extremely difficult to talk to.

quotemstr
I agree that it must be possible to use the language in a very low overhead, literal way. That's not most use, I think. Most applications can tolerate exceptions, so stdlib should have used exceptions to report errors. The people who can't tolerate exceptions are the same ones who want precise machine control and who probably don't want stdlib either.

If exceptions were the only way to report errors in stdlib, stdlib wouldn't have had to panic on errors --- or, rather, users would have come to expect these panics instead of treating them as a fatal, anomalous condition.

The trade-off seems wrong here --- you should be able to support stdlib and exceptions for clarity, or !stdlib and !exceptions for full control. I don't see a big case for stdlib and !exceptions, yet the way Rust is designed, everyone pays for the stdlib-and-!exceptions model. (And they have to care about exceptions anyway, but can't rely on them.)

I think Rust's error handling is very poor design. I regret that this opinion coming across makes it difficult to have a conversation.

burntsushi
If your only error handling mechanism is exceptions and you disable exceptions because you can't bear the cost, then what are you left with?

> The people who can't tolerate exceptions are the same ones who want precise machine control and who probably don't want stdlib either.

I don't agree. Just because I don't want to pay for exceptions doesn't mean I don't want, say, convenient platform abstractions over file systems or I/O.

> I regret that this opinion

The opinion that you don't like Rust's error handling isn't the problem. It's all of the insinuations you're making about the people who worked on it. You paint a picture of carelessness and irrationality, but that couldn't be further from the truth.

The other problem is that exceptions vs. error values---aside from the performance or implementation implications---is a debate unto itself without a clear answer, but you pretend as if it's a solved problem and that your view couldn't possibly be wrong. On the other hand, I'm trying to sit here and say that there are trade offs, but you don't want to acknowledge them.

quotemstr
What is reporting the errors? In a standalone environment, in which you're left with the core language, anything that reports errors is something you can define, and you can define that component to use error codes, just as we would in C. It feels odd to want exact control over the error handling abstraction but want to use Rust's convenient IO abstraction. Performance either matters or it doesn't.

> debate unto itself without a clear answer, but you pretend as if it's a solved problem and that your view couldn't possibly be wrong

This debate was settled: we started out with error codes. We saw a generation of languages with exceptions --- Java, C++, CL with its condition system, Python, etc. arise in reaction to the problems with error codes.

The current movement away from exceptions, which I think started with Go, feels like backsliding, especially because most of the justifications for error code primacy that I see either ignore the actual (instead of mythologized) costs of exceptions or claim that exceptional code is a hardship inconsistent with my experience.

Now, it's possible that we should think of exceptions as just a failed experiment, but that view isn't consistent with the extreme utility I've seen in exception systems. Exceptions are so useful that people build them out of longjmp!

Anyway, it's frustrating that because Rust tried to solve simultaneously for ergonomics, exception freedom, and a rich standard library, it ended up in a position of having to abort on OOM. (I realize that there are more options these days.)

Still, exceptions in stdlib with an option for an exception free standalone system feels like the more appropriate trade-off.

I understand that there are trade-offs everywhere, but this observation doesn't mean that I have to excuse what I see as very bad trade-offs.

burntsushi
> It feels odd to want exact control over the error handling abstraction but want to use Rust's convenient IO abstraction. Performance either matters or it doesn't.

This doesn't make any sense. What are the performance costs of doing I/O in Rust using `std::io`? If there are none, why would I want to give it up? AFAIK, the only reason to give up `std::io` is if your platform isn't supported by `std`.

> it's possible that we should think of exceptions as just a failed experiment

Who said that? Why does one way have to be right? There are trade offs! I'm sure you can find plenty of articles on the Internet that discuss exceptions vs. values. There are plenty of reasonable arguments on both sides.

> This debate was settled

OK, that's enough. I won't waste any more of my time with someone who is so certain of themselves.

quotemstr
You seem to be using the fact that there are trade-offs as a justification for the specific trade-offs you've made, and using tone policing as a substitute for defending those trade-offs.
burntsushi
If you can't acknowledge the presence of trade offs, then I don't see how I could justify specific trade offs.

Just because I want to engage in a productive conversation doesn't mean my entire argument boils down to tone policing. It is OK to stop talking to someone because they are too frustrating to talk to.

quotemstr
Where did I disagree with the existence of trade-offs? What I find invalid is the idea that all trade-offs are equally good. The Rust scheme has certain advantages and certain disadvantages. I believe that the advantages don't matter much and that the disadvantages are worse than other people think. The advantages and disadvantages of the conventional C++ and Java model are better for a general purpose systems language.
burntsushi
> Where did I disagree with the existence of trade-offs?

When you say stuff like this:

> This debate was settled

and this

> I do dismiss this advantage out of hand

In general, most of your comments on this topic make every possible negative point about Rust's error handling without ever taking care to balance it with the positive points. If we can't even come to a mutual understanding that there are some trade offs involved in this decision, then it's really hard to move on to balancing the trade offs. Especially when you say things like this:

> All you do with "try!" is annoy readers

> had its designers not jumped on the anti-exception bandwagon

> without properly considering the advantages of the exception model

> All of this because some people don't like exceptions

This is a consistent dismissal of both the trade offs involved and of the people that actually worked on this stuff. Do you actually believe Rust is the way it is because we just hopped on a bandwagon? If so, that's extraordinarily bad faith.

quotemstr
I'm criticizing Rust's error handling strategy; I shouldn't have to defend it at the same time. To be clear: everything is a trade-off. I don't think it's fair to claim that I don't think trade-offs exist merely because I haven't enumerated the scant good sides of the specific trade-off Rust made.
Manishearth
> Anyway, it's frustrating that because Rust tried to solve simultaneously for ergonomics, exception freedom, and a rich standard library, it ended up in a position of having to abort on OOM. (I realize that there are more options these days.)

... if you realize that there are more options, please don't make this point, because the point doesn't make sense anymore. Aborting on OOM in Rust is a default, but you're not stuck to that system.

quotemstr
Sure, but then we're back to exceptions in one form or another, so now we have Result all over the place and have to deal with panics. Being able to panic on OOM won't go back in time and rewrite stdlib.
Manishearth
> Being able to panic on OOM won't go back in time and rewrite stdlib

I don't see how that's relevant? If you have to deal with OOM you're probably not going to deal with it at a fine-grained level, you'll have one high-level panic catcher somewhere that handles this and all other panics.

Given that overcommit exists as well, this makes the cases where you want workable Result-on-OOM quite niche.

(And there is work -- lower priority work, but it exists -- for pluggable allocators which will let the stdlib eventually abstract over more of this)

quotemstr
Hundreds of millions of people use non-overcommit systems. That's a good thing, because overcommit is a mistake that encourages profligate use of system resources. I fear that abort-on-OOM will only reinforce the presumption of overcommit in the minds of developers. Even on overcommit systems, you can run out of address space or vsize.

I believe in treating memory like any other resource. You wouldn't abort by default when you run out of disk space, would you?

Manishearth
Overcommit wasn't my main point. My point was that the intersection of systems where:

- You are using the stdlib (so not designing an OS or other low level programming)

- OOM must be recovered from

- OOM-as-panic with recovery at a higher level in the application is not going to work and you need fine grained OOM recovery

is small. Overcommit makes it smaller, but ignoring that it is still small.

lmm
> You wouldn't abort by default when you run out of disk space, would you?

If most of the language and standard library required allocating disk space to function then I would indeed abort by default, because very few programs would be able to do anything useful in those conditions, so the most useful thing is to fail fast.

It's possible to design a language and standard library that can remain usable in out-of-memory conditions, but the costs would be severe, and not justified for the overwhelming majority of rust use cases, I think.

lmm
> The current movement away from exceptions, which I think started with Go, feels like backsliding, especially because most of the justifications for error code primacy that I see either ignore the actual (instead of mythologized) costs of exceptions or claim that exceptional code is a hardship inconsistent with my experience.

I don't think it comes from Go; rather it comes from the ML family (which Rust is arguably a member of). ML has had exceptions and error codes for decades, and ultimately that experience has come down on the side of error codes, because with good sum types and higher-order functions they are safer and more effective than exceptions. An analogy: early fighter planes were aerodynamically unstable, as it was hard to make them maneuverable any other way. Later fighter planes were stable as this was safer and easier to control. Modern fighter planes are aerodynamically unstable, as modern control systems can control such planes effectively and the original advantages remain.

Manishearth
> Panics are recoverable: initially at task boundaries, and these days at catch points. They are literally exceptions and unwind the same way.

No. They provide roughly the same functionality as exceptions and are implemented the same way.

Here's the thing; exceptions implies using unwind recovery for error handling. This is not the case in Rust. The panic recovery API isn't designed to be used like this. No API relies on panics being recoverable. Panic recovery is supposed to be used for two very specific use cases:

- Stopping an application/service from crashing at a higher level (eg at an event loop boundary)

- Preventing unwinds from crossing into FFI.

Now, you could use Rust panics to build an exception system. It wouldn't be great, but you could. Sure. But a lot of the tradeoffs of a feature need to be considered in the context of how it's going to be used. Nobody's going to implement a (serious) exception library using Rust panics. Even if they did, it wouldn't work well with the rest of the ecosystem.

Panics are _not_ a "full exception mechanism". The tradeoffs are not the same as that of C++ exceptions (which are pervasively used). There are tradeoffs, mind you, but different ones.

> Code running in such a context needs to avoid leaking and corrupting resources on unwind --- i.e., be exception safe. On the rust development list, people call this property literally "exception safety".

Exception safety is a common issue between Rust and C++ but you rarely have to think about it in Rust (because all libraries are written assuming that panics may or may not abort, and very few applications catch panics and have to think about it for a very small area near the boundary), whereas it's more common in C++. Nobody writes libraries trying to avoid leaks on panics because catching panics is rare (and leaks are considered safe in Rust, though it usually requires contrived code to cause a leak in Rust).

Overall, it is not a problem. It's something you need to think of in extremely niche cases. Equating C++ exceptions and Rust panics is oversimplifying things.

dragonwriter
> You can't have reasonable value types that own resources if you need explicit error checking.

The proper place for the period in that sentence is before the "if"; owning resources is not a reasonable thing for a value type to do.

quotemstr
Tell me more about how std::string is unreasonable.
steveklabnik
Are you talking about Rust's or C++'s? Name conflict :)

In Rust, std::string::String is not a "value type", and by that, I mean it's not Copy.

quotemstr
C++'s.
steveklabnik
Then I won't comment, though I wonder if the parent is thinking of things like https://groups.google.com/a/chromium.org/forum/#!msg/chromiu...
tupshin
Personally I love Rust's error handling, but I do think that I understand your perspective.

At least part of your objections would seem to be addressed by the combination of the ? operator to make it trivial to do the equivalent of re-throwing inline, and the error-chain crate, which can make exception-like stack traces/etc trivial to implement.

https://brson.github.io/error-chain/error_chain/index.html

Animats
Unfortunately, this is just a promotion for a series of videos. Probably talking head videos with PowerPoints.

Most people who do systems programming probably already know about Rust. They may not have used it, but know it exists.

What's "Hacks"?

kibwen
https://hacks.mozilla.org/ ("Hacks" for short) is Mozilla's blog for technical content focused at developers, as opposed to their outlets aimed at normal Firefox users or general web advocates.
steveklabnik

  > Probably talking head videos with PowerPoints.
Almost everyone in this video is an engineer on Rust, Servo, or Firefox. The exception is Dave Herman, head of Mozilla Research.

These are unscripted responses to questions, no powerpoint involved.

  > They may not have used it, but know it exists.
There are a lot of programmers in the world, and many of them don't read Hacker News. Almost every week I meet new people who have never heard of Rust. Or they may have heard of it but can't tell you what it's for, just vaguely remember hearing the name. Or they have heard of it, but remember their first impression from a pre-1.0 version and haven't listened since. ("How's its GC?")
kibwen

  > but remember their first impression from a pre-1.0 version
Can confirm, I routinely observe people in the wild dismissing Rust based on the existence of the `@` and `~` sigils, not realizing that we removed those years ago. :P
wyldfire
I dismissed it when I first heard about it (prob just prior to 1.0). I saw "Mozilla" as a primary sponsor and just knee-jerk assumed it was part of the what-if-we-put-JS-here mania.

Thankfully due to repeated popularity on HN I gave it a try and I'm a convert now!

enqk
Although I am not against improving the safety of languages we use for system programming, the model that Rust advocates are pushing of "preventing mistakes" as a way to make systems secure doesn't convince me.

Mistakes (and security breaches) happen. A system should indeed be written in such a way that mistakes are few; however, it is essential to also efficiently protect our users' assets (data) first, assuming that mistakes are being made.

Ideally as a user I should not care much, nor can I trust that all software is "mistake free." I need the assurance and the tools to guarantee that 1) mistakes are detected 2) my data/privacy/identity is protected.

SkyMarshal
The name of the game for the software industry is to continually create better tools that systematically, inexorably reduce the attack surface of apps built with them.

Haskell does this in an interesting way by quarantining all side-effect code into Monads and preventing data races with immutability. Rust does it by making memory errors and race conditions impossible via its ownership mechanism.

Think of it as guiderails that reduce the cognitive load on the programmer by ensuring some classes of mistakes/errors/bugs simply cannot get past the compiler. Compared to C/C++, the programmer has less to think about since the compiler handles that for them. That is immensely valuable.

enqk
Nevertheless these language specific solutions are completely opaque to an end user. These are software developer centric. As an end user I cannot inspect how these were applied and therefore develop trust in the end product.

There's always an element of trust at the root of running a piece of software on my machine. I wish I could quickly inspect and be sure that say, the audio engine that mozilla firefox is running runs completely in a segregated zone/container/sandbox that will prevent, even in the presence of defects, its exploitation to leak data that is important to me.

I guess it's just a question of words and goals being a little bit too overlapping. Rust's approach is to address safety as in the rates of defects. However it does nothing, and probably cannot, address the security i.e. the impact on my own safety or that of my assets.

As an end-user I care much more about security than I care about defects.

ehsanu1
Defects can heavily affect the security of your assets though - just preventing buffer overflow is a huge win there. It just isn't a full solution to the problem of security in general, and I don't know that any such thing could exist, whether at the programming language level or otherwise.
ianleeclark
> I guess it's just a question of words and goals being a little bit too overlapping. Rust's approach is to address safety as in the rates of defects. However it does nothing, and probably cannot, address the security i.e. the impact on my own safety or that of my assets.

To me it seems like this is more of an implementation detail and not language-specific.

None
None
__david__
> However it does nothing, and probably cannot, address the security i.e. the impact on my own safety or that of my assets.

Sure it can. It actively prevents certain type of errors (buffer overflows, integer overflows), which are prime security exploits. Less exploits == more security.

enqk
My standpoint is that:

- Fewer exploits is an improvement in frequency

- More security is a question of limiting impact
staticassertion
I don't think I want to get into what is or is not security, but what I will say is that rust's security does not interfere with safety net features, like sandboxing.

I've found that sandboxing Rust is quite easy, as easy as C or C++. I find it much easier than Java/Python, which have runtimes that can often make things difficult (you certainly would not want to use seccomp in either).

Take a look at Dropbox's Brotli - it's a great example of this. It makes use of seccomp, but seccomp is not available on all platforms, so it also relies on Rust's safety.

__david__
I would say 'security = frequency * impact'. Reduce either one and you're doing better. Reduce either of them to zero and you're golden.
dbaupp
Risk analysis is typically done including both the probability of a problem and the impact of the problem. Not including the former in an analysis will result in spending vastly disproportionate amounts of effort on exceedingly rare problems, e.g. the consequences of a 1000ft tsunami hitting New York would be huge, but we don't build massive sea walls because it is also ridiculously unlikely. This applies perfectly well to digital security too: reducing the probability of an exploit is just as important as limiting impact if it does happen, which is why (for instance) Mozilla is doing both.
gcp
Nothing prevents you from doing both approaches. I mean, while Mozilla is increasing Rust usage, they're also rolling out more extensive sandboxing.

Either of those by themselves are not a good enough solution though. Just trusting on Rust means you are vulnerable to Rust bugs or logic errors. Just trusting on sandboxing means you're hoping that your trusted code (written in C/C++) doesn't have any security bugs.

None
None
sidlls
It also means trusting that the "unsafe" usage in the Rust code doesn't have any security bugs.
bsder
Sure, but the question is how wide the "unsafe" boundaries are.

If I'm writing an application that uses the network in a standard way, I should be able to write a program with NO unsafe blocks.

As always, things have bugs. Rust, itself, may have bugs that get exposed over time once adoption starts to increase. Rust gets some "security through obscurity" for the moment.

Once they start pushing Rust code into Firefox, that will change dramatically.

steveklabnik
There's already a small amount of Rust in Firefox.
bsder
That's really cool, and I'm glad to hear it.

As it expands in the Firefox codebase, it gives me the ability to tell people that "Yes, Rust is being used by lots of people".

kibwen
Indeed, though trusting unsafe blocks in Rust is more tractable than trusting an entire C/C++ codebase due to the fact that unsafe blocks present a drastically reduced auditing surface.
sidlls
That's the contention, yes. In practice currently there is an incredible self-selection bias with respect to how tractable that position is if (when?) Rust becomes more widely used outside the circle of the True Faithful.
qwertyuiop924
#2 is up to the engineers making the product. #1 is why Rust exists: it eliminates whole classes of errors. No, it won't stop you from screwing up badly, but it just might stop the buffer overflow or race condition that would have helped a hacker steal your users' identities.
glandium
The fact that #1 is (partially) handled by the language also allows engineers to spend more time on #2.
pcwalton
Defending against remote code execution turns out to be an excellent way to help protect your data.
Jweb_Guru
Every systematic approach I've seen to enforcing privacy, nondisclosure, etc. that has to deal with dynamic security rules ends up having to include knowledge of information flow (via things like program counters, aka lifetimes, and in some cases explicit ownership tracking) in the language. Rust isn't enough, but I think something like it will be necessary to take that next step.
erikbye
Reading your comment made me think of Erlang, where the guiding principle is the opposite: large systems will contain errors, and will fail. That is what fault tolerance means: you design your software (language, libraries, and end program) in a way that will handle unforeseen errors and failures, because in large systems there will always be bugs.

http://erlang.org/download/armstrong_thesis_2003.pdf

enqk
That does sound rather interesting! (And anti-fragile, to use a talebian neologism)
oldsj
Possibly, but is the system getting stronger on each failure (like a human muscle), or is it just recovering?
erikbye
It's resilient and fault-tolerant. Not antifragile according to Taleb's definition. Though there's no reason you could not spawn 2 (or more) new processes for every 1 that crashes.
staticassertion
I'm also a fan of this approach, and am currently learning Erlang because of it. However, I do not feel that the approaches are exclusive.

Erlang is dynamically typed and it is robust because Actors act as isolation boundaries and are managed by supervisors. So the approach is "bugs happen, always recover". You can also use type annotations in Erlang to get "bugs happen less, always recover".

erikbye
Perhaps not mutually exclusive, all programmers try to introduce as few bugs as possible; but the philosophy is in fact "let it crash", then you reset to a stable state. Not only is the programmer unburdened of the cognitive load of trying to imagine every single point of failure, but also of the workload of a lot of defensive programming.

The whole reasoning behind this is what I said before: in large systems there will always be bugs, errors and failures will happen (some even outside of your code, e.g. power failure). Imagining every point of failure is impossible, so don't try to. Don't write preemptive code.

Yes, Erlang is dynamically typed, but not weakly typed.

AsyncAwait
I don't get what you're saying here.

- Without Rust you program to protect users assets without guaranteed memory safety.

- With Rust you program to protect users assets, with the addition of guaranteed memory safety.

Sounds like a win win to me.

enqk
Indeed it is a good thing. What I'm just a little bit wary of is equating that sort of improvement with increasing overall security. We went there already with Java, and that hasn't stopped Java from being an attack vector.

What worries me is the way this sort of reasoning affects the behavior of engineers and the products they put out in front of users.

AsyncAwait
I get that you don't want developers to learn the bad habit of believing that Rust means they no longer need to pay any attention to security themselves, but that doesn't negate the fact that Rust does indeed increase some aspects of your program's security as compared to C.

The security provided by Rust shouldn't be overestimated, but I think it's unfair to underestimate the benefits of the memory safety as well.

Retra
It should be pretty obvious that security is not solely the responsibility of your compiler. I mean, if your OS is compromised, it can inject code into whatever you're running, whether it be rust code or not. Anyone trying to make iron-clad systems needs to understand the whole system.

But if you're not trying to do that, it's nice that you don't have needless and easy-to-avoid weaknesses like those your compiler could prevent.

kibwen

  > assuming that mistakes are being made
Mozilla isn't using Rust as an excuse to skimp on security elsewhere. For example, Servo sandboxes tabs as you'd expect from a modern browser, despite being written in Rust from day one. Defense-in-depth is fundamental to good security, and being able to rely on the safety properties of your language adds yet another layer of defense.
pimeys
Now I have four services running on production, all written with Rust. If it compiles, it usually works. Of course you have these late night sessions where you write that one unwrap() because, hey, this will never return an error, right? And bam...

I'm seriously waiting that tokio train to be stable and a unified way of writing async services without needing to use some tricks with the channels or writing lots of ugly callback code. Also the native tls support is coming and the dependency hell with openssl would be gone forever.

If you need http server/client, I'd wait for a moment for Hyper to get their tokio branch stable and maybe having support for http2 by migrating the Solicit library.

amelius
> I'm seriously waiting that tokio train to be stable and a unified way of writing async services without needing to use some tricks with the channels or writing lots of ugly callback code

By which [1] is meant, for those not knowing.

[1] https://github.com/tokio-rs/tokio

shmerl
Wasn't there some other effort in Rust to enable asynchronous I/O?
steveklabnik
There's been a few; this one is built on top of mio, one of the most popular previous ones.
shmerl
I meant this one which I saw mentioned recently: https://github.com/alexcrichton/futures-rs
AsyncAwait
tokio is not an alternative to futures, but rather a more high-level framework that builds on top of futures - both are being actively developed.
steveklabnik
Ah yes. Basically, tokio is mio + futures.
dep_b
Isn't if let similar to Swift's if let, where it either unwraps safely or does something else? I really wish I could disable forced unwrapping, as it mostly leads to mistakes by less experienced or overconfident programmers, and if you program smart, the extra code from using guard instead is negligible.
AsyncAwait
Rust's if let is indeed inspired by that of Swift and behaves practically the same way.
None
None
kibwen
Though Swift's `if let` is AFAIK hardcoded to their built-in optional type, whereas when Rust lifted the idea they made it work with any enum. I believe Swift recently gained `if case let` as an equivalent to how `if let` works in Rust.
steveklabnik
Any unrefutable pattern, I believe.

EDIT: whoops! I got it backwards. It's refutable, right, duh. :)

gue5t
If-let can be used with refutable patterns (the refutation/counterexample to the pattern is when the else case runs). Irrefutable patterns can be used with regular let.
tomjakubowski
Any refutable pattern; for irrefutable patterns the "if let"'s pattern would always match, so it would be kinda redundant. :-)
dep_b
It was new in Swift 2 I think. Didn't use it yet. But it's basically a single case statement lifted from switch, nothing more. But enums and switch statements can be really complex beasts by themselves in Swift.

If Rust only allows it on enums that would be extremely weird.

In a switch statement I sometimes want to switch on the object, then see if I'm allowed to unwrap it to a certain class and immediately use it afterwards. Useful for parsing an array of mixed object types that should be processed differently.

But I honestly think Swift allows people to write elaborate illegible codegolf-y bullshit sometimes. Sometimes the Swift compiler still chokes on too complex expressions and you need to add some explicit types or separate out in several statements what you were trying to do in one statement. Usually a good warning that your code is hard to read even by humans.

tmzt
Would it make sense to have a cargo/rustc flag to disable unwrap and friends when building for production?
burntsushi
To put this into perspective, this would also necessarily disable expressions of the form `xs[i]` where `xs` is a slice. Why? Because `xs[i]` is equivalent to `*xs.get(i).unwrap()`.

In other words, banning unwrap isn't really that productive because an unwrap, when properly used, is an expression of a runtime invariant.

The problem is that unwrap can be very easily misused as an error handling strategy in a library, and in that case, it's pretty much always wrong. But that doesn't mean using unwrap in a library is wrong all on its own, for example.

kibwen
Sometimes I do wish I could disable the indexing syntax, though. :P At least in my own code, I find that I naturally reach for iterators rather than doing any manual indexing.
steveklabnik
unwrap has legitimate use-cases, and it's not clear what "disable" it would be, as it changes the type of the thing it returns. You could write a lint to fail the build, if you want, I guess...
tmzt
Right, it would likely use clippy but it would essentially be a --production target or profile, intended for builds where the binary will be run in production. And by 'and friends' I mean calls that panic for the same reason as unwrap, such as expect or ok.

The goal is to stop code from reaching production inadvertently, not to prevent all sources of panics.

wyldfire
But if we accept the premise that "it's acceptable in some cases to have unwrap() in code that targets 'production'" then it wouldn't make sense to have a production profile that bars its use. The word "production" is in the global namespace and I think you want something more specific to your use case.

Rather, one could define a rust coding guide for themselves that deems unwrap() inappropriate for production use. (and in that case use the lint Steve suggests).

xorxornop
Seems like you'd want a lint rule where if unwrap is used, it must have a comment preceding it (of some formal syntax) describing why it's necessary or appropriate. Thus the build can have all uses of unwrap known as explicitly allowed, allowing all the possible sites of panics to be enumerated and known, a useful property to have
staticassertion
In that case you may want "expect" over unwrap.
Manishearth
clippy has this already :)
bumblebeard
For those not familiar with Clippy:

https://github.com/Manishearth/rust-clippy

It's a very useful tool; the main thing about it that annoys me is that it only works with nightly.

gue5t
.unwrap() is only the right choice if you need to optimize for binary size and can't afford the cost of the precise error message you would pass to .expect(). There are situations where you can't possibly continue running the application if an error occurs, but you shouldn't rely on a backtrace (which you might not manage to capture, e.g. if RUST_BACKTRACE is unset or you don't have symbols) as your only method of communication with your future self.
dep_b
I can't reply to steveklabnik for some reason. But I think if let would replace that unwrap he uses there.
steveklabnik
If you try to reply on HN too quickly, it hides the reply button so as to discourage quick back-and-forths.

I mention in the post that this specific code would be best written with if let, but that it's not about the specifics, it's about the general pattern.

tatterdemalion
This is pedantic. It seems clear from context that this conversation is about panicking on the None/Err cases, and unwrap is a shorthand for "unwrap or expect or match with a panic branch."
steveklabnik
This is not true. For example, consider this code:

    if foo.is_some() {
        let foo = foo.unwrap();
    } else {
        // other code
    }
Here, I _know_ that foo is some. The extra error message from expect will _never_ be seen.

Now, this is a contrived example, and would better be written with `if let` in today's Rust, but this is the _kind_ of situation in which unwrap is totally, 100% cool, but the compiler can't know.

gue5t
The error message in this case might be something like "foo became None after verifying it to be Some". This could happen, for example, if incorrect unsafe code in another thread concurrently mutates foo through a raw pointer. My point is that of course while writing them you don't think your unwraps will fail, but if they do, it's good to have a reminder of what's going on. Even if the expect never fails, the message provides additional documentation for those reading the code.
kzrdude
You'll like Rust, because concurrent mutation of a value is impossible if you hold a `&` or `&mut` to the value. So this can in fact be ruled out by the programmer.
Retra
It's only impossible in safe code. Unsafe code can violate those rules all day long. You can't guarantee that there's no unsafe code running concurrently.
cynicalkane
Concurrently modifying aliased memory (`&` references and pointers) is undefined behavior. Not just in Rust, but in just about any language.

As an aside, alias unsafety in Rust is always UB, even without concurrency.

None
None
kzrdude
Any use of `unsafe` that breaks unrelated safe code is broken and buggy; if that scenario would happen like you describe it, the code is breaking Rust's aliasing rules: that's possible using `unsafe` but invalid and leads to UB.
Retra
I'm not talking about 'uses of unsafe', I'm talking about code that is unsafe. Much of that code is not even written in Rust, so there's no 'unsafe' to use.
kzrdude
Ok, so code that is memory unsafe (broken!). One must still say "unsafe" to bring it into Rust (to use ffi, or make a safe wrapper); so there is still a clear location in the Rust code that is to blame.
lmm
> Now, this is a contrived example, and would better be written with `if let` in today's Rust, but this is the _kind_ of situation in which unwrap is totally, 100% cool, but the compiler can't know.

If the author knows it's safe, they should be able to express how they know in a way that the compiler can understand. Certainly I think there's a large space of use cases where the extra guarantee provided by forbidding unwrap would be well worth the cost of outlawing some "legitimate" cases, especially if we're just talking about doing so on a project level. (Though maybe they're not the rust target audience).

heinrich5991
What if I want to have a reference to the last element in a vector that I just pushed? Without a push method that immediately returns such a reference, this will always involve an unwrap. And this is not just some weirdly constructed example, I've needed to do this in real code a couple of times already.
lmm
Pushing an element to a vector could return a guaranteed-non-empty vector. Admittedly it's unpleasant to write non-empty collections in a language that lacks HKT, since you have to reimplement a lot of stuff, but I'd consider that an argument for HKT rather than an argument for unwrap.
dep_b
I write Swift daily and I just don't force unwrap anymore, ever. I don't think a hard crash is very usable in a production application. A lot of people disagree and want a hard crash while testing, but for the bugs that slip through, the user experience of, say, "loading the first screen but my avatar isn't set" is so much better than "loading the first screen and the app kills itself" just because you force-unwrapped the avatar URL from a JSON response that had a slight problem in production.

Well perhaps we should have something that logs the error in production but keeps on trucking and crashes the application when the debug or test flag is set?

steveklabnik
panics are explicitly for unrecoverable errors, so recovering from them and keeping on going means that you're not using the right kind of error handling. If that's the behavior you'd want, then you wouldn't want to use unwrap.
conradev
If by "and friends" you mean "calls that can panic", there are a few of those. Slice bounds checks, for one.
wyldfire
"Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?"

Unwrap implies "either this operation should succeed or we should panic". Doing anything other than panic seems like it's terribly difficult to determine what that should be. And I might've put unwrap() there because that's really what I want to happen. For `mv`: don't let's dare go ahead and do the unlink() if the link() failed!

Tangent: errors and exceptions aren't bad, they're how computers work. I've encountered folks who, when faced with runtime errors pepper their code with "if (!NULL)" or truly evil things like "except:pass"/"catch (...) { }" which rarely make sense anywhere but the base of the stack and even then don't usually. If you've ever asked yourself "but how did we even get here‽" it may be because someone dropped something totally incongruous like that in a related module.

rwj
I think the idea would be the program would fail to compile, and you would need to go back and replace the unwrap with proper error handling. The goal would be to allow the use of unwrap during development, but require the final polish before the code goes into production.
db48x
Sometimes unwrap is the proper error-handling behavior.
josephg
unwrap is proper error handling. It says "Try to do this. If it fails, panic". Its like an assert on an invariant that the compiler requires.

If my script depends on a database connection, I might connect to a database and unwrap() it so the script errors out if the database isn't available. If I wrote that logic myself I would just be awkwardly rewriting unwrap.

kibwen
To be more accurate, unwrap can be a legitimate means of error handling when used in an application, as opposed to a library. But if you're writing a library, then unwrapping rather than using Result is a surefire way to make your users hate you. :)
Nemo157
Sometimes you've proven some invariant in some other way, so you know that unwrapping is guaranteed to not panic. Although in those cases I prefer to use .expect("this will not fail because of blah") instead in the spirit of self-documenting code.
kibwen
Agreed, I use `.expect("Infallible")` for such cases.
staticassertion
Even in a library there are valid use cases for an unwrap. If you can literally guarantee that the value is present it's fine.

In libraries one should be wary about it, and never use it if the unwrap might actually panic, but otherwise you're good to go.

wyldfire
Ah! I stand corrected. "library" is a pretty sane use case for barring unwrap(), and one where cargo already knows what kind of build is being produced. I still don't think it's general enough, but maybe it's worth a warning.

Aside: CPython's gdbm support is provided by libgdbm, which calls exit() for you if it finds something it's not happy with (a corrupted database, e.g.). O.o

rubber_duck
Wouldn't it be more useful to fail with an error message at least ?
Matthias247
> If my script depends on a database connection, I might connect to a database and unwrap() it so the script errors out if the database isn't available.

I think that's a bad example, as it is one of those things that can really fail at runtime and which should be properly handled. Even if handling means printing an error message and stopping the process with an exit code - but not crashing.

I think unwrap is for things that really should not happen if everything is implemented correctly.

josephg
Thats fair. I said script for that reason. A better example would be assert equivalents - if an API could return null (Option) under normal circumstances but you know it can never be null based on how you're using it, unwrap() makes sense. That contract could only be violated if there's a bug in the implementation. If thats the case all guarantees are out the window and usually the best / only thing you can do is to crash and allow the process to restart.

Also in those case (in my experience) having a human-readable error message is rarely useful. When assertions are violated I almost always have to consult the code anyway. And 80% of my asserts are never hit. I usually don't bother preemptively writing decent error messages. File name and line number is the right information, and panic provides that anyway.

MaulingMonkey
> If thats the case all guarantees are out the window and usually the best / only thing you can do is to crash and allow the process to restart.

I've found that there are a lot of minor bugs in the implementation that, in something client-facing (e.g. not on a server somewhere that can simply be taken out of the load balancing rotation until it restarts or whatever) probably shouldn't crash.

Report and log errors remotely - to be fixed - and skip some logic that relied on those guarantees, but not crash.

> Also in those case (in my experience) having a human-readable error message is rarely useful.

I'll settle for developer-readable, then ;). Panic can format error messages, and itself provides context information (the file and line you mention) as a decent means of reporting fatal errors. Some assertions are obvious enough as to their reason and cause from context - as you say, they don't need a message.

But I've also found taking the 10 seconds or so to think of a decentish error message pays off quite frequently. Even if I'm pretty sure it's unnecessary. Sometimes it may save me only a minute of context switching by telling me exactly what the problem was (instead of roughly describing some assumption made for unknown reasons), sometimes the only way I can make progress is by adding more logging and messaging and reproducing the problem because I couldn't suss out exactly what was happening - and deciding this could take a lot longer than a minute if I know it's hard to reproduce.

Manishearth
I think the idea is to fail to compile if you have certain kinds of panic?

I agree that making unwrap/expect silently ... not happen will just cause worse problems.

wyldfire
Sure, but in that case it would effectively elide it from the language spec! Who doesn't target "production" eventually?
Manishearth
> Sure, but in that case it would effectively elide it from the language spec!

No. Some unwraps are necessary. You wouldn't be absolutist here; you could still allow some carefully-labeled unwraps.

Also, not all production users need to care about panics that much.

tmzt
Yes, I would envision an allow attribute of some sort.

This is simply a contract with the compiler that you didn't copy and paste some example code somewhere in your codebase that fails with an unwrap. It's about extending the "if it compiles it runs" near-guarantee that we so love about Rust.

Ideally a program would fail fast and be restarted if it reached an unrecoverable state, with supervision trees like Erlang. Also ideally, unwrap would be used for exceptional states, not only ones that are unlikely to fail until something goes wrong, like a port being closed or a file unreadable or not present.

Manishearth
This is basically what https://github.com/Manishearth/rust-clippy/wiki#option_unwra... does

It doesn't work transitively (so if a crate you depend on unwraps you can't protect yourself), but https://github.com/llogiq/metacollect plans to fix that

kzrdude
Early exploration and tests amount to a lot of code, and the language needs to make those parts pleasant to write as well. I think .unwraps() are especially common there.

I imagine `println!()` is another thing whose design is influenced by the needs of early exploratory code (it's another example of a library function that handles errors with panics, though that's not the only thing that makes me think so).

Lordarminius
> "Pray, Mr. Babbage ..." (for those who do not know the quote)

“On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ . . . I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”

— Charles Babbage,

Passages from the Life of a Philosopher (1864)

kibwen
Speaking of HTTP clients, just yesterday the person behind Hyper announced their new high-level HTTP client crate: http://seanmonstar.com/post/153221119046/introducing-reqwest
Animats
Not sure what to think of that. Does everything have to be async I/O now? How often do you need massive numbers of client connections?
steveklabnik
1. This introduces a primarily synchronous API for now. Async will come later. All those code samples are synchronous.

2. Async I/O is an extremely hot topic in Rust right now, so it's likely that Rust people do care about it.

dbaupp
I assume you're talking about why reqwest is necessary (i.e. why hyper is moving to async), rather than the single paragraph mentioning asynchronicity as a possible future direction for reqwest?

Async I/O is important for more than just multiplexing a million requests. Hyper is moving to it for that purpose, and also because it is a more natural way for working with in-flight I/O, e.g. cancelling requests or selecting over them.

kbenson
Simply put, I/O as accessed through the OS is asynchronous by nature. The synchronicity you are used to is a convenient lie the OS tells you to make simple programming easier. Underneath, the OS does everything asynchronously and just waits until completion to return; otherwise we would be throwing away massive quantities of compute cycles waiting for I/O to complete.

Why let the OS and other programs running at the same time reap all that benefit? You too can program such that, while you would normally be waiting for I/O and some other program was utilizing the CPU, your own cycles can be used and you can accomplish so much more.

lmm
I don't think everything has to be async, but if there's ever a place to use it then it's HTTP. There are too many systems that rely on an external web service and collapse in a ball of threads if that external web service ever gets a bit slow.
jcrites
Asynchronous nonblocking interfaces are more general-purpose than synchronous blocking interfaces. I can't speak for this library or Rust specifically, but in my experience well-designed asynchronous libraries allow you to interact with them in a synchronous style as well, if you wish.

Netty is an asynchronous, event-driven network framework for Java, and it's perfectly possible to expose synchronous blocking abstractions on top of it. The mechanism is pretty simple: the asynchronous framework exposes a future representing the result of an operation, and to provide a synchronous interface you simply block on the completion of that future before returning. Client libraries can handle this for you, providing the interface of e.g. a regular blocking HTTP client on top of Netty async IO.

This approach can be convenient, since it's possible for both synchronous and asynchronous style code to coexist easily in the same application. The application designer can incrementally change parts of the application into asynchronous style as performance needs dictate. For example, you might choose to serve typical small RPC requests using blocking workers in a thread pool, but when you need to stream the content of a large file across the network you could use a separate nonblocking worker pool that interacts with both the file system and network asynchronously.

The ability to interact in a blocking way via futures means that asynchronous facilities can serve both synchronous and asynchronous needs, making them the better choice for most frameworks today. While it used to be the case that async IO frameworks took a performance penalty compared to well-implemented sync IO ones, from what I understand that gap has been closed, and the highest performance frameworks are now all async IO. For example, check out the TechEmpower Web Framework Benchmarks. Most or all of the top performers use asynchronous approaches: https://www.techempower.com/benchmarks/#section=data-r13&hw=...

Animats
Yes, I know, async I/O is the new cool thing. Here's an async I/O program from 1972.[1] John Walker wrote this. EXEC 8 had the IO$ system call, which, unlike IOW$, returned immediately. A "completion routine" was called when the I/O operation finished. Note how similar those libraries are to what's used today, now that people are reading Dijkstra again. The problem, of course, is that a callback system dominates the architecture of the entire program.

(When I moved from UNIVAC mainframes to UNIX, things seemed so sequential. No threads. No async I/O.)

[1] http://www.fourmilab.ch/documents/univac/fang/

hinkley
Where pretty much anything related to concurrency is concerned we've been busy reliving the 70's for most of the last decade. Locking, asynchronous I/O, you name it.

Hell even Microsoft had I/O Completion ports back in what, 2003 or so? Or am I wrong and it was a lot earlier? The coolest things in Javascript land were all done by Microsoft first and everyone (me especially) can't bring themselves to acknowledge that.

blt
Web servers and GUI apps are both long-running, event driven programs that need to do IO or slow computations while staying responsive to new events. It's not surprising that they are both well supported by the same programming model. The Win32 API is ugly but the methods of app/OS interaction it supports are fundamentally sound for high performance interactive programs.
kryptiskt
IO completion ports were introduced in NT 4.0 in 1996.
fulafel
Isn't the NT async IO API just a front for a kernel side thread pool though, and may block depending on worker thread availability? They say[1] "if you issue an asynchronous cached read, and the pages are not in memory, the file system driver assumes that you do not want your thread blocked and the request will be handled by a limited pool of worker threads"

Things may be different for socket IO, but there Unix had select() much earlier, around 4.2BSD (1983)

[1] https://support.microsoft.com/en-us/kb/156932

dbaupp
Huh, so are you implying that hyper should stay synchronous so that it doesn't appear to just be copying things from 40 years ago?! This comment sounds like you think that it was good back then, but now you don't know what to think of a library that is aiming to switch to asynchronous IO and/or don't know why it's a good thing?

(It's also not like the comment you're replying to said that async IO is a recent invention, your low-effort sarcasm as a response is unfortunate.)

saurik
One is not more powerful than the other: they are considered "duals" and this has been proven in the literature (back in 1978, no less); most of the supposed downsides of threads are due to people assuming a specific implementation of threads (many if not most of which suck).

Here are some papers that would normally be assigned reading in a graduate level Computer Science course in Operating Systems as background reference.

https://pdfs.semanticscholar.org/2948/a0d014852ba47dd115fcc7...

http://capriccio.cs.berkeley.edu/pubs/threads-hotos-2003.pdf

But like, it should be obvious: with a lightweight co-routine library you can convert anything that is synchronous into something that is asynchronous, with no more (if not less) overhead than the context switches you are forced to incur by returning and calling a new function to implement event processing. This is no more onerous than using that same co-routine library to implement blocking on a future (to convert an asynchronous API into a synchronous one).

jcrites
The fact that two styles or concepts are formally dual does not make them equally practical or useful in all circumstances.

Consider: in calling conventions, the continuation passing style is dual to the "direct" calling convention (i.e., call stack with return values); the call-by-name style is dual to call-by-value style; Lambda Calculus and Turing Machines are dual in their ability to compute all effectively calculable functions.

These dualities do not mean it's equally practical to build systems in both ways. Sometimes one approach ends up being more practically useful.

Most programmers prefer to use the direct calling convention, and find complex continuation passing style to be difficult to read and maintain. JavaScript programmers may be familiar with the pain of CPS due to excessive use of callbacks (not strictly CPS but has similar drawbacks). Similarly, writing code purely in call-by-name style can be confusing and have difficult to predict performance impacts (e.g., Haskell lazy evaluation semantics).

In their article "On the Duality of Operating System Structures", Lauer and Needham present a similar conclusion [3]:

> "The principal conclusion we will draw from these observations is that the considerations for choosing which model to adopt in a given system [...] [are] a function of which set of primitive operations and mechanisms are easier to build or better suited to the constraints imposed by the machine architecture and hardware."

In that passage they are describing message passing vs. procedure call systems, and I interpret this to be their acknowledgment that, though the systems are dual, one architecture or another is more appropriate in certain circumstances.

Getting back to our original topic: this thread was about the decision of a Rust library to offer async or sync IO as its choice of primary primitive. I think async is the better general-purpose choice, because it's clean, simple, and straightforward to expose a synchronous interface on top of an async interface with futures; and the other way around is messy and difficult.

Can you elaborate on the lightweight co-routine library that can be used to convert anything synchronous into async? I'm curious about that, because Rust previously had support for coroutines (green threads), and decided to remove them due to a number of problems [1]. Meanwhile, Rust developers were able to devise a zero-cost futures abstraction on top of asynchronous IO [2]. Unlike the problematic green threads strategy, this approach doesn't impose any complicated constraints on the systems that use it (FFI requirements), and doesn't add runtime overhead.

What co-routine library would you recommend that avoids the downsides in [1]?

[1] https://github.com/aturon/rfcs/blob/remove-runtime/active/00... describes some pretty tricky challenges.

[2] https://aturon.github.io/blog/2016/08/11/futures/

[3] https://pdfs.semanticscholar.org/2948/a0d014852ba47dd115fcc7...

HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.